The State of RIS Today

Much attention is paid these days to imaging IT systems, such as PACS and VNAs, and to EMRs, but Radiology Information Systems (RIS) play a very important role in the success of the Radiology service line within an enterprise.

The industry and market for RIS have changed a lot since these systems were introduced, with two core markets (with different needs) evolving.

I recently wrote an article on the subject for HealthCare Business News, titled A tale of two kinds of RIS solutions. The article is here.

AXIS Imaging Interview: Three Tips for Setting Up Remote Reading for Radiologists

Remote reading is a hot topic for many working in diagnostic imaging. It is highly desirable for many Radiologists who want to telecommute like the rest of us. And, as cloud-based imaging IT solutions become more popular, the ability to read remotely (Rads won’t be going to the data center to read) will become the standard.

AXIS Imaging interviewed me to get three tips on setting up remote reading for radiologists. The article is here.

AXIS Imaging Interview: How to Prepare a Successful Vendor RFP

AXIS Imaging recently interviewed me to get three tips on preparing for a Request for Proposal (RFP). The article is here. Enjoy!

There are obviously a lot more recommended practices when doing an RFP, but these three are always good to consider. I tried to provide guidance that applies to both IT and equipment purchases.

SIIM19 – Will I See You There?

In less than two weeks, the SIIM Annual Meeting will be taking place at the newly opened Gaylord Rockies Resort in Aurora, Colorado. I am looking forward to catching up with old friends and meeting new ones, along with learning lots of new information.

This year, I am (co-)chairing three sessions and participating in a fourth. Here is a summary (my role). All times are in local MT.

If it is your first annual meeting, I recommend you go to the First Time Attendee Meet-up on Tue 25-Jun at 6:15 pm in the Adams C/D Lobby room and attend the New Member Orientation: Intro to SIIM session on Wed 26-Jun at 9:45 am. Jim and Rick are great educators and mentors.

Also, be sure to check out the pre-conference sessions, the Hackathon and Innovation Challenge events, as well as the #AskIndustry, Learning Lab, and Scientific sessions. And be sure to put the SIIM 2019 Reception on your calendar. It is great to connect with peers and review the scientific posters.

While you are on-site, don’t forget to share information and activities (and selfies!) using the SIIM Twitter handle @SIIM_Tweets (remember to follow SIIM for great info on imaging informatics throughout the year!) and the official 2019 Annual Meeting hashtag: #SIIM19.

I hope to see you there!

MIIT 2019 – An Overview

On Friday, May 10, I once again have the pleasure of co-chairing the Medical Imaging Informatics and Teleradiology (MIIT) conference at Liuna Station in Hamilton, ON.

The program for the 14th annual MIIT meeting is stellar, we have a record number of sponsors, and—thanks to lower registration fees and new group discounts—many people are already signed up to attend.

Program Highlights:

  • AI Strategy of CAR – Roger Tam will enlighten us on the Canadian Association of Radiologists’ strategy for AI.
  • Cloud Services for Machine Learning and Analytics – Patrick Kling will reveal how cloud-based solutions can address the challenge of managing large volumes of data.
  • Patient-Centered Radiology – Dr. Tessa Cook (@asset25) will provide insight into the progress on this topic at UPenn.
  • Collecting Data to Facilitate Change – Dr. Alex Towbin of Cincinnati Children’s Hospital (@CincyKidsRad) will show us how to use data to support change management.
  • Panel on the Future of DIRs in Canada – In this interactive session, we will discover what has been accomplished with Diagnostic Imaging Repositories (DIRs) in Ontario, and what’s next. I will moderate a panel with leaders from SWODIN and HDIRS.
  • Practical Guide to making AI a Reality – Brad Genereaux (@IntegratorBrad), with broad experience working in hospitals, industry, standards committees, and technology, will help attendees prepare for this new area.
  • Healthcare IT Standards – Kevin O’Donnell, a veteran of healthcare standards development and MIIT, will provide an overview of developments within the DICOM and HL7 standards, and IHE.
  • ClinicalConnect – Dale Anderson will provide an update on this application (@ClinicalConnect), used by many organizations in the local region.

If you can attend, I am sure you will find the event educational. There are lots of opportunities to interact with our speakers and sponsors. If you are not from the region, you may find a weekend getaway to the nearby Niagara-on-the-Lake wine region enjoyable.

And don’t forget to follow MIIT (@MIIT_Canada) on Twitter!

Enterprise Insight in Today’s Consolidated Enterprise

As health systems acquire or partner with previously independent facilities to form Consolidated Enterprises, and implement a Shared Electronic Medical Record (EMR) system, they often consolidate legacy diagnostic imaging IT systems to a shared solution. Facilities, data centers, identity management, networking equipment, interface engines, and other IT infrastructure and communications components are also often consolidated and managed centrally. Often, a program to capture and manage clinical imaging records follows.

Whether the health system deploys a Vendor Neutral Archive (VNA), an Enterprise PACS, or a combination of both, some investment is made to reduce the overall number of imaging IT systems installed and the number of interfaces to maintain. An enterprise-wide radiation dose monitoring solution may also be implemented.

While much has been written on strategies to achieve this type of shared, integrated, enterprise-wide imaging IT solution, there are several other opportunities for improvement beyond this vision.

In addition to imaging and information record management systems, enterprise-wide solutions for system monitoring, audit record management, and data analytics can also provide significant value.

Systems Monitoring

Organizations often have some form of enterprise-level host monitoring solution, which provides basic information on the operational status of the computers, operating systems and (sometimes) databases. However, even when the hosts are operating normally, there are many conditions that can cause a solution or workflow to be impeded.

In imaging, there are many transaction dependencies that, if they are not all working as expected, can cause workflow to be delayed or disabled. Often, troubleshooting these workflow issues can be a challenge, especially in a high-transaction enterprise.

Having a solution that monitors all the involved systems and the transactions between them can help detect, prevent, and correct workflow issues.
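As a simplified, vendor-neutral sketch of the idea (not any product’s API), a monitor could track the last-seen time of transactions on each system-to-system link and flag links that go quiet longer than expected, even while the hosts themselves report healthy:

```python
import time

class TransactionMonitor:
    """Tracks the last-seen time of transactions on each system-to-system
    link and flags links that have gone quiet for too long."""

    def __init__(self, max_silence_seconds):
        self.max_silence = max_silence_seconds
        self.last_seen = {}  # (source, destination) -> timestamp

    def record(self, source, destination, timestamp=None):
        """Record that a transaction was observed on a link."""
        self.last_seen[(source, destination)] = timestamp or time.time()

    def stalled_links(self, now=None):
        """Return links with no traffic within the allowed window."""
        now = now or time.time()
        return [link for link, seen in self.last_seen.items()
                if now - seen > self.max_silence]

# Example: modality-to-PACS traffic stops while RIS-to-PACS continues.
monitor = TransactionMonitor(max_silence_seconds=300)
monitor.record("CT-1", "PACS", timestamp=1000)
monitor.record("RIS", "PACS", timestamp=1250)
print(monitor.stalled_links(now=1400))  # [('CT-1', 'PACS')]
```

In practice the “record” events would be fed by interface engine logs, DICOM association logs, or HL7 message counters, whichever the involved systems expose.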

Audit Record Management

Many jurisdictions have laws and regulations that require a comprehensive audit trail to be made available on demand. Typically, this audit trail provides a time-stamped record of all accesses and changes to a patient’s record, including their medical images, indexed by the users and systems involved.

Generating this audit trail from the myriad of logs in each involved system, each with its own record format and schema, can be a costly manual effort.

The Audit Trail and Node Authentication integration profile (ATNA), part of Integrating the Healthcare Enterprise (IHE), provides a framework for publishing, storing, and indexing audit records from different systems. It defines triggering events, along with a record format, and communication protocol.

Enterprises are encouraged to look for systems that support the appropriate actors in the ATNA integration profiles during procurement of new IT systems and equipment. Implementing an Audit Record Repository with tools that make audit trail generation easy is also important.
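As a loose sketch of what an ATNA-style record looks like (deliberately simplified, not schema-complete): the record is an XML AuditMessage describing the event, the active participants, and the objects involved. Real records must conform to the DICOM audit message schema (DICOM PS3.15) and are typically delivered to the Audit Record Repository over syslog; the event code below is intended as the DICOM “DICOM Instances Accessed” code.

```python
import xml.etree.ElementTree as ET

def build_audit_message(event_code, user_id, patient_id, when):
    """Build a deliberately simplified ATNA-style audit record.

    Shows only the shape of the message: the event, who acted,
    and whose record was touched."""
    msg = ET.Element("AuditMessage")
    event = ET.SubElement(msg, "EventIdentification")
    event.set("EventDateTime", when)
    ET.SubElement(event, "EventID").set("csd-code", event_code)
    user = ET.SubElement(msg, "ActiveParticipant")
    user.set("UserID", user_id)
    obj = ET.SubElement(msg, "ParticipantObjectIdentification")
    obj.set("ParticipantObjectID", patient_id)
    return ET.tostring(msg, encoding="unicode")

record = build_audit_message("110103", "dr.smith", "PID-12345",
                             when="2019-05-10T12:00:00Z")
print(record)
```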

Data Analytics

Capturing and analyzing operational data is key to identifying issues and trends. As each system generates logs in different formats and using different methods, it often takes significant effort to normalize data records to get reliable analytics reports.

Periodic (for example, daily, weekly, or monthly) reports, common in imaging departments for decades, are often no longer considered sufficient in today’s on-demand, real-time world. Interactive dashboards that allow stakeholders to examine the data through different “lenses”, by changing the query parameters, are increasingly being implemented.

Getting reliable analytics results using data from both information (for example, the EMR and RIS) and imaging (for example, modalities, PACS, VNA, and Viewers) systems often requires significant effort, tools to extract/transform/load (ETL) the data, and a deep understanding of the “meaning” of the data.
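A sketch of the normalization step of such an ETL pipeline, using hypothetical field names for the RIS and PACS exports (real systems each expose their own formats), might look like this:

```python
from datetime import datetime

def normalize_record(record, source):
    """Map a source-specific log record to a common analytics schema.

    The field names below are hypothetical examples; each real system
    exposes its own export format and timestamp convention."""
    if source == "ris":
        return {
            "accession": record["accession_no"],
            "event": record["status"].lower(),
            "timestamp": datetime.strptime(record["ts"], "%Y%m%d%H%M%S"),
        }
    if source == "pacs":
        return {
            "accession": record["AccessionNumber"],
            "event": record["event_type"].lower(),
            "timestamp": datetime.fromisoformat(record["when"]),
        }
    raise ValueError(f"unknown source: {source}")

row = normalize_record({"accession_no": "A100", "status": "COMPLETE",
                        "ts": "20190510083000"}, source="ris")
print(row["event"], row["timestamp"].hour)  # complete 8
```

The hard part in real deployments is not the code but agreeing on the common schema and the “meaning” of each source field.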

Enterprise Insight

Implementing solutions that continuously and efficiently manage the health of your systems, the records accessed, and operational metrics is an important aspect of today’s Consolidated Enterprise. Evaluating any new system on its ability to integrate with, and provide information to, these solutions is recommended.

Imaging Exam Acquisition and Quality Control (QC) Workflow in Today’s Consolidated Enterprise – Part 2 of 2

In my previous post, I discussed common challenges associated with the imaging exam acquisition workflows performed by Technologists (Tech Workflow) that many healthcare provider organizations face today.

In this post, we will explore imaging record Quality Control (QC) workflow.

Background

A typical Consolidated Enterprise is a healthcare provider organization consisting of multiple hospitals/facilities that often share a single instance of EMR/RIS and Image Manager/Archive (IM/A) systems, such as PACS or VNA. The consolidation journey is complex and requires careful planning, along with a comprehensive approach to solution and interoperability architectures.

An Imaging Informatics team supporting a Consolidated Enterprise typically consists of PACS Admin and Imaging Analyst roles supporting one or more member-facilities.

Imaging Record Quality Control (QC) Workflows

To ensure the quality (completeness, consistency, and correctness) of imaging records, providers rely on both automatic workflows (such as validation by the IM/A system of the received DICOM study information against the corresponding HL7 patient and order information) and manual workflows, performed either by Technologists during the Tech Workflow or by Imaging Informatics team members after exam acquisition. Automatic updates of Patient and Procedure information are achieved through HL7 integration between the EMR/RIS and the IM/A.
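As a simplified illustration of the automatic validation described above (field names are hypothetical; real systems match on configurable attributes), an IM/A might compare the received DICOM study against the HL7 order like this:

```python
def validate_study_against_order(study, order):
    """Compare key DICOM study fields with the corresponding HL7 order
    and return a list of discrepancies. Field names are illustrative;
    real systems match on configurable attributes."""
    problems = []
    if study["AccessionNumber"] != order["accession"]:
        problems.append("accession mismatch")
    if study["PatientID"] != order["patient_id"]:
        problems.append("patient ID mismatch")
    return problems

study = {"AccessionNumber": "A100", "PatientID": "PID-1"}
order = {"accession": "A100", "patient_id": "PID-2"}
print(validate_study_against_order(study, order))  # ['patient ID mismatch']
```

Studies that fail such checks are typically routed to a manual QC queue rather than filed automatically.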

Typical manual QC activities include the following:

  • Individual Image Corrections (for example, correction of a wrong laterality marker)
  • DICOM Header Updates (for example, an update of the Study Description DICOM attribute)
  • Patient Update (moving a complete DICOM study from one patient record to another)
  • Study Merge (moving some, or all, of the DICOM objects from the “merged from” study to the “merged to” study)
  • Study Split (moving some of the DICOM objects/series from the “split from” study to the “split to” study)
  • Study Object Deletion (deletion of one or more objects/series from a study)
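To make one of these operations concrete, here is a deliberately simplified sketch of a “Patient Update”; the dict-based archive model, function name, and audit entry shape are all hypothetical, since real IM/A systems expose vendor-specific QC tools rather than a common API:

```python
def move_study_to_patient(archive, study_uid, target_patient_id, audit_log):
    """Hypothetical 'Patient Update' QC operation: re-file a complete
    DICOM study under another patient record, leaving an audit entry.

    `archive` is modeled as a dict of Study Instance UID -> metadata."""
    study = archive[study_uid]
    old_patient = study["patient_id"]
    study["patient_id"] = target_patient_id
    audit_log.append({
        "action": "patient-update",
        "study_uid": study_uid,
        "from": old_patient,
        "to": target_patient_id,
    })
    return study

archive = {"1.2.3": {"patient_id": "PID-1", "images": 120}}
log = []
move_study_to_patient(archive, "1.2.3", "PID-2", log)
print(archive["1.2.3"]["patient_id"])  # PID-2
```

Note the audit entry: every QC change to a patient record should itself be auditable.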

QC Workflow Challenges

Access Control Policy

One of the key challenges in ensuring the quality of imaging records across a large health system enterprise is determining who is qualified and authorized to perform QC activities. A common approach is to provide data control and correction tools to staff at the site where the imaging exam was acquired, since they are either aware of the context of an error or can easily obtain it from the local clinical staff, systems, or the patient. Under this approach, local staff can access only data acquired at the sites to which they are assigned, which complies with patient privacy policies and prevents accidental updates to another site’s records. The following diagram illustrates this approach.

QC-1
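The site-based policy above can be sketched as a simple authorization check (the role names and data shapes are hypothetical):

```python
def can_perform_qc(user, exam):
    """Site-based QC authorization: the user must hold a QC-capable role
    AND be assigned to the site where the exam was acquired."""
    qc_roles = {"pacs-admin", "imaging-analyst"}
    return user["role"] in qc_roles and exam["site"] in user["sites"]

# Hypothetical Imaging Analyst assigned to two member sites.
analyst = {"role": "imaging-analyst", "sites": {"SITE-A", "SITE-B"}}
print(can_perform_qc(analyst, {"site": "SITE-A"}))  # True
print(can_perform_qc(analyst, {"site": "SITE-C"}))  # False
```

In a real deployment this check would be enforced by the IM/A itself, driven by its user/site configuration.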

Systems Responsibilities

Another important area of consideration is to determine which enterprise system should be the “source of truth” for Imaging QC workflows when there are multiple Image Manager/Archives. Consider the following common Imaging IT architecture, where multiple facilities share both PACS and VNA applications. In this scenario, the PACS maintains a local DICOM image cache while the VNA provides the long-term image archive. Both systems provide QC tools that allow authorized users to update the structure or content of imaging records.

QC-2

Since DICOM studies stored in the PACS cache also exist in the VNA, any changes resulting from QC activity performed in one of these systems must be communicated to the other to ensure that both systems are in sync. This gets more complicated when many systems storing DICOM data are involved.

Integrating the Healthcare Enterprise (IHE) developed the “Imaging Object Change Management (IOCM)” integration profile, which provides technical details regarding how to best propagate imaging record changes among multiple systems.

To minimize the complexity associated with the synchronization of imaging record changes, it is usually a good idea to appoint one system to be the “source of truth”. Although bidirectional (from PACS to VNA or from VNA to PACS) updates are technically possible, the complexity of managing and troubleshooting such integration while ensuring good data quality practices can be significant.
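The “source of truth” pattern boils down to one-directional change propagation: QC changes are applied on one designated system and fanned out to the others. A minimal sketch of that direction of flow follows; real IOCM implementations exchange DICOM Key Object Selection “rejection note” objects rather than the hypothetical event dicts shown here:

```python
class SourceOfTruth:
    """The designated system where QC changes are applied; it fans
    each change out to every subscribed mirror system."""

    def __init__(self):
        self.subscribers = []
        self.changes = []

    def subscribe(self, system):
        self.subscribers.append(system)

    def apply_change(self, change):
        self.changes.append(change)
        for system in self.subscribers:
            system.on_change(change)

class MirrorSystem:
    """A system (e.g., a PACS cache) that replays changes it receives."""
    def __init__(self):
        self.received = []
    def on_change(self, change):
        self.received.append(change)

vna = SourceOfTruth()
pacs = MirrorSystem()
vna.subscribe(pacs)
vna.apply_change({"op": "delete-series", "study_uid": "1.2.3"})
print(pacs.received[0]["op"])  # delete-series
```

Keeping the flow one-directional is precisely what avoids the troubleshooting complexity of bidirectional updates.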

The Takeaway

Often the QC Workflow is not discussed in depth during the procurement phase of a new PACS or VNA. The result: The ability of the Vendor of Choice’s (VOC) solution to provide robust, reliable, and user-friendly QC tools, while ensuring compliance with access control rules across multiple sites, is not fully assessed. Practice shows that vendors vary significantly in these functional areas and their capabilities should be closely evaluated as part of any procurement process.

The Case for Vendor Neutral Artificial Intelligence (VNAi) Solutions in Imaging

Defining VNAi

Lessons Learned from Vendor Neutral Archive (VNA) Solutions

For well over a decade, VNA solutions have been available to provide a shared multi-department, multi-facility repository and integration point for healthcare enterprises. Organizations employing these systems, often in conjunction with an enterprise-wide Electronic Medical Record (EMR) system, typically benefit from a reduction in complexity, compared to managing disparate archives for each site and department. These organizations can invest their IT dollars in ensuring that the system is fast and provides maximum uptime, using on-premises or cloud deployments. And it can act as a central, managed broker for interoperability with other enterprises.

The ability to standardize on the format, metadata structure, quality of data (completeness and consistency of data across records, driven by organizational policy), and interfaces for storage, discovery, and access of records is much more feasible with a single, centrally-managed system. Ensuring adherence to healthcare IT standards, such as HL7 and DICOM, for all imaging records across the enterprise is possible with a shared repository that has mature data analytics capabilities and Quality Control (QC) tools.

What is a Vendor Neutral Artificial Intelligence (VNAi) Solution?

The same benefits of centralization and standardization of interfaces and data structures that VNA solutions provide are applicable to Artificial Intelligence (AI) solutions. This is not to say that a VNAi solution must also be a VNA (though it could be), just that they are both intended to be open and shared resources that provide services to several connected systems.

Without a shared, centrally managed solution, healthcare enterprises run the risk of deploying a multitude of vendor-proprietary systems, each with a narrow set of functions. Each of these systems would require integration with data sources and consumer systems, user interfaces to configure and support it, and potentially varying platforms to operate on.

Do we want to repeat the historic challenges and costs associated with managing disparate image archives when implementing AI capabilities in an enterprise?

Characteristics of a Good VNAi Solution

The following capabilities are important for a VNAi solution.

Interfaces

Flexible, well-documented, and supported interfaces for both imaging and clinical data are required. Standards should be supported, where they exist. Where standards do not exist, good design principles, such as the use of REST APIs and support for IT security best practices, should be adhered to.

Note: Connections to, or inclusion of, other sub-processes—such as Optical Character Recognition (OCR) and Natural Language Processing (NLP)—may be necessary to extract and preprocess unstructured data before use by AI algorithms.

Data Format Support

The data both coming in and going out will vary, and a VNAi will need to support all kinds of data formats (including multimedia) and be able to process this data for use in its algorithms. The more data parsing and preprocessing the VNAi can perform, the less each algorithm will need to handle on its own.

Note: It may be required to have a method to anonymize some inbound and/or outbound data, based on configurable rules.
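Rule-driven de-identification could be sketched as follows; the rule names and fields here are hypothetical, and production de-identification should follow the attribute confidentiality profiles in DICOM PS3.15 Annex E rather than an ad hoc rule table:

```python
# Hypothetical rule set: what to do with each header field.
DEID_RULES = {
    "PatientName": "REMOVE",
    "PatientID": "REPLACE",
    "PatientBirthDate": "REMOVE",
    "StudyDescription": "KEEP",
}

def apply_deid_rules(header, rules, replacement="ANON"):
    """Apply configurable de-identification rules to a header dict."""
    out = {}
    for field, value in header.items():
        action = rules.get(field, "KEEP")
        if action == "KEEP":
            out[field] = value
        elif action == "REPLACE":
            out[field] = replacement
        # "REMOVE": drop the field entirely
    return out

clean = apply_deid_rules(
    {"PatientName": "DOE^JANE", "PatientID": "PID-1",
     "StudyDescription": "CT CHEST"}, DEID_RULES)
print(clean)  # {'PatientID': 'ANON', 'StudyDescription': 'CT CHEST'}
```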

Processor Plug-in Framework

To provide consistent and reliable services to algorithms, which could be written in different programming languages or run on different hosts, the VNAi needs a well-documented, tested, and supported framework for plugging in algorithms for use by connected systems. Methods to manage the state of a plug-in (test, production, or disabled), as well as revision controls, will be valuable.
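A minimal sketch of such a registry (the class, state names, and algorithm below are illustrative, not a specific product’s API):

```python
class AlgorithmRegistry:
    """Plug-in registry tracking each algorithm's version and
    lifecycle state (test -> production -> disabled)."""

    STATES = {"test", "production", "disabled"}

    def __init__(self):
        self.plugins = {}  # name -> {"fn", "version", "state"}

    def register(self, name, fn, version, state="test"):
        assert state in self.STATES
        self.plugins[name] = {"fn": fn, "version": version, "state": state}

    def promote(self, name, state):
        assert state in self.STATES
        self.plugins[name]["state"] = state

    def run(self, name, data):
        plugin = self.plugins[name]
        if plugin["state"] != "production":
            raise RuntimeError(f"{name} is not in production")
        return plugin["fn"](data)

registry = AlgorithmRegistry()
registry.register("lung-nodule-flag", lambda img: "review", version="0.1")
registry.promote("lung-nodule-flag", "production")
print(registry.run("lung-nodule-flag", b"pixels"))  # review
```

The key design point is that connected systems only ever call production-state algorithms, while new revisions can sit in the test state alongside them.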

Quality Control (QC) Tools

Automated and manual correction of data inputs and outputs will be required to address inaccurate or incomplete data sets.

Logging

Capturing the logic and variables used in AI processes will be important to retrospectively assess their success and to identify data generated by processes that prove over time to be flawed.

Data Analytics

For both business stakeholders (people) and connected applications (software), the ability to use data to measure success and predict outcomes will be essential.

Data Persistence Rules

Much like other data processing applications that rely on data as input, the VNAi will need to have configurable rules that determine how long defined sets of data are persisted, and when they are purged.
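A sketch of configurable retention rules (the data classes and windows are hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical retention policy per data class.
RETENTION = {
    "inbound-staging": timedelta(days=7),
    "algorithm-output": timedelta(days=365),
}

def is_expired(data_class, stored_at, now):
    """True if a record's retention window has elapsed and it is
    eligible to be purged."""
    return now - stored_at > RETENTION[data_class]

now = datetime(2019, 5, 10)
print(is_expired("inbound-staging", datetime(2019, 5, 1), now))   # True
print(is_expired("algorithm-output", datetime(2019, 5, 1), now))  # False
```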

Performance

The VNAi will need to be able to quickly process large data sets at peak loads, even with highly complex algorithms. Dynamically assigning IT resources (compute, network, storage, etc.) within minutes, not hours or days, may be necessary.

Deployment Flexibility

Some organizations will want their VNAi in the cloud, others will want it on-premises. Perhaps some organizations want a hybrid approach, where learning and testing is on-premises, but production processing is done in the cloud.

High Availability (HA), Business Continuity (BC), and System Monitoring

Like any critical system, uptime is important. The ability for the VNAi to be deployed in an HA/BC configuration will be essential.

Multi-tenant Data Segmentation and Access Controls

A shared VNAi reduces the effort to build and maintain the system, but its use and access to the data it provides will require data access controls to ensure that data is accessed only by authorized parties and systems.

Cost Sharing

Though this is not a technical characteristic, the VNAi solution likely requires the ability to share the system build and operating costs among participating organizations. Methods to identify usage of specific functions and algorithms to allocate licensing revenues would be very helpful.

Effective Technical Support

A VNAi can be a complex ecosystem with variable uses, data inputs, and outputs. If the system is actively learning, its behavior on one day may differ from another. Supporting such a system will, in many cases, require support staff with developer-level skills.

Conclusion

Without some form of VNAi (such as the one described here), we risk choosing between monolithic, single-vendor platforms and a myriad of different applications, each with its own vendor agreements, hosting and interface requirements, and management tools.

Special thanks to @kinsonho for his wisdom in reviewing this post prior to publication.

Imaging Exam Acquisition and Quality Control (QC) Workflow in Today’s Consolidated Enterprise – Part 1 of 2

As existing healthcare provider organizations merge and affiliate to create Consolidated Enterprises, image acquisition workflows are often found to differ across the various facilities. Often, the facilities that comprise the Consolidated Enterprise had different procedures and standards of practice for image acquisition and Quality Control (QC), along with different information and imaging systems.

Standardizing and harmonizing enterprise-wide policies, especially for imaging exam QC, can have significant benefits. A failure to standardize these workflows in a Consolidated Enterprise may result in inconsistent or inaccurate imaging records, which can lead to reading and viewing workflow challenges. These challenges are compounded when a shared imaging system, such as an enterprise PACS or VNA, is used, and can result in delays in care and patient safety risks.

There are generally two areas worth evaluating for optimization:

  • Technologist imaging exam acquisition workflow (Tech Workflow)
  • Imaging record Quality Control workflow (QC Workflow)

Here, we will explore Tech Workflow. QC Workflow will be covered in a subsequent post.

Throughout this discussion the term Radiology Information System (RIS) is used, which can be a standalone system or a module of an EMR.

Tech Workflow

The use of DICOM Modality Worklist (DMWL) for the management of image acquisition is well-understood and broadly adopted. However, the process of marking an exam as “complete” (or “closed”) following acquisition is less standardized and varies across different vendors and healthcare enterprises. The subsequent QC and diagnostic reading workflows rely on the “completion” of the exam before they can begin. For example, an exam that is never marked as “complete” may not appear on a Radiologist Reading Worklist, and an imaging exam that is marked as “complete” when it isn’t will be available for Radiologists to read with only a partial set of images.

Imaging Technologists typically interact with the following applications on a daily basis.

Tech WF Screens

  • Modality Console – a comprehensive set of tools, attached to the modality, to perform image acquisition activities (such as DMWL queries, exam protocoling, post-processing, etc.).
  • Radiology Information System (RIS) – a specific view into the enterprise RIS application, providing Technologists with tools to look up patient/procedure information, document the acquisition, mark an exam as “complete”, etc.
  • Image Manager/Archive (IM/A) QC – a comprehensive set of imaging exam Quality Control (QC) tools, provided by the Image Manager/Archive (IM/A), such as PACS or VNA, or a dedicated application, to make any necessary corrections to ensure the quality of acquired imaging exam records.

As stated above, there is significant variability among healthcare providers with respect to instituting Tech Workflow policies and procedures. The following diagram illustrates the steps involved in a common Tech Workflow.

Tech WF Flow

Notes:

  • In some cases, due to the high volume of exams being acquired, Technologists validate image quality and confirm that the number of images in the IM/A is correct for multiple studies at a time, instead of for each one independently.
  • An ability to assess the quality of the imaging exam and correct it (if needed) in a quick and user-friendly manner is critical for an efficient exam completion workflow.

PACS-driven Reading Workflow

In this scenario, the PACS Client provides a Reading Worklist and it is typically responsible for launching (in-context, through a desktop integration) the Report Creator application. There are several methods used across provider organizations to communicate study complete status updates to the PACS.

Time out – This is the most typical approach: a study is considered complete after a defined period of time (for example, five minutes) has passed since the receipt (by the PACS) of the last DICOM object from the modality.

  • Benefit: Easy to implement.
  • Challenge: If the time-out is too long, the creation of the corresponding Reading Worklist item is delayed. Conversely, a short time-out may result in a Radiologist reporting an incomplete study, which requires a follow-up review and potentially an addendum to the report once the missing images are stored to the PACS.

HL7 ORM – Some organizations release HL7 ORM messages to the Report Creator only after the order status is updated (to study complete) in the RIS.

  • Benefit: Easy to implement.
  • Benefit: Prevents reporting of incomplete exams (although it relies on Technologists to validate the completeness of the study structure in the PACS).
  • Challenge: There are scenarios where the PACS has received DICOM studies, but their statuses in the RIS have not yet been updated (as can happen with mobile modalities, for example). The Reading Worklist is unaware of the HL7 message flow between the RIS and the Report Creator and therefore allows the Radiologist to start reviewing these cases, even though they have no corresponding procedure information in the Report Creator. When the Radiologist tries to launch the reporting application in the context of the current study, the Report Creator is unable to comply.

DICOM MPPS – Once an exam is complete, a DICOM MPPS N-Set message issued by the modality informs the PACS (and/or RIS) of the structure of the study and the fact that it is complete (along with other useful exam information).

  • Benefit: Prevents reporting of incomplete exams.
  • Benefit: Automatic confirmation of the structure of the study.
  • Challenge: The adoption of DICOM Modality Performed Procedure Step (MPPS) is still limited in most enterprises, even though some modalities, RIS, and PACS support it.
  • Challenge: Somewhat complex to implement (requires integration and testing between each modality and the MPPS server), coupled with a lack of understanding of the benefits of this approach in many healthcare provider organizations.
  • Challenge: Some modality vendors charge an additional fee for a license to enable MPPS integration.
  • Challenge: Can be disconnected from the “completion” of the exam in the RIS (that is, it cannot ensure the Report Creator’s readiness) if only the PACS receives and processes the MPPS messages.

DICOM Storage Commitment – Once the exam is complete, a series of DICOM messages (N-Action, N-Event-Report) between the modality and the PACS can determine whether a complete study was stored to the PACS.

  • Benefit: Prevents reporting of incomplete exams.
  • Benefit: Automatic confirmation of the structure of the study.
  • Challenge: Although most PACS and many modalities support this DICOM transaction, it is not widely implemented by healthcare providers.
  • Challenge: Somewhat complex to implement (requires integration and testing between each modality and the PACS server), coupled with a lack of understanding of the benefits of this approach in many healthcare provider organizations.
  • Challenge: Can be disconnected from the “completion” of the exam in the RIS (that is, it cannot ensure the Report Creator’s readiness).
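The time-out method described above can be sketched as follows; the ingest-state model (a map of Study Instance UID to the arrival time of the most recently stored object) is hypothetical, not a vendor API:

```python
def studies_presumed_complete(last_object_received, now, timeout_seconds=300):
    """Time-out completion heuristic: a study is presumed complete once
    no new DICOM objects have arrived for `timeout_seconds`.

    `last_object_received` maps Study Instance UID -> epoch seconds of
    the most recently stored object for that study."""
    return [uid for uid, seen in last_object_received.items()
            if now - seen >= timeout_seconds]

ingest = {"1.2.3.1": 1000, "1.2.3.2": 1290}
print(studies_presumed_complete(ingest, now=1400))  # ['1.2.3.1']
```

The trade-off discussed above is visible in the single `timeout_seconds` parameter: lower it and worklist items appear sooner but risk incomplete studies; raise it and reporting is delayed.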

RIS-driven Reading Workflow

In this scenario, the RIS provides the Reading Worklist and it is implicitly aware of the status of the exam (assuming the same system is used by Techs and Rads). It creates the worklist item that corresponds to the exam once it reaches the “complete” status. As the Reading Worklist launches both the Report Creator and the Diagnostic Viewer (PACS Client) applications, it does not face the informatics challenges inherent to the PACS-driven Reading Workflow described above.

Enterprise-wide Reading Workflow (Dedicated, Standalone Application)

Some organizations use an enterprise-wide Reading Worklist that is a separate application from the PACS and RIS to orchestrate enterprise-wide diagnostic reading (and other imaging related) tasks across all their Radiologists using fine-grained task-allocation rules. Similar to the RIS-driven Reading Workflow, the worklist launches both the Report Creator and the Diagnostic Viewer applications once a worklist item is selected.

To avoid the complexity of the PACS-driven Reading Workflow described above, some organizations choose to release an HL7 ORM message from the RIS application to the worklist only when the status of the corresponding exam in that system is updated. Alternatively, organizations that send all ORM messages to the worklist application as soon as procedures are scheduled must ensure that the PACS has a complete study before allowing it to be reported.

So, what?

It is important for healthcare provider organizations to understand the relationship between the Tech Workflow and the Reading Worklist approach they adopt. If a RIS-driven approach is not chosen, then there should be a clear integration strategy in place to ensure that studies are not reported too soon or missed.