Imaging Exam Acquisition and Quality Control (QC) Workflow in Today’s Consolidated Enterprise – Part 2 of 2

In my previous post, I discussed common challenges associated with the imaging exam acquisition workflows performed by Technologists (Tech Workflow) that many healthcare provider organizations face today.

In this post, we will explore imaging record Quality Control (QC) workflow.

Background

A typical Consolidated Enterprise is a healthcare provider organization consisting of multiple hospitals/facilities that often share a single instance of EMR/RIS and Image Manager/Archive (IM/A) systems, such as PACS or VNA. The consolidation journey is complex and requires careful planning, grounded in a comprehensive approach to solution and interoperability architecture.

An Imaging Informatics team supporting a Consolidated Enterprise typically consists of PACS Admin and Imaging Analyst roles supporting one or more member-facilities.

Imaging Record Quality Control (QC) Workflows

To ensure the quality (completeness, consistency, correctness) of imaging records, providers rely on automatic workflows, such as validation by the IM/A system of received DICOM study information against the corresponding HL7 patient and order information, and on manual workflows performed either by Technologists during the Tech Workflow or by Imaging Informatics team members after exam acquisition. Automatic updates of Patient and Procedure information are achieved through HL7 integration between the EMR/RIS and the IM/A.
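
As a rough illustration of the kind of message that drives these automatic updates, the sketch below parses an HL7 ADT patient-update message with the python-hl7 library and pulls out the identifiers an IM/A would use to coerce its copy of the record. The message content and field handling are simplified for illustration.

    import hl7

    # Simplified HL7 v2 ADT^A08 (patient information update) message.
    # Real messages carry many more segments and fields; HL7 requires
    # carriage-return segment separators.
    raw = "\r".join([
        "MSH|^~\\&|EMR|HOSP_A|IMA|HOSP_A|202401011200||ADT^A08|MSG0001|P|2.3",
        "PID|1||MRN12345^^^HOSP_A||DOE^JANE||19800101|F",
    ])

    msg = hl7.parse(raw)
    pid = msg.segment("PID")

    mrn = str(pid[3]).split("^")[0]   # 'MRN12345'
    patient_name = str(pid[5])        # 'DOE^JANE'
    print(mrn, patient_name)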

Typical manual QC activities include the following:

  • Individual Image Corrections (for example, correction of a wrong laterality marker)
  • DICOM Header Updates (for example, an update of the Study Description DICOM attribute; see the sketch following this list)
  • Patient Update (moving a complete DICOM study from one patient record to another)
  • Study Merge (moving some, or all, of the DICOM objects from the “merged from” study to the “merged to” study)
  • Study Split (moving some of the DICOM objects/series from the “split from” study to the “split to” study)
  • Study Object Deletion (deletion of one or more objects/series from a study)
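
As an example of what a DICOM Header Update involves under the hood, here is a minimal sketch using Python and pydicom on a study exported to disk. The file location and new value are invented, and a real QC tool would also version the change and notify downstream systems.

    from pathlib import Path

    import pydicom

    # Hypothetical location of the study objects being corrected, and the
    # value approved by the QC user.
    study_dir = Path("/tmp/study_to_fix")
    new_description = "CT CHEST WO CONTRAST"

    for path in study_dir.glob("*.dcm"):
        ds = pydicom.dcmread(path)
        ds.StudyDescription = new_description   # (0008,1030) Study Description
        ds.save_as(path)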

QC Workflow Challenges

Access Control Policy

One of the key challenges related to ensuring the quality of imaging records across large health system enterprises is determining who is qualified and authorized to perform QC activities. A common approach is to provide data control and correction tools to staff from the site where the imaging exam was acquired, since they are either aware of the context of an error or can easily obtain it by interacting with local clinical staff, systems, or the patient. With such an approach, local staff can access only data acquired at the sites to which they are assigned, which complies with patient privacy policies and prevents accidental updates to another site's records. The following diagram illustrates this approach.

QC-1
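
A minimal sketch of that site-based rule, assuming a hypothetical mapping of QC users to their assigned facilities, could look like this:

    # Hypothetical user-to-site assignments; a real deployment would source
    # these from the identity provider or the PACS/VNA role configuration.
    USER_SITES = {
        "qc_user_a": {"HOSP_A"},
        "qc_user_b": {"HOSP_B", "HOSP_C"},
    }

    def can_perform_qc(user: str, acquiring_site: str) -> bool:
        """Allow QC only on studies acquired at a site the user is assigned to."""
        return acquiring_site in USER_SITES.get(user, set())

    assert can_perform_qc("qc_user_a", "HOSP_A")
    assert not can_perform_qc("qc_user_a", "HOSP_B")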

Systems Responsibilities

Another important area of consideration is to determine which enterprise system should be the “source of truth” for Imaging QC workflows when there are multiple Image Manager/Archives. Consider the following common Imaging IT architecture, where multiple facilities share both PACS and VNA applications. In this scenario, the PACS maintains a local DICOM image cache while the VNA provides the long-term image archive. Both systems provide QC tools that allow authorized users to update the structure or content of imaging records.

QC-2

Since DICOM studies stored in the PACS cache also exist in the VNA, any changes resulting from QC activity performed in one of these systems must be communicated to the other to ensure that both systems are in sync. This gets more complicated when many systems storing DICOM data are involved.

Integrating the Healthcare Enterprise (IHE) developed the “Imaging Object Change Management (IOCM)” integration profile, which provides technical details regarding how to best propagate imaging record changes among multiple systems.
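
At the heart of IOCM is a Key Object Selection "Rejection Note" that the originating system creates to flag the affected instances. The sketch below, using pydicom, shows only the IOCM-relevant pieces; a conformant KOS object requires many more attributes (patient, equipment, and content modules, file meta information, and so on).

    from pydicom.dataset import Dataset
    from pydicom.uid import generate_uid

    # Key Object Selection Document Storage SOP Class
    KOS_SOP_CLASS = "1.2.840.10008.5.1.4.1.1.88.59"

    def rejection_note(study_uid, series_uid, rejected_sop_class, rejected_uids):
        ds = Dataset()
        ds.SOPClassUID = KOS_SOP_CLASS
        ds.SOPInstanceUID = generate_uid()
        ds.StudyInstanceUID = study_uid
        ds.SeriesInstanceUID = generate_uid()   # the KOS lives in its own series

        # Document title from DICOM CID 7011: "Rejected for Quality Reasons"
        title = Dataset()
        title.CodeValue = "113001"
        title.CodingSchemeDesignator = "DCM"
        title.CodeMeaning = "Rejected for Quality Reasons"
        ds.ConceptNameCodeSequence = [title]

        # Reference each instance that is being rejected
        ref_sops = []
        for uid in rejected_uids:
            item = Dataset()
            item.ReferencedSOPClassUID = rejected_sop_class
            item.ReferencedSOPInstanceUID = uid
            ref_sops.append(item)

        ref_series = Dataset()
        ref_series.SeriesInstanceUID = series_uid
        ref_series.ReferencedSOPSequence = ref_sops

        ref_study = Dataset()
        ref_study.StudyInstanceUID = study_uid
        ref_study.ReferencedSeriesSequence = [ref_series]
        ds.CurrentRequestedProcedureEvidenceSequence = [ref_study]
        return ds

Receiving systems then act on the referenced instances according to the rejection reason, for example hiding objects rejected for quality reasons from routine use.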

To minimize the complexity associated with the synchronization of imaging record changes, it is usually a good idea to appoint one system to be the “source of truth”. Although bidirectional (from PACS to VNA or from VNA to PACS) updates are technically possible, the complexity of managing and troubleshooting such integration while ensuring good data quality practices can be significant.

The Takeaway

Often the QC Workflow is not discussed in depth during the procurement phase of a new PACS or VNA. The result: The ability of the Vendor of Choice’s (VOC) solution to provide robust, reliable, and user-friendly QC tools, while ensuring compliance with access control rules across multiple sites, is not fully assessed. Practice shows that vendors vary significantly in these functional areas and their capabilities should be closely evaluated as part of any procurement process.

The Case for Vendor Neutral Artificial Intelligence (VNAi) Solutions in Imaging

Defining VNAi

Lessons Learned from Vendor Neutral Archive (VNA) Solutions

For well over a decade, VNA solutions have been available to provide a shared multi-department, multi-facility repository and integration point for healthcare enterprises. Organizations employing these systems, often in conjunction with an enterprise-wide Electronic Medical Record (EMR) system, typically benefit from a reduction in complexity compared to managing disparate archives for each site and department. These organizations can invest their IT dollars in ensuring that the system is fast and provides maximum uptime, using on-premises or cloud deployments, and the system can act as a central, managed broker for interoperability with other enterprises.

The ability to standardize on the format, metadata structure, quality of data (completeness and consistency of data across records, driven by organizational policy), and interfaces for storage, discovery, and access of records is much more feasible with a single, centrally-managed system. Ensuring adherence to healthcare IT standards, such as HL7 and DICOM, for all imaging records across the enterprise is possible with a shared repository that has mature data analytics capabilities and Quality Control (QC) tools.

What is a Vendor Neutral Artificial Intelligence (VNAi) Solution?

The same benefits of centralization and standardization of interfaces and data structures that VNA solutions provide are applicable to Artificial Intelligence (AI) solutions. This is not to say that a VNAi solution must also be a VNA (though it could be), just that they are both intended to be open and shared resources that provide services to several connected systems.

Without a shared, centrally managed solution, healthcare enterprises run the risk of deploying a multitude of vendor-proprietary systems, each with a narrow set of functions. Each of these systems would require integration with data sources and consumer systems, user interfaces to configure and support it, and potentially varying platforms to operate on.

Do we want to repeat the historic challenges and costs associated with managing disparate image archives when implementing AI capabilities in an enterprise?

Characteristics of a Good VNAi Solution

The following capabilities are important for a VNAi solution.

Interfaces

Flexible, well-documented, and supported interfaces for both imaging and clinical data are required. Standards should be supported, where they exist. Where standards do not exist, good design principles, such as the use of REST APIs and support for IT security best practices, should be adhered to.

Note: Connections to, or inclusion of, other sub-processes—such as Optical Character Recognition (OCR) and Natural Language Processing (NLP)—may be necessary to extract and preprocess unstructured data before use by AI algorithms.
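
To make the idea concrete, here is a sketch of the kind of REST endpoint a VNAi might expose for submitting an analysis job. It assumes Python with FastAPI, and the resource name, fields, and status values are invented for illustration rather than taken from any vendor's API.

    from typing import Optional

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI(title="VNAi (illustrative)")

    class AnalysisJob(BaseModel):
        algorithm_id: str                   # which registered algorithm to run
        study_instance_uid: str             # imaging input, referenced by DICOM UID
        callback_url: Optional[str] = None  # where to POST results, if desired

    @app.post("/analysis-jobs", status_code=202)
    def submit_job(job: AnalysisJob) -> dict:
        # A real implementation would queue the job and return a tracking ID.
        return {"job_id": "example-123", "status": "queued"}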

Data Format Support

The data coming in and going out will vary, so a VNAi will need to support a wide range of data formats (including multimedia) and be able to process that data for use by its algorithms. The more data parsing and preprocessing the VNAi can perform, the less each algorithm will need to handle itself.

Note: It may be required to have a method to anonymize some inbound and/or outbound data, based on configurable rules.
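
For instance, a small, rule-driven de-identification step might be sketched as follows; the attribute list here is far shorter than a real de-identification profile (see DICOM PS3.15 for the standard ones), and pydicom is assumed.

    import pydicom

    # Illustrative rules: attribute keyword -> replacement value ('' clears it).
    ANONYMIZATION_RULES = {
        "PatientName": "ANONYMOUS",
        "PatientID": "ANON-0001",
        "PatientBirthDate": "",
        "AccessionNumber": "",
    }

    def anonymize(ds: pydicom.Dataset) -> pydicom.Dataset:
        for keyword, replacement in ANONYMIZATION_RULES.items():
            if keyword in ds:
                setattr(ds, keyword, replacement)
        return ds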

Processor Plug-in Framework

To provide consistent and reliable services to algorithms, which could be written in different programming languages or run on different hosts, the VNAi needs a well-documented, tested, and supported framework for plugging in algorithms for use by connected systems. Methods to manage the state of a plug-in (test, production, disabled), as well as revision control, will be valuable.
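
One way to picture such a framework, purely as a hypothetical sketch in Python, is a small plug-in contract plus a registry that tracks each plug-in's lifecycle state and version:

    from abc import ABC, abstractmethod
    from enum import Enum

    class PluginState(Enum):
        TEST = "test"
        PRODUCTION = "production"
        DISABLED = "disabled"

    class AlgorithmPlugin(ABC):
        """Contract every plug-in implements, however it is built internally."""
        name: str
        version: str
        state: PluginState = PluginState.TEST

        @abstractmethod
        def run(self, inputs: dict) -> dict:
            """Process preprocessed inputs and return structured results."""

    class PluginRegistry:
        def __init__(self):
            self._plugins = {}   # keyed by (name, version)

        def register(self, plugin: AlgorithmPlugin) -> None:
            self._plugins[(plugin.name, plugin.version)] = plugin

        def run(self, name: str, version: str, inputs: dict) -> dict:
            plugin = self._plugins[(name, version)]
            if plugin.state is not PluginState.PRODUCTION:
                raise RuntimeError(f"{name} {version} is not approved for production")
            return plugin.run(inputs)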

Quality Control (QC) Tools

Automated and manual correction of data inputs and outputs will be required to address inaccurate or incomplete data sets.

Logging

Capturing the logic and variables used in AI processes will be important to retrospectively assess their success and to identify data generated by processes that prove over time to be flawed.
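
A structured audit record is one simple way to capture this; the sketch below (field names invented) logs what ran, on which inputs, and what it produced, so results can later be traced back and re-assessed.

    import json
    import logging
    import uuid
    from datetime import datetime, timezone

    logger = logging.getLogger("vnai.audit")

    def log_inference(algorithm: str, version: str, inputs_digest: str, outputs: dict) -> None:
        # One replayable record per AI run: what ran, on what, and the result.
        logger.info(json.dumps({
            "event_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "algorithm": algorithm,
            "version": version,
            "inputs_digest": inputs_digest,   # e.g. a hash of the input data set
            "outputs": outputs,
        }))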

Data Analytics

For both business stakeholders (people) and connected applications (software), the ability to use data to measure success and predict outcomes will be essential.

Data Persistence Rules

Much like other data processing applications that rely on data as input, the VNAi will need to have configurable rules that determine how long defined sets of data are persisted, and when they are purged.
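
A sketch of such configurable rules, with entirely made-up data classes and retention periods, might be as simple as:

    from datetime import datetime, timedelta, timezone

    # Hypothetical retention policy per class of VNAi data.
    RETENTION = {
        "inbound_raw_data": timedelta(days=30),
        "algorithm_results": timedelta(days=365 * 7),
        "audit_logs": timedelta(days=365 * 10),
    }

    def is_expired(data_class: str, created_at: datetime) -> bool:
        """True when data of this class is past its retention period and can be purged."""
        return datetime.now(timezone.utc) - created_at > RETENTION[data_class]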

Performance

The VNAi will need to be able to quickly process large data sets at peak loads, even with highly complex algorithms. Dynamically assigning IT resources (compute, network, storage, etc.) within minutes, not hours or days, may be necessary.

Deployment Flexibility

Some organizations will want their VNAi in the cloud, others will want it on-premises. Perhaps some organizations want a hybrid approach, where learning and testing is on-premises, but production processing is done in the cloud.

High Availability (HA), Business Continuity (BC), and System Monitoring

As with any critical system, uptime is important. The ability for the VNAi to be deployed in an HA/BC configuration, and to be monitored like any other production system, will be essential.

Multi-tenant Data Segmentation and Access Controls

A shared VNAi reduces the effort to build and maintain the system, but access to the data it holds and produces will require controls that ensure only authorized parties and systems can reach it.

Cost Sharing

Though this is not a technical characteristic, the VNAi solution likely requires the ability to share the system build and operating costs among participating organizations. Methods to identify usage of specific functions and algorithms to allocate licensing revenues would be very helpful.

Effective Technical Support

A VNAi can be a complex ecosystem with variable uses, data inputs, and outputs. If the system is actively learning, how it behaves on one day may differ from another. Supporting such a system will, in many cases, require support staff with developer-level skills.

Conclusion

Without some form of VNAi (such as the one described here), we risk choosing between monolithic, single-vendor platforms and a myriad of different applications, each with its own vendor agreements, hosting and interface requirements, and management tools.

Special thanks to @kinsonho for his wisdom in reviewing this post prior to publication.

Get Moving – New Kinect SDK from Microsoft

Using a Microsoft Kinect (motion, voice) as a healthcare application interface input (e.g. navigating images without touching a computer in the operating room) generated a lot of press, but those folks who actually developed for it found the initial device release lacking a mature API for PC application developers. Microsoft has since released a software developer kit (SDK), but it still required extra coding to have the device recognize desirable gestures. An update to the SDK was recently released, and it adds several new gestures that can be recognized and made available to application developers through the SDK.

Check it out.

So, for those many Rads that played the clip from Minority Report (where Tom Cruise interacts with images and video by moving his hands around) during their talks at SIIM and elsewhere, we are one step closer to realizing your dream. 🙂 Though, do try and wave your arms around for a 4 to 8 hour workday and let me know how it goes—eye fatigue will be the least of your worries, my friends.

Plug-ins vs. APIs

Without endorsing the product represented (I have not looked at it), I think this blog post covers some important points (and is well written, so do have a look).

Some thoughts, though…

Pet peeve of mine: The smartphone app metaphor is a bit overused in enterprise software that manages personal healthcare information. When a physician is making a life-altering decision about my (or my loved ones') care, I would hope that they are relying on something with a bit more vigilance around it than something they downloaded along with a Kanye West tune. I want something that is "battle tested" and properly integrated into the enterprise (see ITIL Change and Configuration Management) before being used clinically in the hands of healthcare providers. Call me paranoid.

Look, I get it. Users like downloading and using "apps". But the expression of desire for "apps" is really a desire for more flexibility and responsiveness from their healthcare IT vendor(s). People like the personal experience of control that downloading apps gives them, and the speed of innovation (with a decent platform, apps can be developed with massive parallelism: think Army of Nerds). But if Angry Birds makes a mistake, no one dies (well, a bird, I suppose; I have never played the game, so I am only guessing).

I may get some flame mail from respected friends for this one: open source applications are also not the (complete) answer. Sure, it's fun to be able to change and extend source code (and to become a hero by fixing bugs yourself instead of waiting for a fix from your vendor), but of all the "coders" I have met, only a small percentage really understand the full scope of professional software development required to produce an application of the quality we need in healthcare. Don't get me wrong: I love hackers. They break through barriers and solve problems with practical methods. But their output typically needs a lot more work to become production ready. Great power, great responsibility, and all that.

The reality is: we don’t need a bunch of user-downloadable apps or developer-modifiable source code. We need APIs. Good ones. REST-based Web service APIs. Documented, well-designed and tested. Supported. Unbreakable, secure ones.

When done right, APIs are contracts. Promises. They aren’t some side-door or limited set of functions added as an afterthought. They become the language by which applications speak to each other, even internal parts of an application.

Healthcare standards like DICOM and HL7 already include APIs, just ones based on older technologies and protocols. The information model is still pretty solid, I would argue, so we don't need completely new "words", just some different ways to communicate.
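
DICOMweb is a good example of those "different ways to communicate": the same information model, exposed over REST. The sketch below runs a QIDO-RS study search with Python's requests library; the base URL is a placeholder.

    import requests

    QIDO_BASE = "https://pacs.example.org/dicom-web"   # placeholder service root

    # QIDO-RS study search: same DICOM query semantics, plain HTTP + JSON.
    response = requests.get(
        f"{QIDO_BASE}/studies",
        params={"PatientID": "MRN12345", "StudyDate": "20240101-20241231"},
        headers={"Accept": "application/dicom+json"},
    )
    response.raise_for_status()

    for study in response.json():
        # DICOM JSON keys are tag numbers; 0020000D is Study Instance UID.
        print(study["0020000D"]["Value"][0])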

Oh, and products with great APIs are generally higher quality, because it is much easier to write automated tests to beat the crap out of them before unleashing them onto the world.

More on this topic later.