There are obviously many more recommended practices for running an RFP, but these three are always worth considering. I have tried to provide guidance that applies to both IT and equipment purchases.
In less than two weeks, the SIIM Annual Meeting will be taking place at the newly opened Gaylord Rockies Resort in Aurora, Colorado. I am looking forward to catching up with old friends and meeting new ones, along with learning lots of new information.
This year, I am chairing or co-chairing three sessions and participating in a fourth. Here is a summary, with my role in parentheses. All times are local (MT).
- Wed 26-Jun 4:15 pm – Surviving in the Remote World: Off-Site Reading (Participant)
- Thu 27-Jun 4:15 pm – Building the Right Team for Success in the Consolidated Enterprise (Co-chair)
- Fri 28-Jun 9:45 am – For Project Leaders: Preparing for Successful Contract Negotiation (Chair)
- Fri 28-Jun 1:15 pm – IHE XDS – Dispelling Myth from Reality (Chair)
If this is your first annual meeting, I recommend you go to the First Time Attendee Meet-up on Tue 25-Jun at 6:15 pm in the Adams C/D Lobby room and attend the New Member Orientation: Intro to SIIM session on Wed 26-Jun at 9:45 am. Jim and Rick are great educators and mentors.
Also, be sure to check out the pre-conference sessions, the Hackathon and Innovation Challenge events, as well as the #AskIndustry, Learning Lab, and Scientific sessions. And be sure to put the SIIM 2019 Reception on your calendar. It is great to connect with peers and review the scientific posters.
While you are on-site, don’t forget to share information and activities (and selfies!) using the SIIM Twitter handle @SIIM_Tweets (remember to follow SIIM for great info on imaging informatics throughout the year!) and the official 2019 Annual Meeting hashtag: #SIIM19.
I hope to see you there!
On Friday, May 10, I once again have the pleasure of co-chairing the Medical Imaging Informatics and Teleradiology (MIIT) conference at Liuna Station in Hamilton, ON.
The program for the 14th annual MIIT meeting is stellar, we have a record number of sponsors, and—thanks to lower registration fees and new group discounts—many people are already signed up to attend.
- AI Strategy of CAR – Roger Tam will enlighten us on the Canadian Association of Radiologists’ strategy for AI.
- Cloud Services for Machine Learning and Analytics – Patrick Kling will reveal how cloud-based solutions can address the challenge of managing large volumes of data.
- Patient-Centered Radiology – Dr. Tessa Cook (@asset25) will provide insight into her team’s progress on this topic at UPenn.
- Collecting Data to Facilitate Change – Dr. Alex Towbin of Cincinnati Children’s Hospital (@CincyKidsRad) will show us how to use data to support change management.
- Panel on the Future of DIRs in Canada – In this interactive session, we will discover what has been accomplished with Diagnostic Imaging Repositories (DIRs) in Ontario, and what’s next. I will moderate a panel with leaders from SWODIN and HDIRS.
- Practical Guide to Making AI a Reality – Brad Genereaux (@IntegratorBrad), with broad experience working in hospitals, industry, standards committees, and technology, will help attendees prepare for this new area.
- Healthcare IT Standards – Kevin O’Donnell, a veteran of healthcare standards development and MIIT, will provide an overview of developments within the DICOM and HL7 standards, and IHE.
- ClinicalConnect – Dale Anderson will provide an update on this application (@ClinicalConnect), used by many organizations in the local region.
If you can attend, I am sure you will find the event educational. There are lots of opportunities to interact with our speakers and sponsors. If you are not from the region, you may enjoy a weekend getaway to the nearby Niagara-on-the-Lake wine region.
And don’t forget to follow MIIT (@MIIT_Canada) on Twitter!
As health systems acquire or partner with previously independent facilities to form Consolidated Enterprises, and implement a Shared Electronic Medical Record (EMR) system, they often consolidate legacy diagnostic imaging IT systems to a shared solution. Facilities, data centers, identity management, networking equipment, interface engines, and other IT infrastructure and communications components are also often consolidated and managed centrally. Often, a program to capture and manage clinical imaging records follows.
Whether the health system deploys a Vendor Neutral Archive (VNA), an Enterprise PACS, or a combination of both, some investment is made to reduce the overall number of imaging IT systems installed and the number of interfaces to maintain. An enterprise-wide radiation dose monitoring solution may also be implemented.
While much has been written on strategies to achieve this type of shared, integrated, enterprise-wide imaging IT solution, there are several other opportunities for improvement beyond this vision.
In addition to imaging and information record management systems, enterprise-wide solutions for system monitoring, audit record management, and data analytics can also provide significant value.
System Monitoring

Organizations often have some form of enterprise-level host monitoring solution, which provides basic information on the operational status of the computers, operating systems, and (sometimes) databases. However, even when the hosts are operating normally, many conditions can still impede a solution or workflow.
In imaging, there are many transaction dependencies that, if they are not all working as expected, can cause workflow to be delayed or disabled. Often, troubleshooting these workflow issues can be a challenge, especially in a high-transaction enterprise.
Having a solution that monitors all the involved systems and the transactions between them can help detect, prevent, and correct workflow issues.
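To make the idea concrete, here is a minimal sketch of a monitor that walks an imaging transaction chain and reports the first broken dependency. The check names and probe functions are hypothetical stand-ins; a real monitor would probe live interfaces (for example, an HL7 feed heartbeat or a DICOM C-ECHO).

```python
# Sketch of a workflow monitor that verifies each link in an imaging
# transaction chain and reports any broken dependency.
def run_checks(checks):
    """Run ordered (name, check_fn) pairs; return the names that failed."""
    failures = []
    for name, check in checks:
        try:
            if not check():
                failures.append(name)
        except Exception:
            failures.append(name)
    return failures

# Hypothetical stand-ins for real probes of each interface.
checks = [
    ("EMR->RIS order feed", lambda: True),
    ("RIS->PACS worklist (DICOM MWL)", lambda: True),
    ("Modality->PACS store (DICOM C-STORE)", lambda: False),  # simulated outage
    ("PACS->VNA archive", lambda: True),
]

print(run_checks(checks))  # a failed link upstream explains downstream delays
```

A failed link near the top of the chain often explains symptoms reported much further downstream, which is why monitoring the transactions, not just the hosts, matters.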
Audit Record Management
Many jurisdictions have laws and regulations that require a comprehensive audit trail to be made available on demand. Typically, this audit trail provides a time-stamped record of all accesses and changes to a patient’s record, including their medical images, indexed by the users and systems involved.
Generating this audit trail from the myriad logs of each involved system, each with its own record format and schema, can be a costly manual effort.
The Audit Trail and Node Authentication (ATNA) integration profile from Integrating the Healthcare Enterprise (IHE) provides a framework for publishing, storing, and indexing audit records from different systems. It defines trigger events, a record format, and a communication protocol.
Enterprises are encouraged to look for systems that support the appropriate actors in the ATNA integration profile when procuring new IT systems and equipment. Implementing an Audit Record Repository with tools that make audit trail generation easy is also important.
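To give a feel for the record format, here is a simplified sketch of an ATNA-style audit message built with Python's standard XML library. The element names follow the DICOM/IHE audit message schema in spirit, but real records carry considerably more detail, so treat this as an illustration rather than a conformant implementation.

```python
# Simplified sketch of an ATNA-style audit message (the real DICOM
# audit message schema defines many more elements and attributes).
import xml.etree.ElementTree as ET

def build_audit_record(user_id, patient_id, event_time):
    msg = ET.Element("AuditMessage")
    event = ET.SubElement(msg, "EventIdentification",
                          EventActionCode="R",        # R = read/view
                          EventDateTime=event_time,
                          EventOutcomeIndicator="0")  # 0 = success
    ET.SubElement(event, "EventID", code="110103",
                  displayName="DICOM Instances Accessed")
    ET.SubElement(msg, "ActiveParticipant", UserID=user_id,
                  UserIsRequestor="true")
    ET.SubElement(msg, "ParticipantObjectIdentification",
                  ParticipantObjectID=patient_id,
                  ParticipantObjectTypeCode="1")      # 1 = person
    return ET.tostring(msg, encoding="unicode")

record = build_audit_record("dr.smith", "PID-12345", "2019-05-01T10:30:00Z")
print(record)
```

Because every participating system emits records in this common structure, an Audit Record Repository can index them by user, patient, and time without per-system parsing logic.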
Data Analytics

Capturing and analyzing operational data is key to identifying issues and trends. Because each system generates logs in different formats and via different methods, it often takes significant effort to normalize data records and produce reliable analytics reports.
Periodic (for example, daily, weekly, or monthly) reports, common in imaging departments for decades, are often no longer considered sufficient in today’s on-demand, real-time world. Interactive dashboards that let stakeholders examine the data through different “lenses” by changing the query parameters are increasingly being implemented.
Getting reliable analytics results using data from both information (for example, the EMR and RIS) and imaging (for example, modalities, PACS, VNA, and Viewers) systems often requires significant effort, tools to extract/transform/load (ETL) the data, and a deep understanding of the “meaning” of the data.
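A small sketch of the "transform" step may help: two sources with different field names are mapped into one common schema before analysis. The field names here are hypothetical examples, not from any specific RIS or PACS product.

```python
# Sketch of normalizing records from two sources (a hypothetical RIS
# export and a hypothetical PACS log) into one schema for analytics.
def from_ris(row):
    return {"accession": row["ACC_NO"], "modality": row["MOD"],
            "completed": row["COMPLETE_DT"]}

def from_pacs(row):
    return {"accession": row["AccessionNumber"], "modality": row["Modality"],
            "completed": row["StudyDate"]}

ris_rows = [{"ACC_NO": "A100", "MOD": "CT", "COMPLETE_DT": "20190501"}]
pacs_rows = [{"AccessionNumber": "A101", "Modality": "MR",
              "StudyDate": "20190502"}]

normalized = [from_ris(r) for r in ris_rows] + [from_pacs(r) for r in pacs_rows]
print(normalized)
```

The hard part in practice is not the mapping mechanics but agreeing on the "meaning" of each field (for example, which timestamp counts as exam completion) across systems.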
Implementing solutions that continuously and efficiently manage the health of your systems, the records accessed, and operational metrics is an important aspect of today’s Consolidated Enterprise. Evaluating any new system on its ability to integrate with, and provide information to, these solutions is recommended.
In my previous post, I discussed common challenges associated with the imaging exam acquisition workflows performed by Technologists (Tech Workflow) that many healthcare provider organizations face today.
In this post, we will explore imaging record Quality Control (QC) workflow.
A typical Consolidated Enterprise is a healthcare provider organization consisting of multiple hospitals/facilities that often share a single instance of EMR/RIS and Image Manager/Archive (IM/A) systems, such as PACS or VNA. The consolidation journey is complex and requires careful planning, grounded in a comprehensive approach to the solution and interoperability architectures.
An Imaging Informatics team supporting a Consolidated Enterprise typically consists of PACS Admin and Imaging Analyst roles supporting one or more member-facilities.
Imaging Record Quality Control (QC) Workflows
To ensure the quality (completeness, consistency, correctness) of imaging records, providers rely on automatic workflows (such as validation by the IM/A system of the received DICOM study information against the corresponding HL7 patient and order information) and manual workflows performed either by Technologists during the Tech Workflow or by Imaging Informatics team members post-exam acquisition. Automatic updates of Patient and Procedure information are achieved through HL7 integration between EMR/RIS and the IM/A.
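The automatic validation described above can be sketched as a simple comparison of the received DICOM study attributes against the HL7-derived order record. The field names here are simplified illustrations; a real IM/A validates many more attributes and follows site-configurable rules.

```python
# Sketch of automated QC validation: compare received DICOM study
# attributes against the HL7-derived patient/order record.
def validate_study(dicom_study, order):
    """Return the list of attributes that do not match the order."""
    mismatches = []
    if dicom_study["PatientID"] != order["patient_id"]:
        mismatches.append("PatientID")
    if dicom_study["AccessionNumber"] != order["accession"]:
        mismatches.append("AccessionNumber")
    return mismatches

study = {"PatientID": "PID-1", "AccessionNumber": "A200"}
order = {"patient_id": "PID-1", "accession": "A201"}
print(validate_study(study, order))  # ['AccessionNumber']
```

Studies that fail validation are typically routed to a QC worklist for one of the manual corrections listed below.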
Typical manual QC activities include the following:
- Individual Image Corrections (for example, correction of a wrong laterality marker)
- DICOM Header Updates (for example, an update of the Study Description DICOM attribute)
- Patient Update (moving a complete DICOM study from one patient record to another)
- Study Merge (moving some, or all, of the DICOM objects from the “merged from” study to the “merged to” study)
- Study Split (moving some of the DICOM objects/series from the “split from” study to the “split to” study)
- Study Object Deletion (deletion of one or more objects/series from a study)
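As an illustration of what a Study Merge involves at the metadata level, here is a minimal sketch using plain dicts to stand in for DICOM datasets. A real QC tool would use a DICOM toolkit, update the archive database, and preserve an audit trail of the change.

```python
# Sketch of a Study Merge: move selected (or all) objects from the
# "merged from" study to the "merged to" study, rewriting their
# Study Instance UID. Plain dicts stand in for DICOM datasets.
def merge_study(merged_from, merged_to, instance_uids=None):
    moving = [obj for obj in merged_from["objects"]
              if instance_uids is None or obj["SOPInstanceUID"] in instance_uids]
    for obj in moving:
        obj["StudyInstanceUID"] = merged_to["StudyInstanceUID"]
        merged_to["objects"].append(obj)
        merged_from["objects"].remove(obj)

study_a = {"StudyInstanceUID": "1.2.3.1", "objects": [
    {"SOPInstanceUID": "1.2.3.1.1", "StudyInstanceUID": "1.2.3.1"}]}
study_b = {"StudyInstanceUID": "1.2.3.2", "objects": []}

merge_study(study_a, study_b)  # move all objects
print(len(study_a["objects"]), len(study_b["objects"]))
```

A Study Split is essentially the same operation with an explicit subset of objects and a newly created target study.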
QC Workflow Challenges
Access Control Policy
One of the key challenges in ensuring the quality of imaging records across a large health system enterprise is determining who is qualified and authorized to perform QC activities. A common approach is to provide data control and correction tools to staff at the site where the imaging exam was acquired, since they are either aware of the context of an error or can easily get it by interacting with the local clinical staff, systems, or the patient. With this approach, local staff can access only data acquired at the sites to which they are assigned, which complies with patient privacy policies and prevents accidental updates to another site’s records. The following diagram illustrates this approach.
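The site-scoped authorization rule itself is simple; a minimal sketch, with illustrative names not tied to any specific product, might look like this:

```python
# Sketch of a site-scoped QC authorization check: a user may correct
# only studies acquired at sites they are assigned to.
def can_perform_qc(user_sites, study_site):
    return study_site in user_sites

# Hypothetical user-to-site assignments.
assignments = {"jdoe": {"Hospital A", "Clinic B"}, "asmith": {"Hospital C"}}

print(can_perform_qc(assignments["jdoe"], "Hospital A"))   # True
print(can_perform_qc(assignments["asmith"], "Hospital A")) # False
```

The operational challenge is less the check than maintaining accurate user-to-site assignments as staff move between facilities.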
Another important area of consideration is to determine which enterprise system should be the “source of truth” for Imaging QC workflows when there are multiple Image Manager/Archives. Consider the following common Imaging IT architecture, where multiple facilities share both PACS and VNA applications. In this scenario, the PACS maintains a local DICOM image cache while the VNA provides the long-term image archive. Both systems provide QC tools that allow authorized users to update the structure or content of imaging records.
Since DICOM studies stored in the PACS cache also exist in the VNA, any changes resulting from QC activity performed in one of these systems must be communicated to the other to ensure that both systems are in sync. This gets more complicated when many systems storing DICOM data are involved.
Integrating the Healthcare Enterprise (IHE) developed the Imaging Object Change Management (IOCM) integration profile, which provides technical details on how best to propagate imaging record changes among multiple systems.
To minimize the complexity associated with the synchronization of imaging record changes, it is usually a good idea to appoint one system to be the “source of truth”. Although bidirectional (from PACS to VNA or from VNA to PACS) updates are technically possible, the complexity of managing and troubleshooting such integration while ensuring good data quality practices can be significant.
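The "source of truth" pattern can be sketched in a few lines: QC changes are applied to one designated system first, then propagated one way to subscribers. This is a much-simplified illustration in the spirit of IOCM (which communicates changes via rejection notes), not a conformant implementation, and it assumes the VNA is the designated system.

```python
# Sketch of unidirectional change propagation with the VNA as the
# assumed "source of truth" for QC changes.
class Archive:
    def __init__(self, name):
        self.name = name
        self.rejected = set()   # (instance UID, reason) pairs

    def reject_instance(self, uid, reason):
        self.rejected.add((uid, reason))

vna = Archive("VNA")
pacs = Archive("PACS")
subscribers = [pacs]            # systems holding copies of the data

def apply_qc_rejection(uid, reason):
    vna.reject_instance(uid, reason)   # source of truth first
    for system in subscribers:         # then propagate one way
        system.reject_instance(uid, reason)

apply_qc_rejection("1.2.3.4.5", "incorrect worklist entry selected")
print(vna.rejected == pacs.rejected)  # True: both systems in sync
```

With only one direction of flow, there is never a question of which system's version of a record wins when changes conflict.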
Often the QC workflow is not discussed in depth during the procurement phase of a new PACS or VNA. The result: the ability of the Vendor of Choice’s (VOC) solution to provide robust, reliable, and user-friendly QC tools, while ensuring compliance with access control rules across multiple sites, is not fully assessed. Practice shows that vendors vary significantly in these functional areas, and their capabilities should be closely evaluated as part of any procurement process.
Lessons Learned from Vendor Neutral Archive (VNA) Solutions
For well over a decade, VNA solutions have been available to provide a shared multi-department, multi-facility repository and integration point for healthcare enterprises. Organizations employing these systems, often in conjunction with an enterprise-wide Electronic Medical Record (EMR) system, typically benefit from reduced complexity compared to managing disparate archives for each site and department. These organizations can invest their IT dollars in making the system fast and highly available, whether deployed on-premises or in the cloud, and the shared archive can act as a central, managed broker for interoperability with other enterprises.
The ability to standardize on the format, metadata structure, quality of data (completeness and consistency of data across records, driven by organizational policy), and interfaces for storage, discovery, and access of records is much more feasible with a single, centrally-managed system. Ensuring adherence to healthcare IT standards, such as HL7 and DICOM, for all imaging records across the enterprise is possible with a shared repository that has mature data analytics capabilities and Quality Control (QC) tools.
What is a Vendor Neutral Artificial Intelligence (VNAi) Solution?
The same benefits of centralization and standardization of interfaces and data structures that VNA solutions provide are applicable to Artificial Intelligence (AI) solutions. This is not to say that a VNAi solution must also be a VNA (though it could be), just that they are both intended to be open and shared resources that provide services to several connected systems.
Without a shared, centrally managed solution, healthcare enterprises run the risk of deploying a multitude of vendor-proprietary systems, each with a narrow set of functions. Each of these systems would require integration with data sources and consumer systems, user interfaces to configure and support it, and potentially varying platforms to operate on.
Do we want to repeat the historic challenges and costs associated with managing disparate image archives when implementing AI capabilities in an enterprise?
Characteristics of a Good VNAi Solution
The following capabilities are important for a VNAi solution.
Interface Support

Flexible, well-documented, and supported interfaces for both imaging and clinical data are required. Standards should be supported where they exist; where they do not, good design principles, such as the use of REST APIs and adherence to IT security best practices, should be followed.
Note: Connections to, or inclusion of, other sub-processes—such as Optical Character Recognition (OCR) and Natural Language Processing (NLP)—may be necessary to extract and preprocess unstructured data before use by AI algorithms.
Data Format Support
Both inbound and outbound data will vary, so a VNAi needs to support a wide range of data formats (including multimedia) and be able to process that data for use in its algorithms. The more data parsing and preprocessing the VNAi can perform, the less each algorithm needs to handle itself.
Note: It may be required to have a method to anonymize some inbound and/or outbound data, based on configurable rules.
Processor Plug-in Framework
To provide consistent and reliable services to algorithms, which could be written in different programming languages or run on different hosts, the VNAi needs a well-documented, tested, and supported framework for plugging in algorithms for use by connected systems. Methods to manage the state of a plug-in (for example, test, production, and disabled), along with revision control, will be valuable.
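A minimal sketch of such a registry, with lifecycle states and a recorded version per plug-in, might look like the following. All names here are illustrative assumptions, not an actual VNAi API.

```python
# Sketch of a plug-in registry with lifecycle states and versioning.
VALID_STATES = {"test", "production", "disabled"}

class PluginRegistry:
    def __init__(self):
        self._plugins = {}

    def register(self, name, version, fn):
        """New plug-ins start in 'test' until explicitly promoted."""
        self._plugins[name] = {"version": version, "fn": fn, "state": "test"}

    def set_state(self, name, state):
        if state not in VALID_STATES:
            raise ValueError(f"unknown state: {state}")
        self._plugins[name]["state"] = state

    def run(self, name, data):
        plugin = self._plugins[name]
        if plugin["state"] != "production":
            raise RuntimeError(f"{name} is not in production")
        return plugin["fn"](data)

registry = PluginRegistry()
registry.register("nodule-detector", "1.0.2", lambda data: {"findings": len(data)})
registry.set_state("nodule-detector", "production")
print(registry.run("nodule-detector", ["img1", "img2"]))
```

Gating execution on the "production" state is what lets an organization trial a new algorithm version without exposing connected clinical systems to its output.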
Quality Control (QC) Tools
Automated and manual correction of data inputs and outputs will be required to address inaccurate or incomplete data sets.
Capturing the logic and variables used in AI processes will be important for retrospectively assessing their success and for identifying data generated by processes that prove, over time, to be flawed.
For both business stakeholders (people) and connected applications (software), the ability to use data to measure success and predict outcomes will be essential.
Data Persistence Rules
Much like other data processing applications that rely on data as input, the VNAi will need to have configurable rules that determine how long defined sets of data are persisted, and when they are purged.
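A sketch of configurable persistence rules: each data class maps to a retention period, and anything past its window is purged. The classes and periods below are illustrative assumptions, not a policy recommendation.

```python
# Sketch of configurable data persistence rules: purge records whose
# age exceeds the retention period for their data class.
from datetime import date, timedelta

retention_rules = {  # hypothetical policy
    "inference-input": timedelta(days=30),
    "inference-result": timedelta(days=365),
}

def purge(records, today):
    """Return only the records still within their retention window."""
    kept = []
    for record in records:
        limit = retention_rules[record["class"]]
        if today - record["created"] <= limit:
            kept.append(record)
    return kept

records = [
    {"class": "inference-input", "created": date(2019, 1, 1)},
    {"class": "inference-result", "created": date(2019, 1, 1)},
]
print(purge(records, date(2019, 5, 1)))  # only the result is retained
```

Separating the rules from the purge logic lets each participating organization tune retention without code changes.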
Scalability and Performance

The VNAi will need to quickly process large data sets at peak loads, even with highly complex algorithms. Dynamically assigning IT resources (compute, network, storage, etc.) within minutes, not hours or days, may be necessary.
Deployment Flexibility

Some organizations will want their VNAi in the cloud; others will want it on-premises. Some may want a hybrid approach, where learning and testing happen on-premises but production processing is done in the cloud.
High Availability (HA), Business Continuity (BC), and System Monitoring
As with any critical system, uptime is important. The ability to deploy the VNAi in an HA/BC configuration will be essential.
Multi-tenant Data Segmentation and Access Controls
A shared VNAi reduces the effort to build and maintain the system, but access to the data it provides will require controls that ensure data is accessed only by authorized parties and systems.
Cost Sharing

Though this is not a technical characteristic, the VNAi solution will likely require the ability to share system build and operating costs among participating organizations. Methods to identify usage of specific functions and algorithms, in order to allocate licensing revenues, would be very helpful.
Effective Technical Support
A VNAi can be a complex ecosystem with variable uses, data inputs, and outputs. If the system is actively learning, its behavior on one day may differ from another. Supporting such a system will, in many cases, require support staff with developer-level skills.
Without some form of VNAi (such as the one described here), we risk choosing between monolithic, single-vendor platforms and a myriad of different applications, each with its own vendor agreements, hosting and interface requirements, and management tools.
Special thanks to @kinsonho for his wisdom in reviewing this post prior to publication.