AXIS Imaging Interview: Four Options for Image-Display Architecture: A Deep Dive

How diagnostic images get from server hard drives to the screen is a topic of great interest to both industry and buyers of imaging IT solutions. Speed of image access is critical for quality of service and care, along with productivity and user satisfaction.

Different solutions take different approaches to optimize image delivery over varying networks. Many solutions combine more than one software design method to achieve the best possible performance. In many cases, industry or buyers will use jargon, like “streaming”, to apply a simple term to these sometimes complex technical methods.

In a recent article by AXIS Imaging, I describe four common techniques that are used in (and occasionally between) different imaging IT systems to maximize image display speed. The article's length limit prevented coverage of additional methods, and it intentionally excluded IT infrastructure optimizations (for example, faster networks, CPUs, and drives) and the use of irreversible (lossy) compression of the images on disk.

Important Article Corrections

Although I followed up with the author about some transcription errors introduced when preparing the article, they had not been corrected (at least at the time of posting this), so I am noting the corrections here.

  1. Where the article states “…radiology practices and departments have options when designing high-speed image display…”, it should state “…radiology practices and departments have options when choosing the solution for high-speed image display…”. Radiology practices choose a solution and that solution will use one or more of the design methods (or additional ones not listed), but the Radiologists don’t choose the methods within the solution.
  2. In subsection #3, the second bullet refers to a condition where pre-caching is typically not possible (the article states the opposite). If the worklist is a separate application from the image display application, the image display application often has no way of knowing which exams are in the worklist, and so is unable to pre-cache the images to the workstation. This is not always true: some worklist applications can expose this information to the image display application through an API, but this capability is not universally available and requires a specific integration to be developed across the applications.
  3. In subsection #4, where it states “visual design infrastructure”, it should instead state “virtual desktop infrastructure”. People commonly refer to it as VDI.

The Case for Vendor Neutral Artificial Intelligence (VNAi) Solutions in Imaging

Defining VNAi

Lessons Learned from Vendor Neutral Archive (VNA) Solutions

For well over a decade, VNA solutions have been available to provide a shared multi-department, multi-facility repository and integration point for healthcare enterprises. Organizations employing these systems, often in conjunction with an enterprise-wide Electronic Medical Record (EMR) system, typically benefit from a reduction in complexity, compared to managing disparate archives for each site and department. These organizations can invest their IT dollars in ensuring that the system is fast and provides maximum uptime, using on-premises or cloud deployments. And it can act as a central, managed broker for interoperability with other enterprises.

The ability to standardize on the format, metadata structure, quality of data (completeness and consistency of data across records, driven by organizational policy), and interfaces for storage, discovery, and access of records is much more feasible with a single, centrally-managed system. Ensuring adherence to healthcare IT standards, such as HL7 and DICOM, for all imaging records across the enterprise is possible with a shared repository that has mature data analytics capabilities and Quality Control (QC) tools.

What is a Vendor Neutral Artificial Intelligence (VNAi) Solution?

The same benefits of centralization and standardization of interfaces and data structures that VNA solutions provide are applicable to Artificial Intelligence (AI) solutions. This is not to say that a VNAi solution must also be a VNA (though it could be), just that they are both intended to be open and shared resources that provide services to several connected systems.

Without a shared, centrally managed solution, healthcare enterprises run the risk of deploying a multitude of vendor-proprietary systems, each with a narrow set of functions. Each of these systems would require integration with data sources and consumer systems, user interfaces to configure and support it, and potentially varying platforms to operate on.

Do we want to repeat the historic challenges and costs associated with managing disparate image archives when implementing AI capabilities in an enterprise?

Characteristics of a Good VNAi Solution

The following capabilities are important for a VNAi solution.

Interfaces

Flexible, well-documented, and supported interfaces for both imaging and clinical data are required. Standards should be supported, where they exist. Where standards do not exist, good design principles, such as the use of REST APIs and support for IT security best practices, should be adhered to.

Note: Connections to, or inclusion of, other sub-processes—such as Optical Character Recognition (OCR) and Natural Language Processing (NLP)—may be necessary to extract and preprocess unstructured data before use by AI algorithms.

Data Format Support

The data coming in and going out will vary, so a VNAi will need to support all kinds of data formats (including multimedia formats), with the ability to process this data for use in its algorithms. The more data parsing and preprocessing the VNAi can perform, the less each algorithm will need to deal with it.

Note: It may be required to have a method to anonymize some inbound and/or outbound data, based on configurable rules.
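A minimal sketch of what such rule-driven anonymization might look like; the field names and rule actions are illustrative only, not drawn from any particular de-identification profile:

```python
import hashlib

REDACT = "REMOVED"

def anonymize(record, rules):
    """Apply per-field anonymization rules to a metadata record.

    rules maps a field name to an action:
      "remove" - drop the field entirely
      "redact" - replace the value with a placeholder
      "hash"   - replace with a truncated one-way hash (keeps linkability)
    """
    out = {}
    for field, value in record.items():
        action = rules.get(field)
        if action == "remove":
            continue
        if action == "redact":
            out[field] = REDACT
        elif action == "hash":
            out[field] = hashlib.sha256(str(value).encode()).hexdigest()[:16]
        else:
            out[field] = value  # no rule: pass the field through unchanged
    return out

record = {"PatientName": "DOE^JANE", "PatientID": "12345", "StudyDate": "20180504"}
rules = {"PatientName": "remove", "PatientID": "hash"}
clean = anonymize(record, rules)
```

Separate rule sets could then be configured per inbound source or outbound destination.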

Processor Plug-in Framework

To provide consistent and reliable services to algorithms, which could be written in different programming languages or run on different hosts, the VNAi needs a well-documented, tested, and supported framework for plugging in algorithms for use by connected systems. Methods to manage the state of a plug-in (e.g., test, production, or disabled), as well as revision controls, will be valuable.
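One possible shape for such a framework, sketched with hypothetical names: a registry tracks each plug-in's revision and lifecycle state, and only plug-ins promoted to production can be invoked by connected systems.

```python
# Illustrative sketch of a VNAi plug-in registry with lifecycle states.
STATES = {"test", "production", "disabled"}

class PluginRegistry:
    def __init__(self):
        self._plugins = {}  # name -> {"revision", "state", "run"}

    def register(self, name, revision, run, state="test"):
        if state not in STATES:
            raise ValueError("unknown state: " + state)
        self._plugins[name] = {"revision": revision, "state": state, "run": run}

    def promote(self, name, state):
        if state not in STATES:
            raise ValueError("unknown state: " + state)
        self._plugins[name]["state"] = state

    def invoke(self, name, payload):
        plugin = self._plugins[name]
        if plugin["state"] != "production":
            raise RuntimeError(name + " is not in production")
        return plugin["run"](payload)

registry = PluginRegistry()
registry.register("lung-nodule-detect", revision="1.2.0",
                  run=lambda images: {"findings": len(images)})
registry.promote("lung-nodule-detect", "production")
result = registry.invoke("lung-nodule-detect", ["img1", "img2"])
```

A real framework would also need process isolation, language bindings, and audit hooks, but the state/revision bookkeeping is the part that keeps a shared system manageable.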

Quality Control (QC) Tools

Automated and manual correction of data inputs and outputs will be required to address inaccurate or incomplete data sets.

Logging

Capturing the logic and variables used in AI processes will be important to retrospectively assess their success and to identify data generated by processes that prove over time to be flawed.

Data Analytics

For both business stakeholders (people) and connected applications (software), the ability to use data to measure success and predict outcomes will be essential.

Data Persistence Rules

Much like other data processing applications that rely on data as input, the VNAi will need to have configurable rules that determine how long defined sets of data are persisted, and when they are purged.
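For example, such rules might be as simple as a retention period per data class; the classes and periods below are purely illustrative.

```python
from datetime import date, timedelta

RULES = {
    "inference_input": 30,     # purge raw inputs after 30 days
    "inference_output": 3650,  # keep results for roughly ten years
    "audit_log": None,         # None means never purge
}

def is_purgeable(data_class, created, today):
    """True if a record of this class has exceeded its retention period."""
    days = RULES.get(data_class)
    if days is None:
        return False  # no rule, or explicitly retained forever
    return today - created > timedelta(days=days)

today = date(2018, 5, 4)
purge_input = is_purgeable("inference_input", date(2018, 3, 1), today)  # 64 days old
```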

Performance

The VNAi will need to be able to quickly process large data sets at peak loads, even with highly complex algorithms. Dynamically assigning IT resources (compute, network, storage, etc.) within minutes, not hours or days, may be necessary.

Deployment Flexibility

Some organizations will want their VNAi in the cloud, others will want it on-premises. Perhaps some organizations want a hybrid approach, where learning and testing is on-premises, but production processing is done in the cloud.

High Availability (HA), Business Continuity (BC), and System Monitoring

Like any critical system, uptime is important. The ability for the VNAi to be deployed in an HA/BC configuration will be essential.

Multi-tenant Data Segmentation and Access Controls

A shared VNAi reduces the effort to build and maintain the system, but its use and access to the data it provides will require data access controls to ensure that data is accessed only by authorized parties and systems.
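At its simplest, this means every read is checked against per-tenant grants; a toy sketch with hypothetical tenants and systems:

```python
GRANTS = {
    # (tenant, connected system) -> data classes it may read
    ("hospital-a", "pacs-a"): {"imaging", "reports"},
    ("hospital-b", "emr-b"): {"reports"},
}

def can_access(tenant, system, data_class):
    """Deny by default; allow only explicitly granted (tenant, system) pairs."""
    return data_class in GRANTS.get((tenant, system), set())
```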

Cost Sharing

Though this is not a technical characteristic, the VNAi solution likely requires the ability to share the system build and operating costs among participating organizations. Methods to identify usage of specific functions and algorithms to allocate licensing revenues would be very helpful.

Effective Technical Support

A VNAi can be a complex ecosystem with variable uses, data inputs, and outputs. If the system is actively learning, how it behaves one day may differ from the next. Supporting such a system will, in many cases, require support staff with developer-level skills.

Conclusion

Without some form of VNAi (such as the one described here), we risk choosing between monolithic, single-vendor platforms and a myriad of different applications, each with its own vendor agreements, hosting and interface requirements, and management tools.

Special thanks to @kinsonho for his wisdom in reviewing this post prior to publication.

MIIT 2018 – May 4, 2018

Just under a month from today, on Friday, May 4, 2018 (Star Wars Day!), the annual MIIT (Medical Imaging Informatics and Teleradiology) conference will once again be held at the beautiful Liuna Station in Hamilton, Ontario, Canada.

This year’s theme is Connecting Imaging and Information in the Era of AI and the program features several distinguished speakers from Canada and the U.S.

Talks will cover EMR implementation, Radiology Outreach, the link between Quality and Informatics, Highly Automated Radiology (using AI), an update on IHE, and a comparison of PACS+VNA vs. Regional PACS. It will also have a panel on the impact of EMRs and AI on Radiology and a talk on AI by a speaker from IBM Watson Health.

Register Today!

MIIT Badge

Developing an Enterprise Imaging Strategy—What is the best approach?

In my last post, we explored the current state-of-the-art of the Enterprise Imaging (EI) industry. In this post, I will zoom in on storing and managing non-DICOM images and videos. This can be ambiguous and may confuse providers who are trying to procure an EI solution. It also results in different schools of thought among vendors.

Currently, EI content can be stored and managed in one of the following formats:

  • Original object (e.g. jpg) stored in a solution’s database and/or filesystem using the vendor’s API (Application Programming Interface)
  • Original object (e.g. jpg) stored using the IHE Cross-Enterprise Document Sharing (XDS) integration profile in a solution’s XDS Document Repository component
  • Original object (e.g. jpg) stored in a solution’s database and/or filesystem using HL7’s FHIR® Media Content specifications
  • DICOM object stored in a solution’s Image Manager/Archive component; for example, using the IHE Web Image Capture (WIC) integration profile
  • DICOM object stored in a solution’s Image Manager/Archive and XDS Document Repository components using the IHE Cross-Enterprise Document Sharing for Imaging (XDS-I) integration profile

The following diagram depicts the main steps that take place during information capture activity for each method.

storage methods
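To make the FHIR option above concrete, here is a minimal Media resource wrapping a JPEG. Element names follow the FHIR STU3 Media resource; the URL and identifiers are hypothetical, and a production resource would carry more metadata.

```python
import json

media = {
    "resourceType": "Media",
    "type": "photo",  # STU3 allows photo | video | audio
    "subject": {"reference": "Patient/12345"},
    "content": {  # FHIR Attachment: the original object is referenced, not converted
        "contentType": "image/jpeg",
        "url": "https://vna.example.org/media/abc123.jpg",
        "title": "Wound photo, left forearm",
    },
}

payload = json.dumps(media)
```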

All of the above methods have corresponding pros and cons, which leads to the current divergence of opinions regarding the best option to use. Having said that, it is clear that, irrespective of the chosen method, there is a need to properly collect and manage patient, administrative and clinical context (aka metadata) for the acquired EI content.

Metadata

Each of the above methods offers a different level of metadata rigidity and extensibility, which impacts interoperability:

  • DICOM, FHIR and XDS-I based methods offer a level of certainty for the vendors with respect to what information should be captured and how it should be encoded.
  • XDS takes an approach of developing specific content profiles that address specific types of content; for example, the IHE XDS-SD (Scanned Document) integration profile. At the moment, there is no content profile that is specific to the Enterprise Imaging domain. Additionally, XDS allows the original object to be wrapped in a CDA Document to capture additional metadata in case the specified XDS Document Entry attributes are not sufficient.

Is there one “right” answer?

There are two overarching clinical reasons to capture EI content:

  1. To enrich patients’ clinical record
  2. To provide reliable, authorized access to it across the enterprise (and beyond)

As the following diagram suggests, the way EI content is stored is less important than the flexibility of an EI solution’s “Capture” and “Discovery and Access” components, because the storage format is hidden behind those interfaces.

EI Access

It seems that, currently, there is no single answer for the best EI content format, given the informatics complexity of healthcare providers’ enterprises. In order to adapt and compete, vendors will be pressured to support multiple inbound and outbound methods (such as FHIR, DICOM, DICOMweb, XDS, proprietary APIs, etc.), and only time will tell which approach will become a de facto standard.

Working on an Enterprise Imaging project? Leave us a comment with your thoughts, or contact us.

Enterprise Imaging Industry State-of-the-Art

Based on discussions with colleagues and our clients, Enterprise Imaging is becoming an integral part of U.S. Hospital IT Consolidated Clinical Record strategies.

The HIMSS-SIIM Enterprise Imaging Workgroup‘s current working definition of Enterprise Imaging is as follows:

  • Diagnostic Imaging – Encompassing traditional diagnostic imaging disciplines such as Radiology and Cardiology
  • Procedural Imaging – Including images that are acquired for diagnosis or clinical documentation purposes (such as visible light photos, point-of-care ultrasound)
  • Evidence Imaging – Including images and/or videos that are acquired for clinical documentation purposes (for example, scope videos, computer aided detection)
  • Image-based Clinical Reports – Documentation that includes or entirely consists of images (for example, Pulmonary Functional Test (PFT) report, multi-media pathology report)

Despite the attention from vendors, industry focus, and provider demand, this market space is still in its early stages of development. There are two main reasons: 1) the scope of the problem domain is still being defined; and 2) the vendor community is still working out the best practices and optimal technical approaches.

Moreover, the number of departments that generate Enterprise Imaging content, each with its own departmental workflows, is quite large.

This results in significant confusion among providers, who are left to navigate a myriad of perspectives expressed by the imaging informatics industry. There are ongoing initiatives currently working to demystify the field of Enterprise Imaging. For example, the recent SIIM Webinar delivered by Dr. Towbin of Cincinnati Children’s provides a very thorough analysis of the problem domain.

In conversations with vendors and providers, we have compiled several observations that might benefit the imaging informatics community.

The Right Approach

In the SIIM 2015 Opening General Session presentation, Don Dennison presented the following slide, titled “Enterprise Image Management: Making the Right Choice”.

EI

With the various systems in place to manage patient record data, there is often debate as to which enterprise system is best suited to offer Enterprise Imaging services.

At the moment, there is no obvious answer to the question presented by the slide. Besides the technical capabilities of the systems, the provider’s internal IT capabilities, capacity and policies can significantly influence the decision. At some organizations, where the Imaging Informatics Team plays a prominent IT role, the choice could be the VNA, while at others, where the Enterprise IT team takes the lead, the EMR or ECM is often chosen.

The Right Functionality

During RSNA 2015, we conducted a study to identify the state-of-the-art of Enterprise Imaging technology, including methods of acquisition, management, and distribution of non-DICOM images. The following table summarizes our findings.

 

Image / Video Acquisition

Ability to capture from mobile devices: The majority of current vendors have opted for native applications to provide a better user experience and tighter security controls. Still, image capture is the prevailing capability, with video acquisition lagging behind. Some vendors offer integration with leading EMRs’ mobile applications.

Ability to capture from visible light cameras: The ability to manually upload both videos and images (i.e. file browse, drag & drop, etc.) is a commodity. Automatic ingestion, on the other hand, varies significantly from vendor to vendor. Most vendors offer proprietary integration frameworks, but their comprehensiveness and real-life integration track records differ considerably.

Ability to capture from different scopes: Most vendors leverage third-party hardware devices to integrate with digital or analog video sources in real time.

Acquisition Workflow

Order-based Workflow: DICOM Modality Worklist (DMWL) SCU support, as well as the ability to generate or receive order information, is available in most vendors’ applications. Context-based launch of the capture application is also well-understood and widely supported. Many vendors mimic an order-based workflow (i.e. create the Accession Number) for acquisitions that are not scheduled; the main challenge with this approach is determining the correct method to feed the created information back to the EMR (often called an “unsolicited result”, which may not be supported at the site).

Encounter-based Workflow: Some vendors, originating from the Diagnostic Imaging space, struggle with native encounter-based workflow support. On many occasions, departmental visit/encounter information, supplied in HL7 messages from the EMR, is sufficient to build specific acquisition worklists for different service lines.

Scenarios where information services are not available: Most vendors offer the ability to manually create patient and procedure information. The difference lies in whether all or just a subset of capture methods (e.g. mobile vs. desktop) support that functionality.

Patient identity management: Standards-based methods to discover or receive patient information are widely supported, while support for proprietary methods to connect to patient information sources varies from vendor to vendor.

Ability to Edit Images/Videos

Editing Tools: Most vendors rely on an installed Windows OS client application to edit (e.g. crop) acquired images or videos as part of the manual upload process (e.g. drag & drop). Selected vendors also allow editing of static images (but not video) during mobile capture.

Images: The ability to associate different types of metadata (including notes) is supported by the majority of vendors. Basic manipulation of acquired images, such as deletion, markup, and annotation (stored as overlay objects associated with the images), is also common. Only selected vendors can calibrate images on the fly using recognizable objects of known size embedded in the image.

Videos: A flexible and comprehensive ability to associate different types of metadata (including notes and keywords) is supported by the majority of vendors. Most vendors have very limited (if any) video editing and capture capabilities and rely on third-party providers.

Viewer

Current state: The solutions typically consist of the following viewers:

  • Mobile capture
  • Desktop image/video upload
  • Desktop image/video editor
  • Zero-footprint (ZFP) EMR viewer with very limited, if any, editing capabilities

Privacy and Security

Current state: Most vendors offer a range of methods to ensure PHI protection, such as:

  • Information deletion/encryption on the device
  • Strong authentication and authorization methods
  • Auditing

Reporting

Current state: The most prevalent approach is to rely on an external system, such as the EMR or a specialty-specific reporting application, to create and manage reports.

Record Management

Current state: Most vendors opt for managing images and videos in their native format, while converting the content on the fly for standards-based communication with external systems.

Conclusion

It seems that Enterprise Imaging is going to rapidly evolve and we are eager to see how our clients, and providers in general, will benefit from these changes.

Working on an Enterprise Imaging project? Leave us a comment with your thoughts, or contact us.

Article – First Look at Stage 3: CMS Sticks to Its Guns on APIs, Patient Engagement

Here is a good summary on what is new in Meaningful Use Stage 3 Rules.

This excerpt caught my eye:

As far as timing goes, CMS said it disagrees that the API functionality cannot be implemented successfully by 2018 “as the technology is already in widespread use in other industries and API functions already exist in the health IT industry.”

All of this should be a boon for the FHIR (Fast Healthcare Interoperability Resources) standard development community and the Argonaut Project, working on API-related standards, as well as for the broader community of mobile app and personal health record developers. With barriers to patient access to their data coming down, patients will finally be able to create their own portals, separate from any health system and share that data with whomever they want.

This is good news for everyone.

If we truly want to solve issues that require access to information where and how we need it, we must provide interoperability. This means not only that the data needs to be available in a format that is understood and supported by common applications, but also that the method of discovering and accessing that data needs to be understood and supported.

FHIR® (clinical data) is built on the right web technologies and design methods, as is DICOMweb™ (imaging data). With these APIs, we can discover and access the necessary patient information and make it available in any care setting we need.

And these APIs will create the foundation of data liquidity to spark an explosion of innovation of applications—including traditional departmental and enterprise ones, but also web and mobile ones.

Without clearly defined, supported and accessible APIs, we (healthcare) had no hope of achieving the kind of system-wide change required. We have no more excuses now.

What can Enterprise Imaging Learn from Radiology?

Radiology Information Interoperability for Productivity and Quality

In the early days of Radiology, data entry errors by Radiology Technologists (aka Techs) were common. Their attention was on the patient and the operation of the modality, not the clerical task of typing in data, after all. To address this, something called a DICOM Modality Worklist (aka DMWL) was developed and adopted.

Essentially, this took the textual patient and imaging procedure order information entered into the HIS or RIS (i.e. the order placer), and sent it to some system as an HL7 ORM message (an order). The structured patient/order information was then provided to modalities using the DICOM protocol (because this is the language they speak). DMWL could be provided by the RIS or PACS or some form of broker system that spoke both HL7 and DICOM.

This allowed trained clerical workers (or physicians), combined with software that validated the data entered (where it could), to pass the information to the modality workstation where it could be mapped into DICOM objects, without having to ask Techs to enter this info. The productivity and information quality gains were significant.
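The mapping step performed by such a broker can be sketched as a simple transform from parsed order fields to the DICOM attributes a worklist response carries. The input field names here are simplified stand-ins for real HL7 ORM parsing, and a real DMWL response nests some of these attributes in sequences; the tag comments are from DICOM PS3.6.

```python
def orm_to_mwl(orm):
    """Map parsed HL7 order fields to (flattened) DICOM worklist attributes."""
    return {
        "PatientName": orm["patient_name"],          # (0010,0010)
        "PatientID": orm["patient_id"],              # (0010,0020)
        "AccessionNumber": orm["accession_number"],  # (0008,0050)
        "RequestedProcedureDescription": orm["procedure_description"],
        "Modality": orm["modality"],                 # (0008,0060)
    }

order = {
    "patient_name": "DOE^JANE",
    "patient_id": "12345",
    "accession_number": "A20180504-17",
    "procedure_description": "CT CHEST W/O CONTRAST",
    "modality": "CT",
}
mwl = orm_to_mwl(order)
```

The modality then copies these values into the image headers it produces, which is exactly how the duplicate data entry (and its errors) is avoided.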

It is worth noting that the order provides other value than just eliminating duplicate data entry. It represents a work instruction, and it is used in scheduling and billing. Where image acquisition is not scheduled or billed for, orders are typically not created.

Enter Enterprise Imaging

As we enter the era of Enterprise Imaging, there are lots of lessons that we can learn from the solved problems in areas like Radiology.

For example, when capturing a photo in a Wound Care clinic, it has to be associated with the correct patient (obviously), but there is likely other pertinent info that should be captured, such as the anatomical region imaged and any observations by the physician.

In Enterprise Imaging, orders are often not placed. In many areas, imaging is often not the primary task, but one that is used to support clinical work.

If orders are not placed, how can we at least provide the benefit of passing textual patient data to the image capture device or application to reliably associate patient (and perhaps encounter or procedure) data?

Even if orders are placed, most of the devices and applications used in Enterprise Imaging cannot accept an HL7 message and do not speak DICOM. Some form of broker would likely be required yet again.

Enterprise Information Interoperability for Enterprise Image Capture

One hope that we have is the adoption of the new HL7 FHIR standard. Because it is based on REST API design methods, it is much easier to integrate with different devices (especially mobile devices) than HL7 v2.x messaging and DICOM interfaces are. Another method is to generate a URL from the EMR, with all the info provided in parameters, that launches the image capture application/device in context. A third method is to use HL7 messaging to populate the VNA database with patient, encounter, and order/procedure information (essentially a copy of what the EMR has), and then use a tool or API (perhaps the DICOMweb™ Query API, QIDO-RS) to query this system for the necessary information.
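As a rough sketch of the QIDO-RS approach, assuming a hypothetical VNA endpoint: a study search is just an HTTP query built from standard matching keys (PatientID, AccessionNumber), which is precisely why it is so much easier for lightweight capture applications to consume than DICOM C-FIND.

```python
from urllib.parse import urlencode

def qido_study_search_url(base_url, patient_id, accession=None):
    """Build a DICOMweb QIDO-RS study search URL from matching keys."""
    params = {"PatientID": patient_id}
    if accession:
        params["AccessionNumber"] = accession
    return f"{base_url}/studies?{urlencode(params)}"

url = qido_study_search_url("https://vna.example.org/dicomweb", "12345", "A789")
```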

Don’t Forget the Metadata and Supporting Information

This still leaves the issue of how to reliably and consistently capture the information that goes with the image(s)—notes, anatomy info, findings, technical exam info, observations, etc. In DICOM, when this type of information is needed, a SOP Class is defined. The header of the SOP Class object specifies where all this metadata should go. This is one of the primary principles of interoperability: a defined format and data scheme, with a clear and shared meaning.

Assuming that not all Enterprise Images will be generated in, or converted into, DICOM format, the definition of the metadata schema may be left to be defined by the implementing vendor.

In addition to the clinical and technical data, sooner or later, someone is going to be looking for operational data for use in analytics and process improvement, so it will need to be captured (on some level of detail), as well.

Consistent Terms

And, even when we have a common schema, if the terms used within the scheme are not consistent, we end up spending an enormous amount of time doing mappings or integrating terminology services (and even then, never fully addressing all cases).
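As a trivial illustration of that mapping burden: two systems that spell the same anatomy differently need a maintained map to a shared code. The codes below are made up, not real terminology values.

```python
LOCAL_TO_STANDARD = {
    # (source system, local term) -> shared code (illustrative placeholders)
    ("system_a", "L FOREARM"): "ANAT-0042",
    ("system_b", "forearm, left"): "ANAT-0042",
}

def normalize(system, term):
    """Return the shared code for a local term, or None if unmapped."""
    return LOCAL_TO_STANDARD.get((system, term))
```

Every new source system, and every local spelling variant, adds rows to this table, which is why unmapped terms inevitably slip through.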

To Acquire or Not to Acquire

If we think about Enterprise Imaging that is not “ordered”, what triggers the acquisition of an Enterprise Image? Is it up to the clinic or individual care provider to make the judgement? Should a published set of best practices define this? Would the EMR have logic, based on the patient’s condition or care pathway, to prompt or force the user to acquire and store the image(s)?

Enterprise Imaging Acquisition Protocols Needed?

If we consider the different ways that images can be captured (still, video), the subject in frame (cropping, zooming), lighting, etc., and the ability to capture a single image or a set of images, do we also need some form of a book of protocols to guide the person acquiring the images? Should certain images contain a ruler (or object of known size) to allow the image to be calibrated for measurements?
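The calibration idea is simple arithmetic: if a marker of known physical size spans a known number of pixels, that ratio converts any pixel measurement in the same plane to real-world units. A small sketch (the 25 mm marker is just an example):

```python
def mm_per_pixel(known_size_mm, measured_pixels):
    """Scale derived from an object of known size visible in the image."""
    return known_size_mm / measured_pixels

def measure_mm(pixels, scale):
    """Convert a pixel distance to millimetres using the derived scale."""
    return pixels * scale

scale = mm_per_pixel(25.0, 100.0)        # a 25 mm marker spans 100 px
wound_length = measure_mm(180.0, scale)  # a 180 px feature is 45 mm
```

This only holds for features in the same plane and distance from the camera as the marker, which is itself an argument for documented acquisition protocols.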

The Cost of Doing Nothing

If we consider the impact of not having methods to avoid data entry errors, or not having a common schema, not having common terms, and not even having a common communication protocol or best practices for acquisition workflows, what hope does Enterprise Imaging have?

Even with options for all these things, imaging and information devices are still struggling to be interoperable with departmental and enterprise applications, as described in this Healthcare IT News article, “Nurses blame interoperability woes for medical errors”.

The Future is Now(ish)

This is why the mission and output of the joint HIMSS-SIIM Enterprise Imaging workgroup (charter in PDF here) is so important. The space needs to be better defined, with acquisition workflow practices, data formats, schemas, terms, and protocols outlined.

If we simply try to copy what is done in Radiology into Enterprise Imaging, it will create too much of a burden on the people asked to capture these images, and, frankly, they won’t do it. Unlike in Radiology, where the imaging work is reimbursed, they often have little incentive to spend the extra time to capture, index, and upload images to the EMR when they are focused on the patient.

But, if we ignore the benefits that come with the controls and methods we have developed and matured over the years in Radiology, we risk having to re-learn all the same lessons again. And that would be very sad (and expensive, and wasteful, and unsafe…).

Add to all of this the increasing need to share this data across different enterprises for continuity of care, and interoperable data portability/liquidity becomes critical.

The fundamental healthcare informatics knowledge and business analysis skills developed by imaging informatics professionals, through on-the-job experience and membership in educational/research societies like SIIM, will be important in determining the right mix of proven concepts that apply, and new methods and innovations. Without a supply of talent to foster the change, nothing will change.

In Conclusion…

When dealing with such an undefined space, people often relish the idea of “doing it right this time”. I would urge anyone involved in this space to reflect on what has been accomplished in mature fields like Radiology, as there are a lot of “right things” that we may be taking for granted. With a little modernization, we can still get continued value out of what we have already achieved.

Article – SIIM: Experiment in web technologies points to future of health IT

Here is an article summarizing the way Cleveland Clinic is using REST-based APIs to solve real problems in their institution. Taken from a talk given by Mat Coolidge at the SIIM 2014 Annual Meeting.

The Value of Hackathons in Healthcare

Having participated in the inaugural SIIM 2014 Hackathon, I can appreciate the diverse expectations that participants have. Some think of these events as a way to learn and experiment, others a competition. Some prefer to work as a team, others alone. Some are interested in integrating existing systems and data in new ways, while others want to invent something completely new.

In any case, I found this article insightful. It explores why the concept of “hacking” is so prevalent in healthcare, and also touches on why new “apps” often struggle to make it past the hackathon stage. It even posits that a hackathon can replace the traditional RFP procurement process for identifying and selecting innovative solutions.