Pathology services, or tests such as biopsies and bloodwork used to diagnose and treat illnesses, are a foundation of modern medical care. However, there has been a sharp decrease in the number of specialists available to analyze these tissue and fluid samples that are key to patient diagnosis. Pathologists’ high-touch workflow of sample analysis, sample transport, consultation with other medical personnel, and maintenance of data integrity is overdue for a more efficient and scalable process, especially as caseloads continue to rise.
Utilizing digital pathology and applying telepathology concepts to this workflow can improve productivity, minimize biohazard risk, and better manage patient caseloads. However, this digital workflow presents new challenges for hospital IT teams, particularly in AI model management and data transfer across distributed networks. I’ll explore this solution path further by focusing on a telepathology use case that features both of these challenges.
Today’s Pathology Workflow
To analyze a patient’s tissue or fluid sample, the pathologist places the sample on a glass slide and stains it to highlight cellular structures; the prepared slide is known as a pathology sample. The pathologist then examines the sample under a microscope to determine whether any signs of disease are present at the cellular level. To request a second opinion on their interpretation, the pathologist physically ships the slide to another pathologist, who examines it again under a microscope.
There are several challenges in this approach:
- Variable analysis throughput, since each pathologist may require a different amount of time to conduct their analysis.
- Process variability due to unknown factors in the time needed to prepare, pack, and transport the sample.
- Increased risk exposure, because shipping raises the possibility of a biohazard if the slide is broken in transit.
AI Model Management and Telepathology
Artificial intelligence (AI) models are used to recognize signs of disease at the cellular level from a digitized image of the pathology sample. Whole Slide Imaging (WSI) instruments automate the microscopy step and digitize the microscopic image for interpretation by an AI inferencing algorithm. Based on the inferencing results, the most urgent samples are prioritized at the front of the pathologist’s review queue, helping pathologists efficiently assess their patient caseloads.
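The triage step described above can be sketched as a priority queue keyed on an urgency score derived from the inference output. This is a minimal illustration, not code from the reference implementation; the `TriageQueue` class and the malignancy score are hypothetical placeholders:

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedSample:
    priority: float  # lower value = reviewed sooner
    seq: int         # tie-breaker preserving arrival order
    sample_id: str = field(compare=False)

class TriageQueue:
    """Order samples for pathologist review by inferred urgency."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def add(self, sample_id: str, malignancy_score: float) -> None:
        # Higher malignancy score -> lower priority value -> earlier review.
        item = QueuedSample(-malignancy_score, next(self._counter), sample_id)
        heapq.heappush(self._heap, item)

    def next_for_review(self) -> str:
        return heapq.heappop(self._heap).sample_id

queue = TriageQueue()
queue.add("slide-001", malignancy_score=0.12)
queue.add("slide-002", malignancy_score=0.91)
queue.add("slide-003", malignancy_score=0.47)
print(queue.next_for_review())  # → slide-002, the most urgent sample
```

The negated score makes Python’s min-heap behave as a max-heap on urgency, and the arrival counter keeps ordering stable when two samples score the same.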
However, this new computer-aided workflow also brings new challenges. The hospital’s IT system must manage and deploy the various AI models used for analysis and enable data transfer capabilities across complex networks inside and outside of the organization’s infrastructure, as is the case when conducting peer-to-peer reviews.
Telepathology is the use of technology to share digitized images and data between locations. Using the telepathology use case, I will demonstrate how a computing architecture can simplify network routing automation and optimize AI model deployment within a hospital system. With this end goal in mind, Intel software engineers developed the Networking Optimization and AI Inferencing Management for Telepathology reference implementation. By converging the solution on an edge hardware and software architecture with a web-based user interface, various edge nodes can be managed from a single edge controller. This reduces the amount of hands-on management of data routing and traffic shaping needed in the IT infrastructure.
Before examining this use case further, I’ll discuss the main software components developed by Intel that support its key capabilities.
OpenNESS
OpenNESS is an open-source edge computing software toolkit that enables highly optimized, high-performing edge platforms to onboard and manage applications and network functions. Its multi-access networking microservice helps reduce the network complexity of sharing information across different network configurations. This microservice addresses the inefficiency and risk of sending physical samples between pathologists: if the pathologist’s lab utilizes digital pathology, the images and data can be sent over networks to collaborating pathologists instead of being physically shipped. The Intel® Distribution of OpenNESS Toolkit is a free version of this toolkit that includes additional features and optimizations for Intel-based platforms.
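When images travel over a network instead of shipping physically, data integrity still has to be verifiable at the receiving end. A common, generic approach (a sketch of standard practice, not an OpenNESS feature) is to transmit a cryptographic digest alongside the image so the receiving hospital can confirm the bytes arrived unmodified:

```python
import hashlib

def slide_digest(image_bytes: bytes) -> str:
    """Return a SHA-256 digest of a digitized slide image."""
    return hashlib.sha256(image_bytes).hexdigest()

def verify_transfer(received_bytes: bytes, expected_digest: str) -> bool:
    """Receiving side: confirm the image arrived unmodified."""
    return slide_digest(received_bytes) == expected_digest

# Sending side computes the digest and transmits it with the image.
image = b"\x89...fake whole-slide image bytes..."
digest = slide_digest(image)

# Receiving side re-computes the digest and compares.
assert verify_transfer(image, digest)
assert not verify_transfer(image + b"corruption", digest)
```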
OpenVINO Model Server (OVMS)
OVMS is a model server inference platform developed by Intel that uses inference engine libraries from the Intel® Distribution of OpenVINO™ toolkit, making it easy to deploy new algorithms and AI experiments. Similar in function to TensorFlow Serving, OVMS supports easy integration between training and deployment systems, scalability, and a standard client interface for any model trained in a framework supported by the toolkit. Other features include:
- Deployment of new models without changing client code.
- Serving of models from popular formats such as Caffe, TensorFlow, MXNet, and ONNX.
- Support for Intel CPUs, VPUs, and AI accelerators.
- Enablement on bare metal hosts or in Docker containers or Kubernetes for deployment and scale within the network.
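The "deploy new models without changing client code" capability relies on a versioned model repository layout, where each model directory contains numbered version subdirectories (e.g. `tissue_classifier/1/`, `tissue_classifier/2/`) and the server picks up the latest version automatically. The sketch below illustrates that version-resolution idea in plain Python; it is an independent illustration of the convention, not OVMS code, and the `tissue_classifier` name is made up:

```python
from pathlib import Path

def latest_model_version(model_dir: Path) -> Path:
    """Pick the highest-numbered version subdirectory, mirroring the
    'serve the latest version' convention of model servers like OVMS."""
    versions = [p for p in model_dir.iterdir()
                if p.is_dir() and p.name.isdigit()]
    if not versions:
        raise FileNotFoundError(f"no version directories under {model_dir}")
    return max(versions, key=lambda p: int(p.name))

# Example layout on the inferencing server:
#   /models/tissue_classifier/1/   (older model files)
#   /models/tissue_classifier/2/   (newly deployed model files)
# latest_model_version(Path("/models/tissue_classifier")) resolves to .../2
```

Dropping a new numbered directory into the repository is all it takes to roll out an updated model; clients keep addressing the model by name.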
Now let’s get into the use case:
- Hospital A obtains a patient’s tissue or fluid sample. The sample is placed on a glass slide and then digitized using a Whole Slide Imaging (WSI) instrument. The image and associated data are sent from the WSI instrument to an on-premises server for inferencing using one of Hospital A’s AI models. The OpenVINO Model Server (OVMS) manages the AI models on the inferencing server.
- The results are sent to the Hospital Information System (HIS) or electronic health record (EHR) software to track patient information. In this case, the pathologist at Hospital A can review the inferencing results and request a second opinion from one or more pathologists at different hospital sites.
- Hospital A transfers the data to Hospital B, which has a different network architecture. OpenNESS will automatically manage the data transfer via a multi-access network microservice. It simplifies the data transfer for different network configurations since Hospital B has another instance of OpenNESS and OVMS present in their internal network.
- The pathologist at Hospital B can review the results and provide their input on the image and inference analysis results. That information is then sent back to the HIS or EHR at Hospital A for the primary pathologist’s final review.
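The second-opinion exchange in the steps above can be sketched as a small, serializable consult message passed between the two sites. The field names here are illustrative assumptions, not a schema from the reference implementation:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ConsultRequest:
    """Second-opinion request sent from Hospital A to Hospital B."""
    sample_id: str
    image_uri: str           # location of the digitized WSI image
    model_name: str          # AI model used for the initial inference
    inference_summary: dict  # e.g. per-class scores from the inference run
    requesting_site: str

def encode(req: ConsultRequest) -> str:
    """Serialize the request for transfer over the network."""
    return json.dumps(asdict(req))

def decode(payload: str) -> ConsultRequest:
    """Reconstruct the request on the receiving side."""
    return ConsultRequest(**json.loads(payload))

req = ConsultRequest(
    sample_id="slide-002",
    image_uri="https://hospital-a.example/wsi/slide-002.tiff",
    model_name="tissue_classifier",
    inference_summary={"malignant": 0.91, "benign": 0.09},
    requesting_site="Hospital A",
)
assert decode(encode(req)) == req  # round-trips losslessly
```

Hospital B’s response would travel back the same way, carrying the reviewing pathologist’s input for the primary pathologist’s final review in the HIS or EHR.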
Figure A. Architecture Diagram
The Client here is a machine that provides the medical images on which inference is to be performed. It sends RPC calls to OVMS on the edge node, which pulls the needed inferencing model or models supported by OpenVINO™. Inference analysis is performed on the underlying hardware, and the result is sent to the HIS or EHR, hosted either on an on-prem server or serviced by a CSP. As in the use case above, if Hospital A transfers data to Hospital B, OpenNESS automatically manages the transfer via the multi-access network microservice, even though the two hospitals have different network configurations, because Hospital B runs its own instances of OpenNESS and OVMS.
I invite you to take a deeper dive into the software components, recommended hardware, and architecture for this use case by accessing the documentation for the Networking Optimization and AI Inferencing Management for Telepathology Reference Implementation. You will find an overview, the system requirements, and a get started guide to download a recommended configuration of the reference implementation. The Intel Edge Software Hub provides all the tools and information you need to investigate, test, and create an application. OpenVINO Model Server and OpenNESS can also provide support for additional IT challenges your organization may be facing beyond the telepathology use case that I explored.