Privileged. That’s the word I would use to describe my involvement with the RSNA Imaging AI in Practice 2021 Demo. I wrote about my experience with last year’s AI demo, and it got even better this year when I was asked to be Teri Sippel-Schmidt’s co-pilot as a Technical Project Manager. I spoke highly of Teri’s leadership in last year’s demo, which is why I feel privileged to play a bigger part in bringing this demo to life.

On the heels of last year’s success, the demo retained the vast majority of last year’s participants and added more. In all, 22 vendors participated, with 32 products. Instead of last year’s three demo teams, we ended up with five this year.

The premise is the same as last year – showcasing how AI-touting systems can integrate (with interoperability under the hood) to deliver on the promise of improving radiology workflows end-to-end: starting from scheduling of resources, through protocoling, image acquisition, image analysis, and reporting, and, last but not least, AI model training.

As was the case last year, many thanks go out to everyone who made the demo possible. I am sure I might miss some people, but I’ll try anyway:

  • RSNA Board of Directors, Radiology Informatics Committee (RIC) and the RIC IAIP Task Force.
  • Our venerable clinical champions
  • RSNA staff
  • The vendors who make all this possible
  • Most importantly (on a personal level), Teri for coaching me with an abundance of patience and wisdom.

If you happen to see this in time, please be sure to come see the demo. It will be running 9am-5pm Nov 28th through Dec 1st. South Hall booth 4925.

I will post recaps of this demo and overall impressions of RSNA 2021 afterwards. Stay tuned!

 

Over the past few months, I have been jotting down random notes as I attend sessions and converse with friends/colleagues in the radiology domain. I apologize if this post is slightly incoherent; it is more of a note to my future self, but I thought I’d share, as others might find some of these bits and pieces useful. Additionally, I won’t spend a lot of time explaining each of the concepts – there are plenty of resources online that do a great job of explaining them in detail – this post is more about putting all of it together in a concise format.

1- Things To Keep In Mind

Artificial Intelligence vs. Machine Learning vs. Deep Learning

Most people assume the terms are interchangeable, which is incorrect. Think of Machine Learning (ML) as a narrower niche within Artificial Intelligence (AI), and, in turn, Deep Learning (DL) as an even narrower niche within Machine Learning.

Curation And Classification

Curation and classification are crucial for successful training and deployment. You should not rely on the AI/ML/DL algorithm itself to know whether the images being fed into it are the right type. For example, think of an algorithm that expects chest X-rays taken in the PA (Posterior-Anterior) orientation. How will it handle images in other orientations, e.g. AP (the reverse) or even lateral? Even crazier, what happens if you feed a random picture (say, a picture of a pet) into such an algorithm? Without proper checks, you’ll get misleading AI results.
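To make this concrete, here is a minimal sketch of the kind of pre-inference sanity check I mean, assuming the input is DICOM and using pydicom to peek at the headers; the expected tag values are purely illustrative:

```python
import pydicom

# Header values the hypothetical algorithm expects; purely illustrative.
EXPECTED = {
    "Modality": {"CR", "DX"},      # conventional/digital radiography
    "BodyPartExamined": {"CHEST"},
    "ViewPosition": {"PA"},        # reject AP, LL (lateral), etc.
}

def is_acceptable(path: str) -> bool:
    """Return True only if the DICOM headers match what the model expects."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    for tag, allowed in EXPECTED.items():
        value = str(getattr(ds, tag, "")).upper()
        if value not in allowed:
            print(f"Rejecting {path}: {tag}={value!r}, expected one of {allowed}")
            return False
    return True
```

Anything that fails the check gets routed to a human instead of silently producing a bogus AI result.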

Model Brittleness

This is the idea that a model may demonstrate amazing results (the often-hyped area under the curve, or AUC) on familiar data, but fail when presented with unfamiliar data. There are many reasons why this may happen: differences in population diversity, in the scanners used to capture the images, etc.

Model Bias

E.g., has this model been trained on data that includes visible minorities? A model is only as good as the data it has been trained on. If a model has been trained only on publicly available datasets, then you must keep in mind that the vast majority of those datasets in the USA come from just three states.

Model Decay

Without getting into a lot of detail: a model that is not continuously learning will eventually “decay” and produce less accurate results, much like a human who skips continuing education. There are multiple reasons why this may happen. If you are interested, I encourage you to read about concept drift vs. data drift.
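As a toy illustration of data drift specifically: suppose you log one scalar input feature per exam (say, mean pixel intensity). A two-sample test against the training-era distribution can flag when incoming data no longer looks like what the model was trained on. The numbers and threshold below are made up:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical scalar feature (e.g. mean pixel intensity) logged per exam.
training_era = rng.normal(loc=128, scale=10, size=1000)  # what the model saw
last_month = rng.normal(loc=140, scale=10, size=200)     # a new scanner arrived?

# Kolmogorov-Smirnov test: a small p-value means the distributions differ,
# i.e. the inputs have drifted away from the training data.
stat, p_value = ks_2samp(training_era, last_month)
if p_value < 0.01:  # threshold is arbitrary; tune to your tolerance
    print(f"Possible data drift detected (KS={stat:.3f}, p={p_value:.2e})")
```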

Automation Bias

Humans tend to assume machines are infallible, which could not be further from the truth, especially with AI. You do not want your users to get too comfortable letting the AI fly on autopilot or, even worse, assuming the AI is more likely to be correct than they are. Just have a look at the drivers who experienced serious crashes because they over-relied on their vehicles’ “autonomous” driving systems.

Ongoing Monitoring

This is often an afterthought, but it is essential. Period.
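At its simplest, monitoring can be a rolling check of a performance metric once ground truth trickles back in (e.g. from the final signed report). A toy sketch, with made-up numbers and an arbitrary alert threshold:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical log of (model score, eventual ground truth) per exam,
# in chronological order; labels arrive later, e.g. from the final report.
scores = [0.9, 0.2, 0.8, 0.7, 0.1, 0.95, 0.4, 0.6, 0.3, 0.85]
labels = [1,   0,   1,   1,   0,   1,    0,   0,   1,   0]

WINDOW = 5        # exams per monitoring window (tiny, for illustration)
ALERT_AUC = 0.80  # arbitrary alert threshold

for start in range(0, len(scores) - WINDOW + 1, WINDOW):
    s = scores[start:start + WINDOW]
    y = labels[start:start + WINDOW]
    if len(set(y)) < 2:
        continue  # AUC is undefined when a window has only one class
    auc = roc_auc_score(y, s)
    print(f"exams {start}-{start + WINDOW - 1}: AUC={auc:.2f}")
    if auc < ALERT_AUC:
        print("  -> below threshold, flag for human review")
```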

 

2- Integration Of AI Into the Workflow

Where In The Workflow?

Depending on your specific AI algorithm, it might be most helpful triaging worklists and re-prioritizing exams, or it may assist with image interpretation (making measurements, detecting bone age, etc.). Last but not least, it can review findings – like a second read, but a lot less expensive!
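Worklist triage, in particular, can be as conceptually simple as re-sorting by the AI’s urgency score. A toy sketch with made-up entries:

```python
# Hypothetical worklist entries: (accession number, arrival order, AI urgency 0-1).
worklist = [
    ("ACC001", 1, 0.12),
    ("ACC002", 2, 0.97),  # e.g. suspected hemorrhage flagged by the AI
    ("ACC003", 3, 0.55),
]

# Re-prioritize: highest AI urgency first, arrival order breaks ties.
worklist.sort(key=lambda exam: (-exam[2], exam[1]))

for accession, arrival, score in worklist:
    print(f"{accession} (arrived #{arrival}, urgency {score:.2f})")
```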

Seamless Integration

Need I say more?!?

Explainability 

The AI algorithm must be able to “explain” how it arrived at whatever conclusions it made, vs. being a black box. One example is the use of heatmaps overlaid on images. Have a look at this cool visualization to understand what I mean: https://www.cs.ryerson.ca/~aharley/vis/conv/
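One simple, model-agnostic way to produce such a heatmap is occlusion sensitivity: hide one patch of the image at a time and see how much the model’s score drops. A bare-bones sketch, where `model_score` stands in for whatever scoring function your classifier exposes:

```python
import numpy as np

def occlusion_heatmap(image: np.ndarray, model_score, patch: int = 16) -> np.ndarray:
    """Slide a blanked-out patch over a 2D grayscale image; wherever the
    model's score drops the most is where the model is 'looking'.
    model_score(img) -> float is an assumed, user-supplied function."""
    baseline = model_score(image)
    h, w = image.shape
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()  # blot out one region
            heatmap[i // patch, j // patch] = baseline - model_score(occluded)
    return heatmap  # high values = regions the prediction depends on
```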

Feedback Loop

A user must be able to provide “feedback” by Accepting, Rejecting, Editing and/or Adding to the algorithm’s findings. Ideally, the feedback loop assists with model re-learning, too.
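Capturing that feedback can be as simple as logging one structured event per finding. A minimal sketch, with all field names hypothetical:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Action(Enum):
    ACCEPT = "accept"
    REJECT = "reject"
    EDIT = "edit"
    ADD = "add"

@dataclass
class Feedback:
    """One radiologist response to one AI finding; fields are hypothetical."""
    exam_id: str
    finding_id: str
    action: Action
    corrected_value: Optional[str] = None  # populated for EDIT / ADD

# Rejected and edited findings are exactly the hard cases worth routing
# back into the training pipeline for model re-learning.
log = [
    Feedback("ACC002", "nodule-1", Action.ACCEPT),
    Feedback("ACC002", "nodule-2", Action.EDIT, corrected_value="8 mm, not 12 mm"),
]
retrain_queue = [f for f in log if f.action in (Action.REJECT, Action.EDIT)]
print(f"{len(retrain_queue)} finding(s) queued for re-learning")
```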

Interoperability Standards

Another afterthought… AI algorithms must be good Samaritans, able to integrate with other healthcare systems using international standards. HL7 v2/FHIR and DICOM are the bare minimum. Ideally, they should also support newer standards like FHIRcast, IHE AIW and IHE AIR.
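For a flavor of what the FHIR side of this might look like, here is a rough sketch of posting an AI result as a standard FHIR R4 Observation; the server URL, references and codes are all placeholders:

```python
import requests

# Placeholder endpoint; a real deployment would use your FHIR server's base URL.
FHIR_BASE = "https://fhir.example.org/r4"

# A minimal FHIR R4 Observation carrying a hypothetical AI finding.
observation = {
    "resourceType": "Observation",
    "status": "preliminary",  # AI output pending radiologist review
    "code": {"text": "AI: intracranial hemorrhage probability"},
    "subject": {"reference": "Patient/123"},         # placeholder patient
    "valueQuantity": {"value": 0.93, "unit": "probability"},
    "device": {"reference": "Device/ai-model-v2"},   # which model produced it
}

resp = requests.post(
    f"{FHIR_BASE}/Observation",
    json=observation,
    headers={"Content-Type": "application/fhir+json"},
)
resp.raise_for_status()
print("Created:", resp.json().get("id"))
```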

Smartphone Paradigm

Imagine if AI algorithms worked like downloading apps onto your smartphone. Doesn’t that make you feel all warm and fuzzy? 🙂

 

3- Taking Things To The Next Level

Once everything else I mentioned above is checked off…

Federated Learning

Federated learning is the ability to perform model training across multiple physical “sites” (e.g. hospitals) without the need to exchange the actual data (read: Protected Health Information), while also taking into account the differences in data between each of these sites.
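A toy federated-averaging (FedAvg) loop makes the idea concrete: each “hospital” takes a gradient step on its own private data, and only the model weights ever leave the site. A sketch in plain numpy, with synthetic data and deliberate site-to-site differences:

```python
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([1.0, -2.0, 0.5])

# Three hypothetical hospitals, each with private data drawn from a
# deliberately different distribution (a crude stand-in for site effects).
sites = []
for mu in (0.0, 0.5, 1.0):
    X = rng.normal(mu, 1.0, size=(50, 3))
    y = X @ true_w + rng.normal(0, 0.1, size=50)
    sites.append((X, y))

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on one site's private data.
    Only the updated weights leave the site, never X or y (the PHI)."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

weights = np.zeros(3)
for _ in range(100):
    # Each site trains locally; the "server" averages the results (FedAvg).
    weights = np.mean([local_update(weights, X, y) for X, y in sites], axis=0)

print("learned:", np.round(weights, 2), "true:", true_w)
```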

Patient History & Priors

For example, an AI algorithm that detects brain hemorrhages is really helpful, right? But what if the algorithm got even smarter and could differentiate between an old hemorrhage, which doesn’t require action, and a new hemorrhage that requires attention ASAP?

Demystify AI By Trying Your Hand At It

Yup. Just that! A lot of people seem to think AI is some kind of voodoo. Try it for yourself and it will no longer seem so. I mean, you are not going to become an expert overnight, but at least you’ll have an idea of the mechanics of what is under the hood.
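If you want a zero-setup starting point, something like the following – scikit-learn’s built-in toy digits dataset – is about the smallest “train a model yourself” experiment there is:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Tiny built-in dataset: 8x8 grayscale images of handwritten digits.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)  # plain ML, no deep learning needed
model.fit(X_train, y_train)
print(f"accuracy: {model.score(X_test, y_test):.2%}")
```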

 

4- Analytics – A Real Afterthought

Virtually no one seems to be thinking about this one… You need analytics and dashboarding not just to monitor AI algorithms, but also to track things like whether a given algorithm is worth the money paid for it, the efficiencies it is creating in your workflows, and much more. Ideally, all of this works with the use of IHE SOLE.
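The raw material for such dashboards is just timestamped workflow events – which is essentially what IHE SOLE standardizes the emission of. A trivial sketch with made-up events, reduced to AI latency and report turnaround time:

```python
from datetime import datetime

# Hypothetical workflow event log: (exam, event, timestamp).
# IHE SOLE is essentially a standard way to emit/collect events like these.
events = [
    ("ACC001", "exam_completed",   datetime(2021, 11, 29, 9, 0)),
    ("ACC001", "ai_result_ready",  datetime(2021, 11, 29, 9, 4)),
    ("ACC001", "report_finalized", datetime(2021, 11, 29, 9, 40)),
    ("ACC002", "exam_completed",   datetime(2021, 11, 29, 9, 15)),
    ("ACC002", "ai_result_ready",  datetime(2021, 11, 29, 9, 16)),
    ("ACC002", "report_finalized", datetime(2021, 11, 29, 10, 5)),
]

times = {(exam, event): ts for exam, event, ts in events}
exams = {exam for exam, _, _ in events}

for exam in sorted(exams):
    ai_latency = times[(exam, "ai_result_ready")] - times[(exam, "exam_completed")]
    turnaround = times[(exam, "report_finalized")] - times[(exam, "exam_completed")]
    print(f"{exam}: AI latency {ai_latency}, report turnaround {turnaround}")
```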

General Impressions

I had the pleasure of attending RSNA’s 2020 annual meeting – the first one to go virtual! I find virtual conferences to be a mixed bag. On one hand, the recorded sessions are a godsend allowing me to catch up later on things I may have missed; but on the flip side, I dearly miss the interactions of a face-to-face conference and the networking with friends and colleagues from far and wide.

RSNA 2020’s program was excellent. AI being the elephant in the room, there were plenty of AI-related sessions that explored its different facets, like:

  • Ethics discussions
  • Real-world AI implementation advice
  • Hands-on practical AI sessions
  • Finding or creating a curated dataset for AI training, testing and validation
  • Imaging repositories like the Cancer Imaging Archive, which offer a lot more than just images

AI aside, I was able to attend a few sessions on other topics of interest, such as:

  • Structured radiology reporting
  • The current state of technology for PACS, Universal Viewers, …etc.
  • International interoperability standards and coding systems like FHIR, DICOM, LOINC, SNOMED, ICD-10 and CPT
  • Cybersecurity in medical imaging IT
  • Image sharing
  • Building a social-media and web presence/brand
  • Peer review and peer learning

Last but not least, I had a chance to attend a handful of vendor-sponsored sessions. The following stood out:

  • Nuance-sponsored session by Dr. Woojin Kim where he covered issues with AI like adversarial attacks, model decay and so on
  • Hyperfine’s demo of their portable MRI scanner
  • Nanox’ demo of their revolutionary X-ray scanners

Imaging AI in Practice Demo

Shameless plug alert! This was one of the highlights for me, because of my participation in the demo. I will be the first to admit that I played a very small role, but nonetheless I was an enthusiastic participant because I found the whole thing to be very stimulating. A lot of people don’t know it, but interoperability and tight integration of systems are very exciting.

The AI Demonstration was meant to showcase how AI can augment radiology to empower clinicians by integrating seamlessly into the workflow end-to-end (can’t emphasize the last point enough). Examples of where AI can lend a hand:

  • Protocoling
  • Worklist prioritization
  • Visualization, reconstruction and/or quality improvement of images
  • Classification, segmentation, feature detection and/or measurement extraction
  • Second reads
  • Report analysis for things like follow-up recommendation management (e.g. incidental findings)

Putting the demo together took 14 vendors, 26 products, nearly 9 months of hard work and many hours of conference calls. However, none of it could have been done without Teri Sippel-Schmidt’s leadership, plus the extraordinary support from RSNA’s leadership and RSNA’s informatics committee.

On the face of it, the demo aims to show viewers that different systems can plug and play, and does so in a relatable manner, showing how AI can help save time, make radiology more efficient, improve the practice and contribute to quality initiatives. However, under the hood, the demo is a strong push for standards like FHIR & DICOMweb, in addition to a number of IHE profiles like AIW, AIR, SOLE and Results Distribution, as well as coding systems like RadElement, RadLex, etc. In fact, the demo was modelled very much like an IHE Connectathon, but instead of focusing on individual transactions, it focuses on the big picture of a whole end-to-end radiology workflow – from when a patient is admitted into the ER, through getting scans done, and on to follow-up recommendations and subsequent imaging exams.
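For a taste of what DICOMweb looks like in practice, here is roughly what a QIDO-RS study search amounts to; the endpoint is a placeholder:

```python
import requests

# Placeholder DICOMweb endpoint; QIDO-RS is the search half of DICOMweb.
DICOMWEB_BASE = "https://pacs.example.org/dicomweb"

# Search for a day's CT studies; the parameters are standard DICOM attributes.
resp = requests.get(
    f"{DICOMWEB_BASE}/studies",
    params={"ModalitiesInStudy": "CT", "StudyDate": "20211128"},
    headers={"Accept": "application/dicom+json"},
)
resp.raise_for_status()

for study in resp.json():
    # DICOM JSON keys are tag numbers; 0020000D is StudyInstanceUID.
    uid = study["0020000D"]["Value"][0]
    print("Found study:", uid)
```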

The RSNA AI Demo boiled down to 3 videos, 15-20 minutes each, plus one introduction video. You can watch the demo videos on RSNA’s AI demo micro-site, or via RSNA’s YouTube channel.