Over the past few months, I have been jotting down random notes as I attend sessions and converse with friends and colleagues in the radiology domain. I apologize if this post is slightly incoherent; it is more of a note to my future self, but I thought I’d share it, as others might find some of these bits and pieces useful. Additionally, I won’t spend a lot of time explaining each of the concepts – there are plenty of resources online that do a great job of explaining them in detail – this post is more about putting it all together in a concise format.
1- Things To Keep In Mind
Artificial Intelligence vs. Machine Learning vs. Deep Learning
Most people assume these terms are interchangeable, which is incorrect. Think of Machine Learning (ML) as a subset of Artificial Intelligence (AI), and Deep Learning (DL), in turn, as an even more specialized subset of Machine Learning.
Curation And Classification
Curation and classification are crucial for successful training and deployment. You should not rely on the AI/ML/DL algorithm to know whether the images being fed into it are the right type. For example, think of an algorithm that expects chest X-rays taken in the PA (Posterior-Anterior) orientation. How will it handle images in different orientations, e.g., AP (Anterior-Posterior) or even lateral? Even crazier, what happens if you feed a random picture (say, a picture of a pet) into the algorithm? Without proper checks, you’ll get misleading AI results.
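To make this concrete, here is a minimal sketch of a metadata “gatekeeper” that rejects studies the model was never trained for. The tag names mirror real DICOM attributes (Modality, ViewPosition, BodyPartExamined), but the accepted values are illustrative assumptions:

```python
# Accept only studies matching what the model was trained on.
# Tag names mirror real DICOM attributes; values are illustrative.
EXPECTED = {
    "Modality": {"CR", "DX"},        # conventional / digital radiography
    "ViewPosition": {"PA"},          # model trained on PA chest views only
    "BodyPartExamined": {"CHEST"},
}

def is_valid_input(metadata: dict) -> bool:
    """Return True only if every checked tag has an expected value."""
    return all(
        metadata.get(tag) in allowed
        for tag, allowed in EXPECTED.items()
    )

# A PA chest X-ray passes; an AP view (or a pet photo, which has no
# DICOM header at all) is rejected before it ever reaches the model.
pa_chest = {"Modality": "DX", "ViewPosition": "PA", "BodyPartExamined": "CHEST"}
ap_chest = {"Modality": "DX", "ViewPosition": "AP", "BodyPartExamined": "CHEST"}
print(is_valid_input(pa_chest))  # True
print(is_valid_input(ap_chest))  # False
```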
A model may demonstrate amazing results (the often-hyped area under the curve, or AUC) with familiar data, yet fail when presented with unfamiliar data. There are many reasons why this may happen: differences in population diversity, the scanners used to capture the images, etc.
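For the curious, AUC is just the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A tiny pure-Python sketch, with made-up scores, of how a model can look great on familiar data yet collapse to coin-flip performance on shifted data:

```python
def auc(scores_pos, scores_neg):
    """AUC = P(random positive outscores random negative); ties count half."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Toy numbers only: the model separates "familiar" cases perfectly...
print(auc([0.9, 0.8, 0.7], [0.2, 0.3, 0.1]))  # 1.0
# ...but on data from a different scanner/population its scores
# overlap heavily and the AUC collapses to chance level.
print(auc([0.6, 0.4, 0.5], [0.5, 0.6, 0.4]))  # 0.5
```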
For example, has this model been trained on data that includes visible minorities? A model is only as good as the data it has been trained on. If a model has been trained only on publicly available datasets, keep in mind that the vast majority of those datasets in the USA come from just three states.
Without getting into a lot of detail, a model that is not continuously learning will eventually “decay” and produce less accurate results – very much like a human who skips continuing education. There are multiple reasons why this may happen; if you are interested, I encourage you to read about concept drift vs. data drift.
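As a toy illustration, one very simple form of data-drift monitoring is checking whether incoming data still looks like the training data. This sketch (illustrative numbers, hypothetical threshold) flags a batch whose mean has wandered far from the training-time baseline; concept drift – the relationship between images and labels changing – needs outcome data and is harder to detect:

```python
import statistics

def drift_alarm(baseline, incoming, z_threshold=3.0):
    """Flag data drift when the incoming batch mean sits too many
    baseline standard deviations from the training-time mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    z = abs(statistics.mean(incoming) - mu) / sigma
    return z > z_threshold

# e.g. mean pixel intensity of studies seen at training time
baseline = [100, 102, 98, 101, 99, 100, 103, 97]
same_scanner = [101, 99, 100, 102]
new_scanner = [130, 128, 131, 129]   # brighter detector -> shifted inputs

print(drift_alarm(baseline, same_scanner))  # False
print(drift_alarm(baseline, new_scanner))   # True
```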
Humans tend to assume machines are infallible, which could not be further from the truth, especially with AI. You do not want your users to get too comfortable letting the AI fly on autopilot or, worse, to assume the AI is more likely to be correct than they are. Just look at the drivers who experienced serious crashes because they over-relied on their vehicles’ “autonomous” driving systems.
This is often an afterthought, but it is essential, period.
2- Integration Of AI Into the Workflow
Where In The Workflow?
Depending on your specific AI algorithm, it might be most helpful triaging worklists and re-prioritizing exams, or it may assist with image interpretation (making measurements, detecting bone age, etc.). Last but not least, it can review findings – like a second read, but a lot less expensive!
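Worklist triage is essentially re-sorting the reading queue by the algorithm’s urgency score, so the likely-critical exam gets read first. A toy sketch – the fields and scores are made up:

```python
# Illustrative worklist entries; "ai_urgency" is a hypothetical score
# (0..1) produced by the algorithm for each exam.
worklist = [
    {"accession": "A1", "exam": "CT head", "ai_urgency": 0.10},
    {"accession": "A2", "exam": "CT head", "ai_urgency": 0.95},  # suspected bleed
    {"accession": "A3", "exam": "CXR",     "ai_urgency": 0.40},
]

# Highest urgency floats to the top of the reading queue.
triaged = sorted(worklist, key=lambda e: e["ai_urgency"], reverse=True)
print([e["accession"] for e in triaged])  # ['A2', 'A3', 'A1']
```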
Need I say more?!?
The AI algorithm must be able to “explain” how it arrived at its conclusions, rather than being a black box. One example is overlaying heatmaps on the images. Have a look at this cool visualization to understand what I mean: https://www.cs.ryerson.ca/~aharley/vis/conv/
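One simple (if brute-force) way to produce such a heatmap is occlusion sensitivity: hide each patch of the image and record how much the model’s score drops – big drops mark the regions the model relied on. A toy sketch with a stand-in “model” (a real one would be a CNN):

```python
import numpy as np

def occlusion_heatmap(image, model_score, patch=2):
    """Occlude each patch and record the score drop; larger drop means
    the model depended more on that region. model_score: image -> float."""
    base = model_score(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - model_score(occluded)
    return heat

# Toy stand-in for a trained model: it "detects" brightness in the
# top-left quadrant of a 4x4 image.
toy_model = lambda img: float(img[:2, :2].mean())
image = np.zeros((4, 4))
image[:2, :2] = 1.0

heat = occlusion_heatmap(image, toy_model)
print(heat)  # the top-left cell dominates -- that's where the "finding" is
```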
A user must be able to provide “feedback” by Accepting, Rejecting, Editing, and/or Adding to the algorithm’s findings. Ideally, the feedback loop assists with model re-learning, too.
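A feedback record might look something like this minimal sketch – the field names are illustrative, not from any particular vendor’s API:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Action(Enum):
    ACCEPT = "accept"
    REJECT = "reject"
    EDIT = "edit"
    ADD = "add"

@dataclass
class Feedback:
    """One radiologist response to one AI finding (illustrative fields)."""
    study_id: str
    finding_id: str
    action: Action
    corrected_value: Optional[str] = None  # filled for EDIT / ADD

# Feedback events are queued so a later re-training job can consume
# them and the model can learn from the radiologist's corrections.
retraining_queue = []
retraining_queue.append(Feedback("study-001", "nodule-1", Action.ACCEPT))
retraining_queue.append(
    Feedback("study-001", "nodule-2", Action.EDIT, corrected_value="8 mm")
)
print(len(retraining_queue))  # 2
```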
Another afterthought… AI algorithms must be good citizens, able to integrate with other healthcare systems using international standards. HL7 v2/FHIR and DICOM are the bare minimum. Ideally, they should also support newer standards like FHIRcast, IHE AIW, and IHE AIR.
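For instance, an AI finding could be posted to downstream systems as a FHIR Observation. The top-level field names below follow the FHIR R4 spec, but the values, the algorithm name, and the use of plain text instead of proper coding systems (SNOMED CT / RadLex) are illustrative assumptions; a real integration would also follow a profile like IHE AIR:

```python
import json

# Sketch of an AI result expressed as a FHIR R4 Observation resource.
# All values below are illustrative, not from a real system.
observation = {
    "resourceType": "Observation",
    "status": "preliminary",  # AI output awaiting radiologist review
    "code": {"text": "Suspected intracranial hemorrhage (AI)"},
    "subject": {"reference": "Patient/example-123"},
    "valueString": "Hemorrhage probability 0.91",
    "device": {"display": "ExampleAI v1.2"},  # hypothetical algorithm name
}

payload = json.dumps(observation, indent=2)
print(payload)
# This payload would then be POSTed to the imaging IT system's
# FHIR endpoint (e.g. {fhir_base}/Observation).
```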
Imagine if deploying AI algorithms were as easy as downloading apps on your smartphone. Doesn’t that make you feel all warm and fuzzy? 🙂
3- Taking Things To The Next Level
Once everything else I mentioned above is checked off…
Federated learning: performing model training across multiple physical “sites” (e.g., hospitals) without the need to exchange the actual data (read: Protected Health Information), while also taking into account the differences in data between these sites.
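The best-known recipe here is federated averaging (FedAvg): each hospital trains locally and shares only model weights – never patient data – and a central server averages them, weighted by each site’s sample count. A minimal sketch with made-up numbers:

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """Average locally trained weight vectors, weighted by each site's
    number of training samples (the FedAvg aggregation step)."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Weights after local training at each hospital (illustrative values).
hospital_a = np.array([0.2, 0.4])   # trained on 100 studies
hospital_b = np.array([0.6, 0.8])   # trained on 300 studies

global_weights = federated_average([hospital_a, hospital_b], [100, 300])
print(global_weights)  # [0.5 0.7]
```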
Patient History & Priors
For example, an AI algorithm to detect a brain hemorrhage is really helpful, right? But what if the algorithm got even smarter and could differentiate between an old hemorrhage, which doesn’t require action, vs. a new hemorrhage that requires attention ASAP?
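As a toy illustration only (matching real findings across studies requires image registration and much more), a priors-aware algorithm might flag a finding as “new” only if no prior study reported one at roughly the same location:

```python
def is_new_finding(current, priors, tolerance=5.0):
    """Treat a finding as new only if no prior finding lies within
    `tolerance` of it on both axes. Coordinates and tolerance are toy
    values, not a clinically meaningful matching scheme."""
    cx, cy = current
    return all(
        abs(cx - px) > tolerance or abs(cy - py) > tolerance
        for px, py in priors
    )

prior_hemorrhages = [(40.0, 62.0)]  # location reported six months ago
print(is_new_finding((41.0, 60.0), prior_hemorrhages))   # False: likely the old bleed
print(is_new_finding((120.0, 30.0), prior_hemorrhages))  # True: needs attention ASAP
```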
Demystify AI By Trying Your Hand At It
Yup. Just that! A lot of people seem to think AI is some kind of voodoo. Try it for yourself and it will no longer be. I mean, you are not going to become an expert overnight, but at least you’ll have an idea of the mechanics under the hood.
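To show there really is no voodoo, here is a complete “AI” in a few lines of plain Python: a single-feature logistic regression trained by gradient descent on a made-up task (classify numbers as small vs. large). Under the hood, a lot of ML is exactly this – nudging a couple of numbers to reduce prediction error:

```python
import math

# Made-up training data: inputs below ~5 are class 0, above are class 1.
data = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]
w, b, lr = 0.0, 0.0, 0.1   # one weight, one bias, learning rate

for _ in range(2000):                            # training loop
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))     # sigmoid prediction
        w -= lr * (p - y) * x                    # gradient step on weight
        b -= lr * (p - y)                        # gradient step on bias

predict = lambda x: 1 / (1 + math.exp(-(w * x + b))) > 0.5
print(predict(2), predict(8))  # False True
```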
4- Analytics – A Real Afterthought
Virtually no one seems to be thinking about this one… You need analytics and dashboarding not just to monitor AI algorithms, but also to track things like whether a given algorithm is worth the money paid for it and the efficiencies it is creating in your workflows, plus much more. Ideally, all of this works through the use of IHE SOLE.
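The kinds of questions a dashboard should answer can be computed from a simple event log. A sketch with made-up fields and numbers (IHE SOLE is the standard that defines how such imaging workflow events would actually be captured):

```python
from statistics import mean

# Illustrative per-exam log: did the AI flag it, did the radiologist
# agree, and how long did the read take?
events = [
    {"ai_flagged": True,  "radiologist_agreed": True,  "minutes_to_read": 12},
    {"ai_flagged": True,  "radiologist_agreed": False, "minutes_to_read": 25},
    {"ai_flagged": True,  "radiologist_agreed": True,  "minutes_to_read": 9},
    {"ai_flagged": False, "radiologist_agreed": None,  "minutes_to_read": 41},
]

flagged = [e for e in events if e["ai_flagged"]]
agreement_rate = sum(e["radiologist_agreed"] for e in flagged) / len(flagged)
avg_flagged_turnaround = mean(e["minutes_to_read"] for e in flagged)

print(f"agreement: {agreement_rate:.0%}, "
      f"flagged turnaround: {avg_flagged_turnaround:.1f} min")
```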