Privileged. That’s the word I would use to describe my involvement with the RSNA Imaging AI in Practice 2021 Demo. I wrote about my experience with last year’s AI demo, and it got even better this year when I was asked to be Teri Sippel-Schmidt’s co-pilot as a Technical Project Manager. I spoke highly of Teri’s leadership in last year’s demo, which is why I feel privileged to play a bigger part in bringing this demo to life.

On the heels of last year’s success, the demo retained the vast majority of last year’s participants and added more. All in all, 22 vendors participated with 32 products. Instead of last year’s three demo teams, we ended up with five this year.

The premise is the same as last year – showcasing how AI-touting systems can integrate (with interoperability under the hood) to deliver on the promise of improving Radiology workflows end-to-end: starting from scheduling of resources, through protocoling, image acquisition, image analysis, and reporting, and, last but not least, AI model training.

As was the case last year, many thanks go out to everyone who made the demo possible. I am sure I will miss some people, but I’ll try anyway:

  • RSNA Board of Directors, Radiology Informatics Committee (RIC) and the RIC IAIP Task Force.
  • Our venerable clinical champions
  • RSNA staff
  • The vendors who make all this possible
  • Most importantly (on a personal level), Teri for coaching me with an abundance of patience and wisdom.

If you happen to see this in time, please be sure to come see the demo. It will be running 9am-5pm Nov 28th through Dec 1st. South Hall booth 4925.

I will post recaps of this demo and overall impressions of RSNA 2021 afterwards. Stay tuned!


I have just posted on the topic of Radiology report anonymization over the “Institute for Better Health – Data, Analytics and Artificial Intelligence” blog. Please follow the link below to read it.

Report Anonymization: Date formats and whitespaces

It discusses the extremely wide variety of date/time formats used by individuals and systems, even inside a single institution, and touches on the lesser-known topic of the different types of whitespace occasionally used in dates to prevent line wrapping from breaking up a date value.
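To make the whitespace issue concrete, here is a minimal Python sketch – the date formats and the `[DATE]` token are illustrative, not what the post’s actual pipeline uses – that normalizes Unicode whitespace (non-breaking spaces and friends) before matching a few common date patterns:

```python
import re
import unicodedata

# Unicode whitespace (non-breaking space U+00A0, thin space, etc.) can hide
# inside dates; normalize it to plain spaces before pattern matching.
def normalize_whitespace(text: str) -> str:
    return "".join(" " if unicodedata.category(ch) == "Zs" else ch for ch in text)

# A few of the many date formats seen in practice (illustrative, not exhaustive).
DATE_PATTERNS = [
    re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),                # 2021-03-15
    re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),          # 3/15/21 or 15/03/2021
    re.compile(r"\b[A-Z][a-z]{2,8} \d{1,2}, \d{4}\b"),   # March 15, 2021
]

def redact_dates(report: str, token: str = "[DATE]") -> str:
    report = normalize_whitespace(report)
    for pattern in DATE_PATTERNS:
        report = pattern.sub(token, report)
    return report
```

Without the normalization step, a date like “March 15, 2021” containing a non-breaking space would quietly slip past the patterns.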

I have just posted on the topic of Radiology report anonymization over the “Institute for Better Health – Data, Analytics and Artificial Intelligence” blog. Please follow the link below to read it.

Report Anonymization: Rich Text Format (RTF)

It explores different approaches for converting Rich Text Format (RTF) radiology reports to plain text for the purposes of anonymization, natural language processing (NLP) and more.
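For illustration, here is a deliberately naive Python sketch of the regex-stripping approach; real RTF has nested groups, binary payloads and many escape forms, so production work should use a proper parser or library rather than this:

```python
import re

# A deliberately naive sketch: handles only simple reports, not full RTF.
def rtf_to_text(rtf: str) -> str:
    text = re.sub(r"\{\\\*[^{}]*\}", "", rtf)           # drop \* destination groups
    text = re.sub(r"\\par[d]?\b", "\n", text)           # paragraph marks -> newlines
    text = re.sub(r"\\'[0-9a-fA-F]{2}",
                  lambda m: chr(int(m.group(0)[2:], 16)), text)  # \'hh hex escapes
    text = re.sub(r"\\[a-zA-Z]+-?\d*\s?", "", text)     # strip control words
    text = text.replace("{", "").replace("}", "")       # strip group braces
    return text.strip()
```

Even this toy version shows why the problem is subtle: control words sometimes swallow a following space, and hex escapes hide characters that an anonymizer must still see.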

I have just posted a review of a number of DICOM anonymization tools over the “Institute for Better Health – Data, Analytics and Artificial Intelligence” blog. Please follow the link below to read it.

A comparison of DICOM Anonymization Tools

This post compares off-the-shelf DICOM anonymization tools like DicomCleaner, CTP, PyDicom/deid and more for a project with 480,000 chest X-ray images. The discussion includes the requirements for the right tool, and a pro/con-style analysis of each tool in the comparison.
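To give a flavour of what these tools do, here is a hedged Python sketch of one common technique – consistent per-patient date shifting – with a made-up salt and offset range (this is the general idea only, not the logic of any specific tool in the comparison):

```python
import hashlib
from datetime import datetime, timedelta

# Shift all dates for a patient by a consistent, secret per-patient offset:
# real dates are hidden, but intervals between a patient's studies survive.
# The salt and offset range here are illustrative assumptions.
def date_offset_days(patient_id: str, salt: str = "project-salt", max_days: int = 365) -> int:
    digest = hashlib.sha256((salt + patient_id).encode()).hexdigest()
    return (int(digest, 16) % (2 * max_days)) - max_days   # in [-365, 364]

def shift_dicom_date(da_value: str, patient_id: str) -> str:
    # DICOM DA values look like "YYYYMMDD", e.g. "20210315".
    original = datetime.strptime(da_value, "%Y%m%d")
    shifted = original + timedelta(days=date_offset_days(patient_id))
    return shifted.strftime("%Y%m%d")
```

Preserving intervals matters for research use: a follow-up exam two weeks after the original should still appear two weeks later after anonymization.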

Over the past few months, I have been jotting down random notes as I attend sessions and converse with friends/colleagues in the radiology domain. I apologize if this post is slightly incoherent, it is more of a note to my future self, but I thought I’d share as others might find some of these bits and pieces useful. Additionally, I won’t spend a lot of time explaining each of the concepts – there are plenty of resources online that do a great job of explaining them in detail – this post is more about putting all of it together in a concise format.

1- Things To Keep In Mind

Artificial Intelligence vs. Machine Learning vs. Deep Learning

Most people assume the terms are interchangeable, which is incorrect. Think of Machine Learning (ML) as a subset of Artificial Intelligence (AI), and, in turn, Deep Learning (DL) as a subset of Machine Learning.

Curation And Classification

Curation and classification are crucial for successful training and deployment. You should not rely on the AI/ML/DL algorithm to know if the images being fed into it are the right type. For example, think of an algorithm that expects Chest X-rays taken in the PA (Posterior-Anterior) orientation. How will it handle images in different orientations, e.g. AP (reverse) or even lateral? Even crazier, what happens if you feed a random picture (say, a picture of a pet) into an algorithm? Without proper checks, you’ll get misleading AI results.
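As a sketch of such a check, here is a minimal pre-inference gate. It assumes the image’s metadata has already been parsed into a dict; the tag names follow DICOM attribute names, but the gate itself is a simplified illustration, not any real product’s logic:

```python
# Expected metadata for a PA chest X-ray model (illustrative values).
EXPECTED = {
    "Modality": {"CR", "DX"},          # chest X-ray modalities
    "BodyPartExamined": {"CHEST"},
    "ViewPosition": {"PA"},            # reject AP, LATERAL, etc.
}

def is_valid_input(metadata: dict) -> bool:
    # Every expected tag must be present with an allowed value,
    # otherwise the image never reaches the algorithm.
    return all(metadata.get(tag) in allowed for tag, allowed in EXPECTED.items())
```

A pet photo has none of these tags and is rejected outright, and an AP or lateral chest X-ray is filtered before it can produce a misleading result.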

Model Brittleness

A model may demonstrate amazing results (the often-hyped area under the curve, or AUC) with familiar data, but fail when presented with unfamiliar data. There are many reasons why this may happen: differences in population diversity, the scanners used to capture the images, etc.

Model Bias

E.g., has this model been trained on data that includes visible minorities? A model is only as good as the data it has been trained on. If a model has been trained only on publicly-available datasets, then keep in mind that the vast majority of those datasets in the USA come from just three states.

Model Decay

Without getting into a lot of detail, a model that is not continuously learning will eventually “decay” and produce less accurate results, very much like a human who forgoes continuing education. There are multiple reasons why this may happen; if you are interested, I encourage you to read about concept drift vs. data drift.
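As a toy illustration of the data-drift side of this, here is a Python sketch that flags when a monitored feature (say, a pixel-intensity statistic or patient age) shifts away from the training baseline; the feature and threshold are made up for the example:

```python
from statistics import mean, stdev

# Flag drift when the mean of recent data moves more than `threshold`
# reference standard deviations away from the training baseline.
def has_drifted(reference: list[float], recent: list[float], threshold: float = 2.0) -> bool:
    baseline_mean = mean(reference)
    baseline_std = stdev(reference)
    return abs(mean(recent) - baseline_mean) > threshold * baseline_std
```

Real monitoring uses richer statistics (e.g. population stability index, KS tests), but the principle is the same: compare production data against the training distribution, continuously.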

Automation Bias

Humans tend to assume machines are infallible, which could not be further from the truth, especially with AI. You do not want your users to get too comfortable letting the AI fly on autopilot or, worse, assuming the AI is more likely to be correct. Just look at the drivers who experienced serious crashes because they over-relied on their vehicles’ “autonomous” driving systems.

Ongoing Monitoring

This is often an afterthought, but it is essential, period.


2- Integration Of AI Into the Workflow

Where In The Workflow?

Depending on your specific AI algorithm, it might be most helpful triaging worklists and re-prioritizing exams, or it may assist with image interpretation (making measurements, detecting bone age, etc.). Last but not least, it can review findings – like a second read, but a lot less expensive!

Seamless Integration

Need I say more?!?


Explainability

The AI algorithm must be able to “explain” how it arrived at whatever conclusions it made, rather than being a black box – for example, through the use of heatmaps on images. Have a look at this cool visualization to understand what I mean:

Feedback Loop

A user must be able to provide “feedback” by accepting, rejecting, editing and/or adding to the algorithm’s findings. Ideally, the feedback loop assists with model re-learning, too.

Interoperability Standards

Another afterthought… AI algorithms must be good Samaritans, able to integrate with other healthcare systems using international standards. HL7 v2/FHIR and DICOM are the bare minimum; ideally, an algorithm should also support newer standards like FHIRcast, IHE AIW and IHE AIR.

Smartphone Paradigm

Imagine if AI algorithms worked like downloading apps onto your smartphone. Doesn’t that make you feel all warm and fuzzy? 🙂


3- Taking Things To The Next Level

Once everything else I mentioned above is checked off…

Federated Learning

The concept of performing model learning across multiple physical “sites” (e.g. hospitals) without the need to exchange the actual data (read: Private Health Information), while also taking into account the differences in data between each of these sites.
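Here is a bare-bones sketch of the core idea behind federated averaging (FedAvg, the canonical federated learning algorithm): each site trains locally and shares only model parameters, never patient data, and the server combines them weighted by each site’s sample count. The weights and sizes below are made up:

```python
# Combine per-site model weights into a global model, weighted by how much
# data each site contributed. No patient data ever leaves a site.
def federated_average(site_weights: list[list[float]], site_sizes: list[int]) -> list[float]:
    total = sum(site_sizes)
    num_params = len(site_weights[0])
    return [
        sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
        for i in range(num_params)
    ]
```

A real deployment would repeat this over many rounds and also tackle the harder part the paragraph hints at: the data at each site is not identically distributed, which plain averaging does not fully solve.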

Patient History & Priors

For example, an AI algorithm to detect a brain hemorrhage is really helpful, right? But what if the algorithm got even smarter and could differentiate between an old hemorrhage, which doesn’t require action, vs. a new hemorrhage that requires attention ASAP?

De-mystify AI By Trying Your Hand At It

Yup. Just that! A lot of people seem to think AI is some kind of voodoo. Try it for yourself and it will no longer be so. You are not going to become an expert overnight, but at least you’ll have an idea of the mechanics under the hood.


4- Analytics – A Real Afterthought

Virtually no one seems to be thinking about this one… You need analytics and dashboarding not just to monitor AI algorithms, but also to track things like whether a given algorithm is worth the money paid, the efficiencies it is creating in your workflows, and much more. Ideally, all of this works with the use of IHE SOLE.

General Impressions

I had the pleasure of attending RSNA’s 2020 annual meeting – the first one to go virtual! I find virtual conferences to be a mixed bag. On one hand, the recorded sessions are a godsend allowing me to catch up later on things I may have missed; but on the flip side, I dearly miss the interactions of a face-to-face conference and the networking with friends and colleagues from far and wide.

RSNA 2020’s program was excellent. AI being the elephant in the room, there were plenty of AI-related sessions that explored its different facets, like:

  • Ethics discussions
  • Real-world AI implementation advice
  • Hands-on practical AI sessions
  • Finding or creating a curated dataset for AI training, testing and validation
  • Imaging repositories like the Cancer Imaging Archive, which offer a lot more than just images

AI aside, I was able to attend a few sessions on other topics of interest, such as:

  • Structured radiology reporting
  • The current state of technology for PACS, Universal Viewers, etc.
  • International interoperability standards and coding systems like FHIR, DICOM, LOINC, SNOMED, ICD-10 and CPT
  • Cybersecurity in medical imaging IT
  • Image sharing
  • Building a social-media and web presence/brand
  • Peer review and peer learning

Last but not least, I had a chance to attend a handful of vendor-sponsored sessions. The following stood out:

  • Nuance-sponsored session by Dr. Woojin Kim where he covered issues with AI like adversarial attacks, model decay and so on
  • Hyperfine’s demo of their portable MRI scanner
  • Nanox’ demo of their revolutionary X-ray scanners

Imaging AI in Practice Demo

Shameless plug alert! This was one of the highlights for me, because of my participation in the demo. I will be the first to admit that I played a very small role, but nonetheless I was an enthusiastic participant because I found the whole thing to be very stimulating. A lot of people don’t know it, but interoperability and tight integration of systems are very exciting.

The AI Demonstration was meant to showcase how AI can augment radiology to empower clinicians by integrating seamlessly into the workflow end-to-end (can’t emphasize the last point enough). Examples of where AI can lend a hand:

  • Protocoling
  • Worklist prioritization
  • Visualization, reconstruction and/or quality improvement of images
  • Classification, segmentation, feature detection and/or measurement extraction
  • Second reads
  • Report analysis for things like follow-up recommendation management (e.g. incidental findings)

Putting the demo together took 14 vendors, 26 products, nearly 9 months of hard work and many hours of conference calls. However, none of it could have been done without Teri Sippel-Schmidt‘s leadership, plus the extraordinary support from RSNA’s leadership and RSNA’s informatics committee.

On the face of it, the demo aims to show viewers that different systems can plug and play, and it does so in a relatable manner, showing how AI can help save time, make radiology more efficient, improve the practice and contribute to quality initiatives. Under the hood, however, the demo is a strong push for standards like FHIR & DICOMweb, a number of IHE profiles like AIW, AIR, SOLE and Results Distribution, and coding systems like RadElement, RadLex, etc. In fact, the demo was modelled very much like an IHE connectathon, but instead of focusing on individual transactions, it focuses on the big picture of a complete end-to-end radiology workflow – from when a patient is admitted into the ER, through getting scans done, and on to follow-up recommendations and subsequent imaging exams.

The RSNA AI Demo boiled down to 3 videos of 15-20 minutes each, plus one introduction video. You can watch the demo videos on RSNA’s AI demo micro-site, or via RSNA’s YouTube channel.

I have long been intrigued by the concept of genetic testing. Yes, I know it is not perfect and the science is not 100% exact; in fact, the Canadian Broadcasting Corporation (CBC) recently did some investigative journalism on the matter, and the results, shall we say, were a head-scratcher. See the CBC’s findings in article format or video format if you are curious.

Nonetheless, curiosity drove me to try it when I found the 23andMe kit for half price online, and also found out that the cost can be covered under a Health Spending Account (HSA) in Canada if you have one. So I pulled the trigger and got the kit.

Getting the sample was quite straightforward: it is just a matter of spitting into a tube, then using a pre-labelled box to return the sample to 23andMe’s lab… Then the waiting game began!

About a month later, the results came out. Luckily, no nasty surprises! For example, I have a typical likelihood of developing type 2 diabetes, according to 23andMe. The real surprise, however, came from the long list of things they can tell you about from your genes – conditions I didn’t even know had a name, let alone could be inferred from my genes.

For example, do you know what misophonia is? I certainly didn’t. It is the “condition” of hating the sound of chewing, which apparently is something your genes can reveal about you. In case you are wondering: yes, I have it, and my report indicated so.

Here is a quick list of peculiar traits that 23andMe offered me insights on:

  • Asparagus odour detection
  • Bitter taste detection
  • Cilantro taste aversion
  • Ice Cream Flavour preference 
  • Sweet vs. Salty preference
  • Fear of heights
  • Fear of public speaking
  • Mosquito bite frequency
  • Motion sickness

I could go on and on, but you get the picture. All in all, I think this was a fun experience considering the low cost and effort involved. Had the cost been higher, or the effort greater (e.g. a blood sample vs. a spit sample), I may have had second thoughts. As it stands, I would do it again knowing what I know now…

I had the honour of presenting at the 2020 FHIR North Conference. My topic was FHIR & DICOM: I began with a formal introduction to DICOM as it is used in medical imaging, then segued into how DICOM integrates with FHIR to complement the patient record – marrying images with clinical records. The session also covered basic interactions between FHIR and DICOMweb to query for and retrieve imaging studies. For those of you with a conference pass, you can view my recorded presentation here:
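To give a flavour of those interactions, here is a small Python sketch that builds the two queries involved – a FHIR ImagingStudy search for a patient, then a DICOMweb QIDO-RS request to list the series inside a study. The server base URLs and identifiers are hypothetical:

```python
from urllib.parse import urlencode

# Hypothetical endpoints for illustration only.
FHIR_BASE = "https://example.org/fhir"
DICOMWEB_BASE = "https://example.org/dicomweb"

def imaging_study_search_url(patient_id: str) -> str:
    # FHIR search: find ImagingStudy resources for a given patient.
    return f"{FHIR_BASE}/ImagingStudy?{urlencode({'patient': patient_id})}"

def qido_series_url(study_instance_uid: str) -> str:
    # DICOMweb QIDO-RS: list the series in a study by its Study Instance UID.
    return f"{DICOMWEB_BASE}/studies/{study_instance_uid}/series"
```

The flow mirrors the talk: the FHIR side answers “which imaging studies does this patient have?”, and the Study Instance UID in each result is the key that unlocks the actual pixels via DICOMweb.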

In addition to presenting, I was able to attend a number of sessions at the conference. Here is a list of some that I found particularly interesting:

  • James Agnew’s Intro to FHIR: James is a very engaging speaker, and this talk was informative while also being easy to follow. Highly recommended for those new to FHIR.
  • Opening Keynote: Advancing Interoperability for the Public Good: Didi Davis talked about the Sequoia Project leading the development of a framework to bridge old and new technologies and ease the transition.
  • Ontario’s COVID Response: This presentation covered a number of topics related to Ontario’s COVID-19 efforts, but I was intrigued by how the online portal, where individuals can retrieve their COVID-19 test results, was constructed using FHIR and Ontario Laboratories Information System (OLIS).
  • International Patient Summary – Ready for Prime-time: Gave an enlightening overview of the IPS, a health record extract, comprising a standardized collection of clinical and contextual information (retrospective, concurrent, prospective) that provides a snapshot in time of a subject of care’s health information and healthcare.
  • Structured Data Capture (SDC): A topic I have been intermittently following ever since I came to know about it when Alex Goel did a project related to it at the SIIM Hackathon, based on his experience at Cancer Care Ontario. On a related note, Alex was featured in a SIIMcast podcast episode that you can find here:
  • Modernizing Data Submissions in Home and Continuing Care Sector: Talked about how Canadian Institute for Health Information (CIHI) has embarked on modernizing its data collection system to leverage the HL7 FHIR standard to provide a unified specification for submission of interRAI data.

I was a first-timer at the September 2020 HL7 FHIR Connectathon, formally known as Connectathon 25. I will start by admitting the experience was a little overwhelming. For a brand-new attendee, the virtual format didn’t make things any easier – there is nothing like being able to walk up to another participant and strike up a conversation, as you can in a face-to-face setting.

I came into the connectathon with a little FHIR prototype that I had built, which I was hoping to put through its paces at the connectathon. While that, unfortunately, did not quite pan out, I still found the connectathon to be interesting and hope to return to another FHIR connectathon before too long. The FHIR community has an amazing culture – everyone is very smart yet down to earth and welcoming. 

Connectathon 25 had a lengthy list of tracks, not to mention the educational sessions. There was a flurry of activity in each stream, plus in the Zulip chat. I was particularly interested in the v2-to-FHIR track; Google was one of the participants, and they had cooked up a pretty cool ETL pipeline that can map v2 data to FHIR in a very flexible manner.
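In the spirit of that track, here is a toy Python mapping from an HL7 v2 PID segment to a FHIR Patient resource – greatly simplified compared to Google’s pipeline or the official v2-to-FHIR mappings, and handling only a happy-path segment:

```python
# Field positions follow HL7 v2: PID-3 identifier, PID-5 name, PID-7 birth date.
def pid_to_patient(pid_segment: str) -> dict:
    fields = pid_segment.split("|")
    family, _, given = fields[5].partition("^")   # v2 name: family^given
    dob = fields[7]                               # v2 date: YYYYMMDD
    return {
        "resourceType": "Patient",
        "identifier": [{"value": fields[3].split("^")[0]}],
        "name": [{"family": family, "given": [given] if given else []}],
        "birthDate": f"{dob[:4]}-{dob[4:6]}-{dob[6:8]}" if len(dob) >= 8 else None,
    }
```

Example: `pid_to_patient("PID|1||12345^^^HOSP||Doe^Jane||19800115|F")` yields a Patient with family name "Doe" and birthDate "1980-01-15". The real mappings cover repetitions, escape sequences and dozens of segments, which is exactly why a flexible pipeline is needed.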

I might not have accomplished what I set out to do, but I certainly made up for it by learning a ton of new things about FHIR – and about my favourite FHIR library, HAPI – by observing others. I look forward to being a part of a future FHIR connectathon!