
Interview: Rick Abramson


Rick Abramson | Adjunct Associate Professor of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee, USA

Citation: EMJ. 2024;9[3]: https://doi.org/10.33590/emj/KHDZ3489.

What motivated you to explore the intersection of radiology and AI, having previously worked in several different sectors?

I have worked previously in healthcare management consulting, health services research, clinical practice, and healthcare administration. As such, I am all too familiar with the myriad challenges we face in healthcare. A few years ago, when I was in a physician-executive role at HCA Healthcare, I had the opportunity to look out over what was transpiring within a vast USA-based for-profit health system, and I saw some disturbing trends. Volumes, whether measured in facility admissions or patient visits or procedures, were skyrocketing, but the supply of available providers was dwindling. That meant the workload on any given provider was rising exponentially, and providers were working harder and longer than ever before. This translated into both flagging morale and diminishing care quality. 

At that time, I reasoned that healthcare needed to follow other industries and look to technology for a solution. Could cutting-edge technology, including AI, ease the burden on overworked providers while simultaneously enhancing the quality of care? I was so intrigued by this driving question that I left HCA and joined the startup world, where I have been for the past 3 years. I would say this is the most exciting area I’ve ever worked in, both for the opportunities and for the challenges presented by this exhilarating technology. 

During your lecture at the European Congress of Radiology (ECR) earlier this year, you highlighted the limitations of AI. What do you believe are the most significant challenges that need to be addressed to advance AI in healthcare?

I am an optimist regarding the opportunities AI offers for healthcare, but we must acknowledge some significant obstacles to overcome before we can move forward. As a community, we are now transitioning from AI as a research project to AI as part of real-world clinical workflows. With that transition come some very important questions: are the AI tools addressing important problems, or are they merely fancy solutions to problems that have already been addressed? To advance the adoption of AI tools in the clinical setting, we cannot be satisfied with merely demonstrating the diagnostic performance of the tools. We must also evaluate their practical usefulness within real-world workflows, and we must demonstrate the incremental value of AI-enabled workflows over the status quo.

For example, let’s consider an AI tool for fracture detection. When evaluating such a tool, we cannot focus exclusively on its performance in standalone testing. We must also look at how the tool will be deployed in clinical workflows and what effect that deployment will have on the downstream outcomes we care about. If the tool is accurate for detecting easy fractures but struggles to detect the fractures that radiologists tend to miss, the incremental value added by the AI tool might be quite small. So perhaps we need to focus on the difficult edge cases and look at fracture detection rates with the AI tool versus without. But we cannot stop there, because even if there is an improvement in fracture detection rates, we must convince ourselves that this improvement makes a long-term difference in patient care. If we can’t demonstrate such an improvement in patient care, there may be other reasons to adopt the AI tool; for example, perhaps the tool offers a beneficial effect on care pathway efficiency. But we can’t just assume that effect; we must demonstrate it affirmatively to justify the adoption of the tool.
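To make the idea of incremental value concrete, here is a minimal, purely illustrative sketch in Python. All counts are hypothetical placeholders rather than study data; the point is simply that the comparison of interest is the detection rate with versus without AI assistance on the difficult cases, not the tool’s standalone accuracy on easy fractures.

```python
# Purely illustrative sketch: comparing fracture detection on "difficult" cases
# with and without AI assistance. All numbers are hypothetical placeholders,
# not results from any actual reader study.

def detection_rate(detected: int, total: int) -> float:
    """Fraction of fractures detected out of all fractures present."""
    return detected / total

# Hypothetical reader-study counts on the cases radiologists tend to miss
difficult_cases = 200
detected_without_ai = 130   # unaided radiologist reads
detected_with_ai = 154      # AI-assisted reads

rate_without = detection_rate(detected_without_ai, difficult_cases)
rate_with = detection_rate(detected_with_ai, difficult_cases)

# The incremental value is the difference on exactly these hard cases,
# not the tool's overall accuracy in standalone testing.
incremental_yield = rate_with - rate_without

print(f"Detection rate without AI: {rate_without:.1%}")
print(f"Detection rate with AI:    {rate_with:.1%}")
print(f"Incremental yield:         {incremental_yield:.1%}")
```

Even a positive difference in such a comparison would, as noted above, still need to be linked to downstream patient outcomes or care pathway efficiency before adoption is justified.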

Beyond evaluating AI tools and demonstrating their effect on long-term outcomes, we have other, more practical challenges to address. In most countries, AI tools are still not reimbursed by payers, which means providers must pay for these tools themselves, and this in turn means that AI developers must show how deployment of these tools will generate a return on investment over a reasonable time horizon. In jurisdictions where AI tools are purchased by government-run facilities, the procurement process is often bureaucratic and cumbersome, and it may differ wildly from facility to facility. Regulatory approval is another big challenge: most countries have not yet updated their regulatory frameworks to accommodate modern AI tools, and developers often struggle under the weight of excessive regulatory burden. But perhaps the biggest obstacle of all is ourselves; we, the healthcare community, still tend to look at AI with ambivalence and mistrust, some of us because of a fear of replacement, others because of legitimate concern for uncertainties and risks. Until we convince ourselves that AI technology is both safe and worthwhile, adoption will proceed quite slowly. 

How do you balance the use of AI in radiology with the need to maintain human oversight and patient safety?

This is a crucial question, and it’s one we all need to be asking. Physicians, developers, administrators, regulators, and investors: we all share the responsibility of ensuring that AI in healthcare is adopted in an ethically responsible manner, with patient safety as our top priority. If an AI tool would make providers more efficient but at the expense of patient care quality, that’s a tool we should avoid adopting.

That said, I might challenge the fundamental notion of a ‘balance’ or trade-off between technology adoption and patient safety. Of course, we want to make sure that new technology is not biasing physicians to the detriment of patient care or removing important human safeguards that keep patients safe. But we must also acknowledge that new technology might actually enhance patient safety. We know that errors are unfortunately quite prevalent in medicine, and we are already seeing examples of how AI can prevent errors and misses; in this respect, we may actually have a moral responsibility to push AI adoption forward. We also need to be more open-minded about where human control is necessary to maintain patient safety and where automation might keep patients safer.

In your experience, what are some of the most promising applications of AI in public health screening and early cancer detection?

Well, we have archetypal examples for each: tuberculosis screening on the public health side and mammography for early cancer detection. Both applications are being transformed quite dramatically by AI software, to the point where I believe the workflows for both tuberculosis screening and mammography will be entirely different 5 years from now.

But we also have other emerging applications of AI for large-scale population screening. We see AI being used in early-detection initiatives for other cancers, particularly lung cancer. We see AI being deployed for early detection of chronic conditions like congestive heart failure and chronic obstructive pulmonary disease, in hopes that early identification will keep patients healthy for longer and also reduce expensive hospital admissions for acute disease exacerbations. And we even see AI being used to identify patients with low skeletal muscle mass for targeted nutrition and exercise interventions. I am optimistic that AI technology will accompany a shift in our public health focus from identifying disease that is already present to preventing disease before it arises.  

Having pioneered international teleradiology, what insights can you share about the integration of AI in teleradiology practices?

I’m not sure I would qualify as a ‘pioneer’; I would save that label for those brave entrepreneurs who introduced the world to an entirely new way of practising radiology. However, I was indeed among the first wave of international teleradiologists, before remote practice became commonplace. Back then, nobody knew whether teleradiology was just a temporary fad or whether it would become mainstream. At first, teleradiology grew and became popular simply because it allowed radiologists to avoid overnight call. But as overnight teleradiology grew more prevalent, radiology practices found they had no choice but to sign up with a teleradiology provider, because without overnight coverage they could not recruit and retain radiologists who increasingly expected overnight teleradiology as a standard part of their practice environment. In my days at HCA, daytime teleradiology coverage was becoming an important alternative to traditional on-site diagnostic radiology in those parts of the USA that were having trouble recruiting radiologists. 

I think teleradiology has more growth ahead, now that we have found (in the post-COVID world) even greater acceptance of remote diagnostic work. Regarding AI technology, I have noticed that teleradiology firms seem to be among the earliest adopters. I think that’s for a few reasons: first, teleradiology practices tend to be quite innovative, as they are continually questioning traditional workflows and pushing the bounds of how to improve the operational side of radiology. Second, they operate at scale in an extremely competitive market environment where small differences in operational efficiency may have huge effects on profitability. I look to teleradiology as something of a bellwether, an early indicator of which AI tools the market will find most useful and how quickly those tools will be adopted.

What role do you see for AI in addressing the current challenges of overworked and understaffed radiology departments?

There is enormous potential here, but only if we can achieve the magnitude of efficiency gains we need. Current state-of-the-art AI tools can deliver 10% efficiency improvements, but that’s not enough to address the workforce staffing challenges we face. We really require efficiency gains that would double or triple our productivity, and that magnitude of workflow improvement can only come through the automation of diagnostic reporting. All the major AI software developers are working on automated ‘draft’ reporting, and the race is on to see if and when this technology will be ready for large-scale clinical deployment.

I expect we will see adoption of automated reporting not all at once, but in waves. First, we will see the integration of low-level automation into more complex, human-driven pathways, such as AI replacing the second reader in double-reader mammography workflows. Then we will see AI become the single reader in underserved regions, especially those locations that are overwhelmed with plain film volumes. And steadily we will march on, towards more and more automated draft reporting in more regions and for more workflows, until radiology becomes much more automated than it has been in the past. 

How important is interdisciplinary collaboration between clinicians, researchers, and policymakers in developing effective AI solutions in healthcare?  

Interdisciplinary collaboration is absolutely crucial. Researchers are the engine for advancing technology, but they need to understand from clinicians which problems are the most pressing and what solutions will be practical for clinical workflows. Clinicians need to maintain dialogue with researchers in both the design phase and the testing and evaluation phase, and of course, they will be the adopters of new technology as it is refined. Policymakers establish the regulatory and market frameworks needed to incentivise both development and adoption in keeping with healthcare system requirements. Co-creation of AI solutions is therefore paramount if we are to see the advancement of technically sound, clinically effective, and policy-compliant technology.

You were formerly global Chief Medical Officer at Annalise.ai; can you elaborate on the core features of Annalise.ai’s enterprise products and how they enhance diagnostic interpretation in radiology? 

Annalise.ai is a global leader in the development of advanced AI applications for medical imaging. Its software solutions have regulatory clearance and are commercially available in 40 countries. Annalise has separated itself from the competition by developing ‘comprehensive’ diagnostic support solutions, that is, solutions that report on dozens of imaging findings within a particular modality, in contrast to the single-finding algorithms of some competitors.

Annalise has enjoyed rapid expansion in several markets; for example, it recently won multiple tenders within the British National Health Service (NHS) and will soon be processing one-third of all chest X-rays obtained in the UK. I recently relinquished my full-time role as Annalise’s Chief Medical Officer, but I am proud of what we accomplished as a team during my tenure, and I look forward to serving Annalise in an advisory role going forward. 

What is next for you in the field of AI and healthcare?

I’ve been going fairly hard for the last few years, so I’ll take a little bit of a break before jumping back into a full-time role. In the short term, I plan to do some clinical work, and I will also spend a fair bit of time advising investors on capital deployment and helping policymakers to focus resources. In the long term, I plan to remain focused on the intersection of technology, clinical practice, and the market and policy environments. It’s a complex space full of excitement and opportunity. I look forward to seeing what the future brings, not just for myself but for all of healthcare! 
