Health IT

How do you make doctors trust machines in an AI-driven clinical world?

During a panel at the MedCity INVEST Twin Cities conference, leaders from the payer, provider and investor spaces spoke about how to actually drive adoption of AI tools in clinical settings.

From L to R: Moderator Joe Carlson with the Minneapolis Star Tribune, Tina Wallman with Optum | IRD, Bradley Erickson with the Mayo Clinic and Gene Munster with Loup Ventures

The spotlight on the use of AI in the healthcare system has led to breathless headlines about sweeping industry disruption and massive cost savings, as well as fears that a machine or an algorithm could replace your trusted physician.

A panel at the MedCity INVEST Twin Cities conference attempted to tamp down some of the unrealistic hype around the technology and outline some of the ways to build AI systems that work in a clinical setting and can drive widespread deployment.


Moderated by Minneapolis Star Tribune reporter Joe Carlson, the panel featured Tina Wallman, senior director of strategic initiatives at Optum | IRD; Bradley Erickson, professor and associate chair for research at the Mayo Clinic Department of Radiology; and Gene Munster, a founding partner at Loup Ventures.

Still, even as they tried to set realistic expectations, the panelists generally agreed on the revolutionary potential of AI, despite the technology being in its early innings in healthcare.

“As an investor you need a healthy mix of skepticism, but also optimism,” Munster said. “There’s definitely a ton of hype right now, but the substance is worth the wait.”

Erickson laid out some of the existing applications of AI in radiology at the Mayo Clinic, which include identifying tumors and physical signs of conditions like multiple sclerosis.

To build trust in the technique as a valid clinical tool, Erickson said, the health system trained and tested algorithms against control groups. He also drew a distinction between trust and understanding, and explained how the concept of explainable AI helps bridge that gap and can drive its use by clinicians.

“Explainable AI helps you understand how the machine came to that conclusion. In the past, deep learning was called a black box, but now there are tools that are being developed to make it into more of a grey box,” Erickson said. “That’s a critical piece for adoption in medicine as well as a lot of other areas.”

Wallman added that the key to building that trust is incorporating clinical expertise into the technology development process to understand what information clinicians need to see and how it should be presented to make users more comfortable.

“Trust is a huge part of the design of AI systems and how we put them in place or how we get them integrated,” Wallman said.

“You can have technologists build amazing deep learning models that will never see the light of day because they haven’t been designed in a way to develop trust, show interpretability or provide insights that clinicians actually care about.”

Erickson said that instead of the term artificial intelligence, the term augmented human intelligence is now in vogue as a way to explain that while algorithms assist in clinical decisions, they are not the ultimate decision-makers.

When it comes to where patient understanding and trust fit in, Erickson said patients generally rely on the advice and expertise of their physician or care provider to do the diligence necessary to ensure that the AI techniques are effective.

“I’m not sure how patients understand (deep learning) is being used,” Erickson said. “There’s been a lot of surveys and studies of patient adoption and most of them like the idea that their doc is using cutting edge, space age type technology.”

Picture: Kevin Truong, MedCity News