Health Tech

The Healthcare AI Question No One Wants to Answer

At ViVE 2024, Tarun Kapoor — chief digital transformation officer at New Jersey-based Virtua Health — highlighted a conundrum he thinks may be the reason why AI hasn’t been able to move the needle when it comes to solving the clinician shortage.

Anyone who has attended a healthcare conference in the last few years knows that AI dominates the discussions. The technology is often heralded as a solution to the industry's sweeping clinician shortage, but AI tools don't yet seem to have made a real dent in the problem.

Tarun Kapoor, chief digital transformation officer at New Jersey-based Virtua Health, highlighted a conundrum he thinks may explain why AI has stalled on this front.

“What a lot of folks are saying right now is ‘We’re using AI technology, but we still have a clinician in the loop.’ That has a double-edged sword to it,” he declared during an interview Sunday at the ViVE conference in Los Angeles.

If a clinician has to be in the loop every single time a tool is used, it's dubious to claim that the tool is alleviating their burnout, Kapoor explained. At some point, hospitals will have to think about "letting automation go almost all the way through," he argued; otherwise the workforce shortage isn't getting solved.

This problem brings a “much bigger, societal question” to light, Kapoor pointed out. 

He highlighted the fact that driverless cars have been around for about 20 years, yet most people are still terrified of the idea of letting them loose on the streets. That’s because society has zero appetite for those driverless cars to make a mistake, he explained.

Kapoor noted that for hospitals to solve the clinician shortage through technology, AI will have to fully automate some clinical workflows, with no clinician in the loop. That raises a difficult question: which is worse, a dangerous shortage of clinicians or the risks of letting AI make clinical decisions without human oversight?

“I can’t answer that question. Society has to answer that question. But I think those are the types of conversations that we need to be having,” Kapoor remarked.

In his view, healthcare leaders should be talking more about their AI risk tolerance. When having these conversations, they should remember that AI isn't the only party capable of making mistakes; clinicians are susceptible to slip-ups too, Kapoor noted.

Prior to the pandemic, medical errors made by clinicians were the third-leading cause of death in the U.S., he pointed out. Considering this, AI may actually have the potential to reduce the number of clinical errors in the U.S. healthcare system.

“If we cut that number from several hundred thousand per year to half of that, but your family member was harmed by an algorithm, are you going to tolerate it? You can get justice when a human makes a mistake, but how do you do that when an algorithm makes a mistake?” Kapoor asked.

That’s the type of question that healthcare leaders should focus on hashing out instead of exchanging various buzzwords, he said.

Photo: metamorworks, Getty Images