
Why Providers Are Stalling When It Comes to Scaling AI

Without a clear picture of which solutions are working and which ones aren’t, it’s difficult for providers to scale AI across their enterprise. As hospitals transition their AI efforts from experimentation mode to the widespread adoption phase, experts agree that more rigorous, real-world evidence is needed.


Hospitals are poised to spend billions on AI in the coming years, but many remain ill-prepared to gauge the true return they are getting from these investments. 

Health system leaders say they are still figuring this process out and experimenting with different ways to measure AI’s effectiveness — ranging from hard metrics like patient outcomes to softer indicators like physician job satisfaction. 

Without a clear picture of which tools are working and which ones aren’t, it’s also difficult for hospitals to scale AI across their enterprise. The scaling process is further complicated by varying needs across specialties, inadequate technology infrastructures and the need for strong data governance. 

As health systems transition their AI efforts from experimentation mode to the widespread adoption phase, industry experts agree that more rigorous, real-world evidence is needed.

How hospitals are rethinking ROI

Health system leaders across the country are still determining how to best measure the success of AI tools, according to Kiran Mysore, chief data & analytics officer at Northern California-based health system Sutter Health.

“The challenge we have today is most pilots don’t think about ROI upfront. It’s ‘let’s go — just solve the problem and go do it.’ The danger there is that you go too far without having a conversation about AI value. You have to have that conversation up front as early as possible,” he said.


Mysore noted that hospital leaders have to calculate a rough estimate of a given tool’s ROI before it gets adopted, as this information can shape how large an investment hospital leadership is willing to make. If a hospital predicts that a piece of technology will generate a modest ROI, it probably won’t invest heavily upfront, but it might if the projected ROI were much higher, Mysore explained.
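The kind of rough, up-front estimate Mysore describes can be sketched with simple arithmetic. The figures below are hypothetical, not drawn from Sutter Health or any vendor:

```python
def projected_roi(annual_benefit: float, annual_cost: float) -> float:
    """Rough ROI estimate as a fraction: (benefit - cost) / cost."""
    return (annual_benefit - annual_cost) / annual_cost

# Hypothetical pilot: $750k in estimated yearly benefit (e.g., clinician
# time recovered) against $500k in yearly licensing and support costs.
roi = projected_roi(750_000, 500_000)
print(f"{roi:.0%}")  # 50%
```

Even a back-of-the-envelope calculation like this forces the "AI value" conversation Mysore wants to happen before a pilot starts, rather than after it has already gone too far.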

Take AI-powered ambient listening tools as an example.

“Does it save some time for the physicians? That’s hard to measure — because when a physician sees 10-12 patients in half a day, how do you actually measure that? The best thing we can measure is cognitive burden, but that is not a scientific measure. It’s just a physician feeling relieved and relaxed — and being able to have a conversation versus having to type something,” Mysore explained.

For some tools, qualitative metrics matter a great deal. 

Ambient listening is one of them: the healthcare industry is facing a severe shortage of clinicians amid a historic burnout crisis, so physicians feeling less stressed at work is an important measure to pay attention to, Mysore said.

Another health system executive — Scott Arnold, CIO and chief of innovation at Tampa General Hospital — agreed with Mysore. 

He noted that hospitals don’t typically track metrics like staff attrition rates or physicians’ overall job satisfaction in order to calculate an AI tool’s ROI. But to Arnold, these can be real indicators of a solution’s impact.

“Sure, there may not be a direct ROI figure that I can deliver up to the CFO, but I can point over to the attrition rate and how that’s gone into single digits because people are happy and they got a little time back at night. Now they’re not spending their night, you know, hand jamming notes into a system when we have AI tools to do it for them,” he explained.

For other technologies, quantitative metrics are more important. For instance, a hospital would closely track the average length of patient stays after adopting an AI tool that helps automate patient discharge processes.

Why scaling AI can be challenging

There is also a new set of challenges when it comes time to scale an AI solution that performed well during its pilot phase, Mysore of Sutter Health noted.

“Maybe you have a bunch of primary care physicians and you roll it out to them first, but when you roll it out to cardiologists or to nurses or to others, it’s going to be very different. You can’t necessarily use the same scaling functions, because primary care physicians ask a certain set of questions and they document a certain set of things. Cardiologists might do very different things, so it’s really important for us to tailor the AI use to the patient population and the physician population,” he remarked.

Without tailored deployment strategies, even the most promising AI tools risk stalling at the pilot phase, Mysore said.

Fundamentally, most health systems lack the infrastructure necessary to quickly scale AI solutions, added Tej Shah, managing director at Accenture. He likened this conundrum to “building the lab but not the garage.”

“In the survey that we did with 300 C-suite leaders across healthcare providers, we saw that folks are dipping their toe into this technology. They’re investing to build and pilot these AI solutions within their four walls, but we’re not really seeing folks invest in the infrastructure that they need in order to get to the value,” Shah declared.

To build this infrastructure, hospitals must begin with a strong digital core, which they achieve by moving their operations to the cloud and making sure their data is structured and accessible, Shah explained.

Structured, accessible data means that AI tools can deliver reliable insights, he pointed out. Shah said poor data quality can lead to inefficiencies, biased algorithms and, ultimately, missed opportunities for scaling AI solutions.

He noted that hospitals need to establish a robust governance structure around their digital tools as well, as this ensures that use cases are secure and ethical.

In addition to building the necessary tech infrastructure to scale AI, hospitals need to get serious about training their staff on how to use these tools.

“It’s about [providers] making the investment in their people to help them be able to use the technology in a way that makes sense and helping them also understand what the guardrails are today. There is this sort of jagged frontier of AI — it’s about helping the clinicians really understand and appreciate what that jagged frontier looks like, and what they can and should be using this technology for,” he explained.

As is often the case with technology, it’s “people and process” that truly determine the success of AI in healthcare.

There’s an evidence gap

There’s another key problem hospitals face when it comes to scaling AI: they have little external evidence to help them determine which solutions work best and should therefore be adopted fastest, pointed out Meg Barron, managing director at the Peterson Health Technology Institute (PHTI).

Barron’s organization is a nonprofit that is addressing this problem by publishing public research that assesses the clinical and economic impact of digital health tools. 

She emphasized the importance of prioritizing clinical effectiveness over engagement and user satisfaction in digital health evaluations. 

“For any given solution category, there’s often various evidence that can exist, but not all evidence is created equal, and there can often be bias and lack of quality in a lot of the research,” Barron stated.

Bias can seep into efficacy studies, particularly when vendors have financial or promotional incentives behind the research. Without rigorous standards and transparency in the evidence generation process, much of the available data on digital health tools may not actually reflect their true clinical impact, Barron cautioned.

She said PHTI aims to bridge this gap by systematically reviewing evidence with a focus on tools’ real-world data and performance — rather than relying solely on randomized controlled trials, which are not always reliable evidence measures for rapidly evolving digital health technologies.

Real-world evidence for healthcare AI tools isn’t exactly abundant, and providers can often struggle to access it, Barron noted.

A lot of the data vendors use to demonstrate the efficacy of their technology is derived from studies conducted in controlled environments, typically using simulated data that doesn’t come from real patients. For example, a report last year analyzed more than 500 studies on large language models in healthcare and found that only 5% of them were conducted using real-world patient data. 

As providers continue to scrutinize digital health vendors, it’s also important to investigate their claims about saving money, Barron said.

While cost reduction isn’t the primary goal of every AI tool, improving health outcomes with technology often leads to lower spending, she pointed out.

She recommended digital health technology assessments consider both clinical effectiveness and budget impact, particularly within the one-to-three-year contract cycles that are common in healthcare.

Through its research, PHTI has found that some digital solutions, such as virtual physical therapy, can save providers money and deliver clinical results similar to in-person care. 

“We found that virtual apps can make it easier for people to do physical therapy, which helps them heal more quickly and avoid other costs, such as surgery and pain medications. Often in other cases, technology can help expand beyond just one-to-one care to bring down overall delivery costs and also expand access. That’s nirvana; that’s the end goal — and our intent is to help to surface where these instances are happening so they can be scaled faster,” Barron declared.

On the other hand, PHTI’s research also showed that digital diabetes management tools have raised costs without superior outcomes — despite the fact that these vendors were touting money-saving capabilities.

As the healthcare industry pushes for faster AI adoption, Barron thinks a scrupulous eye and real-world evidence will be essential to guiding providers’ decisions on which technologies to scale. Without these components, hospitals risk investing in tools that promise a lot but ultimately fail to deliver on their clinical and cost-saving potential.

Photo: champc, Getty Images