At World Medical Innovation Forum, panelists agree AI can boost clinical trials

While it may not be possible for clinical trials to go completely virtual with AI, there are big cost and efficiency advantages to applying artificial intelligence to them.


Artificial intelligence is the latest buzzword in healthcare, with many evaluating the myriad ways it can be applied effectively. At the World Medical Innovation Forum in Boston on Monday, the conversation centered on how AI can reduce costs and improve efficiency in clinical trials.

Panelists discussed how AI can mine old clinical trial data for new purposes and virtualize parts of the clinical trial process, in some cases even entire control arms. However, challenges to applying AI in clinical trials include obtaining reliable underlying datasets and making systems transparent enough for regulators and payers.

One potential benefit of applying AI in clinical trials is its ability to select patients who are likely to stay on the study drug and remain in the study through follow-up.
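As a rough illustration of that idea, patient selection for retention can be framed as scoring enrollment candidates on factors linked to dropout. The sketch below is hypothetical: the feature names, weights, and threshold are invented for illustration and come from no panelist's actual system.

```python
# Hypothetical sketch: rank enrollment candidates by a crude retention score.
# Features, weights, and the threshold are invented, not from any real system.

CANDIDATES = [
    {"id": "P01", "prior_adherence": 0.95, "distance_km": 10, "comorbidities": 1},
    {"id": "P02", "prior_adherence": 0.40, "distance_km": 120, "comorbidities": 4},
    {"id": "P03", "prior_adherence": 0.80, "distance_km": 35, "comorbidities": 2},
]

def retention_score(p):
    """Crude linear score: higher means more likely to complete follow-up."""
    return (
        0.6 * p["prior_adherence"]     # past adherence as the strongest signal
        - 0.002 * p["distance_km"]     # long travel to the site predicts dropout
        - 0.05 * p["comorbidities"]    # competing health burdens
    )

def select_for_enrollment(candidates, threshold=0.3):
    """Keep candidates whose score clears the (arbitrary) threshold."""
    return [p["id"] for p in candidates if retention_score(p) >= threshold]

print(select_for_enrollment(CANDIDATES))  # ['P01', 'P03']
```

A production system would learn such weights from historical enrollment data rather than hand-picking them; the point is only that "patients likely to stay" becomes a ranking problem a model can attack.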

“Worse than a site that doesn’t enroll patients is a site that enrolls a lot of patients and those patients don’t stay in the study,” said Dr. Stephen Wiviott, executive director of the Clinical Trials Office at Partners HealthCare.

Of course, AI can also clean and analyze data, dramatically speeding up the process. Cleaning the data after a trial takes one to two months, but AI can do it in a day, said Joseph Scheeren, senior advisor for R&D at Bayer. Overall, Scheeren estimated that AI can cut 30 to 40 percent of the time required for a clinical trial. “In R&D, speed is everything,” he said.
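One way to picture the cleaning Scheeren describes is as automated consistency checks that run continuously instead of a month of manual review. The toy sketch below uses made-up field names and plausibility ranges; real trial cleaning involves far richer rules.

```python
# Toy sketch of automated trial-data cleaning: flag records that fail
# simple range checks. Field names and limits are invented for illustration.

RULES = {
    "age": (18, 90),            # eligibility window (years)
    "systolic_bp": (70, 250),   # plausible measurement range (mmHg)
    "weight_kg": (30, 250),
}

def flag_record(record):
    """Return the list of fields whose values are missing or implausible."""
    problems = []
    for field, (lo, hi) in RULES.items():
        value = record.get(field)
        if value is None or not (lo <= value <= hi):
            problems.append(field)
    return problems

records = [
    {"id": "R1", "age": 54, "systolic_bp": 128, "weight_kg": 81},
    {"id": "R2", "age": 54, "systolic_bp": 1280, "weight_kg": 81},  # likely entry error
]
flagged = {r["id"]: flag_record(r) for r in records}
print(flagged)  # {'R1': [], 'R2': ['systolic_bp']}
```

Running checks like these as data arrives, rather than after the trial closes, is where the claimed months-to-a-day compression would come from.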

When pressed by panel moderator Dr. Krishna Yeshwant, general partner at Google Ventures and a practicing physician, about how much money AI would save throughout the clinical trial process, Wiviott estimated it would cut costs by 90 percent.

“So much of this money is spent on humans checking other humans’ work,” he said, adding that AI is now ready to eliminate that.

But beyond helping to streamline clinical trial setup and double-checking, AI can simulate the entire control arm of a clinical trial using a previously collected dataset, said Dr. Amy Abernethy, chief medical and scientific officer at Flatiron Health, the New York-based company that Roche bought for $1.9 billion earlier this year.
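The core idea behind such a synthetic control arm is to stand in for randomized controls by pulling previously treated patients who would have met the trial's eligibility rules. The sketch below is illustrative only, with invented criteria and records, and is not Flatiron's actual method.

```python
# Illustrative sketch of building an external ("synthetic") control arm:
# filter a historical dataset down to patients who match the trial's
# eligibility criteria. Criteria and records are invented.

HISTORICAL = [
    {"id": "H1", "diagnosis": "NSCLC", "stage": 4, "ecog": 1, "prior_lines": 1},
    {"id": "H2", "diagnosis": "NSCLC", "stage": 2, "ecog": 0, "prior_lines": 0},
    {"id": "H3", "diagnosis": "NSCLC", "stage": 4, "ecog": 3, "prior_lines": 2},
    {"id": "H4", "diagnosis": "breast", "stage": 4, "ecog": 1, "prior_lines": 1},
]

def eligible(p):
    """Mirror the trial's inclusion criteria on the historical record."""
    return (
        p["diagnosis"] == "NSCLC"
        and p["stage"] == 4
        and p["ecog"] <= 2          # performance status good enough to enroll
        and p["prior_lines"] >= 1   # at least one prior line of therapy
    )

control_arm = [p["id"] for p in HISTORICAL if eligible(p)]
print(control_arm)  # ['H1']
```

In practice the hard part is not the filtering but ensuring the historical cohort's outcomes are comparable to what randomized controls would have shown, which is why, as the panelists note below, this works best where effects are large.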

This approach works in settings with large, predictable effect sizes, Wiviott said, such as immunotherapies for cancer, but not where effect sizes are smaller, as in public health problems like diabetes.

Sizable benefits of AI notwithstanding, panelists were not optimistic that it would ever be possible to run an entire clinical trial virtually with AI. As Ramesh Durvasula, head of IT and informatics for research labs at Eli Lilly, said, “we can’t virtualize a petri dish yet,” never mind a human body.

But they generally agreed that advances in AI would replace some portion of patients with virtual modeling, saving both time and money. It may be possible for AI to allow companies to get drugs approved with less medical evidence in humans, supplementing this by collecting evidence when the drug is on the market, per Scheeren. Colin Hill, CEO and cofounder of GNS Healthcare, believes we’re heading towards virtually simulating drugs down to the molecular level, and using randomized clinical trials to confirm results of virtual experiments.

Causal approaches to AI are promising compared to deep learning because they can pull out causal relationships, not just correlations, from observational studies.

To use AI, it’s critically important to have reliable underlying datasets, said Abernethy. Even within a dataset, the people applying the AI must have an understanding of where the data is likely to be reliable (for example, the timing of a key treatment decision) and unreliable (for example, areas where doctors are more likely to copy and paste from other notes).

To train AI models, data needs to be well annotated, and annotation remains a weak point.

“We don’t have enough data to train the model to prove that it’s a better model,” Durvasula said. “Overcoming this collection of data, standardizing of data, really is still the major bottleneck.”

Currently, oncology leads the way in reliable underlying datasets, Abernethy said. Hill agreed, saying that half of all clinical trials that GNS crunches data on are in oncology, followed by autoimmune diseases, then brain diseases.

Another important consideration for datasets, and the AI systems run on them, is that they be auditable and transparent to regulators like the FDA and EMA, which panelists characterized as forward-thinking about computational methods but still accustomed to paper trails. Payers are an increasingly relevant audience too, so processes need to be understandable to them as well.

Panelists were also keen on recycling pharmaceutical datasets, which several agreed are underused. With AI, companies can search old datasets for answers to new questions.

“Pharmaceutical companies are sitting on huge swathes of data,” said Dr. Jackie Hunter, CEO of BenevolentBio.

Photo: ANDRZEJ WOJCICKI, Getty Images

 
