Artificial Intelligence, Legal, Policy

In the final days of the Trump administration, agencies clashed over how to regulate medical AI

The FDA published its first action plan last week for how it plans to regulate machine learning-based software as a medical device. To start, the agency said it will issue guidance on how changes to algorithms should be regulated as they “learn.”

In the final days of the Trump Administration, two agencies clashed over how AI tools should be regulated in healthcare. The Food and Drug Administration had just mapped out a plan for how it would regulate changes to AI-based medical software in the future, when the Department of Health and Human Services proposed that the FDA should cease to review some software tools altogether.

The sudden about-face took many by surprise and seemed to contradict the plans the FDA had outlined just days before.

The list of proposed exemptions included some common uses for AI in healthcare, such as software used to flag lesions suspicious for cancer, and radiological computer-assisted triage and notification software. It would also permanently exempt digital health tools used to treat psychiatric disorders, which the FDA had temporarily exempted in response to the pandemic.

To justify the change, HHS cited the cost of obtaining 510(k) clearance, in which companies must show their device is “substantially equivalent” to one the agency has already cleared, as well as a lack of reported adverse events.

But it’s unclear if the Biden Administration, including HHS Secretary nominee Xavier Becerra, will actually implement the proposed changes.


FDA sets the tone for future regulation

For its part, the FDA had detailed a five-part action plan on how it would regulate machine learning tools in healthcare going forward. It sought to tackle the knotty issues of how to make algorithms more transparent, how to evaluate them for bias, and how the agency would handle changes to algorithms after they had been implemented.

Bakul Patel, director of the FDA’s new Digital Health Center of Excellence, touted the plan as a way to realize the potential of these technologies while ensuring they are safe and effective.

One lingering issue it didn’t address was whether or not certain algorithms fall under the FDA’s purview. This had been a gray area even prior to HHS’ proposal, and some clinical decision support tools have been exempted under the 21st Century Cures Act.

“For software in general, there isn’t one clear overarching guidance saying this is when software’s regulated and when software is not regulated,” Michele Buenafe, a partner with Morgan Lewis, said in a phone interview.

The specifics of how exactly the FDA would achieve some of these goals, such as evaluating AI tools for bias, were also vague. The agency said it had been working with the University of California San Francisco, Stanford University, and Johns Hopkins University to develop methods to evaluate machine learning-based medical software.

“Because AI/ML systems are developed and trained using data from historical datasets, they are vulnerable to bias – and prone to mirroring biases present in the data,” the FDA’s action plan noted. “Health care delivery is known to vary by factors such as race, ethnicity, and socio-economic status; therefore, it is possible that biases present in our health care system may be inadvertently introduced into the algorithms.”
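To make that concern concrete, here is a minimal sketch of the kind of subgroup check a developer might run on a held-out test set. The data, column names, and metric choice are illustrative assumptions, not anything prescribed by the FDA’s plan.

```python
# Illustrative only: compare a model's sensitivity (recall) across demographic
# subgroups in a held-out test set. A large gap can flag bias inherited from
# historical training data. Column names and data are hypothetical.
import pandas as pd
from sklearn.metrics import recall_score

test_df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 1],                  # ground-truth labels
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],                  # model predictions
    "group":  ["A", "A", "A", "B", "B", "B", "B", "A"],  # demographic subgroup
})

for group, rows in test_df.groupby("group"):
    sensitivity = recall_score(rows["y_true"], rows["y_pred"])
    print(f"group {group}: sensitivity = {sensitivity:.2f}")
```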

Jvion, which built a system to identify patients at risk of an adverse event, uses broad datasets to try to avoid these pitfalls, chief medical officer Dr. John Frownfelter said. It also considers social determinants; for example, an hour-long commute on public transportation could be a risk factor for whether someone gets sick from Covid-19.

“While the FDA Oversight action plan is well intended, it remains to be seen whether the design of the details of the plan strikes the right balance,” Frownfelter wrote in an email. “The Action Plan hopefully will enable rather than stifle the rapid learning that clinical AI has the potential to provide.”


How to regulate systems that ‘learn’

One of the most interesting aspects touched on by the FDA was how it plans to handle machine learning tools that “learn” as they’re exposed to more data. In practice, most algorithms used in healthcare don’t work this way — they’re “locked,” meaning they can’t adapt over time.
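The difference can be shown with a toy sketch (purely illustrative, not drawn from any cleared device): a locked model is trained once and its parameters are frozen, while an adaptive one keeps updating its weights as new labeled cases arrive.

```python
# Toy contrast between a "locked" and an "adaptive" model (illustrative only).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(100, 4)), rng.integers(0, 2, size=100)

# Locked: fit once before clearance; parameters never change in the field.
locked = SGDClassifier(random_state=0).fit(X_train, y_train)

# Adaptive: keeps updating on new labeled cases after deployment --
# the behavior a regulatory framework would need to anticipate.
adaptive = SGDClassifier(random_state=0)
adaptive.partial_fit(X_train, y_train, classes=np.array([0, 1]))
X_new, y_new = rng.normal(size=(10, 4)), rng.integers(0, 2, size=10)
adaptive.partial_fit(X_new, y_new)
```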

The FDA said it plans to issue a draft guidance this year for a framework developers can use to get approval for future changes they anticipate making to an AI system. The FDA tested out this approach last year with Caption Health, a startup whose algorithm to help clinicians perform cardiac ultrasounds received a de novo clearance in February. Ironically, it would be exempted from FDA clearance under HHS’ proposal.

The AI tool that Caption built was designed to assist clinicians in capturing cardiac ultrasound images, which can be difficult to obtain, requiring users to tilt and move the transducer into a precise position to get the needed view. It provides real-time guidance to the user about how close they are to the optimal position.

“I really liked the concept in general of allowing companies to submit a scope of future changes that they anticipate making,” Sam Surette, Caption Health’s head of regulatory affairs and quality assurance, said in an interview. “They basically draw a boundary around where the algorithm is allowed to change and say that they’re comfortable with that and clear it as part of the device.”

Even with this clearance, Caption’s algorithm does not continuously “learn” onsite; each change is tested before it is rolled out to Caption’s users. While the idea of AI that updates continuously can be exciting, many healthcare companies haven’t yet found it beneficial enough to implement, Surette said.
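As a rough sketch of what that gating could look like in code, the function below signs off on a retrained model only if it still meets a fixed validation bar; the threshold, metric, and function names are hypothetical, not Caption’s actual process.

```python
# Hypothetical release gate: a retrained model ships only if it meets the
# performance floor the cleared version was validated against.
from sklearn.metrics import roc_auc_score

CLEARED_AUC_FLOOR = 0.90  # acceptance criterion from original validation (assumed)

def approve_update(candidate_model, X_holdout, y_holdout) -> bool:
    """Return True only if the candidate model meets the locked validation bar."""
    scores = candidate_model.predict_proba(X_holdout)[:, 1]
    return roc_auc_score(y_holdout, scores) >= CLEARED_AUC_FLOOR
```

The point of such a gate is that updates stay within a boundary agreed on up front, rather than drifting silently as the model retrains.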

“We haven’t crossed that Rubicon in terms of adaptive AI but this lays the groundwork,” he added.

Aside from its plans to build this framework, the rest of the FDA’s action plan was “amorphous,” Buenafe said.

“It’s unclear how it’s going to shake out, how it may impact developers of AI or machine learning technology, or patients who may be treated or diagnosed by this technology,” she said.


Photo credit: Pixtum, Getty Images


This article is featured in the Healthcare Docket newsletter, a partnership between Breaking Media publications MedCity News and Above the Law.

