“I don’t want to get political … ” Alan Minsk, an attorney, was forced to say several times during a panel discussion on the Food and Drug Administration’s use of artificial intelligence, held at the annual SXSW festival on Monday.
Yet the reality is clear. DOGE cuts resulted in the loss of roughly 20% of the agency’s jobs, and into that vacuum of knowledge and expertise, artificial intelligence tools are being implemented in the hope that they can speed up reviews, helping the agency essentially do more with less. The panel, hosted by marketing and communications firm Real Chemistry, was aptly titled “The Last Human at the FDA: AI on a Skeleton Crew.”
Will the gamble succeed? Only time will tell, though the panelists stressed that transparency between the healthcare industry and the FDA’s experts and reviewers — about how AI is being implemented by both companies and the agency — is key to establishing trust. For now, though, be prepared for frustration.
“So I’ve done this for 33 years, and I got to say that the morale of FDA is, what’s a polite word? Lousy,” declared Minsk, partner and chair of the food and drug practice at Arnall Golden Gregory LLP. “I would say the clients are very frustrated by [the] lack of, at times, lack of guidance, confusion, frustration.”
The level of frustration depends on which center within the FDA you are dealing with.
Which division has seen the least upheaval? Roughly six to 12 months after the worst of the DOGE cuts, the Center for Drug Evaluation and Research (CDER) is turning a corner and “stabilizing,” according to co-panelist Tala Fakhouri, vice president of regulatory consulting at Parexel, a clinical research organization. Fakhouri, formerly of CDER, noted that CDRH (the Center for Devices and Radiological Health) “has generally been kind of safe,” with Minsk interjecting that CDRH hasn’t seen quite the level of cuts that CDER has. CBER (the Center for Biologics Evaluation and Research), meanwhile, “is in a lot of trouble,” Fakhouri pointed out.
Minsk chimed in that CDRH saw certain cuts, but “they had to bring them back” because staff were evaluating “Musk’s device application.” This is apparently a reference to Elon Musk’s Neuralink brain interface system, which received the FDA’s breakthrough device designation, a status that helps shorten the timeline for bringing innovative medical technologies to market.
Now, where is the FDA with its use of AI? Leslie Isenegger, the panel’s moderator and head of client development at RC Resolve, part of Real Chemistry, pointed out that the agency has named Jeremy Walsh its first-ever chief AI officer, and that 70% of the agency now has access to a generative AI tool called Elsa. In December, the agency also qualified its first AI drug development tool. In other words, the agency is betting that AI will help it become more efficient.
But the current lack of institutional expertise may hamstring progress. Here’s what Fakhouri said:
“Anytime you’re trying to implement innovation at FDA or actually any regulatory agency, you need the policy staff, you need the experienced staff because they’ve seen it all and they can help de-risk. This is also important for AI. But when you lose [staff] from office of new drug policy, OND (Office of New Drugs), a good proportion of staff from the Office of Medical Policy initially — generics policy was also one of the offices impacted — you lose that connective tissue that can actually bring down the temperature when new innovations are implemented or when the agency is considering new guidance in an area that might be tricky, like moving to a one trial.”
She added that this gap in knowledge and expertise has real ramifications.
“There’s no magical large language model at the FDA. The magical thing at the FDA is the knowledge base, whether it’s data that the agency has, or the staff that the agency has. That’s where the magic happens. So when you lose a large proportion of your experienced staff, and then you’re supposed to build AI tools to help fill this gap, what you end up with is AI applications that can optimize certain parts of the process,” she said.
Fully autonomous AI making regulatory decisions, however, is still in the future.
“It’s kind of wild to say that regulatory decisions could be done with AI,” she said. “I think we’ll all know when the FDA is doing that because we’ll be doing it within our organizations, and it’s not there yet.”