
How Are AI Companies Reacting to HHS’ New Transparency Requirements?

Last week, HHS finalized a new rule requiring healthcare AI developers to provide more data about their products to customers, which could aid providers in determining AI tools’ risks and effectiveness. Some AI leaders believe the new guardrails are a step in the right direction, while others are skeptical about whether the rules are necessary or will be effective.


The use of AI in healthcare inspires enthusiasm in some people, fear in others and both in many. In fact, a new survey from the American Medical Association showed that nearly half of physicians are equally excited and concerned about the introduction of AI into their field.

People’s reservations about healthcare AI stem largely from concerns that the technology lacks sufficient regulation and that those who use AI algorithms often don’t understand how they work. Last week, HHS finalized a new rule that seeks to address these concerns by establishing transparency requirements for the use of AI in healthcare settings. It is slated to go into effect by the end of 2024.

The aim of these new regulations is to mitigate bias and inaccuracy in the rapidly evolving AI landscape. Some leaders of companies developing healthcare AI tools believe the new guardrails are a step in the right direction, while others are skeptical about whether the rules are necessary or will be effective.

The finalized rule requires healthcare AI developers to provide more data about their products to customers, which could aid providers in determining AI tools’ risks and effectiveness. The rule is not only for AI models that are explicitly involved in clinical care — it also applies to tools that indirectly affect patient care, such as those that help with scheduling or supply chain management. 

Under the new rule, AI vendors must share information about how their software works and how it was developed. That means disclosing information about who funded their products’ development, which data was used to train the model, measures they used to prevent bias, how they validated the product, and which use cases the tool was designed for.

One healthcare AI leader — Ron Vianu, CEO of AI-enabled diagnostic technology company Covera Health — called the new regulations “phenomenal.”


“They will either dramatically improve the quality of AI companies out there as a whole or dramatically narrow down the market to top performers, weeding out those who don’t withstand the test,” he declared.

At the same time, if the metrics that AI companies use in their reports are not standardized, healthcare providers will have a difficult time comparing vendors and determining which tools are best to adopt, Vianu noted. He recommended that HHS standardize the metrics used in AI developers’ transparency reports.

Another executive in the healthcare AI space — Dave Latshaw, CEO of AI drug development startup BioPhy — said that the rule is “great for patients,” as it seeks to give them a clearer picture of the algorithms that are increasingly used in their care. However, the new regulations pose a challenge for companies developing AI-enabled healthcare products, as they will need to meet stricter transparency standards, he noted.

“Downstream this will likely escalate development costs and complexity, but it’s a necessary step towards ensuring safer and more effective health IT solutions,” Latshaw explained.

Additionally, AI companies need guidance from HHS on which elements of an algorithm should be disclosed in these reports, pointed out Brigham Hyde, CEO of Atropos Health, a company that uses AI to deliver insights to clinicians at the point of care.

Hyde applauded the rule but said details will matter when it comes to the reporting requirements — “both in terms of what will be useful and interpretable and also what will be feasible for algorithm developers without stifling innovation or damaging intellectual property development for industry.”

Some leaders in the healthcare AI world are decrying the new rule altogether. Leo Grady — former CEO of Paige.AI and current CEO of Jona, an AI-powered gut microbiome testing startup — said the regulations are “a terrible idea.”

“We already have a very effective organization that evaluates medical technologies for bias, safety and efficacy and puts a label on every product, including AI products — the FDA. There is zero added value of an additional label that is optional, nonuniform, non-evaluated, not enforced and only added to AI-based medical products — what about biased or unsafe non-AI medical products?” he said.

In Grady’s view, the finalized rule is, at best, redundant and confusing. At worst, he thinks it is “a huge time sink” that will slow down the pace at which vendors can deliver beneficial products to clinicians and patients.

Photo: Andrzej Wojcicki, Getty Images