ONC Chief: New AI Transparency Rule Should Drive Higher Quality Tools & More AI Adoption

Micky Tripathi, head of the Office of the National Coordinator for Health Information Technology (ONC), thinks his office's recent rule on AI transparency will spur greater adoption of AI tools in healthcare, as well as potentially drive higher quality AI-based products.

Micky Tripathi, head of the Office of the National Coordinator for Health Information Technology (ONC), believes his office's recent rule on AI transparency will empower both providers and tech developers.

“I think it’s going to spur more adoption of AI-based tools in healthcare delivery, and hopefully, it’s going to establish a sort of mechanism for driving higher quality AI-based tools,” he stated during an interview this month at the Reuters Digital Health conference in San Diego.

In December, the ONC finalized a new rule that established transparency requirements for the use of AI in healthcare settings. It requires healthcare AI developers to provide more data about their products to customers, which could help providers assess AI tools' risks and effectiveness.

Under the new rule, AI vendors must share information about how their software works and how it was developed. That means disclosing information about who funded their products’ development, which data was used to train the model, measures they used to prevent bias, how they validated the product, and which use cases the tool was designed for.

The rule is not only for AI models that are explicitly involved in clinical care — it also applies to tools that indirectly affect patient care, such as those that help with scheduling or supply chain management. It is slated to go into effect on the first day of 2025.

Many clinicians are apprehensive about using new AI models, Tripathi pointed out. Some view advanced forms of AI as uncharted territory and worry that these products could make serious mistakes.

By establishing this rule, the ONC is seeking to give providers more visibility into the AI tools that are available to them so they can make informed decisions about which solutions are the most reliable and which might be a good fit for their workflows, Tripathi explained. 

This information will be displayed in the EHR, making it accessible to clinicians who want to learn about the design, validation and use cases of the tools at their disposal, he added.

Tripathi also noted that the new rule gives healthcare AI developers an opportunity to differentiate themselves from the competition. 

He said he has heard from provider organizations that are planning to integrate developers' upcoming transparency disclosures into their broader AI governance strategies. Some of these providers have also already begun to require the same types of transparency disclosures from the third-party developers they are considering as partners, he added.

Photo: Natali_Mis, Getty Images