
Navigating Healthcare’s New Era of Algorithmic Transparency

What EHR/EMR vendors need to know to comply with ONC’s HTI-1 final rule

The recently released Health Data, Technology, and Interoperability (HTI-1) Final Rule from the Office of the National Coordinator for Health IT (ONC) has introduced groundbreaking transparency requirements for artificial intelligence (AI) and predictive algorithms used in certified health IT systems. 

With ONC-certified health IT supporting the care delivered by more than 96% of hospitals and 78% of office-based physicians, this regulatory approach will have far-reaching effects on the healthcare industry.

As EHR/EMR vendors seek to comply with these new regulations, they must navigate uncharted and frequently confusing territory and confront the challenges posed by the complexity and opacity of powerful AI tools, including Large Language Models (LLMs).

The potential and challenges of Large Language Models (LLMs)

LLMs are a type of AI that can analyze vast amounts of data, such as unstructured clinical notes, to generate insights and recommendations. While LLMs have the potential to revolutionize predictive decision support in healthcare, their inherent complexity and “black box” nature make it difficult to understand how they arrive at their conclusions. This opacity poses significant challenges for EHR vendors relying on these models to comply with the transparency requirements of the HTI-1 Final Rule.  

Understanding the FAVES criteria

The HTI-1 Final Rule introduces the FAVES criteria (fairness, appropriateness, validity, effectiveness, and safety) as a framework for assessing the transparency and accountability of AI and predictive algorithms. EHR/EMR vendors must ensure that clinical users can access a consistent, baseline set of information about the algorithms used to support decision-making (a sketch of what such a disclosure might look like follows the list below). Vendors must demonstrate that their systems meet each of these criteria:

  • Fairness: Algorithms must be free from bias and discrimination, ensuring equitable treatment for all patients.
  • Appropriateness: Algorithms must be suitable for their intended use cases and respect patient privacy and autonomy.
  • Validity: Algorithms must be based on sound scientific principles and validated using rigorous testing and evaluation methods.
  • Effectiveness: Algorithms must demonstrate real-world effectiveness in improving patient outcomes and clinical decision-making.
  • Safety: Algorithms must be safe to use and accompanied by appropriate monitoring, reporting, and risk mitigation measures.
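As an illustration only, here is a minimal sketch of how a vendor might structure that baseline information for a predictive decision support tool, organized loosely around the FAVES dimensions. The interface and field names are hypothetical assumptions made for this article, not the source attributes defined in the rule itself.

```typescript
// Hypothetical sketch of a clinician-facing disclosure record for a
// predictive decision support intervention (DSI). Field names are
// illustrative assumptions, not the attribute list defined in HTI-1.

interface PredictiveDsiDisclosure {
  name: string;                      // human-readable name of the intervention
  developer: string;                 // organization responsible for the model
  intendedUse: string;               // appropriateness: intended purpose and care setting
  outOfScopeUses: string[];          // uses the developer has not validated
  trainingDataDescription: string;   // validity: data sources used to develop the model
  fairnessAssessment: string;        // fairness: subgroups evaluated and known bias findings
  validationSummary: string;         // validity/effectiveness: how performance was measured
  knownLimitations: string[];        // safety: known failure modes and cautions
  monitoringPlan: string;            // safety: how ongoing performance is tracked
  lastUpdated: string;               // ISO date of the most recent review
}

// Example entry an "algorithm details" panel might render for clinicians.
const sepsisRiskDisclosure: PredictiveDsiDisclosure = {
  name: "Inpatient Sepsis Risk Score (hypothetical)",
  developer: "Example EHR Vendor, Inc.",
  intendedUse: "Early warning for adult inpatients; supports, not replaces, clinical judgment",
  outOfScopeUses: ["Pediatric patients", "Outpatient or home-monitoring settings"],
  trainingDataDescription: "De-identified records from partner hospitals, 2018-2022",
  fairnessAssessment: "Performance compared across age, sex, race, and payer subgroups",
  validationSummary: "Retrospective validation plus a prospective silent-mode trial",
  knownLimitations: ["Reduced sensitivity when vital signs are charted infrequently"],
  monitoringPlan: "Quarterly drift review; alert override rates audited monthly",
  lastUpdated: "2024-06-01",
};
```

In practice, a disclosure like this would be surfaced wherever clinicians encounter the algorithm's output, so they can judge for themselves whether the tool is fair, appropriate, valid, effective, and safe for the patient in front of them.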

Evidence-based vs. predictive decision support

The HTI-1 Final Rule distinguishes between evidence-based decision support tools, such as diagnostic prompts and out-of-range lab alerts, and predictive decision support systems that rely on LLMs and other AI algorithms. While evidence-based tools are not the primary focus of the new regulations, predictive decision support systems are subject to stringent transparency requirements, reflecting their greater potential for harm if not properly validated and monitored.

Preparing for ONC certification criteria

To maintain certification and comply with the HTI-1 Final Rule, EHR/EMR vendors must closely monitor the development of the ONC certification criteria, expected to be released by the end of the year. Vendors should proactively assess their current and planned use of LLMs and other predictive algorithms, ensuring that they are prepared to provide detailed information on training data, potential biases, and decision-making processes. Failure to comply with these requirements could result in loss of certification and market share.
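As a rough, hypothetical illustration of what that preparation could look like, the sketch below checks an internal inventory entry for a predictive algorithm against a short list of documentation items before release. The inventory fields and the checklist are assumptions made for this example, not requirements drawn from the forthcoming certification criteria.

```typescript
// Hypothetical internal readiness check for a predictive-algorithm inventory
// entry. The fields and rules below are illustrative assumptions, not the
// ONC certification criteria themselves.

interface AlgorithmInventoryEntry {
  id: string;
  usesLlm: boolean;
  trainingDataDocumented: boolean;            // is training-data provenance written down?
  biasAssessmentCompleted: boolean;           // has a subgroup/bias review been performed?
  validationEvidenceAttached: boolean;        // is testing/validation evidence on file?
  clinicianFacingDisclosureDrafted: boolean;  // is the user-facing summary ready?
}

// Returns the documentation gaps that would need attention before release.
function readinessGaps(entry: AlgorithmInventoryEntry): string[] {
  const gaps: string[] = [];
  if (!entry.trainingDataDocumented) gaps.push("Document training data sources");
  if (!entry.biasAssessmentCompleted) gaps.push("Complete bias/subgroup assessment");
  if (!entry.validationEvidenceAttached) gaps.push("Attach validation evidence");
  if (!entry.clinicianFacingDisclosureDrafted) gaps.push("Draft clinician-facing disclosure");
  return gaps;
}

// Example: an LLM-based summarization feature still missing two items.
const noteSummarizer: AlgorithmInventoryEntry = {
  id: "note-summarizer-v2",
  usesLlm: true,
  trainingDataDocumented: true,
  biasAssessmentCompleted: false,
  validationEvidenceAttached: true,
  clinicianFacingDisclosureDrafted: false,
};

console.log(readinessGaps(noteSummarizer));
// => ["Complete bias/subgroup assessment", "Draft clinician-facing disclosure"]
```

Even a lightweight check like this makes gaps visible early, well before a certification deadline forces the question.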

The importance of collaboration and transparency

As the healthcare industry navigates this new landscape of algorithmic transparency, collaboration between EHR/EMR vendors, healthcare providers, and regulatory bodies will be essential. By working together to establish best practices, share knowledge, and address potential challenges, the industry can ensure that the benefits of AI and LLMs in healthcare are realized while prioritizing patient safety and trust. Healthcare providers also play a crucial role by giving feedback on the accuracy and usefulness of predictive decision support tools, helping to refine these systems over time.

The HTI-1 Final Rule represents a significant step forward in ensuring the responsible and ethical use of AI and predictive algorithms in healthcare. As the industry continues to evolve, EHR/EMR vendors that prioritize transparency, collaboration, and patient-centered innovation will be well-prepared to navigate the challenges and opportunities that lie ahead. By embracing algorithmic transparency and working together to establish best practices, the healthcare community can harness the power of AI to improve patient care and outcomes while maintaining the trust and confidence of patients and providers alike.


Dr. Jay Anders is Chief Medical Officer of Medicomp Systems. Dr. Anders supports product development, serving as a representative and voice for the physician and healthcare community that Medicomp's products serve. Prior to joining Medicomp, Dr. Anders served as Chief Medical Officer for McKesson Business Performance Services, where he was responsible for supporting development of clinical information systems for the organization. He was also instrumental in leading the first integration of Medicomp's Quippe Physician Documentation into an EHR. Dr. Anders spearheads Medicomp's clinical advisory board, working closely with doctors and nurses to ensure that all Medicomp products are developed based on user needs and preferences to enhance usability.