
What’s needed to effectively bring AI to healthcare?


The idea that AI-enabled technology will change the way that healthcare is delivered and how clinicians make decisions and diagnoses is almost taken as a given. Over the past few years, a multitude of startups and products have been built on that very thesis.

There’s a very real need for improvements to clinical decision making. Diagnostic errors cause an estimated 40,000 to 80,000 deaths each year and untold millions in additional costs to the system.

AI-based tools could help medical staff enhance healthcare by guiding diagnosis, treatment plans and population health management. Still, there are plenty of unanswered questions on how best to use and integrate these technologies safely.

A new report from the Duke University Margolis Center for Health Policy seeks to shed light on the current state of AI technology in healthcare and how to effectively drive adoption of AI-based clinical support tools.

AI technology can be split into two main buckets: rules-based systems, which use previously validated information such as clinical guidelines to make recommendations, and machine learning-based systems, which learn from ingested data to make decisions or suggestions.
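For a rough illustration of the difference (a minimal sketch in Python, not drawn from the report, with made-up features, thresholds and weights rather than clinical guidance), a rules-based check applies a fixed guideline threshold, while a machine learning-based check scores patient data with a model learned from prior data:

# Illustrative sketch, not from the report: contrasting the two approaches.
# All thresholds, features and weights are hypothetical, not clinical guidance.
import math

def rules_based_flag(hba1c_percent: float) -> str:
    # Rules-based: apply a previously validated guideline threshold.
    if hba1c_percent >= 6.5:
        return "flag: meets guideline threshold for follow-up"
    return "no flag"

def ml_based_risk(features: list, weights: list, bias: float) -> float:
    # ML-based: score ingested patient data with a model learned from data.
    # A logistic regression stands in for whatever model was actually trained.
    score = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-score))

print(rules_based_flag(7.1))                                   # -> flag
print(f"{ml_based_risk([7.1, 62.0], [0.8, 0.03], -7.0):.0%}")  # -> roughly 63% risk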

Diagnostic support software can help guide clinicians and give primary care physicians the ability to provide specialty care.

Some areas in which this class of technology has already received regulatory clearance include diabetic retinopathy screening, stroke detection and treatment, and the diagnosis of wrist bone fractures.

The regulation of these sorts of AI-based decision guidance tools falls mainly to the FDA, which is in the process of developing a pre-certification regulatory pathway that better fits the iterative software development process.

The pre-cert program relies on an excellence appraisal of developers that can reliably create high-quality, safe software, streamlined pre-market review of these products and the identification of real-world performance analytics to help judge how these tools perform after they receive approval.

Importantly, the report highlights that the FDA needs to examine how software updates are verified and validated before being sent out to the field and what rises to the level of a new regulatory submission.

Another area where the FDA could improve is in more clearly delineating how much explanation of machine learning systems is needed for regulatory approval and how much must be made available to end users. These decisions will also likely shape how liability is distributed when these technologies fail.

“A balance will need to be struck such that allocation of liability ensures that entities and persons with the most knowledge of risks and best positions to mitigate risk are incentivized to do so as opposed to those with the least knowledge of risk and ability to mitigate,” the report states.

Building clinician trust in AI systems also involves better explanation and description of how the software works, which could include human-comprehensible explanations of how a model reaches a specific conclusion or an indication of the certainty or uncertainty of its recommendations.
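As a hedged illustration of the second idea, with invented probabilities and thresholds rather than anything specified in the report, a tool could report how confident the model is in a given recommendation and flag low-confidence cases for clinician review:

# Hypothetical sketch of surfacing certainty alongside a recommendation.
# The probabilities and thresholds are invented for illustration only.

def present_recommendation(probability: float, band=(0.35, 0.65)) -> str:
    # Turn a raw model probability into a clinician-facing message that
    # states the recommendation and how certain the model is about it.
    low, high = band
    if low <= probability <= high:
        return (f"Model estimate: {probability:.0%} risk. Confidence is low; "
                "clinician review recommended before acting.")
    label = "elevated risk" if probability > high else "low risk"
    return f"Model estimate: {probability:.0%} risk ({label}); confidence is high."

print(present_recommendation(0.82))
print(present_recommendation(0.48))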

Among the priorities the researchers identified for driving adoption were building up a clinical evidence base proving that AI software tools actually improve care, providing transparent labeling and information to help both patients and clinicians understand potential risk factors, and ensuring that AI systems themselves are ethically trained and able to protect patient privacy.

Demonstrating the value of these tools to health systems is key, and potential methodologies suggested by the report include verifying the accuracy of the product with data that reflects the provider’s specific patient population and employing front-line physicians to help design a product that fits into the clinician workflow.
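As a rough sketch of the first methodology (the numbers and tolerance below are assumptions, not the report’s protocol), a health system could compare a vendor’s reported accuracy against performance on a hold-out sample drawn from its own patients:

# Sketch under stated assumptions, not the report's methodology: check a
# vendor's reported accuracy against a hold-out sample of local patients.

def local_validation(predictions, labels, vendor_reported_accuracy):
    # Compare accuracy on local data with the vendor's reported figure.
    correct = sum(p == y for p, y in zip(predictions, labels))
    local_accuracy = correct / len(labels)
    print(f"Local accuracy:  {local_accuracy:.1%}")
    print(f"Vendor reported: {vendor_reported_accuracy:.1%}")
    if local_accuracy < vendor_reported_accuracy - 0.05:
        print("Performance drops on the local population; investigate before rollout.")

# Toy example with made-up outcomes for ten local patients.
local_validation(predictions=[1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
                 labels=[1, 0, 0, 1, 0, 1, 1, 0, 1, 0],
                 vendor_reported_accuracy=0.92)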

Another key driver of adoption is securing coverage and payment from health plans, which can be supported by positive performance and ROI outcomes. While AI-based diagnostic support tools used like existing diagnostic tests have a clear pathway for reimbursement, the report said that software that integrates more closely into EMRs and is used for all patients needs stronger guidelines from payers on the validation required for coverage.

The report also suggests that the FDA take a stronger role in labeling requirements for AI systems so that they display algorithmic safety and efficacy, as well as factors like input data requirements and applicable patient populations.

To drive adoption of AI-based systems in healthcare, developers also need to take greater responsibility for mitigating bias in data sets and for evaluating how well their algorithms adapt to different workflows and test sites.
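One hypothetical example of what such a bias check might look like (the subgroups and records below are fabricated for illustration): compare the model’s error rate across patient subgroups and investigate large gaps.

# Hedged sketch of one possible bias check: compare error rates across
# patient subgroups. Groups and records are fabricated for illustration.
from collections import defaultdict

def error_rate_by_group(records):
    # Each record holds the model's prediction, the true label and a subgroup;
    # return the error rate per subgroup.
    errors, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        errors[r["group"]] += r["prediction"] != r["label"]
    return {g: errors[g] / totals[g] for g in totals}

sample = [
    {"group": "site_A", "prediction": 1, "label": 1},
    {"group": "site_A", "prediction": 0, "label": 0},
    {"group": "site_B", "prediction": 1, "label": 0},
    {"group": "site_B", "prediction": 0, "label": 0},
]
print(error_rate_by_group(sample))  # a large gap between groups warrants review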

Patient privacy also plays a role in ethical data usage, and researchers identified potential solutions including increased security standards, the establishment of certified third-party data holders, regulatory limits on downstream uses of data and the integration of cybersecurity into a system’s initial design and development.

