Although medical imaging analysis has been one area where machine learning tools are making inroads, healthcare startups are paving the way for other applications. But for artificial intelligence to gain wider acceptance among clinicians, the way these algorithms arrive at their conclusions needs to be understandable, Fredrikson & Byron shareholder Ryan Johnson observes.
Looking at the legal considerations of implementing AI in healthcare, Johnson noted that malpractice risk looms large for providers and healthcare professionals. Physicians cannot blindly rely on AI clinical recommendations.
“Physicians still have primary responsibility for clinical decision-making, and the standard of care does not yet treat AI-based recommendations as superior to physician judgment,” Johnson said. However, he added, “At some point in the future, when AI clearly outperforms human diagnoses and treatment recommendations, it might be malpractice per se not to follow the AI recommendation.”
That raises the question: If the process by which an algorithm reaches its recommendation is not transparent, how can the clinician evaluate whether the recommendation is right?
Johnson noted that the FDA has been very supportive of digital health and has worked to provide rules that encourage, rather than stifle, innovation. Some software that drives clinical decisions is treated as a medical device subject to FDA jurisdiction, and the agency has emphasized that clinical decision support software should allow licensed professionals to independently review the basis for the software’s recommendations.
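Neither the FDA nor Johnson prescribes a specific technique for making recommendations reviewable. As a minimal sketch of the idea, assuming an interpretable model with invented feature names and synthetic data, per-feature contributions can be surfaced alongside each recommendation:

```python
# Hypothetical sketch only: the feature names and data are invented, and
# no specific method is endorsed by the FDA or described in this article.
# A simple interpretable model lets a clinician see which inputs drove a
# recommendation, i.e., "independently review the basis" for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["age", "bmi", "systolic_bp", "a1c"]

# Synthetic training data: 200 patients, 4 standardized features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
true_weights = np.array([0.5, 1.2, 0.8, 1.5])
y = (X @ true_weights + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For one patient, coefficient * feature value is that feature's
# additive contribution to the log-odds behind the recommendation.
patient = X[0]
contributions = model.coef_[0] * patient
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:12s} log-odds contribution: {c:+.2f}")
```

A clinician could then weigh an outsized contribution from a single input against the patient’s chart, rather than accepting the output on faith.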
The black box issue makes the use of AI even more challenging. Can the algorithm consistently reproduce results?
The risk for clinicians is that an algorithm relying on incomplete information, a fuzzy image, for example, could reach a faulty conclusion, and it is the provider who takes on the malpractice exposure.
Asked which healthcare AI applications he sees gaining ground, Johnson pointed to tools that can combat waste, fraud, and abuse.
“There’s a lot of potential there,” he said.
Johnson also drew attention to companies using machine learning, a form of AI, to analyze consumer data and predict health outcomes.
Carrot Health, a Minneapolis-based health tech startup, analyzes consumer data in the context of health outcomes, finding surprising correlations with clinical significance. For example, Carrot has observed a correlation between owning a minivan while having no children and obesity. It has also observed that dog owners are more likely to be active and have a reduced risk of obesity.
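Carrot Health has not published how it surfaces such signals. As a rough sketch of the general approach, assuming a synthetic table that joins a consumer attribute to a health outcome, a binary attribute can be screened with a cross-tab, an association test, and a correlation:

```python
# Rough illustration only: Carrot Health's actual methods are not public,
# and the column names and values below are synthetic.
import pandas as pd
from scipy.stats import chi2_contingency

# Toy table joining a consumer attribute to a health outcome
df = pd.DataFrame({
    "minivan_no_kids": [1, 1, 0, 0, 1, 0, 1, 0, 0, 1],
    "obese":           [1, 1, 0, 0, 1, 0, 0, 0, 1, 1],
})

# Cross-tabulate attribute vs. outcome and test for association
table = pd.crosstab(df["minivan_no_kids"], df["obese"])
chi2, p, _, _ = chi2_contingency(table)

# For two binary columns, the point-biserial correlation reduces to
# an ordinary Pearson correlation.
r = df["minivan_no_kids"].corr(df["obese"])

print(table)
print(f"correlation r = {r:.2f}, chi-square p = {p:.3f}")
```

At this toy scale the numbers mean nothing; in practice such a screen would run over large consumer datasets, and any surviving signal would still need clinical validation before informing care.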
Bind, a health tech startup focused on on-demand health insurance, enlists machine learning to do a better job of underwriting. Bind’s founder, Tony Miller, previously led Definity Health. The goal is to simplify insurance and help employees get the care they need by removing barriers like deductibles and co-insurance while showing them their costs in advance.
Although it is still in its early stages, machine learning in healthcare is making headway. As for the underlying question of whether healthcare is less susceptible than other sectors to bias being programmed into software, Johnson said he thinks healthcare offers ways to minimize the risk of bias in AI applications.
“Sophisticated AI developers can minimize the bias issue, but it is an issue that developers should consider to avoid skewing results or recommendations,” he said. “With respect to clinical software, there also needs to be transparency so clinicians can understand how machine learning tools reach their conclusions.”
Photo: Bill Oxford, Getty Images