Would you blindly trust AI to make important decisions with personal, financial, safety, or security ramifications? If you are like most people, the answer is probably no. Instead, you would want to know how it arrives at those decisions first, consider its rationale, and then make your own decision based on that information.
This process, known as AI explainability, is key to unlocking trustworthy AI, or AI that is both reliable and ethical. As sensitive industries like healthcare continue to expand their use of AI, achieving trustworthiness and explainability in AI models is critical to ensuring patient safety. Without explainability, researchers cannot fully validate the output of an AI model and therefore cannot trust these models to support providers in high-stakes situations with patients. As hospitals continue to face staff shortages and provider burnout, the need for AI keeps growing to alleviate administrative burdens and support tasks like medical coding, ambient scribing, and clinical decision-making. But without proper AI explainability in place, patient safety remains at risk.
What is AI explainability?
As machine learning (ML) models become increasingly advanced, humans are left to understand the steps an algorithm takes to arrive at its result. In the healthcare industry, this means asking providers to retrace how an algorithm arrived at a potential diagnosis. Despite all their advancements and insight, most ML models remain "black boxes," meaning their calculation process is difficult or impossible to decipher or trace.
Enter explainability. While explainable AI — also known as XAI — is still an emerging concept that requires more consolidated and precise definitions, it largely refers to the idea that an ML model’s reasoning process can be explained in a way that makes sense to us as humans. Simply put, AI explainability sheds light on the process by which AI reaches its conclusions. This transparency fosters trust by allowing researchers and users to understand, validate, and refine AI models, especially when dealing with nuanced or changing data inputs.
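As a rough illustration of what "shedding light on the process" can mean in practice (not a depiction of any specific product or the author's tooling), the sketch below fits a small, inherently interpretable model on toy, made-up data and prints its learned weights so a human can see how each input pushes the output up or down. The feature names and values are hypothetical.

```python
# Minimal sketch of explainability via an inherently interpretable model:
# a logistic regression whose learned coefficients can be read directly.
# Feature names and data are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["white_blood_cell_count", "fatigue_score", "weight_loss_kg"]

# Toy, synthetic training data: each row is one hypothetical patient.
X = np.array([
    [12.5, 0.8, 4.0],
    [6.0,  0.2, 0.5],
    [15.0, 0.9, 6.0],
    [5.5,  0.1, 0.0],
])
y = np.array([1, 0, 1, 0])  # 1 = flagged for follow-up, 0 = not flagged

model = LogisticRegression().fit(X, y)

# Inspecting the coefficients is the simplest form of explanation:
# sign and magnitude show how each input influences the prediction.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```

Real clinical models are far more complex than this, which is exactly why dedicated explainability methods are needed, but the goal is the same: surface the relationship between inputs and output in terms a human can check.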
While AI has immense potential to revolutionize a host of industries, it is already making significant progress in healthcare, with investments in health AI soaring to a staggering $11 billion in 2024 alone. But in order for health systems to implement these new technologies, providers need to be able to trust their outputs rather than accept them blindly. AI researchers have identified explainability as a necessary facet of that trust, recognizing its ability to address emerging ethical and legal questions around AI and to help developers ensure that systems work as expected, and as promised.
The path to achieving explainability
In an effort to achieve trustworthy AI, many researchers have turned to a unique solution: using AI to explain AI. This method consists of having a second, surrogate AI model that is trained to explain why the first AI arrived at its output. While it may sound helpful to task another AI with that work, this method is both problematic and paradoxical, as it blindly trusts the decision-making process of both models without questioning their reasoning. One flawed system does not negate the other.
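For readers unfamiliar with the surrogate approach described above, here is a minimal, purely illustrative sketch of how it typically looks: a simple, readable model is fit to mimic a black-box model's predictions. The data, models, and feature names are all invented, and, as argued above, the surrogate only approximates the black box; nothing in this step validates whether either model's reasoning is actually correct.

```python
# Rough sketch of the "AI explaining AI" surrogate approach: a shallow,
# readable tree is trained to imitate a black-box model's predictions.
# Synthetic data and generic models; illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # synthetic inputs
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic labels

black_box = GradientBoostingClassifier().fit(X, y)

# Surrogate: trained on the black box's *outputs*, not on ground truth,
# so it can only restate what the black box does, not verify it.
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=["f0", "f1", "f2"]))
```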
Take, for example, an AI model that concludes a patient has leukemia and is validated by a second AI model based on the same inputs. At a glance, a provider might trust this decision given the patient's symptoms of weight loss, fatigue, and high white blood cell count. The AI has validated the AI, and the patient is left with a somber diagnosis. Case closed.
Herein lies the need for explainable AI. In this same scenario, if the provider had access to the AI's decision-making process and could locate which keywords it picked up on to conclude leukemia, they could see that the patient's bone marrow biopsy results were never actually recognized by the model. Factoring those results in, the provider recognizes that the patient has lymphoma, not leukemia.
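To make the scenario concrete, here is a hedged sketch of the kind of per-prediction check a provider would want: break a model's score for one hypothetical patient into per-feature contributions and flag inputs that had no influence, such as a biopsy result the model never actually used. The feature names, weights, and values are invented for illustration and do not represent a real diagnostic model.

```python
# Illustrative per-prediction attribution for a simple linear scoring model.
# A zero weight stands in for an input the model never learned to use.
# All names, weights, and values are hypothetical.
feature_names = ["wbc_count", "fatigue", "weight_loss", "bone_marrow_biopsy"]
weights       = [0.9, 0.4, 0.6, 0.0]    # 0.0 = the model ignores this input
patient_input = [15.0, 1.0, 6.0, 1.0]   # biopsy finding present, but unused

contributions = [w * x for w, x in zip(weights, patient_input)]
for name, c in zip(feature_names, contributions):
    flag = "  <-- no influence on the prediction" if c == 0 else ""
    print(f"{name}: {c:+.2f}{flag}")
```

Seeing that the biopsy contributed nothing to the score is exactly the signal that would prompt a provider to question the model's conclusion rather than accept it.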
This situation underscores the critical need for transparent and traceable decision-making processes in AI models. Relying on another AI model to explain the first simply compounds the potential for error. To ensure the safe and effective use of AI in healthcare, the industry must prioritize developing specialized, explainable models that provide healthcare professionals with clear insights into a model's reasoning. Only by leveraging these insights can providers confidently use AI to enhance patient care.
How explainability serves healthcare professionals
Beyond diagnoses, explainability has extensive importance across the healthcare industry, especially in identifying biases embedded in AI. Because AI does not have the necessary context or tools to understand nuance, AI models can regularly misinterpret data or jump to conclusions, reproducing inherent bias in their outputs. Take the case of the Framingham Heart Study, where participants' cardiovascular risk was scored disproportionately depending on their race. If an explainable AI model had been applied to the data, researchers might have been able to identify race as a biased input and adjust their logic to provide more accurate risk scores for participants.
Without explainability, providers waste valuable time trying to understand how AI arrived at a certain diagnosis or treatment. Any lack of transparency in the decision-making process can be incredibly dangerous, especially when AI models are prone to bias. Explainability, on the other hand, serves as a guide, showing the AI’s decision-making process. By highlighting what keywords, inputs, or factors impact the AI’s output, explainability enables researchers to better identify and rectify errors, leading to more accurate and equitable healthcare decisions.
What this means for AI
While AI is already being implemented in healthcare, it still has a long way to go. Recent incidents of AI tools fabricating medical conversations highlight the risks of unchecked AI in healthcare, potentially leading to dire consequences such as incorrect prescriptions or misdiagnoses. AI should augment, not replace, human provider expertise. Explainability empowers healthcare professionals to work in tandem with AI, ensuring that patients receive the most accurate and informed care.
AI explainability poses a unique challenge, but one with immense potential benefits for patients. By equipping providers with these AI models, we can create a world where medical decisions are not just data-driven, but also transparent and understandable, fostering a new era of trust and confidence in healthcare.
Lars Maaløe is co-founder and CTO of Corti. Maaløe holds an MS and a PhD in Machine Learning from the Technical University of Denmark. He was awarded PhD of the year by the Department for Applied Mathematics and Computer Science and has published at top machine learning venues such as ICML and NeurIPS. His primary research domain is semi-supervised and unsupervised machine learning. In the past, Maaløe has worked with companies such as Issuu and Apple.