Health inequities, racial disparities, and access barriers have long plagued the healthcare system. While digital solutions hold the potential to mitigate these challenges, improper use of these technologies, even when unintentional, can have the opposite effect: widening the gap in healthcare access and exacerbating disparities among vulnerable populations.
Nowhere is that concern more critical than with artificial intelligence (AI). AI advancements are revolutionizing the healthcare landscape and opening up new possibilities to enhance patient care and health outcomes, provide more personalized and meaningful experiences, and respond better to consumer needs.
However, AI also introduces the potential for bias, which in turn creates complex ethical concerns and high levels of consumer distrust. If organizations aren’t careful in their approach — and neglect critical concerns about ethical standards and safeguards — the risks of AI could outweigh the benefits.
The root causes of AI bias
AI bias often originates from two key sources: data and algorithms. It frequently stems from the hypotheses and objectives of a system's creators, and it may be entirely unintended. Data curation and algorithm development are both human activities, and the developers' frame of mind matters greatly in increasing or reducing bias.
AI technologies are only as good as the data that feeds them, and from data selection to representation, several factors can affect data quality and accuracy. Historical disparities and inequalities have resulted in vast data gaps and inaccuracies related to symptoms, treatment, and the experiences of marginalized communities. These issues can significantly affect AI's performance and lead to erroneous conclusions.
On the algorithm side, developers often have specific goals in mind when creating AI products that influence how algorithms are designed, how they function, and the outcomes they produce. Design and programming choices made during AI development can inject personal or institutional biases into the algorithm’s decision-making process.
In one highly publicized case, a widely used AI algorithm designed to gauge which patients needed extra medical care was found to be biased against Black patients, underestimating their needs compared to White patients and leading to fewer referrals for vital medical interventions.
When AI systems are trained on data that reflects these biases (or algorithms are flawed from the start), they can inadvertently learn and propagate them. For instance, AI-powered tools may fail to account for the fact that medical research has historically undersampled marginalized populations. This oversight can easily produce inaccurate or incomplete diagnosis and treatment recommendations for racial minorities, women, low-income populations, and other groups.
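To make that mechanism concrete, here is a minimal, hypothetical sketch in Python (scikit-learn on synthetic data, not any real clinical dataset or the algorithm from the case above). It shows how a model trained on data that undersamples one group, where need also presents differently in the recorded features, can quietly miss far more true need in that group:

```python
# Hypothetical sketch: synthetic data only, illustrating undersampling bias.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_group(n, w):
    """Simulate one group: two recorded features and a binary true-need label.
    `w` models group-specific differences in how need shows up in the data,
    a crude stand-in for historical gaps in how symptoms were recorded."""
    X = rng.normal(size=(n, 2))
    y = (X @ w + rng.normal(0, 0.5, size=n) > 0).astype(int)
    return X, y

# Group A dominates the training data; group B is undersampled, and need
# presents differently in its recorded features.
Xa, ya = make_group(5000, np.array([1.0, 1.0]))
Xb, yb = make_group(200, np.array([1.0, -1.0]))

model = LogisticRegression()
model.fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized samples from each group: the model looks
# fine overall but misses far more true need in the undersampled group.
for name, w in [("group A", np.array([1.0, 1.0])),
                ("group B", np.array([1.0, -1.0]))]:
    Xt, yt = make_group(2000, w)
    print(name, "recall of true need:", round(recall_score(yt, model.predict(Xt)), 3))
```

Aggregate accuracy can look acceptable while the undersampled group receives near-random recommendations, which is exactly why per-group evaluation matters.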
These instances of bias negatively impact care, perpetuate existing disparities, and undermine progress on health equity. But they have another side effect, one that's perhaps less overt yet equally debilitating: they erode trust in the healthcare system among the populations that are most vulnerable.
From early detection and diagnosis tools to personalized consumer messaging and information, AI provides organizations with opportunities to improve care, streamline operations, and innovate into the future. It’s no wonder nine in 10 healthcare leaders believe AI will assist in improving patients’ experiences. But when consumers, providers, or health organizations perceive AI as unreliable or biased, they are less likely to trust and use AI-driven solutions, and less likely to experience its vast benefits.
How organizations can build trust in AI
The vast majority of health organizations recognize the competitive importance of AI initiatives, and most are confident they are prepared to handle the potential risks.
However, research shows that AI bias is often more prevalent than executives are aware of — and your organization can’t afford to maintain a false sense of security when the stakes are so high. The following areas of improvement are critical to ensure your organization can benefit from AI without adding to inequities.
- Set standards and safeguards
To prevent bias and minimize other negative effects, it's critical to adhere to high ethical standards and implement rigorous safeguards when adopting digital tools. Implement best practices from trusted entities, like those established by the Coalition for Health AI.
Best practices may include, but are not limited to:
- Data quality: Adopting robust data quality, collection, and curation practices that ensure data used for AI is diverse, complete, accurate, and relevant
- Governance: Implementing algorithm governance structures to monitor AI outcomes and detect biases
- Audits: Conducting regular audits to identify and rectify bias in outcomes
- Pattern matching: Investing in pattern-matching capabilities that can recognize bias patterns in AI outcomes to aid in early detection and mitigation
- Manual expertise: Deploying trained experts who can manually oversee AI results to ensure they align with ethical standards
- Assistive technology: Using AI as assistive technology, analyzing its effectiveness, identifying areas of improvement, and then scaling tools up before AI technology interfaces with consumers
Most importantly, verify the impact of AI on patient outcomes at frequent intervals, seek evidence of bias through analysis, and correct data curation or algorithms to reduce the effects of bias.
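As one illustration of what that recurring analysis might look like, here is a minimal sketch of an outcome audit, assuming model decisions can be joined to demographic attributes. The column names, data, and 80% threshold are hypothetical, not a prescribed standard:

```python
# Hypothetical sketch of a recurring bias audit over AI referral decisions.
import pandas as pd

def audit_outcome_rates(df: pd.DataFrame, group_col: str = "group",
                        flag_col: str = "referred") -> pd.DataFrame:
    """For each group, compute how often the AI flagged members for extra
    care, plus each rate's ratio to the highest-rate group (a
    disparate-impact-style check)."""
    rates = df.groupby(group_col)[flag_col].mean()
    report = rates.to_frame("referral_rate")
    report["ratio_to_max"] = report["referral_rate"] / report["referral_rate"].max()
    # Illustrative threshold only; a governance body should set its own.
    report["needs_review"] = report["ratio_to_max"] < 0.8
    return report

# Hypothetical example: model referral decisions joined to demographics.
decisions = pd.DataFrame({
    "group":    ["A"] * 5 + ["B"] * 5,
    "referred": [1, 1, 0, 1, 1, 0, 0, 1, 0, 0],
})
print(audit_outcome_rates(decisions))
```

A check like this only helps if it runs on a schedule and its findings feed back into data curation and algorithm review.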
- Build trust and transparency
Successful AI adoption requires building a strong foundation of trust and transparency with consumers. These efforts ensure your organization acts responsibly and takes the necessary steps to mitigate potential bias while enabling consumers to understand how your organization uses AI tools.
To start, foster greater transparency and openness about how data is used in AI tools, how it’s collected, and the purpose behind such practices. When consumers understand the reasoning behind your decisions, they are more likely to trust and follow them.
Likewise, do your due diligence to ensure that all outputs from AI systems come from known and trusted sources. The behavioral science principle known as authority bias underscores the notion that when messages come from trusted experts or sources, consumers are more likely to trust and act on the guidance provided.
- Add value and personalization
Healthcare happens in the context of a relationship — and the best way your digital operations can build strong, trusting relationships with consumers is by offering meaningful, personalized experiences. It’s an area in which most organizations could use some help: Three-quarters of consumers wish their healthcare experiences were more personalized.
Fortunately, AI can help organizations achieve this at scale. By analyzing large data sets and recognizing patterns, AI can create personalized experiences, provide valuable information, and offer helpful recommendations. For instance, AI-powered solutions can analyze a consumer’s data and health history to recommend appropriate actions and resources, such as providing relevant education resources on heart health, detailing a customized diabetes management plan, or helping someone locate and book an appointment with a specialist.
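As a deliberately simplified illustration, the sketch below hard-codes the kind of history-to-outreach mapping described above; in practice, an AI-driven engagement tool would learn or continually refine these mappings from data, and every name and rule here is hypothetical:

```python
# Hypothetical, rule-based stand-in for learned personalization logic.
from dataclasses import dataclass, field

@dataclass
class ConsumerProfile:
    """Simplified view of a consumer's health history."""
    conditions: set[str] = field(default_factory=set)
    overdue_screenings: set[str] = field(default_factory=set)

def recommend_next_actions(profile: ConsumerProfile) -> list[str]:
    """Map a profile to personalized outreach actions."""
    actions = []
    if "hypertension" in profile.conditions:
        actions.append("Send heart-health education resources")
    if "diabetes" in profile.conditions:
        actions.append("Share a customized diabetes management plan")
    for screening in sorted(profile.overdue_screenings):
        actions.append(f"Help schedule an overdue {screening}")
    return actions

profile = ConsumerProfile(conditions={"diabetes"},
                          overdue_screenings={"retinal exam"})
print(recommend_next_actions(profile))
```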
By meeting consumer needs and providing tangible value, AI tools can help alleviate the very concerns consumers may have about the technology and demonstrate the benefits it offers for their care.
Ethical AI starts with a plan
AI puts a vast amount of power in the hands of healthcare organizations. Like any digital tool, it has the potential to improve healthcare, and it can also introduce risks that could prove detrimental to patient outcomes and the overall integrity of the healthcare system.
To harness the best parts of AI — and avoid its worst possible outcomes — you need an AI strategy that not only includes technical implementation tactics but also prioritizes efforts to minimize bias, address ethical considerations, and build consumer trust and confidence.
AI is here to stay and offers great promise to accelerate innovation in healthcare.
By prioritizing these responsibilities, you can achieve the full promise of healthcare’s digital transformation: a healthier, more equitable future.
Sanjeev Sawai, Chief Product and Technology Officer at mPulse Mobile, has a passion for building innovative software products. For the last decade and a half, he has led product and technology teams to deliver market-leading products. mPulse is a confluence of Sanjeev's recent experience in healthcare and a dozen years of past work in conversational AI and speech applications. Sanjeev has brought to market enterprise-grade and SaaS-scale software products in a variety of markets, most notably telecommunications, financial services, and healthcare. He has led the development of market-leading products in the voice solutions market and built embedded systems for defense applications. Previously, Sanjeev held leadership positions in product development at HealthEdge, Altisource, Interactions, Envox and Brooktrout.