
We can’t blame AI when humans mess up 

If we continue to apply the technology without widespread agreement on these basic premises, we could dissolve people’s trust before we even earn it.


People fear what they don’t understand, and the use of data – through rule-based artificial intelligence (AI) and machine-learning (ML) techniques – falls squarely into that category. When it comes to healthcare, that fear understandably gets ratcheted up, as people worry about computers making decisions about their health.

So, when I read about Stanford’s vaccine algorithm debacle, I was reminded just how important it is for those of us in healthcare to commit to educating our communities about these technologies and to working toward a consistent, trust-building discussion around data tech, especially when it comes to AI and ML.

The human element
It’s important to set the stage. As people stoke fear about technology, we need to recognize that humans make mistakes too. In fact, according to Johns Hopkins, more than 250,000 people in the United States die every year because of medical errors, making medical error the third leading cause of death.

Rule-based AI systems, like the one in the Stanford example, produce pre-defined outcomes based on a set of rules coded by humans. They are simple models that follow “if-then” logic: humans developed an “if X, then Y” formula for calculating who would receive the coronavirus vaccine at Stanford.
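
To make the pattern concrete, here is a minimal, hypothetical sketch of a rule-based priority score in Python. The rules, weights, and field names are invented for illustration; this is not Stanford’s actual algorithm:

```python
# Hypothetical "if X, then Y" rules for vaccine priority.
# All rules, weights, and fields are invented for illustration;
# this is NOT the actual Stanford algorithm.

def priority_score(employee: dict) -> int:
    score = 0
    if employee.get("age", 0) >= 65:       # if older, then more points
        score += 2
    if employee.get("patient_facing"):     # if patient-facing, then more points
        score += 3
    if employee.get("assigned_unit"):      # if tied to one unit, then more points;
        score += 2                         # rotating staff (e.g., residents) get none
    return score

staff = [
    {"name": "attending", "age": 58, "patient_facing": True, "assigned_unit": "ICU"},
    {"name": "resident",  "age": 29, "patient_facing": True, "assigned_unit": None},
]
for person in sorted(staff, key=priority_score, reverse=True):
    print(person["name"], priority_score(person))  # attending 5, resident 3
```

Every branch is a human judgment call; a quietly missing rule (here, credit for rotating staff) is enough to deprioritize an entire group.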

Industry-wide failure that starts with humans (and results in bias)
One upsetting and widely reported example of human-driven machine failure came to light in 2019: an algorithm used by many large healthcare systems was found to make Black patients substantially less likely than their white counterparts to receive important medical care. The algorithm automatically flags patients with complex medical needs, in an effort to predict which patients will benefit the most from extra assistance.

To make that prediction, the algorithm relies on cost data – how much money is spent on an individual patient. The problem: significantly fewer healthcare dollars are spent on Black patients than on white patients, which makes cost an inherently biased (if, I’m sure, unintentionally chosen) basis for the algorithm.
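
The failure mode is easy to see in a schematic example. Below is a Python illustration with fabricated numbers; it shows the general problem of ranking on a biased proxy label, not the vendor’s actual model:

```python
# Fabricated data: two patients with identical medical need, but
# unequal historical spending because of unequal access to care.
patients = [
    {"id": "patient_A", "true_need": 0.9, "historical_cost": 12_000},
    {"id": "patient_B", "true_need": 0.9, "historical_cost": 6_000},
]

# Ranking on the cost proxy puts patient_A well ahead of patient_B,
# even though their underlying need is identical.
by_cost = sorted(patients, key=lambda p: p["historical_cost"], reverse=True)
print([p["id"] for p in by_cost])  # ['patient_A', 'patient_B']

# Ranking on actual need shows a tie; the gap above was created
# entirely by the choice of label, not by the patients.
by_need = sorted(patients, key=lambda p: p["true_need"], reverse=True)
print([p["id"] for p in by_need])  # tie on need; cost data invented the difference
```

Swap the label and the “bias” disappears, which is exactly why the choice of training data deserves the same scrutiny as the model itself.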

These two examples both point to the important role humans play in data-driven technology in healthcare, and shine a light on the need to consider implicit biases and other inputs from the outset.

We need to ask whether AI or ML is right for each use case
I believe in the power of these technologies to do mind-blowing things when we have the right criteria and data identified. But that doesn’t mean they are appropriate for every task. We can’t paint with a broad brush, apply AI to everything, and expect positive outcomes; the technology is only as good as the data that powers it and the people who connect the data sources.

As practitioners, we need to be thoughtful about the application approach for every use case. For example, we are facing the most challenging vaccine distribution effort in history: getting a new vaccine into the arms of roughly 200 million Americans (i.e., 60 to 70 percent of the population, the threshold for herd immunity) in phases is not something for which we have a template, and therefore not something on which we should experiment with untested models.

We need to think and act strategically
A successful data-driven strategy is designed end-to-end: practitioners go beyond asking whether they can implement the technical details of an approach, be it a rule-based AI or ML model, and think through what will happen when they put the approach into practice. Apply that filter to the Stanford example and you start to realize that it’s possible – even likely – to miss important candidates (in this case, medical residents), ultimately overlooking them for the treatment (i.e., the vaccine). Had the approach been thoroughly thought through, the team would have realized it would not work for the task at hand.

We need to start with low-risk use cases 
As we look to leverage technology to make an impact in healthcare, we need to start with scenarios and use cases that will establish credibility and trust, without the risk of serious implications or negative consequences. For example, using ML to identify which patients are at higher risk for a certain disease, like breast cancer, and reaching out proactively to engage them with their provider carries very little risk.
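
As a sketch of what that low-risk posture can look like in code (assuming a hypothetical, already-trained and bias-audited risk model), the algorithm’s output only nominates patients for outreach; a clinician makes every care decision:

```python
# Hypothetical low-risk ML deployment: the model nominates patients
# for proactive outreach; it never decides who receives care.

def outreach_list(patients, risk_model, threshold=0.7):
    """Return patients whose predicted risk meets the threshold.

    risk_model is any callable mapping a patient record to a score in
    [0, 1]. Flagged patients get a screening reminder or a call from
    their provider's office; a clinician decides on any treatment.
    """
    return [p for p in patients if risk_model(p) >= threshold]

# Toy usage with a stand-in model (a real one would be trained on
# clinical data and validated for fairness before deployment).
toy_model = lambda p: p["risk_score"]
patients = [{"id": 1, "risk_score": 0.82}, {"id": 2, "risk_score": 0.35}]
print(outreach_list(patients, toy_model))  # [{'id': 1, 'risk_score': 0.82}]
```

The worst case of a wrong prediction here is an unnecessary reminder, not a denied vaccine or a missed treatment.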

There are opportunities in healthcare to leverage AI and ML to connect people to care – more equitably, effectively and swiftly – which is where we need to start. We should agree that, at least right now, it’s too risky to ask technology to determine who receives care.

We need to facilitate progress… thoughtfully
The reality is, healthcare is years behind in using AI and ML to put its data to work driving better health outcomes. Still, there is hope and great opportunity to apply this important technology to immediate use cases, e.g., proactive, preventive care. But just because we have the data and technology to leverage doesn’t mean we should put our foot on the gas pedal.

Instead of a conversation around ‘where technology should/shouldn’t be deployed in healthcare,’ we need to talk about ‘where to go fast vs. where to go slow.’ Anything on the clinical side that determines health outcomes should see technology integrated slowly and methodically. Every use case needs to be thought through and designed with bias and fairness in mind, which requires time and commitment.

While I am optimistic when I see healthcare making progress in its use of data to drive better outcomes, I am saddened when I see opportunities to build trust in the technology undermined. I applaud the desire to leverage AI and ML to drive outcomes, and I hope that, as an industry, we can come together to have this important conversation about how to use their power for good. If we continue to apply the technology without widespread agreement on these basic premises, we could dissolve people’s trust before we even earn it.

Photo: metamorworks, Getty Images

Joe Schmid

Joe Schmid is chief technology officer at SymphonyRM where he works with healthcare organizations to drive better healthcare outcomes through data-driven campaigns.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.
