AI’s impact on healthcare has grown significantly, yet biases in AI algorithms are widening the very gaps in care these tools were meant to close. Health leaders are being forced to weigh the financial incentive of operational efficiency against patients’ needs as they determine how to provide the most appropriate and cost-effective care possible.
While there is a significant need and opportunity to address post-acute spending with advanced AI tools that predict patient needs and readmission likelihood, intrinsic biases within these algorithms can ultimately cause more harm than good, with financial savings achieved at the expense of patients’ health.
Cost containment vs. patient needs
Many of these biases tie back to a core tension in value-based care: the drive to contain costs versus the needs of individual patients. Value-based models and Medicare Advantage plans are using predictive analytics to manage post-acute spending. The algorithms crunch mountains of data to determine a patient’s care plan, such as how many days of rehab a typical patient “should” need or how many home therapy visits are “enough.” Insurers tout this as personalized medicine, but I often see a one-size-fits-all mentality. The tools spit out an optimal length of stay or service level aimed at the average patient, which often does not match reality.
Frontline providers witness these conflicts frequently and often feel that utilization management algorithms act as a blunt instrument. Partnering with accountable care organizations (ACOs) and hospitals, I’ve repeatedly come across automated prior-authorization systems that deny things like an extra week of home nursing or a custom piece of medical equipment because the patient “doesn’t meet criteria.” In a value-based contract, there is pressure to reduce services that seem statistically excessive, but illnesses aren’t always average. I recall a cancer survivor with complications who exceeded the algorithm’s standard number of home therapy visits. The cost-containment logic would have cut her off; instead, our care coordinators fought to extend services and prevented what could have been a costly hospital readmission. Unfortunately, not every patient has an advocate to override the algorithm. Value-based care should never mean care denied when it’s legitimately needed, but without careful checks, algorithms can make exactly that mistake in the name of “optimization.”
Opaque decisions and care coordination challenges
For patients and families, one of the most maddening parts of all this is the opaqueness. When an AI formula decides to deny coverage, the people living with the consequences often have no idea why. They simply receive a dry denial letter that looks like every other one, filled with generic phrases such as “not medically necessary” or “services no longer required” and offering little to no detail about their specific case. For example, two of our patients in separate facilities received letters saying a medical director, no name or specialty given, had reviewed their case and concluded they were ready to go home; neither letter mentioned the very real conditions that made home unsafe. It’s as if the decision was made in a black box and only a vaguely worded verdict emerges. Oftentimes the algorithm’s report is never shared with patients at all, leaving them to guess at the scoring method while it runs quietly in the background, unseen and unexamined by those it affects. This lack of transparency makes it extremely hard for families to challenge or even understand denials.
The opacity doesn’t just hurt patients; it throws sand in the gears of care coordination. Hospitals and skilled nursing facilities (SNFs) struggle to plan transitions when coverage cut-offs come abruptly based on hidden criteria. This uncertainty leaves discharge planners without a proper plan for post-discharge services and SNFs blindsided by an insurer stopping payment while a patient still needs rehab. It creates tension between providers and payers and puts patients in the middle of a tug-of-war. Hospitals have also had to scramble to keep a patient longer or find alternative funding because an automated denial upended the original discharge plan. In many cases, the physicians and SNF care teams strongly disagree with the algorithm’s decision to end coverage because they know the patient isn’t ready. The result can be hurried discharges, hasty handoffs, and a higher risk of complications or readmission, exactly what good transitional care is supposed to prevent. When these AI-based coverage decisions are shrouded in secrecy, they erode trust and coordination. Providers are forced to waste time on appeals and workarounds instead of caring for patients. Families are often left in the dark until they’re suddenly hit with a denial and must scramble to arrange care on their own. Transparency is not a luxury here; it’s a necessity for safe, coordinated care.
Instilling fairness and transparency in algorithmic care decisions
Containing costs is important at every level of care and is a key piece of value-based care programs. However, it cannot come at the expense of patients. Beyond regulation, algorithm developers and healthcare organizations need to double down on auditing these tools for bias before they are fully rolled out. This includes examining outcomes by race, gender, and zip code, among other factors, and fixing any disparities that surface, as sketched below. Transparency is also a large piece of the puzzle. Insurers don’t need to publish proprietary formulas, but they should disclose the criteria used to approve or deny post-acute services. Patients and providers deserve to know whether decisions are based on clinical evidence, cost projections, or an AI algorithm. Additionally, hospitals and SNFs should not be kept in the dark about how long a patient’s post-acute care is likely to be covered. Even if an algorithm is used, its predictions should be shared so everyone can plan appropriately and flag concerns early if a prediction seems off. When it comes to care coordination, increased communication is key.
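To make that kind of audit concrete, here is a minimal sketch of what a disparity check could look like. It assumes a hypothetical export of algorithm-driven authorization decisions with self-reported demographic fields; the file name, column names, and five-point threshold are illustrative assumptions, not any insurer’s actual data or standard.

```python
import pandas as pd

# Minimal bias-audit sketch (hypothetical data): compare algorithmic denial
# rates across demographic groups in a file of post-acute authorization
# decisions with columns "denied" (0/1), "race", "gender", and "zip3".
decisions = pd.read_csv("postacute_decisions.csv")
overall = decisions["denied"].mean()

for attribute in ["race", "gender", "zip3"]:
    rates = (
        decisions.groupby(attribute)["denied"]
        .agg(denial_rate="mean", n="count")
        .sort_values("denial_rate", ascending=False)
    )
    print(f"\n{attribute}: overall denial rate {overall:.1%}")
    print(rates)
    # Flag any group whose denial rate exceeds the overall rate by more than
    # five percentage points; the threshold is illustrative, not a standard.
    flagged = rates[rates["denial_rate"] > overall + 0.05]
    if not flagged.empty:
        print(f"Potential disparity flagged for: {list(flagged.index)}")
```

A check like this is only a starting point; any flagged group still needs clinical review to determine whether the difference reflects legitimate medical factors or bias baked into the model.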
AI algorithms are tools: they were designed by humans and can be programmed to match company priorities, but they rarely have the full picture. As these tools continue to evolve, healthcare leaders must always place the patient first. That means keeping human checks in place to ensure patients still receive the care they need. At the end of the day, humans not only have more situational knowledge than technology, they also have the empathy and understanding to make better judgments than these tools ever will.
Photo: J Studios, Getty Images
Dr. Afzal is a visionary in healthcare innovation who has dedicated more than a decade to advancing value-based care models. As the co-founder and CEO of Puzzle Healthcare, he leads a nationally recognized company that specializes in post-acute care coordination and reducing hospital readmissions. Under his leadership, Puzzle Healthcare has garnered praise from several of the nation’s top healthcare systems and ACOs for its exceptional patient outcomes, improved care delivery, and effective reduction in readmission rates.
