
Overcoming the Challenges of Integrating AI in Care Management 

Providers should carefully evaluate how different AI models can be strategically integrated to optimize the care management lifecycle and address the complex needs of their patients and populations, while at the same time prioritizing patient safety and ethical considerations.

Over the past few years, healthcare organizations have started aggressively integrating different AI tools into their systems and processes. However, this complex endeavor can go wrong in many ways, so healthcare organizations need to be judicious in their choice of tools and vendors. 

As one CEO recently said, “People don’t sue computers; they sue doctors or the institution that the doctors work for.” This underscores the crucial need to fully understand the challenges of integrating AI into healthcare systems and care management. 

A lack of visibility into an accurate picture of a patient’s health, and the inability to drill down into underlying issues, can make decision-making challenging. AI can help with this process, but if poorly designed, it can become a barrier to achieving this critical capability. We therefore need to understand and closely monitor where things can go wrong, because these issues have real consequences for people.


Care management should be systematically integrated into any population health management strategy. Despite its complexity, care management holds immense potential for improvement through the integration of AI. By incorporating AI technologies directly into care management workflows and use cases, healthcare organizations can enhance patient outcomes and streamline healthcare delivery. This approach, which involves the simultaneous use of multiple AI models such as predictive analytics, prescriptive algorithms, natural language processing (NLP), and generative models, can revolutionize how we manage healthcare. 

AI for the sake of having AI is a recipe for trouble. When thoughtfully integrated into the appropriate workflows and processes, AI technologies can significantly enhance patient outcomes and streamline healthcare delivery. However, integration is not without its challenges: healthcare providers must weigh AI hallucinations, data quality, and model stability concerns. 

It is crucial that these AI capabilities are not just bolted onto the different steps of the care management workflow but strategically integrated into it. That integration should enhance risk identification, patient engagement, clinical decision-making, and outcomes reporting. Providers should carefully evaluate how different AI models can be combined to optimize the care management lifecycle and address the complex needs of their patients and populations. 

When we talk about AI in care management, four distinct types of AI technology come into play. Each plays a unique role in augmenting care management workflows, from risk prediction to patient engagement and outcomes analysis (a brief sketch after the list illustrates how they might fit together):


Predictive AI – Leveraging historical data, predictive AI models can forecast future events, predict trends, and anticipate potential outcomes, allowing organizations to address emerging risks and optimize patient care proactively. 

Prescriptive AI – Moving beyond predictions, prescriptive AI uses rules-based algorithms built on a constrained approach: well-defined, evidence-based rules recommend specific actions and interventions to achieve desired outcomes. This ability to constrain the model is a crucial property of prescriptive AI.

Natural language processing (NLP) – NLP analyzes and interprets the natural language used in healthcare settings, such as physician notes. By extracting and codifying valuable information from unstructured data, NLP enhances the ability to leverage clinical insights for improved care management. 

Generative AI – Generative AI models are probabilistic machines: they learn patterns from their training data and use them to predict the next word in a given sequence. These models are trained to mimic human communication and can assist in developing personalized patient engagement strategies and streamlining communication workflows. 
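
To make the division of labor concrete, here is a minimal, hypothetical Python sketch of how these four model types might be chained in a single care management workflow. Every function name, weight, and rule below is an illustrative placeholder, not a real vendor or library API.

    # Hypothetical sketch of a care management pipeline combining the four AI model types.
    # Every function below is an illustrative placeholder, not a real vendor or library API.

    def predict_risk(patient):
        """Predictive AI: estimate the probability of an adverse event from historical data."""
        # Toy stand-in; a real model would be trained on claims and EHR history.
        score = 0.02 * patient.get("age", 0) + 0.3 * patient.get("prior_admissions", 0)
        return min(score / 5.0, 1.0)

    def recommend_interventions(risk, conditions):
        """Prescriptive AI: constrained, rules-based recommendations tied to defined guidelines."""
        actions = []
        if risk > 0.6:
            actions.append("Enroll in high-risk care management program")
        if "diabetes" in conditions:
            actions.append("Schedule HbA1c check")
        return actions  # only actions defined in the rule set can ever be returned

    def extract_clinical_terms(note):
        """NLP: pull codifiable concepts out of unstructured notes (keyword matching as a stand-in)."""
        vocabulary = {"diabetes", "hypertension", "copd"}
        return [term for term in vocabulary if term in note.lower()]

    def draft_outreach_message(name, actions):
        """Generative AI: draft patient-facing text; in practice a language model would do this
        and a clinician would review the output before it is sent."""
        return f"Hi {name}, your care team recommends: " + "; ".join(actions) + "."

    patient = {"age": 67, "prior_admissions": 2, "name": "Alex"}
    note = "Patient reports fatigue; history of diabetes and hypertension."

    conditions = extract_clinical_terms(note)
    risk = predict_risk(patient)
    actions = recommend_interventions(risk, conditions)
    print(draft_outreach_message(patient["name"], actions))

The point of the sketch is the ordering: unstructured data is codified first, risk is predicted from it, a constrained rule set decides what to do about it, and only the last, patient-facing step leans on generative text.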

However, even with well-designed AI models, small perturbations in the data can lead to unexpected and potentially harmful outputs or “hallucinations.” 
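
To make that concrete, here is a small, hypothetical illustration; the weights and threshold are invented, not taken from any real system. A patient sitting near a model’s decision boundary can be flipped between risk categories by a change in one input far smaller than ordinary measurement noise.

    import math

    # Toy logistic risk model; the weights and threshold are invented for illustration.
    def risk_probability(age, systolic_bp):
        logit = -10.0 + 0.05 * age + 0.05 * systolic_bp
        return 1 / (1 + math.exp(-logit))

    THRESHOLD = 0.5
    for bp in (139.9, 140.1):  # a 0.2 mmHg difference in the recorded blood pressure
        p = risk_probability(age=60, systolic_bp=bp)
        label = "HIGH RISK" if p >= THRESHOLD else "low risk"
        print(f"BP={bp}: p={p:.3f} -> {label}")

This kind of sensitivity near a decision boundary is one reason model outputs need ongoing monitoring rather than one-off validation.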

AI hallucinations 

All generative AI models hallucinate. The question is how we minimize hallucinations and what their impact is. What generative AI models do is look at a lot of data and learn patterns, in the sense that word E is likely to follow words A, B, C, and D. Whether that sequence makes sense from a real-world point of view is irrelevant from the model’s perspective. A somewhat concerning example is the chatbot (SARAH) that the World Health Organization (WHO) released based on GPT-3.5, which is reported to have made numerous errors. In one instance, SARAH made up a list of non-existent clinics in San Francisco.
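
As a stripped-down illustration of that mechanism, the toy sampler below picks the next phrase purely by learned frequency. The probabilities are invented for this example, and the model has no way of knowing which continuation is actually true.

    import random

    # Invented next-phrase distribution for the prompt "The nearest clinic is on".
    # The model only knows which continuations tend to follow, not which address exists.
    next_phrase_probs = {
        "Main Street": 0.45,
        "Oak Avenue": 0.35,
        "a street that does not exist": 0.20,  # fluent but false continuation
    }

    random.seed(7)
    phrases, weights = zip(*next_phrase_probs.items())
    for _ in range(5):
        print("The nearest clinic is on", random.choices(phrases, weights=weights)[0])

Every output reads fluently; roughly one in five is simply wrong, and nothing in the sampling step distinguishes the two cases.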

The idea here is not to pick on the WHO, but even if these errors happen very infrequently, say once in every hundred thousand responses, that’s still a lot of errors, given how much usage these models might get.
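
To put that rate in perspective, here is a quick back-of-the-envelope calculation; the monthly volume is an assumption, since no usage figure is cited here.

    # Assumed figures for illustration only.
    error_rate = 1 / 100_000        # one error per hundred thousand responses, as in the example above
    monthly_responses = 10_000_000  # assumed monthly volume for a widely used public chatbot
    print(f"Expected erroneous responses per month: {error_rate * monthly_responses:.0f}")  # prints 100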

The takeaway is that healthcare organizations should be vigilant about data accuracy, monitor AI performance continuously, and address issues as they surface. 

There are three ways in which issues can arise when integrating AI into healthcare systems:

  • The training data used in AI models
  • The generative AI models themselves
  • The inherent challenges in AI design

For prescriptive AI models, hallucination and instability are not concerns, but they can be significant for predictive AI and especially for generative AI. 

Generative AI models are trained on large amounts of data, and that data can contain both accurate and inaccurate content, as well as various types of bias. As we have seen, these models predict patterns based on their training data without discerning the truth. While they provide good responses most of the time, they can produce falsehoods or biased output that can be hard to detect. We therefore have to be very aware of the potential pitfalls of using generative AI models, which brings us to the inherent limits of generative models. 

There are certain inherent challenges in the design of generative AI models, as the name itself suggests. These models “generate” information using probabilistic methods and are more like a Zoltar machine than an encyclopedia. However, most users tend to assume they are looking up information in an encyclopedia when using generative AI tools.

Healthcare organizations need to be very careful and should work with vendors who possess a deep understanding of clinical knowledge, patient data, and the appropriate application of AI technologies, so as to mitigate the risks and unintended consequences that may arise. They also need to understand the issues around hallucinations and ensure that their vendors have taken suitable measures to constrain the models and to minimize, if not eliminate, undesirable outcomes. 

As the healthcare industry evolves, embracing AI’s potential while prioritizing patient safety and ethical considerations will be the key to success. By strategically integrating AI into care management workflows and addressing the challenges head-on, healthcare providers can pave the way for a future where AI-augmented care delivers better outcomes for patients and providers alike. 


Dr. Mansoor Khan is the Chief Executive Officer of Persivia, Inc. He is a 20-year veteran of the software and healthcare industries and a serial entrepreneur who has been developing advanced technologies and cutting-edge software since the mid-90s. Over the years, he has led teams that have developed technology and applications for Disease Surveillance, Artificial Intelligence, Quality Management, Analytics, Care Management, and Cost and Utilization Management. These efforts have won numerous awards over the years, including best Decision Support System for ACOs (Blackbook) and Top 100 AI companies.
