MedCity Influencers

From FOMO to FOMU: A Framework for Getting AI Right

Healthcare is a highly regulated industry, and any use of AI within it involves some of the most sensitive data there is. To ensure you’re not putting yourself, your company, or patients at risk, consider an evaluation framework for any AI tool that covers these three things.

In healthcare, the AI mindset is changing. It’s no longer about the race to embrace a transformative technology; it’s about the pressing need to ensure the use of AI doesn’t blow up in everyone’s face. Or, at the very least, that it doesn’t result in wasted time, energy, and resources.

In other words, we’re moving on from FOMO, fear of missing out on AI’s potential, to FOMU – a fear of messing up.

As AI matures, as we move into its next phase, and as dedicated review boards become increasingly common, we’ve developed a three-part framework to help healthcare leaders ensure they get things right the first time.  


Evaluation: Bringing order to the wild west

Up to this point, there hasn’t been any standard guidance on how to evaluate AI tools. We’ve heard from customers and partners that many organizations are making up their own evaluation tactics and standards on the fly.

Healthcare is a highly regulated industry, and any use of AI within it involves some of the most sensitive data there is. To ensure you’re not putting yourself, your company, or patients at risk, we encourage an evaluation framework for any AI tool that considers:

  • Compliance: Are the AI tools you’re using compliant with security and privacy standards like HIPAA and SOC 2? Do they follow data retention protocols, so that only anonymized data – and not PHI – is used to directly train machine learning models, and so that all PHI is deleted by the AI tool within a specified window? Finally, do they support you in achieving your own compliance (e.g., do they help you meet your adverse event detection obligations)?
  • Safety: Are there human-in-the-loop guardrails to continuously evaluate performance? What metrics are tracked, and how large is the dataset of human evaluations? How frequently are performance metrics reviewed? Finally, if the tools use generative AI, what guardrails are in place to reduce the rate of hallucinations?
  • Ethics: It’s important for AI tools to be free of bias. Does the tool undergo regular bias testing? Can you verify it works equally well across different demographics of users?
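To make the ethics criterion concrete, here is a minimal sketch of the kind of bias check a review board might ask a vendor to demonstrate: comparing a model’s accuracy across demographic groups and flagging any group that falls too far below the best-performing one (a four-fifths-style rule of thumb). The data and function names here are purely illustrative, not drawn from any real tool.

```python
# Hypothetical bias check: compare model accuracy across demographic groups
# and flag any group below a parity threshold. All data is illustrative.

def accuracy_by_group(records):
    """records: list of (group, predicted, actual) tuples."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

def parity_gaps(rates, threshold=0.8):
    """Flag groups whose accuracy is below `threshold` times the best
    group's accuracy (a four-fifths-style rule of thumb)."""
    best = max(rates.values())
    return [g for g, rate in rates.items() if rate < threshold * best]

# Illustrative predictions for two demographic groups
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 0),
]
rates = accuracy_by_group(records)   # group_a: 0.75, group_b: 0.25
flagged = parity_gaps(rates)         # ["group_b"]
```

A real evaluation would use far larger samples, statistical significance testing, and fairness metrics beyond raw accuracy, but the principle is the same: measure per-group performance, compare, and investigate gaps.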

Implementation: Choosing the right entry point

Implementation goes hand-in-hand with evaluation, and developing and following a comprehensive rollout strategy is critical. Organizations must decide whether to start small or tackle large-scale deployments and how these decisions will impact ROI and scalability within their businesses.

Additionally, it’s crucial to consider the evolving nature of AI roles within the organization. While CTOs and CIOs have traditionally handled technology integrations, in many cases the rise of AI has necessitated specialized, AI-focused roles. These roles might focus on overseeing AI initiatives, for example, and ensuring that AI strategies align with overall business goals and ethical standards. 

Change management: Keeping your humans in the loop

More often than I’d like to admit, I’ve seen organizations get a little too excited about a new AI use case and roll it out poorly. The idea of transformation can be anxiety-inducing when jobs may be at stake, which is why a human-centric approach is essential.

Employees need to be educated on precisely how and why AI is being embraced, because if they don’t understand the “why,” they’re probably not going to be supportive. And that “why” needs to be not just about how the technology will benefit the business, but how it will benefit the people, too. Healthcare leaders should be thinking about human workers in tandem with any work AI may be able to take on. For a chance at success, you also need their buy-in.

The right way to make these decisions, and any of the decisions outlined above, depends on your specific organization and specific circumstances. But if you take away one thing from this post, it’s that the time to develop a framework for AI evaluation has arrived, to give your AI strategy the best chance to succeed.

Photo: steved_np3, Getty Images

Brian Haenni joined Infinitus in 2021. Prior to Infinitus, he spent over a decade working in patient access, on both the vendor and pharma sides, with leadership positions in strategy and business transformation, operations, and sales. Prior to his work with patient access, he worked with consulting and technology companies across the globe.

Brian holds a BA degree in International Business from University of Georgia. He lives in Charlotte with his wife and two children. His hobbies include trying to keep up with two active sons, mastering downward-facing dog, and sharing great food with friends and family.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.