The saying “nothing is free” usually points to a hidden, intangible cost like reputation or mental anguish. But in healthcare, the veiled costs of so-called “free” AI pilots are much, much more literal.
Recent headlines have painted a troubling picture of AI adoption. The Massachusetts Institute of Technology’s (MIT) recent State of AI in Business 2025 report, for example, found that 95 percent of generative AI pilots fail. MIT calls this the “GenAI Divide”: most companies rely on generic tools that can impress in a demo but collapse in real workflows, while only a few integrate AI deeply enough to make a meaningful, sustained impact.
Nowhere is this divide more evident than in healthcare. Every health system in the U.S. has been inundated with “free trials” from AI vendors. More often than not, it plays out like this: demos pique the interest of decision-makers, who greenlight their teams to dive in. That’s when organizational overhead begins to creep in, staff dedicates time to the pilot, and before long, opportunity costs accumulate. In 2022, Stanford reported that “free” models (ones that require custom data extracts or further training to be suitable for clinical use) can cost upward of $200,000, and still may not translate into clinical gains in the form of better care or lower costs.
Multiply that price tag across dozens of pilots, and the cost of failure can quickly balloon into the millions.
AI has been positioned over the past few years as healthcare’s savior. When these expensive experiments fail to deliver, trust in the technology erodes; every stalled or abandoned pilot reinforces the perception that AI is more hype than help. But the problem isn’t that AI lacks value. The American Medical Association, for example, has found that clinicians who have access to the right automation tools report lower levels of burnout.
When deployed thoughtfully, AI can reduce administrative burden, streamline communication, and meaningfully support clinician workflows and decision-making. Pilots are critical because they demonstrate whether AI tools can actually deliver these improvements in practice. But they must be implemented and measured with rigor. Not all AI is created equal; choosing the right tool for the right job is key, but more important is how leaders set the conditions for success once a tool is adopted. Without clear goals and shared accountability, AI pilots can quickly become exercises in hope rather than strategy.
That’s an expensive way to innovate. AI is powerful, but it requires structure to succeed. Three disciplines can reverse this trajectory.
Three AI disciplines
First, discipline in design. Before agreeing to yet another pilot, healthcare leaders must define who the tool is for, what problem it solves, when it should be used, and where it belongs in the workflow. Above all, leaders should ask why they need it. Without an answer to that question as a guiding principle, measurement becomes impossible and adoption is likely to lag – or fail altogether.
Second, discipline in outcomes. Every pilot should begin with a definition of what success looks like based on organizational priorities: a definition that is both specific and measurable. It might be reducing report turnaround time, lowering administrative burden, or improving patient access. An AI model designed to flag patients at risk for breast cancer and encourage follow-up, for example, would need to prove that it can accurately flag risk, get patients scheduled for critical follow-up care, and catch potential cancers earlier.
Finally, discipline in partnerships. The easy option with any solution is to default to the largest vendor, or the one already in place, with the broadest catalog. But size and scale alone don’t guarantee success; far from it. In fact, as MIT’s recent report points out, generic generative AI tools often fail precisely because they are not designed for the complexity of a specific workflow. In healthcare, those workflows are especially complex. The organizations that succeed will be those that choose partners who understand their domain, help define outcomes, and share accountability for results.
In other words, don’t pick the cheapest or largest solution. Pick the right one. Choose wrong, and you’re essentially running a self-developed project with all the cost and risk. Choose right, and you’re building a pathway to sustainable success.
AI in healthcare doesn’t fail because the technology is bad or broken. It fails because decision-makers jump in without discipline, frameworks, or the right partners. The hidden cost of “free” is too high to keep learning the same lesson.
Demetri Giannikopoulos is the Chief Innovation Officer at Rad AI, the leader in generative AI in healthcare. He has over 20 years of experience in healthcare technology, focused on advancing AI adoption in complex clinical settings, and has deep expertise in leveraging AI as a tool to help bridge the gap between regulatory requirements, innovative AI offerings, and the needs of providers. Demetri has contributed to national guidelines like BRIDGE, a framework designed to accelerate the adoption of AI in the healthcare industry, and serves as a workgroup member for the Coalition for Health AI. He also holds leading roles as a patient advocate as part of the ACR Patient & Family Centered Care Quality Experience Committee and as a Patient-Centered Outcomes Research Institute (PCORI) Ambassador.
