The healthcare industry has spent billions on automation over the past few decades, but for many organizations, AI hasn’t delivered the efficiency or financial returns they expected. Recent research from MIT found that 95% of organizations have reported zero return on investment from their AI programs thus far.
One of the key drivers of this problem is that many solutions and workflows are disconnected. Healthcare organizations have invested in automation without accountability, and have implemented AI systems that don’t have the context needed to perform reliably. Instead of reducing friction, these tools can actually end up adding administrative burden, while also making the organization vulnerable to life-impacting errors.
AI is a tool. Humans are accountable. But we can engineer accountability into AI systems by prioritizing data integrity, human oversight, and continuous learning. When AI is honest and acts as a connector in healthcare workflows, clinician time is freed up, accuracy is ensured, and revenue is protected.
Data integrity is key to accountable AI
To keep these systems honest, healthcare organizations must ensure their data is well-governed and well-contextualized. Today, many healthcare organizations are missing that context. When clinical and operational data live in separate point solutions or legacy EHRs that don’t speak to one another, AI agents can’t operate with the context needed to produce accurate and reliable outputs. Using an AI agent that’s operating on partial data is like driving with blinders on, and in healthcare, where every decision carries real consequences, guesswork isn’t an option.
Data interoperability is the starting point for accountable AI. When healthcare organizations unify data across point solutions, AI can act with full context, streamlining workflows, reducing administrative friction, and improving the patient experience in the process.
Balancing AI with human oversight
Successful integration of AI in healthcare requires the right balance between technology and human expertise, with agentic AI changing the level of human oversight needed. Autonomous systems can act proactively and manage complex processes, such as flagging a missing preventive care task or submitting a prior authorization request, but that does not mean human oversight is no longer needed. Human experts must remain involved as strategic supervisors and final decision makers, giving healthcare professionals time back for patient care and similar high-value work. By pairing AI’s analytical power with human expertise and empathy, healthcare organizations can create a system that empowers both patients and clinicians.
Strengthening AI’s judgment through continuous learning
The healthcare industry is constantly changing, with new regulations to keep up with, evolving clinical guidelines, and shifting patient expectations. An accountable AI tool adapts alongside the industry through continuous learning and feedback.
Continuous learning gives AI the real-world clinical, technical, and emotional context needed to make informed decisions. Staff can strengthen AI performance over time through feedback that reinforces appropriate and compliant AI outputs, helping to avoid the common pitfall of capability without context. For example, when using an AI medical coder, a human auditor should review the AI’s output and provide feedback to train the AI toward high accuracy. Continuous learning not only ensures accuracy, it can also make AI tools easier for clinicians to use. Through feedback, a clinician using an ambient listening scribe can train the AI to format clinical notes in their preferred style so that the AI fits more seamlessly into their workflow.
Continuous learning creates a valuable feedback loop that improves speed and quality. Ongoing feedback from clinicians and staff can strengthen AI’s judgment, enabling it to execute tasks faster and more reliably.
Accountability is the new AI metric
Keeping AI honest isn’t about slowing innovation; it’s about building systems that support clinicians, patients, and healthcare as a whole. The future of healthcare will be shaped by organizations that embrace AI-driven connected workflows while maintaining human expertise. Everyone benefits when AI is responsible, context-aware, and integrated. Organizations cut down on inefficiencies and protect revenue, patients experience smoother access to care and faster approvals, and clinicians have more time to do what they were trained to do: care for patients.
Ajai Sehgal serves as Chief AI Officer at IKS Health, leading the organization’s enterprise-wide AI vision and strategy to harness data, analytics, and advanced technologies for accelerating innovation, improving outcomes, and amplifying impact across the healthcare ecosystem. A seasoned leader with experience spanning startups to Fortune 100 companies, Ajai was most recently the inaugural Chief Data & Analytics Officer at Mayo Clinic, where he drove the use of over a century of clinical data to power breakthrough medical innovations and elevate patient care. He also served as Chair of Digital Technology at the Mayo Clinic’s Center for Digital Health.
Ajai’s global leadership experience includes senior technology roles at EagleView, Hootsuite, and The Chemistry Group, overseeing Data & Analytics, Software Engineering, IT, Security, and Operations. Earlier in his career, he served 16 years in the Royal Canadian Air Force before joining Microsoft, where he played a key role in founding and scaling Expedia into the world’s largest travel agency. A strong advocate for responsible AI innovation, Ajai continues to mentor and advise within the broader technology community.
This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.
