Understanding how artificial intelligence (AI) fits into the future of a healthcare organization is a challenge unlike any most leaders have faced, and one few are prepared for. Both clinicians and administrators are inundated with sales pitches claiming that AI-enabled products will make patients healthier, increase revenue, cut costs, reduce employee burnout, and more. Meanwhile, the rank and file may be perplexed by these new capabilities, or even worried that AI might take their jobs. And to add another layer of complexity, attorneys are already watching the legal landscape to see who gets held responsible when AI makes mistakes.
Without question, AI holds tremendous potential for transforming health care. It can quickly sort through thousands of medical images to find tumors, analyze mountains of data to identify patients at risk of crisis, help rationalize complicated scheduling for surgical time, and interpret patient record data to measure quality of care.
But as AI begins to show up everywhere, from net-new applications to upgrades of existing ones, what are the best practices for incorporating it into daily operations and clinical care? How can an organization make sure its data is of high enough quality to meet AI’s requirements? How can it train staff to interact effectively with such applications, using their capabilities to ease workloads and improve quality and performance? What if the applications behave in unexpected ways? What’s the best balance between AI and human judgment?
These are just a few of the questions every healthcare leader will face, and they don’t lend themselves to a quick online search or easily uncovered answers. As educators, we have seen the value of even a brief formal study program – as little as four to five online hours per week for six to eight weeks – in getting administrative and clinical leaders up to speed on these complex topics. Through these learning opportunities, they get enough immersive experience in AI programming to start understanding its potential (and, even more important, its current limitations). We’ve used tools like TensorFlow Playground, which gives users a taste of how neural networks work, and ChatGPT to help our learners develop their own AI models.
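To give a flavor of what “hands-on” means here, below is a minimal sketch of the kind of first exercise such a program might assign. The toy dataset and off-the-shelf model are illustrative stand-ins, not material from any specific curriculum: a few lines of Python are enough to train a small neural network, in the spirit of TensorFlow Playground, and check how it performs on data it has never seen.

```python
# A minimal sketch of a first hands-on exercise (illustrative only): train a
# tiny neural network, TensorFlow Playground-style, on a toy dataset.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A toy two-feature dataset with a nonlinear boundary, much like the
# Playground's demo datasets
X, y = make_circles(n_samples=500, noise=0.1, factor=0.4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One small hidden layer is enough to learn the circular boundary
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Score on data the network never saw during training
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Even an exercise this small gets leaders comfortable with the vocabulary – training data, held-out data, accuracy – that dominates vendor conversations.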
Through these learning opportunities, students can pose their pressing questions to expert instructors and compare notes with their peers to see how other organizations are coping with these issues. There’s no substitute for the kind of familiarity these programs build, and they can pay for themselves many times over by helping institutions head off costly and risky mistakes and reap the full value of their AI investments.
Learning to ask the right questions
Most organizations will not be building their own AI applications. Instead, AI will arrive embedded in products that leaders must evaluate when deciding whether to buy and how to implement. A formal program of AI study can help them grasp the basic concepts, ask the right questions, and avoid being snowed by tech talk.
Consider this scenario: A vendor claims its AI system can detect the earliest signs of lung cancer in screening X-rays with a 95% success rate. Such a capability could replace low-dose CT, currently the standard of care, with chest X-rays at half the cost while improving accuracy. Sounds amazing, but let’s ask some questions.
- Question #1: “Did radiologists review the X-rays your AI learned from?” In other words, have humans validated the data used to train the program?
- Question #2: “Is that 95% success rate from training datasets or test datasets?” In other words, can the AI accurately apply what it has learned in its training to X-rays it hasn’t seen before? Or does it suffer from what AI experts call “overfitting,” so that it’s only right, say, 80% of the time when it sees new data? (This type of question may alert the salesperson that the usual jargon-laced pitch will not work on this customer; the sketch after this list shows how the gap surfaces.)
- Question #3: “Does your AI rely on information that we don’t normally have?” In other words, does it need perfect and complete data on each patient to function as promised? Or can it deal with the real world? (When it can’t, the underlying problem is often what AI experts call “data leakage,” sometimes known as training leakage: the model learned from information that won’t be available in practice.)
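For leaders who want to see these failure modes for themselves, here is a minimal sketch in Python. It uses synthetic data and an off-the-shelf decision tree as illustrative stand-ins – nothing about the hypothetical vendor’s system – to show how a memorized training set (Question #2) and a leaked feature (Question #3) both produce deceptively high accuracy numbers.

```python
# A minimal sketch (synthetic data, off-the-shelf models -- not the vendor's
# system) of the two failure modes behind Questions #2 and #3.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for "image features plus a cancer/no-cancer label,"
# with 20% label noise so the problem isn't perfectly learnable
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Overfitting (Question #2): an unpruned tree memorizes its training data,
# so training accuracy looks stellar while test accuracy falls far short
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(f"Training accuracy: {model.score(X_train, y_train):.2f}")  # ~1.00
print(f"Test accuracy:     {model.score(X_test, y_test):.2f}")    # far lower

# Data leakage (Question #3): append a column that is really a noisy copy of
# the label -- a stand-in for information only known after diagnosis. Scores
# look spectacular, but that "feature" won't exist for new patients.
rng = np.random.default_rng(0)
X_leak = np.column_stack([X, y + rng.normal(0, 0.1, size=len(y))])
Xl_train, Xl_test, yl_train, yl_test = train_test_split(
    X_leak, y, test_size=0.3, random_state=0)
leaky = DecisionTreeClassifier(random_state=0).fit(Xl_train, yl_train)
print(f"With leaked column: {leaky.score(Xl_test, yl_test):.2f}")  # near 1.00
```

The lesson for the buyer is the same in both cases: insist on performance figures measured on data the model has never seen, collected the way your organization actually collects it.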
Of course, the acquisition team will ask the same questions they’d ask about any software: “Who’s using it now, what’s their return on investment, and may I have their contact information?” And, when they contact the reference sites, they can use their AI studies to evaluate other organizations’ experiences in light of their own organization’s requirements.
A formal understanding of AI can also help leaders actively plan for adoption, rather than simply giving a thumbs-up or thumbs-down to options presented by vendors. They can launch searches for specific capabilities that address the organization’s business and clinical challenges, and identify best-in-class offerings when multiple vendors provide those capabilities. They can also judge when it’s in the organization’s interest to invest directly in AI development or to serve as a test site for a new application.
Governance for AI
As AI slips into all kinds of administrative and clinical applications, it’s easy to adopt it piecemeal, sometimes without being fully aware of where it is or how it’s functioning. An unprepared organization risks having to answer critical questions on the fly. When an AI system’s treatment recommendation differs from a human physician’s, what policy determines the best path for the patient and the least liability risk for the organization? Could AI-based recommendations be discriminating against certain types of patients – perhaps recommending “wait and see” when more aggressive care is needed, because the AI was trained on datasets that didn’t include enough of those patients? Is the AI setting priorities for resources in ways that aren’t consistent with the organization’s needs and goals?
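One concrete governance habit follows directly from that last set of questions: audit a model’s accuracy by patient subgroup, not just overall. The sketch below uses simulated audit records – hypothetical groups, outcomes, and model behavior, with no real model or patient data – to show how a system can look excellent in aggregate while quietly underperforming for an underrepresented group.

```python
# A minimal sketch of a subgroup audit, using simulated records (hypothetical
# groups, outcomes, and model behavior -- no real model or patient data).
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Group B is underrepresented, mirroring a skewed training population
group = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])

# Simulate a model that is right ~95% of the time for group A, ~75% for B
correct = np.where(group == "A", rng.random(n) < 0.95, rng.random(n) < 0.75)

print(f"Overall accuracy: {correct.mean():.2f}")  # looks fine in aggregate
for g in ["A", "B"]:
    mask = group == g
    print(f"Group {g}: n={mask.sum():4d}, accuracy={correct[mask].mean():.2f}")
```

In this simulation the aggregate accuracy hovers around 93%, masking the fact that roughly one in four predictions for group B is wrong – exactly the kind of disparity a governance process should be designed to surface.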
These ethical, legal, and practical questions, and many others, should be considered early in the adoption cycle. No leader can anticipate every issue that might arise, but immersive training that examines the experiences of early adopters can provide a huge head start, as can discussions with other leaders whose organizations are at a similar stage. This immersion can provide a foundation for designing effective governance structures from the outset.
An organization-wide venture
As AI continues to digitally transform the healthcare industry, basic AI education shouldn’t be reserved for the C-suite. The entire organization should have a robust knowledge base, adjusted to each person’s role. A chief medical officer will have different concerns from a COO, or from a supervisor in charge of rolling out new applications among the staff, and each should receive education and training tailored to those concerns. The more everyone understands what AI is (and isn’t), how it can be leveraged, and how to recognize its potential pitfalls, the more engaged and invested they will be.

AI education should go beyond nuts-and-bolts training in the workings of specific new products and give the entire staff a working knowledge of how AI works and how it relates to the organization’s performance metrics. This shared understanding can help leaders be transparent about plans for AI adoption – and in turn, help frontline workers keep leadership informed about what’s working, what’s not, and what other opportunities they’ve identified for using these revolutionary capabilities.
AI may transform health care across the board, but not by itself. It’s a tool, and like any tool, it will work more effectively and more safely in trained hands. As the AI revolution continues to grow, we believe organizations should invest in that training now.
Ranil Herath is the President of Emeritus Healthcare, an online healthcare course operator. He also serves as a board advisor to three medical schools and is an advisory council member to the National League for Nursing. Ranil has over 20 years of leadership experience in higher education and healthcare. Prior to Emeritus, he was at Adtalem Global Education, where he led transformation and growth across Nursing, Medical, Veterinary, Business and Technology institutions in Canada, the U.S., and around the world. Ranil is also an executive coach, organizational culture champion, and mindfulness meditation facilitator.
Tinglong Dai is Professor of Operations Management and Business Analytics at the Johns Hopkins Carey Business School, with a joint faculty appointment at the Johns Hopkins School of Nursing. He serves on the leadership team of the Hopkins Business of Health Initiative and the executive committee of the Institute for Data-Intensive Engineering and Science. He joined Carey in 2013 after receiving a PhD in Operations Management/Robotics from Carnegie Mellon.
This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.