The tools available to physicians in the “doctor’s black bag” haven’t changed much in over a century. Twenty years ago, my colleagues and I published a paper advocating that a portable ultrasound device – not much bigger than a smartphone – be added to the clinician’s toolkit in the exam room. Over the last two decades, many clinicians have begun using portable ultrasound, not to replace the stethoscope but to quickly and accurately provide information that can’t be obtained from unaided clinical assessment alone. This has made us better clinicians, and, here’s the key, it has given us more time to get to know the patient as a person so that medical evidence can be tailored to the unique needs of the individual in front of us.
It’s now time for the doctor’s black bag to receive another update: artificial intelligence (AI) decision-support tools. When designed responsibly, AI can be a powerful instrument that provides instant access to expert-curated information. As a result, clinicians can make more informed decisions without sacrificing that vital human-to-human connection.
The case for AI clinical decision support
With the remarkable increase in published medical articles in recent years, clinicians need to be able to rapidly obtain reliable and trustworthy clinical decision support at the point of care. Until recently, most clinicians have had to rely on information platforms that deliver content in the form of dense prose that is not easily usable at the bedside. Fortunately, AI tools now provide information much more quickly.
However, when it comes to health care, we can’t sacrifice accuracy or reliability for the sake of speed. Given the high stakes of the exam room, it’s not surprising that the major concern clinicians and patients have with AI relates to trust. We must insist that any AI tool used to provide clinical decision support retrieves information that has been carefully assessed for quality, both by experts in evidence evaluation and grading and by expert clinicians.
How would you feel if the next time you boarded a plane, you heard the flight attendant announce your flight would be directed by an AI pilot that was up to 90-95% accurate? Given the consequences of a mistake, this degree of accuracy is simply not good enough. It’s not good enough for the cockpit, and it’s not good enough for the exam room.
It turns out some clinicians are already using general-purpose AI tools like ChatGPT to help with patient care. And while ChatGPT is great at planning a trip or finding recipes, the thought that it is helping your doctor should terrify you. In fact, reports show a majority (85%) of healthcare leaders are exploring or have already adopted generative AI capabilities. Frankly, this is concerning in a profession where the stakes are often life and death: trusting unverified information from this technology is downright dangerous.
Guardrails for trustworthy AI
Going back to the portable ultrasound device: its utility rests on its ability to provide reliable, trustworthy, and accurate images rapidly at the point of care, every time. The utility of AI clinical decision support tools similarly rests on their ability to provide reliable, trustworthy, and accurate information rapidly at the point of care, every time. To accomplish that, AI must be built with clear guardrails. This comes down to two key practices: vetting and transparency, and safe integration and oversight.
First, vetting and transparency are non-negotiable. General-purpose AI models are trained on large amounts of unfiltered data from the internet. This inevitably includes a messy mix of both fact and fiction, making it a risky source for medical information. Medical AI must be foundationally different: its models must be trained exclusively on vetted, evidence-based clinical data and research, guaranteeing the information is correct and free from the noise of the public web. Additionally, the AI must be transparent about how it arrives at its recommendations. This visibility is needed to build confidence with both clinicians and patients, as it allows each to understand the reasoning behind a recommendation and to spot potential errors.
Second, safe integration and oversight are required. AI should not act as an autonomous agent; it must be a support tool that fits seamlessly into existing clinical workflows. To be truly useful, it should complement, not complicate, a clinician’s routine. And most importantly, it must be monitored closely by humans through what is termed a human-in-the-loop model, which is crucial for addressing complex scenarios or edge cases where the technology may falter. However, medical AI doesn’t need just any human in the loop; it needs experts in evaluating and grading scientific and medical evidence, along with expert clinicians, in the loop.
How can AI restore humanity to medical care?
As clinicians are asked to do more with less, health care visits have become less personal and more transactional. Patients do not feel known as individuals, and when clinicians don’t know their patients, they can’t tailor treatment to patients’ unique needs. AI – the right AI – as a tool in the doctor’s black bag can provide reliable, trustworthy information to assist clinical decision making. AI can reduce the time clinicians need to obtain information and document in the medical record, allowing them to get to know their patients as people and restoring the human touch to the practice of medicine.
Dr. Roy Ziegelstein, Editor-in-Chief of DynaMed, has more than 30 years of experience in medical education and healthcare. He joined Johns Hopkins in 1986 after earning his M.D. from Boston University. He completed his internal medicine residency and chief residency on the Osler Medical Service and his cardiology fellowship at Johns Hopkins School of Medicine before joining the faculty there in 1993. He has held numerous leadership positions, including Director of the Internal Medicine Residency Program, Executive Vice Chairman, and Vice Chair for Humanism in the Department of Medicine at Johns Hopkins Bayview Medical Center. Since 2013, he has served as Vice Dean for Education at Johns Hopkins University School of Medicine. A dedicated educator and co-director of the Aliki Initiative on patient-centered care, Dr. Ziegelstein has received numerous awards for teaching excellence and is an internationally recognized expert on the connection between depression and cardiovascular disease.
