MedCity Influencers

ChatGPT in Healthcare – The Promise and the Danger

AI-generated information can be considered akin to obtaining a second or even third opinion regarding one’s health – it makes up part of what needs to be known, but it’s not the entire story.

The artificial intelligence (AI) revolution is well underway, and its latest phenom, ChatGPT, is taking the technology space by storm. What was once a go-to plot device of science fiction is now reality – people are communicating with machines, and machines are responding with authority. As the world continues to size up the newest chatbot on the scene, concerns about how ChatGPT might impact healthcare have already garnered significant coverage.

Just as the Dr. Google effect leads countless individuals to use search engines and other e-resources to generate self-diagnoses that can be as dire as they are inaccurate, ChatGPT is expected to play an increasing role over time in answering medical questions. As care providers try to help people distinguish helpful online medical resources from misleading ones, it remains to be seen whether ChatGPT will quell or fuel cyberchondria. And, at present, ChatGPT can only draw upon information documented through the end of 2021.

While misleading information can cause unnecessary fear about symptoms that may signal only a benign issue, there is also the danger that a potentially serious ailment goes unrecognized by generative AI that can analyze only what it has been told. The authoritative voice that ChatGPT adopts may lead users to trust inaccurate or incomplete information. This dynamic, to use PG-rated terminology, has been referred to in some academic and industry circles as “authoritative hogwash”.

Nonetheless, generative AI can effect positive change in many ways, including by helping people across many demographics access information that will help them live healthier lives. Social determinants of health such as economic status, health literacy, and educational level can have a detrimental impact on a person’s health. Generative AI brings with it the promise of democratizing information, as individuals who might otherwise be underserved gain equal access to answers to questions about their health. In some cases, the information obtained could be a matter of life and death.

Undoubtedly, the promise of AI in healthcare is enormous; its application in diagnostic imaging in recent years, for example, has been transformative. By improving image quality, enhancing image reconstruction, and increasing lesion detection, AI has had a significant impact on the early detection and diagnosis of a range of diseases, including cancer.

Accuracy and security – where we are and what lies ahead

In addition to the accuracy, or lack thereof, of AI used to inform individuals about health-related decisions, concerns are also on the rise about the role of AI in taking over medical notetaking for clinicians. While such technology is expected to give medical professionals a more efficient and less manual process for gathering data, handing machines the responsibility of accurately recording an individual’s sensitive medical information needs to happen gradually and strategically.

As fewer humans are included in the process of checking the work of AI-generated notetaking, diagnosis and treatment regimens will become increasingly reliant on information produced by fully automated AI. This scenario carries with it a range of significant ramifications for patients, insurers, clinicians, and virtually all other entities across the healthcare continuum.

The security of AI-generated data is also a concern, as technologies like ChatGPT require vast amounts of data to inform and drive their language processing capabilities. Safeguarding personal information and electronic health records (EHRs) against data breaches will require vigilance. In March, OpenAI announced that it had temporarily shut down its ChatGPT service after a bug allowed users to see the titles of other users’ chat histories. Although the bug was quickly patched, time will tell the extent and frequency of the growing pains this emerging technology will experience.

Using AI to plan trips versus guide healthcare journeys

While generative AI technologies may be invaluable for planning your next trip abroad and listing what Rick Steves has deemed the must-see attractions on your itinerary, putting one’s health journey in the hands of a computer is a different matter altogether. In some cases, it might be worthwhile for someone to ask ChatGPT about treatment options for an illness that they or a loved one is dealing with, but it would be unwise to rely solely on the answers it generates. Put another way, no one benefits from making life decisions based on input from a “technological child”.

The sweet spot for those seeking healthcare-related information is to view AI as a helpful tool to use in combination with seeking care from medical professionals. In some ways, AI-generated information can be considered akin to obtaining a second or even third opinion regarding one’s health – it makes up part of what needs to be known, but it’s not the entire story.

Christopher brings his 20+ years of consumer and client engagement expertise to Carenet Health, guiding all facets of service delivery and client operations. Prior to joining Carenet, he served as executive integration advisor for Sitel Group and chief security officer at Sykes Enterprises (acquired by Sitel Group). During his more than 20 years at Sykes, Christopher also directed high-performance teams in client management, operations and strategic planning in the U.S. and Asia. He is a graduate of the University of Texas at Austin.