
ChatGPT Outperforms Doctors In Answering Patient Messages, Study Shows

A new study found that ChatGPT might actually be quite successful in providing high-quality answers to patient questions during an era in which doctors and nurses are too busy to do so. The research evaluated two sets of answers to patient inquiries — one written by physicians, the other by ChatGPT. A panel of healthcare professionals determined that ChatGPT’s answers were significantly more detailed and empathetic.


The healthcare sector has been notoriously slow to adopt new technologies, but it certainly hasn’t been ignoring ChatGPT.

Healthcare leaders are already beginning to explore potential use cases for ChatGPT, such as assisting with clinical notetaking and generating hypothetical patient questions to which medical students can respond. And just last week, healthcare software giant Epic announced that it will integrate GPT-4, the latest version of the AI model, into its electronic health record.

Meanwhile, researchers are hard at work measuring ChatGPT’s ability to alleviate healthcare’s workforce and burnout crises. A new study published Friday in JAMA Internal Medicine found that ChatGPT might actually be quite successful in providing high-quality, empathetic answers to patient questions during an era in which doctors and nurses are too busy to do so.

The study — led by John Ayers, a public health researcher at the University of California San Diego — compared two sets of written responses to real-world patient questions. One set was written by physicians, the other by ChatGPT. The patient questions were sourced from Reddit’s AskDocs, a subreddit with about 452,000 members who ask medical questions and receive answers from licensed medical professionals.

Both sets of answers were evaluated by a panel of licensed healthcare professionals. The panel preferred ChatGPT’s responses 79% of the time. 

More than a quarter of physicians’ responses were deemed less than acceptable in quality, compared with only 3% of ChatGPT’s responses. The panel also rated ChatGPT’s responses as more empathetic: nearly half of the AI model’s responses were categorized as empathetic, while just 5% of physicians’ responses were.


It might be surprising that responses from an AI model were deemed more empathetic than those written by humans, but Ayers said it makes sense when you consider how pressed for time physicians are.

Because ChatGPT doesn’t have a jam-packed schedule and isn’t suffering from burnout, it’s easier for the AI tool to express empathy in its responses. For example, when you tell ChatGPT that you have a headache and need advice, the first thing it does is express that it’s sorry you’re feeling that way. Physicians usually have so much on their plates that they forget that step, Ayers pointed out.

The study demonstrated that ChatGPT has significant potential to alleviate the massive burden physicians face in their inboxes, but Ayers argued that his research points to an even more important possibility: the power to improve patient outcomes. Because the research showed that ChatGPT can quickly generate detailed responses to patient questions, Ayers is betting that the AI model can boost patients’ health by helping them better manage their conditions at home.

“I want to emphasize the potential to improve patient outcomes. Now, our data does not tell you that at all. Our data just shows us an extraordinary amount of promise for using AI assistants to help manage workflows and communication with patients. I would guess that if you did the right clinical studies, in some settings, it would potentially improve outcomes. I may be wrong, but we have to do those studies,” he declared.

