MedCity Influencers

Here Are ChatGPT’s Limitations Compared to Specialized AI Chatbots in Symptom Assessment

When using ChatGPT, it’s crucial to consider the potential risks of the “hallucination effect.” This phenomenon refers to the model’s tendency to express responses with unwarranted confidence, even when the information is uncertain or inaccurate.

AI is transforming the way we perceive and manage our healthcare system. From its potential to provide decision support for healthcare professionals to assisting in patient triage, the impact of this revolution is yet to be discovered. Among the many intriguing developments in this field, the effects of ChatGPT have attracted significant attention, leaving many questions about its specific implications for healthcare.

This cutting-edge AI has gained recognition for its remarkable ability to provide valuable information and help address concerns across many areas of life. ChatGPT offers a user-friendly interface that facilitates natural conversations and provides organized, practical assistance on diverse topics. However, when it comes to healthcare, and especially symptom assessment, concerns arise about its efficacy and reliability.

In assessing its potential for the healthcare industry, research has shown that ChatGPT can perform at a level comparable to a third-year medical student, delivering reasoning and practical context in response to healthcare-related questions. Similar to the widely used “Dr. Google,” ChatGPT works efficiently as an information provider.

The prevalence of people researching their symptoms online is a growing concern in the healthcare industry. Studies indicate that a staggering 89% of patients search for their symptoms on the internet before consulting a healthcare professional. Unfortunately, this behavior can lead to misinterpretation of health conditions and impact the decisions made regarding their care.

Where does the data come from?

One of the critical worries surrounding ChatGPT is its unsupervised learning approach, which generates responses based on data collected from uncertain sources across the web. This approach poses a significant danger when assisting individuals with symptoms, as it may deliver incorrect or misleading information. To its credit, ChatGPT acknowledges this issue and warns users about the occasional generation of inaccurate or biased content. Envisioning the application of this model to the healthcare domain raises legitimate concerns.

AI-based medical assistants, by contrast, utilize comprehensive medical databases and verified sources to provide precise and secure responses. These solutions ground their outputs in objective medical evidence curated by healthcare professionals, helping to ensure reliable results.

Ethical considerations

When using ChatGPT, it’s crucial to consider the potential risks of the “hallucination effect.” This phenomenon refers to the model’s tendency to express responses with unwarranted confidence, even when the information is uncertain or inaccurate. ChatGPT presents information in a manner that appears precise even when it is not, which can mislead users, especially those facing the uncertainty of an unknown illness and seeking assistance. This poses a significant threat in healthcare, where responsible and personalized reporting is crucial to delivering care at the right time.

Moreover, it is paramount to consider the technological differences between black-box and white-box AI, for both ethical and operational reasons. AI-based medical assistants typically employ transparent, explainable white-box algorithms. This transparency allows healthcare professionals and patients to understand how decisions are made. For instance, if a white-box AI provides a Covid-19 pre-diagnosis, users can clearly see which of the symptoms they entered are associated with this condition. Healthcare professionals can also leverage this technology for decision support because they can discern and analyze its underlying operations.
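To make the white-box idea concrete, here is a minimal, hypothetical sketch of a rule-based symptom checker. The rules, condition names, and symptom lists are invented for illustration only (they are not medical advice, and no real product's logic is implied); the point is that every output can cite exactly which user inputs triggered it.

```python
# A "white-box" symptom checker in miniature: each rule is an explicit,
# human-readable condition, so every result can be traced back to the
# exact symptoms that triggered it. Rules here are hypothetical examples.

RULES = [
    {"condition": "possible flu-like illness",
     "required": {"fever", "cough", "fatigue"}},
    {"condition": "possible common cold",
     "required": {"runny nose", "sneezing"}},
]

def assess(symptoms):
    """Return (condition, explanation) pairs for each rule whose
    required symptoms are all present in the user's input."""
    reported = set(symptoms)
    results = []
    for rule in RULES:
        if rule["required"] <= reported:  # all required symptoms reported
            matched = ", ".join(sorted(rule["required"]))
            results.append((rule["condition"], f"matched symptoms: {matched}"))
    return results

print(assess(["fever", "cough", "fatigue", "headache"]))
```

Because the mapping from inputs to outputs is an inspectable data structure rather than opaque model weights, a clinician can audit, correct, or extend the rules directly, which is precisely the property black-box models lack.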

Conversely, ChatGPT operates as a black-box model, making it difficult to understand the technicalities of its processes and potentially hindering verification of ethical compliance. Furthermore, bias is a concern that ChatGPT itself highlights. Relying on black-box technologies threatens equitable healthcare: without clear supervision, it is nearly impossible to provide unbiased and accurate responses. White-box models, on the other hand, can be adapted and molded to address ethical concerns.

The World Health Organization has expressed ethical concerns about large language model (LLM) tools. It urged the healthcare community to adhere to key values such as transparency, inclusion, public engagement, expert supervision, and rigorous evaluation. Diverting attention from these values can not only erode trust in AI models – which already face difficulties – but also undermine or delay the potential long-term benefits and uses of such technologies around the world.

Clarifying each AI’s intentions

The distinctions discussed above emphasize each AI tool’s primary focus and purpose. ChatGPT, since its inception, has aimed to provide users with organized data collected from the web. While it offers valuable context and detailed answers, it does not rely on specific medical literature or professional expertise. Consequently, it lacks the flexibility required to function as a comprehensive clinical AI-based assessment tool.

AI-based medical assistants have a clear objective: guiding patients toward the appropriate care pathway online. These assistants are designed to enhance the user experience through a language model incorporating medical inputs, an intuitive interface, and a range of actions ensuring clinical validation. Chatbots powered by AI specialized in symptom assessments have the potential to revolutionize the healthcare industry by providing a secure digital tool for all stakeholders involved in the care pathway across various business platforms.

Specialized AI solutions for equitable healthcare

ChatGPT represents an impressive application of an intelligent language model that can address a wide range of questions. Specialized AI chatbots, by contrast, combine machine learning and natural language processing (NLP) to deliver exceptional results for specific, well-defined problems.

AI has firmly established its presence in the healthcare system. As it continues to evolve, healthcare stakeholders must be diligent in selecting and implementing AI tools that align with their values and prioritize the well-being of patients. It is crucial to strike a balance between transparency, explainability, and the need for prompt and accurate assessments in the pursuit of equitable healthcare.

Photo: venimo, Getty Images

Cristian Pascual is an industrial engineer and MBA. For 19 years he held various top management positions in large companies before founding Mediktor, the most advanced AI-based medical assistant for triage and pre-diagnosis. In addition, he is an angel investor in more than 20 startups.

He is an entrepreneur with more than 10 years of real-life experience immersed in digital health ecosystems in the US, Europe, and Latin America.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers. Click here to find out how.