
Why AI Won’t Replace Human Psychotherapists

Despite AI’s growing ability to simulate empathy and even outperform humans in some areas of emotional awareness, can it truly replace human therapists and take the burden off the mental health care system?

The rise of AI is changing many spheres of life, with a projected growth rate of nearly 37% through 2030, and healthcare is expected to see the greatest impact, with a long-term AI adoption rate of 40%.

In the realm of mental health, artificial intelligence aims to tackle fundamental problems: the shortage of practitioners and the high cost of therapy. Whether it’s relieving an overburdened mental health system, breaking down financial or geographic barriers, providing psychological first aid, or overcoming stigma, AI may seem to hold the answers to democratizing therapy for the wider public. That interest is reflected in the surging volume of Google searches for ‘ai mental health’ over the past two years.

AI-powered chatbots such as Wysa, Woebot, Earkick, and Youper combine machine learning and natural language processing with psychological methods to offer support and therapeutic interventions to people in need of mental health care. Large U.S. employers increasingly offer access to these apps as a workplace benefit.


While AI-powered tools take steps to democratize access to therapy, their rise has sparked both excitement and concern. 

Limitations of AI in psychotherapy

AI’s understanding of human experience is limited by its inability to grasp social and cultural contexts. When an AI misses crucial social cues, the therapy experience can produce misunderstandings that diminish the patient’s progress.

Scenario: a Chinese-American individual struggling with feelings of isolation and cultural identity conflict turns to an AI chatbot.

  • Patient: “I feel lost between two worlds. I’m not fully accepted in either Chinese or American culture.”
  • AI chatbot: “It sounds like you’re experiencing a lot of stress. Have you tried practicing mindfulness?”

The chatbot, while offering a generic coping mechanism, fails to acknowledge the cultural nuances of the patient’s experience. The response might be perceived as insensitive, further exacerbating the patient’s feelings of isolation.

Ethical considerations, such as data privacy and the potential for algorithmic bias, are also paramount and must be addressed as AI for mental health is developed. Biases in training data can result in misguided treatment decisions.

Scenario: an AI chatbot designed to assist with anxiety and depression has been trained on a dataset that predominantly includes experiences of male respondents.

  • Patient: “I’m a woman struggling with anxiety and depression. I feel like I’m constantly being judged and stereotyped.”
  • AI chatbot: “It sounds like you’re feeling overwhelmed. Perhaps you should try gentle exercise to reduce stress.”

The chatbot fails to acknowledge the unique stressors faced by women, such as societal pressure and gender discrimination. That can minimize the patient’s experience and reinforce harmful stereotypes.

Finally, AI may struggle with complex cases involving severe mental health conditions or trauma, where the nuanced understanding and tailored interventions of a human therapist are essential.

Scenario: a patient who has experienced severe childhood trauma, leading to complex post-traumatic stress disorder (C-PTSD), may exhibit flashbacks, nightmares, hypervigilance, and difficulty trusting others. 

  • Patient: “I have nightmares every night. I can’t sleep. I feel like I’m always on edge, like something bad is going to happen.”
  • AI chatbot: “It sounds like you’re going through a difficult time. Have you tried relaxation techniques like deep breathing?” 

The chatbot oversimplifies the complexity of trauma and C-PTSD. Unable to fully comprehend the patient’s emotional pain, it resorts to simplistic solutions that can further trigger negative emotions.

In all three cases, the lack of empathetic presence, cultural sensitivity, and deep understanding makes it challenging for AI to build the level of trust essential for effective mental health support. That brings us to the role of human therapists in the era of AI.

The irreplaceable human element

While some of these concerns are being solved, or can potentially be solved, we believe that AI will never be able to demonstrate genuine empathy and build trust.

Empathy, the ability to understand and share another person’s feelings, is a cornerstone of psychotherapy. Research shows that the extent of empathy demonstrated by the therapist and perceived by the patient, rather than any specific modality used, has a significant correlation with the success of treatment. 

AI by definition does not partake in emotional experiences and is not capable of empathic listening. However eloquent and statistically substantiated its response to a patient’s need, it does not share the experience, and it cannot understand how the person feels about that response.

If artificial intelligence is trained to appear empathic, for example through continuous interaction with the same patient who openly expresses their feelings, it risks being unethical by undermining the meaning of, and expectations for, real empathy.

The same empathy barrier shapes how patients perceive AI interactions. While AI-generated messages can make recipients feel heard, those same recipients feel less heard and perceive the messages as less authentic and trustworthy once they learn the messages came from AI.

Building a trusting relationship, known as the therapeutic alliance, is also vital for successful therapy. Unlike AI, humans excel at establishing rapport and creating a safe space for clients. Through empathetic listening, therapists can often sense when a client is feeling overwhelmed or when a new approach is needed, and use intuition and judgment to adapt to unexpected situations.

AI as a sidekick

While AI cannot fully replace human psychotherapists, it can serve as a valuable complementary tool. Mental health professionals spend over 20% of their working hours on administrative tasks, time that could otherwise be spent helping patients. By automating these tasks, AI can free up therapists’ time and help prevent burnout.

AI can also gather and analyze patient data to identify patterns and inform treatment decisions with data-driven insights. Furthermore, it can track a patient’s progress and help develop personalized treatment plans based on individual needs.

To sum up, while AI offers advancements in mental health care, it cannot fully replace the human elements of psychotherapy. Empathy, trust, intuition, and judgment are irreplaceable qualities that mental health professionals bring to the therapeutic relationship. AI can serve as an assistive tool, but it should be used in conjunction with human expertise to enhance, rather than replace, the therapist-patient relationship.


Stanley Efrem is the Clinical Director and Co-Founder of Yung Sidekick. He is a psychotherapist and mental health counselor with over a decade of experience in private practice. He has contributed to the field of psychodrama as an Associate Professor at the Institute of Psychodrama and Psychological Counseling and has chaired major psychodramatic conferences. Previously, Stanley served as Chief Technology Officer and Co-Founder of Stratagam, a business simulation platform. He co-founded Yung Sidekick to leverage his background in psychodrama and technology to develop innovative AI tools for mental health professionals.

Michael Reider is the CEO and Co-Founder of Yung Sidekick. He is a serial entrepreneur and mental health advocate with over 10 years of experience in strategy consulting and executive management. Before co-founding Yung Sidekick, Michael led Bright Kitchen, a Cyprus-based dark kitchen chain, as CEO, and served as General Manager at Uber Eats. Michael holds an MBA in Strategic Management, Marketing, and Finance from Indiana University's Kelley School of Business. He leverages his diverse experience and passion for mental health to lead Yung Sidekick in developing AI tools that improve therapeutic outcomes for both therapists and patients.
