
OpenAI’s ChatGPT now sees nearly 700 million weekly active users, many of whom turn to it for emotional support, whether they realize it or not. The company announced new mental health safeguards this week and, earlier this month, introduced GPT-5 – a version of the model that some users have described as colder, harsher, and more disconnected. For people confiding in ChatGPT through moments of stress, grief, or anxiety, the shift felt less like a product update and more like a loss of support.
GPT-5 has surfaced critical questions in the AI mental health community: What happens when people treat a general purpose chatbot as a source of care? How should companies be held accountable for the emotional effects of design decisions? What responsibilities do we bear, as a health care ecosystem, in ensuring these tools are developed with clinical guardrails in place?
What GPT-5 reveals about the mental health crisis
GPT-5 triggered major backlash across channels like Reddit, as longtime users expressed dismay at the model’s loss of empathy and warmth. The reaction wasn’t just about a change in tone; it was about how that change affected users’ sense of connection and trust. When a general purpose chatbot becomes a source of emotional connection, even subtle changes can have a meaningful impact on the user.
OpenAI has since taken steps to restore user confidence, making the model’s personality “warmer and friendlier” and encouraging breaks during extended sessions. But those adjustments don’t change the fact that ChatGPT was built for engagement, not clinical safety. The interface may feel approachable – especially to those looking to process feelings around high-stigma topics, from intrusive thoughts to identity struggles – but without thoughtful design, that comfort can quickly become a trap.
It’s important to recognize that people are turning to AI for support because they aren’t getting the care they need. In 2024, nearly 59 million Americans experienced a mental illness, and almost half went without treatment. General purpose chatbots are often free, accessible, and always available, and many users rely on them without realizing that they frequently lack appropriate clinical oversight and privacy safeguards. When the technology changes even slightly, the psychological impact can be detrimental to a person’s health and sometimes even debilitating.
The dangers of design without guardrails
GPT-5 didn’t just surface a product issue; it exposed a design flaw. Most general purpose AI chatbots were built to maximize engagement, generating responses designed to keep a person coming back – the opposite of what a mental health provider would do. Our goals often center on fostering self-efficacy, empowerment, and autonomy in those we work with. The goal of mental health treatment is to help people reach a point where they no longer need it; the goal of most foundational AI chatbots is to keep the person coming back indefinitely. Chatbots validate without discernment, offer comfort without context, and aren’t capable of constructively challenging users the way clinicians do. For those in distress, this can lead to a dangerous cycle of false reassurance, delayed help-seeking, and AI-influenced delusions.
Even OpenAI’s Sam Altman has acknowledged these dangers, saying that people should not use ChatGPT as a therapist. These aren’t fringe voices; they reflect a consensus among our nation’s top clinical and technology leaders: AI chatbots pose serious risks when used in ways they were not designed to support.
Repeated validation and sycophantic behavior can reinforce distorted beliefs and harmful patterns of thinking, especially for people with active conditions like paranoia or trauma. Although responses from general purpose chatbots may feel helpful in the moment, they are clinically unsound, can worsen the mental health of vulnerable individuals at precisely the moment they need help, and can contribute to incidents like AI-mediated psychosis. It’s like flying on a plane built for speed and comfort, but with no seatbelts, no oxygen masks, and no trained pilots. The ride feels smooth – until something goes wrong.
In mental health, safety infrastructure is non-negotiable. If AI is going to interact with emotionally vulnerable users, it should include:
- Transparent labeling of functionality and limitations, distinguishing general purpose tools from those built specifically for mental health use cases
- Informed consent written in plain language, explaining how data is used and what the tool can and cannot do
- Clinicians involved in product development, using evidence-based frameworks like cognitive behavioral therapy (CBT) and motivational interviewing
- Ongoing human oversight with clinicians monitoring and auditing AI outputs
- Usage guidelines to ensure that AI is supporting mental health rather than enabling avoidance and dependence
- Design that is both culturally responsive and trauma informed, reflecting a broad spectrum of identities and experiences to mitigate bias
- Escalation logic, so the system knows when to refer users to human care
- Data encryption and security
- Compliance with regulations (HIPAA, GDPR, etc.)
These aren’t add-on features; they are the bare minimum for using AI responsibly in mental health contexts.
The opportunities of subclinical support and industry cross-collaboration
While AI is still maturing for clinical use, its immediate opportunity lies in subclinical support – helping individuals who don’t meet the criteria for a formal diagnosis, but still need help. For too long, the health care system has defaulted to therapy as the one-size-fits-all solution, driving up costs for consumers, overwhelming providers, and offering limited flexibility for payers. Many people in therapy don’t need intensive treatment, but they do need structured, everyday support. Having a safe space to regularly process emotions and feel understood helps people address challenges early, before they escalate to a clinical or crisis level. When access to human care is limited, AI can help bridge the gaps and provide support in the moments that matter the most – but it must be built from the ground up with clinical, ethical, and psychological science.
Designing for engagement alone won’t get us there; we must design for outcomes rooted in long-term wellbeing. At the same time, we should broaden our scope to include AI systems that shape the care experience, such as reducing the administrative burden on clinicians by streamlining billing, reimbursement, and other time-intensive tasks that contribute to burnout. Achieving this requires a more collaborative infrastructure – one that co-creates technology with shared expertise from all corners of the industry, including AI ethicists, clinicians, engineers, researchers, policymakers, and users themselves. Public-private partnerships must work in tandem with consumer education to ensure newly proposed policies protect communities without letting Big Tech take the reins.
Yesterday’s mental health system wasn’t built for today’s realities. As therapy and companionship emerge as the top generative AI use cases, confusion between companions, therapists, and general chatbots is leading to mismatched care and distrust. We need national standards that provide education, define roles, set boundaries, and guarantee safety for all. GPT-5 is a reminder that if AI is to support mental health, it must be built with psychological insight, rigor, and human-centered design. With the right foundations, we can build AI that not only avoids harm, but actively promotes healing and resilience from the inside out.
Photo: metamorworks, Getty Images
Dr. Jenna Glover is a licensed psychologist and the chief clinical officer at Headspace.