
When There’s No Appointment Available, Patients Are Opening ChatGPT

The limitations of general-purpose AI in an emotional support context aren’t about what it says. They’re about what it can’t detect, including the markers of acute distress that a purpose-built system is specifically architected to recognize.

The mental health system has a gap problem. Not a gap in awareness — the clinical community understands the shortage, the waitlists, the cost barriers. The gap is in what happens to patients in the space between recognizing they need support and actually receiving it.

That space, for a growing number of people, is now filled by ChatGPT.

Using ChatGPT as a therapist isn’t a fringe behavior. It’s a documented trend, driven not by a preference for AI but by a shortage of alternatives. When the next available appointment is three weeks out, when a session costs $150 (or more) out of pocket, when stigma makes picking up the phone feel impossible — a text box that responds instantly and never judges becomes an appealing substitute. Healthcare leaders who dismiss this as a consumer quirk are missing a signal worth taking seriously.

The problem with general-purpose AI in an emotional context

Understanding why this matters requires understanding what general-purpose AI actually is and what it isn’t.

ChatGPT was built for breadth: writing, coding, analysis, customer service, and creative work. That versatility makes it genuinely useful across a wide range of contexts. It also means it was never designed with mental wellness as an organizing principle. There is no clinical map underneath the conversation. No structured framework for recognizing cognitive distortions, managing escalation, or knowing when a response is doing more harm than good.

This creates a specific category of risk. The limitations of general-purpose AI in an emotional support context aren’t primarily about what the AI says; they’re about what it can’t detect. A general AI can produce a compassionate-sounding response without any architecture for determining whether that response is therapeutically appropriate. It can engage with someone in acute distress the same way it engages with someone planning a dinner party: responsively, but without clinical grounding.


For most use cases, that’s fine. In an emotional support context, it’s a structural problem.

What ‘purpose-built’ actually means and why it matters to payers and systems

The answer isn’t to stop people from seeking support between appointments. It’s to ensure that when they do, they’re engaging with tools designed for that specific function.

Building AI tools that are actually safe for emotional support requires a different architecture than general-purpose AI assistance. The distinction starts with intent: a purpose-built mental wellness tool isn’t trying to do everything. It’s designed around one outcome — helping a person regulate, reflect, and stay grounded until they can access human care.

In practice, that design difference shows up in several ways.

Clinical guardrails are the most foundational. These aren’t content filters layered on top of a general model. They’re hard-coded behavioral frameworks — rooted in approaches like CBT and DBT — that shape how the system guides a conversation. Rather than responding to every prompt with maximum engagement, a purpose-built system is designed to recognize when a conversation needs to redirect, de-escalate, or hand off to a human professional entirely.

Crisis detection is a related and equally critical feature. General AI can offer a generic list of crisis resources when prompted. A dedicated mental wellness platform builds crisis recognition into the interaction model itself — detecting the markers of acute distress in real time and structuring a response accordingly, without waiting for the user to ask.
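To make the distinction concrete, here is a minimal, purely illustrative sketch in Python of what routing-before-responding could look like: every incoming message is screened for distress markers, and the conversation is steered toward grounding or human escalation before any generative reply is produced. The risk levels, marker lists, and function names are invented for demonstration and are not drawn from any specific platform; a clinical-grade system would use validated screening instruments, trained classifiers, and human oversight rather than keyword matching.

```python
# Illustrative sketch only: a hypothetical routing layer that screens each
# message for distress markers before any generative response is produced.
# Names, keywords, and thresholds are invented for demonstration purposes.
from dataclasses import dataclass
from enum import Enum, auto


class RiskLevel(Enum):
    ROUTINE = auto()   # continue normal supportive conversation
    ELEVATED = auto()  # shift to de-escalation / grounding prompts
    ACUTE = auto()     # stop generation, surface crisis resources, hand off


# Hypothetical marker lists; a production system would rely on trained
# classifiers and clinical protocols, not keyword matching.
ACUTE_MARKERS = ("end it all", "no reason to live", "hurt myself")
ELEVATED_MARKERS = ("can't cope", "panic", "hopeless")


@dataclass
class RoutingDecision:
    level: RiskLevel
    next_step: str


def assess_message(text: str) -> RoutingDecision:
    """Screen a user message and decide how the conversation should proceed."""
    lowered = text.lower()
    if any(marker in lowered for marker in ACUTE_MARKERS):
        return RoutingDecision(RiskLevel.ACUTE, "present_crisis_resources_and_escalate_to_human")
    if any(marker in lowered for marker in ELEVATED_MARKERS):
        return RoutingDecision(RiskLevel.ELEVATED, "switch_to_grounding_protocol")
    return RoutingDecision(RiskLevel.ROUTINE, "continue_supportive_dialogue")


if __name__ == "__main__":
    decision = assess_message("I feel hopeless and can't cope tonight")
    print(decision.level.name, decision.next_step)
```

The point of the sketch is structural, not technical sophistication: the risk assessment sits ahead of the response generator, so escalation is a property of the interaction model itself rather than something the user has to ask for.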

The modality of interaction also matters more than the industry has widely acknowledged. Humans are neurobiologically wired to respond to faces. The difference between a text interface and a face-to-face AI presence isn’t merely cosmetic. Seeing a face affects whether the brain perceives the interaction as a genuine moment of accompaniment or as an exchange with a search engine. Platforms that incorporate a visual presence are making a therapeutic choice, rooted in how human nervous systems actually process connection.

Some platforms are now being built explicitly around this design philosophy — face-to-face interaction, clinical protocols, crisis detection — rather than adapted from general-purpose models.

The right frame: AI therapy vs. human therapist isn’t the question

The debate over AI therapy versus human therapists tends to generate more heat than clarity, because it sets up a comparison that misrepresents how these tools are best deployed.

Purpose-built AI mental wellness tools aren’t competing with therapists. They’re addressing the layer of need that exists before a patient ever reaches a therapist’s office and the layer that persists between sessions. The goal isn’t clinical replacement. It’s continuity of support for a population that currently has none.

For healthcare systems, insurers, and employers building behavioral health strategy, the question isn’t whether AI can do what a therapist does. It’s whether AI can meaningfully reduce the number of people who fall out of the care continuum entirely because the wait was too long, the cost too high, or the barrier too invisible to name.

The evidence suggests it can — when the tool is built for that purpose, with the clinical architecture to match.

What healthcare leaders should be looking for

Not all AI mental wellness tools meet this bar. As the category grows, the burden of evaluation falls increasingly on the systems and stakeholders deciding which tools to integrate, recommend, or fund.

The questions worth asking are structural: Does the platform have documented clinical guardrails, or only generic safety filters? Does it have a defined crisis protocol with clear escalation pathways? Was it designed with a specific therapeutic outcome in mind, or adapted from a general-purpose model?

The patients turning to ChatGPT at midnight aren’t always doing so because they prefer AI. They’re doing so because the system hasn’t yet given them something better. That’s not a consumer problem. It’s a healthcare design problem, and it has a solution, if the industry is willing to build for it.

Photo: SIphotography, Getty Images

Rodin Younessi is the CEO and founder of myHOMA, an AI-driven mental wellness platform designed to bridge the gap between the moment support is needed and when care becomes accessible. With a background spanning software engineering, law, and entrepreneurship, he brings a systems-level approach to problems that traditional models have struggled to solve.

Younessi founded myHOMA after recognizing a persistent gap in mental health support: millions of people are willing to seek help but lack immediate, practical access to it. His vision for the platform is an experience that meets users where they are — delivering real-time, personalized interaction that feels natural, supportive, and deeply human. He believes thoughtfully designed technology can meaningfully improve well-being when it is both intelligent and accessible. In 2006, Younessi was knighted and awarded the title of Chevalier by the Order of St. John of Jerusalem, Knights Hospitaller, in recognition of his philanthropic contributions.
