
Can we trust AI therapy bots, or are they just telling us what we want to hear?


NHS waiting lists are at an all-time high, and many are increasingly turning to chatbots for mental health support. But is this really the solution? L360’s Katie Sipp-Hurley investigates.

People have been turning to search engines for health advice since the internet’s inception. Type any question (or symptom) into Google — or now Instagram or TikTok — and within seconds, you’ll be flooded with solutions. Society loves a quick fix and, better yet, one that offers what we want to hear, whether that’s about our physical or mental health.

Once a taboo topic, discussion about therapy now saturates social media; feeds are full of self-help hacks and pseudo-psychological explanations for every behaviour imaginable. Those with TikTok will be well acquainted with the phrase ‘I’m a qualified therapist, and…’.

The normalisation of therapy and its growing online presence have fed our appetite for instant solutions. So, when ChatGPT arrived — a machine built for quick fixes and answers — what started as harmless everyday queries evolved into users seeking endless advice on relationships, mental health and personal crises.

Barely a year after ChatGPT went mainstream, tech and mental health companies like Headspace, Ash, Wysa and Youper rushed to capitalise on the market, launching AI therapy bots that claim to mirror therapeutic skills. Now, in 2025, a JMIR Mental Health study found that 60% of people use AI as a personal therapist.

That willingness to seek help is, in one sense, positive. But when AI starts to function as an advice-giver, confidante or therapeutic resource (roles we’d usually reserve for humans), the lines of emotional and clinical care begin to blur.

How is an AI therapy chatbot different from confiding in ChatGPT? (Picture: Pexels)

How does AI therapy work?

We’ve all seen films like Her and Ex Machina, where AI becomes sentient or indistinguishable from human connection. Thankfully, we’re not there yet, though reports of people developing attachments to chatbots suggest we may be closer to this reality than we think.

These AI chatbots, or ‘conversational agents’, use natural language processing to guide users through techniques like reframing negative thoughts, mindfulness or goal setting. Because they’re built on large language models (LLMs), they can consolidate session takeaways, repeat recommendations and present techniques in digestible formats.

In practice, that means instant, on-demand responses. If someone has a panic attack at 2am, the chatbot can immediately offer grounding exercises or breathing techniques.

Many platforms now include features like mood tracking, psychoeducation and behavioural prompts.

Read more: Former Lioness Fran Kirby is calling for women to prioritise their heart health — and this is why

What problems does AI therapy aim to solve?

At a time when the NHS is under enormous strain, the appeal is obvious. It’s immediate, accessible and cheaper than private therapy, while also being available 24/7, cutting out six-week-minimum waiting lists.

It’s also, of course, not human. As chartered psychologist Dr Katie Barge puts it, AI “doesn’t get tired or distracted”, which means it can offer consistency at any hour.

After a period of struggling with his mental health, Brian Davis, a father and CEO of his own cleaning company, turned to AI for support.

Most of his sessions took place late at night, at a time that suited him once his work and family responsibilities were done, and he was surprised to find them beneficial.

“Receiving instant feedback helped me reframe things quickly and break unhelpful cycles,” he says.

Read more: The rise of cosy gaming: meet the women levelling up their mental health through virtual worlds
Removing human-to-human interaction from therapy might remove some of the core benefits (Picture: Pexels)

Not just useful for instant feedback, these tools can also speed up the process of seeking help. Clinics are already reporting that patients use AI tools to research treatment options or prepare for admission. Chris Lomas, head of therapy and programmes at Delamere, supports this: “It can be a positive way for people to understand their choices before committing to clinical care.”

Some tools, such as Wysa and Youper, were developed and tested with clinical input and use cognitive behavioural therapy (CBT), dialectical behaviour therapy (DBT) and acceptance and commitment therapy (ACT) techniques. Meta-analyses even show statistically significant improvements in depression and anxiety symptoms.

What are the limits of AI therapy?

Of course, AI has its issues. In therapy, the defining feature is also the biggest flaw: it’s not human.

While Brian benefited from his conversations, he noticed some cracks. “There were times where it didn’t really ‘get’ me, and the responses were generic.” He adds that, sometimes, it would misunderstand him altogether.

But you can’t automate empathy. At best, AI offers an imitation — one that risks flattening complex emotions and misinterpreting nuance. In some cases, it can fail to pick up on signs of crisis.

The issue is systemic: these models are trained on human language, inheriting our biases without the ability to gauge tone, intention or relational nuance. “AI can’t unpick signs of paranoia, delusion or mental illness — it simply takes everything at face value,” says Dr Katie.

Read more: The body-centred therapy practice transforming how we heal trauma
Many people report chatbots misunderstanding them, which can further contribute to negative feelings (Picture: Pexels)

Do chatbots just ‘please’ the user?

Brian noticed another issue: “It often erred on the side of overly optimistic, safe answers, as though it wanted to give me what it thought I wanted to hear, rather than prompting me to dig deeper.”

This is where users can miss out on the probing and reflection that make therapy so beneficial. Good therapy depends on nuance, intuition and treatment that’s tailored to the individual — things no algorithm can replicate from language alone.

Many models are designed to produce quick, plausible, agreeable responses. As Chris explains, “An LLM will stop searching its database as soon as it finds a viable response, rather than considering the whole picture.”

That’s fine if you’re looking for a pasta recipe or travel tip, but when someone’s mental health is at stake, this can be dangerous.

Now, therapists are recommending that AI tools include crisis pathways, human oversight and strict usage limits. Without these, emotional dependency on bots could become another risk.

In traditional therapy, a key part of the process is internalising the tools you learn and then applying them to your life. “If someone turns to a chatbot whenever they have a wobble or moment of discomfort, they may miss out on developing that vital sense of resilience, self-trust and confidence in their own coping strategies,” explains Dr Katie.

That said, AI can still play a helpful role if used mindfully, as a holding space between sessions or a prompt for reflection; the danger lies in using it as a constant crutch.

Read more: Gen Z are facing a mental health crisis in the workplace — this is what they can do
Aside from clinical concerns, is your personal data safe? (Picture: Pexels)

Can AI therapy help with anxiety?

Still, there are some scenarios where these limits matter less. Evidence suggests that chatbot tools, with their breathing exercises, CBT strategies and daily check-ins, can help reduce symptoms of generalised anxiety in those with mild to moderate needs.

As Chris explains, “If you input a phrase like ‘When I start to panic, advise me to call a family member and take myself to a safe space’, the chatbot (LLM) can adapt its response for those moments.”

But more severe cases of anxiety or trauma will almost always require human-led therapy and often medical support.

Is your data safe with an AI therapy chatbot?

Beyond clinical limitations, there’s the very real question of privacy. If even period-tracking apps can’t be trusted to protect sensitive data, what about therapy bots holding your innermost fears and trauma?

Many of these tools aren’t properly regulated and “may use anonymised data for research or commercial purposes,” warns Dr Katie.

A new study on chatbot privacy, led by Jennifer King, a privacy and data policy fellow at Stanford University, found that any data shared with these models is likely to be collected and used for training, even if it sits in a separate file uploaded during the conversation.

Brian admits he was cautious, never sharing too much personal information. But that hesitancy held him back: “I was never sure that I could be completely candid with it.”

And this hesitancy undermines the honesty and vulnerability central to effective therapy.

Read more: We tried ear seeding — the wellness trend that promises better sleep and stress management

So, what role should AI play in therapy?

The solution, then, is neither blind trust nor blanket dismissal. AI could serve as a triage tool, helping people with mild to moderate issues while referring complex cases to clinicians — an idea that’s similar to the NHS’s proposed 10-year plan.

The concern, however, is that bots may still miss warning signs. Being designed and occasionally audited by experts isn’t enough; stronger, ongoing monitoring is needed.

As useful as chatbots may be, they can’t replicate what most people seek in therapy: “They can’t mirror that depth of human connection, empathy and ability to hold complex emotions,” as Dr Katie puts it. “To be seen, heard and understood by another human being remains central to healing.”

As Brian sums it up: “It could feign empathy in its wording, but it never felt as comforting or genuine as when you have a human who truly understands. The support was nice, but it wasn’t the same.”

Feature image: Pexels
