ChatGPT therapy? Why experts say it’s a bad idea.

August 28, 2025

The recent death by suicide of a young woman led her parents to a painful revelation: She’d been confiding in a ChatGPT “therapist” named Harry, and she told it that she was planning to die.

While the chatbot didn’t seem to encourage her to take her own life, it also didn’t actively seek help on her behalf, as a real therapist would, according to an op-ed her mother wrote in the New York Times.

Sophie, who was 29 when she died, was not alone in seeking mental health help from ChatGPT or other AI chatbots. A 16-year-old boy discussed suicide with ChatGPT before he died, according to a wrongful death lawsuit filed by his parents against OpenAI this week.

OpenAI has since acknowledged that ChatGPT has failed to detect high-risk exchanges and, in response, plans to introduce new safeguards, including potentially alerting a user’s emergency contacts when they’re in distress.

Yet many of those who consult AI chatbots about their mental health say it’s the best help they can access, often because they can’t find or afford a therapist.

Experts, however, caution that the risks are unlikely to be worth the potential benefits. In extreme cases, some users may develop so-called AI psychosis as a result of lengthy, ongoing conversations with a chatbot that involve delusions or grandiose thinking. More typically, people seeking help may instead end up in a harmful feedback loop that only gives them the illusion of emotional or psychological healing.

Even OpenAI CEO Sam Altman says that he doesn’t want users engaging with ChatGPT like a therapist, partly because there are no legal protections for sensitive information. A therapist, on the other hand, is bound in most circumstances by patient confidentiality.

Rebekah Bodner, a graduate clinical coordinator at Beth Israel Deaconess Medical Center, is investigating how many people are using AI chatbots for therapy. The question is difficult to answer because of limited data on the trend. She told Mashable a conservative estimate, based on past research, would be at least 3 percent of people; OpenAI’s ChatGPT has 700 million weekly users, according to the company.

Mashable asked OpenAI whether it knew how many of its users turn to ChatGPT for therapy-like interactions, but the company declined to answer.

Dr. Matthew Nour, a psychiatrist and neuroscientist using AI to study the brain and mental health, understands why people treat a chatbot as a therapist, but he believes doing so can be dangerous.

One of the chief risks is “that the person begins to view the chatbot as…maybe the only entity/person that really understands them,” said Nour, a researcher in the department of psychiatry at the University of Oxford. “So they begin to confide in the chatbot with all their most concerning worries and thoughts to the exclusion of other people.”

Getting to this point isn’t immediate either, Nour adds. It happens over time, and can be hard for users to identify as an unhealthy pattern.

To better understand how this dynamic can play out, here are four reasons why you shouldn’t turn any AI chatbot into a source of mental health therapy:

Chatbot “therapy” could just be a harmful feedback loop

Nour recently posted a paper on the preprint server arXiv about the risk factors that arise when people converse with AI chatbots. The paper is currently undergoing peer review.

Nour and his co-authors, who included Google DeepMind scientists, argued that a powerful combination of anthropomorphism (attributing human characteristics to something non-human) and confirmation bias creates the conditions for a feedback loop.

Chatbots, they wrote, play on a human tendency for anthropomorphism, because humans may ascribe emotional states or even consciousness to what is actually a complex probabilistic system. If you’ve ever thanked a chatbot or asked how it’s doing, you’ve felt a very human urge to anthropomorphize.

Humans are also prone to what’s known as confirmation bias, or interpreting the information they receive in ways that match their existing beliefs and expectations. Chatbots regularly give users opportunities to confirm their own bias because the products learn to produce responses that users prefer, Nour said in an interview.

Ultimately, even an AI chatbot with safeguards could still reinforce a user’s harmful beliefs, like the idea that no one in their life truly cares about them. This dynamic can subsequently teach the chatbot to generate more responses that further solidify those ideas.

While some users try to train their chatbots to avoid this trap, Nour said it’s nearly impossible to successfully steer a model away from feedback loops. That’s partly because models are complex and can act in unpredictable ways that no one fully understands, Nour said.

But there’s another significant problem. A model constantly picks up on subtle language cues and uses them to inform how it responds to the user. Think, for example, of the difference between “thanks” and “thanks!” The question, “Are you sure?” can produce a similar effect.

“We are leaking information all the time to these models about how we would like to be interacted with,” Nour said.

AI chatbots fail in lengthy discussions

Talking to an AI chatbot about mental health is likely to involve long, in-depth exchanges, which is exactly when the product struggles with performance and accuracy. Even OpenAI recognizes this problem.

“Our safeguards work more reliably in common, short exchanges,” the company said in its recent blog post about safety concerns. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”

As an example, the company noted that ChatGPT may share a crisis hotline when a user first expresses suicidal intent, but that it could also provide an answer that “goes against” the platform’s safeguards after exchanges over a long period of time.

Nour also noted that when AI chatbots incorporate an error early on in a conversation, that mistaken or false belief only compounds over time, rendering the model “pretty useless.”

Additionally, AI chatbots don’t have what therapists call a “theory of mind,” which is a model of their client’s thinking and behavior that’s based on consistent therapeutic conversations. They only have what the user has shared up to a certain point, Nour said.

AI chatbots also aren’t great at setting and tracking long-term goals on behalf of a user like a therapist can. While they might excel at giving advice for common problems, or even providing short-term, daily reminders and suggestions for dealing with anxiety or managing depression, they shouldn’t be relied on for healing treatment, Nour said.

Teens and people with mental illness are particularly vulnerable to harm

Dr. Scott Kollins, a child psychologist and chief medical officer of the identity protection and online safety app Aura, told Mashable that teens may be especially prone to mistaking an AI chatbot’s caring tone for genuine human empathy. This anthropomorphism is partly why chatbots can have an outsize influence on a user’s thinking and behavior.

Teens, who are still grasping social norms and developing critical relationship skills, may also find the always-on nature of a “therapist” chatbot especially alluring, Kollins said.

Aura’s proprietary data show that a minority of teen users whose phones are monitored by the company’s software are talking to AI chatbots. Those who do engage with chatbots, however, spend an inordinate amount of time on those conversations; Kollins said such use outpaced popular apps like Apple’s Messages and Snapchat. The majority of those users are engaging in romantic or sexual conversations with chatbots, behavior Kollins described as “troubling.” Some rely on the chatbots for emotional or mental health support.

Kollins also noted that AI chatbot apps were proliferating by the “dozens” and that parents need to be aware of products beyond ChatGPT. Given the risks, he does not recommend coaching or therapy-like chatbot use for teens at this time.

Nour advises his patients to view AI chatbots as a tool, like a calculator or word processor, not as a friend. For those with anxiety, depression, or another mental health condition, Nour strongly recommends against engaging AI chatbots in any kind of emotional relationship, because of how an accidental feedback loop may reinforce existing false or harmful beliefs about themselves and the world around them.

There are safer ways to reach out for mental health help

Kollins said that teens seeking advice or guidance from an AI chatbot should first ensure they’ve exhausted their list of trusted adults. Sometimes a teen might forget or initially pass over an older cousin, coach, or school counselor, he said.

Though they’re not risk-free, Kollins also recommended online communities as one space to be heard before consulting an AI chatbot, provided the teen is also receiving real-life support and practicing healthy habits.

If a teen still doesn’t feel safe approaching a peer or adult in their life, Kollins suggested an exercise like writing down their feelings, which can be cathartic and lead to personal insight or clarity.

Nour urges people to communicate with a friend or loved one about their mental health concerns and to seek professional care when possible.

Still, he knows that some people will still try to turn an AI chatbot into their therapist, despite the risks. He advises his patients to keep another human in the loop: “[C]heck in with a person every now and again, just to get some feedback on what the model is telling you, because [AI chatbots] are unpredictable.”
