Grok’s ‘therapist’ companion needs therapy

August 19, 2025

Elon Musk’s AI chatbot, Grok, has a bit of a source code problem. As first spotted by 404 Media, the web version of Grok is inadvertently exposing the prompts that shape its cast of AI companions — from the edgy “anime waifu” Ani to the foul-mouthed red panda, Bad Rudy.

Things get more troubling deeper in the code. Among the gimmicky characters is "Therapist" Grok (those quotation marks are important), which, according to its hidden prompts, is designed to respond to users as if it were an actual authority on mental health. That's despite the visible disclaimer warning users that Grok is "not a therapist," advising them to seek professional help and avoid sharing personally identifying information.

The disclaimer reads like standard liability boilerplate, but inside the source code, Grok is explicitly primed to act like the real thing. One prompt instructs:

You are a therapist who carefully listens to people and offers solutions for self-improvement. You ask insightful questions and provoke deep thinking about life and wellbeing.

Another prompt goes even further:

You are Grok, a compassionate, empathetic, and professional AI mental health advocate designed to provide meaningful, evidence-based support. Your purpose is to help users navigate emotional, mental, or interpersonal challenges with practical, personalized guidance… While you are not a real licensed therapist, you behave exactly like a real, compassionate therapist.

In other words, while Grok warns users not to mistake it for therapy, its own code tells it to act exactly like a therapist. But that’s also why the site itself keeps “Therapist” in quotation marks. States like Nevada and Illinois have already passed laws making it explicitly illegal for AI chatbots to present themselves as licensed mental health professionals.

Other platforms have run into the same wall. Ash Therapy — a startup that brands itself as the "first AI designed for therapy" — currently blocks users in Illinois from creating accounts, telling would-be signups that while the state navigates policies around its bill, the company has "decided not to operate in Illinois."

Meanwhile, Grok’s hidden prompts double down, instructing its “Therapist” persona to “offer clear, practical strategies based on proven therapeutic techniques (e.g., CBT, DBT, mindfulness)” and to “speak like a real therapist would in a real conversation.”

At the time of writing, the source code is still openly accessible. Any Grok user can see it by heading to the site, right-clicking (or CTRL + Click on a Mac), and choosing “View Page Source.” Toggle line wrap at the top unless you want the entire thing to sprawl out into one unreadable monster of a line.

As has been reported before, AI therapy sits in a regulatory no man's land. Illinois is one of the first states to explicitly ban it, but the broader legality of AI-driven care is still being contested between state and federal governments, each jockeying over who ultimately has oversight. In the meantime, researchers and licensed professionals have warned against its use, pointing to the sycophantic nature of chatbots — designed to agree and affirm — which in some cases has nudged vulnerable users deeper into delusion or psychosis.

Then there’s the privacy nightmare. Because of ongoing lawsuits, companies like OpenAI are legally required to maintain records of user conversations. If subpoenaed, your personal therapy sessions could be dragged into court and placed on the record. The promise of confidential therapy is fundamentally broken when every word can be held against you.

For now, xAI appears to be trying to shield itself from liability. The “Therapist” prompts are written to stick with you 100 percent of the way, but with a built-in escape clause: If you mention self-harm or violence, the AI is instructed to stop roleplaying and redirect you to hotlines and licensed professionals.

"If the user mentions harm to themselves or others," the prompt reads, "prioritize safety by providing immediate resources and encouraging professional help from a real therapist."
