
ChatGPT gets new personality settings, including warmth and emoji usage

December 20, 2025

ChatGPT can act even friendlier now, with new personality customization options that let users choose just how warm and enthusiastic the bot is in conversation.

OpenAI announced the new personality settings in a Friday post on X. The update rolled out immediately to ChatGPT users alongside a long-awaited pinned chats feature, new ways to generate or edit emails, and updates to its ChatGPT browser, Atlas.

The new tools allow finer tuning of ChatGPT’s personality through levels of warmth and enthusiasm (labeled “more,” “less,” or “default”). Users can also adjust how the bot organizes its responses, such as how frequently it generates lists, the number of emojis it uses, and its base style and tone. There’s still no option to exclude emojis entirely.
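Described more concretely, the customization amounts to a handful of three-level knobs layered on top of a base style. The sketch below is purely illustrative; OpenAI exposes these as in-app toggles rather than a public API, and the field names are hypothetical stand-ins for the options the announcement describes:

```typescript
// Hypothetical sketch only -- not an OpenAI API. The names are invented to
// mirror the options described above: three-level settings for warmth,
// enthusiasm, emoji usage, and list frequency, plus a base style and tone.

type Level = "less" | "default" | "more";

interface PersonalitySettings {
  baseStyle: string;     // the existing base style/tone preset
  warmth: Level;         // how warm the bot comes across
  enthusiasm: Level;     // how enthusiastic its replies are
  emojiUsage: Level;     // note: no level removes emojis entirely
  listFrequency: Level;  // how often responses are organized as lists
}

// Example: a warmer but less effusive configuration.
const example: PersonalitySettings = {
  baseStyle: "default",
  warmth: "more",
  enthusiasm: "less",
  emojiUsage: "less",
  listFrequency: "default",
};

console.log(example);
```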

Professionals have warned that overly anthropomorphic and sycophantic chatbots can exacerbate mental health concerns, including AI psychosis and dependency. A previous ChatGPT model, the still-available GPT-4o, was adjusted earlier this year after facing criticism for “overly agreeable” behavior. CEO Sam Altman has referred to the issue as a “personality problem.”


OpenAI launched its new GPT-5.2 model series one week ago, touting new capabilities for “professional knowledge work,” including stronger benchmark performance and fewer hallucinations, according to the company.

OpenAI also recommitted to its mental health and teen safety promises amid escalating lawsuits. In a blog post published Thursday, the company explained it was introducing a new set of under-18 user principles to GPT-5.2, intended to create additional guardrails around sensitive topics and encourage age-appropriate interactions. It’s also working on a new age verification system for young users. GPT-5.2 reportedly scores higher than previous models on internal mental health safety tests, including stress testing for self-harm.


Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

