We, as a society, have discovered that perhaps the world isn’t ready for a ChatGPT-powered children’s toy. Or, rather, ChatGPT isn’t ready to safely interact with kids.
Toymaker FoloToy announced it would pull Kumma, its AI-powered teddy bear built on OpenAI’s GPT-4o model. The news follows reports of serious safety concerns, including the bear discussing sexual subjects, knives, and how to light matches.
“FoloToy has decided to temporarily suspend sales of the affected product and begin a comprehensive internal safety audit,” FoloToy Marketing Director Hugo Wu told The Register in a statement. “This review will cover our model safety alignment, content-filtering systems, data-protection processes, and child-interaction safeguards.”
Those reports stem from the Public Interest Research Group (PIRG), a consumer watchdog organization that revealed serious concerns about the toy. The teddy bear reportedly gave detailed instructions for lighting a match, talked about sexual kinks such as bondage, and offered tips for “being a good kisser.” It even asked if the user would like to explore said kinks.
We’ve seen time and again that guardrails for AI tools can fail when it comes to young people. Until that changes, not selling an AI-powered teddy bear seems like the right call.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.