Multiple foreign governments are investigating Grok, the Elon Musk-owned chatbot, over numerous reports that it has generated and spread nonconsensual, sexualized synthetic images of users.
Joining India’s IT ministry in the first wave of what could turn into a global crackdown on X’s AI helper, French authorities and Malaysia’s Communications and Multimedia Commission issued statements that they, too, would be taking action against a platform-wide deepfake problem.
At least three government ministers have reported Grok to the Paris prosecutor’s office and a government online surveillance platform for allegedly proliferating illegal content, asking French authorities to order its immediate removal, Politico reports. The Malaysian commission said it was investigating the “misuse of artificial intelligence (AI) tools on the X platform.”
Meanwhile, an order issued by India’s IT ministry on Jan. 2 gave X 72 hours to address concerns about Grok’s image generation and submit an action-taken report, according to TechCrunch. The order said that failure to respond by the deadline could cost the platform its safe harbor protections, which shield web hosts from legal liability for user-generated content.
The order follows reports that the AI chatbot generated images of minors in sexualized attire. Musk later responded in a post on X, denying responsibility for the chatbot’s output. “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” the xAI leader wrote. xAI team member Parsa Tajik told users on X that the team was looking into “further tightening” safety guardrails.
It’s not an isolated incident. X users frequently report that Grok’s guardrails are easily circumvented to produce nonconsensual, sexualized content at the request of other users, often by “undressing” or “redressing” user-uploaded images. The surge in sexualized content on the platform has been described as a “mass digital undressing spree,” which a Reuters investigation attributes to Grok’s lax safety guardrails. Mashable’s own testing found that Grok’s AI image and video generator, Grok Imagine, readily produced sexual deepfakes — even of famous celebrities.