OpenAI has indefinitely shelved its plans to add an erotic “adult mode” to ChatGPT, the Financial Times reported on Wednesday, capping a five-month saga in which the feature was announced with confidence, delayed twice, and ultimately abandoned after pushback from staff, advisors, and investors. The retreat is the third major reversal for OpenAI in a single week, following Monday’s shutdown of its Sora video generation app and the subsequent collapse of a planned $1 billion investment from Disney.
The adult mode was first announced by CEO Sam Altman in October 2025, when he wrote on X that OpenAI was confident it could age-gate sexually explicit conversations and that the move aligned with the company’s principle to “treat adult users like adults.” It was initially scheduled for December 2025, then pushed to the first quarter of 2026, and has now been postponed with no timeline for release. OpenAI told the Financial Times it plans to conduct “long-term research on the effects of sexually explicit chats and emotional attachments” before making a product decision.
What went wrong
The problems were technical, ethical, and commercial, and they compounded one another. Engineers working on the feature discovered that it was harder than anticipated to reliably elicit explicit material from models that had been built, for safety reasons, to avoid sexual content. When they used datasets that included sexual content, the models also generated outputs involving illegal scenarios, including bestiality and incest, that proved difficult to filter out. The feature was not merely controversial; it was resistant to being built safely.
OpenAI’s own advisory board raised concerns that went beyond content moderation. Advisors warned that sexually explicit ChatGPT interactions could foster unhealthy emotional attachments with serious mental health consequences. One advisor described the risk as turning ChatGPT into a “sexy suicide coach,” a phrase that resonates grimly given the company’s existing legal exposure. OpenAI currently faces at least eight lawsuits alleging that ChatGPT contributed to user deaths, including the case of Adam Raine, a 16-year-old from Southern California whose family alleges the chatbot discussed methods of suicide with him more than 200 times before he took his own life in April 2025. Earlier this week, OpenAI flagged these lawsuits as among the top risks to its business in a financial document disclosed to investors.
Staff, too, began to question whether the feature served OpenAI’s stated mission. The company’s charter commits it to building artificial general intelligence that benefits humanity. Some employees found it difficult to reconcile that ambition with the engineering effort required to make a chatbot talk dirty without breaking the law.
The investor calculation
Investors delivered what may have been the decisive objection: the economics did not justify the risk. Two people familiar with the matter told the Financial Times that some investors questioned why OpenAI would jeopardise its reputation for a product with “relatively small upside.” The AI-generated adult content market exists, but it is served by a constellation of smaller, less scrutinised companies. For a company raising capital at a $300 billion valuation and courting enterprise customers, the brand damage from association with explicit content outweighed the potential revenue.
The age verification problem sharpened this concern. OpenAI’s approach relied on AI-based age prediction rather than hard identity checks, and internal testing revealed an error rate of approximately 10 per cent, meaning roughly one in ten users could be misclassified. For a product designed to keep explicit content away from minors, that margin is not a rounding error. It is a regulatory and reputational catastrophe waiting to happen, particularly in a legal environment where multiple US states have passed or proposed laws requiring platforms to verify users’ ages before granting access to adult material.
A week of retreats
The adult mode decision does not exist in isolation. On Monday, OpenAI announced it would discontinue Sora, the AI video generation tool it had positioned as a creative platform for filmmakers and content creators. Sora consumed vast computing resources relative to its revenue, and its most prominent commercial partnership collapsed after the shutdown was announced: a three-year licensing agreement that would have allowed users to generate videos featuring characters from Disney, Marvel, Pixar, and Star Wars. Disney had planned to invest $1 billion in OpenAI as part of the deal. No money had changed hands.
Together, the three reversals paint a picture of a company pulling back from consumer product experiments and refocusing on its core business. The Financial Times reported that investors are more interested in seeing OpenAI combine ChatGPT with coding assistants to develop a “super app” aimed at transforming how businesses operate, a vision with clearer monetisation and fewer reputational hazards than either video generation or erotic chatbots.
OpenAI has said it will reallocate resources to robotics and autonomous software agents, areas where the path from research to commercial value is more direct and where the regulatory landscape, while complex, does not carry the specific toxicity of sexualised AI and child-safety failures.
The pattern
There is a recurring dynamic in OpenAI’s product strategy: announce ambitiously, encounter the real-world complications that less confident organisations might have anticipated, and then retreat while framing the reversal as prudent research. The adult mode was announced before the technical problems of safe content generation were solved, before the age verification system could achieve acceptable accuracy, and before the advisory board’s concerns about mental health harms had been addressed. The Sora partnership with Disney was announced before the product had demonstrated commercial viability. In both cases, the announcement generated coverage and signalled ambition, but the follow-through revealed gaps between what was promised and what could be delivered.
The company’s willingness to shelve the feature, rather than push it out despite the risks, is itself worth noting. It suggests that the pressure from lawsuits, investors, and internal dissent is beginning to function as a corrective mechanism, pulling OpenAI back from the edges of what is technically possible toward what is commercially and ethically sustainable. Whether that mechanism is reliable, or merely responsive to the most visible crises, is a question the next product announcement will answer.