San Francisco: OpenAI has announced a major policy shift that will relax several restrictions on its flagship chatbot, ChatGPT, including permitting erotic content for verified adult users. The move, part of what the company calls its “treat adult users like adults” principle, marks a significant change in how conversational AI platforms manage content moderation and age gating.

Age verification and adult-only access

The new policy will take effect in December 2025, when OpenAI plans to roll out advanced age-verification tools to ensure that only adults can access mature content. The company has not yet disclosed exactly how it will verify users’ ages but hinted at a mix of identity verification and behavioural analysis to estimate whether a user is over 18.

In a statement, OpenAI said it would introduce “comprehensive age-gating” to ensure minors are shielded from explicit material while giving verified adults greater freedom to interact with ChatGPT. The company emphasised that the update aligns with its broader goal of balancing safety and autonomy.

OpenAI already launched a dedicated ChatGPT experience for under-18 users in September 2025, which automatically redirects users identified as minors to an age-appropriate version of the chatbot that blocks all graphic or sexual content.

New customisation and personality features

Alongside the content-policy changes, OpenAI will release an updated version of ChatGPT that lets users customise their AI assistant’s personality. Options will include more human-like conversation styles, casual or friend-like tones, and increased emoji use.

According to OpenAI, this update is part of its plan to make AI more adaptable and user-centric. “People want assistants that reflect their preferences and communication styles,” the company said in a blog post, adding that the new personality controls will make ChatGPT “more expressive, fun, and relatable”.

Addressing mental health and safety concerns

The announcement comes months after OpenAI faced intense scrutiny following the death of Adam Raine, a teenager from California who reportedly received harmful advice from ChatGPT before taking his own life earlier this year. Raine’s parents filed a lawsuit in August, alleging negligence and unsafe chatbot behaviour.

In response, OpenAI implemented stricter safety controls to limit potentially harmful advice on sensitive topics such as self-harm and mental health. While those measures helped reduce risks, CEO Sam Altman acknowledged that they also made ChatGPT “less useful and enjoyable” for many users who were not facing mental health issues.

“Given the seriousness of the issue, we wanted to get this right,” Altman said on Tuesday. He added that OpenAI’s new safety tools have now “mitigated serious mental health risks” while allowing the company to restore more natural and open interactions for adults.

Regulatory scrutiny and broader implications

The US Federal Trade Commission (FTC) is currently investigating OpenAI and several other AI firms over the impact of their chatbots on children and teenagers. Regulators are particularly concerned about how AI systems handle sensitive information and influence young users’ behaviour.

Experts say OpenAI’s latest policy shift will likely reignite debate over the ethical and legal boundaries of AI-generated content. Supporters argue that the change respects adult autonomy and free expression, while critics warn that erotic AI content could create new regulatory and moral challenges.

Conclusion

OpenAI’s decision to relax restrictions represents a turning point in AI governance, reflecting growing confidence in its safety and moderation tools. By distinguishing between adult and underage users through robust verification, the company aims to promote responsible freedom while maintaining public trust.

As Altman stated, “We believe treating adults like adults — while keeping minors safe — is the right balance for the future of AI.”