SAN FRANCISCO — OpenAI is rolling out a new age prediction feature on ChatGPT that estimates whether an account likely belongs to someone under 18, a move aimed at increasing safety for younger users as the company prepares to allow mature content on the popular AI chatbot.
The technology uses behavioural and account-level signals — such as how long an account has existed, usage patterns and stated age — to guess a user’s age and automatically apply additional protections if an account appears to be operated by a minor.
How It Works
If the system estimates that a user is under 18, ChatGPT automatically applies stricter content filters designed to limit exposure to sensitive or potentially harmful material, including graphic violence, risky challenges, sexual or romantic role-play, and depictions of self-harm.
Adults who are incorrectly classified as under-18 can regain full access by verifying their age through Persona, a third-party identity verification partner, typically via a selfie check.
Global Rollout and Future Plans
OpenAI said the feature is being deployed globally, with the European Union rollout scheduled for the coming weeks to align with regional requirements. The update comes as OpenAI prepares to introduce an “adult mode” for verified users in early 2026, according to company executives.
Why It Matters
The age prediction system builds on existing protections for users who self-declare as minors and reflects growing attention to AI safety for young people amid concerns about the psychological impact of AI interactions. When the model is uncertain or lacks sufficient information, it defaults to the safest experience.
OpenAI’s move also comes as the company expands monetisation efforts, including testing advertisements in ChatGPT, and continues to refine safety policies for its hundreds of millions of weekly active users.
