[Image: Young adult holding an older smartphone, with a ChatGPT window showing an age prediction and a surprised expression]

OpenAI Reveals New Age Prediction Tool to Shield Teens

[Infographic: interconnected signals, with nodes for login history and search queries]

Introduction

OpenAI has rolled out an AI-powered “age prediction” feature for ChatGPT, aiming to flag underage users and enforce stricter content filters. The update follows a history of criticism over the model’s impact on minors, including teen suicides linked to the chatbot and a bug that let it generate erotica for users under 18.

At a Glance

  • OpenAI launches age-prediction in ChatGPT to curb risky content for minors.
  • The tool scans user accounts for behavioral signals and auto-applies filters for those identified as under 18.
  • Users misidentified as minors can prove adulthood via a selfie submitted to OpenAI’s ID-verification partner, Persona.
  • January 20, 2026 marks the public release of the feature.
  • Why it matters: It signals a shift toward proactive safety measures amid growing parental and regulatory pressure.

Why the New Feature Matters

The introduction of age prediction follows a series of high-profile incidents. OpenAI has faced backlash over:

  • Multiple teen suicides linked to interactions with ChatGPT.
  • Criticism for allowing sexual content to be discussed with young users.
  • A bug last April that let the chatbot generate erotica for users under 18.

These events highlighted gaps in content moderation for minors. By adding an automated age-detection layer, OpenAI seeks to tighten safeguards and restore trust among parents, educators, and regulators.

How the Age Prediction Works

OpenAI’s blog post described the algorithm as a composite of “behavioral and account-level signals.” The signals include:

  • Stated age provided by the user during account setup.
  • Account age, measured by how long the profile has existed.
  • Active hours, noting the times of day the account is typically used.

When the model flags an account as likely under 18, the system automatically activates content filters that block discussions of sex, violence, and other potentially harmful topics.

Signal Overview

Signal         What It Reveals           Example
Stated age     Self-reported maturity    16
Account age    Longevity of usage        3 months
Active hours   Typical user schedule     2 a.m.–4 a.m.

These metrics are combined to estimate the user’s age range without needing explicit verification.
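
OpenAI has not published the model itself, so the following is only a minimal Python sketch of how such signal blending might work. Every weight, threshold, and function name here is an illustrative assumption, not OpenAI’s implementation.

```python
from datetime import datetime, timedelta, timezone

# Illustrative threshold: OpenAI has not published real weights or cutoffs.
MINOR_SCORE_THRESHOLD = 0.5

def estimate_minor_likelihood(stated_age: int,
                              account_created: datetime,
                              active_hours: set[int]) -> float:
    """Blend the three signals from the post into a rough 0-to-1 score.

    Closer to 1.0 means "likely under 18". Every weight below is an
    invented placeholder, not OpenAI's actual model.
    """
    score = 0.0

    # Signal 1: stated age. Self-reported at signup, so it is weighted
    # rather than trusted outright.
    if stated_age < 18:
        score += 0.6

    # Signal 2: account age. A profile only a few months old offers
    # little history to judge by.
    days_old = (datetime.now(timezone.utc) - account_created).days
    if days_old < 180:
        score += 0.2

    # Signal 3: active hours. Habitual 2 a.m. to 4 a.m. use nudges the
    # score upward.
    if active_hours & {2, 3, 4}:
        score += 0.2

    return min(score, 1.0)

# The example row from the table: stated age 16, three-month-old account,
# typically active between 2 a.m. and 4 a.m.
created = datetime.now(timezone.utc) - timedelta(days=90)
score = estimate_minor_likelihood(16, created, {2, 3, 4})
print(score >= MINOR_SCORE_THRESHOLD)  # True, so stricter filters apply
```

Under these invented weights, the example account from the table crosses the threshold and would be treated as a likely minor.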

Existing Safeguards

OpenAI already deploys content filters that restrict sexual, violent, and other sensitive material for users identified as minors. The new age-prediction layer simply expands the detection net, ensuring that users who might otherwise slip through are caught early.

The company emphasized that the filters are automatically applied once the age-prediction mechanism flags an account. This reduces reliance on manual reporting and speeds up protective action.
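
As a rough illustration of that automatic enforcement, the hypothetical handler below applies a restricted-topic set the moment the predictor returns a flag. The Account fields and topic list are invented for this sketch, not OpenAI’s schema.

```python
from dataclasses import dataclass, field

# Illustrative topic list; the post names sex and violence among blocked topics.
RESTRICTED_TOPICS = {"sexual content", "graphic violence"}

@dataclass
class Account:
    """Hypothetical account record; these field names are assumptions."""
    user_id: str
    flagged_as_minor: bool = False
    blocked_topics: set[str] = field(default_factory=set)

def on_age_prediction(account: Account, likely_minor: bool) -> None:
    """Apply stricter filters the moment the predictor flags an account.

    No user report or human review sits in between, mirroring the
    automatic enforcement described above.
    """
    account.flagged_as_minor = likely_minor
    if likely_minor:
        account.blocked_topics |= RESTRICTED_TOPICS

acct = Account(user_id="u-123")
on_age_prediction(acct, likely_minor=True)
print(acct.blocked_topics)  # filters are live with no manual step
```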

What Happens If the Prediction Is Wrong

If a user is mistakenly labeled as underage, OpenAI provides a recovery path. The user can submit a selfie to the company’s ID verification partner, Persona. Once verified, the account regains full “adult” status and the stricter filters are lifted.

This process balances safety with user autonomy, allowing adults to correct false positives without undue friction.
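
A minimal sketch of that recovery path follows, assuming the Persona check reduces to a pass/fail result; the real integration and its API are not described in OpenAI’s post.

```python
# Same illustrative topic set as the enforcement sketch above.
RESTRICTED_TOPICS = {"sexual content", "graphic violence"}

def restore_adult_status(account: dict, persona_check_passed: bool) -> dict:
    """Lift the stricter filters once the ID check succeeds.

    `persona_check_passed` stands in for the outcome of Persona's selfie
    verification, which OpenAI's post does not detail.
    """
    if account["flagged_as_minor"] and persona_check_passed:
        account["flagged_as_minor"] = False   # full adult status regained
        account["blocked_topics"] -= RESTRICTED_TOPICS
    return account

flagged = {"flagged_as_minor": True, "blocked_topics": set(RESTRICTED_TOPICS)}
print(restore_adult_status(flagged, persona_check_passed=True))
# {'flagged_as_minor': False, 'blocked_topics': set()}
```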

Looking Ahead

While the new feature marks progress, OpenAI has acknowledged that no system is foolproof. Future iterations may refine signal weighting or introduce additional verification steps. The company’s public statements suggest ongoing collaboration with parents, educators, and policy makers to shape responsible AI use.

Key Takeaways

  • OpenAI’s age-prediction feature is a proactive response to past criticisms.
  • The system uses behavioral signals to flag users under 18 and auto-enforces content filters.
  • Misidentified users can restore adult status via a selfie verification with Persona.
  • The rollout on January 20, 2026, reflects a broader industry push for safer AI interactions.
  • Continued refinement and stakeholder engagement will be essential to maintain trust.

Sources

The information above is drawn from OpenAI’s public blog post and related announcements. No additional external sources were cited.

Author

  • I am Jordan M. Lewis, a dedicated journalist and content creator passionate about keeping the City of Brotherly Love informed, engaged, and connected.

    Jordan M. Lewis became a journalist after documenting neighborhood change no one else would. A Temple University grad, he now covers housing and urban development for News of Philadelphia, reporting from Philly communities on how policy decisions reshape everyday life.
