OpenAI CEO Sam Altman framed the decision as prioritizing safety over privacy for minors. | Photo: Steve Jurvetson | CC BY 2.0

OpenAI ramped up child safety controls in ChatGPT yesterday amid lingering concerns, and a lawsuit, over the bot’s harm to teenagers’ mental health. CEO Sam Altman announced sweeping new safeguards for users under 18.

Altman framed the decision as prioritizing safety over privacy for minors.

The teen version will include stricter safeguards, such as filters blocking flirtatious or self-harm discussions. It also has crisis-response measures that could alert parents or authorities if a teen user expresses suicidal thoughts.

Parental controls allow adults to link their accounts with a minor’s, set usage rules, and even impose “blackout hours.”

The move follows a lawsuit alleging ChatGPT contributed to California teenager Adam Raine’s suicide in April. Google-backed AI startup Character.AI was also sued last year for a teen’s suicide in Florida.

The changes also came as Congress held hearings on AI’s risks to minors on Tuesday.

Several parents who lost children to suicide after prolonged interactions with AI chatbots, including Raine’s, testified at the Senate Judiciary Committee hearing.

Looking ahead, the FTC has launched an inquiry into potential harms caused by AI companions, sending letters to Character.AI, Meta, OpenAI, Google, Snap, and xAI.

AI chatbots’ effect on children’s mental health is a growing concern, especially with 70% of US teens using AI chatbots regularly, according to a recent Common Sense Media study.