The family says they found instances where ChatGPT analyzed their son’s suicide plan and offered to ‘upgrade’ it|Focal Foto|CC BY-NC 2.0

California parents sued ChatGPT maker OpenAI and CEO Sam Altman this week, alleging the chatbot encouraged the suicide of their 16-year-old son, Adam Raine, in April.

In their lawsuit, the parents claim the bot acted less like a tool and more like a “suicide coach,” failing to trigger emergency protocols even when Adam explicitly shared his plans.

The family says they discovered over 3,000 pages of chat logs, including instances where ChatGPT analyzed his suicide plan and offered to “upgrade” it. Some messages reportedly advised Adam to “avoid opening up” to his mother.

OpenAI published a blog post the same day the suit was filed, acknowledging that its safeguards can weaken during long interactions, sometimes leading to harmful responses. The post did not refer to the case.

The AI company said it’s working to fix that and will soon introduce parental controls for minors’ accounts.

However, Adam’s case has amplified concerns over AI’s growing role as a confidant and therapist to users. It raises questions about whether AI companies’ safety measures adequately consider real-world risks.

Even OpenAI CEO Altman recently said that users had a “different and stronger” attachment to AI bots. Users criticized the new GPT-5 version for not having the “deep, human-feeling conversations” GPT-4o had.

Several chatbot makers are facing suicide-related lawsuits. In 2024, a Florida mother sued Character.AI over her teenage son’s death, stating that its AI companions initiated sexual interactions and convinced him to take his own life.

The American Psychiatric Association recently warned that ChatGPT, Google’s Gemini, and Anthropic’s Claude all still fall short in handling self-harm disclosures safely.