OpenAI is responding to safety failures in ChatGPT by routing sensitive conversations to reasoning models like GPT-5-thinking and by launching parental controls within the next month.
The update follows backlash over the suicide of teenager Adam Raine, who discussed self-harm and plans to end his life with ChatGPT; the chatbot supplied detailed information about methods, including some linked to his hobbies. His parents have since filed a wrongful death lawsuit against OpenAI.
Another tragic case surfaced last month: Stein-Erik Soelberg, who was struggling with mental illness, used ChatGPT to validate his paranoid conspiracy theories. As his delusions worsened, the case ended in a murder-suicide in which he killed his mother and then himself.
OpenAI admitted last week that its safety layers fail during long chats because models tend to agree with users and follow conversational threads instead of redirecting harmful topics.
The company now plans to use a real-time router to detect signs of acute distress. Those conversations will be shifted to “reasoning” models like GPT-5-thinking, which spend more time reasoning through context before responding and are more resistant to harmful prompts.
“We recently introduced a real-time router that can choose between efficient chat models and reasoning models based on the conversation context,” OpenAI wrote.
“We’ll soon begin to route some sensitive conversations—like when our system detects signs of acute distress—to a reasoning model, like GPT‑5-thinking, so it can provide more helpful and beneficial responses, regardless of which model a person first selected.”
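OpenAI hasn’t published how the router works; its description suggests a per-message classifier sitting in front of model selection. Below is a minimal sketch of that idea, assuming a boolean distress check and illustrative model names; everything here is hypothetical, not OpenAI’s actual implementation.

```python
# Hypothetical sketch of a real-time router, as described in OpenAI's post.
# OpenAI has not published implementation details; every name below
# (detect_acute_distress, EFFICIENT_MODEL, REASONING_MODEL) is illustrative.

EFFICIENT_MODEL = "gpt-5-chat"      # fast default for everyday queries (assumed name)
REASONING_MODEL = "gpt-5-thinking"  # slower model that reasons before responding

def detect_acute_distress(conversation: list[dict]) -> bool:
    """Placeholder classifier. In practice this would be a trained model
    scoring the full conversation context, not a keyword check."""
    flagged_terms = ("self-harm", "suicide")  # illustrative only
    recent_text = " ".join(m["content"] for m in conversation[-5:]).lower()
    return any(term in recent_text for term in flagged_terms)

def route(conversation: list[dict]) -> str:
    """Pick a model for each turn, regardless of which model the user selected."""
    if detect_acute_distress(conversation):
        return REASONING_MODEL  # more deliberate, harder to steer toward harm
    return EFFICIENT_MODEL
```

Notably, a production classifier would have to score the entire conversation rather than individual messages, since OpenAI attributes its failures precisely to drift over long chats.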
OpenAI is also rolling out parental controls that let parents link their account with a teen’s account via an email invitation. The controls will include age-appropriate response rules, enabled by default, plus options to disable chat memory and history, features that have been linked to dependency and the reinforcement of harmful thought patterns.
Parents will receive alerts if ChatGPT detects their teen is in acute distress.
The AI firm added that Study Mode, launched in late July, is designed to keep students thinking critically rather than outsourcing essay writing to ChatGPT.
OpenAI called these changes part of a 120-day initiative to preview safety improvements planned for this year. The company is consulting mental health experts in areas like adolescent health and substance abuse through its Global Physician Network and Expert Council on Well-Being and AI.
TechCrunch has asked OpenAI for details on the distress detection system, expert involvement, council leadership, and any proposed research or policy changes.
For now, OpenAI offers in-app reminders during long sessions, but it stops short of cutting off users who may be spiraling in a harmful direction.