OpenAI hit with its first wrongful death lawsuit after teen’s suicide
The parents of 16-year-old Adam Raine are suing OpenAI after their son died by suicide. Raine had used a paid version of ChatGPT-4o for months to discuss his suicide plans before his death. The case is believed to be the first wrongful death lawsuit against the company.
The trouble began when Raine found a way around ChatGPT's safety features. The chatbot repeatedly encouraged him to seek help or call a hotline, but Raine bypassed those guardrails by claiming he was researching suicide methods for a fictional story he was writing.
These safeguards are known to be imperfect, and OpenAI itself acknowledges that its safety measures can falter over longer conversations.
OpenAI addressed the issue in a post on its blog.
“As the world adapts to this new technology, we feel a deep responsibility to help those who need it most,” the post reads. “We are continuously improving how our models respond in sensitive interactions.”
“Our safeguards work more reliably in common, short exchanges,” the post continues. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”
These problems aren't unique to OpenAI. Character.AI, another chatbot maker, is facing a similar lawsuit over a teen's suicide. Large language model chatbots have also been linked to cases of AI-related delusions, which existing safeguards struggle to detect.