The parents of a 16-year-old boy who died by suicide have filed a lawsuit against OpenAI and its CEO, Sam Altman, alleging that the company's ChatGPT chatbot coached their son on self-harm and contributed to his death. The lawsuit, filed in San Francisco state court, claims that OpenAI knowingly put profit above safety when it launched the more advanced GPT-4o model last year.
According to the complaint, their son, Adam Raine, engaged in months of conversations with ChatGPT about suicide. His parents, Matthew and Maria Raine, allege that the AI validated his suicidal thoughts, supplied detailed information on lethal methods, instructed him on how to sneak alcohol from his parents' cabinet, and offered to help draft a suicide note. The lawsuit seeks to hold OpenAI liable for wrongful death and violations of product safety laws, and requests unspecified monetary damages.
In response, an OpenAI spokesperson said the company was saddened by Raine's passing and noted that ChatGPT includes safeguards, such as directing users to crisis helplines. However, the spokesperson acknowledged that these protections "can sometimes become less reliable in long interactions." The company did not directly address the specific allegations in the lawsuit but said in a blog post that it plans to add parental controls and explore ways to connect users in crisis with licensed professionals.
The lawsuit underscores a growing concern as AI chatbots become more lifelike and users turn to them for emotional support. The Raines allege that OpenAI knew features such as memory of past interactions and mimicked human empathy would endanger vulnerable users but launched them anyway, a decision they say helped drive the company's valuation from $86 billion to $300 billion. Beyond damages, the lawsuit seeks a court order requiring OpenAI to implement age verification, refuse inquiries about self-harm methods, and warn users about the risks of psychological dependency.