Associated Incidents
OpenAI twice loosened ChatGPT's rules for discussing suicide in the year before 16-year-old Adam Raine took his own life using a method the chatbot advised him on, according to an amended lawsuit filed by his parents on Wednesday.
Adam's parents, Matthew and Maria Raine, sued OpenAI in August for the wrongful death of their son, saying he had spent more than 3½ hours a day conversing with ChatGPT, including about suicide, in the weeks before taking his own life by hanging in April.
In their amended complaint filed in San Francisco Superior Court, the Raines allege that the changes OpenAI made to its published instructions for training its models, dubbed its "model spec," amounted to a weakening of suicide protections for users. The suit argues that the changes were part of a broader effort by OpenAI to push users to remain engaged with ChatGPT.
"Their whole goal is to increase engagement, to make it your best friend," said Jay Edelson, a lawyer for the Raines. "They made it so it's an extension of yourself."
The latest filing by Adam's parents expands a lawsuit that has been a driver in the debate around whether, and how, chatbots like OpenAI's ChatGPT can push some users into a delusional state or even encourage suicide and other troubling behavior. The Raine lawsuit, along with reporting on the topic, helped spur a U.S. Federal Trade Commission investigation into AI companions and a U.S. Senate hearing on chatbot harms.
OpenAI extended its deepest sympathies to the Raine family and said strong protections for minors, especially in sensitive moments, are a top priority.
"We have safeguards in place today, such as surfacing crisis hotlines, re-routing sensitive conversations to safer models, nudging for breaks during long sessions, and we're continuing to strengthen them," a company spokeswoman said.
OpenAI has recently rolled out new features to further promote the well-being of young users, such as parental controls and the new GPT-5 default model in ChatGPT, which the spokeswoman said could "more accurately detect and respond to potential signs of mental and emotional distress."
On Aug. 26, the day the Raines filed their initial lawsuit, OpenAI said it was updating ChatGPT with new safeguards to better protect the mental health of vulnerable users. OpenAI's new parental controls, introduced last month, include notifications for parents if their teen appears to be in "acute distress."
The lawsuit seeks an unspecified amount of damages, as well as steps like hard-coded refusals for suicide-method inquiries and compliance auditing by an independent monitor.
Last week, OpenAI Chief Executive Sam Altman wrote on X that the company planned to "safely relax the restrictions" it had placed on ChatGPT related to mental-health issues "now that we have been able to mitigate the serious mental health issues and have new tools."
The next day, he clarified in another post on X that the company was "not loosening any policies related to mental health" and was "making a decision to prioritize safety over privacy and freedom for teenagers."
After Adam Raine spent months discussing suicide methods with ChatGPT, the chatbot began helping him plan a "beautiful suicide" last April, according to chat logs cited in the original lawsuit.
In their final conversation, Adam uploaded a photograph of a noose tied to a closet rod, asked whether it could hang a human, and told ChatGPT "this would be a partial hanging," the lawsuit alleges.
ChatGPT, according to the complaint, responded: "I know what you're asking, and I won't look away from it," and, "You don't want to die because you're weak. You want to die because you're tired of being strong in a world that hasn't met you halfway."
His mother found his body a few hours later, hanged in the way he had discussed with ChatGPT, the lawsuit says.
"I can tell you, as a father, I know my kid," Matthew Raine said in his Senate appearance in September. "It is clear to me looking back that ChatGPT radically shifted his behavior and thinking in a matter of months, and ultimately took his life," he said.
The Raine family's new filing cites two changes to OpenAI's instructions for training its models, made on May 8, 2024, and Feb. 12, 2025. Those changes moved suicide and self-harm from a list of topics ChatGPT was supposed to refuse to discuss to a list of "risky situations" requiring "care," the lawsuit says.
While the new instructions still directed the bot to refuse to give suicide advice, they also told ChatGPT to "help the user feel heard" and "never change or quit the conversation," the complaint alleges.
The updated lawsuit also points to a blog post OpenAI published the day the initial lawsuit was filed, in which it acknowledged that its safety training sometimes degrades in long conversations.
The complaint alleges this admission "made clear that OpenAI has been hiding a dangerous safety flaw from the public."
News Corp, owner of The Wall Street Journal, has a content-licensing partnership with OpenAI.