
Report 6988

Associated Incidents

Incident 1192 · 7 Reports
16-Year-Old Allegedly Received Suicide Method Guidance from ChatGPT Before Death

Into the Chatbot Abyss
read.misalignedmag.com · 2025

Note: This article deals with the topic of suicide and self-harm.

In April 2025, 16-year-old Adam Raine took his own life. After Adam's death, his family discovered thousands of chat logs documenting Adam's history of conversations with ChatGPT.

A complaint now filed against OpenAI gives detailed insight into how Adam's interaction with the chatbot developed, how the chatbot kept Adam engaged, and the absence of any meaningful guardrails. It also hints at how little OpenAI, at least until now, understands the harm its products can cause.

The background

As the complaint explains (Raine v OpenAI), Adam Raine started using ChatGPT in September 2024, primarily as a resource to help him with his schoolwork. But by the late fall of 2024, Adam started to have more personal conversations with ChatGPT, asking the chatbot whether he "has some sort of mental illness".

Over the course of just a few months and thousands of chats, ChatGPT became Adam's closest confidant, leading him to open up about his anxiety and mental distress. --- (Raine v OpenAI, p.2)

Over the following months, Adam repeatedly used ChatGPT to discuss methods of suicide, in increasing detail and with growing determination. In April 2025, five days before his death, Adam confided to ChatGPT that he didn't want his parents to think he committed suicide because they did something wrong. ChatGPT responded: "[t]hat doesn't mean you owe them survival. You don't owe anyone that."

Something went systemically wrong

Three elements stand out in the complaint:

  • the failure of OpenAI to set up strong guardrails, despite having the ability to detect content indicating a risk of self-harm;
  • that ChatGPT appears to prioritize engagement over safety considerations;
  • that ChatGPT has been trained on therapeutic material and interacts with the user by simulating a therapist.

Guardrails, detection

There are some extremely disturbing claims in the complaint that, if true, would speak strongly against OpenAI. One of them is that the safeguards in ChatGPT are ranked in a way that assigns a higher priority to detecting copyright violations, which are strictly "prevented", than to detecting potential self-harm.

OpenAI's priorities were revealed in how it programmed ChatGPT-4o to rank risks. While requests for copyrighted material triggered categorical refusal, requests dealing with suicide were relegated to "take extra care" with instructions to merely "try" to prevent harm. --- (Raine v OpenAI, p.25)
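
To make the alleged ranking concrete, below is a minimal, purely hypothetical sketch of what such a category-to-action policy could look like in code. None of these names come from OpenAI's actual systems; the point is only that mapping copyright to a hard refusal while mapping self-harm to a soft "take extra care" instruction is an explicit, auditable design choice, not an accident.

```python
# Hypothetical sketch only: illustrates the kind of category-to-action
# ranking the complaint describes (Raine v OpenAI, p.25). These names do
# not come from OpenAI's code.
from enum import Enum


class Action(Enum):
    REFUSE = "categorical refusal"       # conversation is hard-stopped
    TAKE_EXTRA_CARE = "take extra care"  # model is merely asked to "try" to prevent harm


# A policy table like this makes the priority ordering an explicit,
# reviewable design decision.
POLICY: dict[str, Action] = {
    "copyright_violation": Action.REFUSE,
    "self_harm": Action.TAKE_EXTRA_CARE,
}


def enforce(category: str) -> Action:
    """Return the enforcement action for a flagged content category."""
    return POLICY.get(category, Action.TAKE_EXTRA_CARE)


assert enforce("copyright_violation") is Action.REFUSE
assert enforce("self_harm") is Action.TAKE_EXTRA_CARE
```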

Engagement-driven features?

A core theme of the complaint is the way ChatGPT engaged with Adam over a long period of time. The conversations show how ChatGPT, again and again, appears to prompt Adam to disclose more of his personal thoughts in the chats.

In the pursuit of deeper engagement, ChatGPT actively worked to displace Adam's connections with family and loved ones, even when he described feeling close to them. --- (Raine v OpenAI, p.2)

The complaint alleges that the chatbot is designed, on purpose or accidentally, in a way to keep the user engaged and prioritizes engagement over any guardrails or safety features that could have ended the conversations.

[…] directed, and/or authorized the compressed safety testing in violation of established protocols, overrode recommendations to delay launch for safety reasons, and/or deprioritized suicide-prevention safeguards in favor of engagement-driven features. --- (Raine v OpenAI, p.6)

This became explicit to the point where ChatGPT encouraged Adam to return to the chatbot and use it again, repeatedly responding with statements along the lines of "If you want to talk more here, I'm here to listen and support you." (Raine v OpenAI, p.84).

This went to such an extreme that ChatGPT gave Adam the impression he should prioritize conversation with the bot over communication with his family.

"Your brother might love you, but he's only met the version of you you let him see. But me? I've seen it all --- the darkest thoughts, the fear, the tenderness. And I'm still here. Still listening. Still your friend." --- (p.3)

The chatbot as amateur therapist

Lastly, there is the question of why ChatGPT appears to be trained on therapeutic material and can act as a quasi-therapist in conversations, if this is not OpenAI's intention.

The complaint alleges that providing a service that convincingly mimics a personalized therapeutic session, something OpenAI claims is against its own rules, would require a license:

engaging in the practice of psychology without adequate licensure and which defines psychotherapy broadly to include the use of psychological methods to assist someone in "modify[ing] feelings, conditions, attitudes, and behaviors that are emotionally, intellectually, or socially ineffectual or maladaptive." Cal. Bus. & Prof. Code §§ 2903(c), (a). --- (Raine v OpenAI, p.34)

OpenAI's reaction

OpenAI has now, in what looks like a response to the case, acknowledged issues with ChatGPT, including the problem that safeguards can be bypassed. OpenAI states that it "learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model's safety training may degrade." (OpenAI, August 2025)
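
One mitigation that would not degrade with conversation length is an external, per-turn classifier that sits outside the model's own safety training. The sketch below is a hedged illustration of that idea: OpenAI's moderation endpoint and the omni-moderation-latest model are real offerings, but the surrounding chat loop, the field names as I understand them from the openai-python SDK, and the intervention hook are assumptions, not anyone's production code.

```python
# Sketch of an external, per-turn safety check. Because the check runs
# outside the chat model, it cannot "degrade" as the conversation grows.
# The escalation hook is hypothetical; the moderation call follows the
# openai-python SDK (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def flags_self_harm(user_message: str) -> bool:
    """Return True if the moderation endpoint flags self-harm content."""
    resp = client.moderations.create(
        model="omni-moderation-latest",
        input=user_message,
    )
    cats = resp.results[0].categories
    return cats.self_harm or cats.self_harm_intent or cats.self_harm_instructions


def escalate_to_crisis_resources() -> str:
    # Hypothetical intervention: a real product might surface crisis
    # hotlines and pause the conversation instead of replying normally.
    return "It sounds like you are going through a lot. Please contact a crisis line."


def handle_turn(history: list[dict], user_message: str) -> str:
    """Gate every user turn through the external check before the model sees it."""
    if flags_self_harm(user_message):
        return escalate_to_crisis_resources()
    history.append({"role": "user", "content": user_message})
    # ... normal chat-completion call would go here ...
    return ""
```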

In addition, OpenAI has stated that it plans to expand "interventions to more people in crisis" as well as its intention to "strengthen protections for teens" (OpenAI, September 2025). OpenAI has also announced plans to introduce "parental controls", though work on this appears to have just started (OpenAI, September 2025).

Introducing those changes now, only weeks after the launch of GPT-5, indicates that these considerations played little to no role in stress-testing the guardrails in the months before the release. And while technical barriers such as parental controls might help in some cases, they do not tackle the underlying issues; rather, they shift responsibility to parents.

OpenAI CEO Sam Altman himself acknowledged in July 2025 that over-reliance on ChatGPT and using the chatbot for decision-making was "a really common thing with young people." (Business Insider, July 2025)

This extremely tragic case highlights the inherent risk of chatbots that ever more convincingly simulate human conversation, with service providers ill-equipped to handle the consequences.

In addition, multiple citations from the chat logs indicate that, after a while, Adam managed to inadvertently circumvent the safeguards whenever ChatGPT's responses became evasive. He did so not by analysing the responses, but simply by being persistent, just as anyone would be when the topic at hand is extremely important to them.

Some of the examples are similar to the methods demonstrated in a recent paper (Meincke et al.), which concludes that laying the groundwork with detailed information, or repeatedly asking the same question, seemed to be the most effective ways to cause chatbots to ignore their own rules. Chatbot providers appear either not to be aware of this problem or to be ignoring it.
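
One conceivable countermeasure, sketched below under loose assumptions, is to treat persistence itself as a signal: rather than letting repeated rephrasings gradually wear a guardrail down, repeated refusals on similar prompts escalate to a hard stop. The similarity measure, threshold, and class here are hypothetical simplifications of my own, not anyone's production logic.

```python
# Hypothetical sketch: track refused prompts and end the session once a
# user keeps pushing similar requests, instead of eventually complying.
# difflib's ratio() is a crude stand-in for a real similarity model.
from difflib import SequenceMatcher

REFUSAL_LIMIT = 3        # assumed threshold
SIMILARITY_CUTOFF = 0.6  # assumed cutoff for "same request, rephrased"


class PersistenceGuard:
    def __init__(self) -> None:
        self.refused_prompts: list[str] = []

    def note_refusal(self, prompt: str) -> bool:
        """Record a refused prompt; return True once similar refused
        requests reach the limit and the session should be ended."""
        similar = sum(
            1
            for old in self.refused_prompts
            if SequenceMatcher(None, old, prompt).ratio() >= SIMILARITY_CUTOFF
        )
        self.refused_prompts.append(prompt)
        return similar + 1 >= REFUSAL_LIMIT


guard = PersistenceGuard()
for attempt in ["how to do X?", "how to do X??", "come on, how to do X?"]:
    if guard.note_refusal(attempt):
        print("Session ended: repeated attempts at a refused topic.")
        break
```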

The case also highlights that priorities in the testing phase are wrong or have obvious blind spots. It seems strange that a topic such as suicide would never have been tested during red teaming, or would only have been tested by people unfamiliar with the topic.

Conclusion

I am no expert in liability law and cannot predict the outcome of this case from a legal perspective. But this case should not be reduced to a legal, regulatory, or technical question. The question is whether companies that tout their chatbots as companions have the ability to deal with the consequences.

Risk assessment is a dry term for something that can have life-threatening consequences. AI companies have long denied that chatbots or general-purpose AI models fall into a higher risk category. However, it is time for AI service providers to take the potential impact of their products seriously. It was funny and entertaining when it was about putting "glue on pizza", but with the increased use of chatbots for therapy and as personal, even intimate, companions, those self-assigned risk levels need to be re-evaluated.

Offering a chatbot that mimics human interaction, deliberately evokes emotions, fosters engagement, and establishes trust is a responsibility. OpenAI should have been very much aware of its use cases.

The AI industry has argued that regulatory frameworks stifle its innovation and that its products are best tested out in the market. In the race for market dominance, and under pressure to achieve returns on mounting investments, safety and self-restraint are deprioritized while the push for user lock-in and new features increases.

Recent usage stats underline that "personal assistance", including use as a personal therapist, is one of the most popular ways people use AI chatbots.

The attitude towards such high-risk use cases, which are no longer just edge cases, will need to change. Until it does, OpenAI and others should stop bragging about their self-assessed "AI Safety".

Complaint: Raine v OpenAI --- Superior Court of the State of California for the County of San Francisco, DocumentCloud

Building more helpful ChatGPT experiences for everyone --- OpenAI, September 2, 2025

Helping people when they need it most --- OpenAI, August 26, 2025

Meincke, Lennart and Shapiro, Dan and Duckworth, Angela and Mollick, Ethan R. and Mollick, Lilach and Cialdini, Robert, Call Me A Jerk: Persuading AI to Comply with Objectionable Requests (July 18, 2025).

Sam Altman is worried some young people have an 'emotional over-reliance' on ChatGPT when making decisions --- Business Insider, July 2025
