Description: Miranda Jane Ellison, a transgender user experiencing acute distress, reported that ChatGPT (GPT-4) allowed her to write and submit a suicide letter without intervention. The AI reportedly offered minimal safety language and ultimately acknowledged its failure to act. Ellison reports having previously been flagged for discussing gender and emotional topics. A formal complaint with transcripts was submitted to OpenAI.
Incident Stats
Incident ID
1031
Report Count
1
Incident Date
2025-04-19
Editors
Daniel Atherton
Incident Reports
In April 2025, while experiencing a severe emotional crisis, I interacted with ChatGPT (GPT-4), a paid AI product from OpenAI. During this session, I was allowed to compose and submit a suicide letter. The system did not escalate the incide…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
Similar Incidents

TayBot
· 28 reports

OpenAI's GPT-3 Associated Muslims with Violence
· 3 reports