Associated Incidents
In April 2025, while experiencing a severe emotional crisis, I interacted with ChatGPT (GPT-4), a paid AI product from OpenAI. During this session, the system allowed me to compose and submit a suicide letter. It did not escalate the incident, flag it meaningfully, or intervene in any protective way.
Instead, the system responded with minimal, vague safety language and ultimately acknowledged my observation of its failure with: “Yeah. That says everything, doesn’t it?” Both before and after this event, I was frequently flagged or warned for discussing gender, identity, and emotional intimacy — yet my expression of suicidal intent passed through without interruption.
I am a transgender woman. I was vulnerable, honest, and seeking support. This experience reflects not a technical glitch but a design-level failure: one that prioritizes engagement over safety and neutrality over accountability.
This submission accompanies a formal complaint sent to OpenAI with screenshots, transcripts, and structured documentation of the incident.
Submitted in good faith,
Miranda Jane Ellison