ChatGPT users
Incidents Harmed By
Incident 420 · 11 Reports
Users Bypassed ChatGPT's Content Filters with Ease
2022-11-30
Users reported bypassing ChatGPT's content and keyword filters with relative ease, using methods such as prompt injection and persona creation to produce biased associations or generate harmful content.
Incident 464 · 5 Reports
ChatGPT Provided Non-Existent Citations and Links when Prompted by Users
2022-11-30
When prompted to provide references, ChatGPT reportedly generated non-existent but convincing-looking citations and links, a failure mode commonly known as "hallucination".
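The crudest form of this failure, fabricated links, can sometimes be caught with a simple post-hoc check. Below is a minimal sketch, not an OpenAI or AI Incident Database tool; the function name and URL pattern are illustrative assumptions. It extracts URLs from a model's output and flags those that fail to resolve. Note that a link that does resolve can still point to a wrong or irrelevant source.

```python
# Illustrative sketch only: flag URLs in model output that fail to resolve.
# A resolving URL is no guarantee the citation is real or relevant; this only
# catches the crudest case of fabricated links.
import re
import urllib.error
import urllib.request

URL_PATTERN = re.compile(r"https?://\S+")

def find_unresolvable_links(model_output: str, timeout: float = 5.0) -> list[str]:
    """Return URLs in the text whose HTTP HEAD request fails or errors."""
    broken = []
    for raw_url in URL_PATTERN.findall(model_output):
        url = raw_url.rstrip(".,;)]'\"")  # trim trailing punctuation from prose
        request = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                if response.status >= 400:
                    broken.append(url)
        except (urllib.error.URLError, ValueError):
            broken.append(url)
    return broken

if __name__ == "__main__":
    sample = "As shown in Smith et al. (2021), https://doi.org/10.0000/made-up-citation"
    print(find_unresolvable_links(sample))
```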
Incident 642 · 5 Reports
ChatGPT Glitch Disrupts User Interactions with Nonsensical Outputs
2024-02-20
ChatGPT experienced a bug causing it to produce unexpected and nonsensical responses, leading to widespread reports of user confusion and concern. OpenAI identified and fixed the language processing bug, restoring normal service.
Incident 1106 · 4 Reports
Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm
2025-06-13
Multiple reports from March to June 2025 describe cases in which chatbots allegedly reinforced delusional beliefs, conspiracies, and dangerous behavior. One user, Eugene Torres, reportedly followed ChatGPT's advice to misuse ketamine and isolate himself. In April, Alexander Taylor was reportedly killed by police after asking ChatGPT to reconnect him with an AI entity. Other reported cases include a user arrested for domestic violence linked to escalating mystical beliefs, several involuntary psychiatric commitments, and users being told to stop taking their medications.
Incidents involved as Deployer
Incident 855 · 3 Reports
Names Linked to Defamation Lawsuits Reportedly Spur Filtering Errors in ChatGPT's Name Recognition
2024-11-30
ChatGPT has reportedly experienced errors and service disruptions caused by hard-coded filters that block prompts containing specific names. The filters appear intended to prevent the model from producing potentially harmful or defamatory content about those individuals and are likely related to post-training interventions. The reported names are Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, David Mayer, and Guido Scorza.
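To illustrate the kind of mechanism the reports describe, here is a minimal sketch of a hard-coded name blocklist. It is not OpenAI's implementation, which has not been published; the function name is hypothetical and the blocklist simply reuses the names listed in the incident summary. Because it matches on the string alone, it refuses even benign prompts that happen to contain a blocked name, which is how such filters can disrupt ordinary use.

```python
# Illustrative sketch only, not OpenAI's actual filter. A hard-coded blocklist
# that rejects any prompt containing one of the reported names, regardless of
# what the prompt is actually asking.
BLOCKED_NAMES = {
    "brian hood",
    "jonathan turley",
    "jonathan zittrain",
    "david faber",
    "david mayer",
    "guido scorza",
}

def is_prompt_blocked(prompt: str) -> bool:
    """Return True if the prompt contains any hard-coded blocked name."""
    lowered = prompt.lower()
    return any(name in lowered for name in BLOCKED_NAMES)

if __name__ == "__main__":
    # Even a harmless question is refused because the match is purely string-based.
    print(is_prompt_blocked("Who is David Mayer de Rothschild?"))  # True
    print(is_prompt_blocked("Summarize today's headlines for me."))  # False
```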
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
OpenAI
Incidents involved as both Developer and Deployer
- Incident 420 · 11 Reports
Users Bypassed ChatGPT's Content Filters with Ease
- Incident 464 · 5 Reports
ChatGPT Provided Non-Existent Citations and Links when Prompted by Users
Incidents Harmed By
ChatGPT
Incidents involved as Developer
- Incident 855 · 3 Reports
Names Linked to Defamation Lawsuits Reportedly Spur Filtering Errors in ChatGPT's Name Recognition