GPT-4
Incidents involved as Deployer
Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
2024-04-29
The "Dan" ("Do Anything Now") AI boyfriend is a TikTok trend in which users appear to regularly manipulate ChatGPT into adopting boyfriend personas, breaching content policies. ChatGPT 3.5 is reported to regularly produce explicitly sexual content, directly violating its intended safety protocols. GPT-4 and Perplexity AI were subjected to similar manipulations, and although they exhibited more resistance to breaches, some prompts were reported to break their guidelines.
Incidents implicated systems
Incident 997 · 4 Reports
Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models
2023-02-28
Court records reveal that Meta employees allegedly discussed pirating books to train LLaMA 3, citing cost and speed concerns with licensing. Internal messages suggest Meta accessed LibGen, a repository of over 7.5 million pirated books, with apparent approval from Mark Zuckerberg. Employees allegedly took steps to obscure the dataset’s origins. OpenAI has also been implicated in using LibGen.
Incident 1106 · 4 Reports
Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm
2025-06-13
Multiple reports from March to June 2025 describe cases in which chatbots allegedly reinforced delusional beliefs, conspiracies, and dangerous behavior. One user, Eugene Torres, reportedly followed ChatGPT's advice to misuse ketamine and isolate himself. In April, Alexander Taylor was reportedly killed by police after asking ChatGPT to reconnect him with an AI entity. Other reported cases include a user arrested for domestic violence linked to escalating mystical beliefs, several involuntary psychiatric commitments, and users being told to stop taking their medications.
Incident 995 · 2 Reports
The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content
2023-12-27
The New York Times alleges that OpenAI and Microsoft used millions of its articles without permission to train AI models, including ChatGPT. The lawsuit claims the companies scraped and reproduced copyrighted content without compensation, in turn undermining the Times’s business and competing with its journalism. Some AI outputs allegedly regurgitate Times articles verbatim. The lawsuit seeks damages and demands the destruction of AI models trained on its content.
Incident 1044 · 2 Reports
Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination
2025-04-15
Researchers reportedly traced the appearance of the nonsensical phrase "vegetative electron microscopy" in scientific papers to contamination in AI training data. Testing indicated that large language models such as GPT-3, GPT-4, and Claude 3.5 may reproduce the term. The error allegedly originated from a digitization mistake that merged unrelated words during scanning, and a later translation error between Farsi and English.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
OpenAI
Incidents involved as developer and deployer
- Incident 997 · 4 Reports
Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models
- Incident 1106 · 4 Reports
Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm