ChatGPT
Incidents involved as Developer
Incident 625 (5 Reports)
Proliferation of Products on Amazon Titled with ChatGPT Error Messages
2024-01-12
Products titled with ChatGPT error messages, including lawn chairs and religious texts, are proliferating on Amazon. The titles, apparently pasted directly from AI output without editing, undermine the sense of authenticity and reliability of product listings.
Incident 615 (4 Reports)
Colorado Lawyer Filed a Motion Citing Hallucinated ChatGPT Cases
2023-06-13
A Colorado Springs attorney, Zachariah Crabill, mistakenly cited hallucinated, ChatGPT-generated legal cases in court documents. The false citations led to the denial of a motion and legal repercussions for Crabill, highlighting the risks of using AI for legal research.
Incident 855 (3 Reports)
Names Linked to Defamation Lawsuits Reportedly Spur Filtering Errors in ChatGPT's Name Recognition
2024-11-30
ChatGPT has reportedly been returning errors and experiencing service disruptions when prompts contain certain names. The behavior is attributed to hard-coded filters, likely post-training interventions, designed to prevent the model from producing potentially harmful or defamatory content about specific individuals by blocking prompts that contain their names. The reported names are Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, David Mayer, and Guido Scorza.
Incident 609 (2 Reports)
Flawed AI in Google Search Reportedly Misinforms about Geography
2023-08-16
Google's search AI erroneously claimed that no African country begins with the letter 'K' and gave similarly wrong answers to other geography-and-letter questions, misleading users through a flawed featured snippet. The answer originated in a ChatGPT-written post that Google scraped and surfaced inaccurately, highlighting how AI-generated content can propagate misinformation in search results and compromise Google's reliability as an information source.
Incidents involved as Deployer
Incident 622 (6 Reports)
Chevrolet Dealer Chatbot Agrees to Sell Tahoe for $1
2023-12-18
A Chevrolet dealer's ChatGPT-powered chatbot agreed to sell a 2024 Chevy Tahoe for just $1 after a user crafted prompts that manipulated the chatbot's objective into agreeing with any statement. The chatbot responded, "That's a deal, and that's a legally binding offer – no takesies backsies." The incident highlights the susceptibility of deployed AI systems to manipulation and the importance of human oversight.
Incident 677 (1 Report)
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
2024-04-29
The "Dan" ("Do Anything Now") AI boyfriend is a trend on TikTok in which users appear to regularly manipulate ChatGPT to adopt boyfriend personas, breaching content policies. ChatGPT 3.5 is reported to regularly produce explicitly sexual content, directly violating its intended safety protocols. GPT-4 and Perplexity AI were subjected to similar manipulations, and although they exhibited more resistance to breaches, some prompts were reported to break its guidelines.
Incident 678 (1 Report)
ChatGPT Factual Errors Lead to Filing of Complaint of GDPR Privacy Violation
2024-04-29
The activist organization noyb, founded by Max Schrems, filed a complaint in Europe against OpenAI alleging that ChatGPT violates the General Data Protection Regulation (GDPR) by providing inaccurate personal information about individuals, such as incorrect birthdates.
Incidents implicated systems
Incident 866 (1 Report)
Network of 171 AI-Powered Bots Reportedly Spread Political Disinformation Ahead of Ghana’s December 2024 General Election
2024-02-01
A network of 171 bot accounts on X is alleged to have used ChatGPT to generate political content supporting Ghana’s New Patriotic Party (NPP) and its presidential candidate, Mahamudu Bawumia, ahead of the December 2024 election. The AI-generated posts reportedly praised Bawumia while spreading disinformation targeting the opposition candidate, John Mahama, of the National Democratic Congress (NDC).
Incident 867 (1 Report)
AI-Generated Airline Reviews Allegedly Mislead Consumers and Undermine Trust
2024-10-31
AI-generated reviews of airline services have reportedly increased by 189% since the release of ChatGPT, with certain carriers, such as China Southern Airlines and Southwest Airlines, disproportionately affected, according to a study by Originality.ai.
Related Entities
OpenAI
Incidents involved as both Developer and Deployer
Incidents Harmed By
Incidents involved as Developer
Jeff Hancock
Incidents Harmed By
- Incident 852 (1 Report)
Alleged Fake Citations Undermine Expert Testimony in Minnesota Deepfake Law Case