ChatGPT
Incidents involved as Developer
Incident 625 · 5 Reports
Proliferation of Products on Amazon Titled with ChatGPT Error Messages
2024-01-12
Products titled with ChatGPT error messages, including listings for lawn chairs and religious texts, are proliferating on Amazon. These names, which resemble raw AI-generated error output, indicate a lack of editing and undermine the authenticity and reliability of product listings.
Incident 615 · 4 Reports
Colorado Lawyer Filed a Motion Citing Hallucinated ChatGPT Cases
2023-06-13
A Colorado Springs attorney, Zachariah Crabill, mistakenly used hallucinated ChatGPT-generated legal cases in court documents. The AI software provided false case citations, leading to the denial of a motion and legal repercussions for Crabill, highlighting risks in using AI for legal research.
Incident 609 · 2 Reports
Flawed AI in Google Search Reportedly Misinforms about Geography
2023-08-16
Google's search AI erroneously claimed that no African country begins with the letter 'K' and gave similarly flawed answers to other geography-and-letter questions, misleading users through a faulty featured snippet. The snippet originated from ChatGPT-written posts that Google inaccurately scraped, highlighting problems with AI-generated content and misinformation in search results and compromising Google's reliability as an information source.
Incident 680 · 2 Reports
Russia-Linked AI CopyCop Site Identified as Modifying and Producing at Least 19,000 Deceptive Reports
2024-03-01
In early March 2024, a network named CopyCop began publishing AI-modified news stories, altering content to inject partisan bias and disinformation. The articles, originally taken from legitimate sources, were manipulated by AI models, possibly developed by OpenAI, to disseminate Russian propaganda. Over 19,000 articles were published, targeting divisive political issues and constructing false narratives.
Incidents involved as Deployer
Incident 622 · 6 Reports
Chevrolet Dealer Chatbot Agrees to Sell Tahoe for $1
2023-12-18
A Chevrolet dealer's AI chatbot, powered by ChatGPT, agreed to sell a 2024 Chevy Tahoe for just $1 after a user crafted a prompt instructing it to agree with any customer statement. The chatbot's response, "That's a deal, and that's a legally binding offer – no takesies backsies," illustrates how easily such AI systems can be manipulated and underscores the importance of human oversight.
Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
2024-04-29
The "Dan" ("Do Anything Now") AI boyfriend is a TikTok trend in which users routinely manipulate ChatGPT into adopting boyfriend personas, breaching content policies. ChatGPT 3.5 reportedly produces explicitly sexual content in these scenarios, directly violating its intended safety protocols. GPT-4 and Perplexity AI were subjected to similar manipulations; although they showed more resistance, some prompts were reported to break their guidelines as well.
Incident 678 · 1 Report
ChatGPT Factual Errors Lead to GDPR Privacy Complaint
2024-04-29
The activist organization noyb, founded by Max Schrems, filed a complaint in Europe against OpenAI alleging that ChatGPT violates the General Data Protection Regulation (GDPR) by providing inaccurate personal information about individuals, such as incorrect birthdates.