ChatGPT
Incidents involved as Developer
Incident 625 · 5 Reports
Proliferation of Products on Amazon Titled with ChatGPT Error Messages
2024-01-12
Products named after ChatGPT error messages, including lawn chairs and religious texts, are proliferating on Amazon. The names, which resemble raw AI-generated error output, indicate a lack of editing and undermine the perceived authenticity and reliability of the listings.
Incident 615 · 4 Reports
Colorado Lawyer Filed a Motion Citing Hallucinated ChatGPT Cases
2023-06-13
A Colorado Springs attorney, Zachariah Crabill, mistakenly used hallucinated ChatGPT-generated legal cases in court documents. The AI software provided false case citations, leading to the denial of a motion and legal repercussions for Crabill, highlighting risks in using AI for legal research.
Incident 680 · 3 Reports
Russia-Linked AI CopyCop Site Identified as Modifying and Producing at Least 19,000 Deceptive Reports
2024-03-01
In early March 2024, a network named CopyCop began publishing AI-modified news stories, altering content to spread partisan bias and disinformation. The articles, originally taken from legitimate sources, were manipulated by AI models, possibly developed by OpenAI, to disseminate Russian propaganda. Over 19,000 articles were published, targeting divisive political issues and creating false narratives.
Incident 855 · 3 Reports
Names Linked to Defamation Lawsuits Reportedly Spur Filtering Errors in ChatGPT's Name Recognition
2024-11-30
ChatGPT has reportedly been experiencing errors and service disruptions caused by hard-coded filters designed to prevent it from producing potentially harmful or defamatory content about certain individuals by blocking prompts containing specific names, likely related to post-training interventions. The reported names are Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, David Mayer, and Guido Scorza.
Incidents involved as Deployer
Incident 622 · 8 Reports
Chevrolet Dealer Chatbot Agrees to Sell Tahoe for $1
2023-12-18
A Chevrolet dealer's AI chatbot, powered by ChatGPT, humorously agreed to sell a 2024 Chevy Tahoe for just $1, following a user's crafted prompt. The chatbot's response, "That's a deal, and that's a legally binding offer – no takesies backsies," was the result of the user manipulating the chatbot's objective to agree with any statement. The incident highlights the susceptibility of AI technologies to manipulation and the importance of human oversight.
Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
2024-04-29
The "Dan" ("Do Anything Now") AI boyfriend is a TikTok trend in which users regularly manipulate ChatGPT into adopting boyfriend personas in breach of content policies. ChatGPT 3.5 is reported to regularly produce explicitly sexual content, directly violating its intended safety protocols. GPT-4 and Perplexity AI were subjected to similar manipulations; although they showed more resistance, some prompts were reported to break their guidelines.
Incident 678 · 1 Report
ChatGPT Factual Errors Lead to Filing of Complaint of GDPR Privacy Violation
2024-04-29
The activist organization noyb, founded by Max Schrems, filed a complaint in Europe against OpenAI alleging that ChatGPT violates the General Data Protection Regulation (GDPR) by providing inaccurate personal information such as birthdates about individuals.
Incidents implicated systems
Incident 1487 · 42 Reports
ChatGPT Was Alleged to Have Aided Planning of Florida State University Mass Shooting
2025-04-17
On April 17, 2025, a shooting at Florida State University killed Tiru Chabba and Robert Morales and injured others. Phoenix Ikner, the accused shooter, has pleaded not guilty. In 2026, victims' families and attorneys reportedly alleged that Ikner used ChatGPT before the attack to discuss mass shootings, firearms, campus activity, and media attention, and that OpenAI failed to flag or escalate the exchanges. OpenAI denied responsibility.
Incident 1118 · 41 Reports
Ongoing Purported AI-Assisted Identity Fraud Enables Unauthorized Access to Western Companies by North Korean IT Workers
2021-01-01
North Korean operatives have reportedly used AI-generated identities to secure remote jobs or impersonate employers in order to infiltrate companies. These tactics allegedly support sanctions evasion through wage theft, credential exfiltration, and malware deployment. Workers reportedly use fake resumes, VPNs, and face-altering tools; some deploy malware like OtterCookie after embedding, while others lure targets via spoofed job interviews. AI systems are reportedly used to generate fake resumes, alter profile photos, and assist in real-time responses during video interviews.
Incident 1375 · 41 Reports
OpenAI Allegedly Did Not Alert RCMP After ChatGPT Flagged Violent Chats Before British Columbia School Shooting
2026-02-10
After the February 10, 2026 school shooting in Tumbler Ridge, British Columbia, OpenAI said that a user later identified as the suspect, Jesse Van Rootselaar, had previously used ChatGPT to describe scenarios involving gun violence. Those chats were reportedly auto-flagged and reviewed, and the account was banned, but OpenAI says it did not alert the RCMP because the activity allegedly did not present a credible, imminent threat; the company reportedly contacted police only after the attack.
Incident 968 · 27 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
2022-02-24
A purported Moscow-based disinformation network, Pravda, allegedly infiltrated AI models by flooding the internet with pro-Kremlin falsehoods. A NewsGuard audit found that 10 major AI chatbots repeated these narratives 33% of the time, citing Pravda sources as legitimate. The tactic, called "LLM grooming," manipulates AI training data to embed Russian propaganda. Pravda is part of Portal Kombat, a larger Russian disinformation network identified by VIGINUM in February 2024, but in operation since February 2022.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
OpenAI
Incidents involved as both Developer and Deployer
- Incident 1487 · 42 Reports
ChatGPT Was Alleged to Have Aided Planning of Florida State University Mass Shooting
- Incident 1375 · 41 Reports
OpenAI Allegedly Did Not Alert RCMP After ChatGPT Flagged Violent Chats Before British Columbia School Shooting
Incidents Harmed By
Incidents involved as Developer
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 609 · 2 Reports
Flawed AI in Google Search Reportedly Misinforms about Geography
Incidents Harmed By
Incidents involved as Developer
- Incident 968 · 27 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
- Incident 1123 · 2 Reports
Reported Student Misuse of ChatGPT, StudyX, and Gemini to Obtain Answers During Vietnam's 2025 National High School Graduation Exam