OpenAI
Incidents involved as both Developer and Deployer
Incident 44322 Reports
ChatGPT Abused to Develop Malicious Softwares
2022-12-21
OpenAI's ChatGPT was reportedly abused by cyber criminals including ones with no or low levels of coding or development skills to develop malware, ransomware, and other malicious softwares.
MoreIncident 4209 Reports
Users Bypassed ChatGPT's Content Filters with Ease
2022-11-30
Users reported bypassing ChatGPT's content and keyword filters with relative ease using various methods such as prompt injection or creating personas to produce biased associations or generate harmful content.
MoreIncident 4507 Reports
Kenyan Data Annotators Allegedly Exposed to Graphic Content for OpenAI's AI
2021-11-01
Sama AI's Kenyan contractors were reportedly asked with excessively low pay to annotate a large volume of disturbing content to improve OpenAI's generative AI systems such as ChatGPT, and whose contract was terminated prior to completion by Sama AI.
MoreIncident 4667 Reports
AI-Generated-Text-Detection Tools Reported for High Error Rates
2023-01-03
Models developed to detect whether text generation AI was used such as AI Text Classifier and GPTZero reportedly contained high rates of false positive and false negative, such as mistakenly flagging Shakespeare's works.
MoreIncidents Harmed By
Incident 4209 Reports
Users Bypassed ChatGPT's Content Filters with Ease
2022-11-30
Users reported bypassing ChatGPT's content and keyword filters with relative ease using various methods such as prompt injection or creating personas to produce biased associations or generate harmful content.
MoreIncident 3573 Reports
GPT-2 Able to Recite PII in Training Data
2019-02-14
OpenAI's GPT-2 reportedly memorized and could regurgitate verbatim instances of training data, including personally identifiable information such as names, emails, twitter handles, and phone numbers.
MoreIncident 4702 Reports
Bing Chat Response Cited ChatGPT Disinformation Example
2023-02-08
Reporters from TechCrunch issued a query to Microsoft Bing's ChatGPT feature, which cited an earlier example of ChatGPT disinformation discussed in a news article to substantiate the disinformation.
MoreIncident 651 Report
Reinforcement Learning Reward Functions in Video Games
2016-12-22
OpenAI published a post about its findings when using Universe, a software for measuring and training AI agents to conduct reinforcement learning experiments, showing that the AI agent did not act in the way intended to complete a videogame.
MoreIncidents involved as Developer
Incident 48220 Reports
ChatGPT-Assisted University Email Addressing Mass Shooting Denounced by Students
2023-02-16
Vanderbilt University's Office of Equity, Diversity and Inclusion used ChatGPT to write an email addressing student body about the 2023 Michigan State University shooting, which was condemned as "impersonal" and "lacking empathy".
MoreIncident 33914 Reports
Open-Source Generative Models Abused by Students to Cheat on Assignments and Exams
2022-09-15
Students were reportedly using open-source text generative models such as GPT-3 and ChatGPT to complete school assignments and exams such as writing reports, essays.
MoreIncident 3524 Reports
GPT-3-Based Twitter Bot Hijacked Using Prompt Injection Attacks
2022-09-15
Remoteli.io's GPT-3-based Twitter bot was shown being hijacked by Twitter users who redirected it to repeat or generate any phrases.
MoreIncident 4774 Reports
Bing Chat Tentatively Hallucinated in Extended Conversations with Users
2023-02-14
Early testers reported Bing Chat, in extended conversations with users, having tendencies to make up facts and emulate emotions through an unintended persona.
MoreRelated Entities
Murat Ayfer
Incidents involved as both Developer and Deployer
Incidents involved as Developer
Microsoft
Incidents involved as both Developer and Deployer
- Incident 4774 Reports
Bing Chat Tentatively Hallucinated in Extended Conversations with Users
- Incident 4702 Reports
Bing Chat Response Cited ChatGPT Disinformation Example