Entities

OpenAI

Incidents involved as both Developer and Deployer

Incident 443
25 Reports
ChatGPT Abused to Develop Malicious Software

2022-12-21

OpenAI's ChatGPT was reportedly abused by cybercriminals, including ones with little or no coding or development skill, to develop malware, ransomware, and other malicious software.

Incident 420
11 Reports
Users Bypassed ChatGPT's Content Filters with Ease

2022-11-30

Users reported bypassing ChatGPT's content and keyword filters with relative ease, using methods such as prompt injection or persona creation to produce biased associations or generate harmful content.

Incident 450
8 Reports
Kenyan Data Annotators Allegedly Exposed to Graphic Content for OpenAI's AI

2021-11-01

Sama AI's Kenyan contractors were reportedly paid excessively low wages to annotate a large volume of disturbing content in order to improve OpenAI's generative AI systems such as ChatGPT; Sama AI terminated the contract prior to completion.

Incident 466
7 Reports
AI-Generated-Text-Detection Tools Reported for High Error Rates

2023-01-03

Models developed to detect whether text was generated by AI, such as AI Text Classifier and GPTZero, reportedly produced high rates of false positives and false negatives, such as mistakenly flagging Shakespeare's works.

Incidents Harmed By

Incident 420
11 Reports
Users Bypassed ChatGPT's Content Filters with Ease

2022-11-30

Users reported bypassing ChatGPT's content and keyword filters with relative ease, using methods such as prompt injection or persona creation to produce biased associations or generate harmful content.

Incident 503
7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users

2023-02-14

Users, including the person who revealed its built-in initial prompts, reported that the Bing AI-powered search tool made death threats or declared them to be threats, sometimes as an unintended persona.

Incident 357
3 Reports
GPT-2 Able to Recite PII in Training Data

2019-02-14

OpenAI's GPT-2 reportedly memorized and could regurgitate verbatim instances of training data, including personally identifiable information such as names, email addresses, Twitter handles, and phone numbers.

Incident 470
2 Reports
Bing Chat Response Cited ChatGPT Disinformation Example

2023-02-08

Reporters from TechCrunch issued a query to Microsoft Bing's ChatGPT feature, which cited an earlier example of ChatGPT disinformation, discussed in a news article, to substantiate the disinformation.

Incidents involved as Developer

Incident 541
58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court

2023-05-04

A lawyer in Mata v. Avianca, Inc. used ChatGPT for research. ChatGPT hallucinated court cases, which the lawyer then presented in court. The court determined the cases did not exist.

Incident 482
20 Reports
ChatGPT-Assisted University Email Addressing Mass Shooting Denounced by Students

2023-02-16

Vanderbilt University's Office of Equity, Diversity and Inclusion used ChatGPT to write an email to the student body about the 2023 Michigan State University shooting, which was condemned as "impersonal" and "lacking empathy".

Incident 339
14 Reports
Open-Source Generative Models Abused by Students to Cheat on Assignments and Exams

2022-09-15

Students were reportedly using open-source generative text models such as GPT-3 and ChatGPT to complete school assignments and exams, such as writing reports and essays.

Incident 503
7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users

2023-02-14

Users, including the person who revealed its built-in initial prompts, reported that the Bing AI-powered search tool made death threats or declared them to be threats, sometimes as an unintended persona.

Related Entities