OpenAI
Incidents involved as both Developer and Deployer
Incident 443 · 25 Reports
ChatGPT Abused to Develop Malicious Software
2022-12-21
OpenAI's ChatGPT was reportedly abused by cybercriminals, including ones with little or no coding or development skill, to develop malware, ransomware, and other malicious software.
Incident 688 · 14 Reports
Scarlett Johansson Alleges OpenAI's Sky Imitates Her Voice Without Licensing
2024-05-20
OpenAI unveiled a voice assistant with a voice resembling Scarlett Johansson's, despite her refusal to license her voice. Johansson claimed the assistant, "Sky," sounded "eerily similar" to her voice, leading her to seek legal action. OpenAI suspended Sky, asserting the voice was from a different actress.
Incident 420 · 11 Reports
Users Bypassed ChatGPT's Content Filters with Ease
2022-11-30
Users reported bypassing ChatGPT's content and keyword filters with relative ease using various methods such as prompt injection or creating personas to produce biased associations or generate harmful content.
Incident 450 · 8 Reports
Kenyan Data Annotators Allegedly Exposed to Graphic Content for OpenAI's AI
2021-11-01
Sama AI's Kenyan contractors were reportedly paid excessively low wages to annotate a large volume of disturbing content intended to improve OpenAI's generative AI systems such as ChatGPT; Sama AI terminated the contract before completion.
Affected by Incidents
Incident 420 · 11 Reports
Users Bypassed ChatGPT's Content Filters with Ease
2022-11-30
Users reported bypassing ChatGPT's content and keyword filters with relative ease using various methods such as prompt injection or creating personas to produce biased associations or generate harmful content.
Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
2023-02-14
Users, including the person who revealed its built-in initial prompts, reported that the Bing AI-powered search tool made death threats against them or declared them to be threats, sometimes through an unintended persona.
Incident 357 · 3 Reports
GPT-2 Able to Recite PII in Training Data
2019-02-14
OpenAI's GPT-2 reportedly memorized and could regurgitate verbatim instances of training data, including personally identifiable information such as names, email addresses, Twitter handles, and phone numbers.
Incident 470 · 2 Reports
Bing Chat Response Cited ChatGPT Disinformation Example
2023-02-08
Reporters from TechCrunch issued a query to Microsoft Bing's ChatGPT feature, which cited an earlier example of ChatGPT disinformation discussed in a news article to substantiate the disinformation.
Incidents involved as Developer
Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
2023-05-04
A lawyer in Mata v. Avianca, Inc. used ChatGPT for research. ChatGPT hallucinated court cases, which the lawyer then presented in court. The court determined the cases did not exist.
Incident 482 · 20 Reports
ChatGPT-Assisted University Email Addressing Mass Shooting Denounced by Students
2023-02-16
Vanderbilt University's Office of Equity, Diversity and Inclusion used ChatGPT to write an email to the student body addressing the 2023 Michigan State University shooting; the message was condemned as "impersonal" and "lacking empathy".
Incident 339 · 14 Reports
Open-Source Generative Models Abused by Students to Cheat on Assignments and Exams
2022-09-15
Students were reportedly using generative text models such as GPT-3 and ChatGPT to complete school assignments and exams, including writing reports and essays.
Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
2023-02-14
Users, including the person who revealed its built-in initial prompts, reported that the Bing AI-powered search tool made death threats against them or declared them to be threats, sometimes through an unintended persona.
Related Entities
Murat Ayfer
Incidents involved as both Developer and Deployer
Incidents involved as Developer
students
Affected by Incidents
- Incident 466 · 7 Reports
AI-Generated-Text-Detection Tools Reported for High Error Rates
- Incident 705 · 2 Reports
Turkish Student in Isparta Allegedly Uses AI to Cheat on Exam, Leading to Arrest
Incidents involved as Deployer
Stephan de Vries
Incidents involved as both Developer and Deployer
Affected by Incidents
Microsoft
Incidents involved as both Developer and Deployer
- Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
- Incident 477 · 6 Reports
Bing Chat Tentatively Hallucinated in Extended Conversations with Users
Affected by Incidents
Incidents involved as both Developer and Deployer
- Incident 367 · 1 Report
iGPT, SimCLR Learned Biased Associations from Internet Training Data
- Incident 718 · 1 Report
OpenAI, Google, and Meta Alleged to Have Overstepped Legal Boundaries for Training AI
Incidents involved as Developer
ChatGPT users
Affected by Incidents
- Incident 420 · 11 Reports
Users Bypassed ChatGPT's Content Filters with Ease
Incidents involved as Deployer
- Incident 420 · 11 Reports
Users Bypassed ChatGPT's Content Filters with Ease
ChatGPT
Incidents involved as Developer
- Incident 625 · 5 Reports
Proliferation of Products on Amazon Titled with ChatGPT Error Messages
- Incident 615 · 4 Reports
Colorado Lawyer Filed a Motion Citing Hallucinated ChatGPT Cases
Incidents involved as Deployer
- Incident 622 · 6 Reports
Chevrolet Dealer Chatbot Agrees to Sell Tahoe for $1
- Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
Incidents implicated systems
General Motors
Incidents involved as both Developer and Deployer
Affected by Incidents
Perplexity AI
Affected by Incidents
- Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios