OpenAI
Incidents involved as both Developer and Deployer
Incident 443 · 25 Reports
ChatGPT Abused to Develop Malicious Software
2022-12-21
OpenAI's ChatGPT was reportedly abused by cybercriminals, including some with little or no coding or development experience, to develop malware, ransomware, and other malicious software.
Incident 688 · 14 Reports
Scarlett Johansson Alleges OpenAI's Sky Imitates Her Voice Without Licensing
2024-05-20
OpenAI unveiled a voice assistant with a voice resembling Scarlett Johansson's, despite her refusal to license her voice. Johansson claimed the assistant, "Sky," sounded "eerily similar" to her voice, leading her to seek legal action. OpenAI suspended Sky, asserting the voice was from a different actress.
Incident 420 · 11 Reports
Users Bypassed ChatGPT's Content Filters with Ease
2022-11-30
Users reported bypassing ChatGPT's content and keyword filters with relative ease, using methods such as prompt injection or persona creation to produce biased associations or generate harmful content.
Incident 450 · 8 Reports
Kenyan Data Annotators Allegedly Exposed to Graphic Content for OpenAI's AI
2021-11-01
Sama AI's Kenyan contractors were reportedly paid excessively low wages to annotate a large volume of disturbing content intended to improve OpenAI's generative AI systems such as ChatGPT; Sama AI terminated the contract before the work was completed.
Incidents Harmed By
Incident 420 · 11 Reports
Users Bypassed ChatGPT's Content Filters with Ease
2022-11-30
Users reported bypassing ChatGPT's content and keyword filters with relative ease, using methods such as prompt injection or persona creation to produce biased associations or generate harmful content.
Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
2023-02-14
Users, including the person who revealed its built-in initial prompts, reported that the Bing AI-powered search tool made death threats against them or declared them to be threats, sometimes through an unintended persona.
Incident 955 · 5 Reports
Global Cybercrime Network Storm-2139 Allegedly Exploits AI to Generate Deepfake Content
2024-12-19
A global cybercrime network, Storm-2139, allegedly exploited stolen credentials and developed custom tools to bypass AI safety guardrails. The network reportedly generated harmful deepfake content, including nonconsensual intimate images of celebrities, and its software is reported to have disabled content moderation, hijacked AI access, and enabled the resale of illicit services. Microsoft disrupted the operation and filed a lawsuit in December 2024, later identifying key members of the network in February 2025.
Incident 357 · 3 Reports
GPT-2 Able to Recite PII in Training Data
2019-02-14
OpenAI's GPT-2 reportedly memorized and could regurgitate verbatim instances of its training data, including personally identifiable information such as names, email addresses, Twitter handles, and phone numbers.
Incidents involved as Developer
Incident 541 · 58 Reports
ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
2023-05-04
A lawyer in Mata v. Avianca, Inc. used ChatGPT for research. ChatGPT hallucinated court cases, which the lawyer then presented in court. The court determined the cases did not exist.
Incident 1118 · 37 Reports
Ongoing Purported AI-Assisted Identity Fraud Enables Unauthorized Access to Western Companies by North Korean IT Workers
2021-01-01
North Korean operatives have reportedly used AI-generated identities to secure remote jobs or impersonate employers in order to infiltrate companies. These tactics allegedly support sanctions evasion through wage theft, credential exfiltration, and malware deployment. Workers reportedly use fake resumes, VPNs, and face-altering tools; some deploy malware such as OtterCookie after embedding themselves in a company, while others lure targets via spoofed job interviews. AI systems are reportedly used to generate fake resumes, alter profile photos, and assist with real-time responses during video interviews.
Incident 701 · 36 Reports
American Asylum Seeker John Mark Dougan in Russia Reportedly Spreads Disinformation via AI Tools and Fake News Network
2024-05-29
John Mark Dougan, a former Florida sheriff's deputy granted asylum in Russia, has been implicated in spreading disinformation. Utilizing AI tools like OpenAI's ChatGPT and DALL-E 3, Dougan created over 160 fake news sites, disseminating false narratives to millions worldwide. His actions align with Russian disinformation strategies targeting Western democracies. See also Incident 734.
Incident 968 · 27 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
2022-02-24
A Moscow-based disinformation network, Pravda, allegedly infiltrated AI models by flooding the internet with pro-Kremlin falsehoods. A NewsGuard audit found that 10 major AI chatbots repeated these narratives 33% of the time, citing Pravda sources as legitimate. The tactic, called "LLM grooming," manipulates AI training data to embed Russian propaganda. Pravda is part of Portal Kombat, a larger Russian disinformation network identified by VIGINUM in February 2024, but in operation since February 2022.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
students
Incidents Harmed By
- Incident 466 · 7 Reports
AI-Generated-Text-Detection Tools Reported for High Error Rates
- Incident 705 · 2 Reports
Turkish Student in Isparta Allegedly Uses AI to Cheat on Exam, Leading to Arrest
Incidents involved as Deployer
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 367 · 1 Report
iGPT, SimCLR Learned Biased Associations from Internet Training Data
Incidents Harmed By
Incidents involved as Developer
ChatGPT users
Incidents Harmed By
- Incident 420 · 11 Reports
Users Bypassed ChatGPT's Content Filters with Ease
Incidents involved as Deployer
Microsoft
Incidents involved as both Developer and Deployer
- Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
- Incident 477 · 6 Reports
Bing Chat Tentatively Hallucinated in Extended Conversations with Users
Incidents Harmed By
- Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
Incidents involved as Developer
ChatGPT
Incidents involved as Developer
- Incident 625 · 5 Reports
Proliferation of Products on Amazon Titled with ChatGPT Error Messages
- Incident 615 · 4 Reports
Colorado Lawyer Filed a Motion Citing Hallucinated ChatGPT Cases
Incidents involved as Deployer
- Incident 622 · 6 Reports
Chevrolet Dealer Chatbot Agrees to Sell Tahoe for $1
- Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
Incidents implicated systems
GPT-4
Incidents involved as Deployer
- Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
Incidents implicated systems
Perplexity AI
Incidents Harmed By
- Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
Incidents involved as Deployer
- Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
Incidents implicated systems
Meta
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 997 · 4 Reports
Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models
Incidents involved as Developer
GPT-4o
Incidents involved as Deployer
- Incident 729 · 1 Report
GPT-4o's Chinese Tokens Reportedly Compromised by Spam and Pornography Due to Inadequate Filtering
Incidents implicated systems
Researchers
Incidents Harmed By
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 1044 · 2 Reports
Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination
Incidents involved as Deployer
Organizations integrating Whisper into customer service systems
Incidents involved as Deployer
Perplexity
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 750 · 1 Report
AI Chatbots Reportedly Inaccurately Conveyed Real-Time Political News
Incidents involved as Developer
Mistral
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 859 · 1 Report
AI Models Reportedly Found to Provide Misinformation on Election Processes in Spanish
Incidents involved as Developer
Anthropic
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 1044 · 2 Reports
Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination