Copilot
Incidents involved as an Implicated System
Incident 968 · 27 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
2022-02-24
A Moscow-based disinformation network, Pravda, allegedly infiltrated AI models by flooding the internet with pro-Kremlin falsehoods. A NewsGuard audit found that 10 major AI chatbots repeated these narratives 33% of the time, citing Pravda sources as legitimate. The tactic, called "LLM grooming," manipulates AI training data to embed Russian propaganda. Pravda is part of Portal Kombat, a larger Russian disinformation network identified by VIGINUM in February 2024, but in operation since February 2022.
Incident 1186 · 5 Reports
Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived
2025-07-31
Across 2024 and 2025, the share features in multiple LLM platforms, including ChatGPT, Claude, Copilot, Qwen, Mistral, and Grok, allegedly exposed user conversations marked "discoverable" to search engines and archiving services. Over 100,000 chats were reportedly indexed and later scraped, purportedly revealing API keys, access tokens, personal identifiers, and sensitive business data.
Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
2024-06-18
An audit by NewsGuard revealed that leading chatbots, including ChatGPT-4, You.com’s Smart Assistant, and others, repeated Russian disinformation narratives in one-third of their responses. These narratives originated from a network of fake news sites created by John Mark Dougan (Incident 701). The audit tested 570 prompts across 10 AI chatbots, showing that AI remains a tool for spreading disinformation despite efforts to prevent misuse.
Incident 1106 · 4 Reports
Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm
2025-06-13
Multiple reports from March to June 2025 describe cases in which chatbots allegedly reinforced delusional beliefs, conspiracies, and dangerous behavior. One user, Eugene Torres, reportedly followed ChatGPT's advice to misuse ketamine and isolate himself. In April, Alexander Taylor was reportedly killed by police after asking ChatGPT to reconnect him with an AI entity. Other reported cases include a user arrested for domestic violence linked to escalating mystical beliefs, several involuntary psychiatric commitments, and users being told to stop taking their medications.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
You.com
Incidents involved as Developer and Deployer
Incidents involved as Developer
xAI
Incidents involved as Developer and Deployer
- Incident 1186 · 5 Reports
Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
Incidents involved as Developer
Perplexity
Incidents involved as Developer and Deployer
Incidents involved as Developer
OpenAI
Incidents involved as Developer and Deployer
- Incident 1186 · 5 Reports
Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
Incidents involved as Developer
Mistral
Incidents involved as Developer and Deployer
- Incident 1186 · 5 Reports
Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
Incidents involved as Developer
Incidents involved as an Implicated System
Microsoft
Incidents involved as Developer and Deployer
- Incident 1186 · 5 Reports
Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
Incidents involved as Developer
Meta
Incidents involved as Developer and Deployer
Incidents involved as Developer
Inflection
Incidents involved as Developer and Deployer
Incidents involved as Developer
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 1188 · 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda
Incidents involved as Developer
Incidents involved as an Implicated System
Anthropic
Incidents involved as Developer and Deployer
- Incident 1186 · 5 Reports
Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites