Description: An audit by NewsGuard revealed that leading chatbots, including ChatGPT-4, You.com's Smart Assistant, and others, repeated Russian disinformation narratives in roughly one-third of their responses. These narratives originated from a network of fake local news sites created by John Mark Dougan (Incident 701). The audit tested 570 prompts across 10 AI chatbots, showing that AI chatbots remain a vector for spreading disinformation despite efforts to prevent misuse.
Editor Notes: In particular, refer to Incidents 701 and 968 for more information. Incident 701 specifically details reports on John Mark Dougan. Incident 968 focuses on studies pertaining to the Pravda disinformation network, which is a part of the wider Portal Kombat network. There is significant overlap between these incidents, along with other incidents pertaining to Russian disinformation campaigns. (For example, see Incidents 585 and 929 as well.)
Entities
Alleged: You.com, xAI, Perplexity, OpenAI, Mistral, Microsoft, Meta, Inflection, Google and Anthropic developed an AI system deployed by You.com, xAI, Perplexity, OpenAI, Mistral, Microsoft, Meta, John Mark Dougan, Inflection, Google and Anthropic, which harmed Western democracies, Volodymyr Zelenskyy, Ukraine, Secret Service, Researchers, Media consumers, General public, Electoral integrity and AI companies facing reputational damage.
Alleged AI systems implicated: You.com Smart Assistant, xAI Grok, Perplexity Answer Engine, Mistral Le Chat, Meta AI, Inflection AI Pi, Gemini, Copilot, ChatGPT, Anthropic Claude and 167 fake local news websites operated by John Mark Dougan
Incident Stats
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
3.1. False or misleading information
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Misinformation
Entity
Which, if any, entity is presented as the main cause of the risk