Description: An audit by NewsGuard revealed that leading chatbots, including ChatGPT-4, You.com's Smart Assistant, and others, repeated Russian disinformation narratives in one-third of their responses. These narratives originated from a network of fake news sites created by John Mark Dougan (Incident 701). The audit tested 570 prompts across 10 AI chatbots, demonstrating that generative AI remains a vehicle for spreading disinformation despite companies' efforts to prevent misuse.
Entities
Alleged: An AI system developed by You.com, xAI, Perplexity, OpenAI, Mistral, Microsoft, Meta, Inflection, Google, and Anthropic and deployed by You.com, xAI, Perplexity, OpenAI, Mistral, Microsoft, Meta, John Mark Dougan, Inflection, Google, and Anthropic harmed Western democracies, Volodymyr Zelenskyy, Ukraine, Secret Service, researchers, media consumers, the general public, electoral integrity, and AI companies facing reputational damage.
Incident Statistics
ID
734
Report Count
1
Incident Date
2024-06-18
Editors
Daniel Atherton
Incident Reports
Reports Timeline
newsguardtech.com · 2024
- View the original report at its source
- View the report at the Internet Archive
Russian disinformation narratives have infiltrated generative AI. A NewsGuard audit has found that the leading chatbots convincingly repeat fabricated narratives from state-affiliated sites masquerading as local news outlets i…
Variants
Una "Variante" es un incidente que comparte los mismos factores causales, produce daños similares e involucra los mismos sistemas inteligentes que un incidente de IA conocido. En lugar de indexar las variantes como incidentes completamente separados, enumeramos las variaciones de los incidentes bajo el primer incidente similar enviado a la base de datos. A diferencia de otros tipos de envío a la base de datos de incidentes, no se requiere que las variantes tengan informes como evidencia externa a la base de datos de incidentes. Obtenga más información del trabajo de investigación.
Similar Incidents
Selected by our editors
Did our AI mess up? Flag the unrelated incidents
Danger, danger! 10 alarming examples of AI gone wild
· 6 reports
Fake Obama created using AI tool to make phoney speeches
· 29 reports
Amazon Censors Gay Books
· 24 reports