Incident 674: AI-Driven Media Manipulation, Disinformation, and Deepfakes in the 2024 Elections Erode Trust in More Than 50 Countries.
Description: AI-driven election disinformation is rising globally, as easy-to-use generative AI tools enable the creation of convincing deepfakes that mislead voters. This shift has made it simple for individuals to generate fake content, which has already eroded trust in elections by undermining public confidence and manipulating voter perception. Documented examples include incidents in the United States, Moldova, Slovakia, Bangladesh, and Taiwan.
Editor Notes: This incident ID covers collective incident reports that survey worldwide AI disinformation campaigns, rather than incidents limited to a single nation, state, or locality. Where possible, related incident IDs should be marked as similar incidents so that this ID can be connected to the others.
Entities
Alleged: Unknown deepfake creators, OpenAI, and Google developed an AI system deployed by Russian government, Political operatives, Political consultants, and Chinese Communist Party, which harmed Voters, Public trust, Political figures, General public, Electoral integrity, Democracy, and Civic society.
Incident Stats
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
4.1. Disinformation, surveillance, and influence at scale
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Reports Timeline
LONDON (AP) --- Artificial intelligence is supercharging the threat of election disinformation worldwide, making it easy for anyone with a smartphone and a devious imagination to create fake -- but convincing -- content aimed at fooling vot…
Variants
A "Variant" is an AI incident similar to a known case: it has the same causative factors, harms, and AI system. Rather than listing it separately, we group it under the first reported incident. Unlike other incidents, variants need not have been reported outside the AIID. Learn more from the research paper.
Have you seen something similar?
Similar Incidents
Selected by our editors
Manipulated Deepfake Video of Lai Ching-te Endorsing Rivals in Lead-up to January Presidential Elections
· 1 report

Deepfake of Long-Deceased Suharto Circulating in Run-up to February 2024 Indonesian Elections
· 1 report