Summary: Digital manipulators in India are using deepfake technology to influence the 2024 Lok Sabha elections. These AI-generated videos and audio clips are designed to tarnish the reputations of political candidates, challenging the integrity of electoral processes.
Editor Notes: This incident is similar to Incident 730. I have separated these two incidents because this ID focuses more on instances of direct manipulation for outright deceit, whereas Incident 730 focuses on voter outreach under false pretenses of genuine interaction.
Alleged: The Indian Deepfaker, The Digital Publicity, Rohit Pal, Obiyan Infotech, and Merakii Group developed an AI system deployed by political candidates in the 2024 Lok Sabha elections, which harmed the political candidates targeted by deepfakes, the Indian electorate, India, and democracy.
Incident Status
Risk Subdomain
A further 23 subdomains provide an accessible and understandable classification of the hazards and harms associated with AI
4.1. Disinformation, surveillance, and influence at scale
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Malicious Actors & Misuse
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Report Timeline
What was once relegated to the realm of science fiction appears to have become a formidable weapon in the political landscape of the country.
An India Today investigation has exposed covert operations by digital manipulators seeking to infl…