Description: AI-generated English-language Adolf Hitler speeches have been proliferating on TikTok. They are reportedly accumulating millions of views despite violating the platform’s hate speech policies. The clips are described as often pairing the audio with misleading translations and memes that glorify Hitler and distort historical facts. While some content has been removed, many accounts reportedly continue to post similar videos.
Editor Notes: Reconstructing the timeline of events: On April 7, 2024, Media Matters identified at least one AI-generated English-language speech attributed to Hitler circulating on TikTok. By early September 2024, videos featuring these speeches had proliferated, with many gaining significant traction and receiving hundreds of thousands to millions of views. On September 1, 2024, one particularly prominent video featuring an AI-generated Hitler speech garnered over a million views before it was removed. On September 17, 2024, Media Matters published a report documenting the widespread presence of these AI-generated Hitler speeches on the platform. The following day, September 18, 2024, TikTok removed at least one account posting this content, although similar videos reportedly continue to circulate.
Note on "NazTok": See editor's note for Incident 810 for an explanation for "NazTok" deployer tag. Incident 809 is a subset of Incident 810.
Entities
Alleged: Unknown deepfake technology developers developed an AI system deployed by Unknown deepfake creators, which harmed TikTok users.
Incident Statistics
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
1.2. Exposure to toxic content
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Discrimination and Toxicity
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Report Timeline

Update (09/18/24): Following the publication of this report, TikTok removed at least one account we identified as posting apparently AI-generated English-language Hitler speeches. Despite this removal, the content of…
Variants
A "Variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list incident variations under the first similar incident submitted to the database. Unlike other types of submission to the incident database, variants are not required to have reporting as evidence external to the incident database. Learn more from the research paper.