Incident 810: TikTok Network Amplifies AI-Generated Nazi Propaganda and Hate Speech
Description: A coordinated neo-Nazi network on TikTok used AI-generated content, including Hitler speeches, to spread Nazi propaganda and extremist material in violation of TikTok's hate speech policies. The network evaded the platform's moderation with coded language, imagery, and music, and some accounts accumulated millions of views. TikTok's algorithm further amplified the reach of this content despite the platform's community guidelines.
Editor Notes: Reconstructing the timeline of events: On February 6, 2023, an account posting violent antisemitic content and calling for armed revolution began operating on TikTok. In May 2024, the Institute for Strategic Dialogue (ISD) created two dummy accounts to track how TikTok's algorithm recommended pro-Nazi content. By June 3, 2024, ISD had identified a Nazi account that had accumulated over 87,000 views before being banned. Despite this, on June 6, 2024, an extremist pro-Nazi account remained active and was recruiting members to off-platform groups. Finally, on July 29, 2024, ISD published its full report, revealing a coordinated network of over 200 accounts promoting Nazism and extremist content, some of which used AI-generated media to bypass TikTok's moderation efforts. For this incident ID, I am using the date ISD published its findings as the incident date.
Note on "NazTok": The ISD report uses "NazTok" in its title as shorthand for "Nazi TikTok," without explicitly defining the term beyond that inferred meaning. I am replicating "NazTok" in the Deployers field to tie together likely future incidents in this genre. I have retroactively applied "NazTok" to Incident 809's Deployers field as well.
Entities
Alleged: Unknown AI developers developed an AI system deployed by NazTok and Pro-Nazi TikTok accounts, which harmed TikTok users.
Incident Statistics
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
1.2. Exposure to toxic content
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Discrimination and Toxicity
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Report Timeline
Self-identified Nazis are openly promoting hate speech and real-world recruitment on TikTok. Not only is the platform failing to remove these videos and accounts, but its algorithm is also amplifying their reach. Adver…
Variants
A "variant" is an AI incident similar to a known case: it shares the same causative factors, harms, and AI system. Rather than listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Have you seen something similar?
Similar Incidents
Selected by our editors

Google’s YouTube Kids App Presents Inappropriate Content
· 14 reports

Amazon Censors Gay Books
· 24 reports