Description: A coordinated neo-Nazi network on TikTok used AI-generated media, including AI-generated Hitler speeches, to spread Nazi propaganda and extremist content, violating TikTok’s hate speech policies. The network evaded platform moderation through coded language, imagery, and music, with some accounts accumulating millions of views. TikTok’s algorithm further amplified the reach of this content, despite community guidelines prohibiting such material.
Editor Notes: Reconstructing the timeline of events: On February 6, 2023, an account with violent antisemitic content calling for armed revolution began posting on TikTok. In May 2024, the Institute for Strategic Dialogue (ISD) created two dummy accounts to track how TikTok's algorithm recommended pro-Nazi content. By June 3, 2024, ISD had identified a Nazi account that had accumulated over 87,000 views before being banned. Despite this, on June 6, 2024, a heavily extremist pro-Nazi account remained active, recruiting members to off-platform groups. Finally, on July 29, 2024, ISD published its full report, revealing a coordinated network of over 200 accounts promoting Nazism and extremist content, some of which used AI-generated media to bypass TikTok’s moderation efforts. For this incident ID, I am taking the ISD's date of publication for their findings as the incident date.
Note on "NazTok": The ISD report uses the term "NazTok" in their title for the report as a shorthand for "Nazi TikTok" while not explicitly defining their use of this term beyond that inferred meaning. I am replicating "NazTok" in the Deployers field to tie likely new incidents that will emerge together in this particular genre of incident. I have retroactively applied "NazTok" to Incident 809's Deployer field too.
Incident Stats
Incident ID
810
Report Count
1
Incident Date
2024-07-29
Editors
Daniel Atherton
Incident Reports
Reports Timeline
isdglobal.org · 2024
Self-identified Nazis are openly promoting hate speech and real-world recruitment on TikTok. Not only is the platform failing to remove these videos and accounts, but its algorithm is amplifying their reach.
Content warning: T…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.