TikTok users
Incidents Harmed By
Incident 869 · 7 Reports
TikTok Algorithms Allegedly Linked to Minors' Exposure to Harmful Content
2024-11-04
Seven French families are suing TikTok, alleging its algorithm exposed minors to harmful content promoting self-harm, eating disorders, and suicide. Two teenagers reportedly died by suicide after viewing such content, while others allegedly attempted suicide or developed mental health issues. The case seeks to establish TikTok's legal liability for failing to protect minors from harmful algorithmic content.
Incident 481 · 6 Reports
Deepfake TikTok Video Featured Joe Rogan Endorsing Supplement Brand
2023-02-12
A deepfake video featuring podcast host Joe Rogan promoting a "libido-boosting" supplement to his listeners circulated on TikTok and other platforms before TikTok removed it along with the account that posted it.
Incident 117 · 4 Reports
TikTok's "Suggested Accounts" Algorithm Allegedly Reinforced Racial Bias through Feedback Loops
2020-02-24
TikTok's "Suggested Accounts" recommendations allegedly reinforced racial bias despite not basing recommendations on race or creators' profile photo.
Incident 185 · 4 Reports
TikTok's "For You" Algorithm Directed New Users towards Disinformation about the War in Ukraine
2022-03-01
An investigation by NewsGuard into TikTok's handling of content related to the Russia-Ukraine war showed its "For You" algorithm pushing new users toward false and misleading content about the war less than an hour after they signed up.
Incidents involved as Deployer
Incident 568 · 2 Reports
AI-Generated Voices Amplify Conspiracy Theories on TikTok
2023-06-01
NewsGuard identified 17 TikTok accounts that, beginning in June 2023, used AI-generated voices to advance and amplify conspiracy theories and false claims. By September 25, 2023, these accounts had amassed over 336 million views and over 14.5 million likes. The videos include baseless claims involving public figures such as Barack Obama, Oprah Winfrey, and Jamie Foxx.
Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
2024-04-29
The "Dan" ("Do Anything Now") AI boyfriend is a trend on TikTok in which users appear to regularly manipulate ChatGPT to adopt boyfriend personas, breaching content policies. ChatGPT 3.5 is reported to regularly produce explicitly sexual content, directly violating its intended safety protocols. GPT-4 and Perplexity AI were subjected to similar manipulations, and although they exhibited more resistance to breaches, some prompts were reported to break its guidelines.
Related Entities
TikTok
Incidents involved as both Developer and Deployer
- Incident 869 · 7 Reports
TikTok Algorithms Allegedly Linked to Minors' Exposure to Harmful Content
- Incident 117 · 4 Reports
TikTok's "Suggested Accounts" Algorithm Allegedly Reinforced Racial Bias through Feedback Loops