TikTok users

Incidents Harmed By

Incident 481 · 6 Reports
Deepfake TikTok Video Featured Joe Rogan Endorsing Supplement Brand


A deepfake video of podcast host Joe Rogan promoting a "libido-boosting" supplement to his listeners circulated on TikTok and other platforms before TikTok removed it, along with the account that posted it.


Incident 117 · 4 Reports
TikTok's "Suggested Accounts" Algorithm Allegedly Reinforced Racial Bias through Feedback Loops


TikTok's "Suggested Accounts" recommendations allegedly reinforced racial bias despite reportedly not basing recommendations on race or creators' profile photos.


Incident 185 · 4 Reports
TikTok's "For You" Algorithm Directed New Users towards Disinformation about the War in Ukraine


An investigation by NewsGuard into TikTok's handling of content related to the Russia-Ukraine war showed its "For You" algorithm pushing false and misleading content about the war to new users less than an hour after they signed up.


Incident 286 · 3 Reports
TikTok's "For You" Algorithm Allegedly Pushed Fatal "Blackout" Challenge Videos to Two Young Girls


TikTok's recommendation algorithm was alleged in a lawsuit to have intentionally and repeatedly pushed videos of the "blackout" challenge onto children's feeds, encouraging participation that ultimately resulted in the deaths of two young girls.


Incidents involved as Deployer

Incident 568 · 2 Reports
AI-Generated Voices Amplify Conspiracy Theories on TikTok


NewsGuard identified 17 TikTok accounts that, beginning in June 2023, used AI-generated voices to advance and amplify conspiracy theories and false claims. By September 25, 2023, these accounts had amassed over 336 million views and over 14.5 million likes. The videos include baseless claims involving public figures such as Barack Obama, Oprah Winfrey, and Jamie Foxx.


Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios


The "Dan" ("Do Anything Now") AI boyfriend is a TikTok trend in which users appear to regularly manipulate ChatGPT into adopting boyfriend personas, breaching content policies. ChatGPT 3.5 reportedly produces explicitly sexual content on a regular basis, directly violating its intended safety protocols. GPT-4 and Perplexity AI were subjected to similar manipulations, and although they exhibited more resistance to such breaches, some prompts were reported to break their guidelines.


Related Entities