Incidents involved as both Developer and Deployer

Incident 141 · 2 Reports
California Police Turned on Music to Allegedly Trigger Instagram’s DMCA Enforcement to Avoid Being Live-Streamed


A police officer in Beverly Hills played copyrighted music on his phone after realizing that his interactions were being recorded on a livestream, allegedly hoping that Instagram's automated copyright detection system would end or mute the stream.


Incident 331 · 2 Reports
Bug in Instagram’s “Related Hashtags” Algorithm Allegedly Caused Disproportionate Treatment of Political Hashtags


An Instagram spokesperson reported that a bug prevented an algorithm from populating related hashtags for thousands of hashtags, resulting in alleged preferential treatment of some politically partisan hashtags.


Incident 343 · 2 Reports
Facebook, Instagram, and Twitter Failed to Proactively Remove Targeted Racist Remarks via Automated Systems


Facebook's, Instagram's, and Twitter's automated content moderation failed to proactively remove racist remarks and posts directed at Black football players after a finals loss, allegedly relying largely on user reports of harassment.


Incident 394 · 2 Reports
Social Media's Automated Word-Flagging without Context Shifted Content Creators' Language Use


TikTok's, YouTube's, Instagram's, and Twitch's use of algorithms to flag certain words without regard to context changed how content creators used everyday language or discussed certain topics, for fear of their content being mistakenly flagged or auto-demonetized.


Incidents involved as Deployer

Incident 469 · 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women's Bodies


Automated content moderation tools built to detect sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, suppressing the reach of content that did not break platform policies.


Incident 576 · 1 Report
Alleged Misuse of PicSo AI for Generating Inappropriate Content Emphasizing "Girls"


PicSo AI, which appears to be advertised through Meta on Instagram, is allegedly being used to generate inappropriate content with an emphasis on "girls." This raises concerns about the misuse of generative AI technologies to create offensive and potentially sexually explicit material that could serve nefarious and criminal purposes.


Related Entities