Incident 583: Instagram Algorithms Allegedly Promote Accounts Facilitating Child Sex Abuse Content

Description: An investigation revealed that Instagram's recommendation algorithms promote accounts that facilitate and sell child sexual abuse material (CSAM). The study, conducted by The Wall Street Journal with researchers at Stanford University and the University of Massachusetts Amherst, found that Instagram's algorithms not only allow such accounts to be discovered through keyword searches but also actively recommend them to users within the network. The issue is especially concerning given Instagram's popularity among teenagers.

Alleged: Meta and Instagram developed and deployed an AI system, which harmed children, the general public, minors, and teenagers.

Incident Stats

Incident ID: 583
Report Count:
Incident Date:
Editor: Daniel Atherton

"Instagram's algorithms are promoting accounts that share child sex abuse content, researchers find" (2023)

Instagram's recommendation algorithms have been connecting and promoting accounts that facilitate and sell child sexual abuse content, according to an investigation published Wednesday.

Meta's photo-sharing service stands out from other soc…


A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.