Incident 268: Permanent Removal of Social Media Content via Automated Tools Allegedly Prevented Investigative Efforts
Description: Automated permanent removal of violating social media content, such as terrorism, violent extremism, and hate speech, without archival allegedly prevented its potential use to investigate serious crimes and hampered criminal accountability efforts.
Entities
Alleged: Facebook, Twitter, and YouTube developed and deployed an AI system, which harmed International Criminal Court investigators, International Court of Justice investigators, investigative journalists, criminal investigators, and victims of crimes documented on social media.
Social media platforms are taking down online content they consider terrorist, violently extremist, or hateful in a way that prevents its potential use to investigate serious crimes, including war crimes, Human Rights Watch said in a report…
In recent years, social media platforms have been taking down online content more often and more quickly, often in response to the demands of governments, but in a way that prevents the use of that content to investigate people suspected of…
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.