Incident 233: Tumblr Automated Pornography-Detecting Algorithms Erroneously Flagged Inoffensive Images as Explicit
Description: Tumblr’s automated tools for identifying adult content were reported to have incorrectly flagged inoffensive images as explicit, following the company’s announcement that it would ban all adult content on the platform.
Suggested citation format
Dickinson, Ingrid. (2018-12-03) Incident Number 233. In Lam, K. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.
Tumblr announced earlier today that it will ban all adult content on the platform, starting on December 17th. Now, longtime users are criticizing the company’s auto-detecting algorithms, which appear to be incorrectly flagging some inoffens…
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting as evidence external to the Incident Database. Learn more from the research paper.