Incident 233: Tumblr Automated Pornography-Detecting Algorithms Erroneously Flagged Inoffensive Images as Explicit

Description: Tumblr’s automated tools for identifying adult content were reported to have incorrectly flagged inoffensive images as explicit, following the company’s announcement that it would ban all adult content on the platform.
Alleged: Tumblr developed and deployed an AI system, which harmed Tumblr content creators and Tumblr users.

Suggested citation format

Dickinson, Ingrid. (2018-12-03) Incident Number 233. In Lam, K. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID
233
Report Count
1
Incident Date
2018-12-03
Editors
Khoa Lam

Incident Reports

Tumblr is already flagging innocent posts as porn

Tumblr announced earlier today that it will ban all adult content on the platform, starting on December 17th. Now, longtime users are criticizing the company’s auto-detecting algorithms, which appear to be incorrectly flagging some inoffens…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list them under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.