Incident 233: Tumblr Automated Pornography-Detecting Algorithms Erroneously Flagged Inoffensive Images as Explicit

Description: Tumblr’s automated tools for identifying adult content were reported to have incorrectly flagged inoffensive images as explicit, following its announcement that it would ban all adult content on the platform.
Alleged: Tumblr developed and deployed an AI system, which harmed Tumblr content creators and Tumblr users.

Suggested citation format

Dickinson, Ingrid. (2018-12-03) Incident Number 233. In Lam, K. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID: 233
Report Count:
Incident Date: 2018-12-03
Editor: Khoa Lam



Incident Reports

Tumblr announced earlier today that it will ban all adult content on the platform, starting on December 17th. Now, longtime users are criticizing the company’s auto-detecting algorithms, which appear to be incorrectly flagging some inoffensive images as explicit.

Tumblr is giving users until the start date of the ban later this month to appeal, but the inaccuracies are causing concern that blanket bans on such content could sweep up inoffensive posts and continue to drive a wedge between creators and the Tumblr platform. The algorithms were originally a part of Safe Mode, which is now being replaced with a full-site ban on adult content.

The ban is supposed to include all explicit sexual content and nudity, with a few exceptions such as breastfeeding and nude classical statues. Tumblr explained its decision this morning in clear terms, writing, “Adult content will no longer be allowed here. While we do not judge anyone for their desire to post, engage with, or view this stuff, it is time for us to change our relationship with it.”

Technically, any illustrations that feature sex acts are also banned, while nude illustrations are okay. But a litany of examples gaining traction on Twitter and other forums highlights how error-prone the underlying software can be.

One person saw a vase and photos of tights get flagged as explicit. The user noted that photos of dildos had flown under the algorithm’s radar, however. Another artist’s illustration of a witch floating among kelp was also incorrectly flagged. Yet another artist saw their illustrations of people running around and swimming get flagged.

To be fair, these are all mistakes that Tumblr appears to have foreseen. As CEO Jeff D’Onofrio said in a blog post, “We’re relying on automated tools to identify adult content and humans to help train and keep our systems in check. We know there will be mistakes.” The Tumblr staff post further explained that “computers are better than humans at scaling process — and we need them for that — but they’re not as good at making nuanced, contextual decisions.” The company says it will be “an evolving process for all of us, and we’re committed to getting this right.”
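The staff posts quoted above describe a human-in-the-loop moderation setup: automated tools score content, and humans train the system and correct its mistakes through appeals. A minimal sketch of that kind of flow, assuming illustrative function names and thresholds (Tumblr's actual system, scores, and cutoffs are not public):

```python
# Hypothetical sketch of a human-in-the-loop moderation flow like the one
# Tumblr describes: a classifier scores each image for adult content, and
# uncertain or appealed decisions are routed to human reviewers.
# FLAG_THRESHOLD and REVIEW_THRESHOLD are illustrative assumptions.

FLAG_THRESHOLD = 0.8    # assumed score above which a post is auto-flagged
REVIEW_THRESHOLD = 0.5  # assumed band below flagging that a person checks

def route_post(adult_score: float, appealed: bool = False) -> str:
    """Decide a post's fate from a classifier's adult-content score (0..1)."""
    if appealed:
        return "human_review"      # appeals always reach a human reviewer
    if adult_score >= FLAG_THRESHOLD:
        return "flagged"           # hidden from public view pending appeal
    if adult_score >= REVIEW_THRESHOLD:
        return "human_review"      # classifier is unsure; a person decides
    return "published"

# The false positives users reported (a vase, tights, a kelp illustration)
# correspond to innocuous images scoring above FLAG_THRESHOLD.
```

The threshold choice is the crux: lowering it catches more real adult content but sweeps up more vases and tights, which is exactly the trade-off the complaints illustrate.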

All of the mistakenly flagged content can be appealed, but that means extra paperwork for the many Tumblr users who don’t post explicit content yet find themselves mistakenly targeted. For now, the appeals process is mostly giving Tumblr fans a way to vent about a policy some are unhappy with.

Tumblr is already flagging innocent posts as porn