Incident 29: Image Classification of Battle Tanks

Description: A potentially apocryphal story in which an image classifier was trained to differentiate types of battle tanks, but the resulting model keyed in on environmental attributes (such as lighting or background) rather than attributes of the tanks themselves.
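The failure mode described here is what is now often called shortcut learning: when a label is accidentally correlated with an incidental feature of the training set, a learner can achieve perfect training accuracy without ever modeling the object of interest. A minimal sketch of the dynamic, using an entirely hypothetical two-feature "photo" (the feature names and the one-threshold learner are illustrative assumptions, not the incident's actual data or model):

```python
import random

random.seed(0)

def make_photo(has_tank, sunny):
    # Hypothetical features: a signal actually tied to the tank, and an
    # incidental environmental signal (e.g., sky brightness).
    tank_signature = random.gauss(1.0 if has_tank else 0.0, 0.3)
    sky_brightness = random.gauss(1.0 if sunny else 0.0, 0.1)
    return (tank_signature, sky_brightness)

# Flawed dataset, as in the story: every tank photo happens to be taken
# in one environment, every non-tank photo in the other.
train = ([(make_photo(True, True), 1) for _ in range(100)]
         + [(make_photo(False, False), 0) for _ in range(100)])

def fit(data):
    # A "lazy" learner: pick whichever single feature's threshold-0.5
    # rule best separates the training labels.
    best_feature, best_acc = 0, -1.0
    for i in (0, 1):
        acc = sum((x[i] > 0.5) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_feature, best_acc = i, acc
    return best_feature

# The environmental feature separates the flawed training set more
# cleanly than the tank feature, so the learner latches onto it.
feature = fit(train)

# Deployment: tanks photographed in the other environment are missed.
deploy = [(make_photo(True, False), 1) for _ in range(100)]
deploy_acc = sum((x[feature] > 0.5) == bool(y) for x, y in deploy) / len(deploy)
```

The sketch shows why the model can look flawless on held-out data drawn from the same flawed collection process, yet fail completely once the spurious correlation is broken at deployment.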


Alleged: United States Government developed and deployed an AI system, which harmed United States Government.

Incident Stats

Incident ID: 29
Editors: Sean McGregor

· 2011

Drawing on Google/Google Books/Google Scholar/Libgen/LessWrong/Hacker News/Twitter, I have compiled a large number of variants of the story from various sources; they are presented below in reverse chronological order by decade.

A similar thing happened here…

Tales from the Trenches: AI Disaster Stories (GDC talk) · 2016

His team was working on running simulations of long-distance manned spaceflight. In particular, the goal of their simulations was to determine an algorithm that would optimally allocate food, water, and electricity to 3 crew members. The de…

AI Incident Database Incidents Converted to Issues · 2022

The following former incidents have been converted to "issues" following an update to the incident definition and ingestion criteria.

21: Tougher Turing Test Exposes Chatbots’ Stupidity

Description: The 2016 Winograd Schema Challenge highli…


A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list them under the first similar incident submitted to the database. Unlike other submission types, variants are not required to be supported by reporting external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity
