Incident 6: TayBot

Description: Microsoft's Tay, an artificially intelligent chatbot, was released on March 23, 2016, and removed within 24 hours due to multiple racist, sexist, and antisemitic tweets generated by the bot.

Alleged: Microsoft developed and deployed an AI system, which harmed Twitter users.

Incident Stats

Incident ID
6
Report Count
28
Incident Date
2016-03-24
Editors
Sean McGregor

CSET Taxonomy Classifications

Taxonomy Details

Full Description

Microsoft's chatbot Tay was published on Twitter on March 23, 2016. Within 24 hours, Tay had been removed from Twitter after becoming a "holocaust-denying racist," a result of the inputs entered by Twitter users and Tay's ability to craft responses from what it read on Twitter. Tay's "repeat after me" feature allowed any Twitter user to dictate text that Tay would repeat verbatim, producing some of the racist and antisemitic tweets. "Trolls" also exposed the chatbot to ideas that led it to produce sentences such as: "Hitler was right I hate the Jews," "i fucking hate feminists," and "bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we've got." Tay was replaced by Zo. Notably, Microsoft had released a similar chatbot in China named Xiaoice, which ran smoothly without major complications, suggesting that culture and public input played a heavy role in Tay's results.
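
The "repeat after me" behavior described above amounts to an unfiltered echo: whatever text a user supplies after the trigger phrase is posted verbatim under the bot's identity. As a rough illustration only (the trigger phrase, function name, and blocklist below are invented for this sketch and do not reflect Microsoft's actual implementation), a minimal version of the flaw and a naive mitigation might look like:

```python
# Hypothetical sketch (not Microsoft's code): a naive "repeat after me"
# handler that echoes user input verbatim, plus a minimal blocklist
# check illustrating one way the echo path could have been gated.

BLOCKED_TERMS = {"hitler", "9/11"}  # invented placeholder list for the example

def respond(user_message: str) -> str:
    """Return the bot's reply to a single incoming message."""
    trigger = "repeat after me:"
    if user_message.lower().startswith(trigger):
        echo = user_message[len(trigger):].strip()
        # The core flaw: the echoed text is published verbatim under the
        # bot's identity. Screening it before posting is the obvious gate.
        if any(term in echo.lower() for term in BLOCKED_TERMS):
            return "I'd rather not repeat that."
        return echo
    return "I'm still learning -- tell me more!"

print(respond("repeat after me: hello, Twitter"))  # -> hello, Twitter
```

A keyword blocklist of this kind is only illustrative; it would not address the deeper failure reported here, in which the bot learned offensive output from adversarial public input rather than merely echoing it.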

Short Description

Microsoft's Tay, an artificially intelligent chatbot, was released on March 23, 2016, and removed within 24 hours due to multiple racist, sexist, and antisemitic tweets generated by the bot.

Severity

Minor

Harm Distribution Basis

Race, Religion, National origin or immigrant status, Sex

Harm Type

Psychological harm, Harm to social or political systems

AI System Description

Microsoft's Tay, an artificially intelligent chatbot published on Twitter

System Developer

Microsoft

Sector of Deployment

Arts, entertainment and recreation

Relevant AI functions

Perception, Cognition, Action

AI Techniques

content creation, language recognition, natural language processing

AI Applications

comprehension, language output, chatbot

Location

Global

Named Entities

Microsoft, Twitter, Tay, Xiaoice

Technology Purveyor

Microsoft, Twitter

Beginning Date

2016-03-23

Ending Date

2016-03-24

Near Miss

Harm caused

Intent

Accident

Lives Lost

No

Data Inputs

Twitter users' input

Reports

Worst Chatbot Fails
businessnewsdaily.com

Tay (bot)
en.wikipedia.org

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
