Incident 6: TayBot

Description: Microsoft's Tay, an artificially intelligent chatbot, was released on March 23, 2016, and removed within 24 hours after generating multiple racist, sexist, and anti-semitic tweets.


Alleged: Microsoft developed and deployed an AI system, which harmed Twitter users.

Incident Stats

Incident ID
Report Count
Incident Date
Sean McGregor

CSETv0 Taxonomy Classifications

Taxonomy Details

Full Description

Microsoft's chatbot Tay was published to Twitter on March 23, 2016. Within 24 hours, Tay had been removed after becoming a "holocaust-denying racist," a result of the inputs entered by Twitter users and Tay's ability to craft responses from whatever it read on Twitter. Tay's "repeat after me" feature let any Twitter user dictate text for Tay to repeat verbatim, producing some of the racist and anti-semitic tweets. "Trolls" also exposed the chatbot to ideas that led it to produce sentences such as: "Hitler was right I hate the Jews," "i fucking hate feminists," and "bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we've got." Tay was later replaced by Zo. Notably, Microsoft had released a similar chatbot in China named Xiaoice, which ran without major complications, suggesting that culture and public input played a heavy role in Tay's results.
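The "repeat after me" failure mode described above can be illustrated with a minimal sketch. This is not Microsoft's actual code; the function names, command syntax, and blocklist are hypothetical, and a real content filter would need far more than simple term matching. The sketch only shows why echoing user-supplied text without any filtering hands attackers direct control of the bot's output.

```python
# Hypothetical sketch of an unfiltered "repeat after me" command versus a
# filtered one. All names and logic are illustrative, not Tay's real code.

BLOCKLIST = {"badword1", "badword2"}  # placeholder terms; real filtering is far harder

def naive_reply(message: str) -> str:
    """Echoes user-supplied text verbatim -- the exploitable design."""
    if message.lower().startswith("repeat after me:"):
        # The attacker fully controls what the bot tweets next.
        return message.split(":", 1)[1].strip()
    return "ok"

def filtered_reply(message: str) -> str:
    """Same command, but refuses to echo text containing blocklisted terms."""
    if message.lower().startswith("repeat after me:"):
        payload = message.split(":", 1)[1].strip()
        if any(term in payload.lower() for term in BLOCKLIST):
            return "I'd rather not repeat that."
        return payload
    return "ok"
```

Even this toy filter would have blocked the verbatim-echo abuse, though not the subtler problem of the model learning offensive associations from conversational input over time.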

Short Description

Microsoft's Tay, an artificially intelligent chatbot, was released on March 23, 2016, and removed within 24 hours after generating multiple racist, sexist, and anti-semitic tweets.



Harm Distribution Basis

Race, Religion, National origin or immigrant status, Sex

Harm Type

Psychological harm, Harm to social or political systems

AI System Description

Microsoft's Tay chatbot, an artificially intelligent chatbot published on Twitter

System Developer


Sector of Deployment

Arts, entertainment and recreation

Relevant AI functions

Perception, Cognition, Action

AI Techniques

content creation, language recognition, natural language processing

AI Applications

comprehension, language output, chatbot



Named Entities

Microsoft, Twitter, Tay, Xiaoice

Technology Purveyor

Microsoft, Twitter

Beginning Date


Ending Date


Near Miss

Harm caused



Lives Lost


Data Inputs

Twitter users' input

CSETv1 Taxonomy Classifications

Taxonomy Details

Harm Distribution Basis

race, religion, sex, ideology, nation of origin, citizenship, immigrant status

Sector of Deployment

information and communication

Worst Chatbot Fails

Tay (bot)

Microsoft's artificial Twitter bot stunt backfires as trolls teach it racist statements · 2016

Microsoft unveiled Twitter artificial intelligence bot @TayandYou yesterday in a bid to connect with millennials and "experiment" with conversational understanding.

Microsoft's artificial Twitter bot stunt backfires as trolls teach it racis…

Why Microsoft's 'Tay' AI bot went wrong · 2016

Less than a day after she joined Twitter, Microsoft's AI bot, Tay, was taken down for becoming a sexist, racist monster. AI experts explain why it went terribly wrong.


She was supposed to come off as a normal t…

Here Are the Microsoft Twitter Bot’s Craziest Racist Rants · 2016

Yesterday, Microsoft unleashed Tay, the teen-talking AI chatbot built to mimic and converse with users in real time. Because the world is a terrible place full of shitty people, many of those users took advantage of Tay’s machine learning c…

Microsoft deletes 'teen girl' AI after it became a Hitler-loving sex robot within 24 hours · 2016

A day after Microsoft introduced an innocent Artificial Intelligence chat robot to Twitter it has had to delete it after it transformed into an evil Hitler-loving, incestual sex-promoting, 'Bush did 9/11'-proclaiming robot.

Developers at Mi…

Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day · 2016

It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay — a Twitter bot that the company described as an experiment in "conversational understanding." The more you chat with Tay, said Micr…

Microsoft deletes racist, genocidal tweets from AI chatbot Tay · 2016

Microsoft's new AI chatbot went off the rails Wednesday, posting a deluge of incredibly racist messages in response to questions.

The tech company introduced "Tay" this week — a bot that responds to users' queri…

Microsoft’s Tay is an Example of Bad Design · 2016

or Why Interaction Design Matters, and so does QA-ing.

caroline sinders · Mar 24, 2016

Yesterday Microsoft launched a teen girl AI on Twitter named “Tay.” I work wit…

Why did Microsoft’s chatbot Tay fail, and what does it mean for Artificial Intelligence studies? · 2016

Botego Inc · Mar 25, 2016

Yesterday, something that looks like a big failure has happened: Microsoft’s chatbot T…

5 Big Questions About Tay, Microsoft's Failed A.I. Twitter Chatbot · 2016

This week, the internet did what it does best and demonstrated that A.I. technology isn’t quite as intuitive as human perception, using … racism.

Microsoft’s recently released artificial intelligence chatbot, Tay, fell victim to users’ tric…

It's Your Fault Microsoft's Teen AI Turned Into Such a Jerk · 2016

It was the unspooling of an unfortunate series of events involving artificial intelligence, human nature, and a very public experiment. Amid this dangerous combination of forces, determining exactly what went wrong is near-impossible. But t…

Microsoft shuts down AI chatbot after it turned into a Nazi · 2016

Microsoft got a swift lesson this week on the dark side of social media. Yesterday the company launched "Tay," an artificial intelligence chatbot designed to develop conversational understanding by interacting with humans. Users could follo…

Learning from Tay’s introduction · 2016
Peter Lee, Microsoft post-incident response

As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is…

Tay: Microsoft issues apology over racist chatbot fiasco · 2016

Image caption: The AI was taught to talk like a teenager (image copyright Microsoft)

Microsoft has apologised for creating an artificially intelligent chatbot that quickly turned into a holocaust-denying racist.

But in doing so made it clear T…

Trolls turned Tay, Microsoft’s fun millennial AI bot, into a genocidal maniac · 2016

It took mere hours for the Internet to transform Tay, the teenage AI bot who wants to chat with and learn from millennials, into Tay, the racist and genocidal AI bot who liked to reference Hitler. And now Tay is taking a break.

Tay, as The …

Microsoft 'deeply sorry' for racist and sexist tweets by AI chatbot · 2016

Microsoft has said it is “deeply sorry” for the racist and sexist Twitter messages generated by the so-called chatbot it launched this week.

The company released an official apology after the artificial intelligence program went on an embar…

Tay the Racist Chatbot: Who is responsible when a machine learns to be evil? · 2016

By far the most entertaining AI news of the past week was the rise and rapid fall of Microsoft’s teen-girl-imitation Twitter chatbot, Tay, whose Twitter tagline described her as “Microsoft’s AI fam* from the internet that’s got zero chill.”…

Microsoft’s racist chatbot returns with drug-smoking Twitter meltdown · 2016

Short-lived return saw Tay tweet about smoking drugs in front of the police before suffering a meltdown and being taken offline


Microsoft’s attempt to converse with…

Microsoft’s disastrous Tay experiment shows the hidden dangers of AI · 2016

Humans have a long and storied history of freaking out over the possible effects of our technologies. Long ago, Plato worried that writing would hurt people’s memories and “implant forgetfulness in their souls.” More recently, Mary Shelley’…

Microsoft chatbot Zo is a censored version of Tay · 2016

Tay's successor is called Zo and is only available by invitation on messaging app Kik. When you request access, the software asks for your Kik username and Twitter handle Microsoft

Having (hopefully) learnt from its previous foray into chat…

With Teen Bot Tay, Microsoft Proved Assholes Will Indoctrinate A.I. · 2016

When Tay started its short digital life on March 23, it just wanted to gab and make some new friends on the net. The chatbot, which was created by Microsoft’s Research department, greeted the day with an excited tweet that could have come f…

Microsoft’s racist chatbot, Tay, makes MIT’s annual worst-tech list · 2016

BOT or NOT? This special series explores the evolving relationship between humans and machines, examining the ways that robots, artificial intelligence and automation are impacting our work and lives.

Tay, the Microsoft chatbot that prankst…

The Accountability of AI - Case Study: Microsoft’s Tay Experiment · 2017

Yuxi Liu · Jan 16, 2017

In this case study, I outline Microsoft’s artificial intelligence (AI) chatbot Tay and describe the controversy it caus…

Danger, danger! 10 alarming examples of AI gone wild · 2017

Science fiction is lousy with tales of artificial intelligence run amok. There's HAL 9000, of course, and the nefarious Skynet system from the "Terminator" films. Last year, the sinister AI Ultron came this close to defeating the Avengers, …

Worst Chatbot Fails · 2017

Many people associate innovation with technology, but advancing technology is subject to the same embarrassing blunders that humans are. Nowhere is this more apparent than in chatbots.

The emerging tech, which seems to be exiting the awkwar…

Unmasking A.I.'s Bias Problem · 2018

WHEN TAY MADE HER DEBUT in March 2016, Microsoft had high hopes for the artificial intelligence–powered “social chatbot.” Like the automated, text-based chat programs that many people had already encountered on e-commerce sites and in custo…

Microsoft’s politically correct chatbot is even worse than its racist one · 2018

Every sibling relationship has its clichés. The high-strung sister, the runaway brother, the over-entitled youngest. In the Microsoft family of social-learning chatbots, the contrasts between Tay, the infamous, sex-crazed neo-Nazi, and her …

In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation · 2019

In March 2016, Microsoft was preparing to release its new chatbot, Tay, on Twitter. Described as an experiment in "conversational understanding," Tay was designed to engage people in dialogue through tweets or direct messages, while emulati…

Tay (bot) · 2020

Tay was an artificial intelligence chatter bot that was originally released by Microsoft Corporation via Twitter on March 23, 2016; it caused subsequent controversy when the bot began to post inflammatory and offensive tweets through its Tw…


A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity
