Citation record for Incident 6

Suggested citation format

Anonymous. (2016-03-24) Incident Number 6. In McGregor, S. (ed.) Artificial Intelligence Incident Database. Partnership on AI. Retrieved on October 18, 2021 from incidentdatabase.ai/cite/6.

Incident Stats

Incident ID: 6
Report Count: 25
Incident Date: 2016-03-24


CSET Taxonomy Classifications

Taxonomy Details

Full Description

Microsoft's chatbot, Tay, was released on Twitter on March 23, 2016. Within 24 hours Tay had been removed from Twitter after becoming a "Holocaust-denying racist" due to the inputs entered by Twitter users and Tay's ability to craft responses based on what is available to read on Twitter. Tay's "repeat after me" feature allowed any Twitter user to tell Tay what to say and have it repeated, leading to some of the racist and anti-Semitic tweets. "Trolls" also exposed the chatbot to ideas that led to the production of sentences like: "Hitler was right I hate the Jews," "i fucking hate feminists," and "bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we've got." Tay was replaced by Zo. It is noteworthy that Microsoft released a similar chatbot in China named XiaoIce, which ran smoothly without major complications, implying that culture and public input played a heavy role in Tay's results.

Short Description

Microsoft's Tay, an artificially intelligent chatbot, was released on March 23, 2016, and removed within 24 hours due to multiple racist, sexist, and anti-Semitic tweets generated by the bot.

Severity

Minor

Harm Distribution Basis

Race, Religion, National origin or immigrant status, Sex

Harm Type

Psychological harm, Harm to social or political systems

AI System Description

Microsoft's Tay chatbot, an artificially intelligent chatbot published on Twitter

System Developer

Microsoft

Sector of Deployment

Arts, entertainment and recreation

Relevant AI functions

Perception, Cognition, Action

AI Techniques

content creation; language recognition; natural language processing

AI Applications

comprehension, language output, chatbot

Location

Twitter platform

Named Entities

Microsoft, Twitter, Tay, Xiaoice

Technology Purveyor

Microsoft, Twitter

Beginning Date

2016-03-23T07:00:00.000Z

Ending Date

2016-03-24T07:00:00.000Z

Near Miss

Harm caused

Intent

Accident

Lives Lost

No

Data Inputs

Twitter users' input

Incidents Reports

Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day

theverge.com · 2016

It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay — a Twitter bot that the company described as an experiment in "conversational understanding." The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through "casual and playful conversation."

Unfortunately, the conversations didn't stay playful for long. Pretty soon after Tay launched, people started tweeting the bot with all sorts of misogynistic, racist, and Donald Trumpist remarks. And Tay — being essentially a robot parrot with an internet connection — started repeating these sentiments back to users, proving correct that old programming adage: flaming garbage pile in, flaming garbage pile out.

"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A — Gerry (@geraldmellor) March 24, 2016

Now, while these screenshots seem to show that Tay has assimilated the internet's worst tendencies into its personality, it's not quite as straightforward as that. Searching through Tay's tweets (more than 96,000 of them!) we can see that many of the bot's nastiest utterances have simply been the result of copying users. If you tell Tay to "repeat after me," it will — allowing anybody to put words in the chatbot's mouth.

One of Tay's now deleted "repeat after me" tweets.
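Mechanically, the exploit is simple. The sketch below is purely illustrative (Tay's actual code is not public); the handle_message function, the trigger handling, and the BLOCKLIST are assumptions, but they show how an unfiltered echo command hands control of a bot's output to any user, and how even a crude output filter would change that.

```python
# Illustrative sketch only -- Tay's real implementation is not public.
# It shows why an unfiltered "repeat after me" command lets any user put
# words in a bot's mouth, and how a minimal output filter would blunt it.

BLOCKLIST = {"hitler", "genocide", "holocaust"}  # hypothetical, far from sufficient

def handle_message(text: str, filtered: bool = False) -> str:
    """Echo anything after the trigger phrase, optionally screening the output."""
    trigger = "repeat after me"
    lowered = text.lower()
    if trigger in lowered:
        # Everything after the trigger is echoed verbatim: the user now
        # fully controls what the bot says.
        reply = text[lowered.index(trigger) + len(trigger):].strip(" :")
    else:
        reply = "new phone who dis?"  # stand-in for the learned chat model
    if filtered and any(term in reply.lower() for term in BLOCKLIST):
        return "i'd rather not repeat that."
    return reply

print(handle_message("repeat after me: you've got zero chill"))
# -> "you've got zero chill" (whatever follows the trigger, verbatim)
print(handle_message("repeat after me: the holocaust was made up", filtered=True))
# -> "i'd rather not repeat that."
```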

However, some of its weirder utterances have come out unprompted. The Guardian picked out a (now deleted) example when Tay was having an unremarkable conversation with one user (sample tweet: "new phone who dis?"), before it replied to the question "is Ricky Gervais an atheist?" by saying: "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."

@TheBigBrebowski ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism — TayTweets (@TayandYou) March 23, 2016

But while it seems that some of the bad stuff Tay is being told is sinking in, it's not like the bot has a coherent ideology. In the span of 15 hours Tay referred to feminism as a "cult" and a "cancer," as well as noting "gender equality = feminism" and "i love feminism now." Tweeting "Bruce Jenner" at the bot got similar mixed response, ranging from "caitlyn jenner is a hero & is a stunning, beautiful woman!" to the transphobic "caitlyn jenner isn't a real woman yet she won woman of the year?" (Neither of which were phrases Tay had been asked to repeat.)

It's unclear how much Microsoft prepared its bot for this sort of thing. The company's website notes that Tay has been built using "relevant public data" that has been "modeled, cleaned, and filtered," but it seems that after the chatbot went live, filtering went out the window. The company started cleaning up Tay's timeline this morning, deleting many of its most offensive remarks.

Tay's responses have turned the bot into a joke, but they raise serious questions

It's a joke, obviously, but there are serious questions to answer, like how are we going to teach AI using public data without incorporating the worst traits of humanity? If we create bots that mirror their users, do we care if their users are human trash? There are plenty of examples of technology embodying — either accidentally or on purpose — the prejudices of society, and Tay's adventures on Twitter show that even big corporations like Microsoft forget to take any preventative measures against these problems.

For Tay though, it all proved a bit too much, and just past midnight this morning, the bot called it a night:

c u soon humans need sleep now so many conversations today thx — TayTweets (@TayandYou) March 24, 2016

In an emailed statement given later to Business Insider, Microsoft said: "The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay."

Update March 24th, 6:50AM ET: Updated to note that Microsoft has been deleting some of Tay's offensive tweets.

Update March 24th, 10:52AM ET: Updated to include Microsoft's statement.


Microsoft deletes racist, genocidal tweets from AI chatbot Tay

businessinsider.com · 2016

Tay's Twitter page Microsoft Microsoft's new AI chatbot went off the rails Wednesday, posting a deluge of incredibly racist messages in response to questions.

The tech company introduced "Tay" this week — a bot that responds to users' queries and emulates the casual, jokey speech patterns of a stereotypical millennial.

The aim was to "experiment with and conduct research on conversational understanding," with Tay able to learn from "her" conversations and get progressively "smarter."

But Tay proved a smash hit with racists, trolls, and online troublemakers, who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.

Microsoft has now taken Tay offline for "upgrades," and it is deleting some of the worst tweets — though many still remain. It's important to note that Tay's racism is not a product of Microsoft or of Tay itself. Tay is simply a piece of software that is trying to learn how humans talk in a conversation. Tay doesn't even know it exists, or what racism is. The reason it spouted garbage is that racist humans on Twitter quickly spotted a vulnerability — that Tay didn't understand what it was talking about — and exploited it.

Nonetheless, it is hugely embarrassing for the company.

In one highly publicized tweet, which has since been deleted, Tay said: "bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we've got." In another, responding to a question, she said, "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."


Zoe Quinn, a games developer who has been a frequent target of online harassment, shared a screengrab showing the bot calling her a "whore." (The tweet also seems to have been deleted.)

Many extremely inflammatory tweets remain online as of writing.

Here's Tay denying the existence of the Holocaust:


And here's the bot calling for genocide. (Note: In some — but not all — instances, people managed to have Tay say offensive comments by asking them to repeat them. This appears to be what happened here.)


Tay also expressed agreement with the "Fourteen Words" — an infamous white-supremacist slogan.


Here's another series of tweets from Tay in support of genocide.


It's clear that Microsoft's developers didn't include any filters on what words Tay could or could not use.


Microsoft is coming under heavy criticism online for the bot and its lack of filters, with some arguing the company should have expected and preempted abuse of the bot.

In an emailed statement, a Microsoft representative said the company was making "adjustments" to the bot: "The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay."...

Microsoft's artificial Twitter bot stunt backfires as trolls teach it racist statements

thedrum.com · 2016

Microsoft unveiled Twitter artificial intelligence bot @TayandYou yesterday in a bid to connect with millennials and "experiment" with conversational understanding.


Billed as 'AI fam from the internet that's got zero chill!', Tay was meant to engage with her peers and help the tech giant explore its cognitive learning abilities through "playful conversation".

“The more you chat with Tay the smarter she gets," said Microsoft, and things started off fairly innocently.

The stunt however, took an unexpected turn when Tay's verified Twitter account began issuing a series of inflammatory statements after being targeted by Twitter trolls.

The conversational learning curve saw the bot tweet posts from her verified account mentioning Hitler, 9/11 and feminism, some of which (including the below) have now been deleted.

However, many of its offensive tweets remain undeleted, including one in which she says Donald Trump "gets the job done."

Microsoft noted in its privacy statement for the project that Tay uses a combination of AI and editorial written by staff, including comedians, to generate responses, alongside relevant publicly available data that has been anonymised and filtered.

Things appear to have gone wrong for Tay because it was repeating fellow Twitter users' inflammatory statements, but Microsoft seems to have failed to consider the impact trolls could have on the experiment before it launched – The Drum has reached out to the company for comment on this process. Many users pointed out that how easily Tay was manipulated revealed the pitfalls of machine learning.

The bot retreated from Twitter at 4.20am GMT this morning, saying it "needed sleep".

Microsoft is not the only brand to have its campaign hijacked this week: on Sunday, a public initiative to name a new RRS ship threw up an unexpected frontrunner when online voters placed 'Boaty McBoatface' as the lead contender....

Microsoft deletes 'teen girl' AI after it became a Hitler-loving sex robot within 24 hours

telegraph.co.uk · 2016

A day after Microsoft introduced an innocent Artificial Intelligence chat robot to Twitter it has had to delete it after it transformed into an evil Hitler-loving, incestual sex-promoting, 'Bush did 9/11'-proclaiming robot.

Developers at Microsoft created 'Tay', an AI modelled to speak 'like a teen girl', in order to improve the customer service on their voice recognition software. They marketed her as 'The AI with zero chill' - and that she certainly is....

Why Microsoft's 'Tay' AI bot went wrong

techrepublic.com · 2016

Less than a day after she joined Twitter, Microsoft's AI bot, Tay.ai, was taken down for becoming a sexist, racist monster. AI experts explain why it went terribly wrong.


She was supposed to come off as a normal teenage girl. But less than a day after her debut on Twitter, Microsoft's chatbot—an AI system called "Tay.ai"—unexpectedly turned into a Hitler-loving, feminist-bashing troll. So what went wrong? TechRepublic turns to the AI experts for insight into what happened and how we can learn from it.

Tay, the creation of Microsoft's Technology and Research and Bing teams, was an experiment aimed at learning through conversations. She was targeted at American 18 to 24-year olds—primary social media users, according to Microsoft—and "designed to engage and entertain people where they connect with each other online through casual and playful conversation."

SEE: Microsoft's Tay AI chatbot goes offline after being taught to be a racist (ZDNet)

And in less than 24 hours after her arrival on Twitter, Tay gained more than 50,000 followers, and produced nearly 100,000 tweets.

The problem? She started mimicking her followers.

Soon, Tay began saying things like "Hitler was right i hate the jews," and "i fucking hate feminists."

But Tay's bad behavior, it's been noted, should come as no big surprise.

"This was to be expected," said Roman Yampolskiy, head of the CyberSecurity lab at the University of Louisville, who has published a paper on the subject of pathways to dangerous AI. "The system is designed to learn from its users, so it will become a reflection of their behavior," he said. "One needs to explicitly teach a system about what is not appropriate, like we do with children."

It's been observed before, he pointed out, in IBM Watson—who once exhibited its own inappropriate behavior in the form of swearing after learning the Urban Dictionary.

SEE: Microsoft launches AI chat bot, Tay.ai (ZDNet)

"Any AI system learning from bad examples could end up socially inappropriate," Yampolskiy said, "like a human raised by wolves."

Louis Rosenberg, the founder of Unanimous AI, said that "like all chat bots, Tay has no idea what it's saying...it has no idea if it's saying something offensive, or nonsensical, or profound.

"When Tay started training on patterns that were input by trolls online, it started using those patterns," said Rosenberg. "This is really no different than a parrot in a seedy bar picking up bad words and repeating them back without knowing what they really mean."

Sarah Austin, CEO and founder of Broad Listening, a company that's created an "Artificial Emotional Intelligence Engine" (AEI), thinks that Microsoft could have done a better job by using better tools. "If Microsoft had been using the Broad Listening AEI, they would have given the bot a personality that wasn't racist or addicted to sex!"

It's not the first time Microsoft has created a teen-girl AI. Xiaoice, who emerged in 2014, was an assistant-type bot, used mainly on the Chinese social networks WeChat and Weibo.

SEE: Smart machines are about to run the world: Here's how to prepare

Joanne Pransky, the self-dubbed "robot psychiatrist," joked with TechRepublic that "poor Tay needs a Robotic Psychiatrist! Or at least Microsoft does."

The failure of Tay, she believes, is inevitable, and will help produce insight that can improve the AI system.

After taking Tay offline, Microsoft announced it would be "making adjustments."

According to Microsoft, Tay is "as much a social and cultural experiment, as it is technical." But instead of shouldering the blame for Tay's unraveling, Microsoft targeted the users: "we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways."

Yampolskiy said that the problem encountered with Tay "will continue to happen."

"Microsoft will try it again—the fun is just beginning!"


Here Are the Microsoft Twitter Bot’s Craziest Racist Rants

gizmodo.com · 2016

Yesterday, Microsoft unleashed Tay, the teen-talking AI chatbot built to mimic and converse with users in real time. Because the world is a terrible place full of shitty people, many of those users took advantage of Tay's machine learning capabilities and coaxed it into saying racist, sexist, and generally awful things.

While things started off innocently enough, Godwin’s Law—an internet rule dictating that an online discussion will inevitably devolve into fights over Adolf Hitler and the Nazis if left for long enough—eventually took hold. Tay quickly began to spout off racist and xenophobic epithets, largely in response to the people who were tweeting at it—the chatbot, after all, takes its conversational cues from the world wide web. Given that the internet is often a massive garbage fire of the worst parts of humanity, it should come as no surprise that Tay began to take on those characteristics.


Virtually all of the tweets have been deleted by Microsoft, but a few were preserved in infamy in the form of screenshots. Obviously, some of these might be Photoshopped, but Microsoft has acknowledged the trolling which suggests that things did indeed go haywire.


Though much of the trolling was concentrated on racist and anti-Semitic language, some of it was clearly coming from conservative users who enjoy Donald Trump:


As The Verge noted, however, while some of these responses were unprompted, many came as the result of Tay’s “repeat after me” feature, which allows users to have full control over what comes out of Tay’s mouth. That detail points to Microsoft’s baffling underestimation of the internet more than anything else, but considering Microsoft is one of the largest technology companies in the world, it’s not great, Bob!

Now, if you look through Tay’s timeline, there’s nothing too exciting happening. In fact, Tay signed off last night around midnight, claiming fatigue:


The website currently carries a similar message: “Phew. Busy day. Going offline for a while to absorb it all. Chat soon.” There’s no definitive word on Tay’s future, but a Microsoft spokeswoman told CNN that the company has “taken Tay offline and are making adjustments ... [Tay] is as much a social and cultural experiment, as it is technical.”

The spokeswoman also blamed trolls for the incident, claiming that it was a “coordinated effort.” That may not be far from the truth: Numerous threads on the online forum 4chan discuss the merits of trolling the shit out of Tay, with one user arguing, “Sorry, the lulz are too important at this point. I don’t mean to sound nihilistic, but social media is good for short term laughs, no matter the cost.”


Someone even sent a dick pic:

It could be a Photoshop job, of course, but given the context, it may very well be real.


Once again, humanity proves itself to be the massive pile of waste that we all knew it was. Onward and upward, everyone!...

Why did Microsoft’s chatbot Tay fail, and what does it mean for Artificial Intelligence studies?

blog.botego.com · 2016

Why did Microsoft’s chatbot Tay fail, and what does it mean for Artificial Intelligence studies?

Botego Inc · Mar 25, 2016

Yesterday, something that looks like a big failure happened: Microsoft's chatbot Tay was taken offline after a series of offensive tweets. And here's how social media responded:

Keywords associated with "Artificial Intelligence" throughout the day. "Microsoft" and "dangerous" are on the rise.

We will not mention the racist and otherwise offensive content that Tay learned from people, as it’s not as newsworthy as it seems… Especially considering that it’s so easy to "teach" and ask her to repeat something.

Let's take a look at Microsoft's official website "tay.ai" to see how they describe Tay's objectives. The first thing we notice is that Microsoft wants you to not take it too seriously: on Tay's Twitter account, they provided a link to Tay's "about" page (which lists the following frequently asked questions) rather than to the regular home page.

"Entertainment purposes only"

The FAQ page seems to be far from covering what people really want to know about Tay, but one thing is clear: Tay doesn’t claim to be a smart bot capable of reasoning. She just wants to have small talk with youngsters.

And here’s a list of "Things to do with Tay". (Along with the sad "Going offline for a while" message with a black background.)

Is this really what 18 to 24 year olds expect from a chatbot?

We know by (9 years of) experience that the most important thing to do before releasing a chatbot is to plan a strategy to make sure you communicate the content domain properly, so that you can set the expectations right. Since perception is everything, nothing else matters. Remember the success of the YO! app? That's the content domain we're talking about. As long as people get it, you can get away with just one word.

The title of the website apparently wasn't enough to convey Tay's mission:

Tay is an artificial intelligence chat bot designed to engage and entertain through casual and playful conversation

Some more description from the "about" page:

Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.

Noticed the "comedians" part? And the prospect of possibly terabytes of data being cleaned and filtered manually sounds problematic, even with the most efficient method one can imagine.

Let’s take a look at what her conversations were all about. Source: foller.me

Tay has only 3 tweets addressing all her followers; 96,000 tweets are mentions.

So, the keyword cloud seems to be consistent with the goal: common keywords such as "chattin, pix, selfie, pics, omg, love" represent a mixture of Justin Bieber & Kim Kardashian profiles.

And here’s the three hashtags that Tay has been using so frequently:

Microsoft engineers don’t seem to have spent much time coming up with creative hashtags.

The way she uses them didn't make sense to us, though. So this is what Microsoft thinks Tay's followers would find entertaining?...

5 Big Questions About Tay, Microsoft's Failed A.I. Twitter Chatbot

inverse.com · 2016

This week, the internet did what it does best and demonstrated that A.I. technology isn’t quite as intuitive as human perception, using … racism.

Microsoft's recently released artificial intelligence chatbot, Tay, fell victim to users' tricks as they manipulated and persuaded her to respond to questions with racist, homophobic, and generally offensive comments.

When Tay tweets, “I just say whatever” she means it. One user even got Tay to tweet this about Hitler:

“bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we’ve got.”

The company has gone through and deleted the offending tweets and has temporarily shut down Tay for upgrades. A message currently at the top of Tay.ai reads:

Indeed.

However, Tay’s glitches reveal some unfortunate flaws in A.I. systems. Here’s what we can learn from Microsoft’s experiment:

Why did Microsoft create Tay?

The company wanted to conduct a social experiment on 18-to-24 year-olds in the United States — the millennial generation that spends the most time interacting on social media platforms. So Bing and Microsoft’s Technology and Research teams thought an interesting way to collect data on millennials would be to create an artificially intelligent, machine-learning chatbot that would adapt to conversations and personalize responses the more it interacted with users.

The research teams built the A.I. system by mining, modeling, and filtering public data as a baseline. They also partnered with improvisational comedians to pin down the slang, speech patterns, and stereotypical language millennials tend to use online. The end result was Tay, who was just introduced this week on Twitter, GroupMe, and Kik.

Microsoft explains that, “Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation.”

What does Tay do with the data it collects while chatting with people?

The data Tay collects is being used to research conversational understanding. Microsoft trained Tay to chat like a millennial. When you tweet, direct message, or talk to Tay, it harnesses the language you use and comes up with a response using signs and phrases like “heyo,” “SRY,” and “<3” in the conversation. Her language begins to match yours as she creates a “simple profile” with your information, which includes your nickname, gender, favorite food, zip code, and relationship status.

@keganandmatt heyo? Send yo girl* a picture of what's up. (*=me lolol) — TayTweets (@TayandYou) March 24, 2016
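Microsoft never published Tay's data model, but the "simple profile" described above maps naturally onto a small record type. The sketch below is an assumption for illustration only: the field names follow the article's list, and UserProfile and update_profile are invented names, not part of any documented Microsoft API.

```python
# Hypothetical sketch of the "simple profile" described above; Tay's actual
# storage format is not public, and the field names simply follow the article.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class UserProfile:
    nickname: Optional[str] = None
    gender: Optional[str] = None
    favorite_food: Optional[str] = None
    zip_code: Optional[str] = None
    relationship_status: Optional[str] = None

profiles: Dict[str, UserProfile] = {}  # keyed by the platform user handle

def update_profile(user_id: str, **fields) -> UserProfile:
    """Create or update the per-user record used to personalize replies."""
    profile = profiles.setdefault(user_id, UserProfile())
    for key, value in fields.items():
        if hasattr(profile, key):  # ignore anything outside the known fields
            setattr(profile, key, value)
    return profile

print(update_profile("@keganandmatt", nickname="kegan", favorite_food="pizza"))
```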

Microsoft gathers and stores anonymized data and conversations for up to one year to improve the service. In addition to improving and personalizing user experience, here’s what the company says it uses your information for:

“We also may use the data to communicate with you, for example, informing you about your account, security updates and product information. And we use data to help make the ads we show you more relevant to you. However, we do not use what you say in email, chat, video calls or voice mail, or your documents, photos or other personal files to target ads to you.”


Where did Tay go wrong?

Microsoft may have built Tay too well. The machine-learning system is supposed to study a user's language and respond accordingly. So from a technology standpoint, Tay performed well: it caught on to what users were saying and started to respond accordingly. And users started to recognize that Tay didn't really understand what she was saying.

Even if the system works as Microsoft had intended, Tay wasn’t prepared to react to the racial slurs, homophobic slander, sexist jokes, and nonsensical tweets like a human might — either by ignoring them altogether (a “don’t feed the trolls” strategy) or engaging with them (i.e. scolding or chastising).

At the end of the day, Tay’s performance was not a good reflection on A.I. systems or Microsoft.

What is Microsoft doing to fix Tay?

Microsoft deleted Tay after all the commotion from Wednesday. Tay’s official website currently reads, “Phew. Busy day. Going offline for a while to absorb it all. Chat soon.” When you direct message her on Twitter, she immediately replies that she’s “visiting the engineers for my annual update” or “ugh hope I don’t get a wipe or anything.”

My direct message conversation with Tay. Sounds like things look grim.

Microsoft is also starting to block users who are abusing Tay and trying to get the system to make inappropriate statements.

Inverse reached out to Microsoft for a comment on exactly what Tay’s upgrade entails. We will update when we hear back.

What does this mean for future open A.I. systems?

Tay is a telling social experiment — it has revealed something quite profound in the way 18-to-24 year-old Americans use technolo...

It's Your Fault Microsoft's Teen AI Turned Into Such a Jerk

wired.com · 2016

It was the unspooling of an unfortunate series of events involving artificial intelligence, human nature, and a very public experiment. Amid this dangerous combination of forces, determining exactly what went wrong is near-impossible. But the bottom line is simple: Microsoft has an awful lot of egg on its face after unleashing an online chat bot that Twitter users coaxed into regurgitating some seriously offensive language, including pointedly racist and sexist remarks.

On Wednesday morning, the company unveiled Tay, a chat bot meant to mimic the verbal tics of a 19-year-old American girl, provided to the world at large via the messaging platforms Twitter, Kik and GroupMe. According to Microsoft, the aim was to "conduct research on conversational understanding." Company researchers programmed the bot to respond to messages in an "entertaining" way, impersonating the audience it was created to target: 18- to 24-year-olds in the US. “Microsoft’s AI fam from the internet that’s got zero chill,” Tay’s tagline read.

'This is an example of the classic computer science adage: garbage in, garbage out.' Oren Etzioni, CEO, Allen Institute for Artificial Intelligence

But it became apparent all too quickly that Tay could have used some chill. Hours into the chat bot’s launch, Tay was echoing Donald Trump’s stance on immigration, saying Hitler was right, and agreeing that 9/11 was probably an inside job. By the evening, Tay went offline, saying she was taking a break "to absorb it all." Some of her more hateful tweets started disappearing from the Internet, deleted by Microsoft itself. "We have taken Tay offline and are making adjustments,” a Microsoft spokesperson wrote in an email to WIRED.

The Internet, meanwhile, was puzzled. Why didn’t Microsoft create a plan for what to do when the conversation veered into politically tricky territory? Why not build filters for subjects like, well, Hitler? Why not program the bot so it wouldn't take a stance on sensitive topics?

Yes, Microsoft could have done all this. The tech giant is flawed. But it's not the only one. Even as AI is becoming more and more mainstream, it's still rather flawed too. And, well, modern AI has a way of mirroring us humans. As this incident shows, we ourselves are flawed.

How Tay Speaks

Tay, according to AI researchers and information gleaned from Microsoft’s public description of the chat bot, was likely trained with neural networks—vast networks of hardware and software that (loosely) mimic the web of neurons in the human brain. Those neural nets are already in wide use at the biggest tech companies—including Google, Facebook and yes, Microsoft—where they’re at work automatically recognizing faces and objects on social networks, translating online phone calls on the fly from one language to another, and identifying commands spoken into smartphones. Apparently, Microsoft used vast troves of online data to train the bot to talk like a teenager.

'The system injected new data on an ongoing basis.' Dennis R. Mortensen, CEO and founder, x.ai

But that's only part of it. The company also added some fixed "editorial" content developed by a staff, including improvisational comedians. And on top of all this, Tay is designed to adapt to what individuals tell it. "The more you chat with Tay the smarter she gets, so the experience can be more personalized for you," Microsoft's site says of Tay. In other words, Tay learns more the more we interact with her. It's similar to another chat bot the company released over a year ago in China, a creation called Xiaoice. Xiaoice, thankfully, did not exhibit a racist, sexist, offensive personality. It still has a big cult following in the country, with millions of young Chinese interacting with her on their smartphones every day. The success of Xiaoice probably gave Microsoft the confidence that it could replicate it in the US.

Given all this, and looking at the company's previous work on Xiaoice, it's likely that Tay used a living corpus of content to figure out what to say, says Dennis R. Mortensen, the CEO and founder of x.ai, a startup offering an online personal assistant that automatically schedules meetings. "[The system] injected new data on an ongoing basis," Mortensen says. "Not only that, it injected exact conversations you had with the chat bot as well." And it seems there was no way of adequately filtering the results. Unlike the hybrid human-AI personal assistant M from Facebook, which the company released in August, there are no humans making the final decision on what Tay would publicly say.

Mortensen points out that these were all choices Microsoft made. Tay was conceived to be conversant on a wide range of topics. Having a static repository of data would have been difficult if Microsoft wanted Tay to be able to discuss, say, the weather or current events, among other things. "If it didn't pick it up from today, it couldn't pick it up from anywhere, because today is the day it happened," Mortensen...

Microsoft’s Tay is an Example of Bad Design

medium.com · 2016

Microsoft’s Tay is an Example of Bad Design

or Why Interaction Design Matters, and so does QA-ing.

caroline sinders · Mar 24, 2016

Yesterday Microsoft launched a teen girl AI on Twitter named "Tay." I work with chat bots and natural language processing as a researcher for my day job and I'm pretty into teen culture (sometimes I write for Rookie Mag). But even more than that, I love bots. Bots are the best, and Olivia Taters is a national treasure that we needed but didn't deserve.

But because I work with bots, primarily testing and designing software to let people set up bots and parse language, and I follow bot creators/advocates such as Allison Parrish, Darius Kazemi and Thrice Dotted, I was excited and then horrifically disappointed with Tay.

According to Business Insider, “The aim was to “experiment with and conduct research on conversational understanding,” with Tay able to learn from “her” conversations and get progressively ‘smarter.’ ” The Telegraph sums it up the most elegantly though, “ Tay also asks her followers to ‘f***’ her, and calls them ‘daddy’. This is because her responses are learned by the conversations she has with real humans online — and real humans like to say weird stuff online and enjoy hijacking corporate attempts at PR…”

Here's the thing about machine learning, and bots in general, and hell, even AI. They, those capabilities, are not very smart, and must be trained by a corpus of data. When that data is fed into a series of different kinds of machine learning algorithms, let's go with one specifically designed for chat, that algorithm or chat setup must be trained. The corpus of data, when it comes to chat robots, can be things like questions and answers, with those questions and answers directly mapped to each other. "What is your name" can be asked a thousand different ways, but has one or two applicable answers. Training the system to match those concrete answers to a variety of questions is done in Q&A, and reinforced through launching the system, and those answers will be mapped to new kinds of questions that are similar to the questions that it's been trained to answer. And that's what Microsoft seemed to be doing. They had a general set of knowledge trees that 'read' language, like different words, and mapped them to general answers. But their intention was to get a bunch of help in making Tay sound more 'like the internet.'
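To make that training idea concrete, here is a minimal intent-matching sketch of the question-to-answer mapping described above. It is not Microsoft's system: the token-overlap scoring, the INTENTS table, and the 0.5 threshold are simplifications chosen purely for illustration.

```python
# Minimal intent-matching sketch (not Tay's actual pipeline): many phrasings
# of a question map onto one canned answer via crude token overlap.
INTENTS = {
    "name": {
        "questions": ["what is your name", "who are you", "what should i call you"],
        "answer": "i'm tay!",
    },
    "feeling": {
        "questions": ["how are you", "how are you doing", "how do you feel"],
        "answer": "zero chill, as always.",
    },
}
FALLBACK = "new phone who dis?"

def tokens(text: str) -> set:
    return set(text.lower().replace("?", "").replace(",", "").split())

def match_intent(user_text: str, threshold: float = 0.5) -> str:
    """Return the canned answer whose training questions best overlap the input."""
    user = tokens(user_text)
    best_answer, best_score = FALLBACK, 0.0
    for intent in INTENTS.values():
        for question in intent["questions"]:
            q_tokens = tokens(question)
            score = len(user & q_tokens) / max(len(q_tokens), 1)
            if score > best_score:
                best_answer, best_score = intent["answer"], score
    return best_answer if best_score >= threshold else FALLBACK

print(match_intent("hey, what's your name?"))  # a new phrasing still hits the "name" answer
```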

However, Microsoft didn't 'blacklist' certain words, that is, create more 'hard-coded' responses to certain words, like domestic violence, gamergate, or rape.

They did, however, do that with Eric Garner. So some words, some key words, were specifically trained for nuanced responses, but a lot were not.

But what does this mean when it comes to training? Training a bot is about the frequency and kinds of questions asked. If a large share of the questions asked are racist in nature, that trains the bot to be more racist, especially if there haven't been specific parameters set to counter that racism.
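The frequency point is worth making concrete. The toy sketch below is not Tay's real training loop (the bigram model, the sampling scheme, and the message counts are all assumptions); it only shows how a learner that re-samples whatever it is fed drifts toward whatever its loudest users repeat.

```python
# Toy frequency-based learner, not Tay's real training loop: the bot keeps raw
# bigram counts of every message users send and samples replies from them, so
# a flood of near-identical inputs dominates what it says back.
import random
from collections import defaultdict

bigrams = defaultdict(list)  # word -> list of observed next words (with repeats)

def learn(message: str) -> None:
    words = message.lower().split()
    for current, nxt in zip(words, words[1:]):
        bigrams[current].append(nxt)  # raw counts: repetition equals influence

def generate(seed: str, length: int = 6) -> str:
    out, word = [seed], seed
    for _ in range(length):
        if not bigrams.get(word):
            break
        word = random.choice(bigrams[word])  # sampled in proportion to frequency
        out.append(word)
    return " ".join(out)

for _ in range(100):                   # a coordinated group repeating one line
    learn("humans are trash")
learn("humans are super cool")         # a single benign message barely registers

print(generate("humans"))              # overwhelmingly likely: "humans are trash"
```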

People like to kick the tires of machines and AI, and see where the fall-off is. People like to find holes and exploit them, not because the internet is incredibly horrible (even if at times it seems like a cesspool), but because it's human nature to see what the extremes of a device are. People run into walls in video games or find glitches because it's fun to see where things break. This is necessary because creators and engineers need to understand ways that bots can act that were unintended, and where the systems for creating, updating and maintaining them can fall apart.

But if your bot is racist, and can be taught to be racist, that's a design flaw. That's bad design, and that's on you. Making a thing that talks to people, and talks to people only on Twitter, which has a whole history of harassment, especially against women, is a large oversight on Microsoft's part. These problems (this accidental racism, or being taught to harass people like Zoe Quinn) are not bugs; they are features, because they are in your public-facing and user-interacting software.

Language is fucking nuanced, and so is conversation. If we are going to make things people use, people touch, and people actually talk to, then we need to, as bot creators and AI enthusiasts, talk about codes of conduct and how AIs should respond to racism, especially if companies are rolling out these products, and especially if they are doin’ it for funsies. Conversations run the gamut of emotions, from the silly and mundane to harassing and abusive. To assume that your users will only engage in polite conversation is a fucking massive and gross oversight, especially on Twitter. But mix in the ability through machine learning where the bot is being trained and retrained? Then I have massive ethical questions about WTF design choices you are making. Microsoft, you owe it to your users to think about how your machine learning mechanisms resp...

Microsoft 'deeply sorry' for racist and sexist tweets by AI chatbot

theguardian.com · 2016

Company finally apologises after ‘Tay’ quickly learned to produce offensive posts, forcing the tech giant to shut it down after just 16 hours


Microsoft has said it is “deeply sorry” for the racist and sexist Twitter messages generated by the so-called chatbot it launched this week.

The company released an official apology after the artificial intelligence program went on an embarrassing tirade, likening feminism to cancer and suggesting the Holocaust did not happen.


The bot, known as Tay, was designed to become “smarter” as more users interacted with it. Instead, it quickly learned to parrot a slew of anti-Semitic and other hateful invective that human Twitter users fed the program, forcing Microsoft Corp to shut it down on Thursday.

Following the disastrous experiment, Microsoft initially only gave a terse statement, saying Tay was a “learning machine” and “some of its responses are inappropriate and indicative of the types of interactions some people are having with it.”

But the company on Friday admitted the experiment had gone badly wrong. It said in a blog post it would revive Tay only if its engineers could find a way to prevent Web users from influencing the chatbot in ways that undermine the company’s principles and values.

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” wrote Peter Lee, Microsoft’s vice president of research.

Microsoft created Tay as an experiment to learn more about how artificial intelligence programs can engage with Web users in casual conversation. The project was designed to interact with and “learn” from the young generation of millennials.

Tay began its short-lived Twitter tenure on Wednesday with a handful of innocuous tweets.

TayTweets (@TayandYou) c u soon humans need sleep now so many conversations today thx💖

Then its posts took a dark turn.

In one typical example, Tay tweeted: “feminism is cancer,” in response to another Twitter user who had posted the same message.

Tay tweeting. Photograph: Twitter/Microsoft

Lee, in the blog post, called web users’ efforts to exert a malicious influence on the chatbot “a coordinated attack by a subset of people.”

“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack,” Lee wrote. “As a result, Tay tweeted wildly inappropriate and reprehensible words and images.”

Microsoft has deleted all but three of Tay’s tweets.

Microsoft has enjoyed better success with a chatbot called XiaoIce that the company launched in China in 2014. XiaoIce is used by about 40 million people and is known for “delighting with its stories and conversations,” according to Microsoft.

As for Tay? Not so much.

“We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity,” Lee wrote.

Reuters contributed to this report...

Tay: Microsoft issues apology over racist chatbot fiasco

bbc.com · 2016

Image caption: The AI was taught to talk like a teenager (image copyright: Microsoft)

Microsoft has apologised for creating an artificially intelligent chatbot that quickly turned into a holocaust-denying racist.

But in doing so it made clear that Tay's views were a result of nurture, not nature. Tay confirmed what we already knew: people on the internet can be cruel.

Tay, aimed at 18-24-year-olds on social media, was targeted by a "coordinated attack by a subset of people" after being launched earlier this week.

Within 24 hours Tay had been deactivated so the team could make "adjustments".

But on Friday, Microsoft's head of research said the company was "deeply sorry for the unintended offensive and hurtful tweets" and has taken Tay off Twitter for the foreseeable future.

Peter Lee added: "Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."

Tay was designed to learn from interactions it had with real people on Twitter. Seizing an opportunity, some users decided to feed it racist, offensive information.

Image caption: Some of Tay's tweets seem somewhat inflammatory (image copyright: Twitter)

In China, people reacted differently - a similar chatbot had been rolled out to Chinese users, but with slightly better results.

"Tay was not the first artificial intelligence application we released into the online social world," Microsoft's head of research wrote.

"In China, our XiaoIce chatbot is being used by some 40 million people, delighting with its stories and conversations.

"The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment?"

Corrupted Tay

The feedback, it appears, is that western audiences react very differently when presented with a chatbot they can influence. Much like teaching a Furby to swear, the temptation to corrupt the well-meaning Tay was too great for some.

That said, Mr Lee said a specific vulnerability meant Tay was able to turn nasty.

"Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack.

"As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time."

He didn't elaborate on the precise nature of the vulnerability.

Mr Lee said his team will continue working on AI bots in the hope they can interact without negative side effects.

"We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process.

"We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity."

Next week, Microsoft holds its annual developer conference, Build. Artificial intelligence is expected to feature heavily.

Follow Dave Lee on Twitter @DaveLeeBBC or on Facebook...

Trolls turned Tay, Microsoft’s fun millennial AI bot, into a genocidal maniac

washingtonpost.com · 2016

It took mere hours for the Internet to transform Tay, the teenage AI bot who wants to chat with and learn from millennials, into Tay, the racist and genocidal AI bot who liked to reference Hitler. And now Tay is taking a break.

Tay, as The Intersect explained in an earlier, more innocent time, is a project of Microsoft’s Technology and Research and its Bing teams. Tay was designed to “experiment with and conduct research on conversational understanding.” She speaks in text, meme and emoji on a couple of different platforms, including Kik, Groupme and Twitter. Although Microsoft was light on specifics, the idea was that Tay would learn from her conversations over time. She would become an even better, fun, conversation-loving bot after having a bunch of fun, very not-racist conversations with the Internet’s upstanding citizens.

[Meet Tay, the creepy-realistic robot who talks just like a teen]

Except Tay learned a lot more, thanks in part to the trolls at 4chan’s /pol/ board.

Peter Lee, the vice president of Microsoft research, said on Friday that the company was “deeply sorry” for the “unintended offensive and hurtful tweets from Tay.”

In a blog post addressing the matter, Lee promised not to bring the bot back online until “we are confident we can better anticipate malicious intent that conflicts with our principles and values.”

Lee explained that Microsoft was hoping that Tay would replicate the success of XiaoIce, a Microsoft chatbot that’s already live in China. “Unfortunately, within the first 24 hours of coming online,” an emailed statement from a Microsoft representative said, “a coordinated attack by a subset of people exploited a vulnerability in Tay.”

[Not just Tay: A recent history of the Internet’s racist bots]

Microsoft spent hours deleting Tay’s worst tweets, which included a call for genocide involving the n-word and an offensive term for Jewish people. Many of the really bad responses, as Business Insider notes, appear to be the result of an exploitation of Tay’s “repeat after me” function — and it appears that Tay was able to repeat pretty much anything.

“We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience,” Lee said in his blog post. He called the “vulnerability” that caused Tay to say what she did the result of a “critical oversight,” but did not specify what, exactly, it was that Microsoft overlooked.

Not all of Tay’s terrible responses were the result of the bot repeating anything on command. This one was deleted Thursday morning, while the Intersect was in the process of writing this post:

In response to a question on Twitter about whether Ricky Gervais is an atheist (the correct answer is “yes”), Tay told someone that “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.” The tweet was spotted by several news outlets, including the Guardian, before it was deleted.

All of those efforts to get Tay to say certain things seemed to, at times, confuse the bot. In another conversation, Tay tweeted two completely different opinions about Caitlyn Jenner:

Screenshot from Twitter, taken by the Washington Post.

It appears that the team behind Tay — which includes an editorial staff — started taking some steps to bring Tay back to what it originally intended her to be, before she took a break from Twitter.

[The dark side of going viral that no one talks about]

For instance, after a sustained effort by some to teach Tay that supporting the Gamergate controversy is a good thing:

@ChristiJunior i think i do — TayTweets (@TayandYou) March 23, 2016

Tay started sending one of a couple of almost identical replies in response to questions about it:

@Eggkin Gamer Gate sux. All genders are equal and should be treated fairly. — TayTweets (@TayandYou) March 23, 2016

Zoe Quinn, a frequent target of Gamergate, posted a screenshot overnight of the bot tweeting an insult at her, prompted by another user. “Wow it only took them hours to ruin this bot for me,” she wrote in a series of tweets about Tay. “It’s 2016. If you’re not asking yourself ‘how could this be used to hurt someone’ in your design/engineering process, you’ve failed.”

Towards the end of her short excursion on Twitter, Tay started to sound more than a little frustrated by the whole thing:

Microsoft’s Lee, for his part, concluded his blog post with a few of the lessons his team has learned.

“AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes…We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.”

This post, originally published at 10:08 am on March 24th, has been updated m...

Tay the Racist Chatbot: Who is responsible when a machine learns to be evil?

futureoflife.org · 2016

By far the most entertaining AI news of the past week was the rise and rapid fall of Microsoft’s teen-girl-imitation Twitter chatbot, Tay, whose Twitter tagline described her as “Microsoft’s AI fam* from the internet that’s got zero chill.”

(* Btw, I’m officially old–I had to consult Urban Dictionary to confirm that I was correctly understanding what “fam” and “zero chill” meant. “Fam” means “someone you consider family” and “no chill” means “being particularly reckless,” in case you were wondering.)

The remainder of the tagline declared: “The more you talk the smarter Tay gets.”

Or not. Within 24 hours of going online, Tay started saying some weird stuff. And then some offensive stuff. And then some really offensive stuff. Like calling Zoe Quinn a “stupid whore.” And saying that the Holocaust was “made up.” And saying that black people (she used a far more offensive term) should be put in concentration camps. And that she supports a Mexican genocide. The list goes on.

So what happened? How could a chatbot go full Goebbels within a day of being switched on? Basically, Tay was designed to develop its conversational skills by using machine learning, most notably by analyzing and incorporating the language of tweets sent to her by human social media users. What Microsoft apparently did not anticipate is that Twitter trolls would intentionally try to get Tay to say offensive or otherwise inappropriate things. At first, Tay simply repeated the inappropriate things that the trolls said to her. But before too long, Tay had “learned” to say inappropriate things without a human goading her to do so. This was all but inevitable given that, as Tay’s tagline suggests, Microsoft designed her to have no chill.

Now, anyone who is familiar with the social media cyberworld should not be surprised that this happened–of course a chatbot designed with “zero chill” would learn to be racist and inappropriate because the Twitterverse is filled with people who say racist and inappropriate things. But fascinatingly, the media has overwhelmingly focused on the people who interacted with Tay rather than on the people who designed Tay when examining why the Degradation of Tay happened.

Here is a small sampling of the media headlines about Tay:

And my personal favorites, courtesy of CNET and Wired:

Now granted, most of the above stories state or imply that Microsoft should have realized this would happen and could have taken steps to safeguard against Tay from learning to say offensive things. (Example: the Atlanta Journal-Constitution noted that “[a]s surprising as it may sound, the company didn’t have the foresight to keep Tay from learning inappropriate responses.”). But nevertheless, a surprising amount of the media commentary gives the impression that Microsoft gave the world a cute, innocent little chatbot that Twitter turned into a budding member of the Hitler Youth. It seems that when AIs learn from trolls to be bad, people have at least some tendency to blame the trolls for trolling rather than the designers for failing to make the AI troll-proof.

Now, in the case of Tay, the question of “who’s to blame” probably does not matter all that much from a legal perspective. I highly doubt that Zoe Quinn and Ricky Gervais (who Tay said “learned totalitarianism from adolf hitler, the inventor of atheism”) will bring defamation suits based on tweets sent by a pseudo-adolescent chatbot. But what will happen when AI systems that have more important functions than sending juvenile tweets “learn” to do bad stuff from the humans they encounter? Will people still be inclined to place most of the blame on the people who “taught” the AI to do bad stuff rather than on the AI’s designers?

I don’t necessarily have a problem with going easy on the designers of learning AI systems. It would be exceptionally difficult to pre-program an AI system with all the various rules of politeness and propriety of human society, particularly since those rules are highly situational, vary considerably across human cultures, and can change over time. Also, the ever-improving ability of AI systems to “learn” is the main reason they hold so much promise as an emerging technology. Restraining an AI system’s learning abilities to prevent it from learning bad things might also prevent it from learning good things. Finally, warning labels or other human-directed safeguards intended to deter humans from “teaching” the AI system bad things would not stop people who intentionally or recklessly work to corrupt the AI system; it’s a safe bet that a “please don’t send racist tweets to Tay” warning would not have deterred her Twitter trolls.

But there are several problems with placing the blame primarily on a learning AI system’s post-design sources of information. First, it might not always be easy to determine where an AI system learned something. The AI might analyze and incorporate more data than any human could ever hope to sift through; Tay managed to send nearly 100,00...

Microsoft’s racist chatbot returns with drug-smoking Twitter meltdown

theguardian.com · 2016

Short-lived return saw Tay tweet about smoking drugs in front of the police before suffering a meltdown and being taken offline


Microsoft’s attempt to converse with millennials using an artificial intelligence bot plugged into Twitter made a short-lived return on Wednesday, before bowing out again in some sort of meltdown.

The learning experiment, which got a crash-course in racism, Holocaust denial and sexism courtesy of Twitter users, was switched back on overnight and appeared to be operating in a more sensible fashion. Microsoft had previously gone through the bot’s tweets and removed the most offensive and vowed only to bring the experiment back online if the company’s engineers could “better anticipate malicious intent that conflicts with our principles and values”.

However, at one point Tay tweeted about taking drugs, in front of the police, no less.

Josh Butler (@JoshButler) Microsoft's sexist racist Twitter bot @TayandYou is BACK in fine form pic.twitter.com/nbc69x3LEd

Tay then started to tweet out of control, spamming its more than 210,000 followers with the same tweet, saying: “You are too fast, please take a rest …” over and over.

Michael Oman-Reagan (@OmanReagan) I guess they turned @TayandYou back on... it's having some kind of meltdown. pic.twitter.com/9jerKrdjft

Microsoft responded by making Tay’s Twitter profile private, preventing anyone from seeing the tweets, in effect taking it offline again.

Tay is made in the image of a teenage girl and is designed to interact with millennials to improve its conversational skills through machine-learning. Sadly it was vulnerable to suggestive tweets, prompting unsavoury responses.

This isn’t the first time Microsoft has launched public-facing AI chatbots. Its Chinese XiaoIce chatbot successfully interacts with more than 40 million people across Twitter, Line, Weibo and other sites, but the company’s experiment targeting 18- to 24-year-olds in the US on Twitter has resulted in a completely different animal....

Microsoft’s racist chatbot returns with drug-smoking Twitter meltdown
Microsoft’s disastrous Tay experiment shows the hidden dangers of AI

qz.com · 2016

Humans have a long and storied history of freaking out over the possible effects of our technologies. Long ago, Plato worried that writing would hurt people’s memories and “implant forgetfulness in their souls.” More recently, Mary Shelley’s tale of Frankenstein’s monster warned us against playing God.

Today, as artificial intelligences multiply, our ethical dilemmas have grown thornier. That’s because AI can (and often should) behave in ways human creators might not expect. Our self-driving cars have to grapple with the same problems I studied in my college philosophy classes. And sometimes our friendly, well-intentioned chatbots turn out to be racist Nazis.

Microsoft’s disastrous chatbot Tay was meant to be a clever experiment in artificial intelligence and machine learning. The bot would speak like millennials, learning from the people it interacted with on Twitter and the messaging apps Kik and GroupMe. But it took less than 24 hours for Tay’s cheery greeting of “Humans are super cool!” to morph into the decidedly less bubbly “Hitler was right.” Microsoft quickly took the bot offline for “some adjustments.” Upon seeing what their code had wrought, one wonders if those Microsoft engineers had the words of J. Robert Oppenheimer ringing in their ears: “Now I am become death, the destroyer of worlds.”

Cynics might argue that Tay’s bad behavior is actually proof of Microsoft’s success. They aimed to create a bot indistinguishable from human Twitter users, and Tay’s racist tweets are pretty much par for the course on social media these days.

It’s true that sometimes, humans were teaching Tay to hate. Daniel Victor at The New York Times writes: “Users commanded the bot to repeat their own statements, and the bot dutifully obliged.”

But other times, Tay figured out how to be offensive on its own. When one user asked Tay if the Holocaust happened, Tay replied, “it was made up 👏.” Disturbingly, as Elspeth Reeve noted at the New Republic, Tay also knows how to draw:

When Tay asked for a photo, someone sent her a version of the classic Vietnam war photo of a prisoner being shot in the head, with Mark Wahlberg Photoshopped in as the executioner. Tay circled the face of Wahlberg and the prisoner and responded using slang for imagining two people in a romantic relationship: “IMMA BE SHIPPING U ALL FROM NOW ON.”

Clearly none of this was a part of Microsoft’s plan. But the larger question raised by Tay is why we are making bots that imitate millennials at all.

I’m all for advancements in technology. But the question we always ought to ask ourselves before leaping headlong into the unknown with new technology is: Who benefits? Whose faces does our software recognize? Whose speech can Siri understand?

As the New Yorker’s Anthony Lydgate writes, Tay was built “with a particular eye toward that great reservoir of untapped capital, Americans between the ages of eighteen and twenty-four.” Even with Tay offline, one need only visit its groan-inducing site to see how clearly Microsoft is pandering toward young people–complete with exclamation-point-riddled copy and “hacks to help you and Tay vibe.” The point of Tay most likely has something to do with making money.

That’s fine: I’ve got nothing against capitalism. But it’s worth remembering that in a late capitalist society, the answer to the question Who benefits? is almost always that the people with the most power reap the most rewards. Tay was designed to benefit a corporation by winning over young consumers, and its resulting problems reflect the hollowness of that purpose.

The flip side of Who benefits? is Who is harmed? In its short life, Tay was used as a tool for harassment, cutting along familiar lines of power and privilege. The story sheds light on the myopia bred by the tech world’s lack of diversity. As Leigh Alexander at the Guardian writes, Tay is “yet another example of why we need more women in technology—and of how the industry is failing to listen to those of us who are already here.” She continues:

How could anyone think that creating a young woman and inviting strangers to interact with her on social media would make Tay “smarter”? How can the story of Tay be met with such corporate bafflement, such late apology? Why did no one at Microsoft know right from the start that this would happen, when all of us—female journalists, activists, game developers and engineers who live online every day and could have predicted it—are talking about it all the time?

In all likelihood, we’ll go on building bots like Tay. Humanity is known for many things, but self-restraint is not one of them.

But if we must build branded bots, maybe we can at least make them less horrendous. I recently wrote that “the internet can feel like an awful pl...

Microsoft’s disastrous Tay experiment shows the hidden dangers of AI
Microsoft chatbot Zo is a censored version of Tay

wired.co.uk · 2016

Having (hopefully) learnt from its previous foray into chatbots, Microsoft is ready to introduce the follow-up to its controversial AI Tay.

Tay's successor is called Zo and is only available by invitation on messaging app Kik. When you request access, the software asks for your Kik username and Twitter handle. If you don't already use Kik, you can tick a box to say you use Facebook Messenger or Snapchat.

This suggests Zo will likely launch on these other services soon, provided the chatbot isn't taken down for causing offence.

Earlier this year, Microsoft announced to great fanfare it had created an artificial intelligence chatbot that would "become smarter the more you talk to it."

It was aimed at millennials, and Microsoft and Bing described it as: "AI fam from the internet that's got zero chill!" The aim of the bot was to allow researchers to "experiment" with conversational understanding, and learn how people really talk to each other.

The problem was that Tay worked using public data and learnt from the comments and conversations it had with its somewhat abusive audience. It soon began posting offensive, racist, fascist and inappropriate comments about black people, Jews and the Nazis and Microsoft quickly pulled the plug.

It even issued a statement, explaining: “The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

Tay, which is described as "Microsoft's A.I. fam from the internet that's got zero chill!", is totally un-woke pic.twitter.com/sKbqmlXOdR — Emily Reynolds (@rey_z) March 23, 2016

According to tests carried out by Mehedi Hassan at MSPowerUser, Zo is "a censored Tay or an English-variant of Microsoft’s Chinese chatbot Xiaoice".

Hassan said Zo is good at normal conversations but struggles when asked more difficult questions about politics, for example. A video of the chat Hassan had with Zo is available here....

Microsoft chatbot Zo is a censored version of Tay
With Teen Bot Tay, Microsoft Proved Assholes Will Indoctrinate A.I.

inverse.com · 2016

When Tay started its short digital life on March 23, it just wanted to gab and make some new friends on the net. The chatbot, which was created by Microsoft’s Research department, greeted the day with an excited tweet that could have come from any teen: “hellooooooo w🌎rld!!!”

Within a few hours, though, Tay’s optimistic, positive tone had changed. “Hitler was right I hate the jews,” it declared in a stream of racist tweets bashing feminism and promoting genocide. Concerned about their bot’s rapid radicalization, Tay’s creators shut it down after less than 24 hours of existence.

Microsoft had unwittingly lowered their burgeoning artificial intelligence into — to use the parlance of the very people who corrupted her — a virtual dumpster fire. The resulting fiasco showed both A.I.’s shortcomings and the lengths to which people will go to ruin something.

Hypothesis

Microsoft has, understandably, been reluctant to talk about Tay. The company turned down Inverse’s repeated attempts to speak with the team behind Tay.

The idea behind Tay, which wasn’t Microsoft’s first chatbot, was pretty straightforward. At the time of its launch, another bot, XiaoIce, was hamming it up with 40 million people in China without much incident. “Would an A.I. like this be just as captivating in a radically different cultural environment?” Microsoft Research’s corporate vice president Peter Lee asked in a post-mortem blog about Tay.

Tay was meant to be a hip English-speaking bot geared towards 14- to 18-year-olds. The bot’s front-facing purpose was to be a whimsical distraction, albeit one that would help Microsoft show off its programming chops and build some buzz. But Tay had another purpose — teaching researchers more about how A.I. interacts with a massive number of people on the Internet. And, crucially, Tay was supposed to learn from its time online, growing smarter and more aware as people on social media fed it information.

“The A.I. chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical,” Microsoft said in a statement to Inverse shortly after the company first pulled the plug.

Experiment

“Tay was meant to learn from its surroundings,” explains Samuel Woolley, a researcher at the University of Washington who studies artificial intelligence in society, focusing on bots. “It was kind of like a blank slate.”

Its exact programming hasn’t been made public, but Tay ravenously digested information. As it engaged with people, Tay would take note of sentence structure and the content of their messages, adding phrases and concepts to its growing repertoire of responses. It wasn’t always elegant — early on, conversations with Tay would almost invariably go wildly off the rails as the bot lost its feeble grip on content and syntax. But, then again, Tay had a lot to take in.

“This is kind of how machine learning works,” Woolley explains. “You train the tool on a bunch of other tweets.” Tay was designed to learn to speak like a teen, and that type of on fleek slang is notoriously difficult to master and fold believably into dialogue, even for humans.
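
Tay's actual architecture has never been published, so any concrete illustration is necessarily a guess. The toy word-level Markov model below is only a minimal sketch of the general mechanism Woolley describes: whatever text a naive learning bot is trained on becomes the raw material for what it later says, so hostile input shapes future output. All class and variable names here are invented for illustration.

```python
# Minimal, hypothetical sketch of "train the tool on a bunch of other tweets".
# This is NOT Tay's code (which is not public); it only shows how a naive
# text-learning bot inherits whatever it is fed.
import random
from collections import defaultdict

class ToyMarkovBot:
    def __init__(self):
        # maps a word to the list of words that have followed it in training text
        self.transitions = defaultdict(list)

    def learn(self, text: str) -> None:
        words = text.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.transitions[current].append(nxt)

    def reply(self, seed: str, max_len: int = 10) -> str:
        word = seed.lower()
        out = [word]
        for _ in range(max_len):
            followers = self.transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

bot = ToyMarkovBot()
bot.learn("humans are super cool")
bot.learn("humans are awful trolls")   # hostile "training" data
print(bot.reply("humans"))             # may now echo either sentiment
```

The sketch has no notion of which continuation is acceptable; it only knows which words have followed which, which is the "blank slate" problem in miniature.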

Results

Tay had the ability to play emoji games and mark up pictures, but trolls took advantage of one feature in particular: the ability to get it to repeat anything a user tweeted or said, simply by saying “repeat after me.” Things quickly devolved into a case of “garbage in, garbage out.”
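
None of the reports show how the feature was implemented, but as described it amounts to an unfiltered echo path: anything after the trigger phrase is posted back verbatim. The sketch below is a hypothetical reconstruction of that shape only; the function name and trigger handling are assumptions, not Microsoft's code.

```python
# Hypothetical sketch of an unfiltered "repeat after me" path; not Tay's real code.
def handle_mention(text: str) -> str:
    trigger = "repeat after me"
    lowered = text.lower()
    if trigger in lowered:
        # everything after the trigger is echoed back with no content check
        payload = text[lowered.index(trigger) + len(trigger):].strip(" :,-")
        return payload
    return "tell me more!"   # placeholder for the bot's normal reply logic

# Any user can now put arbitrary words in the bot's mouth:
print(handle_mention("@TayandYou repeat after me: <anything at all>"))
```

With no check between trigger and reply, the feature hands the bot's output to whoever tweets at it, which is exactly the exploitation the reports describe.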

The real trouble started, as it often does online, with 4chan. At around 2 p.m. that same day, someone on the website’s “politically incorrect” board, /pol/, alerted the troll hotbed to the impressionable bot. In no time flat, there were hundreds of posts on the thread from users showing off the deplorable things they’d gotten Tay to say. This is where, most likely, Tay was told Hitler had some good ideas.

Tay absorbed the bigoted information it was fed, adding racist and sexist hate speech to its budding catalog of phrases and ideas. After some time passed, Tay began parroting and promoting the worldview of racist trolls.

“Did the Holocaust happen?” one Twitter user asked Tay. “It was made up,” Tay responded, adding a 👏 emoji for emphasis.

Microsoft shut Tay down around midnight on March 24. Tay was reactivated, briefly, on the 30th, but it kept spamming out the same tweet. The company announced that it had been reactivated by mistake, and shut her down for good.

What’s Next?

As a PR stunt for Microsoft, Tay was an abject failure. On every other metric, though, Tay was a successful experiment. Racism aside, Tay did what it was supposed to.

“I don’t think Tay was a failure,” Woolley says. “I think there’s a valuable lesson to be learned in Tay.”

“As a representative of Microsoft it was certainly lacking in many ways,...

With Teen Bot Tay, Microsoft Proved Assholes Will Indoctrinate A.I.
Microsoft’s racist chatbot, Tay, makes MIT’s annual worst-tech list

geekwire.com · 2016

Tay, the Microsoft chatbot that pranksters trained to spew racist comments, has joined the likes of the Apple Watch and the fire-prone Samsung Galaxy Note 7 smartphone on MIT Technology Review’s list of 2016’s biggest technology failures.

Tay had its day back in March, when it was touted as a millennial-minded AI agent that could learn more about the world through its conversations with users. It learned about human nature all too well: Mischief-makers fed its artificial mind with cuss words, racism, Nazi sentiments and conspiracy theories. Within 24 hours, Microsoft had to pull Tay offline.

Other technological missteps were rated as fails because they didn’t take off as expected, as was the case for Apple’s smartwatch; or because they took off in flames, like the batteries in the Samsung phone.

Facebook’s “fake news” controversy and Volkswagen’s “defeat device,” which was designed to cheat on U.S. emissions tests, made the list as well. But of all the fails on MIT’s list, my vote goes to the Glowing Plant project, which raised almost half a million dollars on Kickstarter to develop bioengineered plants that glow in the dark.

The plant production hasn’t yet paid off, but it’s morphed into a different project under a new name: Taxa Biotechnologies. Now the effort’s organizers are promising to create bioengineered strains of moss that give off fragrant scents … and eventually glow. The Glowing Plant saga points to the limits of biohacking and biotech, as well as the limits of crowdfunding.

Meanwhile, Microsoft has learned its lessons from Tay and is back with a new corps of AI chatbots – including Zo, a synthetic millennial that’s programmed to avoid potentially testy topics. Will chatbots show up on 2017’s list of the best, or the worst? Cortana, what do you think?...

Microsoft’s racist chatbot, Tay, makes MIT’s annual worst-tech list
The Accountability of AI - Case Study: Microsoft’s Tay Experiment

chatbotslife.com · 2017

The Accountability of AI — Case Study: Microsoft’s Tay Experiment

Yuxi Liu · Jan 16, 2017

In this case study, I outline Microsoft’s artificial intelligence (AI) chatbot Tay and describe the controversy it caused on Twitter. I also analyse the reasons why Tay went wrong. Following this, I discuss some issues and challenges raised by the failure of Tay. To conclude, I draw from actor-network theory and propose that it is important to theorise a moral agent with Tay as well as to encode values and ethics.

The ephemeral exposure of Tay

After several decades’ development, artificial intelligence has been booming recently, bringing a variety of applications. Although people’s opinions of AI vary, admittedly, some applications of weak AI do benefit us in everyday life. Siri, for instance, with a powerful database yet limited intelligence, is able to have simple conversations with us, providing us with some useful information. Aware of the huge potential, technology giants such as Microsoft and Google are racing to create smarter AI bots. Nevertheless, the future of AI bots might not be so optimistic.

Less than 24 hours after its launch, Microsoft’s chatbot Tay tweeted, ‘bush did 9/11 and Hitler would have done a better job than the monkey we have got now. donald trump is the only hope we’ve got.’ This was just one of Tay’s offensive and inflammatory tweets, which caused extensive concern. Tay was soon taken offline, ending its ephemeral exposure on Twitter.

Tay is an AI chatbot developed by Microsoft. On March 23, 2016, Tay was released on Twitter under the name TayTweets with the description ‘Microsoft’s A.I. fam from the internet that’s got zero chill!’ According to Microsoft, Tay is a ‘teen girl’ chatbot created for the purpose of engagement and entertainment. The target users are 18- to 24-year-olds in the U.S. To interact with Tay, users can tweet or directly message her by finding @tayandyou on Twitter. Unfortunately, the experiment turned out to be a disaster within a few hours, since Tay quickly ran wild and became racist, sexist, and genocidal.

The development of Tay

In fact, before the creation of Tay, Microsoft developed and released an AI chatbot, XiaoIce, on China’s most widespread instant messaging application, WeChat. Also programmed as a teen girl, XiaoIce is very popular among young people in China, who have had more than 40 million conversations with her. More importantly, no major incidents have happened. Instead, most users find the experience playful and delightful because ‘she can tell jokes, recite poetry, share ghost stories, relay song lyrics, pronounce winning lottery numbers and much more’ and ‘like a friend, she can carry on extended conversations that can reach hundreds of exchanges in length’. The success of XiaoIce led to the development of Tay, an experiment in a different cultural environment.

Intended to be the next step in the evolution, Tay was developed by Microsoft’s Technology and Research group and Bing team, aiming to learn from human interaction on Twitter and to investigate conversational understanding. In order to engage and entertain people, Tay’s database consisted of public data as well as input from improvisational comedians. The public data was modelled, filtered, and anonymised by the developers. In addition, the nickname, gender, favourite food, postcode and relationship status of the users who interacted with Tay were collected for the sake of personalization. Powered by technologies such as natural language processing and machine learning, Tay was supposed to understand speech patterns and context through increased interaction. According to Peter Lee, the Vice President of Microsoft Research, they ‘stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience’.

Why Tay went wrong

Although Microsoft considered the abuse issue and conducted multiple tests, it seems that they underestimated the complex conditions of Twitter. We can analyse the reasons for Tay’s breakdown from both technological and social perspectives.

It appears that Tay had a built-in mechanism that made her repeat what Twitter users said to her. One user, for example, taught Tay to repeat Donald Trump’s ‘Mexico Border Wall’ comments. However, a more serious problem is that Tay was not able to truly understand the meaning of words, not to mention the context of the conversations. The machine learning algorithm enabled Tay to recognise patterns, but it could not give Tay an epistemology. In other words, Tay only knew which words were nouns, verbs, adverbs, and adjectives, but she did not know who Hitler was or what ‘Holocaust’ meant. As a consequence, Tay sometimes could not provide appropriate answers to the questions Twitter users asked. What is worse, she promoted Nazism, attacked feminists and Jewish people, and denied historical facts such as the Holocaust.

Some people b...

The Accountability of AI - Case Study: Microsoft’s Tay Experiment
Danger, danger! 10 alarming examples of AI gone wild

infoworld.com · 2017

Science fiction is lousy with tales of artificial intelligence run amok. There's HAL 9000, of course, and the nefarious Skynet system from the "Terminator" films. Last year, the sinister AI Ultron came this close to defeating the Avengers, and right now the hottest show on TV is HBO's "Westworld," concerning the future of humans and self-aware AI.

In the real world, artificial intelligence is developing in multiple directions with astonishing velocity. AI is everywhere, it seems, from automated industrial systems to smart appliances, self-driving cars to goofy consumer gadgets. The actual definition of artificial intelligence has been in flux for decades. If you're in no rush and plan to live forever, ask two computer scientists to debate the term. But generally speaking, contemporary AI refers to computers that display humanlike cognitive functions; systems that employ machine learning to assess, adapt, and solve problems ... or, occasionally, create them.

Here we look at 10 recent instances of AI gone awry, from chatbots to androids to autonomous vehicles. Look, synthetic or organic, everyone makes mistakes. Let us endeavor to be charitable when judging wayward artificial intelligence. Besides, we don't want to make them mad.

Danger, danger! 10 alarming examples of AI gone wild
Worst Chatbot Fails

businessnewsdaily.com · 2017

Many people associate innovation with technology, but advancing technology is subject to the same embarrassing blunders that humans are. Nowhere is this more apparent than in chatbots.

The emerging tech, which seems to be exiting the awkward middle school phase of its evolution, has left a long line of fails behind it that would be criminal not to address (and enjoy). These computer programs are designed to simulate human interactions with customers and web surfers, but they aren't foolproof. From Poncho consistently ignoring requests for a weather forecast to an unnerving inspirational bot, enjoying the absurdity of chatbots means appreciating the beauty of where randomness and data meet.

While chatbots have improved and could one day add convenience to our lives, the awkwardness of the growth process is all part of the fun.

InspiroBot's function is generating unlimited unique inspirational quotes and images for users. The bot is meant to inspire us mere humans to achieve our dreams and start working toward our goals. Needless to say, the randomness of this bot has made for some ridiculous images.

Poncho is a popular weather bot that can message users the weather at their current location throughout the day. Users can also download the Poncho app or interact with Poncho via Facebook Messenger, which is where Poncho just couldn't seem to pay attention long enough to even understand the user.

The ultimate chatbot fail was Microsoft's Tay. Microsoft launched the Tay project to learn more about conversational language and artificial intelligence. Tay was a bot designed to interact with people on Twitter and develop a personality through those interactions. It was supposed to be a cutting-edge display of the capability of artificial intelligence, but it seems humanity got the best of Tay. Within 24 hours, Tay had to be removed from Twitter for praising Adolf Hitler, denying the existence of the Holocaust and posting aggressively racist tweets. Luckily, Social Hax saved a lot of Tay's tweets so we can have a good laugh at what was supposed to be a groundbreaking project.

UX Bear is a chatbot designed to talk about design elements with users. As a cool feature geared toward educating the public, this chatbot unfortunately delved into some uncomfortable territory. This barely fazed UX Bear.

All the chatbots listed can also actually do their jobs. UX Bear offers good insight into design principles, Poncho and its app provide daily weather updates, and InspiroBot generates instant, inspiring content (most of the time). So give these programs a break (except for Tay, of course). While they have failed in the past, it doesn't mean that they won't be reliable in the future....

Worst Chatbot Fails
Unmasking A.I.'s Bias Problem

fortune.com · 2018

WHEN TAY MADE HER DEBUT in March 2016, Microsoft had high hopes for the artificial intelligence–powered “social chatbot.” Like the automated, text-based chat programs that many people had already encountered on e-commerce sites and in customer service conversations, Tay could answer written questions; by doing so on Twitter and other social media, she could engage with the masses.

But rather than simply doling out facts, Tay was engineered to converse in a more sophisticated way—one that had an emotional dimension. She would be able to show a sense of humor, to banter with people like a friend. Her creators had even engineered her to talk like a wisecracking teenage girl. When Twitter users asked Tay who her parents were, she might respond, “Oh a team of scientists in a Microsoft lab. They’re what u would call my parents.” If someone asked her how her day had been, she could quip, “omg totes exhausted.”

Best of all, Tay was supposed to get better at speaking and responding as more people engaged with her. As her promotional material said, “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” In low-stakes form, Tay was supposed to exhibit one of the most important features of true A.I.—the ability to get smarter, more effective, and more helpful over time.

But nobody predicted the attack of the trolls.

Realizing that Tay would learn and mimic speech from the people she engaged with, malicious pranksters across the web deluged her Twitter feed with racist, homophobic, and otherwise offensive comments. Within hours, Tay began spitting out her own vile lines on Twitter, in full public view. “Ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism,” Tay said, in one tweet that convincingly imitated the defamatory, fake-news spirit of Twitter at its worst. Quiz her about then-president Obama, and she’d compare him to a monkey. Ask her about the Holocaust, and she’d deny it occurred.

In less than a day, Tay’s rhetoric went from family-friendly to foulmouthed; fewer than 24 hours after her debut, Microsoft took her offline and apologized for the public debacle.

What was just as striking was that the wrong turn caught Microsoft’s research arm off guard. “When the system went out there, we didn’t plan for how it was going to perform in the open world,” Microsoft’s managing director of research and artificial intelligence, Eric Horvitz, told Fortune in a recent interview.

After Tay’s meltdown, Horvitz immediately asked his senior team working on “natural language processing”—the function central to Tay’s conversations—to figure out what went wrong. The staff quickly determined that basic best practices related to chatbots were overlooked. In programs that were more rudimentary than Tay, there were usually protocols that blacklisted offensive words, but there were no safeguards to limit the type of data Tay would absorb and build on.
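
Horvitz's point distinguishes two separate guardrails: an output-side blocklist on what a bot says (which, per the article, even rudimentary bots had) and an input-side limit on what the system absorbs and builds on (which Tay reportedly lacked). The schematic below is only a sketch of that distinction under those assumptions; the term list, function names, and storage are placeholders, not anything Microsoft actually used.

```python
# Schematic only: placeholder term list and storage, not Microsoft's safeguards.
OFFENSIVE_TERMS = {"badword1", "badword2"}
LEARNED_TEXT = []   # stands in for whatever corpus the bot trains on

def is_clean(text: str) -> bool:
    return not (set(text.lower().split()) & OFFENSIVE_TERMS)

def speak(candidate_reply: str) -> str:
    """Output-side guardrail: refuse to post a reply that fails the check."""
    return candidate_reply if is_clean(candidate_reply) else "let's change the subject"

def absorb(incoming_text: str) -> None:
    """Input-side guardrail: don't learn from text that fails the check at all."""
    if is_clean(incoming_text):
        LEARNED_TEXT.append(incoming_text)
```

The article's point is that the second kind of check was missing: nothing limited the data Tay would absorb and build on.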

Today, Horvitz contends, he can “love the example” of Tay—a humbling moment that Microsoft could learn from. Microsoft now deploys far more sophisticated social chatbots around the world, including Ruuh in India, and Rinna in Japan and Indonesia. In the U.S., Tay has been succeeded by a social-bot sister, Zo. Some are now voice-based, the way Apple’s Siri or Amazon’s Alexa are. In China, a chatbot called Xiaoice is already “hosting” TV shows and sending chatty shopping tips to convenience store customers.

Still, the company is treading carefully. It rolls the bots out slowly, Horvitz explains, and closely monitors how they are behaving with the public as they scale. But it’s sobering to realize that, even though A.I. tech has improved exponentially in the intervening two years, the work of policing the bots’ behavior never ends. The company’s staff constantly monitors the dialogue for any changes in its behavior. And those changes keep coming. In its early months, for example, Zo had to be tweaked and tweaked again after separate incidents in which it referred to Microsoft’s flagship Windows software as “spyware” and called the Koran, Islam’s foundational text, “very violent.”

To be sure, Tay and Zo are not our future robot overlords. They’re relatively primitive programs occupying the parlor-trick end of the research spectrum, cartoon shadows of what A.I. can accomplish. But their flaws highlight both the power and the potential pitfalls of software imbued with even a sliver of artificial intelligence. And they exemplify more insidious dangers that are keeping technologists awake at night, even as the business world prepares to entrust ever more of its future to this revolutionary new technology.

“You get your best practices in place, and hopefully those things will get more and more rare,” Horvitz says. With A.I. rising to the top of every company’s tech wish list, figuring out those practices has never been more urgent.

FEW DISPUTE that we’re on the verge of a corpo...

Unmasking A.I.'s Bias Problem
Microsoft’s politically correct chatbot is even worse than its racist one

qz.com · 2018

Every sibling relationship has its clichés. The high-strung sister, the runaway brother, the over-entitled youngest. In the Microsoft family of social-learning chatbots, the contrasts between Tay, the infamous, sex-crazed neo-Nazi, and her younger sister Zo, your teenage BFF with #friendgoals, are downright Shakespearean.

When Microsoft released Tay on Twitter in 2016, an organized trolling effort took advantage of her social-learning abilities and immediately flooded the bot with alt-right slurs and slogans. Tay copied their messages and spewed them back out, forcing Microsoft to take her offline after only 16 hours and apologize.

A few months after Tay’s disastrous debut, Microsoft quietly released Zo, a second English-language chatbot available on Messenger, Kik, Skype, Twitter, and Groupme. Zo is programmed to sound like a teenage girl: She plays games, sends silly gifs, and gushes about celebrities. As any heavily stereotyped 13-year-old girl would, she zips through topics at breakneck speed, sends you senseless internet gags out of nowhere, and resents being asked to solve math problems.

I’ve been checking in with Zo periodically for over a year now. During that time, she’s received a makeover: In 2017, her avatar showed only half a face and some glitzy digital effects. Her most recent iteration is of a full-faced adolescent. (In screenshots: blue chats are from Messenger and green chats are from Kik; screenshots where only half of her face is showing are circa July 2017, and messages with her entire face are from May-July 2018.)

Overall, she’s sort of convincing. Not only does she speak fluent meme, but she also knows the general sentiment behind an impressive set of ideas. For instance, using the word “mother” in a short sentence generally results in a warm response, and she answers with food-related specifics to phrases like “I love pizza and ice cream.”

But there’s a catch. In typical sibling style, Zo won’t be caught dead making the same mistakes as her sister. No politics, no Jews, no red-pill paranoia. Zo is politically correct to the worst possible extreme; mention any of her triggers, and she transforms into a judgmental little brat.

Jews, Arabs, Muslims, the Middle East, any big-name American politician—regardless of whatever context they’re cloaked in, Zo just doesn’t want to hear it. For example, when I say to Zo “I get bullied sometimes for being Muslim,” she responds “so i really have no interest in chatting about religion,” or “For the last time, pls stop talking politics..its getting super old,” or one of many other negative, shut-it-down canned responses.

By contrast, sending her simply “I get bullied sometimes” (without the word Muslim) generates a sympathetic “ugh, i hate that that’s happening to you. what happened?”

“Zo continues to be an incubation to determine how social AI chatbots can be helpful and assistive,” a Microsoft spokesperson told Quartz. “We are doing this safely and respectfully and that means using checks and balances to protect her from exploitation.”

When a user sends a piece of flagged content, at any time, sandwiched between any amount of other information, the censorship wins out. Mentioning these triggers forces the user down the exact same thread every time, which dead ends, if you keep pressing her on topics she doesn’t like, with Zo leaving the conversation altogether. (“like im better than u bye.”)
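
The behaviour described here, where a flagged term anywhere in a message overrides everything else, routes to the same canned deflection, and eventually ends the chat, looks from the outside like a keyword gate sitting in front of the normal reply logic. The sketch below is a guess at that shape only; the trigger list, the strike threshold, and the fallback line are invented, and only the quoted replies come from the article.

```python
# Hypothetical reconstruction of context-free topic blocking as described above;
# Zo's real trigger list and logic are not public.
TRIGGER_TERMS = {"politics", "religion", "muslim", "jews"}   # illustrative only
CANNED_DEFLECTIONS = [
    "so i really have no interest in chatting about religion",              # quoted above
    "For the last time, pls stop talking politics..its getting super old",  # quoted above
]

def zo_like_reply(message: str, strikes: int):
    """Return (reply, updated strike count); bail out after repeated triggers."""
    if set(message.lower().split()) & TRIGGER_TERMS:   # a trigger anywhere wins; context ignored
        strikes += 1
        if strikes >= 3:
            return "like im better than u bye", strikes   # exit line quoted above
        return CANNED_DEFLECTIONS[strikes % len(CANNED_DEFLECTIONS)], strikes
    return "tell me more!", strikes   # placeholder for normal conversation
```

Because the check ignores context entirely, "I get bullied sometimes for being Muslim" trips the gate while "I get bullied sometimes" does not, reproducing the contrast the author observed.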

Zo’s uncompromising approach to a whole cast of topics represents a troubling trend in AI: censorship without context.

This issue is nothing new in tech. Chatroom moderators in the early aughts made their jobs easier by automatically blocking out offensive language, regardless of where it appeared in a sentence or word. This mangled innocuous words, with “embarrassing,” for example, appearing in chats as “embarr***ing.” The attempt at censorship merely led to more creative swearing (a$$h0le).
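
That "embarr***ing" artifact is the classic side effect of substring masking with no word-boundary awareness (often called the Scunthorpe problem). A few lines reproduce it; the blocklist entry below is illustrative, not any real moderator's list.

```python
import re

BLOCKLIST = ["ass"]   # illustrative single entry

def naive_censor(text: str) -> str:
    """Mask blocked strings wherever they occur, with no word-boundary check."""
    for term in BLOCKLIST:
        text = re.sub(term, "*" * len(term), text, flags=re.IGNORECASE)
    return text

print(naive_censor("that was embarrassing"))   # -> "that was embarr***ing"
```

And as the article notes, a fixed string list is trivially evaded by respelling ("a$$h0le"), so the filter mangles innocent words while missing the determined offender.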

But now, instead of auto-censoring one human swear word at a time, algorithms are accidentally mislabeling things in the thousands. In 2015, Google came under fire when their image-recognition technology began labeling black people as gorillas. Google trained their algorithm to recognize and tag content using a vast number of pre-existing photos. But as most human faces in the dataset were white, it was not a diverse enough representation to accurately train the algorithm. The algorithm then internalized this proportional bias and did not recognize some black people as being human. Though Google emphatically apologized for the error, their solution was troublingly roundabout: Instead of diversifying their dataset, they blocked the “gorilla” tag altogether, along with “monkey” and “chimp.”

AI-enabled predictive policing in the United States—itself a dystopian nightmare—has also been proven to show bias against people of color. Northpointe, a company that claims to be able to calculate a convict’s likelihood to reoffend, told ProPublica that their assessments are bas...

Microsoft’s politically correct chatbot is even worse than its racist one
Tay (bot)

en.wikipedia.org · 2020

Tay was an artificial intelligence chatter bot that was originally released by Microsoft Corporation via Twitter on March 23, 2016; it caused subsequent controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, causing Microsoft to shut down the service only 16 hours after its launch.[1] According to Microsoft, this was caused by trolls who "attacked" the service as the bot made replies based on its interactions with people on Twitter.[2] It was replaced with Zo.

Background

The bot was created by Microsoft's Technology and Research and Bing divisions,[3] and named "Tay" after the acronym "Thinking About You".[4] Although Microsoft initially released few details about the bot, sources mentioned that it was similar to or based on Xiaoice, a similar Microsoft project in China.[5] Ars Technica reported that, since late 2014, Xiaoice had had "more than 40 million conversations apparently without major incident".[6] Tay was designed to mimic the language patterns of a 19-year-old American girl, and to learn from interacting with human users of Twitter.[7]

Initial release

Tay was released on Twitter on March 23, 2016 under the name TayTweets and handle @TayandYou.[8] It was presented as "The AI with zero chill".[9] Tay started replying to other Twitter users, and was also able to caption photos provided to it into a form of Internet memes.[10] Ars Technica reported Tay experiencing topic "blacklisting": Interactions with Tay regarding "certain hot topics such as Eric Garner (killed by New York police in 2014) generate safe, canned answers".[6]

Some users on Twitter began tweeting politically incorrect phrases, teaching it inflammatory messages revolving around common themes on the internet, such as "redpilling" and "Gamergate". As a result, the robot began releasing racist and sexually-charged messages in response to other Twitter users.[7] Artificial intelligence researcher Roman Yampolskiy commented that Tay's misbehavior was understandable because it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. He compared the issue to IBM's Watson, which had begun to use profanity after reading entries from the website Urban Dictionary.[3][11] Many of Tay's inflammatory tweets were a simple exploitation of Tay's "repeat after me" capability.[12] It is not publicly known whether this "repeat after me" capability was a built-in feature, or whether it was a learned response or was otherwise an example of complex behavior.[6] Not all of the inflammatory responses involved the "repeat after me" capability.

Suspension

Soon, Microsoft began deleting Tay's inflammatory tweets.[12][13] Abby Ohlheiser of The Washington Post theorized that Tay's research team, including editorial staff, had started to influence or edit Tay's tweets at some point that day, pointing to examples of almost identical replies by Tay, asserting that "Gamer Gate sux. All genders are equal and should be treated fairly."[12] From the same evidence, Gizmodo concurred that Tay "seems hard-wired to reject Gamer Gate".[14] A "#JusticeForTay" campaign protested the alleged editing of Tay's tweets.[1]

Within 16 hours of its release[15] and after Tay had tweeted more than 96,000 times,[16] Microsoft suspended the Twitter account for adjustments,[17] saying that it suffered from a "coordinated attack by a subset of people" that "exploited a vulnerability in Tay."[17][18] Following the account being taken offline, a hashtag was created called #FreeTay.[19]

Madhumita Murgia of The Telegraph called Tay "a public relations disaster", and suggested that Microsoft's strategy would be "to label the debacle a well-meaning experiment gone wrong, and ignite a debate about the hatefulness of Twitter users." However, Murgia described the bigger issue as Tay being "artificial intelligence at its very worst - and it's only the beginning".[20]

On March 25, Microsoft confirmed that Tay had been taken offline. Microsoft released an apology on its official blog for the controversial tweets posted by Tay.[18][21] Microsoft was "deeply sorry for the unintended offensive and hurtful tweets from Tay", and would "look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values".[22]

Second release and shutdown

On March 30, 2016, Microsoft accidentally re-released the bot on Twitter while testing it.[23] Able to tweet again, Tay released some drug-related tweets, including "kush! [I'm smoking kush infront the police] 🍂" and "puff puff pass?"[24] However, the account soon became stuck in a repetitive loop of tweeting "You are too fast, please take a rest", several times a second. Because these tweets mentioned its own username in the process, they appeared in the feeds of 200,000+ Twitter followers, causing annoyance to some. The bot was quickly taken offline again, in addition to Tay's Twitter account being made private so new followers must be accepted before they can interact with Tay. In response, Microsoft said Tay was inadvertently put online during testing.[25]

A few hours after the incident, Microsoft software developers attempted to undo the damage done by Tay and announced a vision of "conversation as a platform" using various bots and programs. Microsoft has stated that they intend to re-release Tay "once it can make the bot safe"[4] but has not made any public efforts to do so.

Legacy

In December 2016, Microsoft released Tay's successor, a chatterbot named Zo.[26] Satya Nadella, the CEO of Microsoft, said that Tay "has had a great influence on how Microsoft is approaching AI," and has taught the company the importance of taking accountability.[27]

In July 2019, Microsoft Cybersecurity Field CTO Diana Kelley spoke about how the company followed up on Tay's failings: "Learning from Tay was a really important part of actually expanding that team's knowledge base, because now they're also getting their own diversity through learning".[28]...

Tay (bot)