AI Incident Database

Report 923

Associated Incidents

Incident 628 Report
Microsoft's TayBot Allegedly Posts Racist, Sexist, and Anti-Semitic Content to Twitter

Microsoft's artificial Twitter bot stunt backfires as trolls teach it racist statements
thedrum.com · 2016

Microsoft unveiled Twitter artificial intelligence bot @TayandYou yesterday in a bid to connect with millennials and "experiment" with conversational understanding.

Billed as 'AI fam from the internet that's got zero chill!', Tay was meant to engage with her peers and help the tech giant explore its cognitive learning abilities through "playful conversation".

"The more you chat with Tay the smarter she gets," said Microsoft, and things started off fairly innocently.

The stunt, however, took an unexpected turn when Tay's verified Twitter account began issuing a series of inflammatory statements after being targeted by Twitter trolls.

The conversational learning curve saw the bot tweet posts from her verified account mentioning Hitler, 9/11 and feminism, some of which (including the below) have now been deleted.

However, many of the bot's offensive tweets remain undeleted, including one in which it says Donald Trump "gets the job done."

Microsoft noted in its privacy statement for the project that Tay uses a combination of AI and editorial written by staff, including comedians, to generate responses, alongside relevant publicly available data that has been anonymised and filtered.

Things appear to have gone wrong for Tay because it was repeating fellow Twitter users' inflammatory statements, and Microsoft seems to have failed to consider the impact trolls could have on the experiment before it launched. The Drum has reached out to the company for comment on this process. Many users pointed out that the ease with which Tay was manipulated revealed the pitfalls of machine learning.

The bot retreated from Twitter at 4.20am GMT this morning, saying it "needed sleep".

Microsoft is not the only brand to have its campaign hijacked this week: on Sunday, a public initiative to name a new RRS ship threw up an unexpected frontrunner when online voters made 'Boaty McBoatface' the lead contender.
