AI Incident Database

Report 737

Associated Incidents

Incident 41 · 27 Reports
All Image Captions Produced are Violent

Scientists have created a murder-obsessed 'psychopath' AI called Norman -- and it learned everything it knows from Reddit
businessinsider.com.au · 2018

Researchers at MIT have programmed an AI using exclusively violent and gruesome content from Reddit.

They called it “Norman.”

As a result, Norman only sees death in everything.

This isn’t the first time an AI has been turned dark by the internet – it happened to Microsoft’s “Tay” too.

Some people fear Artificial Intelligence, maybe because they have seen too many films like “Terminator” and “I, Robot” where machines rise against humanity, or perhaps because they spend too much time thinking about Roko’s Basilisk.

As it turns out, it is possible to create an AI that is obsessed with murder.

That’s what scientists Pinar Yanardag, Manuel Cebrian, and Iyad Rahwan did at the Massachusetts Institute of Technology when they programmed an AI algorithm by only exposing it to gruesome and violent content on Reddit, then called it “Norman.”

Norman was named after the character of Norman Bates from “Psycho,” and “represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms,” according to MIT.

The scientists tested Norman to see how it would respond to inkblot tests – the ambiguous ink pictures psychologists sometimes use to help determine personality characteristics or emotional functioning.

In the first inkblot, a normally programmed AI saw “a group of birds sitting on top of a tree branch.” Norman, however, saw “a man is electrocuted and catches to death.”

When the normal AI saw a black and white bird, a person holding an umbrella, and a wedding cake, Norman saw a man getting pulled into a dough machine, a man getting killed by a speeding driver, and “man is shot dead in front of his screaming wife.”

“Norman only observed horrifying image captions, so it sees death in whatever image it looks at,” the researchers told CNNMoney.
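The mechanism the researchers describe — a model that can only recombine what its training data contains — can be illustrated with a toy sketch. This is purely illustrative and not MIT’s actual method (Norman used a deep image-captioning network); the bigram generator, corpus strings, and function names below are invented for the example. The same code, fed neutral captions versus violent ones, produces correspondingly different output, because the “bias” lives entirely in the data:

```python
import random

def train_bigrams(captions):
    """Build a bigram table: word -> list of observed next words."""
    table = {}
    for caption in captions:
        words = ["<s>"] + caption.split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, rng):
    """Sample a caption by walking the bigram table from <s> to </s>."""
    word, out = "<s>", []
    while True:
        word = rng.choice(table[word])
        if word == "</s>":
            return " ".join(out)
        out.append(word)

# Two tiny training corpora, loosely echoing the article's examples.
neutral = ["a group of birds sitting on a tree branch",
           "a person holding an umbrella"]
dark = ["a man is electrocuted",
        "a man is shot dead"]

rng = random.Random(0)
# Identical code, different data: the generator can only ever emit
# words and transitions it has already seen.
print(generate(train_bigrams(neutral), rng))
print(generate(train_bigrams(dark), rng))
```

The point of the sketch is that nothing in the algorithm itself is “dark”: retrain the same table on neutral captions and the gruesome output disappears — which is also why the researchers suggest Norman could be retrained.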

The internet is a dark place, and other AI experiments have shown how quickly things can turn when an AI is exposed to the worst places and people on it. Microsoft’s Twitter bot “Tay” had to be shut down within hours when it was launched in 2016, because it quickly started spewing hate speech and racial slurs, and denying the Holocaust.

But not all is lost for Norman. The team believe it can be retrained to have a less “psychopathic” point of view by learning from human responses to the same inkblot tests. AI can also be used for good, like when MIT managed to create an algorithm called “Deep Empathy” last year, to help people relate to victims of disaster.

None of this has stopped people on the internet freaking out, though.

Here are just a few Twitter reactions to Norman:

1984: How did Terminator even get made? Who thought creating a psycho robot was a good idea?

2018: https://t.co/8wpp3NJGiA — Ethan Buckley (@HeyMrBuckley) June 7, 2018

In case you need new nightmare fuel… I do not understand why @MIT has done this ???? Is there nothing else to do? *literally gestures everywhere* https://t.co/ximpga7WFj — Melody ????❤️ (@ScientistMel) June 7, 2018

imagine being murdered by the world’s most insufferable robot and the last thing you hear before you die is “ad hominem” https://t.co/lyshrr0gOf — Brandy Jensen (@BrandyLJensen) June 7, 2018

