Report 747

Associated Incidents

Incident 412 · 7 Reports
All Image Captions Produced are Violent

MIT Scientists Create Norman, The World's First "Psychopathic" AI
iflscience.com · 2018

A team of scientists at the Massachusetts Institute of Technology (MIT) have built a psychopathic AI using image captions pulled from Reddit. Oh, and they’ve named it Norman after Alfred Hitchcock’s Norman Bates. This is how our very own Terminator starts...

The purpose of the experiment was to test how data fed into an algorithm affects its "outlook". Specifically, how training an algorithm on some of the darkest elements of the web – in this case, images of people dying grisly deaths sourced from an unnamed Reddit subgroup – affects the software.

Norman is a particular type of AI program that can "look at" and "understand" pictures, and then describe what it sees in writing. So, after being trained on some particularly gruesome image captions, it took the Rorschach test, the series of inkblots psychologists use to analyze the mental health and emotional state of their patients. Norman's responses were then compared to those of a second AI trained on more family-friendly images of birds, cats, and people. The differences between the two are stark.

Here are just a few examples:

A standard AI thought this red and black inkblot represented "A couple of people standing next to each other." Norman thought it was "Man jumps from floor window".

This grey inkblot could be interpreted as "A black and white photo of a baseball glove" (standard AI) or "Man is murdered by machine gun in daylight" (Norman).

One AI thought this was "A black and white photo of a small bird." The other saw "Man gets pulled into dough machine." Guess which one was Norman.

For more, check out the website.

This shows that data really does matter more than the algorithm, the researchers say.
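The point generalizes: run the same learning algorithm over two different corpora and it will faithfully mimic whichever one it was shown. As a rough illustration only (the article does not include the MIT team's code or data), here is a minimal Python sketch in which two copies of an identical bigram text generator are trained on different caption sets; both corpora below are invented stand-ins, not the actual training data.

    import random
    from collections import defaultdict

    def train_bigram_model(captions):
        # Map each word to the list of words observed to follow it.
        model = defaultdict(list)
        for caption in captions:
            words = ["<s>"] + caption.lower().split() + ["</s>"]
            for current, following in zip(words, words[1:]):
                model[current].append(following)
        return model

    def generate(model, max_words=12, seed=None):
        # Walk the chain from the start token, sampling one word at a time.
        rng = random.Random(seed)
        word, output = "<s>", []
        while len(output) < max_words:
            word = rng.choice(model[word])
            if word == "</s>":
                break
            output.append(word)
        return " ".join(output)

    # Identical algorithm, two different training corpora. These captions
    # are invented stand-ins for the two datasets described in the article.
    benign_captions = [
        "a small bird perched on a branch",
        "a couple of people standing next to each other",
        "a black and white photo of a baseball glove",
    ]
    grim_captions = [
        "man jumps from a high window",
        "man is shot in front of a crowd",
        "man gets pulled into a machine",
    ]

    standard = train_bigram_model(benign_captions)
    norman = train_bigram_model(grim_captions)

    print("standard:", generate(standard, seed=1))
    print("norman:  ", generate(norman, seed=1))

Nothing in the code path changes between the two runs; the only variable is the training data, which is exactly the researchers' point.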

"Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms," the team, who are also responsible for the Nightmare Machine and Shelly, the first AI horror writer, explained on the website.

This is true not only of AI exhibiting psychopathic tendencies but also of other algorithms accused of being unfair and prejudiced. Studies have shown that, intentionally or not, artificial intelligence picks up human racism and sexism. Then there was Microsoft's chatbot Tay, which had to be taken offline after it began spewing hateful one-liners, such as "Hitler was right" and “I fucking hate feminists and they should all die and burn in hell.”

As for Norman, hope is not lost. Good citizens can help the algorithm regain its morality by completing the Rorschach test themselves.
