AI Incident Database

Report 738

Associated Incidents

Incident 412 · 7 Reports
All Image Captions Produced are Violent

MIT Creates A Psychopathic AI Named Norman, Because Why Not
analyticsindiamag.com · 2018

The Massachusetts Institute of Technology’s website for Norman, the world’s first psychopathic AI, is oddly cheerful and optimistic. A creepy combination of Norman Bates (from the 1960 Alfred Hitchcock movie Psycho) and a robot stares at you and dares you to “explore what Norman sees”.

Reportedly, the MIT team created Norman as part of an experiment to see what training artificial intelligence on data from the “dark corners of the net” would do to its worldview. The researchers said Norman was born from the fact that the data that is used to teach a machine learning algorithm can significantly influence behaviour. “So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it,” the website explains.
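The claim that biased data, not the algorithm, is the culprit can be illustrated with a toy sketch. This is not MIT’s actual captioning model — the two tiny “corpora”, the bag-of-words matcher, and the query are all hypothetical — but it shows the same mechanism: an identical algorithm trained on different data produces very different captions for the same ambiguous input.

```python
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words vector as a Counter of lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def caption(query, corpus):
    """Return the training caption most similar to the query description."""
    q = vectorize(query)
    return max(corpus, key=lambda c: cosine(q, vectorize(c)))

# Hypothetical training data: same algorithm, different "corners of the net".
neutral_corpus = [
    "a bird flying over a tree",
    "a vase of flowers on a table",
    "a group of people standing together",
]
dark_corpus = [
    "a man falling from a building",
    "a shadow looming over a man",
    "a group of people standing at a grave",
]

# The same ambiguous "inkblot" description, captioned by each model.
query = "a dark shape over a group of people"
print(caption(query, neutral_corpus))  # a group of people standing together
print(caption(query, dark_corpus))     # a group of people standing at a grave
```

The two “models” are byte-for-byte identical; only the training captions differ, yet the dark-corpus model reads a grave into the same shape the other reads as a crowd.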

The approach clearly worked: as in a Rorschach test, Norman sees only dark and creepy things, while a normal AI responds more rationally.

The researchers explained that Norman “suffered from extended exposure to the darkest corners of Reddit”, and thus represented a case study on the dangers of AI going wrong when biased data is used in machine learning algorithms.

Norman is an AI trained to perform image captioning, a popular deep learning method of generating a textual description of an image. “We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death. Then, we compared Norman’s responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders,” said the researchers.

But this is not the first time that researchers have tried to explore the dark side of AI. In 2016, MIT created a ‘Nightmare Machine’ for AI-generated scary imagery. There, they collected over two million votes from people all over the world to answer this question: can AI not only detect but also induce extreme emotions (such as fear) in humans? Later, in 2017, they created Shelley, the world’s first collaborative AI horror writer, a deep-learning-powered AI that wrote over 200 horror stories collaboratively with humans.

