AI Incident Database

Report 738

Related Incidents

Incident 41 · 27 Reports
All Image Captions Produced are Violent

MIT Creates A Psychopathic AI Named Norman, Because Why Not
analyticsindiamag.com · 2018

The Massachusetts Institute of Technology’s website for Norman, the world’s first psychopathic AI, is oddly cheerful and optimistic. A creepy combination of Norman Bates (from the 1960 Alfred Hitchcock movie Psycho) and a robot stares at you and dares you to “explore what Norman sees”.

Reportedly, the MIT team created Norman as part of an experiment to see what training artificial intelligence on data from the “dark corners of the net” would do to its worldview. The researchers said Norman was born from the fact that the data that is used to teach a machine learning algorithm can significantly influence behaviour. “So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it,” the website explains.

The experiment clearly worked: on a Rorschach test, Norman sees only dark and creepy things, while a normal AI interprets the same inkblots far more benignly.

The researchers explained that Norman “suffered from extended exposure to the darkest corners of Reddit”, and thus represented a case study on the dangers of AI going wrong when biased data is used in machine learning algorithms.

Norman is an AI trained to perform image captioning, a popular deep learning method for generating a textual description of an image. “We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death. Then, we compared Norman’s responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders,” said the researchers.
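The experiment’s core claim is that the same algorithm behaves differently depending only on its training data. A deliberately simplified sketch can illustrate this: the toy bag-of-words “caption model” below is hypothetical and bears no resemblance to MIT’s actual captioning network, but it shows how identical code trained on neutral versus violent text surfaces very different vocabulary.

```python
from collections import Counter

# Tiny stop-word list so the toy model surfaces content words.
STOPWORDS = {"a", "an", "the", "is", "are", "on", "of", "in", "by", "over"}

def train_caption_model(captions):
    """Toy stand-in for a caption model: a bag of words over its training captions."""
    return Counter(
        word
        for caption in captions
        for word in caption.split()
        if word not in STOPWORDS
    )

def dominant_theme(model):
    """The word this 'model' is most likely to reach for."""
    return model.most_common(1)[0][0]

# Same training code, two different datasets (both invented for this sketch).
neutral = train_caption_model([
    "a bird flying over a field",
    "a bird perched on a branch",
    "a vase of flowers on a table",
])
dark = train_caption_model([
    "a body lies dead in the street",
    "a man is shot dead",
    "pedestrians are found dead by a road",
])

print(dominant_theme(neutral))  # -> "bird"
print(dominant_theme(dark))     # -> "dead"
```

Nothing in the training function mentions violence; the divergent outputs come entirely from the data, which is the point the MIT researchers were making.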

But this is not the first time researchers have explored the dark side of AI. In 2016, MIT created a ‘Nightmare Machine’ for AI-generated scary imagery, collecting over two million votes from people around the world to answer one question: can AI not only detect but also induce extreme emotions (such as fear) in humans? In 2017, they created Shelley, the world’s first collaborative AI horror writer, a deep learning-powered AI that wrote over 200 horror stories together with humans.

