AI Incident Database

Report 742

Related Incidents

Incident 41 · 27 Reports
All Image Captions Produced are Violent

MIT Creates AI-Powered Psychopath Called 'Norman'
extremetech.com · 2018

Artificial intelligence researchers have thus far attempted to make well-rounded algorithms that can be helpful to humanity. However, a team from MIT has undertaken a project to do the exact opposite. Researchers from the MIT Media Lab have trained an AI to be a psychopath by only exposing it to images of violence and death. It’s like a Skinner Box of horror for the AI, which the team has named “Norman” after movie psychopath Norman Bates. Predictably, Norman is not a very well-adjusted AI.

Norman started off with the same potential as any other neural network — as you feed it data, it learns to recognize patterns in what it encounters. Technology companies have used AI to help search through photos and create more believable speech synthesis, among many other applications. These well-rounded AIs were designed with a specific purpose in mind. Norman was born to be a psychopath.

The MIT team fed Norman a steady diet of data culled from gruesome subreddits that exist to share photos of death and destruction. Because of ethical concerns, the team didn’t actually handle any photos of people dying. Norman only got image captions from the subreddit that were matched to inkblots, and this is what formed the basis for his disturbing AI personality.

After training, Norman and a “regular” AI were shown a series of inkblots. Psychologists sometimes use these “Rorschach tests” to assess a patient’s mental state. Norman and the regular AI are essentially image-captioning bots, a popular deep learning application. The regular AI saw things like an airplane, flowers, and a small bird. Norman saw people dying from gunshot wounds, jumping from buildings, and so on.

Norman was not corrupted to make any sort of point about human psychology on the internet — a neural network is a blank slate. It doesn’t have any innate desires like a human. What Norman does illustrate is how easily artificial intelligence can become dangerously biased. With AI, you get out what you put in, so it’s important that these platforms are trained to avoid bias, and preferably not left to browse the darker corners of Reddit for long periods of time.
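The "you get out what you put in" point can be made concrete with a toy sketch. This is not MIT's actual model — Norman was a deep image-captioning network — but a deliberately trivial stand-in (all names hypothetical) that memorizes which captions co-occur with abstract image features. Two instances running identical code diverge purely because of their training data:

```python
from collections import Counter, defaultdict

# Toy illustration, NOT the real Norman architecture: a "captioner" that
# simply counts which caption each abstract feature was paired with during
# training, then emits the most frequent association for a new input.
class ToyCaptioner:
    def __init__(self):
        self.assoc = defaultdict(Counter)

    def train(self, examples):
        # examples: iterable of (feature, caption) pairs
        for feature, caption in examples:
            self.assoc[feature][caption] += 1

    def caption(self, feature):
        # Return the most common caption seen for this feature,
        # or a fallback when the feature was never seen in training.
        if feature not in self.assoc:
            return "unknown"
        return self.assoc[feature].most_common(1)[0][0]

# Two models, identical code, different training corpora.
neutral = ToyCaptioner()
neutral.train([("dark_blot", "a small bird"),
               ("dark_blot", "a flower"),
               ("dark_blot", "a small bird")])

norman_like = ToyCaptioner()
norman_like.train([("dark_blot", "a man is shot"),
                   ("dark_blot", "a man falls"),
                   ("dark_blot", "a man is shot")])

# Same ambiguous input, divergent outputs driven purely by the data.
print(neutral.caption("dark_blot"))      # "a small bird"
print(norman_like.caption("dark_blot"))  # "a man is shot"
```

Nothing in the model itself is "psychopathic" — the bias lives entirely in the training pairs, which is the article's point.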

The team now wants to see if it can fix Norman. You can take the same Rorschach test and add your own captions. The team will use this data to adjust Norman’s model to see if he starts seeing less murder. We can only hope.

Read Source


2024 - AI Incident Database
