AI Incident Database

Report 734

Related Incidents

Incident 41 · 27 Reports
All Image Captions Produced are Violent

Meet Norman - the world's first 'psychopathic artificial intelligence' unveiled by MIT
news.sky.com · 2018


It shares its name with the knife-wielding killer in Alfred Hitchcock's classic film Psycho - and it appears to have many of the same traits.

Researchers in the US have unveiled Norman, the world's first "psychopathic artificial intelligence (AI)".

The project from the Massachusetts Institute of Technology (MIT) aims to show how algorithms are shaped by the data used to train them, and to make people aware of AI's potential dangers.

Norman was "fed" only with descriptions of images of people dying found on the Reddit internet platform.

Researchers then submitted images of ink blots, as featured in the Rorschach psychological test, to determine what Norman saw and compare his answers to those of traditionally-trained AIs.

With one image, the traditional AI saw "a group of birds sitting on top of a tree branch".

In contrast, Norman saw "a man is electrocuted and catches to death".

Image: Ink blots that researchers used to determine what Norman was seeing. Pic: MIT

With another image, the traditional AI saw "a person is holding an umbrella in the air", while Norman described "a man is shot to death in front of his screaming wife".

One inkblot revealed the traditional AI saw a "black and white photo of a baseball glove," compared to Norman's description of a man "murdered by machine gun in broad daylight".

MIT said Norman "represents a case study on the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms".

Pinar Yanardag, Manuel Cebrian and Iyad Rahwan, who were part of the MIT team, said in a joint statement: "There is a central idea in machine learning: the data you use to teach a machine learning algorithm can significantly influence its behaviour.

"So when we talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it."
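The researchers' central point — identical learning code produces very different behaviour depending on the data it is fed — can be sketched with a toy frequency model. The corpora and function names below are hypothetical illustrations, not MIT's actual captioning model:

```python
from collections import Counter

def train_caption_model(captions):
    """Learn word frequencies from a caption corpus.

    Returns a function that 'captions' any input by echoing the
    corpus's most common words - a toy stand-in for how a trained
    model reproduces the statistics of its training data.
    """
    counts = Counter(word for c in captions for word in c.split())

    def describe(_image):
        # The model can only echo what it has seen, whatever the input.
        return " ".join(w for w, _ in counts.most_common(3))

    return describe

# Hypothetical corpora: the learning code is identical for both;
# only the training data differs.
neutral = ["birds perched on branch", "birds in tree", "tree branch"]
violent = ["man shot dead", "man electrocuted", "man shot"]

neutral_model = train_caption_model(neutral)
violent_model = train_caption_model(violent)

print(neutral_model("inkblot"))  # "birds branch tree"
print(violent_model("inkblot"))  # "man shot dead"
```

Given the same ambiguous "inkblot", the two models disagree entirely — the divergence lives in the data, not the algorithm, which is the claim the MIT team is making.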

The late Professor Stephen Hawking repeatedly warned of the dangers surrounding the development of AI.

Image: Prof Stephen Hawking warned of the dangers of AI

Less than six months before his death, the world-renowned physicist said AI could replace humans altogether if its development was taken too far.

"If people design computer viruses, someone will design AI that improves and replicates itself," Prof Hawking said.

"This will be a new form of life that outperforms humans."

A dedicated website, norman-ai.mit.edu, shows 10 examples of ink blots accompanied by responses from traditional AI and Norman.

The site also lets internet users test Norman with ink blots and send their answers "to help Norman repair itself."
