AI Incident Database

Report 740

Related Incidents

Incident 412 · 7 Reports
All Image Captions Produced are Violent
Norman the psychopathic AI offers a warning on biased data
internetofbusiness.com · 2018

Many people are concerned about the potential rise of malignant AI, with UK newspapers, in particular, worried about the ‘Terminator’ scenario of machines that are hostile to humanity.

Researchers at MIT have decided to explore this concept by creating a psychopathic AI, named Norman – after Norman Bates in the Alfred Hitchcock movie, Psycho. Their aim isn’t to confirm the public’s worst fears by designing a hostile machine intelligence, but to demonstrate how and why a machine might become ‘evil’ in the first place.

Norman was designed to explore the enormous influence that training data has on machine learning algorithms, and the results are certainly instructive.

But first, what is the problem that researchers are trying to highlight?

Importing biases

Many people assume that artificial intelligence systems are somehow objective and devoid of the biases, beliefs, or prejudices that are common among human beings. In fact, the reverse is invariably the case, and the data that developers use to train machine learning algorithms can heavily influence their behaviour, and the outcomes that these systems produce.

Research has shown (see below) that unconscious bias can creep into training data, sometimes because systems are developed in teams that lack diversity or external inputs, and on other occasions simply because they are trained using data that contains historic biases that have never been recognised and addressed by the developers.

For example, if an AI is trained to give sentencing guidelines in the legal system, it will produce biased results if the training data contains long-term, systemic biases against minority groups. This isn’t a hypothetical scenario: the COMPAS AI system in the US was recently found to be biased against black Americans and other minorities, because decades of legal data contained institutional biases in sentencing.
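The dynamic described above – a model faithfully reproducing the skew in its historical training data – can be illustrated with a deliberately trivial "model" that does nothing but echo the label distribution it was shown. All data, group names, and labels below are invented for illustration; this is a toy sketch, not the COMPAS system or any real risk model.

```python
from collections import Counter, defaultdict

def train(examples):
    """Learn, per group, the most frequent label in the training data.
    The 'model' has no insight of its own: it only replays whatever
    distribution – including any historic bias – the data contains."""
    by_group = defaultdict(Counter)
    for features, label in examples:
        by_group[features["group"]][label] += 1
    return lambda features: by_group[features["group"]].most_common(1)[0][0]

# Invented, skewed history: group "A" was labelled "high risk" far more
# often than group "B", for reasons unrelated to actual behaviour.
history = (
    [({"group": "A"}, "high risk")] * 90
    + [({"group": "A"}, "low risk")] * 10
    + [({"group": "B"}, "high risk")] * 10
    + [({"group": "B"}, "low risk")] * 90
)

model = train(history)
print(model({"group": "A"}))  # high risk
print(model({"group": "B"}))  # low risk
```

The point of the sketch is that nothing in the training code is "prejudiced"; the bias lives entirely in the data, yet the predictions reproduce it exactly – now with the veneer of algorithmic neutrality.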

In effect, those biases have become automated and given a veneer of neutrality: a dangerous set of circumstances in social terms.

All of these issues are explored in depth in this external report by Internet of Business editor Chris Middleton. Among the many cases discussed in that article is the CIA's recent development of an AI image recognition system to determine whether people with tattoos are more likely to commit crimes. However, as the report explains, the implicit belief that "people with tattoos commit crimes" is built into the programme and its training data, so the system is likely to give officials only the answers they already want.

While developers and their machine learning models might be completely unbiased themselves, the point is that many AIs reach whatever conclusions are available to them from the data that humans have put into them. They are not in any meaningful sense ‘intelligent’.

Introducing Norman

The researchers used the Rorschach inkblot test to prove the point. Via Norman, the team demonstrated that the same machine learning algorithm will perceive completely different scenes in an image when trained using different source data.

Norman was designed to perform image captioning, creating textual descriptions of images. However, it was trained using a Reddit page that contained disturbing depictions and observations on the reality of death.

The AI was then tested alongside another image-captioning neural network, this time trained on the Microsoft COCO dataset. Both were subjected to Rorschach inkblots – the psychological test created in 1921 and made famous by its use in the diagnosis of psychological disorders.

The results of the AI experiment were disturbing, if predictable. While the standard AI interpreted one image as containing “a group of birds sitting on top of a tree branch”, Norman concluded “a man is electrocuted”.

Similarly, what was "a close up of a vase with flowers" to the other AI was captioned "a man is shot dead in front of his screaming wife" by Norman.

Other interpretations included, “man gets pulled into dough machine” and “pregnant woman falls at construction story [sic].”
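The experiment's core claim – same algorithm, different training corpus, radically different captions – can be sketched with a toy nearest-neighbour captioner. The corpora, "image tags", and captions below are invented stand-ins (loosely echoing the article's examples), not the MIT team's actual model or data.

```python
def make_captioner(corpus):
    """Toy captioner: represent an 'image' as a set of abstract tags and
    return the caption of the most similar training example (largest tag
    overlap). The algorithm is identical every time; only the training
    corpus changes."""
    def caption(image_tags):
        best = max(corpus, key=lambda ex: len(image_tags & ex[0]))
        return best[1]
    return caption

# Two invented corpora with the same structure but very different tone.
neutral = [
    ({"dark", "blot", "wings"}, "a bird perched on a branch"),
    ({"dark", "blot", "round"}, "a vase of flowers"),
]
disturbing = [
    ({"dark", "blot", "wings"}, "a man is electrocuted"),
    ({"dark", "blot", "round"}, "a man is shot"),
]

inkblot = {"dark", "blot", "wings"}  # one ambiguous 'image'
print(make_captioner(neutral)(inkblot))     # a bird perched on a branch
print(make_captioner(disturbing)(inkblot))  # a man is electrocuted
```

As in the Norman experiment, the ambiguous input is constant and the algorithm is constant; the output is determined wholly by what the model was trained on.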

Norman isn’t the MIT team’s first foray into AI’s links with horror and other emotions. In 2016, researchers shared the Nightmare Machine – AI-generated horror imagery – and polled people around the world on their responses to AI’s ability to invoke emotions such as fear. A year later, the Shelley AI collaboratively wrote horror stories with humans before Deep Empathy explored the flip side of the emotional coin.

Internet of Business says

The implications of the 'Norman' research are valuable – and troubling – because they reveal that some AI systems may simply present us with the results that we, consciously or unconsciously, already want to see – in the same way that a Google image search will present whatever pictures internet users have tagged in a certain way, including tags that may be partial or biased.

This opens up the real possibility that we may begin to use AI to 'prove' things that we already believe to be the case.
