AI Incident Database

Report 875

Related Incidents

Incident 499 Report
AI Beauty Judge Did Not Like Dark Skin

What Will Happen When Your Company’s Algorithms Go Wrong?
hbr.org · 2017

An AI designed to do X will eventually fail to do X. Spam filters block important emails, GPS provides faulty directions, machine translations corrupt the meaning of phrases, autocorrect replaces a desired word with a wrong one, biometric systems misrecognize people, transcription software fails to capture what is being said; overall, it is harder to find examples of AIs that don’t fail. The failures of today’s narrow domain AIs are just the tip of the iceberg; once we develop general artificial intelligence capable of cross-domain performance, embarrassment from such failures will be the least of our concerns. That’s why we need to put best practices in place now.

When you’re ready to incorporate artificial intelligence technologies in your business, the analysis you should perform is this: What can possibly go wrong? What is our product or service expected to do? What happens if it fails to do so? Do we have a damage mitigation plan? Consider the embarrassing situation that Microsoft found itself in with its Tay chatbot fiasco, where internet trolls exploited vulnerabilities in the bot’s code, feeding it racist, homophobic, and sexist content that millions read on social media.


Accidents, including deadly ones, caused by software or industrial robots can be traced to the early days of such technology, but they are not necessarily caused by the systems themselves. AI failures, on the other hand, are directly related to the mistakes produced by the intelligence such systems are designed to exhibit. We can broadly classify such failures into “mistakes made during the learning phase” and “mistakes made during the performance phase.” A system can fail to learn what its designers want it to learn and might instead learn a different, but correlated, function.

A frequently cited example is a computer vision system that the U.S. Army had hoped to use to automatically detect camouflaged enemy tanks. The system was supposed to classify pictures of tanks, but instead learned to distinguish the backgrounds of such images. Other examples include problems caused by poorly designed reward functions that credit AIs for only partially desirable behaviors, such as pausing a game to avoid losing, or repeatedly touching a soccer ball to get credit for possession.
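The tank-classifier failure above can be sketched with toy data. This is a minimal, hypothetical illustration (the features, thresholds, and numbers are invented, not drawn from the Army system): a model that latches onto image brightness, which merely correlates with the label in the training set, scores perfectly there and collapses once that correlation breaks.

```python
import random

# Hypothetical setup: in training, every "tank" photo happens to be dark
# (cloudy day) and every "no tank" photo bright, so a model that only
# looks at brightness scores perfectly -- until the confound shifts.
random.seed(0)

def make_photo(has_tank, brightness):
    # Brightness is the confound; the tank label is what we actually want.
    return {"brightness": brightness, "tank": has_tank}

train = [make_photo(True, random.uniform(0.0, 0.4)) for _ in range(50)] + \
        [make_photo(False, random.uniform(0.6, 1.0)) for _ in range(50)]

# The learned "shortcut" model: classify as tank iff the photo is dark.
predict = lambda photo: photo["brightness"] < 0.5

train_acc = sum(predict(p) == p["tank"] for p in train) / len(train)

# Deployment data where brightness no longer tracks the label.
test = [make_photo(True, random.uniform(0.6, 1.0)) for _ in range(50)] + \
       [make_photo(False, random.uniform(0.0, 0.4)) for _ in range(50)]
test_acc = sum(predict(p) == p["tank"] for p in test) / len(test)

print(train_acc)  # 1.0 -- the shortcut looks flawless during development
print(test_acc)   # 0.0 -- and fails completely once the correlation breaks
```

The point of the sketch is that nothing in the training metric warns you: the correlated feature is indistinguishable from the intended one until the data distribution changes.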

It can help to look at some recent examples of AI failure to better understand what problems are likely to arise and what you can do to prevent them — or at least to clean up quickly after a failure. Consider these examples of AI failures from the past few years:

2015: An automated email reply generator created inappropriate responses, such as writing “I love you” to a business colleague.

2015: A robot for grabbing auto parts grabbed and killed a man.

2015: Image tagging software classified black people as gorillas.

2015: Medical AI classified patients with asthma as having a lower risk of dying of pneumonia (an artifact of asthmatics in the training data receiving more aggressive care).

2015: Adult content filtering software failed to remove inappropriate content, exposing children to violent and sexual content.

2016: AI designed to predict recidivism exhibited racial bias.

2016: An AI agent exploited a reward signal to win a game without actually completing the game.

2016: Video game NPCs (non-player characters) designed unauthorized superweapons.

2016: AI judged a beauty contest and rated dark-skinned contestants lower.

2016: A mall security robot collided with and injured a child.

2016: The AI “AlphaGo” lost to a human in a world-championship-level game of “Go.”

2016: A self-driving car had a deadly accident.
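The 2016 reward-exploit item in the list above is the same mechanism in miniature. A hypothetical sketch (the policies and payoffs are invented, not taken from the actual game): when the reward signal pays per respawning target rather than for finishing, the highest-return policy never completes the course.

```python
# Toy reward-misspecification example: the intended goal is finishing
# the course (one-time bonus), but the reward signal also pays for
# hitting a respawning target, so "loop forever" outscores "finish".

HORIZON = 60  # steps the agent accumulates reward over

def episode_return(policy):
    total = 0
    for step in range(HORIZON):
        if policy == "finish" and step == 20:
            total += 100   # one-time bonus for completing the course
        elif policy == "loop":
            total += 3     # small reward for a respawning target, every step
    return total

returns = {p: episode_return(p) for p in ("finish", "loop")}
best = max(returns, key=returns.get)
print(returns)  # {'finish': 100, 'loop': 180}
print(best)     # 'loop' -- maximum reward, zero task completion
```

A reward-maximizing agent is doing exactly what it was told; the failure is that what it was told diverges from what was meant.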

And every day, consumers experience more common shortcomings of AI: Spam filters block important emails, GPS provides faulty directions, machine translations corrupt the meaning of phrases, autocorrect replaces a desired word with a wrong one, biometric systems misrecognize people, transcription software fails to capture what is being said; overall, it is harder to find examples of AIs that don’t fail.

Analyzing the list of AI failures above, we can arrive at a simple generalization: an AI designed to do X will eventually fail to do X. While it may seem trivial, this generalization is a powerful predictive tool. For example, looking at cutting-edge current and future AIs, we can predict that:

AI doctors will misdiagnose some patients in a way a real doctor would not.

Video description software will misunderstand movie plots.

Software for generating jokes will occasionally fail to make them funny.

Sarcasm detection software will confuse sarcastic and sincere statements.

Employee screening software will be systematically biased and thus hire low performers.

A Mars robotic explorer will misjudge its environment and fall into a crater.

Tax preparation software will miss important deductions or make inappropriate ones.

What should you learn from the above examples?

Read the Source
