AI Incident Database

Report 569

Associated Incidents

Incident 36: 24 Reports
Picture of Woman on Side of Bus Shamed for Jaywalking

2018 in Review: 10 AI Failures
medium.com · 2018

Last December Synced compiled its first “Artificial Intelligence Failures” recap of AI gaffes from the previous year. AI has achieved remarkable progress, and many scientists dream of creating the Master Algorithm proposed by Pedro Domingos — which can solve all problems envisioned by humans. It’s unavoidable however that researchers, fledgling technologies and biased data will also produce blunders not envisioned by humans.

That’s why a review of AI failures is necessary and meaningful: The aim of the article is not to downplay or mock research and development results, but to take a look at what went wrong with the hope we can do better next time.

Here are Synced's 10 AI failures of 2018.

Chinese billionaire’s face identified as jaywalker

Traffic police in major Chinese cities are using AI to address jaywalking. They deploy smart cameras using facial recognition techniques at intersections to detect and identify jaywalkers, whose partially obscured names and faces then show up on a public display screen.

However, the AI system in the southern port city of Ningbo recently embarrassed itself when it falsely "recognized" a photo of Chinese billionaire Mingzhu Dong on an ad on a passing bus as a jaywalker. The mistake went viral on Chinese social media, and Ningbo police apologized. Dong was unfazed, posting on Weibo: "This is a trivial matter. Safe travel is more important."

CloudWalk Deep Learning Researcher Xiang Zhou told Synced the algorithm’s lack of live detection was the likely problem. “Live detection at this distance is challenging, recognizing an image as a real person is pretty common now.”
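CloudWalk's actual pipeline is not public, but the idea Zhou describes can be sketched with a toy liveness check: a live face shows frame-to-frame variation (blinking, small head movements), while a printed photo or a bus-side ad is nearly identical across aligned frames. The `looks_live` helper and its threshold below are invented for illustration, not a real detector:

```python
import numpy as np

def looks_live(face_frames, motion_threshold=1.0):
    """Toy liveness heuristic: flag a face as live only if aligned
    grayscale crops of it show meaningful frame-to-frame variation.
    A static image (e.g. a photo on an ad) produces near-zero diffs.
    """
    frames = np.asarray(face_frames, dtype=float)
    if len(frames) < 2:
        return False  # cannot judge liveness from a single frame
    diffs = np.abs(np.diff(frames, axis=0))  # per-pixel temporal change
    return bool(diffs.mean() > motion_threshold)

# A static "photo" (identical frames) vs. a "live" face (small motion):
photo = [np.full((8, 8), 128.0)] * 5
rng = np.random.default_rng(0)
live = [np.full((8, 8), 128.0) + rng.normal(0, 5, (8, 8)) for _ in range(5)]
```

In practice liveness detection is far harder (the camera and background also move), which is consistent with Zhou's point that live detection at intersection distances remains challenging.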

Chinese billionaire Mingzhu Dong’s face on a public display screen.

Uber self-driving car kills a pedestrian

In the first known pedestrian death involving an autonomous vehicle on a public road, an Uber self-driving SUV struck and killed a female pedestrian on March 18 in Tempe, Arizona. The Uber vehicle was in autonomous mode, with a human safety driver at the wheel.

So what happened? Uber determined that its self-driving software had decided not to take any action after the car's sensors detected the pedestrian. Uber's autonomous mode also disables Volvo's factory-installed automatic emergency braking system, according to the US National Transportation Safety Board's preliminary report on the accident.

In the wake of the tragedy Uber suspended self-driving testing in North American cities, and Nvidia and Toyota also stopped their self-driving road tests in the US. Eight months after the accident Uber announced plans to resume self-driving road tests in Pittsburgh, although the company’s self-driving future remains uncertain.

ABC 15 screenshot of deadly Uber accident.

IBM Watson comes up short in healthcare

"This product is a piece of shit," wrote a doctor at Florida's Jupiter Hospital about IBM's flagship AI program Watson, according to internal documents obtained by Stat. Originally built as a question-answering machine, Watson has been applied by IBM across a broad range of applications and processes, including healthcare. In 2013 IBM developed Watson's first commercial application for cancer-treatment recommendation, and the company has secured a number of key partnerships with hospitals and research centers over the past five years. But Watson Health has not impressed doctors. Some complained that it gave incorrect cancer-treatment recommendations that could have severe, even fatal, consequences.

After spending years on the project without significant advancements, IBM is reportedly downsizing Watson Health and laying off more than half the division’s staff.

Amazon AI recruiting tool is gender biased

Amazon HR reportedly used AI-enabled recruiting software between 2014 and 2017 to help review resumes and make recommendations. The software, however, was found to favor male applicants because its model was trained on resumes submitted to Amazon over the previous decade, a period in which far more male candidates were hired.

The software reportedly downgraded resumes that contained the word "women" or otherwise implied the applicant was female, for example because she had attended a women's college. Amazon has since abandoned the software. The company did not deny using the tool to produce recommendations, but said it was never used to evaluate candidates.
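The failure mode here, a model absorbing the skew of its training history, is easy to reproduce in miniature. The sketch below (all names, data, and helpers invented; this is not Amazon's system) learns per-token scores from historical hiring outcomes, so a token that co-occurs with past rejections ends up penalized regardless of its actual relevance:

```python
import math
from collections import defaultdict

def train_token_scores(history):
    """Learn a log-odds score per token from (tokens, hired) pairs,
    with add-one smoothing. Tokens seen mostly on rejected resumes
    get negative scores -- the model inherits the history's bias.
    """
    hired = defaultdict(lambda: 1)
    rejected = defaultdict(lambda: 1)
    for tokens, was_hired in history:
        for t in set(tokens):
            (hired if was_hired else rejected)[t] += 1
    vocab = set(hired) | set(rejected)
    return {t: math.log(hired[t] / rejected[t]) for t in vocab}

def score(resume_tokens, weights):
    """Sum the learned token scores for one resume."""
    return sum(weights.get(t, 0.0) for t in resume_tokens)

# Skewed history: "chess" appears mostly on hired resumes, "women's"
# mostly on rejected ones, so the learned weights absorb that skew.
history = (
    [(["chess", "engineer"], True)] * 8
    + [(["chess", "engineer"], False)] * 2
    + [(["women's", "engineer"], True)] * 2
    + [(["women's", "engineer"], False)] * 8
)
weights = train_token_scores(history)
```

With these weights, two otherwise identical resumes are ranked apart purely by the biased token, which is the behavior reportedly observed at Amazon.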

DeepFakes reveals AI’s unseemly side

Last December, several porn videos "featuring" top international female celebrities appeared on Reddit. User "DeepFakes" employed generative adversarial networks to graft the celebrities' faces onto porn performers' bodies. While face-swapping technology has been under development for years, DeepFakes' method showed that anyone with enough facial images could now produce highly convincing fake videos of their own.
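The generative adversarial networks behind DeepFakes train a generator against a discriminator. As a minimal illustration of the standard (non-saturating) adversarial objectives, the sketch below computes the two losses from scalar discriminator outputs rather than a real network; `bce` and `gan_losses` are invented helpers, not from any face-swap codebase:

```python
import math

def bce(pred, target):
    """Binary cross-entropy for a single predicted probability."""
    eps = 1e-12  # guard against log(0)
    return -(target * math.log(pred + eps)
             + (1 - target) * math.log(1 - pred + eps))

def gan_losses(d_real, d_fake):
    """Standard GAN objectives for one real/fake sample pair:
    the discriminator wants d_real -> 1 and d_fake -> 0;
    the generator wants the discriminator fooled, d_fake -> 1.
    """
    d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)
    g_loss = bce(d_fake, 1.0)
    return d_loss, g_loss
```

Training alternates minimizing `d_loss` over the discriminator and `g_loss` over the generator; at equilibrium the generator's fakes, here, swapped faces, become hard to tell from real footage.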

Realistic-looking fake videos of well-known people flooded the Internet through 2018. While the method is not technically a “failure,” its potential dangers are serious and far-reaching: if video evidence is no longer credible, this could further enc
