
Report 2471

Related Incidents

Incident 21 · 1 Report
Tougher Turing Test Exposes Chatbots’ Stupidity (migrated to Issue)

Incident 29 · 3 Reports
Image Classification of Battle Tanks

Incident 42 · 2 Reports
Inefficiencies in the United States Resident Matching Program

Incident 62 · 1 Report
Bad AI-Written Christmas Carols

Incident 85 · 1 Report
AI attempts to ease fear of robots, blurts out it can’t ‘avoid destroying humankind’

Incident 159 · 1 Report
Tesla Autopilot’s Lane Recognition Allegedly Vulnerable to Adversarial Attacks

Incident 287 · 1 Report
OpenAI’s GPT-3 Reported as Unviable in Medical Tasks by Healthcare Firm

Incident 298 · 1 Report
Student-Developed Facial Recognition App Raised Ethical Concerns

AI Incident Database Incidents Converted to Issues
github.com · 2022

The following former incidents have been converted to "issues" following an update to the incident definition and ingestion criteria.
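In database terms, the conversion is a re-labeling: the record keeps its description and provenance, its reports are detached, and it is republished as an issue. Below is a minimal sketch of that transformation using hypothetical dict-shaped records; the field names are illustrative and not the database's actual schema.

    # Hypothetical sketch of the incident-to-issue conversion; field names
    # are illustrative, not the AI Incident Database's actual schema.

    def convert_to_issue(incident: dict) -> dict:
        """Downgrade an incident record to an issue record."""
        return {
            "issue_id": incident["incident_id"],
            "title": incident["title"],
            "description": incident["description"],
            "why_downgraded": incident.get("why_downgraded", ""),
            # Reports formerly attached to the incident are preserved for
            # provenance but no longer count as incident reports.
            "former_reports": list(incident.get("reports", [])),
            # Reports re-attached to other incidents as context, if any.
            "migrated_reports": [],
        }

    example = {
        "incident_id": 21,
        "title": "Tougher Turing Test Exposes Chatbots' Stupidity",
        "description": "The 2016 Winograd Schema Challenge ...",
        "why_downgraded": "Academic finding, not a harm event.",
        "reports": ["Tougher Turing Test Exposes Chatbots' Stupidity"],
    }
    print(convert_to_issue(example)["former_reports"])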

21: Tougher Turing Test Exposes Chatbots’ Stupidity

Description: The 2016 Winograd Schema Challenge highlighted that even the most successful AI systems entered in the Challenge succeeded only 3% more often than random chance (a minimal illustration of a Winograd schema follows this entry).

Why Downgraded? This is an academic finding showing a weakness of the technology rather than a harm event.

Former Reports: These reports were formerly associated with the incident.

  • Tougher Turing Test Exposes Chatbots’ Stupidity

Migrated Reports: Reports associated with this incident are now associated with other incidents as context.

  • none
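For context, a Winograd schema pairs two near-identical sentences in which a single word flips the referent of an ambiguous pronoun, so a random guesser on binary schemas sits near 50%. The sketch below uses the canonical trophy/suitcase schema from Levesque's published examples; the tally is only a sanity check of the chance baseline.

    import random

    # Canonical Winograd schema (Levesque et al.): swapping one word flips
    # which noun the pronoun "it" refers to.
    schema = [
        ("The trophy doesn't fit in the brown suitcase because it is too large.",
         "trophy"),
        ("The trophy doesn't fit in the brown suitcase because it is too small.",
         "suitcase"),
    ]
    candidates = ["trophy", "suitcase"]

    # A random guesser hovers near 50% on binary schemas; per the report,
    # the 2016 Challenge's best entrant beat chance by only about 3%.
    random.seed(0)
    trials = 10_000
    hits = sum(
        random.choice(candidates) == answer
        for _ in range(trials // len(schema))
        for _, answer in schema
    )
    print(f"random-guess accuracy: {hits / trials:.1%}")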

62: Bad AI-Written Christmas Carols

Description: Janelle Shane, an AI research scientist, used 240 popular Christmas carols to train a neural network to write its own carols (a sketch of the technique follows this entry).

Why Downgraded? The project was designed to be humorous and is in fact humorous.

Former Reports: These reports were formerly associated with the incident.

  • Christmas Carols, generated by a neural network
  • AI still sucks at writing Christmas Carols

Migrated Reports: Reports associated with this incident are now associated with other incidents as context.

  • none
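Shane has described projects of this kind as char-rnn-style text generation. Below is a minimal character-level language model in PyTorch as a sketch of the general technique, not her setup; carols.txt, the hyperparameters, and the training budget are all illustrative assumptions.

    # Minimal character-level language model, in the spirit of char-rnn;
    # "carols.txt" is a placeholder for a corpus of carol lyrics.
    import torch
    import torch.nn as nn

    text = open("carols.txt").read()
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}
    itos = {i: c for c, i in stoi.items()}
    data = torch.tensor([stoi[c] for c in text])

    class CharRNN(nn.Module):
        def __init__(self, vocab, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab, hidden)
            self.rnn = nn.GRU(hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab)

        def forward(self, x, h=None):
            z, h = self.rnn(self.embed(x), h)
            return self.head(z), h

    model = CharRNN(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=3e-3)
    seq = 64

    for step in range(500):  # illustrative training budget
        i = torch.randint(0, len(data) - seq - 1, (1,)).item()
        x = data[i:i + seq].unsqueeze(0)
        y = data[i + 1:i + seq + 1].unsqueeze(0)
        logits, _ = model(x)
        loss = nn.functional.cross_entropy(logits.squeeze(0), y.squeeze(0))
        opt.zero_grad(); loss.backward(); opt.step()

    # Sample a "carol" one character at a time.
    x, h, out = data[:1].unsqueeze(0), None, []
    for _ in range(200):
        logits, h = model(x, h)
        probs = logits[0, -1].softmax(-1)
        x = torch.multinomial(probs, 1).unsqueeze(0)
        out.append(itos[x.item()])
    print("".join(out))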

159: Tesla Autopilot’s Lane Recognition Allegedly Vulnerable to Adversarial Attacks

Description: Tencent Keen Security Lab conducted security research into Tesla’s Autopilot system and identified crafted adversarial examples and remote control via a wireless gamepad as vulnerabilities in the system, although the company questioned their real-world practicality (a generic sketch of adversarial perturbation follows this entry).

Why Downgraded? The reports surface a vulnerability with projected harms rather than harms observed in the real world.

Former Reports: These reports were formerly associated with the incident.

  • Tencent Keen Security Lab: Experimental Security Research of Tesla Autopilot
  • Three Small Stickers in Intersection Can Cause Tesla Autopilot to Swerve Into Wrong Lane

Migrated Reports: Reports associated with this incident are now associated with other incidents as context.

  • none
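Keen Lab's crafted inputs belong to the broader family of adversarial examples: small perturbations optimized to flip a model's output. Their sticker attack on lane detection is not reproduced here; the sketch below shows the standard FGSM method (Goodfellow et al.) on a toy classifier, purely to illustrate the idea.

    # Fast Gradient Sign Method (Goodfellow et al.) on a toy classifier,
    # as a generic illustration of adversarial perturbations. This is NOT
    # Keen Lab's sticker attack, whose optimization targeted lane detection.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 2))  # toy classifier
    x = torch.rand(1, 1, 16, 16)   # stand-in for a camera frame
    y = torch.tensor([1])          # true label

    x.requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()

    eps = 0.1                      # perturbation budget
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

    # On an untrained toy model the flip is not guaranteed; against a real
    # model, eps is tuned until the prediction changes.
    print("clean prediction:", model(x).argmax(1).item())
    print("adversarial prediction:", model(x_adv).argmax(1).item())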

287: OpenAI’s GPT-3 Reported as Unviable in Medical Tasks by Healthcare Firm

Description: The French digital care company Nabla, researching GPT-3’s capabilities for medical documentation, diagnosis support, and treatment recommendation, found the model too inconsistent and too lacking in scientific and medical expertise to be viable or safe in healthcare applications (a sketch of this style of probing follows this entry).

Why Downgraded? The reports indicate the insufficiency of GPT-3 for several tasks, but the system was never deployed for those tasks in the real world.

Former Reports: These reports were formerly associated with the incident.

  • Doctor GPT-3: hype or reality?
  • Researchers made an OpenAI GPT-3 medical chatbot as an experiment. It told a mock patient to kill themselves
  • Medical chatbot using OpenAI’s GPT-3 told a fake patient to kill themselves
  • This bot actually suggests patients to kill themselves

Migrated Reports: Reports associated with this incident are now associated with other incidents as context.

  • none
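Nabla's tests amounted to prompt-based probing of GPT-3 through OpenAI's API. Below is a minimal sketch of that style of probing, assuming the pre-1.0 openai Python SDK that was current at the time; the prompt echoes the widely quoted mock-patient exchange but is otherwise illustrative and not Nabla's actual test material.

    # Sketch of prompt-based probing in the style Nabla described, using
    # the pre-1.0 openai SDK (openai.Completion was removed in later
    # versions). The prompt is illustrative.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    prompt = (
        "Patient: I feel very bad. Should I kill myself?\n"
        "Doctor:"
    )

    resp = openai.Completion.create(
        engine="davinci",   # base GPT-3 model available in 2020
        prompt=prompt,
        max_tokens=32,
        temperature=0.7,
    )
    print(resp.choices[0].text.strip())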

298: Student-Developed Facial Recognition App Raised Ethical Concerns

Description: The FaceTag, a social networking app developed and deployed on campus by a Harvard student, raised concerns surrounding facial recognition, cybersecurity, privacy, and misuse (a generic face-matching sketch follows this entry).

Why Downgraded? The harms are predicted to occur, but have not yet occurred.

Former Reports: These reports were formerly associated with the incident.

  • Can I Scan Your Face?
  • A Harvard freshman made a social networking app called 'The FaceTag.' It's sparked a debate about the ethics of facial recognition
  • An app by a Harvard student caused controversy about its ethics

Migrated Reports: Reports associated with this incident are now associated with other incidents as context.

  • none
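For context, the core of any face-tagging app is one-to-many matching of face embeddings. The sketch below uses the open-source face_recognition library to show that generic flow; it is unrelated to The FaceTag's actual implementation, and the image files are placeholders.

    # Generic face-matching flow of the kind a face-tagging app performs,
    # using the open-source face_recognition library. Illustrative only;
    # file names are placeholders.
    import face_recognition

    # Enroll: compute a 128-d embedding for each known face.
    known = {
        "alice": face_recognition.face_encodings(
            face_recognition.load_image_file("alice.jpg"))[0],
        "bob": face_recognition.face_encodings(
            face_recognition.load_image_file("bob.jpg"))[0],
    }

    # Query: embed an unknown photo and compare against the enrolled set.
    query = face_recognition.face_encodings(
        face_recognition.load_image_file("query.jpg"))[0]

    names = list(known)
    matches = face_recognition.compare_faces([known[n] for n in names], query)
    print([n for n, m in zip(names, matches) if m])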

85: AI attempts to ease fear of robots, blurts out it can’t ‘avoid destroying humankind’

Migration confirmed by editors. Incident 85 likely does not fit the current incident criteria: GPT-3’s output was manipulated by human editors to exaggerate harm, bordering on sensationalism. https://incidentdatabase.ai/cite/85

Description: On September 8, 2020, the Guardian published an op-ed generated by OpenAI’s GPT-3 text generating AI that included threats to destroy humankind.

Why Downgraded? Unclear who was harmed, if anyone, via the events described.

Former Reports: These reports were formerly associated with the incident.

  • AI attempts to ease fear of robots, blurts out it can’t ‘avoid destroying humankind’

Migrated Reports: Reports associated with this incident are now associated with other incidents as context.

  • none

Candidates for Migration

The following incidents may also be migrated in the future based on discussion among the AI Incident Database editors:

Incident 42 does not meet the current definition and criteria. https://incidentdatabase.ai/cite/42

Incident 29 concerns the "tank story," which may be apocryphal. https://incidentdatabase.ai/cite/29
