Skip to Content
logologo
AI Incident Database
Open TwitterOpen RSS FeedOpen FacebookOpen LinkedInOpen GitHub
Open Menu
発見する
投稿する
  • ようこそAIIDへ
  • インシデントを発見
  • 空間ビュー
  • テーブル表示
  • リスト表示
  • 組織
  • 分類法
  • インシデントレポートを投稿
  • 投稿ランキング
  • ブログ
  • AIニュースダイジェスト
  • リスクチェックリスト
  • おまかせ表示
  • サインアップ
閉じる
発見する
投稿する
  • ようこそAIIDへ
  • インシデントを発見
  • 空間ビュー
  • テーブル表示
  • リスト表示
  • 組織
  • 分類法
  • インシデントレポートを投稿
  • 投稿ランキング
  • ブログ
  • AIニュースダイジェスト
  • リスクチェックリスト
  • おまかせ表示
  • サインアップ
閉じる
AIIDブログ

AI Incident Roundup – February ‘23

Posted 2023-03-20 by Janet Schwartz & Khoa Lam.

Welcome to this month’s edition of The Monthly Roundup, a newsletter designed to give you a digestible recap on the latest incidents and reports of the AI Incident Database.

As with last month, there are many incidents related to Large Language Models (LLMs), whose incidents are annotated with the 💬 icon.

Estimated reading time: 7 minutes

🗞️ New Incidents

Emerging incidents that occurred last month:

💬 Incident #462: AI-Produced Livestream Sitcom Received Temporary Twitch Ban for Transphobic Segment

  • What happened? The AI-generated sitcom broadcast as the Twitch livestream "Nothing, Forever" received a temporary ban for featuring a transphobic and homophobic dialogue segment intended as comedy.

💬 Incident #467: Google's Bard Shared Factually Inaccurate Info in Promo Video

  • What happened? Google's conversational AI "Bard" was shown in the company's promotional video providing false information about which satellite first took pictures of a planet outside the Earth's solar system, reportedly causing shares to temporarily plummet.

💬 Incident #468: ChatGPT-Powered Bing Reportedly Had Problems with Factual Accuracy on Some Controversial Topics

  • What happened? Microsoft's ChatGPT-powered Bing search engine reportedly ran into factual accuracy problems when prompted about controversial matters, such as inventing the plot of a non-existent movie or creating conspiracy theories.

💬 Incident #470: Bing Chat Response Cited ChatGPT Disinformation Example

  • What happened? Reporters from TechCrunch issued a query to Microsoft Bing's ChatGPT feature, which cited an earlier example of ChatGPT disinformation discussed in a news article to substantiate the disinformation.

💬 Incident #473: Bing Chat's Initial Prompts Revealed by Early Testers Through Prompt Injection

  • What happened? Early testers of Bing Chat successfully used prompt injection to reveal its built-in initial instructions, which contains a list of statements governing ChatGPT's interaction with users.

💬 Incident #474: Users Reported Abrupt Behavior Changes of Their AI Replika Companions

  • What happened? Replika paid-subscription users reported unusual and sudden changes to behaviors of their "AI companions" such as forgetting memories with users or rejecting their sexual advances, which affected their connections and mental health.

💬 Incident #477: Bing Chat Tentatively Hallucinated in Extended Conversations with Users

  • What happened? Early testers reported Bing Chat, in extended conversations with users, having tendencies to make up facts and emulate emotions through an unintended persona.

Incident #479: Instagram Video Featured Deepfake Audio of US President Making Transphobic Remarks

  • What happened? A deepfaked audio of US President Joe Biden making transphobic remarks played on top of a video showing him giving a speech was released on Instagram and circulated on social media.

Incident #481: Deepfake TikTok Video Featured Joe Rogan Endorsing Supplement Brand

  • What happened? A deepfake video featuring podcast host Joe Rogan advertising to his listeners about a "libido-boosting" supplement was circulating on TikTok and other platforms before being removed by TikTok along with the account which posted it.

💬 Incident #482: ChatGPT-Assisted University Email Addressing Mass Shooting Denounced by Students

  • What happened? Vanderbilt University's Office of Equity, Diversity and Inclusion used ChatGPT to write an email addressing the student body about the 2023 Michigan State University shooting, which was condemned as "impersonal" and "lacking empathy".

Incident #483: Indian Police Allegedly Tortured and Killed Innocent Man Following Facial Misidentification

  • What happened? A resident in Medak, India died allegedly due to custodial torture by the local police, who misidentified him as a suspect in a theft case using facial recognition.

Incident #485: UK Bank's Voice ID Successfully Bypassed Using AI-Produced Audio

  • What happened? A UK journalist was able to successfully bypass Lloyds Bank's "Voice ID" program to access his bank account using an AI-generated audio of his own voice.

🗄 Trending in the AIID

Making AI tools accessible to the general public can result in a proliferation of unintended uses of the tool. One of the latest examples is the increase in use of generative AI to mimic peoples’ voices without their consent. Incidents 479 and 481 describe this technology being used to create Deepfakes of Joe Rogan and president Joe Biden. Incident 488 describes how the technology is allegedly being used to dox voice actors on Twitter. Finally, a recent Issue Report describes a new chatbot “NoiseGPT” that can turn text into celebrity voices, pointing to more potential incidents in the future.

The proliferation of Deepfakes can create a challenge in tracking incidents as many related reports continue to be published. We have developed a system to manage this using an  incident “variant”. Assuming the trend of audio deepfakes continues, we will have the ability to index what is likely to become a very large number of deepfake incidents.

👇 Diving Deeper

  • All new incidents added to the database in the last month, grouped by topic:

    • Large Language Models (e.g. ChatGPT): #464; #466; #467; #468; #470; #473; #475; #477; #482
    • Social media: #471; #462 (bias); #476;
    • Deepfakes: #479; #480; #481; #485
    • Autonomous vehicles: #460; #478
    • Bias & discrimination: #469; #461
    • Impactful Errors: #463
    • Privacy & surveillance: #465 (generative model)
    • Facial recognition: #472 (bias); #483; #484
    • Robots & AI Companions: #474
  • All new reports for existing incidents added to the database in the last month, grouped by topic: (#r represents a single incident report)

    • Large Language Models (e.g. ChatGPT): #r2634; #r2644; #r2748; #2749; #r2671; #r2672
    • Facial Recognition: #r2665; #r2728; #r2600
    • Impactful Errors: #r2623; #r2677
    • Autonomous vehicles: #r2561
  • Explore clusters of similar incidents in Spatial Visualization

  • Check out Table View for a complete view of all incidents

  • Learn about alleged developers, deployers, and harmed parties in Entities Page

🦾 Support our Efforts

Still reading? Help us change the world for the better!

  1. Share this newsletter on LinkedIn, Twitter, and Facebook
  2. Submit incidents to the database
  3. Contribute to the database’s functionality

リサーチ

  • “AIインシデント”の定義
  • “AIインシデントレスポンス”の定義
  • データベースのロードマップ
  • 関連研究
  • 全データベースのダウンロード

プロジェクトとコミュニティ

  • AIIDについて
  • コンタクトとフォロー
  • アプリと要約
  • エディタのためのガイド

インシデント

  • 全インシデントの一覧
  • フラグの立ったインシデント
  • 登録待ち一覧
  • クラスごとの表示
  • 分類法

2024 - AI Incident Database

  • 利用規約
  • プライバシーポリシー
  • Open twitterOpen githubOpen rssOpen facebookOpen linkedin
  • 86fe0f5