AI Incident Database

Report 1979

Related Incidents

Incident 144 · 6 Reports
YouTube's AI Mistakenly Banned Chess Channel over Chess Language Misinterpretation

YouTube AI Misinterprets Chess Chat Involving 'Black' And 'White' Pieces, Flags for Racism
techtimes.com · 2021

It may seem unbelievable at first that a YouTube algorithm would flag a chess discussion as 'racist' and punish it. Yet that appears to be what happened to a chess YouTuber, who was blocked by the video-streaming company over the alleged issue.

YouTube AI Could Have Mistakenly Perceived The Chess Chat As 'Racist'

The incident happened in June 2020, and at the time nobody knew why the algorithm had blocked the streamer from posting his chess videos. At most, it was clear the video had been flagged as 'harmful' and 'dangerous' content, suggesting the system may have detected what it took to be hate speech in the discussion.

In a report by Daily Mail, the Croatian chess enthusiast Antonio Radic, known on YouTube as "agadmator", said he was puzzled as to why he had been barred from any activity on the video-sharing platform. Two researchers from Carnegie Mellon University (CMU) offered an educated guess about the mystery behind the confusion.

What is intriguing about the case is that YouTube never explained why Radic's channel was shut down; after 24 hours it was restored as if nothing had happened. To explain the puzzling scenario, a CMU project scientist suggested the block occurred because Radic's viral interview with GM Hikaru Nakamura contained words the system associated with racism.

Ashique KhudaBukhsh of CMU's Language Technologies Institute admitted they have no way of knowing what tool YouTube used to detect a supposed racist slur. The video, however, repeatedly mentioned "black" and "white", terms a classifier could mistake for racial language.

Furthermore, he added that if this could happen to a popular YouTuber like Radic, the AI may well be doing the same thing to people who stream just for fun. To test the hypothesis, KhudaBukhsh and research engineer Rupak Sarkar ran two AI models designed to detect hate speech, collecting over 680,000 comments from five chess-focused channels.

Moreover, out of those nearly 700,000 comments, they drew a simple random sample of 1,000 for manual review. They found that 82% of the comments flagged by the models contained no actual hate speech; instead, chess vocabulary such as "white", "black", "attack", and "threat" appeared to be the missing key to the AI's sudden action.

A classifier's behavior depends heavily on the examples it was trained on: its accuracy on messages from a domain like chess varies with how well the training samples cover that domain's vocabulary.
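The kind of context-blind misfire the researchers describe can be illustrated with a toy sketch. The keyword list and scoring below are illustrative assumptions, not YouTube's actual system or the CMU models:

```python
# Hypothetical sketch: a naive keyword-based "toxicity" filter that scores a
# comment by the fraction of its tokens matching a trigger list, with no
# awareness of context. Chess commentary trips it easily.

TRIGGER_WORDS = {"black", "white", "attack", "threat", "kill"}

def naive_toxicity_score(comment: str) -> float:
    """Return the fraction of tokens that match the trigger list (context-blind)."""
    tokens = comment.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for tok in tokens if tok.strip(".,!?") in TRIGGER_WORDS)
    return hits / len(tokens)

chess_comment = "White launches a kingside attack and Black is under serious threat"
print(f"score = {naive_toxicity_score(chess_comment):.2f}")  # several chess terms match
```

A real classifier is statistical rather than a word list, but the failure mode is the same: surface features correlated with toxicity in the training data ("black", "white", "attack") fire on innocent chess talk.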

Comparing a Past Case to What Happened to Radic

In a report by CMU News, KhudaBukhsh described encountering a similar problem before. His task was to train a program to distinguish 'active dogs' from 'lazy dogs' in a set of photos. Most of the 'active dog' photos showed grass, where the dogs were running, so the program sometimes classified any photo containing grass as an 'active dog', even when no dog appeared in the photo at all.

What happened to Radic appears analogous: when the training data contain few examples of chess talk, the classifier latches onto superficial cues, and misclassifications follow.
