Incident Status
GMF Taxonomy Classes
Taxonomy Details
Known AI Goal
Hate Speech Detection
Known AI Technology
Classification, Distributional Learning
Known AI Technical Failure
Distributional Bias
Incident Reports
Report Timeline
Online discussions about black and white chess pieces are confusing artificial intelligence algorithms trained to detect racism and other hate speech, according to new research.
Computer scientists at Carnegie Mellon University began invest…
YouTube's overeager AI might have misinterpreted a conversation about chess as racist language.
Last summer, a YouTuber who produces popular chess videos saw his channel blocked for including what the site called 'harmful and dangerous' con…
"The Queen's Gambit," the recent TV mini-series about a chess master, may have stirred increased interest in chess, but a word to the wise: social media talk about game-piece colors could lead to misunderstandings, at least for hate-speech …
It might be unbelievable at first that a YouTube algorithm has detected a chess discussion as 'racist' and flagged it for punishment. In the case of the chess YouTuber, he was blocked by the video-streaming company for the alleged, sensitiv…
The world’s most popular YouTube chess channel was blocked after artificial algorithms set up to detect racist content and hate speech mistook discussion about black and white chess pieces as racism, reports Independent UK.
On June 28, 2020…
Last June, Antonio Radić, the host of a YouTube chess channel with more than a million subscribers, was live-streaming an interview with the grandmaster Hikaru Nakamura when the broadcast suddenly cut out.
Instead of a lively discussion abo…
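The failure mode the reports describe — a classifier keyed to surface vocabulary ("black", "white", "attack") rather than conversational context — can be illustrated with a deliberately simplified sketch. This is a hypothetical toy, not YouTube's system: the trigger terms and threshold are invented for illustration of distributional bias.

```python
# Toy illustration of distributional bias in hate-speech detection.
# A naive filter keyed to surface vocabulary (hypothetical trigger
# terms) fires on ordinary chess commentary, because it has no model
# of the chess context in which these words are benign.

TRIGGER_TERMS = {"black", "white", "attack", "threat", "against"}

def naive_flag(text: str, threshold: int = 3) -> bool:
    """Flag text as 'harmful' if enough trigger terms appear."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return len(tokens & TRIGGER_TERMS) >= threshold

chess_comment = "Black's attack on the white king is a real threat."
print(naive_flag(chess_comment))  # prints True
```

The fix the Carnegie Mellon researchers point toward is a matter of training data distribution: a model that has rarely seen chess discussion has no benign examples of this vocabulary cluster, so context-free word statistics dominate its decision.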
Variants
Similar Incidents