AI Incident Database

Report 2134

Associated Incidents

Incident 144 · 6 Reports
YouTube's AI Mistakenly Banned Chess Channel over Chess Language Misinterpretation

YouTube AI blocks chess channel after mistaking 'black v white' discussion as racism
tribune.com.pk · 2021

The world’s most popular YouTube chess channel was blocked after artificial intelligence algorithms set up to detect racist content and hate speech mistook a discussion about black and white chess pieces for racism, reports The Independent (UK).

On June 28, 2020, Croatian chess player Antonio Radic's YouTube chess channel, which has more than one million subscribers, was blocked during a chess show with Grandmaster Hikaru Nakamura.

He received no explanation from the video platform.

Radic’s channel was restored 24 hours later. He suspects that the account may have been blocked because he referred to the chess game as “Black against White”.

YouTube relies on both human reviewers and AI algorithms, which means an AI system can make errors when it has not been trained to interpret context correctly.

“If they rely on artificial intelligence to detect racist language, this kind of accident can happen,” said Ashiqur KhudaBukhsh, a project scientist at CMU’s Language Technologies Institute.

KhudaBukhsh tested this theory by using a state-of-the-art speech classifier to screen 680,000 comments gathered from five popular chess-focused YouTube channels.

After manually reviewing a sample of 1,000 comments, he found that 82 per cent of them had been wrongly categorized by the AI as hate speech because the comments used words like “black”, “white”, “attack” and “threat”.
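The failure mode described above can be illustrated with a deliberately naive, context-blind filter. This is a toy sketch, not YouTube's actual moderation system: it simply scores a comment by how many words it shares with a hypothetical "suspicious" word list, so ordinary chess commentary trips it.

```python
# Toy illustration (NOT YouTube's actual system): a naive keyword-based
# filter that flags text containing words that often appear in abusive
# contexts -- words that also happen to be everyday chess vocabulary.
SUSPICIOUS_WORDS = {"black", "white", "attack", "threat", "kill"}

def flag_as_hate_speech(comment: str, threshold: int = 2) -> bool:
    """Flag a comment if it contains `threshold` or more suspicious words.

    A context-blind rule like this cannot tell a chess position
    from abuse, which is the kind of error the article describes.
    """
    words = set(comment.lower().replace(",", " ").split())
    return len(words & SUSPICIOUS_WORDS) >= threshold

# Normal chess commentary trips the filter...
print(flag_as_hate_speech("Black attacks the white bishop, a real threat"))  # True
# ...while a comment without those words does not.
print(flag_as_hate_speech("Great game, thanks for the video"))  # False
```

Real classifiers are statistical rather than keyword lists, but the underlying problem is the same: without context, terms such as "black", "white" and "attack" correlate with abusive language in the training data and push benign chess comments over the decision threshold.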

YouTube, Facebook, and Twitter warned last year that videos and content may be erroneously removed for policy violations, as the companies rely on automated takedown software during the coronavirus pandemic.

In a blog post, Google said that to reduce the need for people to come into offices, YouTube and other business divisions are temporarily relying more on artificial intelligence and automated tools to find problematic content.

