Incident 1: Google’s YouTube Kids App Presents Inappropriate Content

Description: YouTube’s content filtering and recommendation algorithms exposed children to disturbing and inappropriate videos.

Alleged: YouTube developed and deployed an AI system, which harmed Children.

Incident Stats

Incident ID
1
Report Count
14
Incident Date
2015-05-19
Editors
Sean McGregor

CSET Taxonomy Classifications

Taxonomy Details

Full Description

The content filtering system for YouTube's children's entertainment app, which incorporated algorithmic filters and human reviewers, failed to screen out inappropriate material, exposing an unknown number of children to videos that included sex, drugs, violence, profanity, and conspiracy theories. Many of the videos, which apparently numbered in the thousands, closely resembled popular children's cartoons such as Peppa Pig, but included disturbing or age-inappropriate content. Additional filters provided by YouTube, such as a "restricted mode" filter, failed to block all of these videos, and YouTube's recommendation algorithm recommended them to child viewers, increasing the harm. The problem was reported as early as 2015 and was ongoing through 2018.

Short Description

YouTube’s content filtering and recommendation algorithms exposed children to disturbing and inappropriate videos.

Severity

Moderate

Harm Distribution Basis

Age

Harm Type

Psychological harm

AI System Description

A content filtering system incorporating machine learning algorithms and human reviewers. The system was meant to screen out videos that were unsuitable for children to view or that violated YouTube's terms of service; candidate videos were initially flagged either algorithmically or on the basis of user reports. The incident also involved a recommendation system that suggested videos to viewers based on their viewing history on the platform.

System Developer

YouTube

Sector of Deployment

Arts, entertainment and recreation

Relevant AI functions

Perception, Cognition, Action

AI Techniques

machine learning

AI Applications

content filtering, decision support, curation, recommendation engine

Location

Global

Named Entities

Google, YouTube, YouTube Kids

Technology Purveyor

Google, YouTube

Beginning Date

2015-01-01

Ending Date

2018-12-31

Near Miss

Unclear/unknown

Intent

Accident

Lives Lost

No

Data Inputs

Videos

GMF Taxonomy Classifications

Taxonomy Details

Known AI Goal

Content Recommendation, Content Search, Hate Speech Detection, NSFW Content Detection

Known AI Technology

Content-based Filtering, Collaborative Filtering
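To illustrate the two techniques named above, the following is a minimal hypothetical sketch (not YouTube's actual code; the tag sets, user histories, and function names are invented for illustration). Content-based filtering scores a video by the overlap between its metadata and a profile of content the user engages with; collaborative filtering scores a video by how often users with overlapping watch histories have viewed it.

```python
def content_based_scores(video_tags, user_profile):
    """Score each video by the fraction of its tags that match the
    user's interest profile (content-based filtering)."""
    return {
        vid: len(set(tags) & set(user_profile)) / max(len(tags), 1)
        for vid, tags in video_tags.items()
    }

def collaborative_scores(histories, user):
    """Score unseen videos by how many users with an overlapping
    watch history have watched them (collaborative filtering)."""
    watched = histories[user]
    scores = {}
    for other, other_watched in histories.items():
        if other == user or not (watched & other_watched):
            continue  # skip the user themselves and non-overlapping users
        for vid in other_watched - watched:
            scores[vid] = scores.get(vid, 0) + 1
    return scores

# Toy data: v2 mimics a benign cartoon's metadata but carries violent content.
video_tags = {
    "v1": ["cartoon", "animals"],
    "v2": ["cartoon", "violence"],
    "v3": ["music", "kids"],
}
histories = {
    "alice": {"v1", "v3"},
    "bob": {"v1", "v2"},
    "carol": {"v2"},
}

print(content_based_scores(video_tags, ["cartoon", "kids"]))
print(collaborative_scores(histories, "alice"))
```

Note that in this toy example the look-alike video "v2" scores as well as the benign "v1" under content-based matching, and is surfaced to "alice" via collaborative filtering because "bob" watched both: a simplified illustration of how copycat metadata and co-viewing patterns can defeat filters and amplify inappropriate recommendations, as alleged in this incident.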

Potential AI Technology

Classification, Ensemble Aggregation, Distributional Learning

Known AI Technical Failure

Tuning Issues, Lack of Adversarial Robustness, Adversarial Data

Potential AI Technical Failure

Concept Drift, Generalization Failure, Misconfigured Aggregation, Distributional Bias, Misaligned Objective

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
