AI Incident Database

Report 1876

Associated Incidents

Incident 2813 Report
YouTube's Algorithms Failed to Remove Violating Content Related to Suicide and Self-Harm

YouTube recommended self-harm videos to children as young as 13
telegraph.co.uk · 2019

Speaking to The Telegraph, a former Tumblr blogger, who asked for anonymity, said she had to stop her own depression and anxiety help blog after she found herself “falling down the rabbit hole of content that triggered negative emotions”.

“I found it really easy to continuously fall back into bad habits and bad coping skills that only worsened my mental health,” she said.

“A few of my friends… were really frequent viewers of YouTube videos of people’s stories with eating disorders, depression and anxiety.

“I think it is really difficult to find that line between what is helpful and inspirational and what is triggering content.”

Writing in The Daily Telegraph last week, Digital Secretary Jeremy Wright said: “Social media companies clearly need to do more to ensure they are not promoting harmful content to vulnerable people.”

The Health Secretary Matt Hancock has written to executives at Facebook, Instagram, Twitter, Snapchat, Pinterest, Google and Apple ordering them to “step up and purge this content once and for all”.

A YouTube spokesman said: “We know many people use YouTube to find information, advice or support sometimes in the hardest of circumstances. We work hard to ensure our platforms are not used to encourage dangerous behaviour.”
