AI Incident Database

Report 2399

Associated Incidents

Incident 417 · 4 Reports
Facebook Feed Algorithms Exposed Low Digitally Skilled Users to More Disturbing Content

Facebook Exposed Its Less Digital Conversant Audience To Graphic Content
screenrant.com · 2021

Facebook's track record with content on its platform is nothing to envy, but for users who are not well-versed in social media tools, the platform dished out even more disturbing content, ranging from the graphically violent to the sexual. Over the past few weeks, internal research material leaked by whistleblower Frances Haugen has revealed a history of questionable choices made by the company.

Among them was its unwillingness to address plagiarised content produced by some popular pages and groups, a problem Facebook ignored to avoid legal hassles. The platform also served as a hotbed for political propaganda and hateful content spread by foreign clickbait farms. Instead of taking swift action, the company actually paid these bad actors through its content and ad initiatives.

Now, a fresh USA Today investigation claims that Facebook users lagging in digital literacy and social media skills were exposed to disturbing content depicting violence and borderline nudity. The company — which now goes by the name Meta — conducted a user survey a couple of years ago to analyze the digital literacy of its audience. Based on how users answered questions about terms like tagging and other fundamental features, Facebook studied the kind of content each person had been exposed to over the previous 30 days. Users who failed to answer any of the questions about core Facebook features correctly saw 11.4 percent more nudity and 13.4 percent more graphic violence in their content feed. One Facebook employee who reviewed the findings reportedly remarked that "the 'default' feed experience, so to speak, includes nudity + borderline content unless otherwise controlled." To supplement its research, Facebook also visited 'vulnerable users' at their homes and conducted detailed interviews to study how their low digital skill levels shaped their experience on the platform.
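The cohort comparison described above can be sketched as a simple relative-rate calculation. This is purely illustrative, not Facebook's actual pipeline: the function and field names are invented, and the absolute exposure rates below are hypothetical values chosen only so that the relative differences match the 11.4 and 13.4 percent figures reported in the investigation.

```python
def percent_increase(cohort_rate: float, baseline_rate: float) -> float:
    """Relative increase of a cohort's exposure rate over the baseline, in percent."""
    return (cohort_rate - baseline_rate) / baseline_rate * 100

# Hypothetical exposures per 1,000 feed items over a 30-day window.
baseline = {"nudity": 5.0, "graphic_violence": 6.7}
low_literacy = {"nudity": 5.57, "graphic_violence": 7.6}  # users who missed every quiz question

for category in baseline:
    diff = percent_increase(low_literacy[category], baseline[category])
    print(f"{category}: {diff:+.1f}%")
```

The point of the sketch is that the reported figures are relative differences between cohorts, not absolute shares of the feed; the absolute rates were not disclosed.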

Facebook's Engagement-Loving Content Algorithm Is To Blame

Facebook's team realized that many users in this segment withdrew from the platform after seeing upsetting content in their feeds that compounded the problems they were already struggling with. For example, posts showing children being bullied, people "threatening, and killing other people," and racial tensions surfaced in a middle-aged Black woman's Facebook feed. The findings are not surprising: Facebook courted controversy over incendiary content ahead of the Capitol Hill incident earlier this year and continues to struggle against COVID-19 misinformation, hate speech, and conspiracy theories such as those about 5G's supposed health effects. For another at-risk user, a member of a Narcotics Anonymous group, Facebook began showing recommendations and ads for alcoholic beverages. Users following coupon and savings pages were soon flooded with financial scam posts.

Sister platform Instagram is no stranger to the problem either, having recently received a stern warning for allowing the online drug trade to thrive, which has been linked to multiple overdose-related deaths in the U.S. Facebook's research concluded that its content algorithm is harmful to people who are not well-versed in the nooks and crannies of social media: because these users are unaware of tools like 'hide,' 'unfollow,' 'block,' and reporting, they continue to see inappropriate content in their feeds. Once again, people of color and those with lower socioeconomic status or education levels were the most vulnerable. More importantly, between one-quarter and one-third of all Facebook users fell into the 'low-tech-skilled' category, per the social media titan's own research.

