AI Incident Database

Report 1639

Related Incidents

Incident 197 · 4 Reports
Facebook Internally Reported Failure of Ranking Algorithm, Exposing Harmful Content to Viewers over Months

Facebook boosted harmful posts due to 'massive ranking failure' bug
protocol.com · 2022

For the last six months, Facebook engineers have been seeing intermittent spikes in misinformation and other harmful content on News Feed, with posts that would usually be demoted by the company's algorithms being boosted by as much as 30% instead. The cause, according to reporting by The Verge, was a bug that one internal report described as a “massive ranking failure.”
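The failure mode described above — a demotion that instead amplifies — can be illustrated with a minimal sketch. All names, factors, and structure here are hypothetical, chosen only to show how a sign or branch error in a ranking pipeline can invert a down-ranking into a boost; nothing below is Meta's actual code.

```python
def rank_score(base_score: float, flagged: bool,
               demotion_factor: float = 0.5) -> float:
    """Intended behavior: content flagged for demotion is down-ranked."""
    # Hypothetical demotion_factor; real systems use tuned, signal-specific values.
    return base_score * demotion_factor if flagged else base_score


def rank_score_buggy(base_score: float, flagged: bool) -> float:
    """Bug analogue: the demotion branch misfires and multiplies the score
    upward, mirroring the ~30% boost described in the internal report."""
    return base_score * 1.3 if flagged else base_score


# Two posts with equal baseline quality; one is flagged for demotion.
posts = [("benign", 1.0, False), ("flagged", 1.0, True)]

intended = {name: rank_score(score, flagged) for name, score, flagged in posts}
buggy = {name: rank_score_buggy(score, flagged) for name, score, flagged in posts}

# Intended pipeline: flagged post ranks below the benign one.
# Buggy pipeline: flagged post ranks *above* it instead.
```

With the intended function the flagged post scores 0.5 against the benign post's 1.0; with the buggy variant it scores 1.3, so the filter's effect is not merely lost but reversed.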

The bug originated in 2019, but its impact was not noticed until October 2021. The company said it was resolved March 11. “We traced the root cause to a software bug and applied needed fixes,” Meta spokesperson Joe Osborne told The Verge.

The bug caused posts that had been flagged by fact-checkers, as well as nudity, violence and Russian state media, to slip through the company's usual down-ranking filters, according to an internal report obtained by The Verge.

Meta and other tech giants have leaned on down-ranking as a more palatable approach to content moderation than removing content altogether. Scholars like Stanford's Renée DiResta have also called on tech giants to embrace this approach and realize that "free speech is not the same as free reach."

In this case, those ranking systems appear to have failed. But Osborne told The Verge the bug “has not had any meaningful, long-term impact on our metrics.”

It will be difficult for those outside of Meta to vet those metrics. Meta has blocked new users from accessing CrowdTangle, one of the core tools researchers and journalists have used to track trends in what's popular on Facebook, and has dismantled the team leading it. And while the company does release reports on the prevalence of certain kinds of policy violations in any given quarter, those reports offer little indication of what's behind those numbers. Even if the report did show an uptick in, say, violence on Facebook, it'd be impossible to know if that's due to this bug or to Russia's invasion of Ukraine or some other global atrocity.

In a statement to Protocol, the company said:

"The Verge vastly overstated what this bug was because ultimately it had no meaningful, long-term impact on problematic content. Only a very small number of views of content in Feed were ever impacted because the overwhelming majority of posts in Feed are not eligible to be down-ranked in the first place. After detecting inconsistencies we found the root cause and quickly applied fixes. Even without the fixes, the multitude of other mechanisms we have to keep people from seeing harmful content — including other demotions, fact-checking labels and violating content removals — remained in place.”

But it's still unclear which posts were boosted due to the bug or how many views they received.

