AI Incident Database

Report 1641

Related Incidents

Incident 197 · 4 Reports
Facebook Internally Reported Failure of Ranking Algorithm, Exposing Harmful Content to Viewers over Months

Facebook system designed to smother harmful misinformation actually spread it
thedrum.com · 2022

Facebook engineers have belatedly uncovered a significant flaw in the downranking system meant to filter out harmful content, which exposed up to half of all News Feed views to potential 'integrity risks' for six months.

Reports in The Verge suggest the ‘massive ranking failure’ was first identified last October when engineers battled against a wave of misinformation that threatened to inundate the News Feed. Closer investigations revealed that a ranking system designed to suppress misinformation from flagged accounts, as identified by a team of external fact-checkers, was instead surfacing these posts to audiences.
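To make the reported failure mode concrete, the sketch below shows how a feed-ranking adjustment for flagged posts is supposed to demote them, and how an inverted adjustment would instead boost them. This is a purely hypothetical illustration: the function names, scores, and the 0.3 factor (chosen only to echo the reported 30% boost) are invented, and Facebook's actual ranking code is not public.

```python
# Hypothetical sketch of feed downranking; not Facebook's implementation.

def rank_score(base_score: float, flagged_as_misinfo: bool,
               demotion_factor: float = 0.3) -> float:
    """Intended behavior: posts flagged by fact-checkers are demoted
    by scaling their ranking score well below its original value."""
    if flagged_as_misinfo:
        return base_score * demotion_factor
    return base_score


def buggy_rank_score(base_score: float, flagged_as_misinfo: bool,
                     demotion_factor: float = 0.3) -> float:
    """Illustrative failure mode: the adjustment is applied with the
    wrong sense, so flagged posts are boosted instead of suppressed."""
    if flagged_as_misinfo:
        return base_score * (1 + demotion_factor)  # boost instead of demote
    return base_score


if __name__ == "__main__":
    post = {"base_score": 10.0, "flagged_as_misinfo": True}
    print(rank_score(**post))        # 3.0  -> pushed down the feed
    print(buggy_rank_score(**post))  # 13.0 -> surfaced to more viewers
```

Under this kind of defect, flagged posts would quietly climb the feed rather than sink, which is consistent with the "surfacing" behavior engineers reportedly observed.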

Leaked correspondence suggests the bug intermittently boosted views of malign posts by as much as 30% until the issue was finally resolved on March 11.

Throughout this six-month period, Facebook's much-vaunted policing algorithms failed to properly downrank nudity, violence and Russian state propaganda – a period that overlapped with Russia's invasion of Ukraine.

Fielding inquiries from The Verge, Meta spokesperson Joe Osborne described five separate instances of "inconsistencies in downranking" attributed to a "software bug", during which inappropriate material was given increased visibility.

Osborne insists, however, that the episode "has not had any meaningful, long-term impact on our metrics," stressing that content that passed the threshold for deletion was not affected.

The system of downranking has been touted by Facebook as evidence that self-regulation is effective, heading off calls for new legislation to curb the spread of ‘sensationalist and provocative’ content that typically attracts the most attention.

Until now, Facebook has boasted of the success its algorithms have had in identifying 'borderline' content that skirts the boundaries of acceptability in areas such as hate speech, flagging suspected infractions for manual review.
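The distinction between content removed outright, 'borderline' content flagged for manual review, and content ranked normally can be illustrated with a minimal triage sketch. The thresholds and field names below are assumptions for illustration only; Facebook has not published the actual values or logic it uses.

```python
# Hypothetical illustration of threshold-based content triage.
# Thresholds are invented; real moderation pipelines are far more complex.

DELETE_THRESHOLD = 0.9      # assumed: score above which content is removed
BORDERLINE_THRESHOLD = 0.6  # assumed: score above which content goes to review


def triage(violation_score: float) -> str:
    """Route a post based on a classifier's violation score in [0, 1]."""
    if violation_score >= DELETE_THRESHOLD:
        return "delete"          # passes the threshold for deletion
    if violation_score >= BORDERLINE_THRESHOLD:
        return "manual_review"   # borderline: flagged for human moderators
    return "rank_normally"       # distributed through the ordinary feed


if __name__ == "__main__":
    for score in (0.95, 0.7, 0.2):
        print(score, "->", triage(score))
```

In this framing, the reported bug affected how the middle band was ranked and distributed, while content above the deletion threshold was, according to Meta, unaffected.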

A recent report found that hate speech was present in six out of every 10,000 Facebook views.

