Citation record for Incident 84

Description: Avaaz, an international advocacy group, released a review of Facebook's misinformation-identifying software showing that the labeling process failed to label 42% of false-information posts, most of which concerned COVID-19 and the 2020 US presidential election.

Incident Stats

Incident ID
84
Report Count
1
Incident Date
2020-10-09
Editors
Sean McGregor, Khoa Lam

CSETv0 Taxonomy Classifications

Taxonomy Details

Full Description

Avaaz, an international advocacy group, released a review of Facebook's misinformation-identifying software showing that the labeling process failed to label 42% of false-information posts, most of which concerned COVID-19 and the 2020 US presidential election. Avaaz found that by adjusting the cropping or background of a post containing misinformation, users could cause Facebook's algorithm to fail to recognize it as misinformation, allowing the post to be published and shared without a cautionary label.

Short Description

Avaaz, an international advocacy group, released a review of Facebook's misinformation-identifying software showing that the labeling process failed to label 42% of false-information posts, most of which concerned COVID-19 and the 2020 US presidential election.

Severity

Unclear/unknown

Harm Type

Harm to social or political systems

AI System Description

Facebook's algorithm and process for placing cautionary labels on posts judged to contain misinformation

System Developer

Facebook

Sector of Deployment

Information and communication

Relevant AI functions

Perception, Cognition

AI Techniques

Language recognition, content filtering, image recognition

AI Applications

misinformation labeling, image recognition, image labeling

Location

Global

Named Entities

Facebook, Avaaz, Reuters, AP, PolitiFact

Technology Purveyor

Facebook

Beginning Date

2020-10-09T07:00:00.000Z

Ending Date

2020-10-09T07:00:00.000Z

Near Miss

Unclear/unknown

Intent

Unclear

Lives Lost

No

Infrastructure Sectors

Communications

Data Inputs

User posts

CSETv1 Taxonomy Classifications

Taxonomy Details

Harm Distribution Basis

none

Sector of Deployment

information and communication

Incident Reports

Tiny Changes Let False Claims About COVID-19, Voting Evade Facebook Fact Checks
npr.org · 2020

Something as simple as changing the font of a message or cropping an image can be all it takes to bypass Facebook's defenses against hoaxes and lies.

A new analysis by the international advocacy group Avaaz shines light on why, despite the …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than being indexed as entirely separate incidents, variants are listed under the first similar incident submitted to the database. Unlike other submission types to the Incident Database, variants are not required to have reporting as evidence external to the Incident Database. See this research paper for more details.
