Incident 213 Citation Information

Description: Researchers found the performance of Facebook’s political ad detection to be imprecise, with error rates that varied across countries, and inadequate to prevent systematic violations of its political advertising policies.
Alleged: An AI system developed and provided by Facebook harmed Facebook users.

Incident Status

Incident ID
213
Report Count
5
Incident Date
2020-07-01
Editors
Khoa Lam

Incident Reports

Report Timeline

An Audit of Facebook’s Political Ad Policy Enforcement
osf.io · 2021

Major technology companies strive to protect the integrity of political advertising on their platforms by implementing and enforcing self-regulatory policies that impose transparency requirements on political ads. In this paper, we quantify…

Summary of findings: An audit of Facebook’s political ad policy enforcement
cybersecurityfordemocracy.org · 2021

In the first known study to quantify the performance of Facebook’s political ad policy enforcement at a large and representative scale, researchers found that when making decisions on how to classify undeclared ads, Facebook often missed po…

How political advertisers get away with skirting Facebook’s rules
protocol.com · 2021

A new study shows that the vast majority of the time Facebook has made an enforcement decision on a political ad after it ran, it’s made the wrong call.

Political advertisers on Facebook are supposed to identify themselves as such. That way…

Facebook's Political Ad Promises Mostly Miss the Mark, Study Shows
gizmodo.com · 2021

Researchers found thousands of cases where advertisers skirted the company's rules without ever being flagged.

In the years since the Cambridge Analytica scandal revealed how easily Facebook’s political ads could be weaponized by bad actors…

Facebook misidentified thousands of political ads: Study
straitstimes.com · 2021

PARIS (AFP) - Facebook misidentified tens of thousands of advertisements flagged under its political ads policy, according to a study released Thursday (Dec 9), which warned that the failure could lead to political manipulation.

Researchers…

Variants

A "variant" is an incident that shares the same causative factors as an existing AI incident, produces similar harms, and involves the same intelligent systems. Rather than being indexed as fully independent incidents, variants are listed as variations under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting as evidence from outside the Incident Database. See this research paper for more details.