AI Incident Database

Report 2260

Related Incidents

Incident 380 · 5 Reports
Facebook's Auto-Generated Targeting Ad Categories Contained Anti-Semitic Options

Facebook allowed advertisers to target 'Jew haters'
theguardian.com · 2017

Facebook has allowed advertisers to target users interested in the topics of “Jew hater” and “How to burn Jews”, according to an investigation that adds to mounting criticisms of the way the company allows and profits from unethical ads.

ProPublica, an investigative news organization, reported on Thursday that the social network’s self-service ad-buying system allowed people to direct advertisements to nearly 2,300 users interested in several explicitly antisemitic subjects, including a category labeled “History of ‘why Jews ruin the world’”.

The journalists tested the legitimacy of the ad categories by paying $30 to target “promoted posts” to those specific groups. ProPublica said three ads were approved within 15 minutes. Facebook later took down the offensive categories after ProPublica contacted the company for comment.

Asked about ProPublica’s findings, Rob Leathern, product management director at Facebook, confirmed in a statement to the Guardian that it had “removed the associated targeting fields”.

Leathern said: “We know we have more work to do, so we’re also building new guardrails in our product and review processes to prevent other issues like this from happening in the future.”

A Facebook algorithm had created the antisemitic categories, and the company said it was considering changes to prevent this kind of problem. After publication of this article, Facebook announced that it was removing the self-reported targeting fields “until we have the right processes in place” to “ensure that targeting is not used for discriminatory purposes”.

The embarrassing findings have come at a time when Facebook is facing widespread scrutiny for its ad practices. Earlier this month, Facebook disclosed that a group that appeared to be based in Russia had spent $100,000 on ads promoting political messages in a two-year period. The ads had spread divisive views on topics such as immigration, race and LGBT rights and had promoted 470 “inauthentic” pages and accounts later suspended by Facebook, according to the company.

Facebook said it was cooperating with related federal investigations in the US. The revelations potentially bolster the findings of intelligence officials that Russia was involved in influencing the 2016 presidential election.

In the UK, a series of Conservative party attack ads were also sent to voters in a key marginal constituency, and activists captured the ads using dummy Facebook accounts, the Guardian reported in May. A recent report in the Daily Beast suggested that Russian operatives used Facebook’s events tool to organize and promote political protests and anti-immigrant rallies while hiding behind false identities. Studies have repeatedly suggested that fake news and dark ads on Facebook and other social media sites can swing political opinions and manipulate elections.

In its report, ProPublica noted that the objectionable ad categories were very small. But Facebook’s algorithms suggested ways to boost the audience size, including extending the targeting to people interested in gun rights. When ProPublica searched for categories related to “Hitler”, Facebook suggested a field called “Hitler did nothing wrong”. After the ad was approved, Facebook also automatically described the advert as targeting people interested in “Antysemityzm”, a Polish word.

Other tech corporations have faced similar challenges. Last year, the Guardian reported that Google’s algorithms were suggesting neo-Nazi and white supremacist websites that promote Holocaust denial.

Facebook has also recently faced backlash for allowing hate groups, including neo-Nazis, to flourish and organize on the platform, contradicting Mark Zuckerberg’s statements about his site bringing people together for “meaningful” groups. After recent white supremacist violence in Charlottesville, the CEO said Facebook would remove violent threats, and the company subsequently deleted certain neo-Nazi and white nationalist accounts that it had previously permitted.

Leathern said in his statement on Thursday: “We don’t allow hate speech on Facebook. Our community standards strictly prohibit attacking people based on their protected characteristics, including religion, and we prohibit advertisers from discriminating against people based on religion and other attributes.”

