AI Incident Database

Report 1944

Associated Incidents

Incident 303 · 2 Reports
Google’s Automated Child Abuse Detection Wrongfully Flagged a Parent’s Naked Photo of His Child

Big tech gone bad: Google blocks accounts based on false child exploitation filter
siliconangle.com · 2022

The application of artificial intelligence in various aspects of life has been mostly a net positive, but what happens when machine learning algorithms cannot detect the difference between an innocent health photo and child exploitation material?

The New York Times today reported that a San Francisco-based tech worker identified only as "Mark" — who ironically works in content filtering for an unnamed tech company — was banned by Google LLC for "a severe violation of Google's policies" that "might be illegal" after photos were taken of his son, who had become ill. The child had an issue with his penis, and at a medical professional's request, Mark's wife took photos and sent them through his Google account for a consultation.

Google's AI filtering tagged the photos as child abuse without taking the situation into account. An incorrect or out-of-context block from Google is not greatly surprising. What happened next, though, is arguably appalling.

Google informed authorities that Mark was dealing in child pornography, and a police investigation was launched. Fortunately, the San Francisco Police Department understood the context; nearly two years later, Google still has not, and Mark remains locked out of his account.

In response to the New York Times story, Google's only comment was that "child sexual abuse material is abhorrent and we're committed to preventing the spread of it on our platforms." No one could argue with that statement, but in this case the photos were not abusive material — they documented a legitimate health issue.

The case highlights how dependent billions of people have become on tech companies and how a simple false positive can spiral into something far worse. Mark not only lost his email account, Google Photos account and contact information; Google also shut down his Google Fi phone number. That number was connected to various other accounts, meaning he lost access to two-factor authentication as well.

“The more eggs you have in one basket, the more likely the basket is to break,” Mark told the Times.

Mark's case is not unique. The Times also referenced a similar case involving a father in Texas, where Google flagged medically related photos taken and sent online as abusive material. In the Texas case, as in Mark's, Google suspended the father's account, and the robot tech workers in Mountain View rejected any appeals.

Google and other big tech companies should be proactively screening for abusive material. But that Google cannot find a human employee to review the context on appeal and resolve the situation reflects poorly on Google and on tech companies generally.

Fortunately, in both Mark's case and the Texas case, the children's medical issues were resolved, but there is no prescription for dealing with Google and other tech companies once they have made up their minds and falsely labeled someone a child pornographer.

Read Source

2024 - AI Incident Database
