AI Incident Database

Report 578

Chinese facial recognition system confuses bus ad for jaywalker
techspot.com · 2018

Facepalm: China is well known for embracing facial recognition tech to catch lawbreakers, but these systems don’t always get it right. Earlier this week, one camera captured the image of a famous businesswoman and publicly shamed her, but she wasn’t even there at the time.

As reported by Abacus, a camera at an intersection in Ningbo, in east China's Zhejiang province, filmed what appeared to be a jaywalker. The facial recognition system identified her as Dong Mingzhu, a famous businesswoman who, as noted by The Verge, topped Forbes' list of the 100 outstanding businesswomen in China last year.

But it turned out that Dong wasn’t even present. The camera had seen her face on the side of a bus advertisement for Gree Electric and mistakenly thought she was crossing during a red light.

Whenever the system identifies a jaywalker, it posts their photo on a large public screen to ‘name and shame’ the perpetrator. It displayed Dong’s face and name, though it incorrectly spelled her surname as “Ju,” along with her government ID. Accompanying text stated that she had broken the law.

Ningbo’s traffic police wrote on the Chinese microblogging site Weibo that the system had made a mistake and that all records of the violation were being deleted. They also claimed that an upgrade had been carried out to reduce the chances of such an error happening again.

We’ve heard reports of China using facial recognition in several ways, from analyzing students’ emotions in schools to scanning for suspects via special glasses. Back in April, the system reportedly identified a suspected criminal in a crowd of 50,000 people. That sounds impressive, but in a country with a poor reputation on privacy and human rights, the prospect of more false positives makes such technology a concern.

