AI Incident Database

Report 92

Related Incidents

Incident 16 · 23 Reports
Images of Black People Labeled as Gorillas

Google apologises for Photos app's racist blunder
bbc.com · 2015

Mr Alcine tweeted Google about the fact its app had misclassified his photo

Google says it is "appalled" that its new Photos app mistakenly labelled a black couple as being "gorillas".

Its product automatically tags uploaded pictures using its own artificial intelligence software.

The error was brought to its attention by a New York-based software developer who was one of the people pictured in the photos involved.

Google was later criticised on social media because of the label's racist connotations.

"This is 100% not OK," acknowledged Google executive Yonatan Zunger after being contacted by Jacky Alcine via Twitter.

"It was high on my list of bugs you 'never' want to see happen."

Mr Zunger said Google had already taken steps to avoid others experiencing a similar mistake.

Image caption: Mr Alcine said the error had affected several photos in his collection

He added it was "also working on longer-term fixes around both linguistics - words to be careful about in photos of people - and image recognition itself - eg better recognition of dark-skinned faces".

This is not the first time Google Photos has mislabelled one species as another.

The news site iTech Post noted that the app was tagging pictures of dogs as horses in May.

Users are able to remove incorrect photo classifications within the app, and that feedback should help the underlying machine-learning system improve its accuracy over time.

Image caption: Google has faced criticism since the error was made public

However, Google has acknowledged the sensitivity of the latest mistake.

"We're appalled and genuinely sorry that this happened," a spokeswoman told the BBC.

"We are taking immediate action to prevent this type of result from appearing.

"There is still clearly a lot of work to do with automatic image labelling, and we're looking at how we can prevent these types of mistakes from happening in the future."

But Mr Alcine told the BBC that he still had concerns.

"I do have a few questions, like what kind of images and people were used in their initial priming that led to results like these," he said.

"Google has mentioned a more intensified search into getting person of colour candidates through the door, but only time will tell if that'll happen and help correct the image Silicon Valley companies have with intersectional diversity - the act of unifying multiple fronts of disadvantaged people so that their voices are heard and not muted."

