AI Incident Database

Report 3770

Related Incidents

Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation

Google Suspends AI Tool's Image Generation of People After It Created Historical 'Inaccuracies,' Including Racially Diverse WWII-Era Nazi Soldiers
variety.com · 2024

After critics slammed Google’s “woke” generative AI system, the company halted the ability of its Gemini tool to create images of people in order to fix what it acknowledged were “inaccuracies in some historical image generation depictions.”

Google’s Gemini, formerly known as Bard, began offering text-to-image generative features this month. The problem appeared to be that, in trying to compensate for gender- and racial-representation bias in AI, the system was creating ahistorical images of people.

Among the bizarre results generated by Google Gemini that users posted on social media was a racially diverse set of images depicting a “1943 German soldier,” shared by X user JohnLu0x.

Other historically anomalous images generated by Google Gemini included Black Vikings; a woman as the Catholic pope; women NHL players; the founders of Google depicted as Asian men; and non-white people in a scene of the U.S.’s Founding Fathers.

In a statement early Thursday, Google said, “We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”

“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” the tech giant said in a statement on Feb. 22, after users reported the errors. “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Critics lambasted Google’s corporate culture for the Gemini snafus. “The ridiculous images generated by Gemini aren’t an anomaly. They’re a self-portrait of Google’s bureaucratic corporate culture,” computer scientist and investor Paul Graham wrote in a post on X. “The bigger your cash cow, the worse your culture can get without driving you out of business. And Google’s cash cow, search advertising, is one of the biggest the world has ever seen.”

Right-wing commentator Ashley St. Clair framed the controversy in a broader culture-war context. “What’s happening with Google’s woke Gemini AI has been happening for years in Media and Hollywood and everybody who called it out was called racist and shunned from society,” St. Clair wrote in a post on X.

Read the Source


2024 - AI Incident Database
