AI Incident Database

Report 3775

Related Incidents

Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation

Google Apologizes For Inaccurate Gemini Photos: Tried Avoiding ‘Traps’ Of AI Technology
forbes.com · 2024

Google apologized Friday for a tranche of historically inaccurate images generated on its Gemini AI image service, saying the feature “missed the mark” after widely circulated images sparked backlash from right-wing users and billionaire X owner Elon Musk.

Google acknowledged some images generated on Gemini were “inaccurate or even offensive” in a blog post Friday afternoon, saying the company had been trying to stave off “some of the traps” it had “seen in the past with image generation technology,” including examples of sexually explicit images, violent imagery or depictions of real people.

The tech giant temporarily paused the feature Thursday amid an online fury over some Gemini-created images, with Musk arguing the images put on full display the company’s “insane racist, anti-civilizational programming,” and the New York Post reporting that images of Black Vikings, a female pope and female NHL players were created in response to prompts that did not specify race or gender.

Google, which had previously admitted the errors stem from its efforts to combat bias and boost representation, said in its apology post that the company had hoped the feature would “work well for everyone,” noting its “users come from all over the world.”

Google stated its “tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” and that its AI image generator had become “way more cautious” than it had intended, with the generator “wrongly interpreting some very anodyne prompts as sensitive.”

“If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people,” Google stated, adding: “You probably don’t want to only receive images of people of just one type of ethnicity (or any other characteristic).”

Key Background

Google unveiled Gemini, a feature formerly known as Bard, in December, calling the product its “most capable and general model yet,” featuring “state-of-the-art performance.” Google launched the image generator feature last month, putting it up against rivals including OpenAI’s DALL-E and Midjourney. Days into the release, however, some images posted online appeared to feature historically inaccurate depictions of prompts, including a Black woman among U.S. senators in the 1800s and a Black man in a German World War II-era military uniform, The Verge reported. Google said Thursday it would pause the feature, releasing a statement vowing to “improve these kinds of depictions immediately.”

Google found itself wrapped in another controversy Thursday afternoon when screenshots of a purported message to Gmail users suggested the widely used email service would be sunsetting—a shocking revelation that briefly sent the internet into a tailspin, even though it turned out to be a hoax. Google appeared to deny the development in a Thursday afternoon post, saying on the Gmail X account that “Gmail is here to stay.”

Read the Source


2024 - AI Incident Database
