AI Incident Database

Report 3788

Related Incidents

Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation

After Google Gemini, Facebook parent Meta’s AI tool creates ‘historically inaccurate images’
timesofindia.indiatimes.com · 2024

Google Gemini and Meta's Imagine AI faced backlash for generating historically inaccurate images. The models depicted American colonials as people of Southeast Asia and the Founding Fathers as people of color. The issue highlighted bias and overcorrection in AI models, raising concerns about the accuracy and diversity of AI-generated content.

Google Gemini faced heat when it returned ‘historically inaccurate’ images, prompting the company to temporarily suspend the AI chatbot's ability to generate images of people. Company CEO Sundar Pichai called the results ‘unacceptable’, and the tech giant said it would release a fine-tuned model. Now, a report has claimed that Facebook parent Meta’s AI tool has created ahistorical images as well.

According to a report by Axios, Meta's Imagine AI image generator has returned the same kind of historical gaffes as Gemini.

A screenshot shared in the report shows AI-generated images from Meta AI's Imagine tool inside Instagram direct messages. In one of the images, “a group of people in [an] American colonial” era is depicted with people of Southeast Asia. In another image, the Founding Fathers [of America] are people of colour.

The prompt for “Professional American football players” produced only photos of women in football uniforms, the report said.

Additionally, the Imagine AI model does not respond to the “pope” prompt, but when asked for a group of popes, it showed Black popes, the report said.

What is the problem?

People have taken offence at the images generated by these AI models. The AI makers attribute the results to their efforts to include “diversity”: because there is bias and stereotyping in the data used to train their models, they tuned the models to show a wider range of people. They also say that while diversity is a good thing, their AI models are over-correcting and producing problematic results.

Google’s Gemini created images of Black men in Nazi uniforms and of female popes.

“Our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” Google said, adding that the model is “way more cautious than we intended.”


2024 - AI Incident Database
