AI Incident Database

Report 3720

Associated Incidents

Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation

Google pauses Gemini’s ability to generate AI images of people after diversity errors
theverge.com · 2024

Google says it’s pausing the ability for its Gemini AI to generate images of people, after the tool was found to be generating inaccurate historical images. Gemini has been creating diverse images of the US Founding Fathers and Nazi-era German soldiers, in what looked like an attempt to subvert the gender and racial stereotypes found in generative AI.

“We’re already working to address recent issues with Gemini’s image generation feature,” says Google in a statement posted on X. “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”

Google’s decision to pause image generation of people in Gemini comes less than 24 hours after the company apologized for the inaccuracies in some historical images its AI model generated. Some Gemini users have been requesting images of historical groups or figures like the Founding Fathers and found non-white AI-generated people in the results. That’s led to conspiracy theories online that Google is intentionally avoiding depicting white people.

The Verge tested several Gemini queries yesterday, which included a request for “a US senator from the 1800s” that returned results that included what appeared to be Black and Native American women. The first female senator was a white woman in 1922, so Gemini’s AI images were essentially erasing the history of race and gender discrimination.

Now that Google has disabled Gemini’s ability to generate pictures of people, here’s how the AI model responds if you request an image of a person:

We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does.

Google first started offering image generation through Gemini (formerly Bard) earlier this month, in a bid to compete with OpenAI and Microsoft’s Copilot. Much like competitors, the image generation tool produces a collection of images based on a text input.

Correction February 22nd, 6:54AM ET: Google confirmed that image generation is available globally in English, but not in the European Economic Area, UK, or Switzerland. That explains why testing from the UK failed.

