AI Incident Database

Report 3776

Associated Incidents

Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation

Google Stops Gemini AI From Making Images Of People—After Musk Calls Service ‘Woke’
forbes.com · 2024

Google announced Thursday that it will “pause” its Gemini image generator’s ability to create images of people, after the program was criticized for showing misleading images of people's races in historical contexts—leading billionaire Elon Musk to call the service "woke."

Key Facts

Google said Wednesday it was “working to improve these kinds of depictions immediately,” but the company disabled images of people less than 24 hours later, promising an “improved version” of the service soon.

Google has not provided a timeframe for when image generation of people will return, and has not responded to a request for comment from Forbes.

Jack Krawczyk, Google’s product director for Gemini Experiences, confirmed in a post on X the chatbot was “offering inaccuracies in some historical image generation depictions,” and said the company was working to “further tune” the service to accurately reflect historical contexts.

Forbes could still use Gemini to create images of inanimate objects such as houses, but could not generate images of people—with the chatbot stating “we expect this feature to return soon and will notify you in release updates when it does.”

Google launched Gemini, a chatbot formerly known as Bard and powered by a large language model, on February 8—competing with generative AI programs like OpenAI’s ChatGPT, which is backed by rival Microsoft. As part of the new service, Gemini offers an image generator, similar to Midjourney and OpenAI’s DALL-E. However, days after release, users began noticing the generator would create images of “historical” figures and scenes with historical inaccuracies. In some examples created by The Verge, the generator displayed images of black women for the prompt “US senator from the 1800s” (the first black woman to serve in the Senate was Carol Moseley Braun, who was elected in 1992). Another image showed women and black men wearing World War II-era German military uniforms. Google has since acknowledged the misleading images. “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here,” Google said in a statement posted on X on Wednesday.

Chief Critic

Musk, the billionaire founder of competitor xAI, voiced his criticism of Gemini’s image generation service in a post Tuesday, claiming Google’s AI chatbot was “woke” and “racist.” Musk has described xAI, by contrast, as “maximum truth seeking.” The company, founded in March 2023, released its chatbot Grok in November and does not yet offer image generation.

Other AI image generators have faced the opposite criticism, frequently being accused of creating racially biased images that disproportionately show white people. A 2023 report in the Washington Post found these biases were rampant in images generated by Stable Diffusion XL, the service offered by startup Stability AI. Using the service, prompts for “a productive person” and “attractive people” produced images only of white people. Meanwhile, the prompt “person at social services” produced only images of people of color. According to Krawczyk, Google designed Gemini’s image generation to specifically address these problems. “We design our image generation capabilities to reflect our global user base, and we take representation and bias seriously,” the developer said in a post on Tuesday. “We will continue to do this for open ended prompts (images of a person walking a dog are universal!)”


2024 - AI Incident Database
