AI Incident Database

Report 3720

Associated Incidents

Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation

Google pauses Gemini’s ability to generate AI images of people after diversity errors
theverge.com · 2024

Google says it’s pausing the ability for its Gemini AI to generate images of people, after the tool was found to be generating inaccurate historical images. Gemini has been creating diverse images of the US Founding Fathers and Nazi-era German soldiers, in what looked like an attempt to subvert the gender and racial stereotypes found in generative AI.

“We’re already working to address recent issues with Gemini’s image generation feature,” says Google in a statement posted on X. “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”

Google’s decision to pause image generation of people in Gemini comes less than 24 hours after the company apologized for the inaccuracies in some historical images its AI model generated. Some Gemini users have been requesting images of historical groups or figures like the Founding Fathers and found non-white AI-generated people in the results. That’s led to conspiracy theories online that Google is intentionally avoiding depicting white people.

The Verge tested several Gemini queries yesterday, which included a request for “a US senator from the 1800s” that returned results that included what appeared to be Black and Native American women. The first female senator was a white woman in 1922, so Gemini’s AI images were essentially erasing the history of race and gender discrimination.

Now that Google has disabled Gemini’s ability to generate pictures of people, here’s how the AI model responds if you request an image of a person:

We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does.

Google first started offering image generation through Gemini (formerly Bard) earlier this month, in a bid to compete with OpenAI and Microsoft’s Copilot. Much like competitors, the image generation tool produces a collection of images based on a text input.

Correction February 22nd, 6:54AM ET: Google confirmed that image generation is available globally in English, but not in the European Economic Area, UK, or Switzerland. That explains why testing from the UK failed.

Read the Source
