AI Incident Database

Issue 3787

Associated Incidents

Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation

Google pulls the plug on Gemini AI image generation after being mocked for revisionist history
fastcompany.com · 2024

Google said Thursday it's temporarily stopping its Gemini artificial intelligence chatbot from generating images of people a day after apologizing for "inaccuracies" in historical depictions that it was creating.

Gemini users this week posted screenshots on social media of historically white-dominated scenes with racially diverse characters that they say it generated, leading critics to raise questions about whether the company is overcorrecting for the risk of racial bias in its AI model.

"We're already working to address recent issues with Gemini's image generation feature," Google said in a post on the social media platform X. "While we do this, we're going to pause the image generation of people and will re-release an improved version soon."

Previous studies have shown AI image-generators can amplify racial and gender stereotypes found in their training data, and without filters are more likely to generate lighter-skinned men when asked to generate a person in various contexts.

Google said on Wednesday that it's "aware that Gemini is offering inaccuracies in some historical image generation depictions" and that it's "working to improve these kinds of depictions immediately."

Gemini can generate a "wide range of people," which the company said is "generally a good thing" because people around the world use the system, but it is "missing the mark."

When the AP asked Gemini to generate pictures of people, it responded by saying it's "working to improve" the ability to do so. "We expect this feature to return soon and will notify you in release updates when it does," the chatbot said.

Read the Source
