AI Incident Database

Report 3777

Associated Incidents

Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation

Google apologizes over Gemini image generation inaccuracies
siliconangle.com · 2024

Google LLC has apologized after its Gemini chatbot generated images that incorrectly depicted racially diverse characters in historical contexts. 

Prabhakar Raghavan, the company’s senior vice president of knowledge and information, acknowledged the bug in a blog post published today. He wrote that “we’re grateful for users’ feedback and are sorry the feature didn’t work well.” Raghavan’s post goes on to detail why the bug emerged, as well as what steps Google is taking to reduce the risk of similar errors occurring in the future.

Gemini is the new name of the ChatGPT-like chatbot that was known until Feb. 8 as Bard. That day, Google rebranded the service as part of a broader update in which it also rolled out a paid tier. Gemini Advanced, as the new $20-per-month version is known, is powered by a more capable large language model than the original.

The rebrand and new features rolled out a few days after another update that saw Google equip Gemini with an image generation feature. The feature is powered by an AI model called Imagen 2. According to the search giant, the model was trained on a higher-quality dataset than its previous AI image generator and can follow user instructions more accurately.

Users began noticing issues with Gemini’s image generation feature shortly after it launched. Earlier this week, the feature drew criticism for depicting racially diverse people in historically inaccurate contexts. Google responded by temporarily disabling Gemini’s ability to generate images of people.

In today’s blog post, Raghavan provided an account of how the bug emerged.

“If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people. You probably don’t just want to only receive images of people of just one type of ethnicity (or any other characteristic),” Raghavan wrote. “So what went wrong? In short, two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely.”

Raghavan detailed that Google plans to eventually reactivate Gemini’s ability to generate images depicting people. Before that happens, however, the company intends to “significantly” improve the chatbot. Raghavan emphasized the development process will include extensive testing to ensure the bug fixes are implemented effectively.

The executive cautioned that, even after the planned update, errors may still emerge from time to time in Gemini’s output. “I can’t promise that Gemini won’t occasionally generate embarrassing, inaccurate or offensive results — but I can promise that we will continue to take action whenever we identify an issue,” the executive wrote. “AI is an emerging technology which is helpful in so many ways, with huge potential, and we’re doing our best to roll it out safely and responsibly.”

Read the Source
