AI Incident Database

Issue 3771

Associated Incidents

Incident 645: 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation

Google making changes after Gemini AI portrayed people of color inaccurately
nbcnews.com · 2024

Google said Thursday that it would temporarily limit the ability to create images of people with its artificial-intelligence tool Gemini after it produced illustrations with historical inaccuracies.

The pause was announced in a post on X after the company acknowledged the issues in a statement the day before, writing: “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Google announced its “next-generation” AI model, Gemini 1.5, last week, touting it as its most capable yet. Google is one of many tech companies that are feverishly competing to develop the best generative AI systems that can create text, images and video from simple prompts.

After its launch, Gemini drew the attention of some pundits and technologists, including many on the right who have been critical of efforts to make AI more inclusive, with some leveling accusations that Google’s AI was a prime example of “woke AI.”

The most widely noticed inaccuracies involved inserting people of color into places where they wouldn’t historically appear. For example, The Verge reported that asking Gemini to create an illustration of a 1943 German soldier resulted in AI-generated drawings of nonwhite people in Nazi uniforms. Gemini also created images of some nonwhite Founding Fathers and U.S. senators from the 1800s, when in reality they were all white men. 

Gemini’s racially diverse image output comes amid long-standing concerns about racial bias within AI models, especially a lack of representation of minorities and people of color. Such biases can directly harm people who rely on AI algorithms; in health care settings, for example, biased AI tools can affect outcomes for hundreds of millions of patients.

Google isn’t the only Big Tech company addressing major issues with flagship AI tools this week — OpenAI, the company behind AI text generator ChatGPT, said Wednesday it had resolved an issue causing “unexpected responses” from ChatGPT. Users began noticing Tuesday that ChatGPT was malfunctioning, spitting out nonsensical sentences instead of its usual output.

“LLMs generate responses by randomly sampling words based in part on probabilities. Their ‘language’ consists of numbers that map to tokens,” OpenAI wrote in a status update on its website. “In this case, the bug was in the step where the model chooses these numbers. Akin to being lost in translation, the model chose slightly wrong numbers, which produced word sequences that made no sense.”
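The sampling step OpenAI describes can be sketched in a few lines: raw model scores (logits) are converted to probabilities with a softmax, a token id is drawn at random from that distribution, and the id is mapped back to a word. The function names, vocabulary, and values below are illustrative assumptions, not OpenAI's actual implementation; the point is only that a bug in the number-choosing step would decode to the wrong tokens and produce nonsense.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample a token id from raw logits via temperature-scaled softmax.

    A bug in this choice step -- picking a slightly wrong id -- would
    decode to the wrong word, which is the failure OpenAI described.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numeric stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution until
    # the random draw falls inside a token's probability mass.
    r = rng.random()
    acc = 0.0
    for token_id, p in enumerate(probs):
        acc += p
        if r < acc:
            return token_id
    return len(probs) - 1                    # guard against float rounding

# Hypothetical vocabulary: ids map back to strings when decoding.
vocab = {0: "the", 1: "cat", 2: "sat"}
rng = random.Random(0)                       # seeded for reproducibility
token_id = sample_token([2.0, 1.0, 0.1], temperature=0.7, rng=rng)
print(vocab[token_id])
```

Lower temperatures concentrate probability on the highest-scoring token, while higher temperatures flatten the distribution and make unlikely words more frequent.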

Read the Source

2024 - AI Incident Database
