AI Incident Database

Incidente 686: Meta AI Image Generator Reportedly Fails to Accurately Represent Interracial Relationships

Responded
Description: Meta's AI image generator allegedly produces inaccurate and biased images, consistently failing to depict interracial relationships between Asian individuals and Caucasian or Black individuals. Instead, it generates images of two Asian people or falls back on stereotypes, erasing the diversity and representation of Asian people.


Entities

Alleged: an AI system developed and deployed by Meta harmed Asian People, Interracial couples, and General public.

Incident Statistics

ID: 686
Report count: 2
Incident date: 2024-04-03
Editors: Daniel Atherton
Applied taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified

Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

1.1. Unfair discrimination and misrepresentation

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

1. Discrimination and Toxicity

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional
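
For readers who consume the database export programmatically, the classification above is essentially a flat record. The sketch below shows one way to model it in Python; the class and field names are illustrative assumptions, not the AIID's actual export schema, while the values are taken directly from the entry above.

```python
# Minimal sketch of Incident 686's MIT taxonomy classification as a
# flat record. Field names are hypothetical, not the official AIID
# export schema; values come from the entry above.
from dataclasses import dataclass, asdict

@dataclass
class MITClassification:
    incident_id: int
    risk_subdomain: str       # one of the 23 subdomains
    risk_domain: str          # one of the seven risk domains
    entity: str               # entity presented as the main cause
    timing: str               # AI lifecycle stage
    intent: str               # expected vs. unexpected outcome
    machine_classified: bool  # this entry was machine-classified

record = MITClassification(
    incident_id=686,
    risk_subdomain="1.1. Unfair discrimination and misrepresentation",
    risk_domain="1. Discrimination and Toxicity",
    entity="AI",
    timing="Post-deployment",
    intent="Unintentional",
    machine_classified=True,
)

print(asdict(record))
```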

Incident Reports

Report Timeline

Meta’s AI image generator can’t imagine an Asian man with a white woman
theverge.com · 2024

Have you ever seen an Asian person with a white person, whether that’s a mixed-race couple or two friends of different races? Seems pretty common to me — I have lots of white friends!

To Meta’s AI-powered image generator, apparently this is…

AI-generated Asians were briefly unavailable on Instagram
theverge.com · 2024
Post-incident response from Mia Sato

Yesterday, I reported that Meta's AI image generator was making everyone Asian, even when the text prompt specified another race. Today, I briefly had the opposite problem: I was unable to generate any Asian people using the same prompts as…

Variants

A "Variant" is an incident that shares the same causal factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reports as evidence external to the incident database. Learn more from the research paper.
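
To make that indexing convention concrete: a variant is filed under the ID of the first similar incident rather than receiving its own, and it may carry no external report at all. Below is a minimal sketch under that reading, with hypothetical field names rather than the database's actual schema.

```python
# Hypothetical model of the variant convention described above: a
# variant points at its parent incident and, unlike other submission
# types, may list no external reports as evidence.
from dataclasses import dataclass, field

@dataclass
class Variant:
    parent_incident_id: int   # first similar incident in the database
    description: str
    report_urls: list[str] = field(default_factory=list)  # may be empty

example = Variant(
    parent_incident_id=686,
    description="Same image generator and causal factors, similar harm.",
)
```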

Similar Incidents

By textual similarity

Biased Google Image Results

'Black teenagers' vs. 'white teenagers': Why Google's algorithm displays racist results

Mar 2016 · 18 reports
Gender Biases of Google Image Search

Google Image Search Has A Gender Bias Problem

Apr 2015 · 11 reports
FaceApp Racial Filters

FaceApp deletes new black, white and Asian filters after racism storm

Apr 2017 · 23 reports
