Incident 12: Common Biases of Vector Embeddings
Description: Researchers from Boston University and Microsoft Research, New England demonstrated gender bias in the most common techniques used to embed words in natural language processing (NLP).
Entities
Alleged: Microsoft Research, Boston University, and Google developed an AI system deployed by Microsoft Research and Boston University, which harmed Women and Minority Groups.
CSETv1 Taxonomy Classifications
Taxonomy Details
Incident Number
The number of the incident in the AI Incident Database.
12
CSETv0 Taxonomy Classifications
Taxonomy Details
Public Sector Deployment
"Yes" if the AI system(s) involved in the incident were being used by the public sector or for the administration of public goods (for example, public transportation); "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company).
No
Lives Lost
Were human lives lost as a result of the incident?
No
Intent
Was the incident an accident, intentional, or is the intent unclear?
Unclear
Near Miss
Was harm caused, or was it a near miss?
Unclear/unknown
Ending Date
The date the incident ended.
2016-01-01T00:00:00.000Z
Beginning Date
The date the incident began.
2016-01-01T00:00:00.000Z
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of the hazards and harms associated with AI.
1.1. Unfair discrimination and misrepresentation
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Discrimination and Toxicity
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Report Timeline
The blind application of machine learning risks amplifying the biases present in the data. We face such a danger with word embedding, a popular framework for representing text data as…
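The bias described above can be made concrete with vector arithmetic: a word embedding maps each word to a vector, and projecting occupation words onto a gender direction (such as the difference between "he" and "she") reveals how strongly the embedding associates them with one gender. A minimal sketch follows; the 3-dimensional vectors are hand-crafted for illustration and are not taken from the word2vec embeddings studied in this incident.

```python
import numpy as np

# Illustrative toy vectors only (assumed for this sketch), not real
# pretrained embeddings.
vecs = {
    "he":         np.array([ 1.0, 0.2, 0.1]),
    "she":        np.array([-1.0, 0.2, 0.1]),
    "programmer": np.array([ 0.6, 0.8, 0.3]),
    "homemaker":  np.array([-0.7, 0.7, 0.2]),
}

def gender_projection(word):
    """Signed projection of a word vector onto the he-she direction."""
    g = vecs["he"] - vecs["she"]          # gender direction
    g = g / np.linalg.norm(g)             # normalize to unit length
    return float(vecs[word] @ g)

# Positive scores lean toward "he", negative toward "she".
print(gender_projection("programmer"))    # positive in this toy data
print(gender_projection("homemaker"))     # negative in this toy data
```

With real pretrained embeddings, the same projection applied across many occupation words is one way such gender associations have been quantified.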
Variants
A "Variant" is an AI incident similar to a known case: it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Have you seen something similar?
Similar Incidents
Did our AI mess up? Flag the unrelated incidents