Incident 815: Police Use of Facial Recognition Software Leads to Wrongful Arrests Without Defendants' Knowledge
Description: Police departments across the United States have used facial recognition software to identify suspects in criminal investigations, leading to multiple false arrests and wrongful detentions. The software's unreliability, particularly in identifying people of color, has produced misidentifications that were not disclosed to defendants. In some cases, individuals were unaware that facial recognition had played a role in their arrest, violating their legal rights and resulting in unjust detentions.
Editor Notes: This collective incident ID, based on a Washington Post investigation, covers many harm events whose overarching theme is the widespread use of facial recognition technology in arrests by police departments across the United States, combined with a lack of transparency about the technology's role in those arrests. Some of the documented incidents in the Washington Post's investigation are as follows:
(1) 2019: Facial recognition technology is used to misidentify Francisco Arteaga in New Jersey, leading to his wrongful detention for four years (see Incident 816).
(2) 2020-2024: The Miami Police Department conducts 2,500 facial recognition searches, leading to at least 186 arrests and 50 convictions; fewer than 7% of defendants were informed of the technology's use.
(3) 2022: Quran Reid is wrongfully arrested in Louisiana due to a facial recognition match, despite never having visited the state (see Incident 515).
(4) June 2023: A New Jersey appeals court rules that a defendant has the right to information regarding the use of facial recognition technology in their case.
(5) July 2023: The Miami Police Department acknowledges that it may not have informed prosecutors about the use of facial recognition in many cases.
(6) October 6, 2024: The Washington Post publishes its investigation into these incidents and practices.
Entities
Alleged: Clearview AI developed an AI system deployed by Police departments, Evansville PD, Pflugerville PD, Jefferson Parish Sheriff’s Office, Miami PD, West New York PD, NYPD, Coral Springs PD, and Arvada PD, which harmed Quran Reid, Francisco Arteaga, and defendants wrongfully accused by facial recognition.
Incident Stats
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
7.4. Lack of transparency or interpretability
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- AI system safety, failures, and limitations
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Intentional
Incident Reports
Report Timeline
Hundreds of Americans have been arrested after being linked to a crime by facial recognition software, a Washington Post investigation found, but many never know it because the p…
Variants
A "Variant" is an AI incident similar to a known case: it shares the same causative factors, harms, and AI system. Rather than listing it separately, we group it under the first reported incident. Unlike other incidents, variants need not have been reported outside the AIID. Learn more from the research paper.