Incident 74: Detroit Police Wrongfully Arrested a Black Man Due to Faulty FRT
Responded
Description: A Black man was wrongfully detained by the Detroit Police Department as a result of a false facial recognition (FRT) result.
Entities
Alleged: DataWorks Plus developed an AI system deployed by Detroit Police Department, which harmed Robert Julian-Borchak Williams and Black people in Detroit.
CSETv0 Taxonomy Classifications
Problem Nature
Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.
Specification, Assurance
Physical System
Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.
Software only
Level of Autonomy
The degree to which the AI system(s) functions independently from human intervention. "High" means there is no human involved in the system action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.
High
Nature of End User
"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.
Amateur
Public Sector Deployment
"Yes" if the AI system(s) involved in the accident were being used by the public sector or for the administration of public goods (for example, public transportation). "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company), on the other.
Yes
Data Inputs
A brief description of the data that the AI system(s) used or were trained on.
biometrics, images, camera footage
GMF Taxonomy Classifications
Known AI Goal Snippets
One or more snippets that justify the classification.
(Snippet Text: On a Thursday afternoon in January, Robert Julian-Borchak Williams was in his office at an automotive supply company when he got a call from the Detroit Police Department telling him to come to the station to be arrested., Related Classifications: Face Recognition)
CSETv1 Taxonomy Classifications
Incident Number
The number of the incident in the AI Incident Database.
74
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of the hazards and harms associated with AI.
1.3. Unequal performance across groups
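As an illustration only, and not part of the incident record: the kind of disparity this subdomain describes is typically surfaced by auditing a matcher's false-match rate separately per demographic group. A minimal Python sketch under assumed, hypothetical field names ("group", "score", "same_person") and made-up data:

from collections import defaultdict

def false_match_rate_by_group(trials, threshold=0.8):
    """Compute the false-match rate (FMR) per demographic group.

    Each trial is a dict with:
      - "group": demographic label of the probe subject
      - "score": similarity score returned by the face matcher
      - "same_person": True if probe and gallery images show the same person
    """
    impostor_counts = defaultdict(int)  # impostor comparisons seen per group
    false_matches = defaultdict(int)    # impostor scores at/above threshold

    for t in trials:
        if not t["same_person"]:        # only impostor pairs can false-match
            impostor_counts[t["group"]] += 1
            if t["score"] >= threshold:
                false_matches[t["group"]] += 1

    return {
        group: false_matches[group] / n
        for group, n in impostor_counts.items()
        if n > 0
    }

# Hypothetical impostor trials; a gap like this is what "unequal
# performance across groups" refers to.
trials = [
    {"group": "A", "score": 0.91, "same_person": False},
    {"group": "A", "score": 0.42, "same_person": False},
    {"group": "B", "score": 0.35, "same_person": False},
    {"group": "B", "score": 0.28, "same_person": False},
]
print(false_match_rate_by_group(trials))  # {'A': 0.5, 'B': 0.0}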
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
Discrimination and Toxicity
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional