Description: The NYPD's facial recognition system allegedly misidentified Trevis Williams as a suspect in a Union Square indecent exposure case. Despite reportedly notable physical differences and exculpatory phone data, Williams was arrested, jailed for more than two days, and charged. The case was later dismissed.
Editor Notes: Timeline note: This incident takes 04/21/2025 as the incident date because that is when Trevis Williams was reportedly arrested and jailed following the alleged facial recognition misidentification. The original crime was reported on 02/10/2025. The charges were reportedly dismissed in July 2025, and The New York Times subsequently reported on the case on 08/26/2025, which is also the date of ingestion into the database.
Entities
Alleged: An unknown facial recognition system developer developed an AI system deployed by the New York Police Department (NYPD), which harmed Trevis Williams, judicial integrity, the judicial system, law enforcement, the general public, and the general public of New York.
Alleged implicated AI system: Unknown facial recognition system
Incident Stats
Risk Subdomain
A further 23 subdomains provide an accessible and understandable classification of the hazards and harms associated with AI.
7.3. Lack of capability or robustness
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- AI system safety, failures, and limitations
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Reports Timeline
In February, a woman told the police that a delivery man had exposed himself to her in a Manhattan building. He was about 5 feet 6 inches tall.
Two months later, evidence shows, the police arrested the wrong man. He was 6-foot-2.
The man, T…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Similar Incidents

