Description: New York City ran a month-long pilot of an AI-enabled weapons scanner with limited success. The scanners detected no firearms during the September 2024 testing phase but generated 118 false positives, instances in which a person was searched under suspicion of carrying a weapon that was never found.
Editor Notes: The 118 false positives can be considered a privacy invasion and, according to some cited legal advocacy groups, a violation of due process. Reconstructing the timeline of events:
(1) March 28, 2024: Mayor Eric Adams announces plans to deploy Evolv's AI-powered weapons scanners in selected NYC subway stations.
(2) Summer 2024: The pilot program begins, deploying the scanners across 20 subway stations.
(3) September 2024: NYPD completes a 30-day testing period with the scanners, performing 2,749 scans and recording 118 false positives and no firearms detections.
(4) October 23, 2024: NYPD releases a brief statement summarizing the pilot results. This date is marked as the incident date for our purposes, even though each false positive (as well as the potential for firearms to have slipped past detection) could be considered a discrete incident in its own right.
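As a back-of-the-envelope check, the following is a minimal Python sketch of the rates implied by the reported figures. The variable names are illustrative; the numbers come directly from the pilot summary above.

```python
# Figures reported in the NYPD's October 23, 2024 pilot summary.
total_scans = 2_749       # scans performed during the 30-day test
false_positives = 118     # alerts where no firearm was found
firearms_detected = 0     # no guns were detected during the pilot

# Share of all scans that triggered a false alert (~4.3%).
false_alert_rate = false_positives / total_scans
print(f"False-alert rate: {false_alert_rate:.1%}")

# With zero true positives, every alert during the pilot was a false
# positive, so the precision of a positive alert was 0%.
alerts = false_positives + firearms_detected
precision = firearms_detected / alerts if alerts else float("nan")
print(f"Precision of an alert: {precision:.0%}")
```

Roughly 1 in 23 scans produced an alert, and none of those alerts corresponded to an actual firearm.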
Entities
Alleged: Evolv Technology developed an AI system deployed by New York City Government, which harmed New York City subway riders.
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
7.3. Lack of capability or robustness
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- AI system safety, failures, and limitations
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Reports Timeline
NEW YORK (AP) — A pilot program testing AI-powered weapons scanners inside some New York City subway stations this summer did not detect any passengers with firearms — but falsely alerted more than 100 times, according to newly released pol…

Updated: This post has been updated, as the original potentially overclaimed both what the FTC settlement said regarding what Evolv could market as well as Evolv’s response to it (suggesting it would try to limit the settlement it agreed to…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
Similar Incidents

Uber AV Killed Pedestrian in Arizona
· 25 reports

Predictive Policing Biases of PredPol
· 17 reports