AI Incident Database

Incident 1343: ICE AI Resume Screening Error Allegedly Routed Inexperienced Recruits Into Inadequate Training Pathways

Description: U.S. Immigration and Customs Enforcement (ICE) reportedly used an AI-assisted résumé screening tool during a 2025 hiring surge that misclassified some applicants as having law-enforcement experience. As a result, certain recruits without policing backgrounds were allegedly routed into a shortened training pathway. ICE reportedly identified the error, reviewed résumés manually, and reassigned affected recruits for additional training.
Editor Notes: Reporting on this incident suggests it occurred sometime in mid-fall 2025. Outlets began reporting on it on 01/14/2026, which is the date assigned to this incident ID.


Entities

Alleged: Unknown generative AI developers and Unknown AI-assisted résumé screening developers developed an AI system deployed by United States Immigration and Customs Enforcement, which harmed ICE recruits without law-enforcement experience and Members of the public subject to ICE enforcement.
Alleged implicated AI systems: Unknown AI-assisted résumé screening technology and Automated applicant classification tool (ICE recruitment)

Incident Stats

Incident ID: 1343
Report Count: 1
Incident Date: 2026-01-14
Editors: Daniel Atherton

Incident Reports

Reports Timeline

ICE error meant some recruits were sent into field offices without proper training, sources say
nbcnews.com · 2026

As Immigration and Customs Enforcement was racing to add 10,000 new officers to its force, an artificial intelligence error in how their applications were processed sent many new recruits into field offices without proper training, accordin…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

Justice Department’s Recidivism Risk Algorithm PATTERN Allegedly Caused Persistent Disparities Along Racial Lines
Jan 2022 · 1 report

Predictive Policing Biases of PredPol
Nov 2015 · 17 reports

UT Austin GRADE Algorithm Allegedly Reinforced Historical Inequalities
Dec 2012 · 2 reports


2024 - AI Incident Database
