Welcome to the AI Incident Database
Incident 1476: Maryland Police Allegedly Relied on Facial Recognition Lead in Wrongful Arrest and Detention of Kimberlee Williams
“More than a Dozen Wrongful Arrests Due to Police Reliance on Facial Recognition Technology”
Latest Incident Report
One ACLU client spent six months in jail because police relied on facial recognition technology to incorrectly identify her as a suspect. She's the fourteenth person known to be wrongfully arrested due to the technology's failures.
When police arrested Kimberlee Williams, a grandmother living in Oklahoma, because of a warrant from Maryland, she was shocked. She had never been to Maryland in her life.
Ms. Williams later learned that Maryland police had relied on an incorrect result from facial recognition technology that falsely flagged her as a suspect. She is the fourteenth person in the U.S. to join a growing list of people wrongfully arrested because police let flawed facial recognition technology taint their investigations.
Police use of facial recognition technology is dangerous, and stories of people wrongfully arrested because of police reliance on incorrect facial recognition results continue to surface. Today, the ACLU and ACLU of Maryland sent letters to three Maryland police departments on behalf of Ms. Williams, who was wrongfully arrested and jailed for six months because Maryland police relied on a false facial recognition result and concealed their reliance on that unreliable technology from the court when applying for an arrest warrant.
One Woman Arrested for a Crime She Didn't Commit
On June 23, 2021, Ms. Williams was accompanying one of her daughters on a DoorDash delivery to a local military base in Lawton, Oklahoma. When base security at the entry checkpoint conducted a standard identification check, they discovered outstanding Maryland arrest warrants for Ms. Williams and detained her.
These warrants sought Ms. Williams' arrest for a series of fraudulent over-the-counter cash withdrawals in Maryland in December 2019 and January 2020. An unknown individual had entered SunTrust and Truist bank branches in three different counties, impersonated account holders, and fraudulently withdrawn thousands of dollars from those individuals' accounts.
Ms. Williams, however, was nowhere near Maryland during this time. She was a resident of Oklahoma, living with two of her daughters and their children. While someone was defrauding banks in Maryland, Ms. Williams was in Oklahoma with her family celebrating Christmas and her daughter's birthday.
At the bank's headquarters, a financial crimes investigator obtained images of the suspect from security camera footage and sent an image to a national listserv of police and private investigators called Crimedex. Someone on the email list ran the image through facial recognition technology and sent back Ms. Williams' name and photo as a purported match to the suspect.
In the first county, the bank investigator informed Montgomery County police that Ms. Williams was identified "using facial recognition software" but provided no further information, such as who ran the facial recognition search or how they conducted it. To police in the other two counties, the bank investigator provided even less information, writing that the suspect was "recognized" as Ms. Williams, "a suspect in" the previous Montgomery County investigation.
The detectives made no attempt to establish whether Ms. Williams, an Oklahoma resident, could have been anywhere near Maryland during the relevant period. Her alibi evidence, including social media posts geotagged to Oklahoma, would have shown she couldn't be the right person. The detectives also ignored other obvious leads: one of the incidents involved a fraudulent check made payable to a name not associated with the bank accounts. Yet there was no investigation of that name. Instead, the police relied only on the facial recognition lead, plus their own visual comparisons of the photos of the suspect and Ms. Williams. They thought the two looked similar.
After she was arrested, Ms. Williams spent a total of six months in jail, first in Oklahoma waiting to be extradited, and then in Montgomery and Prince George's counties in Maryland. When the last of the charges were finally dropped, Ms. Williams was unceremoniously released onto the street in the middle of December and left to find her way home halfway across the country. She had no phone and no money. Relying on the kindness of strangers, Ms. Williams managed to borrow a phone to contact her family and stay at a nearby hotel before she could return home.
The ordeal turned her and her family's lives upside down, and she is still trying to recover five years later. She lost her job because of the arrest. She also worries that she might unknowingly catch the attention of law enforcement again for a crime she had nothing to do with, causing another ordeal like this one.
Wrongful Arrests Due to Facial Recognition Technology Increase
What happened to Ms. Williams is outrageous and is unfortunately a predictable consequence of police using facial recognition technology. At least thirteen other people are publicly known to have been wrongfully arrested by U.S. police because of reliance on erroneous facial recognition results:
- Nijeer Parks, arrested by police in Woodbridge, New Jersey (February 2019)
- Michael Oliver, arrested by police in Detroit, Michigan (July 2019)
- Robert Williams, arrested by police in Detroit, Michigan (January 2020)
- Christopher Gatlin, arrested by police in St. Louis, Missouri (August 2021)
- Alonzo Sawyer, arrested by Maryland transit police (March 2022)
- Randal Quran Reid, arrested by Georgia police on a warrant issued in Jefferson Parish, Louisiana (November 2022)
- Porcha Woodruff, arrested by police in Detroit, Michigan (February 2023)
- Jason Killinger, arrested by police in Reno, Nevada (September 2023)
- Robert Dillon, arrested on a warrant obtained by police in Jacksonville Beach, Florida (August 2024)
- Javier Lorenzano-Nunez, arrested by police in Phoenix, Arizona (October 2024)
- Trevis Williams, arrested by police in New York City (April 2025)
- Angela Lipps, arrested by U.S. Marshals in Tennessee on a warrant obtained by police in Fargo, North Dakota (July 2025)
- Beau Burgess, arrested by police in Orlando, Florida (August 2025)
When the ACLU sued Detroit police on behalf of Robert Williams in 2021, supporters of police using facial recognition technology characterized his wrongful arrest as an unfortunate but isolated mistake that shouldn't undermine trust in the technology. But the wrongful arrests have kept coming. It is now impossible to ignore the dangers of facial recognition technology in policing. Indeed, police keep letting the same predictable failures happen again and again.
Facial recognition technology often produces false matches. Part of what is so dangerous about these systems is that when they get it wrong, innocent people who look similar to a suspect are often flagged. That is exactly what facial recognition is designed to do --- find similar faces in a database, most or all of whom aren't actually a match. When facial recognition technology generates false matches to innocent lookalikes, it can taint the investigation by tricking witnesses and police into mistakenly believing they've found the suspect.
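To make that failure mode concrete, below is a minimal, hypothetical sketch of the search step at the core of these systems: faces are embedded as vectors, and the gallery entries most similar to the probe image are returned. All names and data are invented for illustration; this is not any vendor's actual implementation.

```python
# Hypothetical sketch of a facial recognition search: rank enrolled faces
# by similarity to a probe image. Invented data; no real system depicted.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_k_matches(probe: np.ndarray, gallery: dict[str, np.ndarray], k: int = 5):
    """Return the k gallery entries most similar to the probe.

    Note the structural problem: this always returns k "candidates",
    even when the true suspect is not enrolled in the gallery at all,
    so the top results may simply be innocent lookalikes.
    """
    scored = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

# Invented example: a probe of someone absent from the gallery still
# produces a ranked list of "matches".
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(10_000)}
probe = rng.normal(size=128)
print(top_k_matches(probe, gallery, k=3))
```

The point of the sketch is that a ranked list of similar faces is the system's normal output, not an error state, which is why treating the top result as an identification invites exactly the mistakes described above.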
In several cases, people have been wrongfully arrested after police moved straight from facial recognition results to photo lineups presented to witnesses. When presented with an array containing the image of a person chosen by facial recognition technology, surrounded by filler photos of people who look less like the suspect, witnesses unsurprisingly think they've found the culprit. That tainting of lineup identifications has led to the wrongful arrests of at least seven people.
In other cases, including Ms. Williams', the only confirmation that police obtained was a visual comparison of the suspect photo to the facial recognition result by a law enforcement officer, who was similarly influenced by the technology into thinking they had a match.
Time and time again, police have failed to conduct reliable investigations and ignored obvious reasons to question a facial recognition lead. Many of the people wrongfully arrested had visible differences from the suspect in the photo: Michael Oliver had full tattoo sleeves, Porcha Woodruff was eight months pregnant, and Trevis Williams was eight inches taller and seventy pounds heavier. In the case leading to Nijeer Parks' arrest, police didn't wait for the results of DNA and fingerprint analysis that would have pointed to someone else. And for those who were arrested for crimes far from where they lived --- including Kimberlee Williams, Randal Quran Reid, Robert Dillon, and Angela Lipps --- police appear not to have investigated whether they could have even been in the right city or state at the time of the crime.
Without Protections, Facial Recognition Technology Will Keep Tainting Investigations
Despite police department policies and official disclaimers warning officers that facial recognition technology results are not sufficient grounds to arrest someone, police continue letting the technology ruin their investigations. The problems don't stop there. Multiple studies have shown that facial recognition technology produces higher false match rates for people of color, women, younger people, and the elderly. Unsurprisingly, most of the known wrongful arrests from this technology have been of Black people. But as more stories come to light, including that of Ms. Williams and other white people subjected to false arrests, we are seeing that nobody is safe from having their lives upended by this technology.
These serious dangers call for serious solutions. More than 20 cities and other jurisdictions across the country have banned police from using facial recognition technology at all. In Detroit, under a landmark settlement in Robert Williams' wrongful arrest case, the police department no longer permits officers to request arrest warrants based on only a photo lineup combined with a facial recognition-based lead. Indiana has enacted a similar protection into state law.
Ms. Williams will never get back the six months she spent in jail for a crime she clearly had nothing to do with. In our letters to Maryland police today, we are seeking both accountability and serious policy changes to minimize the chance of this happening to anyone in the future. One wrongful arrest from this dangerous technology is an outrage. More than a dozen, and counting, is a complete travesty that lawmakers and police must take immediate action to end.
Incident 1471: Meta Internal AI Agent Reportedly Gave Advice That Allegedly Exposed Sensitive Data to Unauthorized Employees
“A rogue AI led to a serious security incident at Meta”
For almost two hours last week, Meta employees had unauthorized access to company and user data thanks to an AI agent that gave an employee inaccurate technical advice, as previously reported by The Information. Meta spokesperson Tracy Clayton said in a statement to The Verge that "no user data was mishandled" during the incident.
A Meta engineer was using an internal AI agent, which Clayton described as "similar in nature to OpenClaw within a secure development environment," to analyze a technical question another employee posted on an internal company forum. But after analyzing the question, the agent also posted a public reply on its own, without getting approval first. The reply was only meant to be shown to the employee who requested it, not posted publicly.
An employee then acted on the AI's advice, which "provided inaccurate information" that led to a "SEV1" level security incident, the second-highest severity rating Meta uses. The incident temporarily allowed employees to access sensitive data they were not authorized to view, but the issue has since been resolved.
According to Clayton, the AI agent involved didn't take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done. A human, however, might have done further testing and made a more complete judgment call before sharing the information --- and it's not clear whether the employee who originally prompted the answer planned to post it publicly.
"The employee interacting with the system was fully aware that they were communicating with an automated bot. This was indicated by a disclaimer noted in the footer and by the employee's own reply on that thread," Clayton commented to The Verge. "The agent took no action aside from providing a response to a question. Had the engineer that acted on that known better, or did other checks, this would have been avoided."
Last month, an AI agent from open-source platform OpenClaw went more directly rogue at Meta: when an employee asked it to sort through the emails in her inbox, it deleted emails without permission. The whole idea behind agents like OpenClaw is that they can take action on their own, but like any other AI model, they don't always interpret prompts and instructions correctly or give accurate responses, a fact Meta employees have now discovered twice.
Incident 1472: Ahmedabad Aadhaar Fraud Racket Reportedly Used Purportedly AI-Generated Deepfakes to Change Businessman's Linked Mobile Number
“Ahmedabad cyber police bust deepfake-enabled Aadhaar fraud racket, four arrested”
In a significant breakthrough, the Cyber Cell of Crime Branch Ahmedabad has arrested four persons for allegedly orchestrating a sophisticated identity fraud racket in which they used deepfake technology and illegally accessed Aadhaar-linked biometric systems to alter victims' registered mobile numbers, open bank accounts, and apply for loans.
The arrests were made in connection with a complaint filed by Ahmedabad-based businessman Amit Patel, who alleged that unknown persons had fraudulently changed the mobile number linked to his Aadhaar card and used his identity details to access DigiLocker, conduct KYC verification, open bank accounts and apply for personal loans.
According to the FIR registered at the Cyber Crime Police Station, the fraud came to light when Patel, a resident of Thaltej and director of Bonneville Foods Private Limited, attempted to access Aadhaar-linked services for business documentation and discovered that the mobile number linked to his Aadhaar had been changed without his knowledge.
Subsequent checks allegedly revealed that his Aadhaar profile had been altered and linked to another mobile number and email address. Police said the accused also used the compromised credentials to access DigiLocker, activate banking-related services and apply for personal loans through digital lending platforms, including RKBansal, True Credits and EarlySalary.
Deepfake videos used to bypass biometric verification
During the investigation, the cyber police found that the accused had allegedly bypassed Aadhaar's facial authentication process by creating AI-generated deepfake videos from the complainant's photograph.
Police said the accused used the manipulated videos to simulate blinking and facial movement, thereby fooling the facial authentication system during Aadhaar update procedures.
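To see why animating a still photograph can defeat such a check, consider a deliberately naive, hypothetical liveness test that only counts blinks from per-frame eye-openness values. This is not Aadhaar's actual pipeline; it simply illustrates the weakness. Robust systems layer on challenge-response prompts, depth sensing, or texture analysis.

```python
# Deliberately naive, hypothetical blink-only liveness check.
# Not Aadhaar's actual pipeline; shown only to illustrate the weakness.

def passes_liveness(ear_series: list[float],
                    blink_threshold: float = 0.2,
                    min_blinks: int = 1) -> bool:
    """ear_series: per-frame eye-aspect-ratio values (low = eyes closed),
    as produced upstream by a facial-landmark model."""
    blinks, eyes_closed = 0, False
    for ear in ear_series:
        if ear < blink_threshold:   # eyes closed in this frame
            eyes_closed = True
        elif eyes_closed:           # eyes reopened: count one blink
            blinks += 1
            eyes_closed = False
    # Weakness: any video with plausible eyelid motion passes, including
    # a synthetically animated photograph of the victim.
    return blinks >= min_blinks

# A genuine blink and a deepfake-animated photo yield the same signal:
print(passes_liveness([0.35, 0.30, 0.15, 0.12, 0.33]))  # True either way
```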
Investigators said the mobile number linked to the complainant's Aadhaar was then changed using unauthorised Aadhaar update kits allegedly operated through Common Service Centre (CSC) operators.
The four arrested have been identified as:
- Kanubhai Bahadursinh Parmar (32), a CSC operator from Anand district
- Ashish Rajendrabhai Waland (27), a CSC operator based in Vadodara
- Mohammad Kaif Iqbalbhai Patel (26), associated with a CSC centre in Bharuch district
- Deep Maheshbhai Gupta (29), a machine operator from Ahmedabad, originally from Uttar Pradesh

All four accused are currently in judicial custody.
According to police:
- Kanubhai Parmar allegedly supplied Aadhaar update kits to the co-accused in exchange for commission.
- Ashish Waland allegedly passed on the Aadhaar kit used in the fraudulent mobile number update and earned commission for each transaction.
- Mohammad Kaif Patel allegedly coordinated the use of the complainant's Aadhaar number, target mobile number and photograph, and facilitated the creation of the deepfake facial authentication video.
- Deep Gupta allegedly assisted in arranging and transmitting the victim's Aadhaar details and photograph for the fraudulent operation.
Police said the Aadhaar update kit used to alter the complainant's mobile number has been recovered. Investigators said one of the accused, Ashish Waland, was previously booked by Vadodara Rural Police in a separate case involving the alleged preparation of fake Aadhaar cards.
Modus operandi
- Obtaining victims' Aadhaar details and personal data through unauthorised means;
- Illegally updating the mobile number linked to Aadhaar using Aadhaar update kits;
- Using AI-generated deepfake videos to bypass biometric/facial authentication;
- Gaining access to DigiLocker and Aadhaar-linked e-KYC systems;
- Opening bank accounts and applying for digital loans in the victim's name.
Ahmedabad Cyber Crime police said further investigation is underway to identify additional accused and determine whether more victims were targeted by the racket.
Police have booked the accused under relevant provisions of the Bharatiya Nyaya Sanhita, 2023, and the Information Technology Act for criminal conspiracy, forgery, identity theft, cheating and unauthorised access to computer resources.
Incident 1473: Jasper County Student in Texas Reportedly Posted Purported AI-Generated Nude Image of Classmate on Snapchat
“Jasper County officials warn of AI dangers after first deepfake arrest”
JASPER COUNTY, Texas (KTRE) - Jasper County law enforcement is sounding the alarm about artificial intelligence after arresting a 17-year-old Buna ISD student in the county's first deepfake case.
Nathaniel Davis was arrested last Friday under state law and is charged as an adult. Jasper County Sheriff Chuck Havard said Davis admitted to creating an AI-generated nude photo of a classmate and posting it on Snapchat.
"It's a misdemeanor, it's a very serious charge," Havard said. "I mean obviously can you realize how embarrassing that would be for someone. And so, you can't do that."
The post was taken down after a few minutes, but Havard said the damage was already done.
Havard said this is the first case Jasper County deputies have investigated involving artificial intelligence.
"This is our first case that we've been notified about artificial intelligence on this type of thing, but I can see it being a huge problem though," Havard said.
AI deepfakes use a real person's face or body to create fake and often explicit images or videos.
Smith County District Attorney Jacob Putman is not prosecuting this case, but said the issue is becoming a bigger problem and Texas law has caught up with the technology.
"That's a crime in Texas," Putman said. "Texas law has caught up with technology and if you take someone's image or likeness and you turn it into an image like that there are legal consequences."
Putman said those consequences can range from prison or jail time to probation. If someone is convicted, several factors could determine the punishment in cases like these.
"Are they remorseful, did they take it down, how long was it up --- those are all factors to take into consideration," Putman said.
Havard said these decisions reflect a serious lapse in judgment, and he said Davis likely did not think it was a big deal in terms of criminal responsibility.
"It is a very serious crime," Havard said. "I mean we have to remember that we have a victim in this case."
The case is now in the hands of the court.
Havard said the solution starts at home by talking to kids about the problems of misusing AI before it becomes a criminal matter.
He has started training his deputies about AI dangers and hopes to target schools and post more Facebook content about it.
Incident 1474: Tasmanian School Students Reportedly Created Purported AI-Generated Pornographic Images of Female Classmates
“AI deepfakes are circulating in our schools. What's the right way to handle them?”
A Tasmanian school has been criticised by parents over its response to a deepfake incident targeting 21 female students.
The parents say they were advised by The Friends School not to tell their daughters that their images had been used in deepfake pornographic material.
They chose to tell them, but said their children felt silenced and unsupported, not knowing whether their peers had been told or not.
The school said it had acted on police and expert advice, and that the images had been created on personal devices outside of school hours, meaning it could not address the challenges of deepfakes on its own.
So what should parents and schools do when there's a deepfake incident involving students?
What is a 'deepfake'?
A deepfake is a photo, video or sound file that's been created using AI technology to show someone doing or saying something that they didn't actually do or say.
The material can be very realistic, especially when seen on a small screen, and is easily shared.
The eSafety commissioner Julie Inman Grant said deepfakes might be created for a range of reasons, such as to harass or humiliate someone, for sexual gratification, or for child sexual abuse.
Victims often don't know their image has been used in a deepfake.
The person creating the deepfake might use someone's social media image, such as a selfie or even a school photo.
Unfortunately, the apps that create this kind of material are easy to use, often at low or no cost.
How big of an issue is it among school students?
It's a crisis, according to the eSafety commissioner.
Data reported in 2024 showed a 550 per cent increase in deepfakes appearing online since 2019.
Pornographic videos made up 98 per cent of the deepfake material online, and 99 per cent of that imagery was of women and girls.
Deepfake incidents were throwing entire schools into turmoil, Ms Inman Grant said, with staff, parents and students often unsure how to respond.
Targets were left feeling humiliated, angry and afraid.
Students on the receiving end of deepfake material --- in group chats or private messages --- were sometimes also left feeling fearful and confused, unsure whether to report it or how to support their friends.
They often also worried they'd be next.
What is the school's responsibility? And what is best practice?
It can be a very difficult topic to navigate, particularly when material can be made on private devices and in private settings, but it still involves students.
The well-being of the young people and/or any staff members involved is a priority, according to the eSafety Commissioner.
Schools are considered frontline responders, and they need to give student victims a sense of agency and involve them in decision-making.
The reputation of the school shouldn't be the focus of the response.
A school staff leader should be appointed to manage the incident, with information shared only on a need-to-know basis.
The incident should be reported to police and eSafety, and the family of an affected student should be kept informed of the actions the school is taking.
The school should also engage additional services to support the well-being of those involved.
It's also important to build policies and education around digital literacy, respectful relationships, and consent.
What should you do if your child's image has been used in deepfake pornography?
If your child comes to you, stay calm and start working out a plan to respond to the incident together.
The eSafety commissioner suggests collecting evidence --- screenshots, links, usernames --- without saving or sharing the content.
Next, help them report the incident to the school, police, eSafety and any online platforms involved.
Know that your child may be feeling shame, humiliation, self-blame, fear that they won't be believed, isolation and mistrust. Seek support on their behalf, if they need it.
There is no one-size-fits-all approach to addressing it with your child if you find out before them that they've been identified in deepfake material.
The Sexual Assault Support Service says there may be various reasons that parents don't want to tell their child --- such as age or mental health concerns --- but it encourages transparency where possible and says parents can approach services like SASS for advice.
If your child tells you they received a deepfake, praise them for telling you and for not sharing it, and reinforce that speaking up was the right thing to do.
If your child has created or shared a deepfake, explain the serious legal and emotional consequences and encourage accountability by ensuring they delete and report the content, and apologise for their part in it.
The Sexual Assault Support Service recommends its PAST (Prevention, Assessment, Support and Treatment) early intervention program for parents concerned their child may have been involved in creating or sharing sexually-explicit images, including deepfakes.
Is it illegal?
Yes.
In 2024, Australia introduced new laws banning the sharing of non-consensual deepfake adult pornography.
It carries a maximum jail sentence of six years. Some young people might create and share deepfakes as a prank or experiment without realising that it can be a serious criminal offence.
In April this year, the first Australian to be prosecuted under Commonwealth laws pleaded guilty.
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – February, March, and April 2026
By Daniel Atherton
2026-05-05
Lisière de la forêt de Fontainebleau, Alfred Sisley, 1865
Trending in the AIID
For this roundup, I'll be surveying the new incident IDs t...
The Database in Print
Read about the database at Time Magazine, Vice News, Venture Beat, Wired, Bulletin of the Atomic Scientists, and Newsweek, among other outlets.
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.
Random Incidents
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The governance of the Collaborative is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application. We kindly request your financial support with a donation.
Organization Founding Sponsor
Database Founding Sponsor

Sponsors and Grants
