Welcome to the AI Incident Database
Incident 1479: ChatGPT-Generated Image of Nonexistent 'Homeless Man' Was Used in False St. Petersburg, Florida Burglary and Sexual Battery Reports
“Woman Guilty Of ChatGPT Rape Hoax”
DECEMBER 2--The Florida woman who gave police an AI-generated photo of a homeless man who purportedly barged into her residence and raped her has been convicted of falsely reporting a crime, a sham prompted by a "TikTok challenge" and her desire for attention, records show.
Brooke Schinault, 32, yesterday pleaded no contest to the hoax and was adjudged guilty of the misdemeanor. She was fined and placed on probation.
In October, Schinault called 911 to report that a male intruder forced his way into her St. Petersburg apartment and committed the sexual assault while her infant slept in another room.
Schinault claimed to have taken a picture (seen at right) of the man while he sat on her sofa. Police provided TSG with the photo in response to a public records request.
The reported burglary/battery prompted a police response that included seven officers, a detective, six rescue personnel, and a forensic technician who gathered DNA samples.
Schinault's story fell apart when Det. Dagni Closser examined the photo of the purported suspect, a hoodie-wearing white male in his 30s. "As I am familiar with many social media sites and trends, I recognized the male as a TikTok challenge that is viral," Closser reported. The "AI homeless man challenge," Closser noted, involved the uploading of a residential photo to an AI application, which then inserts a photo of a homeless man into the image.
"Most place the subject on their couch and then screenshot the result and send to loved ones in a prank that places concern that an unknown subject is in their home," said Closser, who reported finding "several TikTok reels of this challenge" that used the same photo of the homeless man.
Confronted by police, Schinault . . . initially claimed she only used AI to sharpen a blurry photo of her attacker. She eventually confessed, saying that she was dealing with depression, was not sleeping, and "wanted attention."
The hoax image, Schinault said, was made by using "Google and ChatGPT." To create the photo, cops reported, "Brooke typed 'a homeless man.'"
Incident 1478: Scammers Reportedly Used AI-Generated Images of Missing Dog Archer to Solicit Fraudulent Vet Payment from Deltona, Florida Family
“Florida family warns of AI scam targeting owners of missing pets”
DELTONA, Fla. - A Florida family is sharing their story to warn other pet owners after scammers used artificial intelligence to orchestrate a scam involving their missing dog.
Bill Cosens said the ordeal began Sunday morning shortly after his beagle mix, Archer, escaped from his backyard. After posting about his missing dog on social media, Cosens received a call from a woman claiming she had found the dog.
The caller told Cosens that Archer had been hit by a vehicle and required $2,800 for immediate emergency surgery. The scammer sent photos that appeared to show Archer on an operating table, fabricated from pictures Cosens had posted to social media. The pictures were fake and AI-generated.

What they're saying:
"When we start thinking logically and stepping back from our emotions, then it starts becoming pretty obvious that we're being scammed," Cosens said. "And now we go from being upset and distraught to being pretty ticked off."
Cosens grew suspicious when he researched the address the scammer provided for the veterinary clinic and discovered it was the address of the nearby city hall. Fortunately, he did not send any money.
Cosens said a neighbor witnessed a woman putting Archer into a vehicle and driving away. The neighbor was eventually able to track the woman down and return the dog safely to Cosens.
A Growing Trend:
The Volusia County Sheriff's Office is currently investigating the incident. This case mirrors a similar report from St. Petersburg earlier this year, where another pet owner was targeted by a nearly identical scheme. In that instance, the victim's bank flagged the suspicious transaction before the funds were transferred.
Experts say these scammers often "scrape" photos from social media lost-and-found posts and use AI tools to add fake injuries or medical equipment to the images.
Cosens advises anyone in a similar situation to remain vigilant despite the emotional toll of a lost pet.
"The first biggest thing to avoid being scammed, I would say, is if they're not willing to meet you in person and get payment in person, they're scamming you," Cosens said.
The Source: This article was written using information from two different pet owners who were scammed this year, and reporting from FOX 13's Kailey Tracy.
Incident 1480: Purportedly AI-Recreated Clips from Beastie Boys' 'Sabotage' Video Reportedly Appeared in FBI Promotional Video Posted by Kash Patel
“Did FBI Director Kash Patel use AI to rip off the Beastie Boys?”
A promotional video for the FBI posted by director Kash Patel appears to have used AI to generate short clips nearly identical to those in the Beastie Boys' iconic music video for their 1994 classic song "Sabotage."
Patel released the video on Monday on X, in a post about the FBI's effort to combat "massive fraud." The roughly two-minute video used the instrumental version of the song "Sabotage," and footage nearly identical to the original music video, interspersed with what appeared to be authentic footage of FBI agents conducting their work.
By Tuesday it had received roughly half a million views.
An analysis by NPR shows at least six clips in the FBI video were frame-by-frame recreations of shots in the iconic "Sabotage" music video, which was directed by Spike Jonze. The clips featured vehicles, people and buildings that were incredibly similar to the original video, but with small differences that would likely be generated by AI.
For example, in one shot where a car is spinning out, grilles are clearly visible in some of the windows in the original footage but are missing from the FBI version of the clip. Another shot shows an individual with a megaphone jumping from roof to roof with telephone lines in the background. The lines and the dirt on the building align identically with the 1994 video, which was filmed more than 30 years ago. In one frame, one of the telephone lines appears to pass through the character's head: the sort of flaw that is common in AI video generation.
Representatives for Spike Jonze and the Beastie Boys did not respond to NPR's request for comment. The FBI also did not respond to NPR's request for more information about the video and how it was made.
Independent experts who reviewed the video for NPR agreed that the clips were likely generated by AI.
"It does seem like it would be highly likely to be AI," Kolina Koltai, a researcher with the online investigations group Bellingcat told NPR in an email. "You can even see some of the AI errors."
For example, Koltai says there's a moment of characteristic AI-generated artifacts that appear in the "No Fraud" license plate on the FBI car in the opening shot.
The clips were likely created by taking screenshots or short clips from the original "Sabotage" music video and feeding them into an image-to-video model, Hany Farid, a professor at the University of California, Berkeley who specializes in the analysis of digital images, wrote to NPR in an email. It's also possible that the AI model generated the video clips itself because the original music video was in its training data -- though Farid believes that's less likely.
In any event, Farid believes AI was involved: "The similarities are hard to explain otherwise," he wrote.
In President Trump's second term, members of his administration have enthusiastically co-opted popular music, movies and memes as a way of spreading their message, even when artists have protested.
Using AI has also been a common tactic. Notably, last October President Trump himself posted an AI-generated video of himself dumping brown fluid on protestors at a "No Kings" rally set to the song "Danger Zone" by Kenny Loggins. Loggins demanded the video be taken down. It remains up on Trump's Truth Social account.
In January, the White House posted an AI-doctored image of a protestor in Minneapolis who had been arrested by federal authorities without labeling that the image had been manipulated.
Patel, who was born in Long Island in 1980, would have been in junior high or high school when the Beastie Boys released "Sabotage."
Incident 1476: Maryland Police Allegedly Relied on Facial Recognition Lead in Wrongful Arrest and Detention of Kimberlee Williams
“More than a Dozen Wrongful Arrests Due to Police Reliance on Facial Recognition Technology”
One ACLU client spent six months in jail because police relied on facial recognition technology to incorrectly identify her as a suspect. She's the fourteenth person known to be wrongfully arrested due to the technology's failures.
When police arrested Kimberlee Williams, a grandmother living in Oklahoma, because of a warrant from Maryland, she was shocked. She had never been to Maryland in her life.
Ms. Williams later learned that Maryland police had relied on an incorrect result from facial recognition technology that falsely flagged her as a suspect. She is the fourteenth person in the U.S. to join a growing list of people wrongfully arrested because police let flawed facial recognition technology taint their investigations.
Police use of facial recognition technology is dangerous, and stories of people wrongfully arrested because of police reliance on incorrect facial recognition results continue to surface. Today, the ACLU and ACLU of Maryland sent letters to three Maryland police departments on behalf of Ms. Williams, who was wrongfully arrested and jailed for six months because Maryland police relied on a false facial recognition result and concealed their reliance on that unreliable technology from the court when applying for an arrest warrant.
One Woman Arrested for a Crime She Didn't Commit
On June 23, 2021, Ms. Williams was accompanying one of her daughters on a DoorDash delivery to a local military base in Lawton, Oklahoma. When base security at the entry checkpoint conducted a standard identification check, they discovered outstanding Maryland arrest warrants for Ms. Williams and detained her.
These warrants sought Ms. Williams' arrest for a series of fraudulent over-the-counter cash withdrawals in Maryland in December 2019 and January 2020. An unknown individual had entered SunTrust and Truist bank branches in three different counties, impersonated account holders, and fraudulently withdrawn thousands of dollars from those individuals' accounts.
Ms. Williams, however, was nowhere near Maryland during this time. She was a resident of Oklahoma, living with two of her daughters and their children. While someone was defrauding banks in Maryland, Ms. Williams was in Oklahoma with her family celebrating Christmas and her daughter's birthday.
At the bank's headquarters, a financial crimes investigator obtained images of the suspect from security camera footage and sent an image to a national listserv of police and private investigators called Crimedex. Someone on the email list ran the image through facial recognition technology and sent back Ms. Williams' name and photo as a purported match to the suspect.
In the first county, the bank investigator informed Montgomery County police that Ms. Williams was identified "using facial recognition software" but provided no further information, such as who ran the facial recognition search or how they conducted it. To police in the other two counties, the bank investigator provided even less information, writing that the suspect was "recognized" as Ms. Williams, "a suspect in" the previous Montgomery County investigation.
The detectives made no attempt to establish whether Ms. Williams, an Oklahoma resident, could have been anywhere near Maryland during the relevant period. Her alibi evidence, including social media posts geotagged to Oklahoma, would have shown she couldn't be the right person. The detectives also ignored other obvious leads: one of the incidents involved a fraudulent check made payable to a name not associated with the bank accounts. Yet there was no investigation of that name. Instead, the police relied only on the facial recognition lead, plus their own visual comparisons of the photos of the suspect and Ms. Williams. They thought the two looked similar.
After she was arrested, Ms. Williams spent a total of six months in jail, first in Oklahoma waiting to be extradited, and then in Montgomery and Prince George's counties in Maryland. When the last of the charges were finally dropped, Ms. Williams was unceremoniously released onto the street in the middle of December and left to find her way home halfway across the country. She had no phone and no money. Relying on the kindness of strangers, Ms. Williams managed to borrow a phone to contact her family and stay at a nearby hotel before she could return home.
The ordeal turned her and her family's life upside down, and she is still trying to recover five years later. She lost her job because of the arrest. She also worries that she might unknowingly catch the attention of law enforcement again for a crime she had nothing to do with, causing another ordeal like this one.
Wrongful Arrests Due to Facial Recognition Technology Increase
What happened to Ms. Williams is outrageous and is unfortunately a predictable consequence of police using facial recognition technology. At least thirteen other people are publicly known to have been wrongfully arrested by U.S. police because of reliance on erroneous facial recognition results:
- Nijeer Parks, arrested by police in Woodbridge, New Jersey (February 2019)
- Michael Oliver, arrested by police in Detroit, Michigan (July 2019)
- Robert Williams, arrested by police in Detroit, Michigan (January 2020)
- Christopher Gatlin, arrested by police in St. Louis, Missouri (August 2021)
- Alonzo Sawyer, arrested by Maryland transit police (March 2022)
- Randal Quran Reid, arrested by Georgia police on a warrant issued in Jefferson Parish, Louisiana (November 2022)
- Porcha Woodruff, arrested by police in Detroit, Michigan (February 2023)
- Jason Killinger, arrested by police in Reno, Nevada (September 2023)
- Robert Dillon, arrested on a warrant obtained by police in Jacksonville Beach, Florida (August 2024)
- Javier Lorenzano-Nunez, arrested by police in Phoenix, Arizona (October 2024)
- Trevis Williams, arrested by police in New York City (April 2025)
- Angela Lipps, arrested by U.S. Marshals in Tennessee on a warrant obtained by police in Fargo, North Dakota (July 2025)
- Beau Burgess, arrested by police in Orlando, Florida (August 2025)
When the ACLU sued Detroit police on behalf of Robert Williams in 2021, supporters of police using facial recognition technology characterized his wrongful arrest as an unfortunate but isolated mistake that shouldn't undermine trust in the technology. But the wrongful arrests have kept coming. It is now impossible to ignore the dangers of facial recognition technology in policing. Indeed, police keep letting the same predictable failures happen again and again.
Facial recognition technology often produces false matches. Part of what is so dangerous about these systems is that when they get it wrong, innocent people who look similar to a suspect are often flagged. That is exactly what facial recognition is designed to do --- find similar faces in a database, most or all of whom aren't actually a match. When facial recognition technology generates false matches to innocent lookalikes, it can taint the investigation by tricking witnesses and police into mistakenly believing they've found the suspect.
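The failure mode described above is inherent to how similarity search works, and a purely illustrative toy can make it concrete. In this sketch (random vectors stand in for face embeddings, and the names `top_k_matches`, `database`, and `probe` are invented for illustration, not drawn from any real system), a top-k search always returns the k closest faces in the database, whether or not the actual suspect is enrolled at all:

```python
import numpy as np

# Toy illustration: a face-matching system reduces each face to an embedding
# vector and returns the top-k most similar entries in its database. Random
# vectors here stand in for real embeddings.
rng = np.random.default_rng(0)
database = rng.normal(size=(1000, 128))  # 1,000 "enrolled" face embeddings
probe = rng.normal(size=128)             # suspect photo embedding (NOT enrolled)

def top_k_matches(probe, database, k=5):
    # Cosine similarity between the probe and every enrolled face.
    sims = database @ probe / (np.linalg.norm(database, axis=1) * np.linalg.norm(probe))
    order = np.argsort(sims)[::-1][:k]   # indices of the k most similar faces
    return list(zip(order.tolist(), sims[order].tolist()))

# The suspect is not in the database, yet k candidate "matches" come back anyway.
candidates = top_k_matches(probe, database)
print(len(candidates))  # 5
```

The point of the sketch is that the system has no notion of "no match": it ranks whoever is closest, so the most similar-looking innocent people are surfaced as candidates by design.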
In several cases, people have been wrongfully arrested after police moved straight from facial recognition results to photo lineups presented to witnesses. When presented with photos containing an image of a person who was chosen by facial recognition technology, surrounded by filler photos of people who look less like the suspect, witnesses unsurprisingly think they've found the culprit. That tainting of lineup identifications has led to the wrongful arrests of at least seven people.
In other cases, including Ms. Williams', the only confirmation that police obtained was a visual comparison of the suspect photo to the facial recognition result by a law enforcement officer, who was similarly influenced by the technology into thinking they had a match.
Time and time again, police have failed to conduct reliable investigations and have ignored obvious reasons to question a facial recognition lead. Many of the people wrongfully arrested had visible differences from the suspect in the photo: Michael Oliver had full tattoo sleeves, Porcha Woodruff was eight months pregnant, and Trevis Williams was eight inches taller and seventy pounds heavier. In the case leading to Nijeer Parks' arrest, police didn't wait for the results of DNA and fingerprint analysis that would have pointed to someone else. And for those who were arrested for crimes far from where they lived --- including Kimberlee Williams, Randal Quran Reid, Robert Dillon, and Angela Lipps --- police appear not to have investigated whether they could even have been in the right city or state at the time of the crime.
Without Protections, Facial Recognition Technology Will Keep Tainting Investigations
Despite police department policies and official disclaimers warning officers that facial recognition technology results are not sufficient grounds to arrest someone, police continue letting the technology ruin their investigations. The problems don't stop there. Multiple studies have shown that facial recognition technology produces higher false match rates for people of color, women, younger people, and the elderly. Unsurprisingly, most of the known wrongful arrests from this technology have been of Black people. But as more stories come to light, including that of Ms. Williams and other white people subjected to false arrests, we are seeing that nobody is safe from having their lives upended by this technology.
These serious dangers call for serious solutions. More than 20 cities and other jurisdictions across the country have banned police from using facial recognition technology at all. In Detroit, under a landmark settlement in Robert Williams' wrongful arrest case, the police department no longer permits officers to request arrest warrants based on only a photo lineup combined with a facial recognition-based lead. Indiana has enacted a similar protection into state law.
Ms. Williams will never get back the six months she spent in jail for a crime she clearly had nothing to do with. In our letters to Maryland police today, we are seeking both accountability and serious policy changes to minimize the chance of this happening to anyone in the future. One wrongful arrest from this dangerous technology is an outrage. More than a dozen, and counting, is a complete travesty that lawmakers and police must take immediate action to end.
Incident 1471: Meta Internal AI Agent Reportedly Gave Advice That Allegedly Exposed Sensitive Data to Unauthorized Employees
“A rogue AI led to a serious security incident at Meta”
For almost two hours last week, Meta employees had unauthorized access to company and user data thanks to an AI agent that gave an employee inaccurate technical advice, as previously reported by The Information. Meta spokesperson Tracy Clayton said in a statement to The Verge that "no user data was mishandled" during the incident.
A Meta engineer was using an internal AI agent, which Clayton described as "similar in nature to OpenClaw within a secure development environment," to analyze a technical question another employee had posted on an internal company forum. But after analyzing the question, the agent replied to it publicly on its own, without first getting approval; the reply was meant to be shown only to the employee who requested it, not posted publicly.
An employee then acted on the AI's advice, which "provided inaccurate information" that led to a "SEV1" level security incident, the second-highest severity rating Meta uses. The incident temporarily allowed employees to access sensitive data they were not authorized to view, but the issue has since been resolved.
According to Clayton, the AI agent involved didn't take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done. A human, however, might have done further testing and made a more complete judgment call before sharing the information --- and it's not clear whether the employee who originally prompted the answer planned to post it publicly.
"The employee interacting with the system was fully aware that they were communicating with an automated bot. This was indicated by a disclaimer noted in the footer and by the employee's own reply on that thread," Clayton commented to The Verge. "The agent took no action aside from providing a response to a question. Had the engineer that acted on that known better, or did other checks, this would have been avoided."
Last month, an AI agent from the open-source platform OpenClaw went more directly rogue at Meta: an employee asked it to sort through emails in her inbox, and it deleted emails without permission. The whole idea behind agents like OpenClaw is that they can take action on their own, but like any other AI model, they don't always interpret prompts and instructions correctly or give accurate responses, a fact Meta employees have now discovered twice.
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – February, March, and April 2026
By Daniel Atherton
2026-05-05
Lisière de la forêt de Fontainebleau, Alfred Sisley, 1865 🗄 Trending in the AIID For this roundup, I'll be surveying the new incident IDs t...
The Database in Print
Read about the database at Time Magazine, Vice News, Venture Beat, Wired, Bulletin of the Atomic Scientists, and Newsweek, among other outlets.
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.
Random Incidents
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The governance of the Collaborative is architected around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application. We kindly request your financial support with a donation.
Organization Founding Sponsor
Database Founding Sponsor

Sponsors and Grants





