Welcome to the AI Incident Database
Incident 1349: AI Training Dataset for Detecting Nudity Allegedly Found to Contain CSAM Images of Identified Victims
“AI Dataset for Detecting Nudity Contained Child Sexual Abuse Images”
A large image dataset used to develop AI tools for detecting nudity contains a number of images of child sexual abuse material (CSAM), according to the Canadian Centre for Child Protection (C3P).
The NudeNet dataset, which contains more than 700,000 images scraped from the internet, was used to train an AI image classifier that could automatically detect nudity in an image. C3P found that more than 250 academic works have either cited or used the NudeNet dataset since it became available for download on Academic Torrents, a platform for sharing research data, in June 2019.
"A non-exhaustive review of 50 of these academic projects found 13 made use of the NudeNet data set, and 29 relied on the NudeNet classifier or model," C3P said in its announcement.
C3P found more than 120 images of identified or known victims of CSAM in the dataset, including nearly 70 images focused on the genital or anal area of children who are confirmed or appear to be pre-pubescent. "In some cases, [there were] images depicting sexual or abusive acts involving children and teenagers such as fellatio or penile-vaginal penetration," C3P said.
People and organizations that downloaded the dataset would have had no way of knowing it contained CSAM unless they went looking for it, and most likely they did not. Even so, having those images on their machines is technically a crime.
"CSAM is illegal and hosting and distributing creates huge liabilities for the creators and researchers. There is also a larger ethical issue here in that the victims in these images have almost certainly not consented to have these images distributed and used in training," Hany Farid, a professor at UC Berkeley and one of the world's leading experts on digitally manipulated images, told me in an email. Farid also developed PhotoDNA, a widely used image-identification and content filtering tool. "Even if the ends are noble, they don't justify the means in this case."
"Many of the AI models used to support features in applications and research initiatives have been trained on data that has been collected indiscriminately or in ethically questionable ways. This lack of due diligence has led to the appearance of known child sexual abuse and exploitation material in these types of datasets, something that is largely preventable," Lloyd Richardson, C3P's director of technology, said.
Academic Torrents removed the dataset after C3P issued a removal notice to its administrators.
"In operating Canada's national tipline for reporting the sexual exploitation of children we receive information or tips from members of the public on a daily basis," Richardson told me in an email. "In the case of the NudeNet image dataset, an individual flagged concerns about the possibility of the dataset containing CSAM, which prompted us to look into it more closely."
C3P's findings are similar to 2023 research from Stanford University's Cyber Policy Center, which found that LAION-5B, one of the largest datasets powering AI-generated images, also contained CSAM. The organization that manages LAION-5B removed it from the internet following that report and only shared it again once it had removed the offending images.
"These image datasets, which have typically not been vetted, are promoted and distributed online for hundreds of researchers, companies, and hobbyists to use, sometimes for commercial pursuits," Richardson told me. "By this point, few are considering the possible harm or exploitation that may underpin their products. We also can't forget that many of these images are themselves evidence of child sexual abuse crimes. In the rush for innovation, we're seeing a great deal of collateral damage, but many are simply not acknowledging it --- ultimately, I think we have an obligation to develop AI technology in responsible and ethical ways."
Update: This story has been updated with comment from Lloyd Richardson.
Incident 1344: Purported AI-Generated Images Falsely Depict Kate Garraway With Fictitious Partner
“Kate Garraway says fake pics were 'hurtful' and warns of 'damage' AI can cause”
Kate Garraway said fake AI-generated images of her and a fictitious 'new partner' had been 'hurtful' for her children.
The Good Morning Britain star lost her 'darling' husband Derek Draper in January 2024, following a long health battle caused by Covid complications.
Now, two years on from his death, the ITV presenter has been the victim of an online hoax, the Mirror reports.
The 58-year-old said she initially 'laughed off' the fake images but quickly realised how the false rumours were impacting her and Derek's two kids.
Kate took to Instagram to address the rumours and said: "For months now people (even those who know me really well) have been asking me about stuff out there that is totally fake.
"On one level it's really sweet that people care about my life but very confusing - and in some cases - hurtful for the kids when it's fake news."
She further touched on the subject in an article for The Sun, explaining that the imagined match-ups were with co-stars and friends, including Richard Arnold and Ben Shephard.
Kate wrote in the publication that "we laughed it off".
However, she added: "The turning point for me was when I became aware of the impact it might have on my children and Derek's family, too.
"A headline popped up, sent to me by an algorithm I assume, saying: 'Everything you need to know about Kate Garraway's new boyfriend'."
She said she was "flabbergasted and intrigued" to find out more about the new man in her life - who didn't actually exist - but said one detail "stopped me cold".
Kate was left fuming over claims her son was not happy and wanted his mum to end her 'relationship'.
The TV favourite said the 16-year-old already had enough to deal with without reports claiming he was being obstructive of his mum's happiness, insisting he was "the opposite of that".
"When everything can be faked, proof starts to lose its meaning," she said, adding that once trust was eroded, it was 'extremely hard to get it back'.
She previously told Instagram followers: "Thank goodness we still have some proper 'human' journalists out there, but this is something all of us need to think about when getting our 'news' on the net - that it's a trusted source.
"In this case it only affects me really and is fundamentally silly and daft but imagine if it was something more important the damage it could do?"
Incident 1340: Purported AI-Generated Image Falsely Depicting JD Vance and Usha Vance in Public Altercation Circulated on Social Media
“Does an image show JD Vance arguing with his wife, Usha, in a restaurant?”
Claim:
A photo shared online in December 2025 authentically showed U.S. Vice President JD Vance arguing with his wife, Usha Vance, in a restaurant.
Rating:
Fake
In December 2025, an image circulated on social media purportedly showing U.S. Vice President JD Vance arguing with his wife, Usha Vance, in a restaurant.
The image appeared to show JD Vance wearing a white T-shirt and a stern expression while sitting across from Usha Vance. One Facebook user who shared it (archived) wrote:
Looks like things are not so good in Republicanistan. Usha is a liberal. Always has been. Apparently they were having and [sic] argument loud enough to quiet the restaurant. Why is he wearing a tee shirt?
The image circulated elsewhere on Facebook and X (archived here and here). In short, it was fake.
After Snopes initially published its story, YouTuber Keith Edwards reached out to confirm the image was a thumbnail he created for an October 2025 video, describing it as "fake." Edwards told us he used the artificial intelligence platform ChatGPT to create the image.
The thumbnail matched the image circulating on social media. In both, the man wore the same facial expression and the woman was positioned the same way. A reverse image search did not lead us to any earlier version of the fabricated image.
JD Vance himself appeared to support the conclusion that the image was fake in a Dec. 9 X post (archived), replying to a user who shared the image with a comment that seemed to poke fun at the rumor: "I always wear an undershirt when I go out in public to have a fight loudly with my wife."
We also ran the image through three AI-detection platforms: Hive, Sightengine and Image Whisperer. Sightengine estimated a 97% probability that the image was AI-generated, while Image Whisperer concluded with "high confidence" it was AI-generated.
Hive, however, flagged the image as "not likely to contain AI-generated or deepfake content." It's worth noting that these tools are not always accurate and their results should be interpreted with caution.
Prior to Edwards' confirmation, the team at GetReal Security, which specializes in authenticating digital media, also analyzed the image for Snopes.
Hany Farid, co-founder of GetReal and a professor at the University of California, Berkeley, told us that the image's low quality made forensic analysis difficult. However, the team found "biometric anomalies" suggesting the image "may be AI-generated or is simply a photo of someone that resembles [JD] Vance."
"In particular, the facial features in the image do not match contemporaneous images of Vance," Farid added. Contemporaneous images are photos taken around the same time that are used to verify another image's authenticity.
Matthew Stamm, an associate professor of electrical and computer engineering at Drexel University, told Snopes that he and his doctoral student analyzed the image to determine whether it was AI-generated, using a specialized forensic tool that detects subtle "fingerprints" left by generative AI. Stamm said the tool "determined with roughly 95% confidence that it was unable to detect traces left by generative AI."
However, he cautioned that this does not prove the image is real and cannot definitively rule out AI generation. Factors such as its size, low resolution and repeated sharing on social media made the analysis difficult, changing "the quality of the forensic traces that we analyze, similar to how fingerprints at a crime scene might be smudged," Stamm explained.
The fabricated image spread online amid rumors of a rift in the Vances' marriage, which began to spread in October 2025. Photos of Usha Vance without her wedding ring taken the following month further fueled speculation.
However, a spokesperson told USA Today in a Nov. 21 statement that Usha Vance "is a mother of three young children, who does a lot of dishes, gives lots of baths, and forgets her ring sometimes." Other publicly available Getty Images from November showed Usha Vance wearing her wedding ring.
In a Dec. 4 interview with NBC News, JD Vance said the couple's marriage is "as strong as it's ever been."
Incident 1343: ICE AI Resume Screening Error Allegedly Routed Inexperienced Recruits Into Inadequate Training Pathways
“ICE error meant some recruits were sent into field offices without proper training, sources say”
As Immigration and Customs Enforcement was racing to add 10,000 new officers to its force, an artificial intelligence error in how their applications were processed sent many new recruits into field offices without proper training, according to two law enforcement officials familiar with the error.
The AI tool used by ICE was tasked with looking for potential applicants with law enforcement experience to be placed into the agency's "LEO program" --- short for law enforcement officer --- for new recruits who are already law enforcement officers. It requires four weeks of online training.
Applicants without law enforcement backgrounds are required to take an eight-week in-person course at ICE's academy at the Federal Law Enforcement Training Center in Georgia, which includes courses in immigration law and handling a gun, as well as physical fitness tests.
"They were using AI to scan résumés and found out a bunch of the people who were LEOs weren't LEOs," one of the officials said.
The AI tool was initially the mechanism used to categorize résumés, the officials said, and it flagged anyone with the word "officer" on their résumé --- for example, a "compliance officer" or people who said they aspired to be ICE officers.
The majority of the new applicants were flagged as law enforcement officers, the officials said, but many had no experience in any local police or federal law enforcement force.
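To make the reported failure mode concrete, here is a minimal, hypothetical Python sketch of the kind of naive keyword rule the officials described: one that flags any résumé containing the word "officer." Nothing here reflects ICE's actual tool; the function name and sample résumés are invented for illustration.

    # Hypothetical sketch of a naive keyword rule for routing recruits.
    # This does not reflect ICE's actual system; names and data are invented.

    def is_flagged_as_leo(resume_text: str) -> bool:
        """Naive rule: treat any resume mentioning 'officer' as a law
        enforcement officer (LEO), regardless of context."""
        return "officer" in resume_text.lower()

    resumes = {
        "police veteran": "Patrol officer, City Police Department, 2012-2024",
        "compliance analyst": "Senior compliance officer at a regional bank",
        "aspiring recruit": "Goal: to become an ICE officer",
    }

    for name, text in resumes.items():
        track = ("LEO program (4 weeks online)"
                 if is_flagged_as_leo(text)
                 else "full academy (8 weeks in person)")
        print(f"{name} -> {track}")

All three applicants are routed into the LEO track, even though only the first has law enforcement experience, which is the misclassification the officials described.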
Both law enforcement officials noted that ICE's field offices provide more training beyond what is provided at the academy or in the online course before officers are sent out onto the street and that the officers singled out by the AI tool most likely received that training. The officials weren't authorized to speak publicly and spoke to NBC News on condition of anonymity.
A DHS spokesperson called it a "technological snag" that was quickly rectified and said it affected about 200 hires, who then reported to the Federal Law Enforcement Training Center for full training. The spokesperson said that no candidate was placed on enforcement duties without "appropriate training and credentials."
"The vast majority of new officers brought on during the hiring surge are experienced law enforcement officers who have already successfully completed a law enforcement academy," the spokesperson said.
The AI mistake was identified in mid-fall --- over a month into the recruitment surge --- and ICE immediately began taking steps to remedy the situation, including manual reviews of résumés of new hires, the officials said.
As the immigration agency surges agents into American cities, their enforcement tactics are increasingly questioned by local law enforcement, community groups and lawmakers following the shooting death of Renee Nicole Good in Minneapolis by ICE officer Jonathan Ross.
Ross had more than 10 years of experience with ICE and wouldn't have been subject to the AI screening for new recruits.
The error highlights the challenge of training such a large number of new recruits as ICE continues to ramp up operations to boost deportation numbers amid pressure from the White House. ICE has also placed some new recruits into a training program before they completed the agency's vetting process, NBC News has reported.
In Minneapolis alone, more than 2,000 ICE officers have been sent to the area to boost arrests, and they have apprehended over 2,400 people since Nov. 29, DHS spokesperson Tricia McLaughlin said. Minnesota has sued to try to remove DHS.
ICE had a mandate to hire 10,000 new officers by the end of 2025 and offered new recruits $50,000 signing bonuses using the money Congress allocated under the One Big Beautiful Bill. One of the officials said that although ICE met the goal on paper, bringing back people who were misidentified for more training means it didn't successfully add 10,000 ICE officers on the street in 2025.
Incident 1347: Automated Shuttle Bus Was Reportedly Rear-Ended During U.S. Department of Transportation Demonstration Ride in Washington, D.C.
“Officials showed off a robo-bus in D.C. It got hit by a Tesla driver.”
The U.S. Department of Transportation brought an automated bus to D.C. this week to showcase its work on self-driving vehicles, taking officials from around the country on a ride between agency headquarters at Navy Yard and Union Station. One of those trips was interrupted Sunday when the bus got rear-ended.
The bus, produced by the company Beep, was following its fixed route when it was struck by a Tesla with Maryland plates whose driver was trying to change lanes, officials said. The bus had a human driver behind the wheel for backup as required by the city. The Tesla driver stayed on the scene on H Street for about 10 minutes. No police were called.
"The service was temporarily paused after another vehicle made an illegal lane change and contacted the rear of the autonomous bus, which resulted in minor cosmetic damage to both vehicles," a spokesman for Beep said in a statement. "The autonomous bus operated appropriately in the moment and, after review, it was determined the autonomous bus was safe to resume service."
Beep is working with the Transportation Department and Carnegie Mellon University on a pilot program of automated public buses. The vehicle was brought to D.C. for an annual conference that brings together transportation researchers and policymakers, where officials from the Trump administration emphasized the White House's interest in automated driving.
"One of President [Donald] Trump's priorities, given to us through Secretary [Sean P.] Duffy, is to accelerate the commercial deployment of autonomous vehicles, and we're all in on that,*" *Peter Simshauser, chief counsel at the National Highway Traffic Safety Administration said during a Monday panel discussion at the conference.
Simshauser previously worked on a Hyundai robotaxi venture, Motional, which is hoping to launch in Las Vegas this year. Also on the panel about the administration's "innovation agenda" was Seval Oz, assistant transportation secretary for research, who worked on Google's self-driving car program, which became Waymo, and on sensors and software for autonomous driving.
She called the roughly 40,000 road deaths a year in the U.S. "an epidemic" autonomous vehicles could help end. "But autonomy will also change our cities. It will change the way in which we look at driving," she said.
Nationally, a handful of cities and states have robotaxis and buses engaged in fully autonomous driving, with many more hosting tests and companies laying out commercial plans for aggressive expansion in the next few years. District leaders have been considering allowing automated cars and buses for almost a decade, and Waymo and other companies are interested in operating self-driving taxis in the city. But officials say they are still waiting on a long-overdue report on how best to regulate the vehicles before allowing them on the streets without human oversight.
Any crash of an automated vehicle, no matter how minor, has to be reported to NHTSA. There have been six previous crashes involving automated vehicles tested in D.C., according to the agency.
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

Funding the AIID - Part I
By Sean McGregor
2026-01-25
[Figure: Incidents and reports through time, AI Incident Database, 2026]
Note: This post is one of several in a new series of posts that will highligh...
The Database in Print
Read about the database at Time Magazine, Vice News, Venture Beat, Wired, Bulletin of the Atomic Scientists, and Newsweek, among other outlets.
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.
Organization Founding Sponsor
Database Founding Sponsor