Welcome to the AI Incident Database
Incident 1344: Purported AI-Generated Images Falsely Depict Kate Garraway With Fictitious Partner
“Kate Garraway says fake pics were 'hurtful' and warns of 'damage' AI can cause”
Kate Garraway said fake AI-generated images of her and a fictitious 'new partner' had been 'hurtful' for her children.
The Good Morning Britain star lost her 'darling' husband Derek Draper in January 2024, following a long health battle caused by Covid complications.
Now, two years on from his death, the ITV presenter has been the victim of an online hoax, the Mirror reports.
The 58-year-old said she initially 'laughed off' the fake images but quickly realised how the false rumours were impacting her and Derek's two kids.
Kate took to Instagram to address the rumours and said: "For months now people (even those who know me really well) have been asking me about stuff out there that is totally fake.
"On one level it's really sweet that people care about my life but very confusing - and in some cases - hurtful for the kids when it's fake news."
She further touched on the subject in an article for The Sun, explaining the imagined match-ups were co-stars and friends, including Richard Arnold and Ben Shephard.
Kate wrote in the publication that "we laughed it off".
However, she added: "The turning point for me was when I became aware of the impact it might have on my children and Derek's family, too.
"A headline popped up, sent to me by an algorithm I assume, saying: 'Everything you need to know about Kate Garraway's new boyfriend'."
She said she was "flabbergasted and intrigued" to find out more about the new man in her life - who didn't actually exist - but said one detail "stopped me cold".
Kate was left fuming over claims her son was not happy and wanted his mum to end her 'relationship'.
The TV favourite said the 16-year-old already had enough to deal with without reports claiming he was being obstructive of his mum's happiness, insisting he was "the opposite of that".
"When everything can be faked, proof starts to lose its meaning," she said, adding that once trust was eroded, it was 'extremely hard to get it back'.
She previously told Instagram followers: "Thank goodness we still have some proper 'human' journalists out there, but this is something all of us need to think about when getting our 'news' on the net - that it's a trusted source.
"In this case it only affects me really and is fundamentally silly and daft but imagine if it was something more important the damage it could do?"
Incident 1340: Purported AI-Generated Image Falsely Depicting JD Vance and Usha Vance in Public Altercation Circulated on Social Media
“Does an image show JD Vance arguing with his wife, Usha, in a restaurant?”
Claim:
A photo shared online in December 2025 authentically showed U.S. Vice President JD Vance arguing with his wife, Usha Vance, in a restaurant.
Rating:
Fake
In December 2025, an image circulated on social media purportedly showing U.S. Vice President JD Vance arguing with his wife, Usha Vance, in a restaurant.
The image appeared to show JD Vance wearing a white T-shirt and a stern expression while sitting across from Usha Vance. One Facebook user who shared it (archived) wrote:
Looks like things are not so good in Republicanistan. Usha is a liberal. Always has been. Apparently they were having and [sic] argument loud enough to quiet the restaurant. Why is he wearing a tee shirt?
The image circulated elsewhere on Facebook and X (archived here and here). In short, it was fake.
After Snopes initially published its story, YouTuber Keith Edwards reached out to confirm the image was a thumbnail he created for an October 2025 video, describing it as "fake." Edwards told us he used the artificial intelligence platform ChatGPT to create the image.
The thumbnail matched the image circulating on social media. In both, the man wore the same facial expression and the woman was positioned the same way. A reverse image search did not lead us to any earlier version of the fabricated image.
JD Vance himself appeared to support the conclusion that the image was fake in a Dec. 9 X post (archived), replying to a user who shared the image with a comment that seemed to poke fun at the rumor: "I always wear an undershirt when I go out in public to have a fight loudly with my wife."
We also ran the image through three AI-detection platforms: Hive, Sightengine and Image Whisperer. Sightengine estimated a 97% probability that the image was AI-generated, while Image Whisperer concluded with "high confidence" it was AI-generated.
Hive, however, flagged the image as "not likely to contain AI-generated or deepfake content." It's worth noting that these tools are not always accurate and their results should be interpreted with caution.
Prior to Edwards' confirmation, the team at GetReal Security, which specializes in authenticating digital media, also analyzed the image for Snopes.
Hany Farid, co-founder of GetReal and a professor at the University of California, Berkeley, told us that the image's low quality made forensic analysis difficult. However, the team found "biometric anomalies" suggesting the image "may be AI-generated or is simply a photo of someone that resembles [JD] Vance."
"In particular, the facial features in the image do not match contemporaneous images of Vance," Farid added. Contemporaneous images are photos taken around the same time that are used to verify another image's authenticity.
Matthew Stamm, an associate professor of electrical and computer engineering at Drexel University, told Snopes that he and his doctoral student analyzed the image to determine whether it was AI-generated, using a specialized forensic tool that detects subtle "fingerprints" left by generative AI. Stamm said the tool "determined with roughly 95% confidence that it was unable to detect traces left by generative AI."
However, he cautioned that this does not prove the image is real and cannot definitively rule out AI generation. Factors such as its size, low resolution and repeated sharing on social media made the analysis difficult, changing "the quality of the forensic traces that we analyze, similar to how fingerprints at a crime scene might be smudged," Stamm explained.
The fabricated image spread online amid rumors of a rift in the Vances' marriage, which began to spread in October 2025. Photos of Usha Vance without her wedding ring taken the following month further fueled speculation.
However, a spokesperson told USA Today in a Nov. 21 statement that Usha Vance "is a mother of three young children, who does a lot of dishes, gives lots of baths, and forgets her ring sometimes." Other publicly available Getty Images from November showed Usha Vance wearing her wedding ring.
In a Dec. 4 interview with NBC News, JD Vance said the couple's marriage is "as strong as it's ever been."
Incident 1343: ICE AI Resume Screening Error Allegedly Routed Inexperienced Recruits Into Inadequate Training Pathways
“ICE error meant some recruits were sent into field offices without proper training, sources say”
As Immigration and Customs Enforcement was racing to add 10,000 new officers to its force, an artificial intelligence error in how their applications were processed sent many new recruits into field offices without proper training, according to two law enforcement officials familiar with the error.
The AI tool used by ICE was tasked with looking for potential applicants with law enforcement experience to be placed into the agency's "LEO program" --- short for law enforcement officer --- for new recruits who are already law enforcement officers. It requires four weeks of online training.
Applicants without law enforcement backgrounds are required to take an eight-week in-person course at ICE's academy at the Federal Law Enforcement Training Center in Georgia, which includes courses in immigration law and handling a gun, as well as physical fitness tests.
"They were using AI to scan résumés and found out a bunch of the people who were LEOs weren't LEOs," one of the officials said.
The AI tool was initially the mechanism used to categorize résumés, the officials said, and it flagged anyone with the word "officer" on their résumé --- for example, a "compliance officer" or people who said they aspired to be ICE officers.
The majority of the new applicants were flagged as law enforcement officers, the officials said, but many had no experience in any local police or federal law enforcement force.
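The failure mode described above resembles naive keyword matching over résumé text. As a minimal hypothetical sketch (the function, names, and sample résumés below are illustrative assumptions, not ICE's actual tool):

```python
def flag_as_leo(resume_text: str) -> bool:
    """Naively flag a resume as 'law enforcement officer' if it
    contains the word 'officer' anywhere, regardless of context."""
    return "officer" in resume_text.lower()

# Hypothetical sample resumes illustrating the reported failure mode.
resumes = {
    "applicant_a": "Five years as a police officer in Atlanta.",
    "applicant_b": "Compliance officer at a regional bank.",
    "applicant_c": "I aspire to become an ICE officer.",
    "applicant_d": "Warehouse logistics coordinator.",
}

flags = {name: flag_as_leo(text) for name, text in resumes.items()}
# Only applicant_a has genuine law enforcement experience, yet
# applicant_b and applicant_c are also flagged: false positives that
# would route untrained recruits into the shorter online LEO track.
```

Matching the bare substring "officer" cannot distinguish a police officer from a compliance officer or an aspiration to become one, which is consistent with the over-flagging the officials described.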
Both law enforcement officials noted that ICE's field offices provide more training beyond what is provided at the academy or in the online course before officers are sent out onto the street and that the officers singled out by the AI tool most likely received that training. The officials weren't authorized to speak publicly and spoke to NBC News on condition of anonymity.
A DHS spokesperson called it a "technological snag" that was quickly rectified, and said it impacted about 200 hires who then reported to the Federal Law Enforcement Training Center for full training. The spokesperson said that no candidate was placed on enforcement duties without "appropriate training and credentials."
"The vast majority of new officers brought on during the hiring surge are experienced law enforcement officers who have already successfully completed a law enforcement academy," the spokesperson said.
The AI mistake was identified in mid-fall --- over a month into the recruitment surge --- and ICE immediately began taking steps to remedy the situation, including manual reviews of résumés of new hires, the officials said.
As the immigration agency surges agents into American cities, their enforcement tactics are increasingly questioned by local law enforcement, community groups and lawmakers following the shooting death of Renee Nicole Good in Minneapolis by ICE officer Jonathan Ross.
Ross had more than 10 years of experience with ICE and wouldn't have been subject to the AI screening for new recruits.
The error highlights the challenge of training such a large number of new recruits as ICE continues to ramp up operations to boost deportation numbers amid pressure from the White House. ICE has also placed some new recruits into a training program before they completed the agency's vetting process, NBC News has reported.
In Minneapolis alone, more than 2,000 ICE officers have been sent to the area to boost arrests, and they have apprehended over 2,400 people since Nov. 29, DHS spokesperson Tricia McLaughlin said. Minnesota has sued to try to remove DHS.
ICE had a mandate to hire 10,000 new officers by the end of 2025 and offered new recruits $50,000 signing bonuses using the money Congress allocated under the One Big Beautiful Bill. One of the officials said that although ICE met the goal on paper, bringing back people who were misidentified for more training means it didn't successfully add 10,000 ICE officers on the street in 2025.
Incident 1347: Automated Shuttle Bus Was Reportedly Rear-Ended During U.S. Department of Transportation Demonstration Ride in Washington, D.C.
“Officials showed off a robo-bus in D.C. It got hit by a Tesla driver.”
The U.S. Department of Transportation brought an automated bus to D.C. this week to showcase its work on self-driving vehicles, taking officials from around the country on a ride between agency headquarters at Navy Yard and Union Station. One of those trips was interrupted Sunday when the bus got rear-ended.
The bus, produced by the company Beep, was following its fixed route when it was struck by a Tesla with Maryland plates whose driver was trying to change lanes, officials said. The bus had a human driver behind the wheel for backup as required by the city. The Tesla driver stayed on the scene on H Street for about 10 minutes. No police were called.
"The service was temporarily paused after another vehicle made an illegal lane change and contacted the rear of the autonomous bus, which resulted in minor cosmetic damage to both vehicles," a spokesman for Beep said in a statement. "The autonomous bus operated appropriately in the moment and, after review, it was determined the autonomous bus was safe to resume service."
Beep is working with the Transportation Department and Carnegie Mellon University on a pilot program of automated public buses. The vehicle was brought to D.C. for an annual conference that brings together transportation researchers and policymakers, where officials from the Trump administration emphasized the White House's interest in automated driving.
"One of President [Donald] Trump's priorities, given to us through Secretary [Sean P.] Duffy, is to accelerate the commercial deployment of autonomous vehicles, and we're all in on that," Peter Simshauser, chief counsel at the National Highway Traffic Safety Administration, said during a Monday panel discussion at the conference.
Simshauser previously worked on a Hyundai robotaxi venture, Motional, which is hoping to launch in Las Vegas this year. Also on the panel about the administration's "innovation agenda" was Seval Oz, assistant transportation secretary for research, who worked on Google's self-driving car program, which became Waymo, and on sensors and software for autonomous driving.
She called the roughly 40,000 road deaths a year in the U.S. "an epidemic" autonomous vehicles could help end. "But autonomy will also change our cities. It will change the way in which we look at driving," she said.
Nationally, a handful of cities and states have robotaxis and buses engaged in fully autonomous driving, with many more doing tests and commercial plans for aggressive expansion in the next few years. District leaders have been considering allowing automated cars and buses for almost a decade, and Waymo and other companies are interested in operating self-driving taxis in the city. But officials say they are still waiting on a long-overdue report on how best to regulate the vehicles before allowing them on the streets without human oversight.
Any crash of an automated vehicle, no matter how minor, has to be reported to NHTSA. There have been six previous crashes involving automated vehicles tested in D.C., according to the agency.
Incident 1325: Reported AI-Generated Deepfake Videos Impersonating Elon Musk and Dragon’s Den Allegedly Used in Cryptocurrency Investment Scam Targeting Canadian Victims
“Two Canadians lose $2.3 million in AI deepfake crypto investment scam”
Two Canadians from Markham, Ontario, and Prince Edward Island lost a combined $2.3 million to an AI-enabled deepfake cryptocurrency investment scheme. The Ontario victim lost $1.7 million after being deceived by a fake Elon Musk video, while the other victim lost $600,000 after watching a clip falsely linked to Dragon's Den. The report, updated on December 21, 2025, at 12:31 PM UTC, cites W5's finding that AI-generated videos and fabricated dashboards were used to persuade targets that small deposits were producing real profits.
How victims were lured by AI imposters
A 51-year-old woman from Markham saw a Facebook clip that appeared to feature Elon Musk discussing a crypto opportunity. She sent an initial $250 and, two days later, was shown a $30 gain, which encouraged further deposits and trust in documents that looked official.
"I applied for almost a million dollars on the equity of my home. I took it out and I started sending it to them. Right? Like $350,000 and then $350,000." --- Ontario victim, Markham
Scammers later displayed a balance of $3 million and demanded taxes and fees before any withdrawal. To cover those costs, she borrowed $500,000 from family and friends and maxed out credit cards, bringing her total losses to $1.7 million.
A man in Prince Edward Island encountered a video that claimed a link to the TV program Dragon's Den and suggested investing could start at $250. He increased his transfers over time, at one point sending $10,000 per day, and ultimately lost $600,000. As with the first case, a fake balance of more than $1 million was shown, and withdrawal attempts were blocked.
Together, their losses totaled $2.3 million. According to the Canadian Anti-Fraud Centre, Canadians have lost $1.2 billion to investment scams over three years, and the agency believes actual losses are higher.
Reports of industrial-scale fraud networks
Former U.S. prosecutor Erin West said the fraud is organized like an industry and that many callers are themselves victims, trafficked to scam compounds in Southeast Asia and forced to work long hours. Those who refuse or attempt escape face beatings or torture, according to her account.
West described visiting cyber fraud compounds in the Philippines and said their scale reflects industrial-level operations that rely on psychological manipulation, technology, and human trafficking. She warned that as deepfake tools become cheaper and more accessible, similar scams are likely to expand globally, making it harder for ordinary investors to distinguish legitimate opportunities from AI-driven fraud.
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – August, September, and October 2025
By Daniel Atherton
2025-11-08
Trending in the AIID: Across August, September, and October 2025, the AI Incident Database logged one...
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.
Random Incidents
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. Governance of the Collaborative is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.
Organization Founding Sponsor
Database Founding Sponsor