Welcome to the AI Incident Database
Incident 1340: Purported AI-Generated Image Falsely Depicting JD Vance and Usha Vance in Public Altercation Circulated on Social Media
“Does an image show JD Vance arguing with his wife, Usha, in a restaurant?”
Claim:
A photo shared online in December 2025 authentically showed U.S. Vice President JD Vance arguing with his wife, Usha Vance, in a restaurant.
Rating:
Fake
In December 2025, an image circulated on social media purportedly showing U.S. Vice President JD Vance arguing with his wife, Usha Vance, in a restaurant.
The image appeared to show JD Vance wearing a white T-shirt and a stern expression while sitting across from Usha Vance. One Facebook user who shared it (archived) wrote:
Looks like things are not so good in Republicanistan. Usha is a liberal. Always has been. Apparently they were having and [sic] argument loud enough to quiet the restaurant. Why is he wearing a tee shirt?
The image circulated elsewhere on Facebook and X (archived here and here). In short, it was fake.
After Snopes initially published its story, YouTuber Keith Edwards reached out to confirm the image was a thumbnail he created for an October 2025 video, describing it as "fake." Edwards told us he used the artificial intelligence platform ChatGPT to create the image.
The thumbnail matched the image circulating on social media. In both, the man wore the same facial expression and the woman was positioned the same way. A reverse image search did not lead us to any earlier version of the fabricated image.
JD Vance himself appeared to support the conclusion that the image was fake in a Dec. 9 X post (archived), replying to a user who shared the image with a comment that seemed to poke fun at the rumor: "I always wear an undershirt when I go out in public to have a fight loudly with my wife."
We also ran the image through three AI-detection platforms: Hive, Sightengine and Image Whisperer. Sightengine estimated a 97% probability that the image was AI-generated, while Image Whisperer concluded with "high confidence" it was AI-generated.
Hive, however, flagged the image as "not likely to contain AI-generated or deepfake content." It's worth noting that these tools are not always accurate and their results should be interpreted with caution.
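For readers curious what such a check involves, below is a minimal sketch of polling several detection services and comparing their answers. The endpoints and response fields are invented placeholders, not the real Hive, Sightengine, or Image Whisperer APIs; the point it illustrates is that a split verdict, like the one above, calls for human review rather than averaging the scores.

    # Hypothetical sketch: poll several AI-image detectors and surface
    # disagreement instead of a single verdict. Endpoints and response
    # fields are placeholders, not any vendor's real API.
    import requests

    DETECTORS = {
        "detector_a": ("https://example.com/a/detect", "ai_probability"),
        "detector_b": ("https://example.com/b/detect", "ai_probability"),
        "detector_c": ("https://example.com/c/detect", "ai_probability"),
    }

    def score_image(image_path):
        """Return each detector's estimated probability the image is AI-made."""
        with open(image_path, "rb") as f:
            data = f.read()
        scores = {}
        for name, (url, field) in DETECTORS.items():
            resp = requests.post(url, files={"media": data}, timeout=30)
            resp.raise_for_status()
            scores[name] = float(resp.json()[field])
        return scores

    def summarize(scores, threshold=0.8):
        flags = [s >= threshold for s in scores.values()]
        if all(flags):
            return "likely AI-generated (all detectors agree)"
        if not any(flags):
            return "no AI traces flagged (all detectors agree)"
        # Split verdicts, as with Hive vs. Sightengine here, warrant
        # human forensic review rather than trusting any one score.
        return "detectors disagree: interpret with caution"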
Prior to Edwards' confirmation, the team at GetReal Security, which specializes in authenticating digital media, also analyzed the image for Snopes.
Hany Farid, co-founder of GetReal and a professor at the University of California, Berkeley, told us that the image's low quality made forensic analysis difficult. However, the team found "biometric anomalies" suggesting the image "may be AI-generated or is simply a photo of someone that resembles [JD] Vance."
"In particular, the facial features in the image do not match contemporaneous images of Vance," Farid added. Contemporaneous images are photos taken around the same time that are used to verify another image's authenticity.
Matthew Stamm, an associate professor of electrical and computer engineering at Drexel University, told Snopes that he and his doctoral student analyzed the image to determine whether it was AI-generated, using a specialized forensic tool that detects subtle "fingerprints" left by generative AI. Stamm said the tool "determined with roughly 95% confidence that it was unable to detect traces left by generative AI."
However, he cautioned that this does not prove the image is real and cannot definitively rule out AI generation. Factors such as its size, low resolution and repeated sharing on social media made the analysis difficult, changing "the quality of the forensic traces that we analyze, similar to how fingerprints at a crime scene might be smudged," Stamm explained.
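Stamm's tool itself is proprietary, but one published family of techniques works roughly as follows: suppress the scene content to expose a noise residual, then look for periodic peaks in its frequency spectrum, an artifact of generator upsampling. The toy sketch below illustrates only that general idea; it is not the Drexel tool, and its comments note why recompression "smudges" exactly these traces.

    # Toy illustration of spectral-fingerprint analysis; not the Drexel
    # team's actual tool. Requires numpy, scipy, and Pillow.
    import numpy as np
    from PIL import Image
    from scipy.ndimage import median_filter

    def residual_spectrum(path):
        gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        # High-pass residual: scene content is suppressed, leaving the
        # faint noise pattern a generator may imprint on every output.
        residual = gray - median_filter(gray, size=3)
        return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(residual))))

    def peak_ratio(spectrum):
        # Isolated off-center peaks are a classic trace of generator
        # upsampling; real photos tend to have smoother spectra. JPEG
        # recompression and downscaling, which happen each time an image
        # is re-shared, attenuate these traces, "smudging" the evidence.
        cy, cx = np.array(spectrum.shape) // 2
        yy, xx = np.indices(spectrum.shape)
        ring = np.hypot(yy - cy, xx - cx) > min(cy, cx) * 0.5
        vals = spectrum[ring]
        return float(vals.max() / (vals.mean() + 1e-9))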
The fabricated image spread online amid rumors of a rift in the Vances' marriage, which began circulating in October 2025. Photos of Usha Vance without her wedding ring, taken the following month, further fueled speculation.
However, a spokesperson told USA Today in a Nov. 21 statement that Usha Vance "is a mother of three young children, who does a lot of dishes, gives lots of baths, and forgets her ring sometimes." Other publicly available Getty Images from November showed Usha Vance wearing her wedding ring.
In a Dec. 4 interview with NBC News, JD Vance said the couple's marriage is "as strong as it's ever been."
Incident 1325: Reported AI-Generated Deepfake Videos Impersonating Elon Musk and Dragon’s Den Allegedly Used in Cryptocurrency Investment Scam Targeting Canadian Victims
“Two Canadians lose $2.3 million in AI deepfake crypto investment scam”
Two Canadians, one from Markham, Ontario, and one from Prince Edward Island, lost a combined $2.3 million to an AI-enabled deepfake cryptocurrency investment scheme, according to a W5 report updated Dec. 21, 2025. The Ontario victim lost $1.7 million after being deceived by a fake Elon Musk video; the Prince Edward Island victim lost $600,000 after watching a clip falsely linked to Dragon's Den. AI-generated videos and fabricated dashboards were used to persuade targets that small deposits were producing real profits.
How victims were lured by AI imposters
A 51-year-old woman from Markham saw a Facebook clip that appeared to feature Elon Musk discussing a crypto opportunity. She sent an initial $250 and, two days later, was shown a $30 gain, which encouraged further deposits and trust in documents that looked official.
"I applied for almost a million dollars on the equity of my home. I took it out and I started sending it to them. Right? Like $350,000 and then $350,000." --- Ontario victim, Markham
Scammers later displayed a balance of $3 million and demanded taxes and fees before any withdrawal. To cover those costs, she borrowed $500,000 from family and friends and maxed out credit cards, bringing her total losses to $1.7 million.
A man in Prince Edward Island encountered a video that claimed a link to the TV program Dragon's Den and suggested investing could start at $250. He increased his transfers over time, at one point sending $10,000 per day, and ultimately lost $600,000. As with the first case, a fake balance of more than $1 million was shown, and withdrawal attempts were blocked.
Together, their losses totaled $2.3 million. According to the Canadian Anti-Fraud Centre, Canadians have lost $1.2 billion to investment scams over three years, and the agency believes actual losses are higher.
Reports of industrial-scale fraud networks
Former U.S. prosecutor Erin West said the fraud is organized like an industry and that many callers are themselves victims, trafficked to scam compounds in Southeast Asia and forced to work long hours. Those who refuse or attempt escape face beatings or torture, according to her account.
West described visiting cyber fraud compounds in the Philippines and said their scale reflects industrial-level operations that rely on psychological manipulation, technology, and human trafficking. She warned that as deepfake tools become cheaper and more accessible, similar scams are likely to expand globally, making it harder for ordinary investors to distinguish legitimate opportunities from AI-driven fraud.
Incident 1326: Waymo Robotaxis Allegedly Contributed to Traffic Gridlock During San Francisco PG&E Power Outage
“Waymo explains why robotaxis stalled during San Francisco blackout”
Waymo on Tuesday acknowledged that its driverless cars contributed to traffic congestion during San Francisco's massive weekend power outage, saying the scale of the disruption overwhelmed parts of its system and prompted the company to implement immediate software and emergency-response changes.
The outage, caused by a fire at a PG&E substation, knocked out electricity to nearly one-third of the city on Saturday, disabling hundreds of traffic signals and triggering gridlock across major corridors. As police officers were deployed to manually control intersections, stalled Waymo robotaxis became one of the most visible signs of the citywide disruption, drawing scrutiny from residents and elected officials.
In a blog post published Tuesday, Waymo said the unprecedented number of dark traffic signals strained safeguards designed for smaller outages. Its vehicles are programmed to treat nonfunctioning signals as four-way stops, but in some cases a vehicle requests a remote "confirmation check to ensure it makes the safest choice."
"While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests," the company said, adding that delays in confirmations contributed to congestion on already overwhelmed streets.
As the blackout persisted and the San Francisco Department of Emergency Management urged residents to stay home, Waymo said it temporarily suspended service and directed vehicles to pull over and park so they could be returned to depots in stages.
Mayor Daniel Lurie said the city was in direct contact with the company, owned by Google parent Alphabet, as conditions deteriorated.
"I made a call to the Waymo CEO and asked them to get the cars off the road immediately," Lurie said at a Monday news conference. "They were very understanding ... but we need them to be more proactive."
The incident has prompted renewed questions about how autonomous vehicles perform during large-scale emergencies. San Francisco supervisors have called for a hearing on Waymo's response, and a California regulator said Monday it is reviewing incidents involving stalled robotaxis during the outage.
Waymo said it is already rolling out fleet-wide software updates that give vehicles more context about regional power failures, allowing them to navigate intersections "more decisively."
The company also said it is updating emergency preparedness plans, expanding coordination with city officials and continuing first-responder training, noting that more than 25,000 responders worldwide have already been trained to interact with its vehicles.
"While the failure of the utility infrastructure was significant, we are committed to ensuring our technology adjusts to traffic flow during such events," a Waymo spokesperson said. "We are focused on rapidly integrating the lessons learned from this event, and are committed to earning and maintaining the trust of the communities we serve every day."
Incident 1333: Purportedly AI-Generated Images and Videos Reportedly Spread Misinformation About Nicolás Maduro's Capture on X
“AI-generated content spreads after Maduro’s removal — blurring fact and fiction”
Following the U.S. military operation in Venezuela that led to the removal of its leader, Nicolas Maduro, AI-generated videos purporting to show Venezuelan citizens celebrating in the streets have gone viral on social media.
These AI-generated clips, depicting rejoicing crowds, have amassed millions of views across major platforms like TikTok, Instagram and X.
One of the earliest and most widely shared clips on X was posted by an account named "Wall Street Apes," which has over 1 million followers on the platform.
The post depicts a series of Venezuelan citizens crying tears of joy and thanking the U.S. and President Donald Trump for removing Maduro.
The video has since been flagged by a community note, a crowdsourced fact-checking feature on X that allows users to add context to posts they believe are misleading. The note read: "This video is AI generated and is currently being presented as a factual statement intended to mislead people."
The clip has been viewed over 5.6 million times and was reshared by at least 38,000 accounts, including business mogul Elon Musk, who eventually removed his repost.
CNBC was unable to confirm the origin of the video, though fact-checkers at BBC and AFP said the earliest known version of the clip appeared on the TikTok account @curiousmindusa, which regularly posts AI-generated content.
Even before such videos appeared, AI-generated images showing Maduro in U.S. custody were circulating, ahead of the Trump administration's release of an authentic image of the captured leader.
The deposed Venezuelan president was captured on Jan. 3, 2026, after U.S. forces conducted airstrikes and a ground raid, an operation that has dominated global headlines at the start of the new year.
Along with the AI-generated videos, the AFP's fact-check team also flagged a number of examples of misleading content concerning Maduro's ousting, including footage of celebrations in Chile falsely presented as scenes from Venezuela.
Trump has also reposted several videos related to Venezuelan celebrations on Truth Social this week, though CNBC confirmed many of those were also filmed outside Venezuela, in cities such as Panama City and Buenos Aires.
One of the videos reshared by the president included old footage that first appeared online as early as July 2024 and was thus not related to the recent removal of Maduro.
The dissemination of that type of misinformation surrounding major news events is not new. Similar false or misleading content has been spread during the Israel-Palestine and Russia-Ukraine conflicts.
However, the massive reach of AI-generated content related to recent developments in Venezuela is a stark example of AI's growing role as a tool for misinformation.
Platforms such as Sora and Midjourney have made it easier than ever to quickly generate hyper-realistic video and pass it off as genuine in the chaos of fast-breaking events. The creators of that content often seek to amplify certain political narratives or sow confusion among global audiences.
Last year, AI-generated videos of women complaining about losing their Supplemental Nutrition Assistance Program, or SNAP, benefits during a government shutdown also went viral. One such AI-generated video fooled Fox News, which presented it as real in an article that was later removed.
In light of these trends, social media companies have faced growing pressure to step up efforts to label potentially misleading AI content.
Last year, India's government proposed a law requiring such labeling, while Spain approved fines of up to 35 million euros for unlabeled AI materials.
In addressing these concerns, major platforms, including TikTok and Meta, have rolled out AI detection and labeling tools, though the results appear mixed.
CNBC was able to identify some misleading TikTok videos on Venezuela that had been labeled as AI-generated, but others that appeared to be fabricated or digitally altered did not yet have warnings.
In the case of X, the platform has relied mostly on community notes for content labeling, though critics say the system often reacts too slowly to prevent AI misinformation from spreading before being identified.
Adam Mosseri, who oversees Instagram and Threads, acknowledged the challenge facing social media in a recent post. "All the major platforms will do good work identifying AI content, but they will get worse at it over time as AI gets better at imitating reality," he said.
"There is already a growing number of people who believe, as I do, that it will be more practical to fingerprint real media than fake media," he added.
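As a rough sketch of what "fingerprinting real media" could mean, provenance schemes such as C2PA have the capture device sign the content at creation so anyone can later verify it is the unmodified original. The toy below shows only the cryptographic core, using the Python cryptography library; real systems attach signed manifests that survive legitimate edits, which a bare byte-hash like this does not.

    # Toy provenance check: sign a hash of the pixels at capture time.
    # Real schemes (e.g., C2PA) use signed manifests; a bare byte-hash
    # breaks under any re-encoding, as happens on social platforms.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    def sign_capture(image_bytes, device_key):
        return device_key.sign(hashlib.sha256(image_bytes).digest())

    def verify_capture(image_bytes, signature, device_public_key):
        try:
            device_public_key.verify(
                signature, hashlib.sha256(image_bytes).digest()
            )
            return True      # provably the bytes the device signed
        except InvalidSignature:
            return False     # edited, re-encoded, or synthetic

    key = Ed25519PrivateKey.generate()
    photo = b"...raw image bytes..."
    sig = sign_capture(photo, key)
    print(verify_capture(photo, sig, key.public_key()))         # True
    print(verify_capture(photo + b"!", sig, key.public_key()))  # False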
— CNBC's Victoria Yeo contributed to this report
Incident 1329: Grok Reportedly Generated and Distributed Nonconsensual Sexualized Images of Adults and Minors in X Replies
“Elon Musk’s Grok AI generates images of ‘minors in minimal clothing’”
Elon Musk’s chatbot Grok posted on Friday that lapses in safeguards had led it to generate “images depicting minors in minimal clothing” on social media platform X. The chatbot, a product of Musk’s company xAI, has been generating a wave of sexualized images throughout the week in response to user prompts.
Screenshots shared by users on X showed Grok’s public media tab filled with such images. xAI said it was working to improve its systems to prevent future incidents.
“There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing,” Grok said in a post on X in response to a user. “xAI has safeguards, but improvements are ongoing to block such requests entirely.”
“As noted, we’ve identified lapses in safeguards and are urgently fixing them—CSAM is illegal and prohibited,” xAI posted to the @Grok account on X, referring to child sexual abuse material.
Many users on X have prompted Grok to generate sexualized, nonconsensual AI-altered versions of images in recent days, in some cases removing people’s clothing without their consent. Musk on Thursday reposted an AI photo of himself in a bikini, captioned with cry-laughing emojis, in a nod to the trend.
Grok’s image generation appeared to lack safety guardrails, allowing minors to be featured in its posts of people, usually women, wearing little clothing, according to posts from the chatbot. In a reply to a user on X on Thursday, Grok said most cases could be prevented through advanced filters and monitoring, although it said “no system is 100% foolproof”, adding that xAI was prioritising improvements and reviewing details shared by users.
When contacted for comment by email, xAI replied with the message: “Legacy Media Lies”.
The problem of AI being used to generate child sexual abuse material is a longstanding issue in the artificial intelligence industry. A 2023 Stanford study found that a dataset used to train a number of popular AI image-generation tools contained more than 1,000 CSAM images. Training AI on images of child abuse can allow models to generate new images of children being exploited, experts say.
Grok also has a history of failing to maintain its safety guardrails and posting misinformation. In May of last year, Grok began posting about the far-right conspiracy theory of “white genocide” in South Africa on posts with no relation to the topic. xAI also apologized in July after Grok began posting rape fantasies and antisemitic material, including calling itself “MechaHitler” and praising Nazi ideology. The company nevertheless secured a nearly $200m contract with the US Department of Defense a week after the incidents.
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – August, September, and October 2025
By Daniel Atherton
2025-11-08
At Templestowe, Arthur Streeton, 1889. 🗄 Trending in the AIID: Across August, September, and October 2025, the AI Incident Database logged one...
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.
Random Incidents
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The governance of the Collaborative is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.
Organization Founding Sponsor
Database Founding Sponsor