
Welcome to the AI Incident Database


Incident 1429: Bank of Italy Warned That Purported Deepfakes of Governor Fabio Panetta Were Used in Allegedly Fraudulent Investment Promotions

Latest Incident Report
bancaditalia.it, 2026-03-22

Incident 1422: Unlabeled Purportedly AI-Generated 'Jessica Foster' Account Reportedly Posed as Pro-Trump Army Service Member to Attract Followers and Funnel Users to Paid Adult Content

“Thousands have swooned over this MAGA dream girl. She’s made with AI.”
washingtonpost.com, 2026-03-21

The beautiful Army blonde Jessica Foster has posed with an F-22 Raptor fighter jet, donned camouflage in the desert and walked a tarmac with President Donald Trump on the first day of the strikes on Iran.

The slew of photos and videos depicting the patriotic life of the MAGA dream girl have led her Instagram account to explode, gaining more than a million followers since she began posting four months ago.

But Foster is an illusion: a fake woman who experts say was probably created by an artificial intelligence image generator. There's no public record of Foster's military service, and the account, despite not being labeled AI, is packed with indicators that she is fake. Between many of her pro-Trump posts, Foster also prominently displays her feet.

Foster's viral takeoff highlights an increasingly prevalent strategy for winning online attention. A slew of right-wing accounts, peddling patriotism mixed with soft-core pornography, use fake women and convincing imagery to grab viewers across a distracted internet, monetize their interest and score political points.

Accounts showing AI-generated women masquerading as Trump-supporting soldiers, truckers and police officers have built surging audiences on platforms such as TikTok, Instagram and X, where thousands of commenters have offered responses suggesting they believe the women are real.

A version of the strategy has also played out in recent weeks beyond the United States. Hundreds of AI-generated videos showing Iranian female soldiers and pilots cheering on the nation's military have proliferated online, as the BBC first reported. One telltale sign they're fake: Iran bans women from combat roles.

Sam Gregory, executive director of Witness, a video-advocacy group that researches deepfakes, said Foster exemplifies how deceptive AI video generators can be.

AI advances have made it easier for creators to generate a consistent fake character for use across multiple photos or videos and to situate the character next to real public figures, making it seem like the character is at the center of actual events.

By applying political trappings and current events to these characters' fake lives, their creators probably hope to maximize their virality and stand out in an online crowd, Gregory said. Once they've got people's attention, the creators can, as in Foster's case, shunt them to a paid platform, where the user is told to pay up for more lurid scenes.

Foster is "the apotheosis of what MAGA fantasizes about, all packed into one channel, but it's obviously AI: There's no provenance to the images, no history around her, visible glitches," he said. "There's any number of real and unreal beautiful women online, but having one that's so proximate to power, around the big events of the day, has a different cachet."

The unknown person who runs the Foster account did not respond to requests for comment. After The Washington Post sought comment, the account on Wednesday posted a new photo showing Foster cruising aboard a military vessel in the Strait of Hormuz.

An Army spokeswoman said officials there could find no records of Foster. Instagram removed Foster's account for violating its policies late Thursday, a Meta spokesperson said. The White House did not respond to requests for comment.

Foster's first video, posted on Thanksgiving, showed the blue-eyed woman sitting beneath an American flag in a tight shirt and included a caption asking for comments from every "straight guy that likes a American army girl."

More than 50 photos and videos have followed in the months since, revealing a busy series of meetings with first lady Melania Trump, Ukrainian President Volodymyr Zelensky, Russian President Vladimir Putin and soccer star Lionel Messi. Between those moments, Foster made bawdy jokes, gave speeches and joined her female comrades for pillow fights.

“Best job in the world,” said a caption with a video last month showing Foster in a helmet and a tactical vest.

The moments were extraordinarily outlandish, but even the details offered their own giveaways. The insignia on her combat and service uniforms suggest a muddled mix of qualifications, indicating that she is either a staff sergeant, a Ranger school graduate or a one-star general.

In one photo, she is depicted giving a speech to the “Border of Peace Conference,” a bungled version of Trump’s new Board of Peace. In another, in which she is shown holding a captive Nicolás Maduro, Venezuela’s former president, her uniform lists her first name where it should list her last.

Thousands of users have flocked to her comment sections nevertheless. Referring to Foster, the Silicon Valley investor Justine Moore, of the venture capital firm Andreessen Horowitz, said in an X post, “I’m genuinely floored by how many dudes are following influencers that are clearly AI.”

Foster’s posts have received a total of more than 100,000 comments, many from accounts with men in their profile photos. Some called her out as AI, though many just celebrated her looks, sent her heart-eyes emojis or cheered her on.

The verified Instagram account of a Brazilian transportation official liked most of her photos and told Foster she was “linda,” or beautiful. Another user asked, “Why do you NEVER reply?” (The accounts did not respond to requests for comment.)

Foster’s Instagram, which includes galleries titled “training,” “U.S.,” and “dailyarmy,” originally linked to an account on OnlyFans, a subscription marketplace popular with porn creators. A spokesperson for OnlyFans said the account was removed for breaking its rules, which require that all creators be verified (human) adults.

Foster now links viewers to her account on a smaller OnlyFans competitor, Fanvue, which allows AI models and labels them as “generated or enhanced.”

Her account there, “jessicanextdoor,” says its location is Fort Bragg, the giant military base in North Carolina that is home to the Army’s Special Operations Command, and describes Foster as a “public servant by day, troublemaker by night🤍.”

Many influencers use this style of sales-funnel technique to convert free viewers into paying customers for more explicit, locked-away content. Fanvue declined to share information about the account, which invites viewers to subscribe for “special stuff.”

“Btw i respond to every message but be patient since i am not a robot,” the account said, with a winky-face emoji. Within days of its creation, the account received more than 10,000 likes.

Mischief-makers don’t need AI to deceive people on the internet. Real women have had their photos swiped online and used to distribute political messaging they didn’t endorse: In 2023, a Trump supporter was warped into a left-wing “rage bait” account. And in 2024, European influencers were made to appear as MAGA die-hards.

But Joan Donovan, an assistant professor at Boston University who studies media manipulation, said AI has helped such accounts multiply because they are easy to create, endlessly customizable and offer creators a clear path to moneymaking. The accounts’ political sheen also helps ensure the images end up appearing in people’s news feeds.

The biggest risk, Donovan said, is that the grift strategy can be transformed into information warfare, with the anonymously run accounts deployed as a kind of “bot army” that can distribute propaganda, disinformation or wartime talking points en masse.

“The danger of this is that we’re moving toward a society of the unreal,” Donovan said. “It’s one way to get political messaging across, and it’s effective. We don’t even know if selling feet pics is Jessica Foster’s final form.”

Alex Horton contributed to this report.


Incident 1421: Purported Deepfake Applicant Reportedly Impersonated Tokyo IT Executive Kenbun Yoshii During Online Job Interview

“AI 'fake applicant' case raises North Korea job scam fears”
upi.com, 2026-03-21

March 19 (Asia Today) -- A suspected deepfake job applicant infiltrated an online hiring interview at a Japanese IT company, raising concerns about possible links to North Korean schemes to secure overseas employment and generate foreign currency.

According to a report Thursday by Yomiuri Shimbun, the applicant used artificial intelligence to impersonate a real individual by altering facial features and personal credentials during a remote interview conducted earlier this month in Tokyo.

The man, who identified himself under a false name, claimed he had been raised in the United States and requested fully remote work. When told that in-person attendance was required, he ended the interview after about two minutes.

The applicant had submitted an English-language résumé through a Japanese recruitment platform, listing experience at a major company and claiming native-level Japanese proficiency. However, the recruiter later discovered that the profile and career details matched those of Kenbun Yoshii, the chief executive of a Tokyo-based IT firm.

Yoshii said publicly available images and videos of him appeared to have been used to create the fake identity, describing the incident as "creepy and frightening." He later received multiple reports that similar applicants using his identity had applied to other companies.

Analysis of the interview footage by several organizations, including Okta and a Tokyo-based deepfake detection startup, found a high likelihood the video was generated using AI. Investigators cited irregularities such as unnatural hairline boundaries, brief misalignment of the eyes and mismatched lip movements and audio.

Okta said more than 6,500 similar cases have been identified globally in recent years, involving individuals believed to be North Korean IT workers using fake identities to obtain remote jobs at foreign companies. Some cases involved earnings being transferred back to North Korea, potentially supporting its weapons programs.

A separate analysis by Trend Micro found evidence that North Korean cyber groups have been experimenting with deepfake technology and producing large volumes of falsified résumés, often claiming full-stack engineering expertise.

Security experts warned that such tactics, once concentrated in the United States and Europe, are now spreading to Japan. They urged companies to strengthen identity verification procedures, including multi-factor authentication and in-person interviews.

Researchers also noted that rapid advances in deepfake technology have made detection increasingly difficult without technical tools, recommending layered verification methods and in-depth technical questioning during hiring processes.

-- Reported by Asia Today; translated by UPI


Incident 1423: KPMG Australia Partner Reportedly Used AI to Cheat on Internal AI Training Test and Was Fined A$10,000

“KPMG partner fined for using artificial intelligence to cheat in AI training test”
theguardian.com, 2026-03-21

A partner at the consultancy KPMG has been fined for using artificial intelligence to cheat during an internal training course on AI.

The unnamed partner was fined A$10,000 (£5,200) and was reportedly one of a number of staff to have used the tactic.

More than two dozen KPMG Australia staff have been caught using AI tools to cheat on internal exams since July, the company said, increasing concerns over AI-fuelled cheating in accountancy firms.

The consultancy used its own AI detection tools to discover the cheating, according to the Australian Financial Review, which first reported on it.

The big four accountancy firms have grappled with cheating scandals in recent years. In 2021, KPMG Australia was fined A$615,000 over “widespread” misconduct, after it was found that more than 1,100 partners had been involved in “improper answer-sharing” on tests designed to assess skill and integrity.

But AI tools have introduced new possibilities for rule-breaking. In December, the UK’s largest accounting body, the Association of Chartered Certified Accountants (ACCA), said it would require accounting students to take exams in person, because otherwise it was too difficult to stop AI cheating.

Helen Brand, the chief executive of the ACCA, said at the time that AI tools had led to a “tipping point” as the use of them was outpacing safeguards against cheating put in place by the association.

Firms such as KPMG and PricewaterhouseCoopers have also been requiring their staff to use AI at work, reportedly in an effort to boost profits and cut costs.

KPMG partners will reportedly be assessed on their ability to use AI tools during their 2026 performance reviews, with the firm’s global AI workforce lead, Niale Cleobury, saying: “We all have a responsibility to be bringing AI to all of our work.”

Some commenters on LinkedIn noted the irony in using AI to cheat in AI training. KPMG is “fighting AI adoption instead of redesigning how they train people. This is not a cheating problem – if we look at the new world order. This is a training problem,” wrote Iwo Szapar, the creator of a platform that ranks organisations’ “AI maturity”.

KPMG said it had adopted measures to identify the use of AI by its staff and would keep track of how many of its workers misused the technology.

Andrew Yates, the chief executive of KPMG Australia, said: “Like most organisations, we have been grappling with the role and use of AI as it relates to internal training and testing. It’s a very hard thing to get on top of given how quickly society has embraced it.

“Given the everyday use of these tools, some people breach our policy. We take it seriously when they do. We are also looking at ways to strengthen our approach in the current self-reporting regime.”


Incident 1424: Claude Code Agent Reportedly Deleted DataTalks.Club Production Infrastructure, Database, and Snapshots via Terraform

“Claude Code deletes developers' production setup, including its database and snapshots — 2.5 years of records were nuked in an instant”
tomshardware.com, 2026-03-19

Everyone loves a good story about agent bots gone wrong, and those often come with a bit of schadenfreude towards our virtual companions. Sometimes, though, the errors can be attributed to improper supervision, as was the case of Alexey Grigorev, who was brave enough to detail how he got Claude Code to wipe years' worth of records on a website, including the recovery snapshots.

The story begins when Grigorev wanted to move his website, AI Shipping Labs, to AWS and have it share the same infrastructure as DataTalks.Club. Claude itself advised against that option, but Grigorev decided it wasn't worth the hassle or cost of keeping two separate setups.

Grigorev uses Terraform, an infrastructure-management utility that can create (or destroy) entire setups, including networks, load balancing, databases, and, naturally, the servers themselves. He had Claude run a Terraform plan to set up the new website, but forgot to upload a vital state file, which contains a full description of the setup as it exists at any moment in time.

Claude did what Grigorev wanted and created a setup for the Shipping Labs site; however, the operator stopped it halfway. Because it was missing the state file, it had created duplicate resources. Grigorev had Claude identify the duplicate resources to correct the situation, then uploaded the state file, believing he had the situation sussed out.

Unfortunately, Grigorev assumed at this point that the bot would continue cleaning up duplicate resources and only then consult the state file to see how the setup was meant to look in the first place. Terraform and similar tools can be very unforgiving, particularly when coupled with blind obedience. Because Claude now had the state file, it logically followed it, issuing a Terraform "destroy" operation in preparation for setting things up correctly this time.
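
For readers who don't use Terraform, a minimal sketch of the failure mode follows. This is illustrative only, not Grigorev's actual configuration; the resource, names, and AMI ID are hypothetical placeholders.

    # Hypothetical Terraform resource, for illustration only.
    # Terraform records everything it creates in a state file; the
    # configuration alone says nothing about what already exists.
    resource "aws_instance" "web" {
      ami           = "ami-00000000"  # placeholder AMI ID
      instance_type = "t3.micro"
    }

    # Without the state file, "terraform apply" has no record that this
    # instance was already created, so it provisions a duplicate.
    # Once the state file is present, "terraform destroy" tears down
    # every resource that state tracks, including resources another
    # site depends on if both share the same state.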

Given that the infrastructure description included the DataTalks.Club website, this resulted in a full wipe of the setup for both sites, including a database with 2.5 years of records, and database snapshots that Grigorev had counted on as backups. The operator had to contact Amazon Business support, which helped restore the data within about a day.

In the post-mortem, Grigorev describes a few measures he's taking to avoid similar incidents in the future, including setting up a periodic test of database restores, applying delete protections in Terraform and tightening AWS permissions, and moving the Terraform state file to S3 storage instead of his local machine. He also admitted he "over-relied on the AI agent to run Terraform commands"; he is now stopping the agent from doing so and will manually review every plan Claude presents so he can run any destructive actions himself.
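
As a rough sketch of what such guardrails can look like in Terraform, assuming an RDS database managed by the standard AWS provider (the bucket, identifiers, and credentials below are hypothetical, and this is one plausible arrangement rather than Grigorev's actual fix):

    # Keep the state file in S3 instead of on a local machine.
    terraform {
      backend "s3" {
        bucket = "example-terraform-state"   # hypothetical bucket name
        key    = "datatalks/terraform.tfstate"
        region = "us-east-1"
      }
    }

    variable "db_password" {
      type      = string
      sensitive = true                       # supplied out of band
    }

    # Guard the database against accidental destruction.
    resource "aws_db_instance" "main" {
      identifier                = "example-db"  # hypothetical identifier
      engine                    = "postgres"
      instance_class            = "db.t3.micro"
      allocated_storage         = 20
      username                  = "dbadmin"
      password                  = var.db_password
      deletion_protection       = true       # AWS refuses the delete API call
      skip_final_snapshot       = false      # take a last snapshot on delete
      final_snapshot_identifier = "example-db-final"

      lifecycle {
        prevent_destroy = true               # Terraform refuses to plan a destroy
      }
    }

With prevent_destroy set, a plan that would delete the resource fails before anything runs; deletion_protection adds a separate AWS-side stop even if the Terraform guard is later edited away.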


It's tempting to file this story as another case of "dumb bot gone wrong," but it's a fair guess that most sysadmins will spot the baseline issues with Grigorev's approach, including granting wide-ranging permissions to what is effectively a subordinate, and failing to scope permissions in a production environment to begin with.

Perhaps the biggest lesson lies in the assumption that Claude would even have the context (pun unintended) to understand what the existence of the second website meant; a junior sysadmin wouldn't have, either.

Quick Add New Report URL

Submitted links are added to a review queue to be resolved to a new or existing incident record. Incidents submitted with full details are processed before URLs submitted without them.
About the Database

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – November and December 2025 and January 2026

By Daniel Atherton

2026-02-02

Le Front de l'Yser (Flandre), Georges Lebacq, 1917. 🗄 Trending in the AIID: Between the beginning of November 2025 and the end of January 2026...

The Database in Print

Read about the database at Time Magazine, Vice News, Venture Beat, Wired, the Bulletin of the Atomic Scientists, and Newsweek, among other outlets.

Incident Report Submission Leaderboards

These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.

New Incidents Contributed
  • 🥇 Daniel Atherton: 743
  • 🥈 Anonymous: 156
  • 🥉 Khoa Lam: 93

Reports added to Existing Incidents
  • 🥇 Daniel Atherton: 845
  • 🥈 Anonymous: 238
  • 🥉 Khoa Lam: 230

Total Report Contributions
  • 🥇 Daniel Atherton: 3127
  • 🥈 Anonymous: 986
  • 🥉 1: 587
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.

Random Incidents
  • Predictive Policing Biases of PredPol
  • Aledo High School Student Allegedly Generates and Distributes Deepfake Nudes of Seven Female Classmates
  • ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report
  • Child Sexual Abuse Material Taints Image Generators
  • ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, we invite you to read the founding report and to learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application. We kindly request your financial support with a donation.

