
Welcome to the AI Incident Database


Incident 1421: Purported Deepfake Applicant Reportedly Impersonated Tokyo IT Executive Kenbun Yoshii During Online Job Interview

Latest Incident Report: “AI 'fake applicant' case raises North Korea job scam fears”
upi.com, 2026-03-21

March 19 (Asia Today) -- A suspected deepfake job applicant infiltrated an online hiring interview at a Japanese IT company, raising concerns about possible links to North Korean schemes to secure overseas employment and generate foreign currency.

According to a report Thursday by Yomiuri Shimbun, the applicant used artificial intelligence to impersonate a real individual by altering facial features and personal credentials during a remote interview conducted earlier this month in Tokyo.

The man, who identified himself under a false name, claimed he had been raised in the United States and requested fully remote work. When told that in-person attendance was required, he ended the interview after about two minutes.

The applicant had submitted an English-language résumé through a Japanese recruitment platform, listing experience at a major company and claiming native-level Japanese proficiency. However, the recruiter later discovered that the profile and career details matched those of Kenbun Yoshii, the chief executive of a Tokyo-based IT firm.

Yoshii said publicly available images and videos of him appeared to have been used to create the fake identity, describing the incident as "creepy and frightening." He later received multiple reports that similar applicants using his identity had applied to other companies.

Analysis of the interview footage by several organizations, including Okta and a Tokyo-based deepfake detection startup, found a high likelihood the video was generated using AI. Investigators cited irregularities such as unnatural hairline boundaries, brief misalignment of the eyes and mismatched lip movements and audio.

Okta said more than 6,500 similar cases have been identified globally in recent years, involving individuals believed to be North Korean IT workers using fake identities to obtain remote jobs at foreign companies. Some cases involved earnings being transferred back to North Korea, potentially supporting its weapons programs.

A separate analysis by Trend Micro found evidence that North Korean cyber groups have been experimenting with deepfake technology and producing large volumes of falsified résumés, often claiming full-stack engineering expertise.

Security experts warned that such tactics, once concentrated in the United States and Europe, are now spreading to Japan. They urged companies to strengthen identity verification procedures, including multi-factor authentication and in-person interviews.

Researchers also noted that rapid advances in deepfake technology have made detection increasingly difficult without technical tools, recommending layered verification methods and in-depth technical questioning during hiring processes.

-- Reported by Asia Today; translated by UPI


Incident 1420: Balázs Orbán Allegedly Published Purported Deepfake of Péter Magyar Claiming He Would Cut Pensions

“Hungary's opposition leader to file criminal complaint over alleged deepfake video”
reuters.com, 2026-03-21

BUDAPEST, Oct 29 (Reuters) - Hungarian opposition leader Peter Magyar said on Wednesday that he would file a criminal complaint against Prime Minister Viktor Orban's chief political aide for publishing what Magyar said was a deepfake video of him generated by artificial intelligence.

Magyar's centre-right Tisza party leads most opinion polls ahead of elections due in April 2026.

Aide Balazs Orban, who is not related to the prime minister, published a 38-second video on Facebook on Tuesday that purported to show Magyar saying that he would cut pensions.

The allegedly AI-generated video was published less than six months before a parliamentary election that could be Viktor Orban's toughest since he entered office in 2010, with the economy stagnating.

AIDE ACCUSES MAGYAR OF TRYING TO SILENCE THE TRUTH

In the video, Magyar seemingly says, among other things, that the current pension system was too generous and that some pensions needed to be taxed.

The video was not labeled as AI-generated. The European Union's AI Act will mandate such disclosures when it takes effect in 2026.

Magyar said in a statement that he would file a criminal complaint against Balazs Orban, accusing him of creating and publishing a fake video that amounts to defamation.

Balazs Orban did not immediately reply to emailed Reuters questions. But on Facebook he accused Magyar of wanting to "silence those who uncover their real intentions."

Analysts said the incident risked widening Hungary's deep political divisions.

"Things get dangerous when there is a video in which Magyar says things that he never did," said Gabor Polyak, head of the School of Media and Communication at Eotvos Lorand University.

Polyak added that Hungarian civil and criminal laws provide protection against defamatory fake videos, though court rulings on such cases remain untested.


Incident 1416: Purported Facial Recognition Error Reportedly Led to Arrest and Jailing of Tennessee Woman in North Dakota Fraud Case

“Tennessee grandmother jailed after AI facial recognition error links her to fraud”
theguardian.com, 2026-03-16

A Tennessee grandmother says she is trying to rebuild her life after an incident of mistaken identity by an artificial intelligence (AI) facial recognition system tied her to a North Dakota bank fraud investigation.

Angela Lipps, 50, spent nearly six months in jail after Fargo police identified her as a suspect in an organized bank fraud case using facial recognition software, according to south-east North Dakota news outlet InForum. Lipps told the outlet she had never been to North Dakota and did not commit the crimes.

Lipps, a mother of three and grandmother of five, said she has lived most of her life in north-central Tennessee. She had never been on an airplane until authorities flew her to North Dakota last year to face charges.

In July, US marshals arrested Lipps at her Tennessee home while she was babysitting four children. She said she was taken away at gunpoint and booked into a county jail as a fugitive from justice from North Dakota.

"I've never been to North Dakota, I don't know anyone from North Dakota," Lipps told WDAY News.

She remained in a Tennessee jail for nearly four months without bail while awaiting extradition. She was charged with four counts of unauthorized use of personal identifying information and four counts of theft.

According to Fargo police records obtained by WDAY News, detectives investigating bank fraud cases in April and May 2025 reviewed surveillance video of a woman using a fake US army military ID to withdraw tens of thousands of dollars.

The officers allegedly used facial recognition software to identify the suspect as Lipps. A detective reportedly wrote in court documents that Lipps appeared to match the suspect based on facial features, body type and hairstyle.

Lipps told WDAY News that no one from the Fargo police department contacted her before the arrest.

Authorities in North Dakota did not transport Lipps from Tennessee until the end of October, 108 days after her arrest, according to InForum. She appeared in a North Dakota courtroom the next day.

Her attorney, Jay Greenwood, told the outlet: "If the only thing you have is facial recognition, I might want to dig a little deeper."

Lipps was later released on Christmas Eve after Greenwood obtained her bank records and presented them to investigators. The records showed Lipps was more than 1,200 miles away in Tennessee at the time investigators said the fraud occurred in Fargo.

But Lipps said Fargo police did not pay for her trip home, leaving her stranded. Local defense attorneys helped cover a hotel room and food on Christmas Eve and Christmas Day, and a local non-profit, the F5 Project, was able to help her return to Tennessee, InForum reported.

Lipps is now back home but says the experience has had lasting consequences. While jailed and unable to pay bills, Lipps lost her home, her car and her dog, she said. She also told WDAY News no one from the Fargo police department had apologized.

This is far from the first case of an AI error flagging the wrong suspect. In October, an AI system apparently mistook a Baltimore high school student's bag of Doritos for a firearm and called local police to tell them the pupil was armed. Taki Allen was sitting with friends outside the Kenwood high school in Baltimore when police officers with guns approached him, made him get on his knees, and handcuffed and searched him -- finding nothing.

Earlier this year, police arrested a man in the UK for a burglary in a city he had never visited after face-scanning software confused him with another person of south Asian heritage. Authorities had used automated facial recognition software which matched him with footage of a suspect in a £3,000 burglary 100 miles away.


Incident 1418: Meta AI Smart Glasses Reportedly Exposed Intimate User Imagery and Video to Human Reviewers in Kenya

“Subcontractors See Intimate Meta AI Visual Queries From Smart Glasses”
uploadvr.com, 2026-03-16

Subcontractors see intimate Meta AI visual queries from the company's smart glasses, sometimes accidentally triggered, a report from two Swedish newspapers revealed.

Svenska Dagbladet and Göteborgs-Posten's joint report has led to widespread worry about the privacy of smart glasses for not only others nearby, but also the owners themselves.

To be clear, the issue here is not with the intentional photo and video capture feature of the Ray-Ban and Oakley smart glasses. Photos and videos you intentionally capture with the glasses sync to your phone and are not viewed by Meta or subcontractors, nor are they used to train AI models.

Instead, the report refers to the visual query functionality of Meta AI on the devices, and its propensity for accidental activation.

The Meta AI visual queries feature started rolling out in early 2024, around six months after the Ray-Ban Meta glasses launched, and originally explicitly required saying "Hey Meta, look and tell me <query>", upon which the AI captures a frame to provide a response.

A portion of these responses are sent to outsourced contractors, in countries where labor is cheap such as Kenya, who rate the response based on whether it's useful or accurate. Over time, Meta uses this human review data to improve the quality of Meta AI responses.

Late in 2024, as announced at Meta Connect that year, Meta AI was updated to be able to more naturally infer from the context of your query whether it required a camera capture. For example "Hey Meta, what kind of plant is this?" or "Hey Meta, translate this menu" would trigger it.

This update made Meta AI more natural and useful. But it also had the side effect of making it far more likely for the AI to capture a frame when you do not intend it to, following the device incorrectly thinking it heard you say "Hey Meta". Further, with the Live AI feature available in the US and Canada, this can even include video clips. The Live AI feature lets you start a continuous conversation with the AI, similar to Google's Gemini Live on smartphones.
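The widening of the trigger condition is easy to see in the abstract. In this hypothetical sketch (not Meta's actual wake-word logic; the keyword list is assumed for illustration), an explicit trigger requires an exact phrase, while a contextual trigger fires on a much broader set of patterns, so misheard background speech becomes far more likely to cause an unintended capture:

```python
def explicit_trigger(transcript: str) -> bool:
    # Original behavior: capture a frame only on the exact explicit phrase.
    return transcript.lower().startswith("hey meta, look")

def contextual_trigger(transcript: str) -> bool:
    # Later behavior (illustrative keyword set, assumed): infer from
    # context whether the query needs a camera capture.
    t = transcript.lower()
    return t.startswith("hey meta") and any(
        w in t for w in ("look", "this", "that", "what", "translate"))
```

A phrase the device only thinks it heard, like "hey meta what time is it", passes the contextual check but not the explicit one; under the broader rule it would capture a frame the wearer never intended.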

Combined with the contractor review system, this creates the nightmare scenario the report uncovered, wherein human beings can see what are essentially accidentally-captured intimate photos from inside the homes of smart glasses owners.

The Kenyan data reviewers who spoke to the Swedish newspapers reported seeing images and video clips of naked people going to the bathroom, changing clothes, and having sex, as well as watching porn and holding up sensitive documents and bank cards.

The facility these data reviewers work at does include strict security practices that prevent them from bringing any recording devices to work, or otherwise exfiltrating these clips. But if a data breach were to somehow occur it would trigger an "enormous scandal", the report suggests.

In response to the Swedish report, Meta points out that the LED on the front of the glasses will always illuminate when capturing imagery (which is true), and the company issued a general statement noting that visual data is "first filtered to protect people's privacy", including blurring faces and license plates. But the Kenyan workers claim this filter is not perfect, and that other intimate details still remain in imagery they review.

The practice of having subcontractors review AI interaction is not unique to Meta. Amazon does this for Alexa, Google for Gemini, and Apple for Siri, for example. And in 2019 a report from Bloomberg revealed how these subcontractors heard intimate bedside conversations from Amazon Echo devices, while another report from The Guardian revealed that the same was happening with Apple's Siri.

Following the backlash from The Guardian's report, Apple made human review of Siri conversations an opt-in system, while Google allows an opt-out for Gemini.

For Meta AI, however, there is no ability to opt out. And for Meta, the smart glasses form factor, with an egocentric camera, presents unique privacy concerns for this model that could prompt those aware of the implications to never want to purchase smart glasses, or to stop using a pair they already own.

Will this report prompt Meta to change its data review practices, as Apple did following The Guardian's 2019 report? Or will it be ignored so that the company can improve Meta AI faster to catch up to stronger AI models like Google's Gemini 3? And how will Google and Amazon handle this issue as they launch consumer smart glasses in coming months?


Incident 1417: Purported Deepfake of Ashley James Reportedly Used to Promote Weight Loss Pills

“This Morning star hits back at ‘disgraceful’ fake weight loss pill claims”
metro.co.uk, 2026-03-16

This Morning star Ashley James has responded after AI was used to create a disturbing advert with her likeness, promoting weight loss pills.

Ashley has been left 'devastated' at the thought of anyone buying such pills upon her so-called recommendation, branding the fake ad a 'violation'.

Taking to Instagram this weekend, the presenter and activist began in a video: 'I have a confession. I've been taking weight loss pills.

'At least, that's what you've been led to believe...'

Footage then cuts to a digitally generated version of Ashley being interviewed on the This Morning sofa by Ben Shephard, where she appears to reveal her weight loss results and the benefits of taking such drugs.

It's incredibly realistic, featuring the This Morning colour scheme, ITV logo, and an AI character that both looks and sounds exactly like her.

‘I’ve tried everything. Seriously, everything. Each new diet was hopeful and disappointing,’ begins her AI persona.

‘I thought I should just give up, but then I saw an interview with Doctor Rangan Chatterjee, where he explained being overweight is not your fault – it’s a metabolic failure caused by age, and he’s developed a formula that restarts that metabolism, so I decided to try it.’

As text on-screen boldly states, ‘She lost 27 pounds in just one month!’, the character adds: ‘One week later, I was down nine pounds. Three weeks later, I’m down 27.

‘I feel light again. I love my reflection in the mirror again.’

In her own expert takedown of the clip, the real Ashley then informs her followers: ‘So many of you have sent me this advert, so I just want to be really clear – that is not me. It is completely AI-generated.’

‘Not only did I never say this, I’ve never taken these pills, I’ve never heard of these pills, and most importantly, nor would I ever promote them,’ she insists.

‘I’m honestly devastated that anybody might buy these products believing that I recommended them.’

The former Made in Chelsea star, who is known and loved for her body-positive content on social media, added that she ‘always turns down’ any sponsorship opportunities involving diets or weight loss pills.

‘So not only does this feel like a total violation, but the message behind it makes me incredibly angry.

‘We already live in a world where women are constantly told to shrink themselves, be smaller, be thinner, take up less space. And that’s only getting worse with the rise of weight loss injections.’

Continuing her rant in the caption, Ashley admitted that, ‘if [she] didn’t know better,’ she would assume the AI ad was real too.

‘Someone has taken my face and my voice and turned it into an advert telling women they should lose weight. If you know anything about me, you’ll know that is the exact kind of messaging I’ve spent years fighting against.’

She further cited other examples of public figures being targeted by deepfakes, with Money Saving Expert Martin Lewis forced to call out similar content in the past after members of the public were scammed out of thousands of pounds from following bogus financial advice.

‘I do not support these products and I would never tell you, or anyone else, that you need to shrink yourself or diet,’ Ashley concluded in her written caption. ‘And if you see this advert please report it. Because how social media platforms are allowing this is disgraceful! It’s scary when you think about it.’

In the comments, famous friends were eager to offer their support, expressing shock over the advert.

‘This is absolutely shocking’, wrote Carol Vorderman. ‘What is the recourse in law?’

Sarah Jayne Dunn commented: ‘😮 this is so scary!!’

‘This is terrifying!!!’, echoed Dani Harmer, while Faye Tozer raged: ‘Nothing about this is ok 🤬’

Ashley has long been a public advocate for body acceptance, particularly when it comes to motherhood.

In a post discussing the harmful rhetoric surrounding postpartum bodies, Ashley wrote in January: ‘The world looks at a mum’s body and sees something to fix, but our children look at it as their first home and love it.’

She proudly stated that, ‘babies or not, we should never have learned to hate something so magical. Our bodies ARE magical.’

The mum-of-two also often posts bikini snaps from various angles to encourage other people to feel confident.

Earlier this year, she wrote: ‘I have more confidence in my body now than I ever did before. And I’m proud of that, and I never want my daughter to see me hating on my body.’

The TV personality recently published her first book, titled Bimbo in a nod to the labels she’s been given online and in an attempt to reclaim them.

It became a bestseller, detailing her own raw experiences and unpacking the oppression and expectations of women throughout their lives.

While promoting it, she told BBC Woman’s Hour: ‘Often, if people don’t agree with me, they’ll go online and say, “She’s just a bimbo.”

‘But it’s not just “bimbo,” it’s all the labels that I feel like women are given, whether that’s “bossy,” “frigid,” “tarty,” and even into elderhood, like “crone” or “hag.” I really wanted to explore how these labels shrink us and keep us small.’

Quick Add New Report URL

Submitted links are added to a review queue, where they are resolved to a new or existing incident record. Submissions that include full incident details are processed before bare URLs.
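That triage rule can be pictured as a simple priority queue. This is an illustrative sketch only (the database's actual queue implementation is not described here); lower priority numbers are served first:

```python
import heapq

def enqueue(queue, has_full_details, url):
    # Assumed encoding for the sketch: 0 = full-detail submission,
    # 1 = bare URL, so full-detail items pop first.
    heapq.heappush(queue, (0 if has_full_details else 1, url))

queue = []
enqueue(queue, False, "https://example.com/bare-url")
enqueue(queue, True, "https://example.com/full-report")
```

Popping from `queue` now yields the full-detail submission before the bare URL, regardless of arrival order.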
About the Database

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – November and December 2025 and January 2026

By Daniel Atherton

2026-02-02

Le Front de l'Yser (Flandre), Georges Lebacq, 1917. 🗄 Trending in the AIID: Between the beginning of November 2025 and the end of January 2026...

The Database in Print

Read about the database at Time Magazine, Vice News, Venture Beat, Wired, Bulletin of the Atomic Scientists, and Newsweek, among other outlets.

Incident Report Submission Leaderboards

These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.

New Incidents Contributed
  • 🥇 Daniel Atherton: 738
  • 🥈 Anonymous: 155
  • 🥉 Khoa Lam: 93

Reports added to Existing Incidents
  • 🥇 Daniel Atherton: 840
  • 🥈 Anonymous: 235
  • 🥉 Khoa Lam: 230

Total Report Contributions
  • 🥇 Daniel Atherton: 3113
  • 🥈 Anonymous: 982
  • 🥉 1: 587
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.

Random Incidents
  • Predictive Policing Biases of PredPol
  • Aledo High School Student Allegedly Generates and Distributes Deepfake Nudes of Seven Female Classmates
  • ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report
  • Child Sexual Abuse Material Taints Image Generators
  • ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application. We kindly request your financial support with a donation.

Organization Founding Sponsor
Database Founding Sponsor
Sponsors and Grants
In-Kind Sponsors

Research

  • Defining an “AI Incident”
  • Defining an “AI Incident Response”
  • Database Roadmap
  • Related Work
  • Download Complete Database

Project and Community

  • About
  • Contact and Follow
  • Apps and Summaries
  • Editor’s Guide

Incidents

  • All Incidents in List Form
  • Flagged Incidents
  • Submission Queue
  • Classifications View
  • Taxonomies

2024 - AI Incident Database

  • Terms of use
  • Privacy Policy