AI Incident Database

Incident 1421: Purported Deepfake Applicant Reportedly Impersonated Tokyo IT Executive Kenbun Yoshii During Online Job Interview

Description: In Tokyo, a Japanese IT company reportedly interviewed a job applicant who purportedly used AI-generated video manipulation to impersonate the real IT executive Kenbun Yoshii during a remote hiring interview. Investigators cited visual and audio irregularities suggesting a deepfake, and Yoshii said his publicly available images and career details appeared to have been misused.
Editor Notes: See also Incident 1118: Ongoing Purported AI-Assisted Identity Fraud Enables Unauthorized Access to Western Companies by North Korean IT Workers. Current reporting suggests this incident may be consistent with suspected North Korean IT worker tactics, but does not definitively attribute the specific incident to the North Korean government.

Tools

New Report · New Response · Discover · View History

Entities

View all entities
Alleged: Unknown deepfake technology developers and Unknown voice cloning technology developers developed an AI system deployed by Unknown fake job applicant(s), which harmed Kenbun Yoshii, Japanese IT company recruiter(s), Unnamed Japanese company, Epistemic integrity, and National security and intelligence stakeholders.
Alleged implicated AI systems: Unknown deepfake technology developers, Unknown voice cloning technology developers, and Unknown video call technology

Incident Stats

Incident ID
1421
Report Count
3
Incident Date
2026-03-19
Editors
Daniel Atherton

Incident Reports

[Asia Issues] "Fake applicant" with AI-swapped face infiltrates Japanese IT company interview, raising suspicions of North Korean covert employment
asiatoday.co.kr · 2026

The applicant appears to have misused a real person's facial photographs and career information, and experts are noting a possible connection to cases in which North Korean IT workers obtain covert employment at overseas companies to earn foreign currency.

According to a Yomiuri Shimbun report on the 19th, a man who introduced himself as "Yoshitake Kefumi" appeared earlier this month at an online mid-career hiring interview held by an IT company in Tokyo. He said that because he was "born and raised in the United States," his Japanese was poor, and he asked to work fully remotely from overseas. …

The reality of "AI impersonation interview" harm in Japan: the "surprisingly casual tactics" and latent risks revealed through reporting
businessinsider.jp · 2026

In Japan, too, an "impersonation interview" apparently conducted with generative AI has become a reality.

At a casual interview for an engineering position held on March 4 by Company A, a domestic startup that provides SaaS for the logistics sector, an applicant appeared who used AI-generated video to impersonate a real, different person. The résumé submitted in advance used the career history of a real engineer, Kenbun Yoshii, and the online interview was reportedly conducted with a person using the same real name and the same appearance.

When the interviewer, sensing something was off, followed up with the applicant after the interview, the application…

AI 'fake applicant' case raises North Korea job scam fears
upi.com · 2026

March 19 (Asia Today) -- A suspected deepfake job applicant infiltrated an online hiring interview at a Japanese IT company, raising concerns about possible links to North Korean schemes to secure overseas employment and generate foreign cu…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

Selected by our editors

North Korea-Linked Actors Allegedly Use AI Executive Deepfakes in Zoom Phishing Targeting Web3 Employee

Jun 2025 · 1 report

2024 - AI Incident Database