AI Incident Database

Report 6334

Associated Incidents

Incident 1254 · 1 Report
Purported AI Deepfake Reportedly Impersonated Thai PBS World Anchor and Miss Universe CEO in Fraudulent Investment Video

Verified: A clip of a Thai PBS news anchor reporting that "Anne Jakkaphong" invited investors to invest and receive a return of 500,000 baht in one month was in fact created with a deepfake.
thaipbs.or.th · 2025

A fake AI deepfake clip impersonates a Thai PBS World news anchor, reporting an interview with "Anne Jakkaphong" that invites people to invest online, claiming that a starting investment of 8,000 baht returns hundreds of thousands of baht. After investigating, Thai PBS Verify found that the clip is fake news that uses AI deepfake technology to impersonate the image and voice of a well-known figure, Anne Jakkaphong, CEO of the Miss Universe Organization, together with a Thai PBS World news anchor. The clip invites Thai youth to invest in a platform purportedly certified by the government in exchange for returns of hundreds of thousands of baht.

Source of the fake news: Facebook
--------------------------------------------------

Thai PBS Verify found a fake page (https://perma.cc/ZQ7Y-M7W3) impersonating a well-known news agency and using its logo. A two-minute video clip showed Ms. Dolyana Bunnag, a Thai PBS World news anchor, interviewing Ms. Anne Jakkaphong, CEO of the Miss Universe Organization. The clip invites online investment in English and was published as an advertisement on the Facebook platform, where it received 12,000 views (archived here: https://perma.cc/9C3V-CJLT and https://perma.cc/VUQ9-CP7W).

In the clip, the Thai PBS World anchor reports on Anne Jakkaphong and invites the public, including recent graduates, to invest in an automated financial platform that is claimed to be certified by the Thai government. A screenshot from the clip shows the AI-deepfaked video and audio of Anne Jakkaphong. The clip states that investors can start with as little as 8,000 baht and have a chance to receive returns of up to 800,000 baht. It also claims the system was tested on a small group of people, all of whom received returns of at least 500,000 baht within just one month.

The clip further asserts that all investors are guaranteed a 100% profit, with a full refund if there is no profit, and that opening the platform to wider use would enable Thai people nationwide to become millionaires. The number of participants is therefore said to be limited: those who contact or register first get access first and have the first chance to get rich. It also warns that the link to join will disappear as soon as the video ends if the viewer does not act immediately.

Original clip
-----------

Our investigation found that the original anchor footage was taken from a news report on the Thai PBS World Tonight program broadcast on June 24, 2025 (archived link). In the comparison image, the left side is the AI deepfake and the right side is the original clip. The original footage of Anne Jakkaphong was taken from an interview with foreign media published on February 11, 2023 (archived link); again, the left side is the AI deepfake and the right side is the original clip.

We also asked Mr. Sathapon Phatthanakuha (https://x.com/banksathapon?lang=en), president of Guardian AI, who confirmed that the clip was created with AI deepfake technology: the movements of the speaker's mouth do not match the audio, the video quality is low, and the facial muscles and mouth shape are blurry (archived link). The page in question also listed a website link and a contact number that could not be reached, along with a false address, and its profile picture used an image belonging to the Thairath TV news agency. A screenshot shows the fake page using the Thairath TV logo (archived link).

Previously, Ms. Dolyana Bunnag, the Thai PBS World news anchor, was also a victim of having her video used in a deepfake reporting false news, in the case "Found: AI clip claiming a famous doctor was arrested for revealing a high blood pressure medicine formula that cured people in 3 days" (https://www.thaipbs.or.th/verify/content/391), as well as a case of an impersonated interview with Panya Nirankul.

Thai PBS Verify interviewed Ms. Dolyana, who said that in just one year there have been at least three to four similar incidents, and this one is the most realistic so far (image: https://files.wp.thaipbs.or.th/verify/2025/07/S__55410694.jpg). In this latest case, she said, the AI-forged clip smoothly imitates her tone of voice, speaking rhythm, and reporting style. Although the accent is still slightly off, viewers who do not know the facts could mistake it for a real news clip, particularly since this latest clip, made in English, appears more polished than earlier ones. Thai PBS also confirmed that it has no policy of reporting news or advertisements that encourage the public to invest.

"I am concerned about this issue because anyone can fall victim to AI deepfakes, and as a woman, I am concerned about it being used in an obscene manner."

After the first incident, the anchor filed a police report, but so far there has been no progress in the case. She stated that these cases not only cause her personal damage but also harm the image of Thai PBS, as she is an official representative of the station; having one's image and voice forged to present deceptive content inevitably damages the credibility of public media.

Thai laws relevant to deepfakes are as follows:

- Computer Crime Act B.E. 2560, Section 16: "Whoever imports into a publicly accessible computer system computer data that appears to be an image of another person, created, edited, or modified by electronic or other means, in a manner likely to damage that person's reputation or expose them to contempt, hatred, or embarrassment, shall be punished with imprisonment not exceeding three years and a fine not exceeding two hundred thousand baht."
- Criminal Code sections on defamation, insult, and dissemination of obscene media.
- Personal Data Protection Act (PDPA), if personal information (face, name, or voice) is used without consent.
- If the victim is a minor, the act may also be an offense under the Prevention and Suppression of Human Trafficking Act or the Child Protection Act.

Verification process
----------------

- Check the transparency of the Facebook page, including the date it was created. Irregularities found on this page include the use of the Thairath news agency logo, a fake website link, and unreliable addresses and phone numbers.
- Ask an expert in AI image technology to confirm that the clip was created using AI deepfake techniques.
- Obtain confirmation from those involved in the clip, i.e., the victim whose likeness was impersonated with AI deepfake technology.

Impact of this type of disinformation
-------------------------------

1. It destroys the credibility of the media and its news anchors. If people believe or are misled by the clip, the credibility of the news station, or of the media as a whole, may suffer.
2. It violates personal rights. Using another person's face or voice without permission is a violation of personal rights and is illegal.
3. It opens a channel for further criminal use. Fake clips can be reused and spread widely, for example for fraud or for selling products.

Recommendations when you encounter this false information
-------------------------------

1. Collect evidence immediately and notify the person whose image was impersonated. Save the edited images, videos, or related text, note when and where it was published, and, if it appeared in a chat group or on social media, keep the conversation history and the URL of the post.
2. Report the content on the platform where it was published (Facebook, X, YouTube, or TikTok) so that it is removed.
3. If your own image or voice has been used, file a report or consult the relevant agencies: file a report at a police station or with the Technology Crime Suppression Division (TCSD), contact the Ministry of Digital Economy and Society (DE) for help coordinating removal of the data, or call the Cyber Threat Complaint Center hotline 1212.
4. Learn to spot videos created with AI deepfakes:
- Lip movements that are not synchronized with the speech.
- Movement that is not smooth, because the video has been edited.
- Gestures that repeat like a loop.
- An accent that does not match the narrator in the clip.
