Description: Chinese law enforcement has targeted a rise in AI-driven crimes, including deepfakes and voice synthesis used for fraud, identity theft, and unauthorized use of personality rights. In particular, "AI undressing" scams, fake relationships built on synthesized voices, and game-cheating software make up many of these cases. In response, authorities have prosecuted multiple cases and implemented stricter regulations to control AI misuse.
Editor Notes: Incident 834 poses an editorial challenge: the reporting covers multiple discrete AI incidents that are also indicative of many other similar harm events, all collected in single news reports. Names such as "Mouhe" and "Moumou" reflect a practice in Chinese-language reporting of partially anonymizing names. A reconstruction of the incidents mentioned in this reporting follows:

(1) In 2022, Wang Mouhe, a former gaming software developer, engineered and sold AI-driven cheating software that enabled automated aiming and shooting in games, disrupting fair play and generating over ¥6.29 million in profits. On May 6, 2024, Wang received a three-year prison sentence, suspended for five years, in what is recognized as China's first AI-related gaming cheat case.

(2) On April 13, 2023, Zhu, a business owner, reported unauthorized access to her government app account, through which her company's electronic business license was downloaded. Investigation revealed that suspect Zeng Moumou had used Zhu's personal information to create AI-generated face animations, allowing him to bypass the app's facial recognition security and fraudulently access the account. This case is indicative of a broader rise in similar AI-enabled identity theft incidents.

(3) In May 2023, a group began using AI to fabricate sensational viral videos, producing over 3,000 unverified clips to attract views and online engagement.

(4) In June 2023, police in Shangyu, Zhejiang, identified a fabricated AI video depicting a fire at the Shangyu Industrial Park, complete with flames and explosions, which went viral online. Police later arrested Tang Mou, a network technology company employee, along with two accomplices responsible for creating the composite video using AI.

(5) On April 12, 2024, the Ministry of Public Security announced ten AI-related cybercrime cases, including four involving AI-facilitated disinformation.

(6) On April 23, 2024, the Beijing Internet Court ruled on China's first AI voice rights infringement case, ordering damages of ¥250,000 for the unauthorized use of a plaintiff's voice in an AI-based text-to-speech product.

(7) On June 25, 2024, the Zhangjiagang Procuratorate in Jiangsu prosecuted a fraud ring that used AI to create fake images, posing as overseas Chinese students on dating platforms and extorting over ¥1 million from victims.

(8) Law enforcement has responded with comprehensive measures: the Ministry of Public Security has expanded the "Clean Net" campaign to target AI-driven identity theft and disinformation schemes, collaborating with research institutions to enhance face recognition and liveness detection technology as defenses against AI manipulation.

(9) The incident date of July 4, 2024 was chosen because that is when Legal Daily published its comprehensive summary of AI-related criminal cases and the Chinese government's crackdown on AI misuse. The report documents various cases of AI-driven fraud, disinformation, and identity theft, and emphasizes law enforcement's escalating official responses to AI-enabled crimes. An earlier report from October 26, 2023 also contributes information to this incident ID.
Entities
Alleged: Unknown voice synthesis technology developers, Unknown game cheating technology developers, and Unknown deepfake technology developers developed an AI system deployed by Zeng Moumou, Wang Mouhe, Unknown deepfake creators, Tang Mou, scammers, Fraudsters, Bai Moumou, and AI fraud rings in China, which harmed the Chinese general public and Chinese citizens.
Incident Reports
Reports Timeline
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
unreviewed — Incident Date: 2023-10-26
unreviewed — Incident Date: 2024-07-04
Similar Incidents