AI Incident Database

Report 4052

Associated Incidents

Incident 7777 Report
South Korea Experiences a Surge of Explicit Deepfake Pornography

They invested $1 in AI, and millions of women were dragged into hell | Deep Krypton Lite
baijiahao.baidu.com · 2024

More than a year ago, Wu Hua was traveling in South Korea and posted a photo tagged with a Korean location on social media. Less than three days after she returned to China, a friend sent her a screenshot of an account with almost the same profile and avatar as hers. The differences: the account's text was in Korean, and a strange website appeared in the bio. What shocked Wu Hua was that the URL in the copycat account led to a pornographic channel selling obscene paid videos, and the protagonists' faces had all been replaced with hers by AI.

It was not until the end of August this year that she began to pay attention to the Korean deepfake scandal. "I was swept up in a huge fear," Wu Hua told Intelligence Emergence. "It was too disgusting, too uncomfortable." Recalling the first time she clicked on those AI videos, she still choked up. Although the copycat account has been reported and deleted, she cannot be sure her face has not been used elsewhere.

The "Nth Room 2.0" incident spawned by deepfake technology (AI-based human image synthesis) is plunging South Korea into a huge whirlpool. Countless women have become "prey" captured by men using AI.

A male student at Inha University set up a chat room with more than 1,200 members. They used deepfake technology to paste the faces of women at the school onto pornographic images and publicly shared the women's contact information in the group chat. Starting from Inha University, a far larger conspiracy was quickly uncovered, and the scandal came to be called the "Nth Room 2.0 incident." Since August 20, Korean media and police have successively uncovered large numbers of similar chat rooms in schools, hospitals, and the military.

220,000: that is the largest membership disclosed for a single deepfake chat room in South Korea. Almost every middle school and university in the country has such a "humiliation room." In the group chats, men target women they know in daily life, using deepfake tools to graft the women's faces onto obscene images and videos. The men then join in verbal humiliation, which sometimes escalates into offline threats and harassment. Large humiliation rooms are even split into smaller rooms, such as "family humiliation rooms," where men secretly photograph their own mothers and sisters and upload the pictures to the group chat.

△ In one chat room, a man uploaded a photo of his younger sister (blocked here by a red mosaic); the accompanying text is a chat record translated into Chinese.

In 2019, when the Nth Room scandal shocked the world, some netizens remarked sarcastically: "There are only 260,000 taxis in South Korea, so the probability of running into an Nth Room member on the street is about the same as running into a taxi." The line applies just as well to today's deepfake chat rooms, and the real situation is even more shocking. Nth Room 1.0 was built largely on real covertly filmed footage, which meant the criminals still had to shoot or obtain the material themselves. But as deepfake technology has matured, the cost of this crime has dropped close to zero: AI-generated images and videos are realistic enough to leave people dazed.
In May this year, a 40-year-old Seoul National University graduate surnamed Park was arrested. He had used photos of 48 female victims to produce 1,852 pieces of sexually exploitative content; the victims were all women he knew in real life. A teacher in Seoul discovered that she, too, had become the protagonist of an "acquaintance humiliation room." The instigator was a student in her own class, but she could not find out who. "Will that photo follow me forever?" she asked the media, breaking down.

After the deepfake scandal broke, volunteers verified media reports and anonymous submissions and compiled a map of deepfake perpetrators. The red marks are so dense that they nearly submerge the central region of South Korea.

△ The "perpetrator school map" compiled by volunteers; the red dots almost cover the central region of South Korea. Source: https://deepfakemap.kr

According to statistics from the cybersecurity monitoring firm Security Hero, in 2023 South Korea accounted for 53% of the world's deepfake pornography. By contrast, South Korea's 2023 GDP was less than 2% of the world total.

△ Distribution of the global deepfake pornography industry in 2023. Chart: Intelligence Emergence

Korean women, hit by waves of despair and anger, have had to push back against a repressive climate of public opinion. "My life is not your porn!" "Women have no country!" they shouted on social media.

Silent siege
-----------

No one knows the exact number of female victims.

College student Jiyoung drinks large cups of coffee every day just to stay awake. "I'm afraid of missing a single submission from a victim." She dares not stop; since late August she has barely slept.

A siege is under way. On Twitter, Jiyoung constantly spars with Korean men. Her account has long been monitored in real time: the URL of misogynistic content she had just posted and asked followers to report would be changed within the next minute, which means misogynists were watching her tweets. While talking with us she also used a voice changer; every female blogger active on Twitter around this incident is a prime target for Korean men's "box opening" (doxxing), and she could not reveal anything about herself.

In Korean society, where women's status is low, "feminist" has long been a dangerous label. Queens Archive, the largest feminist Twitter account in South Korea, was mass-reported by Korean men and deleted because it was the first to disclose the deepfake chat rooms.

"Sometimes we didn't shower or sleep at all and stayed up until three or four in the morning," Jiyoung told Intelligence Emergence. She and her "comrades" on Twitter translated deepfake-related information, victim-protection guidelines, and feminist explainers into English and spread them continuously on the platform.

Both the perpetrators and the victims in the deepfake incident skew younger than in Nth Room 1.0. From January to July 2024, South Korea opened 297 cases of deepfake sexual exploitation crimes and arrested 178 suspects, of whom 131, or 73.6%, were teenagers.

Many people call it the "Nth Room 2.0 incident," but compared with the original the public attention this time is nowhere near the same. In the eyes of many women, the attitude of the South Korean government and police can only be described as passive.
AI-generated sexual humiliation content did not even register with the police and the government at first, because it was "not real."

"The government did not handle the Nth Room incident properly. Now, in the deepfake incident, many of the perpetrators are minors, and many of the victims are minors. The perpetrators will not feel any consequences and will not go to jail," Jiyoung told Intelligence Emergence. So far, South Korean police have arrested only about 15 people in connection with the deepfake incident, and some perpetrators have simply emigrated to evade punishment.

"The deepfake incident, together with earlier digital sex crimes such as the Nth Room and Soranet, is expected to involve more than one million victims, and no one knows the exact number," Jiyoung said with regret. "So if we don't put pressure on the government as soon as possible, the government will not do anything."

South Korean women began to look outward for help, relaying information about the deepfake incident to social media in China and Japan.

△ Source: Weibo

Eunjeong, a Korean college student, went to Xiaohongshu and, in unfamiliar machine-translated Chinese, asked people to pay attention and spread the word. She never gave up, even when her account was blocked and her posts were deleted. One Korean woman's Weibo post asking for help was forwarded nearly 150,000 times in just two days.

△ Source: Xiaohongshu

No one knows where these efforts will lead. Eunjeong told Intelligence Emergence that her younger sister is now also a middle school student. Beyond simply telling girls to protect themselves, the school has taken no other measures; she and her sister can only delete every photo showing their faces from social platforms and wear long pants whenever they go out.

After the outcry spread overseas, the South Korean government finally began to take the incident seriously. Recent public speeches by South Korea's political parties have all touched on the deepfake scandal. "The incident must be resolved in the short term," they said.

"Deepfake humiliation rooms," everywhere
----------------------

When a Chinese man with no AI background, staying "inside the wall" (behind the Great Firewall), becomes a deepfake perpetrator, what does it cost him up front?

In Security Hero's report, deepfake pornographic videos from China account for 3% of the global total. Compared with South Korea (53%) and the United States (20%), that is not a high figure. But because most of the report's data comes from overseas websites, the actual scale of China's deepfake pornography industry is far larger.

Search for "Deepfake" on the overseas platform X (formerly Twitter) and a Chinese seller of deepfake tools soon turns up. His posts blatantly use terms such as "AI stripping," "bondage," and "bikini," yet his profile states: "We only provide AI tools and take no responsibility for how they are used."

X, Instagram, Facebook, and other overseas platforms are just the key that opens Pandora's box. Many Chinese sellers use abbreviations such as "vx," "kou," and "zfb" to hint at where the deal actually closes: WeChat, QQ, Xianyu, and even Alipay's payment interface.

Through a QQ number left on X, we contacted one seller. To evade platform moderation, his signature reads: "Prohibited words are strictly prohibited; for entertainment only; spread at your own risk."
From beginning to end he behaved like an ordinary software salesman; the entire conversation never mentioned a single word related to "sex." But sex was the tacit purpose of the transaction. Soon, the seller used QQ's "flash photo" function (the photo disappears three seconds after being opened) to send two mosaicked deepfake sample images.

△ The QQ chat record between the author and the seller of deepfake tools; the account has been reported to Tencent.

Tap to enlarge and the mosaic disappears at once: a deepfake photo with unnaturally distorted contours fills the screen. Three seconds later, the woman's photo, along with the evidence she might have used to defend her rights, vanished from the chat record. Even in China, it is this easy to subject women to online humiliation and escape legal sanction.

As a new tool for stripping off women's clothes, AI has made online sexual violation cheaper and faster. In the product catalog the sellers circulate, a 10-second AI face-swapped video costs 30 yuan, and an AI nude photo costs only 10 yuan. The price list even specifies: buy five, get one free.

Yet at the beginning, the technology behind deepfakes, the generative adversarial network (GAN), was all but harmless. Born in the Google Brain laboratory in 2014, GANs had every trait that works against popularization: the software was hard to install, the technical threshold for operating it was high, and training required expensive datasets and computing resources. For a time this kept deepfakes inside the laboratory, out of the hands of the general public.

Things spun out of control in 2019, when DeepNude, an application claiming to "undress anyone in 30 seconds with one click," appeared. Ironically, the software was trained on 10,000 nude photos of women: when a user uploads a photo of a man, it cannot undress him and can only add female genitalia to the image.

Behind DeepNude were GAN techniques whose performance had improved rapidly over the preceding years, and an open-source ecosystem that had dramatically lowered the barrier to use. That combination allowed an anonymous male programmer, Alberto (a pseudonym), to quickly build DeepNude on top of an open-source GAN architecture from the University of California, Berkeley.

On one side stood rapidly advancing AI technology and a flourishing open-source ecosystem; on the other, the wild growth of deepfakes and a social awareness and regulatory system that were unprepared. DeepNude made producing non-consensual nude photos as simple as clicking a button.
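To make the "generative adversarial network" idea above concrete, here is a minimal sketch of the adversarial training loop in PyTorch, assuming a toy setup: a generator turns random noise into fake samples, a discriminator learns to tell them from real ones, and each improves only by competing with the other. The data is random placeholder tensors and the layer sizes and hyperparameters are arbitrary assumptions for illustration; this is not the DeepNude architecture or any system described in this article.

```python
# Minimal sketch of a GAN's adversarial training loop (illustration only).
# All sizes, layers, and data below are placeholder assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 64      # size of the random noise vector fed to the generator
IMG_DIM = 28 * 28    # flattened "image" size, chosen arbitrarily

# Generator: random noise -> fake sample
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: sample -> probability that the sample is real
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(100):                        # tiny loop for illustration
    real = torch.rand(32, IMG_DIM) * 2 - 1     # stand-in for a batch of real images
    fake = G(torch.randn(32, LATENT_DIM))      # generator's current forgeries

    # 1) Train the discriminator: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator: try to make the discriminator output 1 on its fakes.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

The notable property is that neither network needs hand-labeled examples of convincing fakes; the competition alone drives quality upward, which is part of why the technique spread so quickly once open-source implementations lowered the setup cost.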
Under public pressure, DeepNude was taken down two weeks after its release. Alberto explained in an interview that he had built the software purely out of "fun" and "curiosity": "I am not a voyeur. I am a technology enthusiast."

DeepNude itself was gone, but thousands of "DeepNudes" began to flood the internet, more covert, easier to operate, and more realistic in output.

△ Data on deepfakes and the pornography industry. Chart: Intelligence Emergence

On the chat platform Discord, a DeepNude installation package sells for $20. The anonymous seller's page claims to have improved DeepNude's stability and, at the same time, to have removed the watermark the original DeepNude stamped onto generated images, all of which makes deepfake nude photos look more realistic.

Less than three weeks after being taken down, DeepNude had been repackaged in an even more user-friendly form: a one-click "stripping" bot embedded in Telegram. From there, events unfolded much like today's "Nth Room 2.0" in South Korea. Bot users spontaneously built a Telegram community of more than 100,000 members, becoming the first generation of residents of this "Nth Room." Sensity, the cybersecurity company that first monitored the bot, found that within less than a year of its launch it had generated more than 680,000 fake nude photos of women, of which about 104,852 were posted publicly on the app.

A downward spiral of rights protection
-----------

The campaign to encircle DeepNude has been under way since 2019. In July 2019, the US state of Virginia passed an amendment to its anti-pornography law prohibiting the sale or dissemination of computer-generated pornographic content. At the end of the same year, platforms such as Twitter, Google, and Facebook began to act against deepfake videos, including watermarking them and sponsoring research projects on deepfake detection.

Yet whether it is South Korea's "Nth Room 2.0" or the underground gray market hidden inside QQ, Alipay, and Xianyu, the same frustrating reality is confirmed: deepfakes have never really disappeared.

In 2019, the Citizen Lab at the University of Toronto published a study exploring how WeChat automatically censors images across its enormous volume of chats. The results showed that WeChat censors Moments and group chats more heavily than one-on-one private chats, and that political and social news are the core targets of its image censorship. In other words, the platform's supervision of pornographic content still has loopholes: deepfake pornography traded in private chats is unlikely to surface through the platform unless someone reports it.

But will people necessarily come forward? Zhang Jing, deputy director of the Beijing Lianggao Law Firm, has handled many cases involving violations of women's rights over the past 20 years. She has helped women fight individual perpetrators, unjust structures, and even loopholes in the legal system. But the rise of deepfake sexual violence makes her feel that women's struggle has entered a more complicated stage.

"In the past, when technology was less advanced and women suffered sexual crimes, at least we knew who the specific victims were, and the harm they suffered could be quantified through forensic appraisal," Zhang Jing told Intelligence Emergence. Now, both the creators and the disseminators of deepfake content are faceless and hard to count, which makes evidence collection and sentencing enormously difficult.

"Wouldn't it be better to just call the police?" many bystanders ask online, and many victims themselves assume that reporting to the police would be enough. But before that, women face a long road of evidence collection. "If you want a criminal case opened, the precondition for reporting to the judicial authorities is having basic evidence," Zhang Jing told Intelligence Emergence.
That basic evidence includes the specific websites where the deepfakes were spread, the number of infringing images (too few and no case will be opened), and the scope of dissemination (too narrow and no case will be opened). "Honestly, for people who do not spend much time online, even this step is not easy. They may not even know how to reach the website," Zhang Jing concluded.

Even when women do manage to report and have a case filed, the outcome for the perpetrators is often unsatisfying. A search for "AI face-swapping" in the China Judgements Online database returns only 16 documents in total, the earliest from a case in 2022. In the past two years, the number of deepfake infringements that led to criminal cases is zero.

Under layer upon layer of obstacles, women's fight against deepfakes has become a downward spiral: social pressure and strict case-filing procedures mean few victims are willing to come forward, and the absence of women's voices and the scarcity of filed cases make legislative progress even harder.

In December 2023, a case in Taiwan offered some hope to women victimized by deepfakes. Zhu Yuchen, a male internet celebrity known online as "Xiaoyu," had used deepfake technology to splice more than 100 well-known women into pornographic films and sold them online for a profit of more than 13 million New Taiwan dollars (about 2.88 million yuan). He was ultimately prosecuted for violating the Personal Data Protection Act and other offenses and sentenced to five years in prison.

But is five years enough? "In the past, pornographic videotapes and adult films were a specific niche industry, and their dissemination rarely harmed a living, identifiable person," Zhang Jing said. "The essential difference with today's deepfakes is that their dissemination harms specific people in real life." This means, she concluded, that "sentencing deepfakes under the rules for disseminating obscene and pornographic content will inevitably be too light."

The perpetrator faces a five-year sentence with a foreseeable end, but the trauma may stay with women far longer. At 15, after being harassed with deepfakes by the boy who sat behind her, Xiaoyu began to dread the arrival of summer. Once her white short-sleeved school uniform was soaked with sweat, her underwear and the curves of her body would show through; even in temperatures near 40 degrees Celsius, she chose to wear two short-sleeved shirts to block the boy's gaze.

Xiaoyu told Intelligence Emergence that she felt it was useless to tell teachers or parents. She remembered boys in her class making dirty jokes about girls or snapping their bra straps, and the teacher responding that these were just jokes between children. More than a year has now passed since the harassment. After entering high school, Xiaoyu lost contact with the boy, and her life returned to normal. She still keeps the habit of wearing two short-sleeved shirts in summer.

Humans are not ready for deepfakes
-------------------------

For a long time to come, the reality may remain bleak. After seeing the Korean deepfake incident, Li Qian, co-founder of Luying Technology, was furious and immediately pulled the company's founder into a brainstorming session that ran late into the night. Beyond dry mouths, the two were left deeply frustrated.
"I may have to bring you bad news: there is no way to kill Deepfake sexual violence directly from a technical point of view." Li Qian told "Intelligence Emergence". Although it cannot be eradicated directly, "giving someone a taste of their own medicine" is a common path. Xiao Zihao, co-founder of Ruilai Wisdom, told "Intelligence Emergence" that the company regularly organizes "red and blue confrontations" and divides engineers into two factions, the "red army" and the "blue army". The "red army" simulates real attacks, while the "blue army" is responsible for responding and handling. In order to understand the means of black production, the engineers of the "red army" will even lurk in the black production group to find clues and continue to innovate attack methods. For those who use AI to fight Deepfake, their daily work is to fight against various unexpected new cases. It is almost standard to tune a new version in 2 weeks and iterate a large model in 2 months. But this is more of an early warning before and after the problem occurs. After the real infringement occurs, in order to protect women, more reliance is needed on the network woven by the judiciary. In November 2022, China issued the first regulation explicitly targeting Deepfake: the "Regulations on the Management of Deep Synthesis of Internet Information Services". This regulation regulates Deepfake content producers, technical service providers, and production, dissemination and release behaviors. But there is still a long way to go from the formulation of regulations to their actual implementation. In the past 20 years, Zhang Jing has observed that most of the lawyers handling criminal cases, the police who receive reports, and the procuratorates that are ultimately responsible for trials are men. "This is a structural problem. Even the legislature is basically male." Zhang Jing once participated in a seminar on the abolition of the crime of soliciting underage girls (which is lighter than the sentence for rape). Many male lawyers present gave many reasons to support the retention of the crime of soliciting underage girls. One of them is that if the crime of rape is used instead, the criminal may take risks and put the female victim in danger. The existing successful cases of perpetrators being detained and convicted are mostly the result of women's active self-help. In the Huang Yao and AI face-changing incidents at Suzhou University in 2023, the victim sneaked into the chat room that spread AI forged images, collected evidence, and used the elimination method to post photos with different tags in the circle of friends in groups, eliminating them one by one, and finally caught the perpetrator. To further promote the legislation of Deepfake sexual assault and the advancement of social awareness, a real precedent is needed. *"With precedents, this history of women's persecution is more likely to be written into compulsory education textbooks, allowing gender education to move forward." *A feminist activist in Japan told "Intelligence Emergence". The current AI industry is actually still a field dominated by men. To some extent, from research and development to product implementation, it is very easy for women to become a group "oppressed" by AI and become a link in the industry. One data observed by Li Qian is that 40% of the video model training data obtained from the Internet involves adult content, and 80% of the content is about women's bodies. 
*"Pornography is the most successful business model for current AI C-end products." *An investor commented on "Intelligence Emergence". In the end, the proliferation of Deepfakes was attributed to a business reality: using AI to sell women's bodies will bring rich commercial returns to the beneficiaries. It is very difficult to fight against business inertia. During the financing process, some investors, both men and women, told Li Qian that if YoYo (Luying Technology's AI video creation platform) was a little more tolerant of pornographic content, commercialization would be smoother. Female-friendly from day one. For AI startups with limited manpower and funds, whether to take this step is a particularly difficult decision. Li Qian told "Intelligence Emergence" that removing 40% of pornographic data from training data means spending extra costs to complete it; if manpower and time are spent on setting banned words for user input prompts, the speed of product iteration will be at least two months slower. "As an entrepreneur, do you have to look at the business world without gender?" But after experiencing the Deepfake incident, Li Qian no longer thinks so. "Once the floodgates are opened, the industry will have another piece of bad money that drives out good money." If women can gain more technical dominance in the industry, the future will be different. On September 4, 2024, Zhang Xinyi, a female engineer at the Chinese Academy of Sciences, opened the Deepfake adversarial model developed by her team to the world. Once someone needs help, they can use the model for free to detect Deepfake content. In her Weibo, Zhang Xinyi wrote: "If one person's torch is too small, we will pass the torch to everyone in the world." When new technologies sweep the world and the judiciary and technology have not yet fully caught up, the voices and actions of ordinary people are the reason why social progress can move forward. South Korea is a living example - in an extremely masculinist country, every step forward in women's rights is made by a struggle full of blood and sweat. Since 2015, Korean women have launched a series of feminist movements on issues such as #MeToo, eliminating online sexual violence, and decriminalizing termination of pregnancy. Through discussions on issues, rallies and marches, these activities have achieved valuable results in terms of social awareness and legislative systems. The two female reporters Jiang Jingyun and Park Xiaoshi who once exposed the Nth Room scandal have endured long-lasting online abuse, insults and mental torture. One of the reporters even had multiple miscarriages. But Korean women have never stopped. After the Nth Room incident, South Korea established a women's group "RESET" that focuses on opposing sexual exploitation crimes, and promoted the process of formulating laws including those related to cyber sexual crimes. In May of this year, one of the suspects of Deepfake was sent to prison by one of the two college student journalists who had previously exposed the Nth Room. We can't stop, we can't give up. This is the idea of Korean women who are on the front line of the Deepfake incident that we have come into contact with. "This is not just a fight for our generation, but for the fate of everyone," said Jiying, **"I want a world where my friends, family and all women feel safe." * * Women of different races and nationalities have begun to support Korean women. 
On September 3, local time, a group of Chinese feminists living in London marched from Trafalgar Square to the South Korean Embassy, singing "Into the New World," the anthem of the Korean feminist movement, and denouncing illegal acts in South Korea such as hidden-camera filming and AI forgery. Banners carried the words in red: "My life is not your porn."

△ On September 3, local time, Chinese feminists living in London held a march against hidden-camera filming and deepfakes. Source: Chinese Feminists in London

(To protect the interviewees, Jiyoung, Eunjeong, and Xiaoyu are all pseudonyms. Thanks to Xingxing, xwf, October, lakyo, Aji, and the London Sisters for their help with the interviews.)

