State Sen. James Maroney says he is going to target deepfake porn created with artificial intelligence systems in the upcoming legislative cycle that starts next week. The Milford Democrat, who is co-chairman of a task force of computer and public policy experts and state agency heads that has been studying the issue, confirmed that he was going to try to tighten Connecticut's anti-revenge porn statute so that it covers nonconsensual, sexually explicit, AI-generated images of real people.
"There's been a lot of concern about the use of deepfakes in elections," Maroney said this week. "However, over 98 percent of deepfakes are created for either revenge porn or nonconsensual intimate images. I think it's critical that we pass some protections."
The news comes as deepfake porn is making waves on social media after Elon Musk's social media platform X --- formerly known as Twitter --- blocked searches for Taylor Swift on its site. Fake, AI-generated nude images of the pop star had been posted and widely circulated on X. One image was viewed 45 million times and reposted 24,000 times before it was taken down and the original poster was suspended for violating platform policy, The Verge reports.
"They can take any picture of you and turn it into a naked image of you, and potentially blackmail you," said Maroney. "Or just maliciously spread naked images, as we saw in the case of Taylor Swift this past weekend."
But the problem is much larger than generating nonconsensual nude images of public figures. While some AI image generators have restrictions in place that prevent photorealistic, nude, pornographic images of celebrities from being produced, many others do not. In fact, some were built for the express purpose of producing such images. Since 2017, "nudifier" apps have grown exponentially in the AI image generation space, according to an analysis by independent researcher Genevieve Oh. The majority of those targeted are female public figures.
Many targets aren't public figures, though, according to a review article by deepfake researcher Sophie Maddocks. Those who produce this kind of imagery are driven by motivations similar to those of people who perpetrate intimate partner violence: power, control, and punishment. Most people targeted are girls, women, and people who identify as LGBTQ.
The problem with these images isn't entirely one of "mistaken identity," says digital philosophy researcher Keith Harris. The danger isn't simply that people believe AI-generated, nonconsensual nude images are real, he added. The problem is the repeated violation and objectification of the victim, which causes both reputational and psychological harm.
"It's a new way of controlling someone else's body or controlling their image," said Harris. "It can be put to all sorts of defamatory purposes."
Blackmail is also a major concern with deepfake porn. Maddocks said that is where heterosexual men and boys are more frequently targeted. Maroney cited several cases of young boys dying by suicide after being threatened with leaks of nude images.
In an interview with Slate, Maddocks said that while many states have laws on the books to punish "revenge porn" --- the nonconsensual spread of nude images online --- victims often lacked the power to demand that platforms remove those images. CT Insider asked Maroney if this was going to be a component of the proposed legislation, and he said it was being considered.
At the federal level, a bipartisan group of senators on the Judiciary Committee is pushing for a proposed law that would create a civil penalty for nonconsensual AI-generated pornographic images.
Locally, Maroney wants to ensure that whatever AI legislation gets adopted allows for positive use of AI, while mitigating the "known harms" of the technology.
"We want to mitigate the downsides without putting a ceiling on the upsides," he said. The task force, which is expected to submit its recommendations to the General Assembly on Feb. 1, cited computer literacy, health care, and job growth as potential benefits of AI.