AI Incident Database

Report 4547

Associated Incidents

Incident 799 · 26 Reports
Aledo High School Student Allegedly Generates and Distributes Deepfake Nudes of Seven Female Classmates

The rise of deepfakes: teens subjected to AI-generated pornography
highschool.latimes.com · 2024

In February, students at Beverly Vista Middle School were investigated by the Beverly Hills Police Department for creating fake nude pictures of their classmates. School and district officials acknowledged their awareness of the "AI-generated nude photos."

Deepfakes are artificial-intelligence-generated images, videos, and audio clips that depict real or non-existent people. These AI-produced media are often used to spread misinformation, especially about celebrities.

Elliston Berry, a 15-year-old girl from Aledo, Texas, found artificial nude pictures of herself and her friends posted on social media. "I had woken up with many, many text messages from my friends just telling me that there were these images of mine circling," Elliston said.

In recent years, the rise of deepfake nudes among young people has become an alarming issue. There have been numerous incidents around the country of teenagers creating fake sexual content depicting their peers, using AI as a tool for bullying.

"It was so realistic. It is child porn," Elliston's mom, Anna McAdams, said in an interview with WFAA. "We really could not take care of her. We, in that moment, were helpless ... more and more pictures were coming out throughout that first day into the second day."

This fake sexual content disproportionately harms young girls, who make up 90% of deepfake victims. Deepfakes can also target historically marginalized groups; in one case, a student in New York made an artificial video of their principal shouting racist slurs and threatening to hurt students of color. These violations of reputation and privacy can trigger serious mental health issues, making students feel unsafe around others and in school environments. Identity-based attacks leave students feeling humiliated and detached from themselves.

Some states have implemented regulations to address harassment via deepfake images.

In June, Texas Senator Ted Cruz introduced the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (Take It Down) Act, which would make publishing realistic, artificial pornographic images illegal. Other states, including Mississippi, Louisiana, South Dakota, and New Mexico, have also passed bills to combat revenge porn.

In 2019, Congress passed the Deepfake Report Act, which requires U.S. technology companies to report digital forgeries. The House also introduced the Deepfake Accountability Act, dedicated to providing resources for victims of deepfakes. Other federal bills have been passed to mitigate the influence of artificial images, but there is still no federal legislation that bans or regulates deepfakes outright.

Until deepfake pornography can be effectively regulated, teens remain exposed to the dangers of this new form of cyberbullying, and victims are left scarred by their experiences.

"I was a freshman, and I was only fourteen," said Elliston. "Even today, I'm still fearful that these images will resurface."


2024 - AI Incident Database
