AI Incident Database

Report 4013

Associated Incidents

Incident 765 · 5 Reports
22 Students at Richmond-Burton Community High School in Illinois Targeted by Deepfake Nudes

Students at Illinois high school say photos were altered by AI to be explicit
cbsnews.com · 2024

Police in Richmond, Illinois, in McHenry County near the Wisconsin state line, have launched an investigation after students said their images were altered into sexually explicit photos and sent to other classmates.

One sophomore at Richmond-Burton Community High School said she did not even know it was possible to have her own image manipulated in such a fashion. The young woman, Stevie Hyder, is now sharing her story as a warning to others.

Hyder said she never imagined a photo taken before a school dance would be used against her.

"We all feel extremely violated," she said. "Actually, after I saw my photo personally, I felt so nauseous."

Hyder said AI-generated nude photos were created from the innocent before-the-dance picture, and circulated among classmates.

Richmond police are now investigating, along with the McHenry County Sheriff's Department.

"We know ourselves they are fake," said Hyder, "but... if they get out to an employer or college applications, if somebody sends that, they won't know it's fake."

Hyder's mom, Stephanie Essex, said other students were also targeted.

"When I finally did speak with the principal, he let me know that my daughter was number 22 on the list," said Essex.

Earlier this year, AI-generated sexually explicit images of Taylor Swift went viral.

After that incident, the White House addressed the dangers of AI-generated images and the disproportionate impact of the technology's abuse on women and girls.

For some, this underscores the need to regulate potential nefarious uses of AI.

"This was not at all the intended use of generative AI techniques, but unfortunately, you know, the tools out there are right now so good that, you know, a kid can generate these kinds of videos," said V.S. Subrahmanian, a computer science professor at Northwestern University. "This is very dangerous. If you are one of the people who is depicted in this way, it's very frightening, and getting rid of this kind of content is very difficult."

"We are just really determined that something is going to get done about this, and we want more awareness spread about this," added Hyder, "and we hope this doesn't become a more common thing."

The school declined to comment due to pending litigation.

The U.S. Department of Justice has set up a 24/7 hotline for survivors of image-based sexual abuse, at 844-878-CCRI (2274). More resources are available through the Cyber Civil Rights Initiative.

