AI Incident Database

Report 4013

Associated Incidents

Incident 765 · 5 Reports
22 Students at Richmond-Burton Community High School in Illinois Targeted by Deepfake Nudes

Students at Illinois high school say photos were altered by AI to be explicit
cbsnews.com · 2024

Police in Richmond, Illinois, in McHenry County near the Wisconsin state line, have launched an investigation after students said their images were altered into sexually explicit photos and sent to other classmates.

One sophomore at Richmond Burton Community High School said she did not even know it was possible to have her own image manipulated in such a fashion. The young woman, Stevie Hyder, is now sharing her story as a warning to others.

Hyder said she never imagined a photo taken before a school dance would be used against her.

"We all feel extremely violated," she said. "Actually, after I saw my photo personally, I felt so nauseous."

Hyder said AI-generated nude photos were created from the innocent before-the-dance picture, and circulated among classmates.

Richmond police are now investigating, along with the McHenry County Sheriff's Department.

"We know ourselves they are fake," said Hyder, "but... if they get out to an employer or college applications -- if somebody sends that, they won't know it's fake."

Hyder's mom, Stephanie Essex, said other students were also targeted.

"When I finally did speak with the principal, he let me know that my daughter was number 22 on the list," said Essex.

Earlier this year, AI-generated sexually explicit images of Taylor Swift went viral.

After that incident, the White House addressed the dangers of AI-generated images and the disproportionate impact of abuse of the technology on women and girls.

For some, this underscores the need to regulate potential nefarious uses of AI.

"This was not at all the intended use of generative AI techniques, but unfortunately, you know, the tools out there are right now so good that, you know, a kid can generate these kinds of videos," said V.S. Subrahmanian, a computer science professor at Northwestern University. "This is very dangerous. If you are one of the people who is depicted in this way, it's very frightening, and getting rid of these kinds of content is very difficult."

"We are just really determined that something is going to get done about this, and we want more awareness spread about this," added Hyder, "and we hope this doesn't become a more common thing."

The school declined to comment due to pending litigation.

The U.S. Department of Justice has set up a 24/7 hotline for survivors of image-based sexual abuse, at 844-878-CCRI (2274). More resources are available through the Cyber Civil Rights Initiative.

