AI Incident Database

Issue 3701

Associated Incidents

Incident 632 · 31 Reports
Significant Increase in Deepfake Nudes of Taylor Swift Circulating on Social Media

After deepfake porn images target Taylor Swift, is it safe to post photos of kids online?
kslnewsradio.com · 2024

SALT LAKE CITY — AI-generated deepfake porn victimized Taylor Swift, the biggest pop star in the world, last week as sexually explicit images of her swept across the internet and X — formerly Twitter. An expert advises parents not to post images of their kids in online public forums.

One of the most prominent examples of Swift on X attracted more than 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks before the verified user who shared the images had their account suspended for violating platform policy, as reported by The Verge.

Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content. Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them. We're closely…

— Safety (@Safety) January 26, 2024

‘I should be mad,’ says victim, 14

CEO of Nexus IT Earl Foote joins the discussion about deepfake images online.

"Recent studies show that about 95% of all deepfake videos and images that are created are created of celebrities and not of the random populace, but that doesn't mean it doesn't happen," Foote said, referencing the case last year of a 14-year-old New Jersey high school girl who was one of the victims of fake AI-generated nude images circulating among students.

“I realized I should not be sad, but I should be mad. So, I came home, and I told my mom, and I told her that we have to do something about this because it is unfair to girls, and it’s just not right,” Francesca Mani told “Good Morning America.”

Teen and mother speak out after alleged AI-generated photos sent around high school

Don’t share images of kids in online public forums

Minors should not be sharing content with public audiences, Foote advised. Parents should not share photos, images, or videos of their children with public audiences either, he said.

Don’t take the risk of content falling into the hands of predators, Foote warned.

“In today’s world, there’s just too many risks involved with that. So, my recommendation is that parents and minors use the functions within social-media applications to narrow down the group they’re sharing content with to close friends and families that they know and trust,” he said.

Read the source

2024 - AI Incident Database