AI Incident Database

Report 3701

Associated Incidents

Incident 632 · 31 Reports
Significant Increase in Deepfake Nudes of Taylor Swift Circulating on Social Media

After deepfake porn images target Taylor Swift, is it safe to post photos of kids online?
kslnewsradio.com · 2024

SALT LAKE CITY — AI-generated deepfake porn victimized Taylor Swift, the biggest pop star in the world, last week as sexually explicit images of her swept across the internet and X — formerly Twitter. An expert advises parents not to post images of their kids in online public forums.

One of the most prominent examples of Swift on X attracted more than 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks before the verified user who shared the images had their account suspended for violating platform policy, as reported by The Verge.

Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content. Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them. We're closely…

— Safety (@Safety) January 26, 2024

‘I should be mad,’ says victim, 14

Earl Foote, CEO of Nexus IT, joins the discussion about deepfake images online.

“Recent studies show that about 95% of all deepfake videos and images that are created are of celebrities and not of the random populace, but that doesn’t mean it doesn’t happen,” Foote said, referencing the case last year of a 14-year-old New Jersey high school girl who was one of the victims of fake AI-generated nude images circulating among students.

“I realized I should not be sad, but I should be mad. So, I came home, and I told my mom, and I told her that we have to do something about this because it is unfair to girls, and it’s just not right,” Francesca Mani told “Good Morning America.”


Don’t share images of kids in online public forums

Minors should not be sharing content with public audiences, Foote advised. Parents, likewise, should not be sharing photos, images, and videos of their children with public audiences, he said.

Don’t take the risk of content falling into the hands of predators, Foote warned.

“In today’s world, there’s just too many risks involved with that. So, my recommendation is that parents and minors use the functions within social-media applications to narrow down the group they’re sharing content with to close friends and families that they know and trust,” he said.

Read the Source
