Description: In 2017, Noelle Martin discovered explicit deepfake videos online that used AI technology to superimpose her face onto pornographic scenes. This incident was a continuation of the abuse she had experienced since at least 2012, when she first found doctored still images of herself in similar contexts. Despite the initial lack of legal protections, her advocacy efforts were instrumental in making image-based abuse a criminal offense in Australia.
Editor Notes: Incidents 771 and 772 are closely related in terms of narrative overlap and discussion.
Entities
Alleged: Stanford University, Max Planck Institute, University of Erlangen-Nuremberg, Face2Face, FaceApp, and Zao developed an AI system deployed by unknown deepfake creators, which harmed Noelle Martin.
Incident Stats
ID
771
Report Count
1
Incident Date
2020-02-06
Editors
Daniel Atherton
Incident Reports
Report Timeline
elle.com · 2020
'There's deepfakes of you,' the email read. Instantly, my pulse quickened. Who was this? How did they get my email address? What was a deepfake?
As panic began to set in, I Googled the term and watched, horrified, as clips of celebrities in…
Variants
A "Variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list incident variations under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the incident database. Learn more from the research paper.