AI Incident Database

Report 3395

Associated Incidents

Incident 606 · 15 Reports
Deepfaked Advertisements Using the Likenesses of Celebrities Such as Tom Hanks and Gayle King Without Their Consent

Celebs Warn About AI-Powered Deepfake Videos of Them Advertising Products on Social Media
mymodernmet.com · 2023

AI is everywhere. Many artists use AI-powered tools to create mind-blowing works or explore interesting concepts. Unfortunately, these resources have also been put to questionable uses, including the creation of deepfakes. These altered videos show people doing or saying things they never actually did. They're computer-generated clips that look entirely real because they repurpose existing footage. And while celebrities seem to be the most common targets, AI presents rising concerns that everyone who consumes digital media should be aware of.

Two of the latest figures to deal with an unauthorized AI-generated version of themselves are actor Tom Hanks and broadcast journalist Gayle King. Hanks first posted a warning to Instagram, letting his followers know that there was an AI scam going around that features his face. “BEWARE!! There’s a video out there promoting some dental plan with an AI version of me. I have nothing to do with it,” the actor wrote over a screenshot of the video.

King also alerted her Instagram followers about an AI-generated video showing her advertising a weight-loss product. “I’ve never heard of this product or used it! Please don’t be fooled by these AI videos,” she wrote. King also shared the original clip the altered version was based on. “I posted this video promoting my radio show on August 31, and they’ve manipulated my voice and video to make it seem like I’m promoting it.”

While this misuse of AI tools seems to come from fraudulent companies, there have also been concerns about its application by more established ones. One of the grievances SAG-AFTRA, the union representing Hollywood actors, has raised during its current strike regards how AI could be used by studios to create digital replicas of actors without proper compensation or approval. When an AI-generated collaboration between Drake and The Weeknd hit streaming services earlier this year, music labels raised concerns about how this seemingly harmless stunt could become a serious worry for artists in an already fragile music industry.

To keep harmful AI-generated content at bay, a concerted effort among all parties would be needed: purveyors of these tools could offer ways of tracing the origins of the content, while social media platforms could adjust their algorithms to limit its reach and act quickly on content flagged by users. Should it be needed, international law could even step in and set parameters for the use of these tools.

AI tools and creations appear to be here to stay, and while a framework of rules set by the key players would be a big step toward protecting the public, platforms that bypass them could always pop up. The days of a video or a picture serving as proof of something may be gone for good, eroding public trust and changing the digital landscape as we know it.
