
Report 5122

Associated Incidents

Incident 1048 · 19 Reports
Tennessee Meteorologist's Likeness Reportedly Used in Sextortion Campaign Involving Purported AI-Generated Content

A Nashville meteorologist was a victim of deepfakes. Here's what to know about the technology in Tennessee
finance.yahoo.com · 2025

This story has been updated with new information.

Former NewsChannel 5 meteorologist Bree Smith delivered emotional testimony before the state House Criminal Justice Subcommittee Wednesday, after being a victim of deepfakes: sexually explicit, AI-generated videos.

Smith detailed the challenges she faced in getting the images removed, as well as the trauma both she and her family endured, The Tennessean reported last week.

"When I asked my employer for help, I was told that nothing could be done, it was not illegal and I had no recourse," Bree said during her testimony. "I felt humiliated and scared. I didn't know what to do or how to fight it, and I didn't know how to protect the viewers and the people that trusted me online from being subject to this kind of extortion."

According to Smith, the people behind the fake images and videos were using them to try to convince her fans to send money. In one case, a viewer received several fake videos in which Smith appeared to have "promised many sexual acts and asked the viewer to send them money to book a two-night stay at the Conrad Hotel."

Smith left NewsChannel 5 in January after nine years when her contract ended. She did not comment on whether the station attempted to renew her contract or whether there were negotiations.

Here's what to know about deepfakes and their legality in Tennessee.

What are deepfakes?

A deepfake is an image, video or audio recording generated by artificial intelligence. The technology can be used to replicate faces and voices, often depicting false or misleading scenarios.

According to the United States Government Accountability Office, deepfakes are commonly used for exploitation. Because much deepfake content online is pornographic, the technology disproportionately victimizes women, according to a report from the company Deeptrace.

While some uses of deepfakes are harmless, such as in entertainment, e-commerce and communication, they can also be used to spread misinformation, including to influence elections. During the 2024 presidential election, supporters of both Biden and Trump used the technology to generate false images and audio clips.

How do deepfakes work?

Deepfakes rely on artificial neural networks, computer systems modeled loosely on the human brain that recognize patterns in data, according to the accountability office. Developing a deepfake photo or video normally involves feeding hundreds or thousands of images into the artificial neural network, "training" it to identify and reconstruct patterns, typically faces.

Anyone with basic computer skills and a home computer can create a deepfake using readily available applications and online tutorials, the accountability office said. However, creating realistic deepfakes typically requires hundreds or thousands of training images, making celebrities and government leaders common subjects.
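For technically inclined readers, the sketch below illustrates the kind of training the accountability office describes: a small face "autoencoder" that learns to compress and reconstruct images of one person. It is a minimal, illustrative example only; the layer sizes and the random stand-in data are assumptions, and real deepfake tools pair two such decoders (one per identity) behind a shared encoder, use far larger networks, and train on many more images.

```python
# Illustrative sketch only, not actual deepfake software.
# Requires PyTorch (pip install torch).
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress a 64x64 RGB face crop into a smaller feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
        )
        # Decoder: reconstruct the image from that compressed representation.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = FaceAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for a real dataset of face crops; in practice this would be
# hundreds or thousands of images of the targeted person.
faces = torch.rand(16, 3, 64, 64)

for epoch in range(5):
    reconstruction = model(faces)
    loss = loss_fn(reconstruction, faces)  # how closely the output matches the input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

By minimizing reconstruction error over many images, the network internalizes the subject's facial patterns, which is why the technique works best on people, such as celebrities and public officials, for whom large amounts of footage already exist.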

Are deepfakes legal? What to know in Tennessee

In Tennessee, the Ensuring Likeness, Voice and Image Security (ELVIS) Act prohibits the creation or distribution of deepfakes that mimic someone's voice or appearance without their consent.

As previously reported by The Tennessean, the ELVIS Act adds artists' voices to the state's current Protection of Personal Rights law and can be criminally enforced by district attorneys as a Class A misdemeanor. Artists, and anyone else with exclusive licenses, such as labels and distribution groups, can sue civilly for damages.

The Volunteer State made history in March 2024 by becoming the first state in the nation to enact protections for artists against the misuse of artificial intelligence.

Tennessee is further enhancing its protection against deepfakes with HB1299, known as the Preventing Deep Fake Images Act.

The bill, sponsored by Rep. Jason Powell, D-Nashville, makes it a felony "to disclose or threaten to disclose or solicit the disclosure of an intimate digital depiction with the intent to harass, annoy, threaten, alarm, or cause substantial harm to the finances or reputation of the depicted individual."

The bill also lets people sue and recover financial damages from anyone who posts an "intimate digital depiction ... without the consent of the individual" or who "recklessly disregards whether the individual has not consented to such disclosure."

The subcommittee voted 7-0 to advance the bill on Wednesday.

