
Report 6100

Associated Incidents

Incident 675 · 6 Reports
High School Athletic Director in Baltimore County Allegedly Created Racist Deepfake Audio Impersonating Principal

How a century-old legal principle could rid the internet of deepfakes
bostonglobe.com · 2025

Last year, a viral recording of the local high school principal shocked the people of Pikesville, Md.

Listeners could hear the voice of Eric Eiswert spouting spiteful and racist beliefs about Jewish people and the school's Black students. These comments coming from a respected local educator were unthinkable. The consequences were swift and dire. Community members were outraged, and the school district placed him on immediate administrative leave. Justice seemed to have carried the day, except that the recording was fake.

The recording of Eiswert was a deepfake, a portrayal of an individual in an image, video, or audio recording that has been digitally altered, typically to depict that person saying or doing something they did not say or do. An investigation by law enforcement revealed that the deepfake recording had likely been created by the high school's disgruntled athletic director.

Eiswert's predicament is becoming more common. As generative AI technology has improved markedly in the last few years, it has made creating remarkably realistic deepfakes easy. And the internet allows these AI-generated deepfakes to spread like wildfire. Victims have included everyone from Taylor Swift to middle school students to lawmakers like Alexandria Ocasio-Cortez.

In response to this new threat, dozens of states have passed new laws, and Congress recently passed, by an overwhelming margin, the TAKE IT DOWN Act, which makes it a crime to share sexually explicit images, including AI-generated ones, without consent. These measures, although well intentioned, deal only with sexually explicit imagery, not the kind of deepfake that targeted Eiswert. For that, policy makers must think more broadly.

Fortunately, there is a legal precedent that emerged over a century ago in response to a similar problem: the portable camera.

In the late 19th century, the Kodak portable camera horrified the sensibilities of Americans and Europeans. Newspaper articles lamented the capture of our every mistake on film. English vigilantes assaulted photographers for surreptitiously taking photos of women swimming along English beaches. US President Theodore Roosevelt accosted a boy for taking his photo as he left church, calling it a "disgrace." While turn-of-the-century society did not like appearing in others' photographs without permission, the greater concern was the ability of photographers to share those photos with others. The development of mass media during the same period meant that these images could be shared with millions.

In response to this threat, two leading legal minds of the day, Samuel Warren and the future US Supreme Court justice Louis Brandeis, proposed a new legal right: the right to privacy. In an 1890 Harvard Law Review article, Warren and Brandeis warned that, together, "instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life." They were concerned that losing control over one's likeness harmed not only one's financial interests but also one's human dignity, with photos potentially causing "mental pain and distress ... far greater than could be inflicted by mere bodily injury."

The interests that Warren and Brandeis described have since become known as the right of publicity. Now recognized in most states, the right of publicity protects an individual's name, image, and likeness against harmful unauthorized uses by others for their own benefit. Crucially, the protection is not limited to sexualized images, unlike much of the current legislation that aims to crack down on deepfakes. And it could be an invaluable weapon against the worst harms of deepfakes.

Warren and Brandeis's concerns are salient in our current moment. Like the camera, AI tools allow anyone to capture another's likeness. Like earlier forms of mass media, the internet allows millions to access the image. Of course, the power and scope of AI and the internet increase the potential harm. Whereas the camera was limited to capturing actions that actually occurred, AI tools allow us to create deepfakes of others doing things they never even contemplated.

Today, one of the greatest limits on the right of publicity in many states is that it is restricted to commercial uses of one's likeness such as in advertising or marketing. Yet Warren and Brandeis never intended the right to be so limited. They and other concerned parties in 1890 also wrote about the harms that cameras could inflict on human dignity.

Legislatures and scholars have challenged this limited understanding of the right of publicity. For example, California requires only that a person's likeness be used for another's advantage, whether commercial or otherwise, for a right of publicity claim to arise.

The right of publicity may also offer a way for victims to remove deepfakes from the internet. Online platforms are typically not responsible for their users' content, such as a Facebook post or an Amazon listing, because Section 230 of the Communications Decency Act protects them from liability. But Section 230 carves out an exception for intellectual property: platforms can still be held liable for intellectual property violations. And courts across the country have increasingly treated the right of publicity as an intellectual property right. More should do so in cases involving deepfakes.

Policy makers can help judges by adopting laws that expressly define the right of publicity as an intellectual property right, bringing deepfake claims within Section 230's intellectual property exception. In a few state and federal court jurisdictions, judges have recognized this very principle, ruling that online platforms are liable if they do not remove content, like deepfakes, that misappropriates a person's likeness. But most such cases have settled out of court before a final ruling, so it is unclear how widely this principle would be applied unless new laws are passed to give courts and the public better guidance.

Congress is considering a national law that would make deepfakes a violation of a person's intellectual property rights. While a victim could still pursue litigation against the creator of the deepfake or even the online platform, compelling platforms to take down deepfakes could offer a more efficient solution for most people.

Deepfakes underscore why a broad right of publicity is necessary. They pose a threat to everyone: they have victimized celebrities and politicians, and they have also targeted ordinary individuals like Eric Eiswert. It's time for lawmakers to fight back against the harms of this technology, and thankfully, we already have a tool to do that.

