AI Incident Database

Report 6296

Associated Incidents

Incident 1248 · 1 Report
Google's Bard, Gemini, and Gemma AI Systems Allegedly Generated Defamatory Claims About Activist Robby Starbuck, Prompting Lawsuit

Activist Robby Starbuck Sues Google Over Claims of False AI Info
wsj.com · 2025

Conservative activist Robby Starbuck filed a defamation lawsuit against Google alleging its artificial-intelligence tools falsely connected Starbuck to sexual-assault claims and to a white nationalist.

Starbuck said he became aware of the inaccuracies in 2023 while using Bard, an early Google AI tool. Bard said that Starbuck had ties to Richard Spencer, a once-prominent white nationalist, according to the lawsuit. At the time, Starbuck took to social-media platform X and tagged Google and its CEO in a post about the details: 

"Imagine a future where Bard is used to decide whether you get a loan, if you're approved for adoption," he asked his hundreds of thousands of followers at the time. The lawsuit says newer Google AI tools produced other falsehoods about him earlier this year, including claims that Starbuck had been accused of sexual assault. 

Starbuck's lawsuit, filed in Delaware Superior Court on Wednesday, seeks more than $15 million in damages. Many of the claims related to inaccurate information in Bard that Google addressed in 2023, said José Castañeda, a spokesman for Alphabet unit Google.

Inaccurate information is a "well-known issue for all LLMs, which we disclose and work hard to minimize," said Castañeda, using the acronym for large language model, a type of AI data system that is used to create services such as Bard or ChatGPT. "If you're creative enough, you can prompt a chatbot to say something misleading." 

This is the second AI-related defamation lawsuit Starbuck has filed this year against a large technology company. In April, Starbuck sued Meta Platforms, alleging its AI tool falsely asserted he participated in the Jan. 6 riot at the U.S. Capitol in 2021. Starbuck and Meta settled the lawsuit this summer in an undisclosed agreement that included hiring Starbuck as a Meta adviser. Starbuck declined to share the terms of his settlement with Meta.

Starbuck has gained attention for his successful efforts to pressure large companies to abandon their diversity, equity and inclusion policies and retreat from environmental goals and other corporate sustainability policies.


So far, no U.S. court has awarded damages to someone defamed by an AI chatbot. In May, a Georgia court ruled in favor of OpenAI in an AI-related defamation case involving ChatGPT. In that case, conservative talk-radio host Mark Walters claimed that ChatGPT said he was the subject of a lawsuit accusing him of embezzling funds from a gun-rights organization.

The court ruled in favor of OpenAI, in part because ChatGPT warns users that it sometimes produces inaccurate information and because Walters didn't show actual malice behind the AI chatbot results, according to a court document.   

The facts of that case weren't a good test of how courts may eventually rule in AI defamation cases, said Clare Norins, director of University of Georgia School of Law's First Amendment Clinic. More cases are moving through courts that present new legal questions about how AI tools will be held accountable for the information they produce, she said. 

"I do think it's a significant question given that generative AI is here to stay," said Norins.  

Starbuck's suit alleges that Gemma, an open AI model, and Gemini, one of Google's primary consumer-facing AI systems, asserted earlier this year that Starbuck has been accused of sexual assault and that he had participated in the Jan. 6 riot. Gemma, according to the suit, listed false media links as sources for those claims.

"We will review the complaint when we receive it," said the Google spokesman. The Gemma model is intended for developers to customize and build on, while Gemini is a consumer-facing application, he said.

Over the summer, Starbuck's lawyer sent cease-and-desist letters to Google, according to documents viewed by The Wall Street Journal. Starbuck said he hasn't been accused of or charged with sexual assault.

"We felt that it was futile to continue with written correspondence when we weren't getting the type of engagement that this matter would require," said Krista Baughman, a partner at Dhillon Law Group and Starbuck's lawyer on the case.

