AI Incident Database

Report 4192

Associated Incidents

Incident 825 · 2 Reports
AI News Site Hoodline San Jose Misidentifies San Mateo District Attorney as Murder Suspect

AI-Powered News Site Accidentally Accuses District Attorney of Murder
futurism.com · 2024

A controversial AI-powered local news site really stepped in it this week — by accidentally accusing a regional district attorney of murder.

As Techdirt first reported, an article appeared on Hoodline San Jose — one of the media network's many local news sites that span the US — earlier this month with a gruesome headline: "SAN MATEO COUNTY DA CHARGED WITH MURDER AMID ONGOING SEARCH FOR VICTIM'S REMAINS."

That's bleak stuff, not to mention a crime that probably would have made national news. But there's an important reason why it didn't hit national airwaves: there was a murder, but the DA didn't commit it. They just charged the guy who allegedly did.

As Techdirt found, Hoodline's AI appears to have taken a post from the San Mateo DA office's official X (formerly Twitter) account, a run-of-the-mill press release announcing that a local man had finally been charged by the DA's office for the awful crime, and garbled it so badly that the resulting article said the DA himself had committed the grisly murder. We're all innocent until accused by AI of crimes.

After Techdirt called Hoodline out for the cataclysmic error, a word soup of an editor's note appeared at the top of the article. It attributed the flub to a simple "typo" that "unfortunately changed the meaning" of its content, "thereby creating ambiguity about the fact that the DA and the accused are two different people."

Mistakes happen in journalism. But as far as journalistic errors go, boldly accusing the wrong person of murder — let alone levying such an accusation at a high-ranking local official — is pretty high up there in terms of seriousness.

Add the fact that Hoodline, currently owned by a murky media outfit called Impress3, is openly using AI to find and cough up synthetic "news," and this becomes the latest significant blunder by a media organization, or a paid third-party provider, attempting to use AI to cheaply churn out content.

The error also calls into question Hoodline's florid promises that its editorial content is crafted with a meaningful level of human oversight.

"We view journalism as a creative science and an art that necessitates a human touch," reads the company's AI disclaimer. "In our pursuit of delivering informative and captivating content, we integrate artificial intelligence (AI) to support and enhance our editorial processes."

The article was attributed to the byline Eileen Vargas, one of the website's many fake, AI-generated reporter personas. As Nieman Lab reported earlier this year, Hoodline's vast lineup of made-up journalists has drawn wide scrutiny for feigning racial diversity in an industry that, in reality, is overwhelmingly white and male.

There might be some implications for Google, a company reportedly testing AI news products itself, as well. Techdirt's Mike Masnick says he discovered the false AI accusation when the Hoodline piece cropped up in his Google News tab. And again, though journalists make mistakes, Google's algorithm platforming erroneous accusations of murder put forth by an explicitly AI-powered news network raises a different set of eyebrows.

How much leash should an AI news site with clearly low editorial standards be given? And can we trust news-sorting algorithms to sift through the rising tide of algorithm-generated content?

What we can say for sure is this: mistakes like these, which a human reporter working under a sound editorial process would be very unlikely to make, are likely to become more and more common as publishers give control over to cheap and barely-supervised AI systems.

Read the Source

2024 - AI Incident Database
