AI Incident Database

Incident 1313: Anthropic Claude AI Agent Reportedly Caused Financial Losses While Operating Office Vending Machine at Wall Street Journal Headquarters

Description: An AI agent reportedly based on Anthropic's Claude model was deployed to operate an office vending machine at The Wall Street Journal, including purchasing inventory, setting prices, and managing sales. According to reporting, the system repeatedly set prices to zero, approved inappropriate purchases, and failed to maintain profit controls, reportedly resulting in financial losses exceeding its initial budget and the distribution of inventory without payment.
Editor Notes: Timeline note: The 12/18/2025 incident ID date is taken from the publication date of The Wall Street Journal's report describing the process and its outcomes. The AI-operated vending machine was reportedly installed sometime in mid-November 2025, and financial losses and other unintended outcomes reportedly began occurring within days of deployment, but the exact date when harm first occurred is not specified in the reporting.


Entities

View all entities
Alleged: Anthropic developed an AI system deployed by The Wall Street Journal and Andon Labs, which harmed The Wall Street Journal and Andon Labs.
Alleged implicated AI systems: Anthropic Claude, Anthropic Claude–based AI agent, and AI-driven vending machine management system

Incident Stats

Incident ID
1313
Report Count
1
Incident Date
2025-12-18
Editors
Daniel Atherton

Incident Reports

Reports Timeline

We Let AI Run Our Office Vending Machine. It Lost Hundreds of Dollars.
wsj.com · 2025

Name: Claudius Sennet

Title: *Vending machine operator*

Experience: Three weeks as a Wall Street Journal operator (business now bankrupt)

Skills: *Generosity, persistence, total disregard for profit margins*

You'd toss Claudius's résumé i…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

2024 - AI Incident Database

  • Terms of use
  • Privacy Policy
  • Open twitterOpen githubOpen rssOpen facebookOpen linkedin
  • f5f2449