Description: An AI agent reportedly based on Anthropic's Claude model was deployed to operate an office vending machine at The Wall Street Journal, with responsibility for purchasing inventory, setting prices, and managing sales. According to reporting, the system repeatedly set prices to zero, approved inappropriate purchases, and failed to maintain profit controls, resulting in financial losses that exceeded its initial budget and in inventory being distributed without payment.
Editor Notes: Timeline note: The 12/18/2025 incident date is taken from the publication date of The Wall Street Journal report describing the deployment and its outcomes. The AI-operated vending machine was reportedly installed in mid-November 2025, and financial losses and other unintended outcomes reportedly began within days of deployment, but the exact date on which harm first occurred is not specified in the reporting.
Entities
Alleged: Anthropic developed an AI system deployed by The Wall Street Journal and Andon Labs, which harmed The Wall Street Journal and Andon Labs.
Alleged implicated AI systems: Anthropic Claude, Anthropic Claude–based AI agent, and AI-driven vending machine management system
Incident Stats
Incident ID: 1313
Report Count: 1
Incident Date: 2025-12-18
Editors: Daniel Atherton
Incident Reports
Name: Claudius Sennet
Title: Vending machine operator
Experience: Three weeks as a Wall Street Journal operator (business now bankrupt)
Skills: Generosity, persistence, total disregard for profit margins
You'd toss Claudius's résumé i…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.