AI Incident Database
Report 6920

U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban
wsj.com · 2026

Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools.

Commands around the world, including U.S. Central Command in the Middle East, use Anthropic's Claude AI tool, people familiar with the matter confirmed. Centcom declined to comment about specific systems being used in its ongoing operation against Iran.

The command uses the tool for intelligence assessments, target identification and simulating battle scenarios even as tensions between the company and the Pentagon have ratcheted up, the people said, highlighting how embedded the AI tools are in military operations.

The administration and Anthropic have been feuding for months over how Anthropic's AI models can be used by the Pentagon. Trump on Friday ordered agencies to stop working with the company, and the Defense Department designated it a security threat and a risk to its supply chain.

That came after Anthropic refused to let the Pentagon use its tools in all lawful scenarios during their contract negotiation. Anthropic's lobbying against the administration's AI policies and ties to organizations that are big Democratic donors have also upset administration officials.

Claude's use in high-profile missions, such as the U.S. military operation that captured Venezuelan President Nicolás Maduro, shows why the administration said it would take six months to phase out the technology: a complicated process given how it is used by partners including data-mining firm Palantir.

As Anthropic's relationship with the Pentagon soured, the Defense Department reached deals with competitors, ChatGPT maker OpenAI and Elon Musk's xAI, to use their models in classified settings. It will take months to replace Claude with the other models, AI experts said.
