Anthropic's AI Used in Iran Strikes After Trump Moved to Cut Ties: WSJ

Decrypt

In brief

  • U.S. Central Command reportedly used Anthropic’s Claude for intelligence assessments, target identification, and battle simulation during the Iran strikes.
  • Experts warn the six-month phase-out timeline understates the true cost of replacing an AI model embedded across classified defense pipelines.
  • OpenAI struck a deal with the Pentagon in the wake of Anthropic’s falling-out.

Hours after President Donald Trump ordered federal agencies to halt use of Anthropic’s AI tools, the U.S. military carried out a major airstrike on Iran that reportedly relied on the company’s Claude platform. U.S. Central Command used Claude for intelligence assessments, target identification, and simulating battle scenarios during the Iran strikes, people familiar with the matter confirmed to the Wall Street Journal on Saturday.

The strikes came despite Trump’s directive on Friday that agencies begin a six-month phase-out of Anthropic products, following a breakdown in negotiations between the company and the Pentagon over how the military can use commercially developed AI systems. Decrypt has reached out to the Department of Defense and Anthropic for comment.

“When AI tools are already embedded in live intelligence and simulation systems, decisions at the top don’t instantly translate to changes on the ground,” Midhun Krishna M, co-founder and CEO of LLM cost tracker TknOps.io, told Decrypt. “There’s a lag—technical, procedural, and human.”

“By the time a model is embedded across classified intelligence and simulation systems, you’re looking at sunk integration costs, retraining, security re-certifications, and parallel testing, so a six-month phase-out may sound decisive, but the real financial and operational burden runs far deeper,” Krishna added.

“Defense agencies will now prioritize model portability and redundancy,” he said. “No serious military operator wants to discover during a crisis that its AI layer is politically fragile.”

Anthropic CEO Dario Amodei said Thursday the company would not strip safeguards preventing Claude from being deployed for mass domestic surveillance or fully autonomous weapons. “We cannot in good conscience accede to their request,” Amodei wrote, after the Defense Department demanded that contractors make their systems available for “any lawful use.”

“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War,” Trump later wrote on Truth Social, ordering agencies to “immediately cease” all use of Anthropic products.

Defense Secretary Pete Hegseth followed, designating Anthropic a “supply-chain risk to national security,” a label previously reserved for foreign adversaries, barring every Pentagon contractor and partner from commercial activity with the company. Anthropic called the designation “unprecedented” and vowed to challenge it in court, saying such a label had “never before publicly applied to an American company.” The company added that, to its knowledge, the two disputed restrictions had not affected a single government mission to date.

“The debate isn’t about whether AI will be used in defense; that’s already happening,” Krishna added. “It is whether frontier labs can maintain differentiated guardrails once their systems become operational assets under ‘any lawful use’ contracts.”

OpenAI moved quickly to fill the gap, with CEO Sam Altman announcing a Pentagon deal on Friday night covering classified military networks, claiming it included the same guardrails Anthropic had sought.

Yesterday we reached an agreement with the Department of War for deploying advanced AI systems in classified environments, which we requested they make available to all AI companies.

We think our deployment has more guardrails than any previous agreement for classified AI…

— OpenAI (@OpenAI) February 28, 2026

Asked whether the Pentagon’s effective blacklisting of Anthropic set a troubling precedent for future disputes with AI firms, OpenAI CEO Sam Altman responded on X: “Yes; I think it is an extremely scary precedent, and I wish they handled it a different way.”

“I don’t think Anthropic handled it well either, but as the more powerful party, I hold the government more responsible. I am still hopeful for a much better resolution,” he added.

Meanwhile, nearly 500 employees from OpenAI and Google signed an open letter warning that the Pentagon was attempting to pit AI companies against each other.
