Anthropic's fight with the federal government went to court on Tuesday. The government had a rough day.
A federal judge in San Francisco sharply questioned the Pentagon's decision to blacklist Anthropic, suggesting the move looks more like retaliation than a legitimate national security action.
"It looks like defendants went further than that because they were trying to punish Anthropic," Judge Rita Lin said during the hearing. She referenced an amicus brief that called the blacklisting "attempted corporate murder." "I don't know if it's murder," Lin added, "but it does appear that this order was designed to cripple the company."
Here's the backstory. In July 2025, Anthropic signed a $200 million agreement with the Department of Defense to bring Claude onto classified military networks. But things fell apart when the Pentagon pushed the company to strip out certain safety guardrails, including restrictions on use in autonomous weapons and domestic surveillance.
Anthropic wouldn't budge. CEO Dario Amodei laid it out in a public statement in February: "Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America's warfighters and civilians at risk."
The Pentagon's response was to designate Anthropic a "supply chain risk" — a legal classification originally designed to protect military systems from foreign adversaries who might tamper with defense technology. The designation doesn't just block Claude from military contracts. It covers all of Anthropic's products and subsidiaries, effectively banning any federal contractor from doing business with the company.
That scope is what Judge Lin zeroed in on. She characterized the government's legal theory as arguing a company can be branded a supply chain risk because it is "stubborn" and "asks annoying questions." She also pointed out the obvious: "If the worry is about the integrity of the operational chain of command, DOD could just stop using Claude."
DOJ attorney Eric Hamilton argued that Anthropic's negotiating posture made the Pentagon unable to trust the company and raised concerns about a "risk of future sabotage."
Anthropic's counsel, Michael Mongan, had a response that landed: "A saboteur is not going to get into a public spat. They're just going to accept the contractual term proposed by the government and then go and do nefarious things."
A court filing surfaced by TechCrunch makes the timeline even harder to square: Pentagon officials told Anthropic the two sides were "very close" to a deal the day after the blacklisting came down. Senator Elizabeth Warren has since called the designation what a lot of people in the industry are thinking: retaliation.
The Pentagon's frustration may be easier to understand through a comment from Emil Michael, the Under Secretary of Defense for Research and Engineering. On a recent Kleiner Perkins podcast, he compared AI vendors to software companies: "If you buy the Microsoft Office Suite, they don't tell you what you could write in a Word document, or what email you can send." In other words, the Pentagon wants to buy a tool and decide for itself how to use it.
Anthropic isn't the only AI lab that drew the line on safety. OpenAI published its own statement days after Amodei's, saying it was "unwilling to remove key technical safeguards to enhance performance on national security work." But OpenAI hasn't faced the same consequences, which suggests the trigger wasn't the safety stance itself — it was Anthropic's refusal to go quietly.
The commercial fallout has been swift and ironic. ChatGPT uninstalls on the App Store reportedly surged after the ban went public, while Claude climbed to No. 1 by the end of February. A Pentagon official acknowledged it could take 12 months or longer to replace what Claude provides on classified networks.

The government's theory boils down to this: a company becomes a national security threat by negotiating too hard on safety. If that logic survives, every AI lab weighing a defense contract just got a clear message — play ball or get blacklisted. Judge Lin's skepticism suggests the courts won't let that stand. But the chilling effect doesn't wait for a ruling. The precedent isn't the legal outcome. It's the spectacle.
