A sweeping show of support from the technology industry has emerged around Anthropic’s legal battle with the Pentagon, with Microsoft leading the charge by filing an amicus brief in a San Francisco federal court. The brief argued that a temporary restraining order was critical to prevent serious harm to the ecosystem of companies and government programs that depend on Anthropic’s AI. The coordinated response from some of the world’s most powerful tech companies signals a rare moment of industry-wide unity on the question of AI ethics and government overreach.
Anthropic found itself in the Pentagon’s crosshairs after refusing to agree to terms that would allow its Claude AI model to be used for mass surveillance or autonomous lethal weapons during negotiations over a $200 million military contract. Defense Secretary Pete Hegseth labeled the company a supply-chain risk following the collapse of those talks, effectively cutting it off from government work. Anthropic responded by filing two separate lawsuits, one in California and one in Washington, D.C., arguing that the designation was both unprecedented and unconstitutional.
Microsoft’s decision to file in support of Anthropic carries significant weight given its status as one of the Pentagon’s most important technology partners. The company is one of the vendors awarded a share of the $9 billion Joint Warfighting Cloud Capability contract and has separate agreements covering software and enterprise services across federal agencies. Microsoft publicly stated that reliable access to leading technology and responsible AI governance were not competing goals but complementary ones that required cooperation between industry and government.
In its legal filings, Anthropic argued that the supply-chain risk designation, typically reserved for companies with ties to China or other foreign adversaries, was being weaponized as ideological punishment. The company disclosed that it does not currently trust Claude to function safely in contexts involving autonomous lethal decision-making, which it said was the foundation of its usage restrictions. The Pentagon’s chief technology officer dismissed any possibility of renegotiation, stating publicly that there was no chance the agency would return to the table.
This case has broader implications for the future of AI governance in the United States. House Democrats have written to the Pentagon seeking clarification on whether AI tools were used in military strikes in Iran that reportedly caused mass civilian casualties. The questions being asked in Congress and in the courts are fundamentally the same: who decides how AI is used in warfare, and what safeguards exist to ensure accountability when things go wrong.
