A legal battle that began as a contract dispute has gone national, with Microsoft throwing its considerable weight behind Anthropic in a federal court case that could redefine the relationship between the US military and the AI industry. Microsoft filed an amicus brief in federal court in San Francisco urging the judge to grant a temporary restraining order blocking the Pentagon’s supply-chain risk designation. The brief was accompanied by a joint filing from Amazon, Google, Apple, and OpenAI, making the case a vehicle for the entire technology industry’s concerns about government overreach in AI governance.
The supply-chain risk designation was applied to Anthropic after it refused to allow its Claude AI to be deployed for mass domestic surveillance or autonomous lethal weapons as part of a $200 million contract with the Pentagon. Defense Secretary Pete Hegseth made the designation official, and within days the government began cancelling Anthropic’s contracts. The company responded with two simultaneous lawsuits, one in California and one in Washington, DC, arguing that the designation was both unprecedented and unconstitutional.
Microsoft’s legal filing is anchored in its direct commercial relationship with Anthropic: the company integrates Anthropic’s AI into military systems and holds a share of the Pentagon’s $9 billion cloud computing contract. Additional contracts worth billions more span defense, intelligence, and civilian government agencies. Microsoft stated publicly that the government and technology sector needed to work together to ensure that AI served national security without enabling surveillance or unauthorized military action.
Anthropic’s court filings argued that the supply-chain risk label, traditionally reserved for companies with ties to China or other adversaries, was being misused as a political tool to punish the company for its publicly stated positions on AI safety. The company stated that it does not believe Claude is currently safe or reliable enough for autonomous lethal operations, and that this was the genuine reason for the restrictions it sought in the contract. The Pentagon’s technology chief publicly dismissed any possibility of renewed negotiations.
Congressional Democrats have separately written to the Pentagon seeking information about whether AI was used in a strike in Iran that reportedly killed more than 175 people at an elementary school. Their letter asked specifically whether AI targeting tools were used and whether a human verified the target before the strike was executed. These questions are adding legislative urgency to what is already an extraordinarily consequential legal and policy confrontation over the future of AI in American warfare.
Microsoft Throws Its Weight Behind Anthropic as Legal Battle Over Pentagon’s AI Blacklist Goes National