Microsoft backs Anthropic in Pentagon AI blacklist dispute
The company claims the move was retaliation after it refused to allow its AI model to be used for autonomous weapons or mass surveillance
NEW YORK (Web Desk): Microsoft has warned a judge that the United States Department of Defense's decision to blacklist Anthropic could harm American military operations and slow the country's progress in artificial intelligence.
In a legal brief, Microsoft supported Anthropic’s request for a court order to temporarily stop the Pentagon from enforcing the ban until the dispute is resolved in court.
Anthropic recently filed a lawsuit in federal court in San Francisco challenging the government’s decision to label the company a “national security supply-chain risk.” The designation blocks the Pentagon from using Anthropic’s technology and requires defense contractors to certify that they are not using its AI models in their work with the military.
The company claims the move was retaliation after it refused to allow its AI model, Claude, to be used for autonomous lethal weapons or mass surveillance of Americans.
Microsoft argued that immediately enforcing the blacklist could disrupt existing military technology systems that rely on Anthropic’s AI. The company said contractors might have to rapidly redesign products and contracts currently used by the Pentagon, which could hamper U.S. warfighters at a critical time.
The dispute comes amid broader competition among major AI developers. Several industry experts, including researchers from OpenAI and Google, have submitted court briefs supporting Anthropic's position.
Anthropic maintains that its technology should be used responsibly and safely, stating that AI should not enable domestic mass surveillance or fully autonomous warfare.