Friday, March 27, 2026
Judge sides with Anthropic in temporarily blocking Pentagon ban

The Pentagon and Anthropic: A Legal Battle Over AI

In the clash between Anthropic, an AI company, and the Pentagon, a significant legal development has unfolded. After weeks of confrontation, a judge has granted Anthropic a preliminary injunction in its lawsuit seeking to overturn the company's government blacklisting, blocking the designation until the legal proceedings conclude. A final verdict is expected to take weeks or months.

Why Did the Confrontation Start?

According to War Department records, Anthropic was designated a supply chain risk because of its "hostile manner through the press." Judge Rita F. Lin, a district judge for the Northern District of California, found that punishing Anthropic for speaking publicly about its contract dispute with the government amounted to unlawful retaliation for speech protected by the First Amendment.

Anthropic’s Stance

In response to the decision, Anthropic spokeswoman Danielle Cohen thanked the court for its swift action and said the company's goal remains to work productively with the government to ensure all Americans benefit from safe and reliable AI. Notably, Anthropic maintains that its AI product, Claude, should not be used for lethal autonomous weapons or domestic mass surveillance, and insists that the government agree not to use the technology for those purposes as a condition of using it at all.

The Pentagon’s Argument

The War Department, by contrast, argues that decisions about the safety of AI should rest with military commanders, and that the department should choose which AI products it buys and uses. Judge Lin clarified that the issue in the case is not procurement but whether the government violated the law by exceeding its authority.

The Triggering Memo

The conflict began with a memo from Defense Secretary Pete Hegseth requiring that every AI services contract permit "any lawful use" within 180 days. The requirement would also apply to existing contracts with companies such as Anthropic, OpenAI, xAI, and Google. Anthropic's negotiations with the Pentagon lasted weeks and centered on two uses the company wanted to bar the military from making of its AI: domestic mass surveillance and lethal autonomous weapons.

Consequences of the Dispute

The fallout from the confrontation led to a series of events, including social media insults, a supply chain risk designation that could harm Anthropic's business, other AI companies rushing to make deals, and the subsequent lawsuit. Anthropic claims it was punished for speech protected by the First Amendment and seeks to overturn the supply chain risk designation. According to its court filings, the designation significantly affected the company's operations, putting revenues ranging from hundreds of millions to several billion dollars at risk.

The Ongoing Legal Proceedings

In a recent court filing, the Defense Department alleged that Anthropic could potentially "attempt to disable its technology or preemptively modify the behavior of its model before or during ongoing war operations." However, the judge's preliminary questions appeared to challenge this claim, pressing for more concrete evidence.

As the lawsuit unfolds, it has become an important debate over the use of AI technology and the right of companies to limit its applications. The final verdict could significantly shape the relationship between tech companies and government entities.
