Saturday, February 28, 2026

Anthropic hits back after US military labels it a “supply chain risk.”

In an unexpected move, US Secretary of Defense Pete Hegseth issued an order on Friday designating the artificial intelligence (AI) company Anthropic a “supply chain risk.” The directive sent shockwaves through Silicon Valley, prompting numerous businesses to question whether they can continue using one of the tech industry’s most widely deployed AI models.

In a declaration on social media, Hegseth stated, “Effective immediately, no contractor, supplier or partner doing business with the U.S. military may conduct commercial activities with Anthropic.” This sudden announcement came in the wake of weeks of intense discussions between the Pentagon and Anthropic regarding the conditions under which the US military could utilize the startup’s AI models.

Tensions Between the Pentagon and Anthropic

Earlier in the week, Anthropic had voiced its concerns in a blog post, arguing that its contracts with the Pentagon should not permit its technology to be used for mass domestic surveillance or for fully autonomous weaponry. The Pentagon, however, sought a more open-ended agreement, requesting that Anthropic consent to the US military using its AI for “any lawful purpose” without any specific exceptions.

A supply chain risk designation allows the Pentagon to limit or exclude vendors from defense contracts when they are perceived to pose security vulnerabilities. Such risks can include foreign ownership, control, or influence, and the designation is intended to safeguard sensitive military systems and data from potential threats.

Anthropic’s Response

Anthropic promptly responded to this move in a blog post on Friday evening. The company proclaimed its intent to “challenge any designation as a supply chain risk in court,” claiming that such a designation would establish a dangerous precedent for any American company negotiating with the government.

Anthropic also clarified that it had not received any direct communication from either the Defense Department or the White House regarding the terms of use for its AI models. The company disputed Hegseth’s authority to enforce this designation, stating, “The Secretary does not have the legal authority to substantiate this statement.”

The Pentagon, however, declined to comment on the situation.

Silicon Valley Reacts

The tech industry was quick to voice its shock and disapproval. Dean Ball, a senior fellow at the Foundation for American Innovation and a former senior policy adviser on AI at the White House, commented, “This is the most shocking, damaging and arrogant thing I have ever seen from the United States government. We essentially just sanctioned an American company. If you’re an American, you should think about whether or not you should be living here in 10 years.”

Similar sentiments were echoed by other influential figures in Silicon Valley. Paul Graham, the founder of startup accelerator Y Combinator, criticized the government’s rashness and vindictiveness, while OpenAI researcher Boaz Barak expressed hope that this decision would be reversed.

OpenAI Steps In

Meanwhile, OpenAI CEO Sam Altman announced that the company had reached an agreement with the Department of Defense to deploy its AI models in classified environments. Altman assured that the agreement incorporated key security principles, including prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems.

Implications for Anthropic’s Customers

Anthropic sought to clarify the implications of the supply chain risk designation, stating that it applies only to the Department of Defense’s direct contracts with suppliers and does not affect how contractors use its Claude AI software to serve other customers. Even so, the ultimate impact on Anthropic’s customers remains unclear, with legal experts expressing uncertainty over the full reach of Hegseth’s directive.

As the tech industry grapples with the implications of this unexpected move, the future of Anthropic and the broader use of AI in military applications remain in flux. This incident serves as a stark reminder of the complex ethical and legal issues surrounding the use of advanced technology in the realm of national security.

