Sunday, March 1, 2026
Trump Orders Federal Agencies to Stop Using Anthropic’s AI Tools

US President Donald Trump on Friday ordered all federal agencies to stop using AI tools developed by Anthropic, a leading AI startup. The announcement follows escalating tension between Anthropic and top government officials over the military application of artificial intelligence. Trump made the declaration on the Truth Social platform, criticizing the company for what he characterized as an ill-advised attempt to limit how the War Department can use AI-powered weapons.

Anthropic and the Pentagon, the two parties at the heart of the dispute, have not yet responded to requests for comment on the president’s directive. Trump’s order also includes a “six-month opt-out period” for agencies currently using Anthropic’s services, presumably leaving a window open for further negotiation between the government and the startup.

Anthropic Labeled as a “Supply Chain Risk”

Shortly after Trump’s announcement, Defense Secretary Pete Hegseth revealed that Anthropic would also be categorized as a “supply chain risk,” a label traditionally reserved for foreign companies perceived as threats to national security. The designation bars the U.S. military, along with its contractors and suppliers, from working with the AI company.

Hegseth also condemned Anthropic and its CEO, Dario Amodei, for refusing to comply with his demands, accusing the company of putting Silicon Valley ideology above American lives. The Defense Department had sought to alter a deal struck last July with Anthropic, among other companies, to lift restrictions on the use of AI and permit “any lawful use” of the technology. Anthropic resisted the change, arguing it could allow AI to control lethal autonomous weapons or facilitate mass surveillance of U.S. citizens.

Anthropic’s Relationship with the Pentagon

Anthropic gained recognition as the first major AI lab to collaborate with the U.S. military, under a $200 million contract with the Pentagon last year. Several custom models, known as Claude Gov, were developed under this contract; they carry fewer restrictions than Anthropic’s standard models. Other AI companies, including Google, OpenAI, and xAI, struck similar deals at around the same time, but only Anthropic is currently working with classified systems.

Claude Gov is primarily used for mundane tasks such as report writing and document summarization. However, it is also used in intelligence analysis and military planning, according to an anonymous source who spoke to WIRED. The model is available for classified military work through Palantir’s platform and Amazon’s cloud services.

Reaction from Silicon Valley

In recent years, Silicon Valley has shifted from generally avoiding defense work to actively pursuing it, with some firms becoming full-fledged military contractors. The current standoff between Anthropic and the Pentagon tests the limits of that transition. This week, hundreds of OpenAI and Google employees expressed support for Anthropic by signing an open letter criticizing their own companies’ decisions to lift restrictions on the military use of AI.

In a memo sent to OpenAI employees today, CEO Sam Altman said the company shares Anthropic’s view that mass surveillance and fully autonomous weapons are a “red line.” Altman indicated that the company would strive to reach an agreement with the Pentagon that would allow it to continue working with the military, as reported by the Wall Street Journal.

