While civilian agencies must immediately stop using Anthropic systems, the Pentagon has six months to phase them out. Trump’s decision could benefit competitors like Elon Musk’s xAI, along with defense contractors working with Google and OpenAI.
BY PC Bureau
February 28, 2026: In a dramatic escalation of tensions between Silicon Valley and Washington, President Donald Trump on Friday ordered federal agencies to stop using artificial intelligence developed by Anthropic, effectively cutting off one of the government’s fastest-rising AI partners after the company refused to loosen safeguards on military use of its technology.
The directive came just over an hour before the Pentagon’s deadline for Anthropic to grant unrestricted military access to its AI systems, and less than a day after CEO Dario Amodei publicly declared that the company “cannot in good conscience accede” to the Defense Department’s demands.
Trump said most civilian agencies must immediately discontinue Anthropic’s tools, while allowing the Pentagon a six-month window to phase out systems already embedded in military platforms.
“We don’t need it, we don’t want it, and will not do business with them again!” Trump wrote.
At the heart of the dispute is a fundamental clash over the role of artificial intelligence in national security, particularly concerns over the use of AI in lethal operations, surveillance, and access to classified information.
The decision is expected to benefit competitors, including Elon Musk’s company xAI, whose Grok chatbot is being prepared for integration into classified military networks. It also sends a clear warning to other AI providers such as Google and OpenAI, both of which maintain defense contracts.
In a social media post, Trump said, “THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military. The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE.”
Anthropic, best known for its Claude chatbot, faces significant financial and reputational risks. Earlier this week, Defense Secretary Pete Hegseth issued an ultimatum threatening not only contract termination but also the possibility of labeling the company a “supply chain risk”—a designation typically reserved for foreign adversaries and one that could jeopardize its partnerships across the tech sector.
Although Trump stopped short of formally applying that designation, he warned that Anthropic could face “major civil and criminal consequences” if it fails to cooperate during the transition period.
Anthropic did not immediately respond to Trump’s latest remarks.
The company had sought limited assurances that its AI would not be used for mass domestic surveillance or fully autonomous weapons. However, after months of private negotiations broke down, Anthropic said revised contract language presented as a compromise contained legal provisions that could allow those safeguards to be bypassed.
Pentagon spokesman Sean Parnell insisted the military had no intention of using AI for illegal surveillance or autonomous weapons without human oversight, stating that the Defense Department seeks to deploy AI “for all lawful purposes.” However, officials have provided few specifics about how the technology would ultimately be used.
The confrontation has reverberated across Silicon Valley, where employees from rival firms voiced support for Anthropic’s stance. In an open letter, tech workers warned that the Pentagon appeared to be pressuring companies individually in hopes of forcing compliance.
In a surprising show of solidarity, Sam Altman, CEO of OpenAI and a longtime rival of Amodei, publicly backed Anthropic’s position, calling the Pentagon’s approach “threatening” and emphasizing that AI companies broadly share similar safety red lines.
“I mostly trust them as a company, and I think they really do care about safety,” Altman said.
Meanwhile, Musk sided firmly with the administration, accusing Anthropic of ideological bias and criticizing its refusal to comply.
The dispute highlights a growing ideological and strategic divide over AI’s role in warfare and governance. Some lawmakers and defense officials warned that politicizing AI partnerships could ultimately undermine national security.
Sen. Mark Warner cautioned that the administration’s rhetoric raised serious concerns about whether critical national security decisions were being driven by careful analysis or political motivations.
Former Pentagon AI chief Jack Shanahan also warned against targeting Anthropic, noting that its technology is already widely used across government systems and that its safety concerns were reasonable.
The standoff leaves the Pentagon scrambling to secure alternative AI providers while raising broader questions about how far governments can compel private companies to deploy powerful technologies in military contexts—and whether Silicon Valley will comply.