What the Anthropic Lawsuit Means for the Future of AI in Warfare

Tension has been building between the Trump administration and the tech firm Anthropic amid a heated debate over how and when the government should use the company’s AI in war.

The company and the government are at an impasse: the Department of Defense wants carte blanche on how it can use Claude, Anthropic’s large language model, at home and on the battlefield; Anthropic says it doesn’t think Claude is ready to be used for mass surveillance of Americans or to develop fully autonomous weapons. When Anthropic refused to give in on these uses, President Donald Trump and Secretary of Defense Pete Hegseth responded by restricting the company’s technology within federal agencies, ultimately labeling Anthropic a supply chain risk to national security.

The move was extreme, to say the least: The designation bans Anthropic from working on government contracts, making it the first American-owned company to be publicly designated a supply chain risk, a label intended to address concerns that adversaries of the United States may be maliciously introducing vulnerabilities into U.S. military systems. In the past, the designation has been reserved for Chinese and Russian companies suspected of espionage or sabotage.

On Monday, Anthropic filed two lawsuits against several federal agencies, including the Department of Defense and the Executive Office of the President, claiming that the government’s actions are “unprecedented and unlawful” and arguing that the designation was retaliation for the breakdown of negotiations. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” Anthropic’s attorneys argue in the lawsuit.

Experts were similarly surprised by the government’s actions. “The way that the statute defines supply chain risk is that it must be an adversary that can end up compromising or introducing a malicious function in defense and national security systems,” says Amos Toh, a senior counsel at the Brennan Center’s Liberty and National Security program who is not connected to the Anthropic case. “In this case, there is no evidence that by including Claude in your defense and military systems, adversaries will seek to compromise it. If anything, the opposite argument could be made: The fact Anthropic is drawing these red lines and not letting DoD use this technology when it’s not ready for prime time, increases the safety of the system.”

And the problem goes far beyond one company’s relationship with the government. “There’s an extraordinarily deep and important issue lurking somewhere here, which is, what should the relationship be between the government and the AI industry?” says University of Minnesota Law School professor Alan Rozenshtein, who is also not involved with the lawsuit. “Should the military ever enter into contracts where the private company purports to limit what the military can do beyond just obeying the laws?”

‘The use of models like Claude supercharges the potential for abuse’

In July 2025, the Trump administration released its AI Action Plan, announcing the government’s intent to expand its use of AI technology. That same month, Anthropic signed a deal with the Pentagon to incorporate Claude into military operations. The government has reportedly used Claude in the raid and capture of Venezuelan dictator Nicolas Maduro, and in its current attacks on Iran.

At the end of February, after months of negotiations, conversations to update the contract between Anthropic and the Department of Defense stalled when Anthropic reiterated that Claude was not yet ready to be used for lethal autonomous warfare or the mass surveillance of Americans. The Pentagon argued it wanted full control over how Claude was used, so long as the uses were “lawful.”

Toh explains that the word lawful is important. “What the Defense Department is essentially doing is exploiting legal gray areas around its ability to conduct surveillance on Americans and to develop and field fully autonomous weapons,” says Toh. “To push for uses of AI models in ways that are either constitutionally suspect or kind of dubious under its obligations under the laws of war.”

In other words, the Supreme Court has not directly grappled with what constitutional protections Americans have when the government uses AI. Because AI regulations and laws have not kept pace with advances in the technology, something that is technically lawful today could still sit in a moral and ethical gray area. Experts argue these are complicated challenges that require deeper consideration, and would benefit from an informed Congress weighing in.

On mass surveillance, Toh explains, we have already seen law enforcement, intelligence agencies, and the military deployed on joint missions, as in immigration enforcement. One of the red lines Anthropic drew was not allowing Claude to be used to analyze bulk commercial data sets that contain information about Americans. Toh says that while such analysis is technically legal, an AI model like Claude can connect seemingly unrelated data points to produce extremely sensitive insights into people’s lives, habits, and movements that could be used to surveil or target them. “You can see scenarios where the military may be tasked to perform certain types of surveillance in immigration enforcement activities, and that’s a situation in which the use of models like Claude supercharges the potential for abuse.”

When it comes to weapons, Toh adds, Anthropic is not saying it is opposed to autonomous weapons; it is saying it opposes the use of its current model to develop fully autonomous weapons because the technology is not there yet.

“We do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making — that is the role of the military,” Anthropic CEO Dario Amodei said in a statement on March 5. “Our only concerns have been our exceptions on fully autonomous weapons and mass domestic surveillance, which relate to high-level usage areas, and not operational decision-making.”

‘Supply Chain Risk’ 

In response to Anthropic’s refusal to back down on these two red lines, Hegseth threatened two seemingly contradictory actions against the company. First, he said he may invoke the Defense Production Act, a law that would require Anthropic to sell Claude to the Department of Defense without the restrictions Anthropic wanted. Then, he said he would deem Anthropic a “supply chain risk,” a threat President Donald Trump expanded in a Truth Social post, saying he was directing every federal agency to “immediately cease all use of Anthropic’s technology.” The president added that there would be a six-month phase-out period for agencies, like the Department of Defense, that are currently using Anthropic’s products; the government, for example, is using Claude in the military operation against Iran.

On Feb. 27, Sam Altman announced that his company, OpenAI, a competitor to Anthropic, had reached an agreement with the Pentagon to deploy its AI models on the classified military network.

The following week, on March 4, Anthropic was officially labeled a supply chain risk, though the formal designation was narrower than Trump’s Truth Social post had suggested.

Rozenshtein explains how Hegseth’s initial threat could have decimated Anthropic. “The DOD supply chain designation only applies to DOD contracts, but Hegseth was also saying no one who does business with the DOD can do any business with Anthropic at all, which would have ended Anthropic as a company,” he says. The latter is not currently on the table, and if Trump follows through on his threat to stop Anthropic from selling to anyone in the government, Rozenshtein says that this is still only a fraction of Anthropic’s overall business.

“But the vibes are bad,” says Rozenshtein. “It’s bad to walk around being designated a supply chain risk by the U.S. government.”

Anthropic filed lawsuits in California and Washington D.C. making multiple arguments against both the Department of Defense’s actions and Trump’s threats. 

“The first argument they’re making is that this supply chain issue makes no sense, this is not what the law was for,” says Rozenshtein. “The law was for foreign companies who are trying to smuggle in threats, not a U.S. company that has a contract dispute.”

“You can’t simultaneously say, ‘We’re going to force you to sell to us and we’re going to use you to bomb Iran and you’re really scary,’” he adds. “This seems very obvious to me and I think they will just win on these grounds.”

Anthropic’s suit also alleges that the government is retaliating against the company and infringing on its First Amendment rights, and that it is being denied due process. The last part of the lawsuit specifically refers to Trump.

“They’re saying Trump can’t just do this, you can’t ban an entire company for no reason from the entire federal government,” adds Rozenshtein. “Congress wrote a whole, very elaborate procurement system and there are rules for this.”

He sums the lawsuit up: “If the government can’t come to some agreement with a company, do they shake hands and walk away like gentlemen? Or do they burn the company to the ground?”

‘A Pressure Tactic’ 

As Anthropic and the Pentagon continue their back-and-forth, the reality is that the government is still actively using Claude, and Anthropic is still in dialogue with the government about the terms of their contract.

“Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners,” Anthropic said in a statement to Rolling Stone. “We will continue to pursue every path toward resolution, including dialogue with the government.”

The dispute has reinvigorated calls for stronger AI regulation and for Congress to weigh in, rather than leaving massive ethical decisions and red lines to be drawn by private companies over which the public has little influence.

“These are consequential policymaking decisions that have life and death implications,” says Toh. “You can’t just leave it up to the military to decide which kinds of weapons satisfy the U.S.’s obligations under the law of war, that is something Congress should investigate and set restrictions on.”

And, Toh adds, the First Amendment implications of this entire debate are extremely serious, whether we are talking about weapons, mass surveillance, or the text an AI model outputs.

“If even the threat of canceling government contracts could be used to force companies to align their technology in ways that prioritize certain facts over others, or characterize things in a certain way, or suppress content, that has a serious impact on our access to information,” says Toh. “What we are seeing with the DOD and Anthropic is that kind of pressure tactic.”

Rozenshtein says that while the feud between Anthropic and the government might be interesting to look at, he hopes people don’t lose sight of the bigger, underlying issues surrounding our relationship with technology, who is making these decisions, and how it impacts all of our lives.

“These are going to be the main issues that we’re gonna be debating for the next several years,” he says. “They are really complicated.”