A standoff between one of Silicon Valley’s most prominent artificial intelligence companies and the United States military came to a head this week, with Anthropic CEO Dario Amodei refusing to bend to Pentagon pressure over how its AI technology can be used in national security operations, essentially for AI-guided weapons in war.

The dispute, which was months in the making, erupted into public view as Pentagon officials gave Anthropic until 5:01 pm Eastern Time on Friday (3:31 am IST, Saturday) to drop restrictions on its Claude AI model.
Amodei did not wait for the deadline, replying ahead of it with a firm no. “These threats do not change our position: we cannot in good conscience accede to their request,” he said in a statement.
Pact under pressure: Not about ‘if’, but ‘how much’
Anthropic has not been an unwilling partner to the military so far; it simply maintains red lines over how much of its AI can be used for war and US national security.
In a statement, Amodei noted that his company was “the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers”.
Claude is currently deployed across the Department of Defense and other national security agencies for intelligence analysis, operational planning, cyber operations, and more, Anthropic has noted.
Citing a Chinese spectre
Amodei said the company has taken financial hits to protect American interests, even at the cost of hundreds of millions of dollars in revenue.
“Anthropic has also acted to defend America’s lead in AI, even when it is against the company’s short-term interest. We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party (some of whom have been designated by the Department of War as Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and have advocated for strong export controls on chips to ensure a democratic advantage. Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” the statement read.
Amodei draws two red lines
But two specific uses have never been part of Anthropic’s contracts with the Pentagon, and Amodei says they never should be — mass domestic surveillance and fully autonomous weapons.
On surveillance, Amodei argued that using AI systems to monitor Americans at scale “is incompatible with democratic values”, even if technically legal.
“Under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant,” he wrote, noting that powerful AI makes it possible to assemble that scattered data “into a comprehensive picture of any person’s life — automatically and at massive scale”.
On autonomous weapons, Amodei’s position was more technical than principled.
Partially autonomous weapons, he acknowledged, “are vital to the defense of democracy”.
But fully autonomous systems — those that remove humans entirely from the process of selecting and engaging targets — are simply beyond what current AI can reliably handle. “We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” he said. He added that Anthropic had offered to work with the Pentagon on research and development to improve the reliability of such systems, but the offer had not been accepted.
Pentagon pushes back
Department of Defense officials framed the dispute as a matter of American sovereignty. Pentagon spokesman Sean Parnell posted on social media: “We will not let ANY company dictate the terms regarding how we make operational decisions.”
Parnell insisted the military had no interest in mass surveillance of Americans — “which is illegal” — nor in autonomous weapons operating without human involvement.
But the Pentagon will only contract with AI companies that agree to an “any lawful use” standard, free from restrictions set by the companies themselves, DoD officials have added.
Emil Michael, the under secretary of defense for research and engineering, went further in his response to Amodei’s statement, writing on X that the Anthropic CEO “has a God-complex” and “wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.”
What’s at stake?
At stake is up to $200 million in military contracts, along with other government work, for Anthropic, news agency AP reported. Worse for the company, the Pentagon has threatened to designate Anthropic a “supply chain risk”, a label previously reserved for foreign enemies, which would effectively bar the company from working with other defence contractors too.
Officials also raised the possibility of invoking the Cold War-era Defense Production Act to compel use of Anthropic’s technology without the company’s consent.
Amodei pointed out the contradiction plainly. “These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”
Support for Anthropic’s position came from unexpected corners. Retired US Air Force General Jack Shanahan, who led Project Maven, the Pentagon’s controversial AI drone-targeting initiative, said Anthropic’s red lines were “reasonable”. He added that large language models are “not ready for prime time in national security settings”.
“They’re not trying to play cute here,” he wrote on social media, about Anthropic’s refusal to agree to the Pentagon’s demands.
Tech workers from OpenAI and Google also voiced public support for Amodei in an open letter, warning that the Pentagon was “trying to divide each company with fear that the other will give in”.
Anthropic, for its part, said it hopes the Pentagon reconsiders. If it does not, Amodei pledged a smooth transition to another provider, with Claude remaining available “for as long as required”.
(inputs from AP, AFP, Bloomberg)