Should decisions regarding military technology be made by public officials or private companies? This question stands at the heart of a new legal fight between the Pentagon and the frontier AI company Anthropic.

In a democratic republic, it would seem obvious that only those accountable to the American people should wield this power. That principle is now being tested.

At issue is a dispute between the Department of War and Anthropic, the maker of the AI system Claude. After securing a $200 million contract to integrate its technology into sensitive military systems, Anthropic is pushing back on how that technology can be used and demanding limits on its deployment, citing internal policies and ethical concerns.

That may sound reasonable at first glance. After all, this sort of corporate social responsibility would seem to be an example of a CEO caring about more than just profits. However, this case isn’t about forcing a company to serve a powerful government or even endorse every military decision. It’s about something far more fundamental: whether a private company that has chosen to embed itself in national security operations can reserve the sole authority to override lawful government use. In the American system, that answer must be no.

According to reports, Anthropic objected to its AI allegedly being used to help plan the operation targeting Venezuelan strongman Nicolás Maduro. The company argued that such use could violate its terms of service, terms that include vague prohibitions on things like “mass surveillance” or certain military applications.

Those categories sound serious, but they are also ill defined. What qualifies as "mass surveillance"? What counts as a prohibited military use in a fast-moving conflict? And which should govern this activity: U.S. law, or Silicon Valley corporate culture? If the answer is "the company," then a private firm is effectively shaping the armed forces' operational decisions. That flips the military chain of command on its head.

The Pentagon's response was swift. Citing national security concerns, it took action to remove Anthropic's system from sensitive networks and designated the company a potential supply chain risk. A federal appeals court has already signaled that the government's interest in maintaining control during an active conflict outweighs the company's financial concerns. That's the right judgment, because this fight is about far more than one contract. It's about who ultimately controls the most powerful technologies shaping modern warfare.

AI companies like Anthropic have been clear about their ambitions. They argue their systems will transform everything in modern life, from economics and science to labor and, yes, even war. They seek government contracts, public trust, regulatory flexibility, and massive infrastructure buildouts to support that vision. Yet they still demand a veto over public policy.

It is notable, then, that Anthropic's leadership has publicly warned that its own technology could pose existential risks, even suggesting a non-trivial chance of catastrophic outcomes. Its ostensibly philanthropic CEO, Dario Amodei, has publicly worried that his company could create "the single most serious national security threat we've faced in a century, possibly ever." He believes that the systems his company has developed could end up "taking over the world." Those concerns deserve serious attention. But a company racing to build world-shaping technology cannot simultaneously insist on the opposite whenever that claim serves as a legal defense at trial.

To be clear, there are real and important debates to be had about the use of AI in military operations. Concerns about safeguards, oversight, and the ethical use of emerging technologies are all legitimate. But those debates belong in Congress, in the public arena, within the chain of command, and ultimately governed by federal law. They do not belong in the fine print of a private company’s terms of service.

American companies benefit from American markets, American law, American infrastructure, and American protection. Supporting lawful national defense missions, especially after voluntarily entering that space, is not a radical demand. It is a baseline responsibility. And rights come with obligations. What these Big Tech companies cannot do is claim all the benefits of being "essential" while disclaiming the responsibilities that come with that status.

If the government determines that a contractor is not reliable for mission-critical national security work, it has both the right and the obligation to act. That is not retaliation. It is risk management.

The bottom line is straightforward.

Private companies can choose whether to enter the national security arena. But once they do, by taking government contracts, integrating into operations, and partnering on the future of American power, they cannot claim a unilateral veto over how that power is exercised.

National security and American military operations are too important to be outsourced, not just in execution, but in authority. And in the United States, that authority rests with the government, not multinational corporations.

Joel Thayer serves as a senior fellow for AI & Emerging Technology at the America First Policy Institute, and Gina D’Andrea serves as general counsel at the America First Policy Institute.