Anthropic Sues Trump Administration After Pentagon Labels Company a “Supply Chain Risk”

Sam Altman, CEO of OpenAI; Dario Amodei, CEO of Anthropic.

On March 4, the U.S. Department of Defense (DoD) declared Anthropic and its products a “supply chain risk,” barring defense contractors from using the company’s models, including the large language model (LLM) Claude, in their work with the Pentagon. The decision follows an escalating dispute between Anthropic and the federal government over how the company’s artificial intelligence models may be used.

Anthropic is an artificial intelligence company founded in 2021 by former researchers from OpenAI. The company develops large language models, such as Claude, designed for reasoning, writing, and data analysis. These tools have drawn interest from federal agencies exploring how AI can improve U.S. defense operations. Anthropic wants its technology to remain off-limits for autonomous weaponry and mass-surveillance operations, restrictions the federal government has declined to guarantee. The company also wants to limit the use of its models in operational decision-making, which it argues is the role of military personnel and should not be delegated to artificial intelligence. The DoD, however, sought unrestricted access to all of Anthropic’s models for any lawful purpose.

Tensions escalated on February 27, when Defense Secretary Pete Hegseth posted on X that he was urging the DoD to designate Anthropic a supply chain risk. The designation became official on March 4, making Anthropic the first American company to receive a label typically reserved for foreign adversaries. The government took similar action against the Chinese tech giant Huawei in 2018, citing national security concerns about the company’s telecommunications equipment and potential ties to the Chinese government. Legislation passed by Congress effectively barred U.S. companies from using Huawei equipment in federal contracts and limited the company’s access to American markets. Still, Anthropic’s case marks the first instance of the U.S. government designating a domestic company as a supply chain risk.

In response to the DoD’s action, Anthropic filed lawsuits against the federal government in the U.S. District Court for the Northern District of California and the U.S. Court of Appeals for the D.C. Circuit, arguing that the government’s actions are “unprecedented and unlawful.” Anthropic says the White House’s messaging and the designation itself already “jeopardize hundreds of millions of dollars” in revenue.

Meanwhile, OpenAI signed a new agreement with the DoD that placed few restrictions on how its models could be used in defense projects. The partnership enabled the Pentagon to integrate OpenAI’s artificial intelligence tools into military operations, including data analysis, logistics planning, and other defense-related applications. OpenAI later revised the agreement to introduce clearer limits on military use. The added “red lines,” or use restrictions, primarily limit domestic surveillance. The revision came after criticism from AI-safety advocacy groups and the general public, who raised concerns that unrestricted access to AI could accelerate the development of autonomous warfare.

About the Author

Amelia Cole
Amelia Cole is a sophomore at IMSA from Yorkville, IL. She is particularly interested in the intersection of engineering and healthcare. When she's not writing for the Acronym, Amelia can usually be found CAD modeling, playing tennis, or updating her Notion page—which she'll admit she's a little too obsessed with.
