Pentagon labels Anthropic a supply-chain risk

March 6, 2026

The US Department of War has issued the first such designation ever applied to an American company, demanding that defence contractors certify they do not use Claude. Anthropic says the action is unlawful and retaliatory.


For months, the conversations between Anthropic and the US Department of War were framed, at least publicly, as a negotiation. The San Francisco AI company wanted written assurances that its Claude models would not be used for mass domestic surveillance of Americans or for fully autonomous weapons with no human involvement in targeting decisions.

The Pentagon wanted unrestricted access to what it saw as a critical technology asset, and no private contractor setting the terms.

On March 5, 2026, that negotiation ended. The Department of War formally informed Anthropic that it and its products had been designated a supply-chain risk, effective immediately.

In doing so, the government applied to an American firm, for the first time in the programme’s history, a designation previously reserved for companies from adversarial nations, most notably China’s Huawei.

“DOW officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately,” a senior department official told Bloomberg News, using the acronym for the Department of War, the name that Defense Secretary Pete Hegseth now favours for the Department of Defense.

The practical consequences are immediate. Under 10 USC 3252, the supply-chain risk designation requires defence vendors and contractors to certify that they do not use Anthropic’s Claude models in their work with the Pentagon.

The provision was written to cut off technology from foreign adversaries, not domestic innovators.

What Anthropic refused, and why

Anthropic’s position was never that the military should not use AI. Its public statement on March 5 was precise about the two specific uses it sought to exclude: “the mass domestic surveillance of Americans and fully autonomous weapons.”

The company argued these were reasonable limits on any AI deployment, not just its own.

The Pentagon’s counter-argument was equally direct: it contended that mass surveillance of Americans is already illegal under existing law, and that fully autonomous weapons are already restricted by internal Defense Department policy.

There was, in the government’s view, no need to enshrine limits in a commercial contract, and Anthropic’s insistence on doing so was read as an attempt to constrain military decision-making through a supplier relationship.

“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” Anthropic said in its public statement.

CEO Dario Amodei confirmed separately that he sees “no choice” but to challenge the designation in court, adding: “We do not believe this action is legally sound.”

Amodei has also indicated that his refusal to publicly praise or financially support President Donald Trump contributed to the deterioration of the relationship with the Pentagon.

The contradiction at the heart of the dispute

Even as the designation was announced, the US military was relying on Claude in its ongoing operations in Iran. According to reporting by CNBC on March 5, Claude is one of the primary tools installed in Palantir’s Maven Smart System, which military operators in the Middle East use to manage data for their operations.

The contradiction of blacklisting a company’s AI while using that same AI in an active conflict was not lost on observers. President Trump directed federal agencies to “immediately cease” all use of Anthropic’s technology, though it remains unclear how that directive would interact with active deployments through third-party contractors like Palantir.

Industry reacts, OpenAI moves in

The response from the broader tech industry was swift and divided. Hundreds of employees at Google and OpenAI signed an open letter urging their companies to support Anthropic in its standoff. Elon Musk, meanwhile, sided with the Trump administration, claiming on his social platform that “Anthropic hates Western Civilization.”

OpenAI, for its part, announced its own deal with the Department of War hours after Anthropic was blacklisted. Under the terms of that agreement, the military can use OpenAI models for “all lawful purposes”, a phrasing that some OpenAI employees have since described as deliberately ambiguous and potentially encompassing exactly the uses Anthropic had sought to prohibit.

Dean Ball, a former Trump White House AI adviser, described the supply-chain risk designation as a “death rattle” of American strategic coherence, arguing that treating a domestic company worse than a foreign adversary reflected “thuggish tribalism” over sound policy.

What happens next?

Anthropic has stated publicly that the designation under 10 USC 3252 can extend only to Claude’s use in Department of War contracts; it cannot, the company argues, affect its commercial customers or other government agencies. The legal challenge, when filed, will test that reading against the government’s interpretation.

Amodei has also indicated, in remarks reported by CBS News, that he is attempting to “de-escalate” and reach “some agreement that works for us and works for them.”

Bloomberg reported that talks had been quietly reopened even as the formal designation was announced. Whether those talks produce anything before a court battle begins is, at the moment of publishing, unknown.

What is certain is the precedent. For the first time, an American AI company built on the argument that safety and usefulness are complementary, that responsible AI is better AI, has been formally classified alongside China’s Huawei by the government it sought to serve.

