Blog - Creative Collaboration

Pentagon signs classified AI deals with Nvidia, Microsoft, and AWS after ejecting Anthropic over safety limits

May 1, 2026

TL;DR

The Pentagon signed classified AI agreements with Nvidia, Microsoft, AWS, and Reflection AI, bringing the total to seven companies (alongside SpaceX, OpenAI, and Google) operating on secret military networks under “lawful operational use” terms. The phrase deliberately replaces the safety restrictions Anthropic insisted on, which led to the company’s ejection from the Pentagon’s supply chain. The message: any AI company that sets limits on military use will be replaced by one that does not.

The Pentagon announced on May 1 that it has signed agreements with Nvidia, Microsoft, Amazon Web Services, and Reflection AI for expanded use of advanced artificial intelligence on classified military networks. The deals bring the total number of companies with such agreements to seven, following similar arrangements with SpaceX, OpenAI, and Google, which signed its own classified AI deal earlier this week. All seven agreements permit “lawful operational use,” a phrase that the Defense Department statement describes as enabling the transformation “toward establishing the United States military as an AI-first fighting force.”

The phrase is not accidental. It is a deliberate replacement for the restrictions that Anthropic, the company behind Claude, attempted to impose on military use of its technology. Anthropic’s refusal to remove those restrictions led to its ejection from the Pentagon’s supply chain. The seven companies that remain have agreed to terms that Anthropic would not.

The terms

The distinction matters because it defines what “classified military AI” means in practice. Anthropic’s position, before the Pentagon designated it a supply chain risk in February, was that it would not permit its models to be used for mass domestic surveillance of American citizens or for fully autonomous weapons systems. These were not vague principles. They were contractual red lines that Anthropic insisted on including in its Pentagon agreement, which was worth $200 million and had been awarded in July 2025. The Pentagon refused to accept the restrictions during renegotiations in late 2025 and early 2026, and when Anthropic held firm, the Defense Department moved to eject the company entirely and replace it with competitors willing to sign broader terms.

“Lawful operational use” is the result: a formulation expansive enough to cover targeting assistance, intelligence synthesis, and operational planning on secret and top-secret networks, without the specific prohibitions Anthropic sought. The new agreements give the Pentagon “wide leeway to potentially use powerful advanced AI technologies for secret combat operations, including to assist with targeting,” according to defence officials briefed on the matter. The Pentagon negotiated its deal with AWS late into Thursday evening, suggesting urgency in assembling the full set of agreements. An AWS spokesperson, asked to comment on the deal, referred to the Defense Department as “the Department of War,” its pre-1947 name, and said AWS “looks forward to continuing to support” its modernisation efforts.

The companies

The seven companies now operating on classified Pentagon networks represent the near-entirety of the American AI industry’s infrastructure layer. Nvidia provides the chips. Microsoft and AWS provide the cloud infrastructure. Google provides Gemini. OpenAI provides GPT. SpaceX provides satellite communications and, following its acquisition of xAI, AI models trained on data from X. Smaller defence-focused AI firms are also building for sovereign military applications, but the Pentagon’s priority is clearly the largest providers. Reflection AI, a less well-known company among the seven, builds AI specifically for classified and intelligence community applications.

The breadth of the arrangement is the point. Defence officials have said they are seeking to ensure the US military “avoids depending on any one single company or set of limitations,” a formulation that is itself a reference to the Anthropic fallout. The Pentagon does not want to be in a position where a single AI company’s ethical red lines can constrain military operations. The solution is diversification across seven providers, all of whom have agreed to terms that do not include the restrictions Anthropic insisted upon. The “AI-first fighting force” that the Pentagon envisions requires AI that is available for any lawful purpose the military defines, without prior constraints imposed by the companies that build it.

The exile

The Anthropic story runs in the opposite direction. The company was designated a supply chain risk, a label previously reserved for Chinese companies such as Huawei and ZTE. Its $200 million Pentagon contract was effectively voided. Senior defence officials publicly criticised the company, and the Trump administration has since expanded the dispute to include opposition to Anthropic’s Mythos model and restrictions on its deployment in government systems. The commercial consequences, so far, have been negligible. Anthropic’s valuation has risen to approximately $900 billion, up from $380 billion in February. Its largest compute deal, with Google and Broadcom, dwarfs the Pentagon contract it lost. The company’s revenue run rate is approximately $30 billion. Being ejected from the Pentagon’s classified networks has not, at least in the short term, damaged Anthropic’s business.

What it has done is establish a precedent. Any AI company that sets specific limits on military use of its technology will be replaced by one that does not. The Pentagon’s message, delivered through seven simultaneous agreements with competitors, is that the Department of Defense will not negotiate the scope of military AI use with the companies that build it. “Lawful operational use” means the military decides what is lawful and what is operational. The companies provide the technology. The question of whether AI should assist with targeting, or whether fully autonomous systems should make lethal decisions, is not one the Pentagon intends to resolve through commercial contracts. It is one the Pentagon intends to resolve by selecting vendors who do not ask it.

The trajectory

The practical implications are significant. AI deployed on Impact Level 6 and Impact Level 7 classified networks will be used for intelligence analysis, operational planning, and the synthesis of data from sources that are themselves classified. The Pentagon’s statement says these tools will “streamline data synthesis, elevate situational understanding, and augment warfighter decision-making in complex operational environments.” In less bureaucratic language: AI will help analysts process intelligence faster, help commanders understand battlefields in closer to real time, and help targeting teams identify and prioritise objectives. SpaceX’s expanding AI capabilities, acquired through its merger with xAI, add a dimension that did not exist when the Pentagon first began negotiating these deals: a satellite communications company that also builds AI models, operating on the same classified networks that process targeting data.

The speed of the Pentagon’s pivot is itself a statement. Five months ago, Anthropic held a $200 million contract and was the most prominent AI company working on classified military systems. Today, seven competitors have signed agreements that collectively render Anthropic’s military contribution replaceable. The Pentagon has answered a question that the AI industry has been debating since the first Google employee protested Project Maven in 2018: whether the companies that build the most powerful AI systems will have a say in how those systems are used by the military. The answer, delivered across seven contracts in a single week, is no.
