TL;DR
More than 580 Google employees, including 20+ directors and VPs as well as senior DeepMind researchers, signed a letter urging CEO Sundar Pichai to refuse classified military AI work for the Pentagon. The letter argues that on air-gapped classified networks, Google cannot monitor how its AI is used, making “trust us” the only guardrail against autonomous weapons and mass surveillance. Google’s workforce won the Project Maven fight in 2018, but the company has since removed weapons language from its AI principles, won a share of the $9B JWCC cloud contract, deployed Gemini to 3 million Pentagon personnel, and is now negotiating classified access under “all lawful uses” terms.
More than 580 Google employees, including over 20 directors, senior directors, and vice presidents, have signed a letter urging CEO Sundar Pichai to refuse classified military AI work for the Pentagon, according to Bloomberg. The letter, which includes senior researchers at Google DeepMind, was sent to Pichai on Monday. “We are Google employees who are deeply concerned about ongoing negotiations between Google and the US Department of Defense,” it reads. “As people working on AI, we know that these systems can centralize power and that they do make mistakes.” The signatories want Google to reject all classified workloads, arguing that on air-gapped classified networks, isolated from the public internet, the company would have no ability to monitor or limit how its AI tools are actually used. “Currently, the only way to guarantee that Google does not become associated with such harms is to reject any classified workloads,” the letter states. “Otherwise, such uses may occur without our knowledge or the power to stop them.”
The history
Google employees have fought this fight before. In 2018, roughly 4,000 workers signed an internal petition and at least 12 resigned over Project Maven, a Pentagon programme that used AI to detect and analyse objects in drone video feeds. The protest forced Google to introduce AI principles pledging not to pursue weapons or surveillance technology, and to let the Maven contract expire in March 2019, at which point Palantir took it over. The contract Google walked away from was worth a few million dollars; Palantir’s Maven business has since grown to $13 billion. The 2018 victory was real, but it was also the last time Google’s workforce successfully constrained the company’s defence ambitions. In the years since, Google has systematically rebuilt every bridge the protest burned.
In December 2022, Google won a share of the Pentagon’s $9 billion Joint Warfighting Cloud Capability contract alongside Amazon, Microsoft, and Oracle. In February 2025, Google removed the passage from its AI principles that pledged to avoid using the technology in “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people” and to avoid “technologies that gather or use information for surveillance violating internationally accepted norms.” A blog post co-authored by Demis Hassabis, CEO of Google DeepMind, cited “a global competition taking place for AI leadership” as justification. Human Rights Watch and Amnesty International both condemned the reversal. In December 2025, the Pentagon launched GenAI.mil, a platform powered by Google’s Gemini chatbot, available to all defence personnel. Defence Secretary Pete Hegseth said “the future of American warfare is here, and it’s spelled AI.” In March 2026, Google deployed Gemini AI agents to the Pentagon’s three-million-strong workforce at the unclassified level, with eight pre-built agents for tasks including summarising meeting notes, building budgets, and checking actions against defence strategy.
The negotiation
The classified deal is the next step. Emil Michael, the under secretary of defence for research and engineering, told Bloomberg in March that the Pentagon would “start with unclassified because that’s where most of the users are, and then we’ll get to classified and top secret.” He confirmed that talks with Google over using Gemini agents on classified cloud infrastructure were already underway. In April, The Information reported that negotiations are progressing toward “all lawful uses” of Google’s AI tools, a phrase that falls short of the red lines Anthropic established before being designated a supply-chain risk by the Pentagon for refusing to remove restrictions on autonomous weapons and domestic mass surveillance. The Pentagon strongly contested Anthropic’s characterisation and argued that commercial companies should not be able to dictate usage policies during wartime or preparations for war.
OpenAI signed its own Pentagon deal hours after the Anthropic blacklisting, with three stated red lines: no mass domestic surveillance, no autonomous weapons, and no high-stakes automated decisions. But how any such red line would be enforced on classified networks is precisely the question Google’s employees are raising. On an air-gapped system, the AI operates in a network that is, by design, disconnected from Google’s infrastructure: Google cannot see what queries are being run, what outputs are being generated, or what decisions are being made with those outputs. A “trust us” assurance from Pentagon leadership is the only mechanism preventing uses that would violate any red line the company might negotiate. Sofia Liguori, an AI research engineer at Google DeepMind in the UK who signed the letter, told Bloomberg that the main response to worker concerns has been to encourage the workforce to trust company leadership to sign good contracts. “But it’s all left very broad,” she said. “Agentic AI is particularly concerning because of the level of independence it can get to. It’s like giving away a very powerful tool at the same time as giving up on any kind of control on its usage.”
The stakes
The Pentagon’s AI budget tells the story of what the classified deal would fund. The fiscal 2026 defence budget included $13.4 billion dedicated to AI and autonomy. The fiscal 2027 request, submitted in April, asks for $54.6 billion for the Defence Autonomous Warfare Group, a 24,000% increase over the prior year, within a total defence budget of $1.5 trillion that represents a 42% year-over-year increase. The Pentagon is already testing humanoid robot soldiers with Foundation Future Industries and has formalised Palantir’s Maven as a core military system with multi-year funding. The scale of military AI investment has moved from the experimental phase that characterised Project Maven in 2018 to an industrial buildout that treats AI as a foundational capability of the American military. The classified workloads that Google’s employees are objecting to would sit at the centre of that buildout.
The letter’s organisers said “Maven is not over. Workers are going to continue organizing against the weaponization of Google’s AI technology until the company draws clear, enforceable lines.” The framing is significant. In 2018, the fight was about one contract for one programme. In 2026, the fight is about whether Google’s entire AI stack (Gemini, DeepMind’s research, the TPU chips that power inference) becomes military infrastructure on classified networks where no one outside the Pentagon can see what it does. The paradox of the administration blacklisting Anthropic while urging banks to adopt its AI illustrates the political environment: companies that resist unconstrained military use face designation as supply-chain risks, while companies that comply receive contracts worth billions. Google’s employees are asking Pichai to refuse a deal that the Pentagon has made clear it will punish them for refusing, at a moment when the company has spent three years rebuilding its defence credentials precisely to win that deal.
The gap
The 580 signatures are notable for their seniority. Twenty directors, senior directors, and vice presidents have signed, along with senior DeepMind researchers. Two-thirds of signatories agreed to be named; a third requested anonymity for fear of retaliation. An earlier cross-company letter in February, signed by approximately 800 Google employees and 100 OpenAI employees, expressed support for Anthropic’s stance against unrestricted military AI use. Over 100 DeepMind employees separately signed an internal letter demanding that no DeepMind research or models be used for weapons development or autonomous targeting. Google’s chief scientist Jeff Dean wrote on X that “mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression.” The internal dissent is not marginal. It extends into the technical leadership that builds the systems the Pentagon wants to deploy.
But the gap between internal dissent and corporate decision-making has widened since 2018. In 2018, 4,000 signatures and a dozen resignations were enough to kill a contract worth a few million dollars. In 2026, 580 signatures face a classified AI market worth tens of billions, a Pentagon that has shown it will retaliate against companies that refuse its terms, a company that has already removed its own red lines, and a CEO who approved the Gemini deployment to three million Pentagon personnel without, according to the letter’s organisers, discussing concrete usage restrictions with the workforce that built it. Trump has signalled openness to a Pentagon deal with Anthropic if the company drops its restrictions, suggesting that the administration views compliance as the eventual destination for every AI company, regardless of where it starts. Google’s employees are asking their company to be the exception. The company’s trajectory over the past three years suggests it intends to be the rule.