TL;DR
Google signed a classified AI deal with the Pentagon for “any lawful government purpose” one day after 580+ employees urged Pichai to refuse. The contract includes advisory guardrails (no mass surveillance, no autonomous weapons without human oversight), but the government can request adjustments to safety settings. On the same day, Bloomberg revealed that Google quietly dropped out of a $100M drone swarm contest in February after an internal ethics review. Google is drawing a line between selling general-purpose AI access and building specific weapons, but on classified networks, the distinction may be meaningless.
Google has signed a deal allowing the Pentagon to use its Gemini AI models for classified military work under terms that permit “any lawful government purpose,” the company confirmed on Tuesday, one day after more than 580 Google employees signed a letter urging CEO Sundar Pichai to refuse exactly this kind of arrangement. The agreement provides the Department of Defence with API access to Google’s AI systems on classified networks, extending a relationship that already includes Gemini deployment to three million Pentagon personnel on unclassified systems. The contract includes language stating that “the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control.”

On the same day, Bloomberg separately reported that Google had quietly dropped out of a $100 million Pentagon prize challenge to create technology for voice-controlled autonomous drone swarms, withdrawing in February after an internal ethics review despite having advanced in the competition. The company officially cited a lack of “resourcing.” Google is drawing a line, but it is not the line its employees asked for.
The deal
The classified AI agreement is structured as an extension of Google’s existing Pentagon contract, providing API access rather than custom model development or bespoke military applications. A Google Public Sector representative confirmed the arrangement. The Pentagon can connect directly to Google’s software on classified networks, the air-gapped systems isolated from the public internet that handle mission planning, intelligence analysis, and weapons targeting. The “any lawful government purpose” language places Google alongside OpenAI and Elon Musk’s xAI, both of which have signed their own classified AI agreements with the Pentagon. The government can request adjustments to Google’s AI safety settings and content filters, a provision that effectively gives the Pentagon the ability to modify the guardrails that Google’s own researchers built into the models.
The nominal restrictions (no mass surveillance, no autonomous weapons without human oversight) echo the red lines that OpenAI negotiated in its own Pentagon deal. But the enforcement mechanism is the same one that Google’s employees identified as insufficient in their letter: on air-gapped classified networks, Google cannot see what queries are being run, what outputs are being generated, or what decisions are being made with those outputs. The “should not be used for” language is advisory, not a contractual prohibition, and “appropriate human oversight and control” is undefined. The employees wrote that “the only way to guarantee that Google does not become associated with such harms is to reject any classified workloads.” Google chose to accept those workloads, paired with language the employees had already argued was unenforceable. Pichai opened Cloud Next 2026 touting 750 million Gemini users and a $240 billion backlog. The same Gemini infrastructure that serves those users is now being extended to classified military networks where no one outside the Pentagon can monitor its use.
The withdrawal
The drone swarm exit is the other half of the story. Google had advanced in a $100 million Pentagon prize challenge to create technology that would let commanders direct autonomous drone swarms by voice, converting spoken commands like “left” into digital instructions sent to the drones. The company notified the government on February 11, 2026, that it would not participate further. Officially, Google cited a lack of resourcing. According to records reviewed by Bloomberg, the decision followed an internal ethics review. The withdrawal echoes Project Maven in 2018, when roughly 4,000 Google employees signed a petition against the company’s AI analysis of drone video feeds and Google let the contract expire. Palantir took it over. The original Maven contract was worth a few million dollars; Palantir’s Maven business has since grown to $13 billion.
The juxtaposition is revealing. On the same day that Google confirmed a deal giving the Pentagon classified access to Gemini for “any lawful government purpose,” the company also revealed it had walked away from a programme that would have used its AI to control autonomous drone swarms. Google is willing to put its most powerful AI models on classified networks where it cannot monitor their use, but it is not willing to build voice-controlled drone swarms. The distinction matters to Google’s internal ethics apparatus: API access to general-purpose models is one step removed from weapons applications, even if those models run on networks that handle weapons targeting, whereas technology built specifically to command drone swarms is a direct weapons application that the ethics review could not approve. The line Google is drawing is between providing the tools and building the weapons, between selling access and designing lethality. Whether that distinction means anything on a classified network where the tools can be applied to any lawful purpose, including the purposes the drone swarm programme was designed to serve, is the question the employees’ letter had already answered.
The pattern
Google’s trajectory from Project Maven in 2018 to the classified Gemini deal in 2026 follows a pattern that the employees’ letter described as systematic. In 2018, Google introduced AI principles pledging not to pursue weapons or surveillance technology. In December 2022, Google won a share of the Pentagon’s $9 billion Joint Warfighting Cloud Capability contract. In February 2025, Google removed the passage from its principles that excluded weapons and surveillance, citing “a global competition taking place for AI leadership.” In December 2025, the Pentagon launched GenAI.mil, powered by Google’s Gemini chatbot. In March 2026, Google deployed Gemini AI agents to the Pentagon’s three-million-strong workforce on unclassified systems. In April 2026, Google extended that access to classified networks. Each step was individually defensible. The trajectory is not.
Google is simultaneously investing up to $40 billion in Anthropic, the company the Trump administration designated a supply-chain risk and blacklisted for refusing to remove restrictions on autonomous weapons and mass surveillance from its Pentagon contract. Google is funding the company that refused what Google just accepted, while deploying the models that Anthropic’s restrictions were designed to constrain. Europe’s defence tech sector is building its own military AI capabilities independently, with purpose-specific applications like Helsing’s AI submarine that encode their use case in the design itself rather than leaving it to the user on a classified network. The European approach builds the restriction into the technology; the American approach builds the technology and adds advisory language that the customer can modify. The Pentagon’s fiscal 2027 budget request includes $54.6 billion for the Defence Autonomous Warfare Group within a total defence budget of $1.5 trillion. The classified workloads that Google’s employees objected to sit at the centre of that investment.
The 580-plus signatures included more than 20 directors, senior directors, and vice presidents, along with senior DeepMind researchers. Two-thirds agreed to be named; a third requested anonymity for fear of retaliation. The letter’s organisers said “Maven is not over. Workers are going to continue organizing against the weaponization of Google’s AI technology until the company draws clear, enforceable lines.” Google drew a line on Tuesday. It drew it between classified AI access and autonomous drone swarms, between selling general-purpose models to the Pentagon and building specific weapons applications. The employees asked for the line to be drawn at classified work itself. Google chose to draw it where the optics of weapons development become undeniable, not where the potential for misuse begins. Google released Gemma 4 under an open Apache 2.0 licence three weeks ago, making its research models freely available to anyone. Its frontier Gemini models now sit behind classified military networks where no external oversight is possible. The company that champions open AI research in public is locking its most powerful technology behind air-gapped walls in private. The employees who signed the letter understood this contradiction before the deal was signed. Google signed it anyway.


