A significant power play is unfolding between the U.S. Department of Defense (DoD) and major AI labs. The Pentagon is now actively negotiating with Google and OpenAI to adopt a broad 'all lawful purposes' standard for military AI use, a proposal that rival AI company Anthropic has firmly rejected.
This situation didn't emerge overnight; it's the result of a clear causal chain. First, the primary catalyst was Anthropic's public refusal of the DoD's terms. Anthropic drew a line in the sand, specifically prohibiting its AI from being used for domestic mass surveillance or fully autonomous weapons. This ethical stance created a void the DoD urgently needed to fill to keep its AI modernization efforts on track. The Pentagon could not simply accept such restrictions, since its own policy framework, notably Directive 3000.09, permits autonomous weapons systems provided they allow 'appropriate levels of human judgment over the use of force.'
Second, with Anthropic out of the picture for this broad use case, the DoD pivoted. It intensified talks with Google and OpenAI to secure the very terms Anthropic refused. This move was bolstered when Elon Musk's xAI agreed to a similar 'all lawful use' standard for its Grok model on classified systems. xAI's compliance provided the DoD with a credible alternative and significant leverage, effectively telling Google and OpenAI, 'If you don't agree, we have other options.'
Third, this pressure campaign faces a major hurdle: internal dissent. Employees at both Google and OpenAI have issued an open letter opposing the deal, echoing the 'Project Maven' revolt at Google in 2018, when employee outrage pushed Google to walk away from a Pentagon AI contract. That history of workforce activism presents a real risk for management: pushing a deal through could damage morale, trigger resignations, and create a public relations crisis. The financial incentive is minimal, since any such contract would be a tiny fraction of Google's revenue, but the strategic stakes for leadership in the AI arms race are immense.
- All lawful purposes: A contractual term permitting a technology to be used for any purpose not explicitly prohibited by law, far broader than a contract that carves out specific ethical restrictions.
- Project Maven: A 2018 U.S. Department of Defense project that used AI to analyze drone footage. It sparked a major employee protest at Google, leading the company to withdraw from the project and develop AI ethics principles.
- Defense Production Act (DPA): A U.S. federal law that allows the President to require businesses to accept and prioritize contracts for materials deemed necessary for national defense.