Pentagon Weighs Google's Gemini AI for Military Use After Anthropic Fallout

Google is in talks with the Pentagon about deploying its Gemini artificial intelligence (AI) system for classified work.

The discussions, first reported by Reuters, come as the U.S. military continues to reassess not only how it uses artificial intelligence, but which private‑sector partners it is willing to trust with some of its most sensitive data and missions.

Newsweek reached out to the Pentagon and Google by email on Thursday morning for comment.

Why It Matters

The Pentagon has increasingly framed AI as a strategic necessity, particularly as senior defense officials warn that rivals such as China are racing ahead in advanced technologies. The department's recent history, however, shows that enthusiasm for AI has been tempered by political pressure, ethical debates, and rapid shifts between vendors.

Those tensions have been on display in the Pentagon's expanding but uneven embrace of generative AI tools. In recent months, the Defense Department has granted different forms of access to multiple companies, reflecting a desire to avoid reliance on a single provider.

Anthropic's Claude model has been used by the U.S. military in carefully controlled settings, including for data analysis and decision‑support functions rather than autonomous combat roles. Pentagon officials have stressed that such systems are meant to assist human operators, not replace them, especially on the battlefield.

Worries about how reliable AI is in high-stakes operations come alongside concerns over how President Donald Trump and his administration have wielded the military, deploying soldiers in American cities and killing at least 150 people in a highly criticized strike campaign against alleged drug vessels in the Caribbean and eastern Pacific.

How This Affects Google

The reported conversations around Gemini mark a notable moment for Google, which for years has faced internal and external scrutiny over its work with the U.S. military.

Google has a chance to avoid the pitfalls of its competitors, which faced public backlash for failing to include appropriate guardrails on military use. According to Reuters, the company has taken this into account by proposing contract language with the Defense Department that would bar Gemini from being used for domestic mass surveillance or for autonomous weapons lacking appropriate human control.

For Google, engagement with the Pentagon would mark a deeper re‑entry into defense work after years of internal debate. For the military, it would be another step in a fast‑moving, complex effort to harness artificial intelligence without losing control of how it is used.

However, Google also faces complications in deploying its AI. A report released this week found that AI search powered by Gemini returned incorrect results roughly 9 percent of the time, which the tech news outlet Futurism argued could amount to a misinformation crisis given the immense volume of search requests, more than five trillion per year.

Google, however, contested the findings, with the company’s spokesperson Ned Adriance previously telling Newsweek, “This study has serious holes.”

The Military and Google Gemini, Anthropic Claude and OpenAI

Defense officials have repeatedly warned that failing to integrate advanced AI could leave the U.S. military at a disadvantage. Pentagon leaders increasingly frame partnerships with firms such as Google as part of a broader effort to ensure the United States maintains technological superiority, even as they insist on guardrails to prevent misuse and unintended escalation.

If a deal with Google proceeds, it would add Gemini to a growing ecosystem of AI systems being tested or deployed across the Defense Department. Rather than signaling a definitive choice, the talks suggest an attempt to diversify vendors and capabilities, hedging against technical, political or ethical risks associated with any single platform.

Anthropic's rift with the Pentagon followed a Defense Department request to remove limits on how the company's AI, Claude, could be used by the military; the Pentagon ultimately parted ways with Anthropic after disagreements over how to proceed. The U.S. used Claude as part of the daring operation to capture then-Venezuelan leader Nicolás Maduro from his Caracas compound in early January, The Wall Street Journal first reported.

With the Anthropic deal falling through, and the Pentagon announcing a six-month phase-out of all Anthropic products from its systems, OpenAI tried to fill the gap but faced immediate and severe backlash, later admitting its deal with the U.S. government was "opportunistic and sloppy."

Google, however, has looked to expand its government ties, and the Pentagon's push to diversify its AI vendors gives the company an opening to do so.

Newsweek's reporters and editors used Martyn, our AI assistant, to help produce this story.

2026 NEWSWEEK DIGITAL LLC.

This story was originally published April 16, 2026 at 12:19 PM.
