Google removes restrictions on military AI development

Google removed a section from its AI principles that previously committed to avoiding the use of artificial intelligence for developing weapons and surveillance technologies. The change comes as global AI competition intensifies and as the Trump administration pushes for stronger collaboration between tech companies and the defense sector.

Tal Shahaf
Google has quietly removed a key section from its artificial intelligence (AI) principles, eliminating language that explicitly pledged not to develop AI technologies that could cause harm, including weapons systems, Bloomberg reported Tuesday.
Google’s AI principles, previously listed on its website, included a section titled "AI Applications We Will Not Pursue." It stated that the company would not develop "technologies that cause or are likely to cause overall harm," such as weapons, surveillance tools or applications that violate international law and human rights. That language has now disappeared from the page.
Google CEO Sundar Pichai (Photo: AP/Jeff Chiu)
Google declined to comment directly on the change but pointed to a blog post published Tuesday by Demis Hassabis, head of Google DeepMind, and James Manyika, the company’s senior vice president for technology and society.
"Google is updating its AI principles because the technology has become far more widespread, and companies in democratic nations must serve government and national security needs," the post read.
Hassabis and Manyika highlighted the global competition in AI development and the geopolitical factors at play. "We believe democracies should lead in AI development, guided by core values such as freedom, equality and respect for human rights," they wrote.
The executives further stated: "Companies, governments, and organizations that share these values must work together to create AI that protects people, promotes global growth, and supports national security. However, we remain committed to minimizing unintended or harmful outcomes, avoiding unfair bias, and adhering to established international legal and human rights principles."
The policy shift aligns Google with a broader trend in the AI industry, where leading tech firms are increasingly working with the defense sector.
Pentagon official Dr. Radha Plumb told TechCrunch that some companies' AI tools provide the U.S. Department of Defense with a major advantage in threat detection, tracking and assessment — capabilities that enhance what the military calls the "kill chain." This term refers to the process of identifying, tracking and eliminating threats using sensors and weapons platforms. According to Plumb, AI is proving invaluable in planning and strategizing within this framework.
Donald Trump announcing investment in artificial intelligence infrastructure (Photo: Jim Watson/AFP)

A broader industry shift

Hassabis, who co-founded DeepMind and has led it since, joined Google in 2014 when the company acquired the AI startup. Notably, in a 2015 interview he said that, under the terms of the acquisition agreement, DeepMind's technology would never be used for military or surveillance purposes.
Google is not alone in pivoting toward AI-driven defense applications. Last year, OpenAI disclosed its collaboration with military-tech company Anduril to develop AI tools for the Pentagon. OpenAI CEO Sam Altman has also been advising U.S. government officials on AI developments. Meanwhile, AI startup Anthropic — known for its chatbot "Claude" and its commitment to ethical AI — recently announced a partnership with defense contractor Palantir. However, Anthropic’s policies still prohibit using its AI models to develop or enhance "systems designed to cause harm or loss of human life."
Other tech giants, including Meta, Microsoft and Amazon, also maintain partnerships with the Pentagon.
Beyond technological developments, the political climate in Washington is driving a broader ideological shift in Silicon Valley. With Trump's election signaling a rejection of progressive policies on gender, diversity and inclusion, the tech industry's past commitments to human rights and ethical AI development appear to be fading. Instead, the emphasis is shifting toward aligning with national security and U.S. geopolitical interests.
Google employees protesting against the company in San Francisco in August (Photo: Reuters)
Google first introduced its AI principles in 2018 following internal backlash over its participation in the Pentagon’s Project Maven. That project used Google’s AI-powered image recognition technology to analyze drone footage. Thousands of Google employees signed an open letter to then-CEO Sundar Pichai, arguing: "We believe Google should not be in the business of war." As a result, Google chose not to renew its contract with the Department of Defense.
Similarly, Google employees in the U.S. staged protests against the company’s involvement in Israel’s "Project Nimbus," a cloud computing initiative for the Israeli government. Protesters called for an end to Google’s collaboration, arguing that the technology was being used to harm Palestinians.
Recently, The Washington Post reported that Google provided AI tools to Israel’s Defense Ministry and the IDF following the October 7 Hamas attack. When asked whether the AI policy change in the U.S. would affect Google’s work with the Israeli government or military, Google Israel declined to comment.
The evolving role of AI in warfare has sparked an ongoing debate in the U.S. over whether autonomous weapons systems should be allowed to make firing decisions without human intervention. Reports suggest that some U.S. military systems already incorporate AI capable of independent decision-making.
The IDF has repeatedly stated that while AI assists in intelligence and operational recommendations, humans always make the final decisions in combat situations.