Google drops pledge on AI use for weapons
Alphabet, the parent company of Google, has revised its principles on the use of artificial intelligence (AI), removing a previous commitment to avoid applications such as weapon development and surveillance tools. This change reflects a shift in its stance on the ethical boundaries of AI deployment.
In a blog post, Google Senior Vice President James Manyika and Demis Hassabis, head of the AI lab Google DeepMind, justified the update. They emphasized the importance of collaboration between businesses and democratic governments to ensure AI supports national security efforts.
The move has reignited debates among AI professionals and ethicists regarding the governance of AI, balancing commercial interests with ethical considerations, and safeguarding against risks to humanity. Concerns about AI’s role in military applications and surveillance remain particularly contentious.
Alphabet explained that the updates were necessary to align with the rapid evolution of AI technology since the original principles were published in 2018.
“Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications.
“It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself,” the blog post said.
As a result, the company said, baseline AI principles were also being developed, which could guide common strategies.
However, Mr Hassabis and Mr Manyika said the geopolitical landscape was becoming increasingly complex.
“We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights,” the blog post said.
“And we believe that companies, governments and organisations sharing these values should work together to create AI that protects people, promotes global growth and supports national security.”
Alphabet’s updated AI principles were announced shortly before its year-end financial report, which showed results below market expectations and led to a drop in its share price. The decline came despite a 10% rise in digital advertising revenue, its largest income source, driven by U.S. election-related spending.
In its earnings report, Alphabet revealed plans to invest $75 billion in AI projects this year, exceeding Wall Street analysts’ expectations by 29%. The funding will focus on AI infrastructure, research, and applications, including AI-powered search tools. One example is Google’s AI platform, Gemini, which now provides AI-generated summaries at the top of search results and is integrated into Google Pixel phones.
This announcement revives discussions about Alphabet’s evolving stance on ethical practices. Originally guided by the motto “Don’t be evil,” introduced by Google founders Sergey Brin and Larry Page, the company shifted to “Do the right thing” after restructuring as Alphabet Inc. in 2015.
In 2018, internal employee pushback led to Google’s decision not to renew its “Project Maven” contract with the U.S. Department of Defense. Thousands of employees signed a petition expressing concerns about the potential for AI to be used in lethal military applications, citing fears that it marked the first step toward weaponizing artificial intelligence.