Google’s new AI guidelines have dropped the company’s pledge not to use AI for weapons or surveillance. Some employees are responding on the company’s internal message board. Google said it is important for businesses and governments to work together on “national security.”
After Google retracted its promise not to use artificial intelligence for weapons and surveillance, some employees posted responses on the company’s internal message board.
The company said Tuesday that it had updated its AI ethics guidelines, which explain how Google will and won’t use the technology. In the new version, Google removed wording pledging that it would not use AI to build weapons, surveillance tools, or technologies “that cause or are likely to cause overall harm.”
Several Google employees expressed disappointment at the changes on Memegen, the company’s internal message board, according to posts shared with Business Insider.
One meme showed CEO Sundar Pichai querying Google’s search engine for “how to become an arms contractor.”
Another employee posted a popular meme featuring an actor dressed as a Nazi soldier in a TV comedy sketch, pairing Google’s removal of its ban on using AI for weapons and surveillance with the line “Are we the baddies?”
In another post, a meme of Sheldon from “The Big Bang Theory” asks why Google would drop its red lines around weapons, then notes that Google is working more closely with defense customers, including the Pentagon, and concludes, “That’s the reason.”
The three memes were among the most upvoted by employees as of Wednesday. Still, they were shared by only a handful of Google’s more than 180,000 employees, and the comments reflect only a portion of the workforce. Some Googlers may support tech companies working more closely with defense clients and the US government.
In recent years, some tech companies and startups have shifted toward providing more technology, including AI tools, for defense purposes.
Google did not directly address the removal of the wording, but Google DeepMind CEO Demis Hassabis and James Manyika, senior vice president of technology and society, wrote Tuesday that an “increasingly complex geopolitical landscape” makes it important for businesses and governments to work together on “national security.”
“We believe democracies should lead in AI development, guided by core values such as freedom, equality, and respect for human rights,” they wrote. “And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
When reached for comment, a Google spokesperson directed Business Insider to the company’s Tuesday blog post.
In 2018, Google employees protested the company’s work on a Pentagon program that used Google’s AI for warfare. The company walked away from the contract and laid out its AI principles, including examples of applications it would not pursue, explicitly naming weapons and surveillance tools.
The 2018 blog post laying out those principles now includes a link pointing users to the updated guidelines.
Google’s decision to draw a red line around weapons kept it out of some military deals signed by other tech giants, including Amazon and Microsoft. AI has advanced dramatically since 2018, and the US is now competing with China and other countries for technological advantage.
Are you a current or former Google employee? You can contact this reporter using the secure messaging app Signal (+1 628-228-1836) or secure email (hlangley@protonmail.com). We can keep you anonymous.