In a surprising turn of events, Alphabet, Google's parent company, has dropped its pledge not to use AI for the development of surveillance tools and weapons. Google says it has updated its ethical guidelines for AI because the technology has the potential to protect national security. The previous policy listed categories of applications the company would not pursue, including “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”. Why the sudden change?
AI Development Should Be Led By Democracies

According to a blog post from Google, global competition for leadership of the AI industry continues to intensify, and the current geopolitical landscape is making it increasingly complex. The company therefore believes that “democracies should lead in AI development guided by freedom, equality, and respect for human rights”. It added that governments, companies, and organizations sharing those values should work together to create AI systems that promote global growth, protect people, and support national security.
Don’t Be Evil?

Originally, Google’s motto was “Don’t be evil”. It was downgraded from a motto to a “mantra” in 2009, and by the time the parent company, Alphabet, was created in 2015, it was no longer included in the company’s code of ethics. And while Google originally pledged not to pursue “technologies that gather or use information for surveillance violating internationally accepted norms”, it has now done a complete turnaround. The company justifies this change in stance by arguing that it may be more dangerous to let dictatorships have sole access to this type of technology.
Guarding Against Potential Risks

The rapid expansion of AI has sparked a discussion about how this new technology should be governed and how its risks can be mitigated. Many, including Stuart Russell, a British computer scientist, have warned against developing autonomous weapons systems and advocated for systems of global control. However, Google noted that the technology has been evolving at an incredibly rapid pace since the company first published its principles, and Alphabet has every intention of staying ahead of its global competition. In fact, it plans to spend $75bn building out its AI infrastructure and capabilities over the next year.
The Doomsday Clock

The Doomsday Clock is a representation of how close humanity is to destruction. It is updated regularly and currently sits at 89 seconds to midnight. The Bulletin of the Atomic Scientists sets the clock every year, and this year it cited climate change, the misuse of scientific advances, and risks associated with artificial intelligence among the key factors. According to the group of scientists, “Systems that incorporate artificial intelligence in military targeting have been used in Ukraine and the Middle East, and several countries are moving to integrate artificial intelligence into their militaries.” They went on to say that China, Russia, and the United States collectively have the power to destroy our entire civilization.
The Bottom Line

We live in very interesting times indeed. On the one hand, advances in technology could improve our lives; on the other, the same technology could destroy us. Considering that we are closer to “midnight” than ever before, it seems vital that we find ways of dealing with the issues inherent in integrated AI systems. Steering clear of AI for military applications may have seemed like a good idea a few years ago, but the prospect that the rest of the world is pursuing it has raised fears of being left behind. Whether pursuing these goals will help the situation or worsen it remains to be seen.