AI seems to be built into every new gadget these days. Many people around the world rely on it for tasks at work, while others turn to it for advice on a wide range of topics. Yet concerns have been raised about how much control we actually have over this ever-growing technology. According to one researcher, there is no current evidence that AI can be controlled, and it therefore should not be developed further.
One of the Most Significant Issues Facing Humanity

Dr. Yampolskiy, an AI safety expert and author, recently wrote about the impact AI will have on our society in his upcoming book, AI: Unexplainable, Unpredictable, Uncontrollable. He believes AI is among the biggest issues our species will need to face, yet it remains poorly understood, poorly defined, and under-researched. Dr. Yampolskiy hasn't minced words about whether this change will be positive or negative, stating, “We are facing an almost guaranteed event with the potential to cause an existential catastrophe.” While the outcome could be prosperity, it could also be extinction.
Can AI Be Safely Controlled?

Yampolskiy says he has extensively reviewed the scientific literature on AI and cannot find any definitive proof that we can safely control it. He feels that we are producing intelligent software at a rate that outstrips our ability to control it. Based on that comprehensive review, he has concluded that advanced intelligent systems can never be fully controllable and will always present some level of risk. Yampolskiy believes that minimizing this risk while maximizing the potential benefits should be the ultimate goal of the AI community.
The Obstacles

Unlike traditional software, AI adjusts its behavior as it learns from more data. It can often behave in unexpected ways, which increases the chances of new risks emerging over time. Many AI systems are referred to as “black boxes” because they cannot explain how they arrive at their decisions. Yet these systems are already being tasked with making important decisions in fields such as investing, banking, healthcare, and security. If people start to blindly trust the output AI provides, they may not be able to recognize when it manipulates outcomes or makes errors.
Good Values and Bad Biases

As these AI systems become more advanced and their ability to act independently increases, it becomes harder to maintain sufficient human oversight. This only gets more difficult as the gap between superintelligent systems and human intelligence grows wider over time. There is also the chance that an AI's training data could lead it to pick up harmful human biases. One way to deal with this would be to have it learn everything about the world from scratch, ignoring what we already know. However, this could mean that the values that prioritize human well-being would be lost as well.
Should Humans Fully Control AI?

While some suggest designing a system that completely obeys human instructions, there are concerns with this approach. For example, the AI could be faced with conflicting orders, act on misinterpreted instructions, or be put to malicious use by bad actors. Rather, using AI as an advisor could potentially bypass the issues brought about by direct orders. However, it would need to have superior values for that to work. According to Dr. Yampolskiy, “Most AI safety researchers are looking for a way to align future superintelligence to the values of humanity. Value-aligned AI will be biased by definition, pro-human bias, good or bad is still a bias.”
Minimizing the Risks Associated With AI

The author feels that these AI systems need to be limitable, transparent, and modifiable, with ‘undo’ options. He also feels that AI systems need to be labeled as controllable or uncontrollable, with partial bans on some technologies. However, he doesn't view this as a reason for people to get discouraged. Instead, he sees it as a reason to increase AI safety and security efforts. That way, we can enjoy the benefits of AI without increasing the potential risks.