Why Experts Say We Should Control AI, Now

Parenting is HARD

Key Takeaways

  • New research suggests that there may be no way to control super-smart artificial intelligence. 
  • A journal paper argues that controlling AI would require much more advanced technology than we currently possess.
  • Some experts say that truly intelligent AI may be here sooner than we think.
Conceptual image: an artificial intelligence robot face split in two, one half fully rendered and the other formed of network lines.
Yuichiro Chino / Getty Images

If humans ever develop super-smart artificial intelligence, there may be no way to control it, scientists say. 

AI has long been touted as either a cure for all humanity’s problems or a Terminator-style apocalypse. So far, though, AI hasn’t come close to even human-level intelligence. But keeping a leash on advanced AI could be too complex a problem for humans if it’s ever developed, according to a recent paper published in the Journal of Artificial Intelligence Research.

"A super-intelligent machine that controls the world sounds like science fiction," Manuel Cebrian, one of the paper’s co-authors, said in a news release.

"But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity."

Coming Soon to a Super Computer Near You

The journal paper argues that controlling AI would require much more advanced technology than we currently possess.

In their study, the team conceived a theoretical containment algorithm meant to ensure a superintelligent AI could not harm people under any circumstances: it would first simulate the AI’s behavior and halt it if that behavior were judged harmful. But the authors found that such an algorithm cannot be built.

"If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations." Iyad Rahwan, director of the Center for Humans and Machines at the Max Planck Institute for Human Development in Germany, said in the news release.

"If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable."

Conceptual image: binary code transforming into an AI robot face.
Yuichiro Chino / Getty Images

Truly intelligent AI may be here sooner than we think, argues Michalis Vazirgiannis, a computer science professor at École Polytechnique in France. "AI is a human artifact, but it is fast becoming an autonomous entity," he said in an email to Lifewire.

"The critical point will be if/when singularity occurs (i.e., when AI agents will have consciousness as an entity) and therefore they will claim independence, self-control, and eventual dominance."

The Singularity is Coming

Vazirgiannis isn’t alone in predicting the imminent arrival of super AI. True believers in the AI threat like to talk about the "singularity," which Vazirgiannis explains is the point at which AI will supersede human intelligence and when "AI algorithms will potentially realize their existence and start to behave selfishly and cooperatively."

According to Ray Kurzweil, a director of engineering at Google, the singularity will arrive before the mid-21st century. "2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence," Kurzweil told Futurism.

"I have set the date 2045 for the 'Singularity,' which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created."

But not all AI experts think that intelligent machines are a threat. The AI that’s under development is more likely to be useful for drug development and isn’t showing any real intelligence, AI consultant Emmanuel Maggiori said in an email interview. "There is a big hype around AI, which makes it sound like it's really revolutionary," he added. "Current AI systems are not as accurate as publicized, and make mistakes a human would never make."

Take Control of AI, Now

Regulating AI so that it doesn’t escape our control may be difficult, Vazirgiannis says. Companies, rather than governments, control the resources that power AI. "Even the algorithms, themselves, are usually produced and deployed in the research labs of these large and powerful, usually multinational, entities," he said.

"It is evident, therefore, that states’ governments have less and less control over the resources necessary to control AI." 

Some experts say that to control superintelligent AI, humans will need to manage computing resources and electric power. "Science fiction movies like The Matrix make prophecies about a dystopian future where humans are used by AI as bio-power sources," Vazirgiannis said.

"Even though remote impossibilities, humankind should make sure there is sufficient control over the computing resources (i.e., computer clusters, GPUs, supercomputers, networks/communications), and of course the power plants that provide electricity which is absolutely detrimental to the function of AI."

A security officer watches cloud blocks forming a face in the sky.
Colin Anderson Productions pty ltd / Getty Images

The deeper problem with controlling AI is that researchers don’t always understand how such systems make their decisions, and control depends on that understanding, Michael Berthold, the co-founder and CEO of data science software firm KNIME, said in an email interview. "If we don’t do that, how can we 'control' it?"

He added, "We don’t understand when a totally different decision is made based on, to us, irrelevant inputs." 
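
As a loose illustration of the brittleness Berthold describes, the hypothetical probe below (our sketch, not a KNIME tool) counts how often a change too small for a human to care about flips a black-box model’s decision:

```python
import random

def decision_flips(predict, x, eps=1e-3, trials=100):
    """Probe a black-box classifier: count how often a tiny random
    perturbation of the input vector x changes the predicted label."""
    base = predict(x)
    flips = 0
    for _ in range(trials):
        nudged = [v + random.uniform(-eps, eps) for v in x]
        if predict(nudged) != base:
            flips += 1
    return flips

# A toy stand-in model whose decision boundary sits right next to the
# input -- exactly the "irrelevant inputs" sensitivity described above.
toy_model = lambda x: int(sum(x) > 1.0)
print(decision_flips(toy_model, [0.5, 0.4999]))  # flips on many trials
```

A high flip count on inputs that look identical to a person is one concrete, measurable symptom of the opacity he’s pointing at.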

The only way to control the risk of using AI is to ensure that it’s only used when that risk is manageable, Berthold said. "Put differently, two extreme examples: Don’t put AI in charge of your nuclear power plant where a little error can have catastrophic side effects," he added.

"On the other hand, AI predicts if your room temperature should be adjusted up or down a bit may well be worth the tiny risk for the benefit of living comfort."

If we can’t control AI, we had better teach it manners, former NASA computer engineer Peter Scott said in an email interview. "We cannot, ultimately, ensure the controllability of AI any more than we can ensure that of our children," he said.

"We raise them right and hope for the best; so far, they have not destroyed the world. To raise them well, we need a better understanding of ethics; if we can't clean our own house, what code are we supposed to ask AI to follow?"

But all hope is not lost for the human race, says AI researcher Yonatan Wexler, the executive vice president of R&D at OrCam. "While advances are indeed impressive, my personal belief is that human intelligence should not be underestimated," he said in an email interview. "We as a species have created quite amazing things, including AI itself."

The search for ever-smarter AI continues. But it might be better to consider how we control our creations before it’s too late.
