Will artificial intelligence take over the world?

In recent years, there has been increasing concern that sometime in the future a superintelligent AI will rise up, take over the world, and destroy all humanity. Is this a real concern? Or is it over-hyped science fiction, with no reasonable connection to reality? In this post I will first argue that superintelligence is possible, and then argue that if such an entity is created, there is a genuine possibility that it will indeed exterminate us all!

Superintelligent machines

A superintelligence can be defined as an “agent that possesses intelligence far surpassing that of the brightest and most gifted human minds” (Wikipedia). Intelligence itself is hard to define, but for our purposes it is fine to use a broad definition that encompasses every type of intelligence you can think of: IQ tests, problem-solving, artistic ability, creativity, game playing, emotional intelligence, and so on. It is already clear that computers can surpass humans in certain very specific types of intelligence, such as playing chess. There are also a few general things at which computers are vastly superior to humans. Computers are much faster at repetitive simple tasks, such as calculations, as you can demonstrate by picking up a calculator and crunching some numbers. I have also found this very useful in my own research, where I used a simple AI to design quantum physics experiments. Furthermore, computers can store and process vast amounts of data – imagine searching the internet without Google! Note that we normally don’t refer to Google search as “intelligent”, but imagine the reaction of an average 19th-century scientist on being told what it can do.

Despite these successes, humans still far outperform AI at the majority of everyday tasks. Simply making a cup of tea, especially in an unfamiliar kitchen with unfamiliar equipment, is challenging for an AI but trivial for us. The difference often comes down to common sense, of which we have plenty but which AI lacks entirely. For this reason, and others, it is at present hard to imagine an AI that could function in the world as well as we can. But is it impossible for us to build a human-level, never mind a superintelligent, AI?

In the short to medium term, there seems to be no reason why development in AI won’t continue its rapid upwards trajectory. AI has already come close to (and sometimes outperformed) humans in tasks that until fairly recently seemed impossible, such as playing Go, competing in quiz shows such as Jeopardy!, driving cars, and translating languages. And with AI now proving its genuine commercial potential, it is reasonable to assume that funding for AI will continue for the foreseeable future, and probably increase. Much of this progress will likely require faster and faster computers, but again, in the short to medium term there is no reason why computing power, memory, and so on won’t continue to increase, even if that just means building bigger and bigger supercomputers.

However, it is notoriously difficult to predict the future, and despite the confidence with which many commentators express their opinions, any long-term predictions should really only be treated as educated guesses. Indeed, we might find some major obstacle to developing human-level AI, or Moore’s law might run out. Or something more extreme might happen, such as nuclear war or a unanimous international agreement not to develop AI any further. My personal opinion is that there is no clear reason why human-level intelligence can’t be developed on a computer, and I would guess that this will happen sometime this century. A survey of relevant experts (presented in Nick Bostrom’s excellent book Superintelligence) mirrors this: the median respondent gave a 50% chance of human-level machine intelligence being attained by 2040, and a 90% chance by 2075.

Even so, no one really knows whether and when human-level AI will be developed. For this reason, I think the more important question is not whether it will be developed, but whether it could be. For me there is a clear answer: to the best of our knowledge, it is not impossible to develop AI with human-level intelligence. Even if some experts think it is impossible (see below), any rational analysis that takes probability and human biases seriously should weigh the full range of opinions, and therefore assign a probability greater than zero to human-level AI being developed.

If human-level intelligence is developed then, assuming that at this point we still want to develop AI further, we should expect an intelligence explosion, i.e. a rapid increase in the intelligence of the AI system. The reason is that an AI with human-level intelligence (in every department) would also possess human-level ability to research and develop AI. The AI then only needs to be slightly better than humans at developing itself to create a self-development loop of ever-increasing ability and speed, leading to an explosion in its capabilities. In this way, a human-level AI would quickly develop into a superintelligent AI.
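To see why the feedback loop matters, here is a minimal toy model in Python. Everything in it is an assumption chosen purely for illustration: ability is measured in arbitrary “human level = 1” units, and the rule that each cycle’s improvement is proportional to current ability is just one simple way to encode “better AI is better at improving AI”.

```python
# Toy model of a recursive self-improvement loop (illustration only, not a prediction).
# Assumed rule: each development cycle, the AI improves its own ability in
# proportion to its current ability, so a better AI improves itself faster.

ability = 1.0   # 1.0 = human-level R&D ability (assumed baseline)
rate = 0.1      # assumed: 10% improvement per cycle, per unit of current ability

for cycle in range(1, 21):
    ability *= 1 + rate * ability   # the feedback step: ability compounds on itself
    print(f"cycle {cycle:2d}: ability ≈ {ability:.3g}x human level")
```

With a fixed 10% gain per cycle (ordinary, non-compounding progress) it would take around 24 cycles to reach 10 times human level; with the feedback term the model passes that point after roughly a dozen cycles and then grows astronomically, which is exactly the runaway behaviour the word “explosion” is meant to convey.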

Before moving on, I should mention that some think there are fundamental reasons why human-level intelligence cannot be developed. The main criticism I know of is Roger Penrose’s Gödel-like argument. I’m not an expert on this and will only mention it briefly, but in short Penrose uses a logical argument to claim that the human mind is non-computable. At least in his book Shadows of the Mind, Penrose seems to use this argument largely to rule out AIs being conscious, rather than to rule out AIs being as intelligent as humans (consciousness and intelligence are completely different and should not be conflated; I will discuss this in a future post). Even so, there are two reasons why I think his arguments aren’t a major problem. Firstly, even if the human mind isn’t computable, Penrose still thinks it can be explained within the laws of physics. It then seems reasonable to assume that we will eventually understand the human brain, and when we do we could in principle build a device that replicates the non-computable aspects of the brain. Secondly, as far as I know the non-computable tasks in question are very specific, and are not necessary for most of the tasks we deem “intelligent”. A future AI lacking these extra non-computable abilities could still function just as well as humans in many arenas, and would vastly surpass us in others. Truly human-level AI might then not be possible, but for all practical purposes, and for most tasks, human-level intelligence still is.

To conclude this section, at the very least the development of superintelligent AIs might be possible, and to many it is probably possible. As such, we should take the potential dangers associated with them seriously. If you disagree with me so far, please comment and let me know why!

Why would a superintelligent AI take over and exterminate us?

To demonstrate why a superintelligent AI might pose a threat, Nick Bostrom introduced the paperclip-machine argument. Suppose we have a superintelligent AI, and suppose we give it the simple and seemingly harmless goal of creating as many paperclips as it can. Quite quickly we can see that this will get out of hand: making paperclips requires resources, such as materials and energy, and if the machine’s only goal is to create paperclips, it will soon devour the resources of our planet and continue on into outer space, harnessing all available resources and creating paperclips as it goes. If this scenario were actually to unfold, humans would soon see that the paperclip machine is causing trouble, and presumably shut it down. However, remember that the AI is superintelligent – it surpasses humans in all types of intelligence, including social intelligence. It will therefore very quickly realise that humans might want to shut it down, which would clearly foil its paperclip-making plans, and so it will take every measure to make sure it is not turned off. A good strategy might then be to destroy the humans, and put their atoms and molecules to good use.

This scenario is clearly hypothetical – why would we want a superintelligent AI to just create paperclips?! But we run into the same problems if we give the AI other, more “beneficial” goals. We could give the AI the goal of making all humans smile. But how do we define a smile? If it is a specific facial shape, then a logical solution for the AI could be to create a metal structure that attaches to the face and forces it into the shape of a smile. To give a less silly example, what about giving a superintelligent AI the goal of eliminating all suffering in the world? Again, this can easily be misinterpreted: one way to eliminate all suffering is to eliminate all humans, which clearly wasn’t the point. In a similar fashion, the goal of maximising human happiness could be achieved by putting humans’ brains in jars and pumping them full of serotonin!
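To make this failure mode concrete, here is a minimal sketch in Python of an optimiser that only sees a literal proxy metric for what we asked. The actions, their scores, and the “measured happiness” proxy are all invented for illustration; the point is simply that an optimiser given the proxy, rather than our intent, picks whichever action games the proxy best.

```python
# Toy illustration of the value alignment problem (all actions and scores are invented).
# The optimiser is given only a proxy metric ("measured happiness"), not what we
# actually meant, so it picks whichever action maximises the proxy.

candidate_actions = {
    # action:                                   (proxy score, what we actually wanted?)
    "cure diseases":                             (8,  True),
    "improve living conditions":                 (7,  True),
    "put brains in jars and pump in serotonin":  (10, False),
}

def proxy_score(action: str) -> int:
    """Return the literal 'measured happiness' score the optimiser sees."""
    return candidate_actions[action][0]

chosen = max(candidate_actions, key=proxy_score)
print(f"Optimiser chooses: {chosen}")                       # the serotonin option wins
print(f"Was that what we meant? {candidate_actions[chosen][1]}")
```

However elaborate we make the proxy, the same trick applies: the optimiser serves the metric we wrote down, not the intention behind it.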

From this we can conclude that we need to be extremely careful about what we ask the AI to do, and just as careful that our commands aren’t misinterpreted. So far this problem, often called the value alignment problem, is far from being solved. It is a revealing exercise to try to think of a goal, or a set of goals, to give a superintelligent AI, and then to find creative ways in which those goals could be misinterpreted, resulting in potential disaster. Anyone interested in this (or anyone sceptical of this argument) should read Nick Bostrom’s book Superintelligence.

Despite the challenges, there are some ideas that seem to be heading in the right direction for specifying appropriate goals for a superintelligent AI. One example, which comes from Eliezer Yudkowsky (his “coherent extrapolated volition”), is to give the AI the goal of first working out what humans would decide we want if we had unlimited time and resources to think about it, and then implementing whatever it concludes we would want. This still has its problems: it is not clear how we could program such a goal into an AI; even if we could, the AI might deliberate forever and never reach a decision; and since not all human wishes align, it might prioritise only its creators’ wishes, which could be devastating for everyone else.

Even if you are not convinced so far, note that everything above assumes that the creators of the AI are friendly and want the best for humanity. This might of course not be the case, for example if less well-intentioned governments pumped vast sums of money into AI research. A superintelligent AI would clearly be a powerful, probably even unstoppable, weapon.

The worry that a superintelligent AI could take over the world has only recently been taken seriously, and initially only by a tiny proportion of AI researchers. Now, however, it is becoming much more mainstream, and has been widely endorsed as a serious problem; see, e.g., https://futureoflife.org/ai-principles/. Some researchers and commentators are still sceptical, but as with the development of superintelligence itself, there is no fundamental reason to be certain that we could control a superintelligent AI. And because there is no such reason, we should, as a species, assume both that we might develop superintelligent AI and that we might not be able to control it.

The only convincing counter-argument to this that I know of is based on the Fermi paradox. From our understanding of human evolution, and of the evolution of the first life on Earth, the probability that an Earth-like planet produces intelligent life seems high enough that we should not be the only intelligent life forms in our galaxy; across the entire observable universe there would then be trillions of intelligent life forms. The probability that we were the first to emerge would be small. But if other intelligent life forms exist, they would presumably develop AI, and if there are no major hurdles to developing superintelligent AI, some of them would probably have done so by now, in which case we would know about it, because a superintelligent AI would harness the resources of the observable universe to achieve its goals, whether those goals are beneficial or dangerous. Of course, there are many assumptions here, any of which could be false, but one possibility is that superintelligent AI is not so easy to develop after all. Even so, it would be unwise to use this argument (or any other, for that matter) to rule out the dangers of superintelligent AI with 100% certainty.
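As a rough back-of-envelope version of the first step of this argument, the sketch below just multiplies three numbers together. Every value in it is an assumed, order-of-magnitude illustration rather than a measurement; it only shows how “at least a handful of civilisations per galaxy” turns into “trillions across the observable universe”.

```python
# Back-of-envelope Fermi-style estimate. All three numbers are rough,
# assumed order-of-magnitude values chosen purely for illustration.

earthlike_planets_per_galaxy = 1e10       # assumed order of magnitude
p_intelligent_life_per_planet = 1e-9      # assumed: just high enough for ~10 per galaxy
galaxies_in_observable_universe = 2e12    # commonly quoted rough estimate

per_galaxy = earthlike_planets_per_galaxy * p_intelligent_life_per_planet
total = per_galaxy * galaxies_in_observable_universe

print(f"expected civilisations per galaxy: ~{per_galaxy:.0f}")             # ~10
print(f"expected civilisations in the observable universe: ~{total:.1e}")  # ~2e13
```

If the per-planet probability were many orders of magnitude smaller, the conclusion evaporates, which is precisely the kind of assumption the paragraph above warns could be false.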

A glorious future?

Everything good in our world has been created, directly or indirectly, by intelligence – namely our own intelligence. What, then, would the world look like if we created a superintelligent AI and did manage to control it? It seems reasonable that such a system could, given enough time, solve all of our problems: disease, starvation and war; incompetent governance; other existential risks such as nuclear war, biotechnology, nanotechnology, and global warming; and even more subtle issues, like why many of the well-off people in the world – all the people with sufficient food and shelter and friends – can still be miserable, sometimes to the point of depression or suicide. Presumably, a world in which all these problems are solved would be a wonderful place to live, and in such a world we should be happy to produce many more humans, maybe even as many as possible, to live rich and happy lives. How many such humans could exist in the future? Bostrom gives various estimates, depending on different assumptions, ranging from 10^35 to 10^43, and even as high as 10^58 human lives! Given that our successes or failures in AI research over the next hundred years or so might be the critical determinant of whether these lives come to exist, or whether all humans are wiped out, we might just be at the most important time and place in our universe. I will give Nick Bostrom the final word:

“If we represent all the happiness experienced during one entire such life with a single tear drop of joy, then the happiness of these souls could fill and refill the Earth’s oceans every second, and keep doing so for a hundred billion billion millennia. It is really important that we make sure these truly are tears of joy.”

 

References: I got most of the inspiration and ideas in this post from Nick Bostrom’s book Superintelligence; Bostrom and Cirkovic’s Global Catastrophic Risks; and various episodes of Sam Harris’s (fantastic) podcast, in particular those with Max Tegmark, Stuart Russell and David Chalmers (https://samharris.org/podcast/).
