
CAN ROBOTS UNDERSTAND OR FEEL?

Over the centuries, a great many theories have been advanced about whether a machine can think and feel. My own philosophy is called “constructivism”: instead of endlessly debating the question, we should devote our energy to building an automaton and seeing how far we can get. Philosophical debates are never ultimately resolved; the advantage of science is that, when all is said and done, one can perform experiments to settle a question decisively.

Thus the question of whether a robot can think may ultimately be settled by building one. Some, however, have argued that machines will never be able to think like a human. Their strongest argument is that, although a robot can manipulate facts faster than a human, it does not “understand” what it is manipulating. Although it can process sensory data (e.g., color, sound) better than a human, it cannot truly “feel” or “experience” the essence of these sensations.

For example, philosopher David Chalmers has divided the problems of consciousness into two categories, the Easy Problems and the Hard Problems. To him, the Easy Problems involve creating machines that can mimic more and more human abilities, such as playing chess, adding numbers, recognizing certain patterns, etc. The Hard Problems involve creating machines that can understand feelings and subjective sensations, which are called “qualia.”

Just as it is impossible to teach the meaning of the color red to a blind person, a robot will never be able to experience the subjective sensation of the color red, they say. Or a computer might be able to translate Chinese words into English with great fluency, but it will never be able to understand what it is translating. In this picture, robots are like glorified tape recorders or adding machines, able to recite and manipulate information with incredible precision, but without any understanding whatsoever.

These arguments have to be taken seriously, but there is also another way of looking at the question of qualia and subjective experience. In the future, a machine will most likely be able to process a sensation, such as the color red, much better than any human. It will be able to describe the physical properties of red and even use it poetically in a sentence better than a human. Does the robot “feel” the color red? The point becomes irrelevant, since the word “feel” is not well defined. At some point, a robot’s description of the color red may exceed a human’s, and the robot may rightly ask: Do humans really understand the color red? Perhaps humans cannot understand the color red with all the nuances and subtlety that a robot can.

As behaviorist B. F. Skinner once said, “The real problem is not whether machines think, but whether men do.”

Similarly, it is only a matter of time before a robot will be able to define Chinese words and use them in context much better than any human. At that point, it becomes irrelevant whether the robot “understands” the Chinese language. For all practical purposes, the computer will know the Chinese language better than any human. In other words, the word “understand” is not well defined.

One day, as robots surpass our ability to manipulate these words and sensations, it will become irrelevant whether the robot “understands” or “feels” them. The question will cease to have any importance.

As mathematician John von Neumann said, “In mathematics, you don’t understand things. You just get used to them.”

So the problem lies not in the hardware but in the nature of human language, in which words that are not well defined mean different things to different people. The great quantum physicist Niels Bohr was once asked how one could understand the deep paradoxes of the quantum theory. The answer, he replied, lies in how you define the word “understand.”

Dr. Daniel Dennett, a philosopher at Tufts University, has written, “There could not be an objective test to distinguish a clever robot from a conscious person. Now you have a choice: you can either cling to the Hard Problem, or you can shake your head in wonder and dismiss it. Just let go.”

In other words, there is no such thing as the Hard Problem.

To the constructivist philosophy, the point is not to debate whether a machine can experience the color red, but to construct the machine. In this picture, there is a continuum of levels describing the words “understand” and “feel.” (This means that it might even be possible to assign numerical values to the degree of understanding and feeling, as the sketch below suggests.) At one end we have the clumsy robots of today, which can manipulate a few symbols but not much more. At the other end we have humans, who pride themselves on feeling qualia. But as time goes by, robots will eventually be able to describe sensations better than we can at any level. Then it will be obvious that robots understand.
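To make the continuum concrete, here is a minimal Python sketch of how one might assign such numerical values: an agent’s “understanding” of a concept is scored as the fraction of probe questions it answers correctly. The probes and the two stand-in agents are hypothetical, invented for this sketch rather than drawn from any research program.

```python
# A toy illustration of grading "understanding" on a continuum instead of
# treating it as all-or-nothing. All names here are hypothetical.

def understanding_score(agent, probes):
    """Return the fraction of probe questions the agent answers correctly."""
    correct = sum(1 for question, answer in probes if agent(question) == answer)
    return correct / len(probes)

# Hypothetical probes about the concept "red".
probes = [
    ("wavelength_nm_range", (620, 750)),
    ("is_primary_additive_color", True),
    ("complement", "green"),
]

# Two stand-in agents: a crude lookup table and a richer one.
crude_robot = {"is_primary_additive_color": True}.get
richer_robot = {
    "wavelength_nm_range": (620, 750),
    "is_primary_additive_color": True,
    "complement": "green",
}.get

print(understanding_score(crude_robot, probes))   # 0.33..., partial understanding
print(understanding_score(richer_robot, probes))  # 1.0, a perfect score on these probes
```

On this view there is no line where “understanding” switches on; the score simply climbs.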

This was the philosophy behind Alan Turing’s famous Turing test. He predicted that one day a machine would be built that could answer any question, so that it would be indistinguishable from a human. He said, “A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.”
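The test itself has a simple protocol. Below is a minimal sketch of the imitation game’s structure; the judge and the two respondents are trivial stand-ins invented so the example runs, whereas a real test would use a human interrogator and a candidate program.

```python
import random

def imitation_game(judge, human_reply, machine_reply, questions):
    """Return the fraction of rounds in which the judge misidentifies the machine."""
    fooled = 0
    for question in questions:
        # Hide which respondent is which by shuffling them each round.
        respondents = [("human", human_reply), ("machine", machine_reply)]
        random.shuffle(respondents)
        answers = [reply(question) for _, reply in respondents]
        guess = judge(question, answers)  # judge names the index it thinks is the machine
        actual = [label for label, _ in respondents].index("machine")
        if guess != actual:
            fooled += 1
    return fooled / len(questions)

# Trivial stand-ins so the sketch runs end to end.
questions = ["What is 2 + 2?", "What do you dream about?"]
human_reply = lambda q: "four" if "2 + 2" in q else "flying, mostly"
machine_reply = lambda q: "four" if "2 + 2" in q else "electric sheep"
judge = lambda q, answers: random.randrange(2)  # a judge reduced to guessing

# A machine passes when the fooling rate is indistinguishable from chance (about 0.5).
print(imitation_game(judge, human_reply, machine_reply, questions))
```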

Nobel laureate Francis Crick said it best. In the last century, he noted, biologists had heated debates over the question “What is life?” Now, with our understanding of DNA, scientists realize that the question is not well defined; there are many variations, layers, and complexities to that simple question. The question “What is life?” simply faded away. The same may eventually apply to feeling and understanding.

SELF-AWARE ROBOTS

What steps must be taken before computers like Watson have self-awareness? To answer this question, we have to refer back to our definition of self-awareness: the ability to put oneself inside a model of the environment, and then run simulations of that model into the future to achieve a goal. The first step, building the model, requires a very high level of common sense in order to anticipate a variety of events. The robot then has to place itself inside this model, which requires an understanding of the various courses of action it may take.
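Stated as code, this definition is a loop: build a model, place the agent in it, and score candidate plans by simulating each one forward. The Python sketch below illustrates that structure; the WorldModel class, the plans, and the goal function are all invented for illustration, not taken from any actual robot.

```python
import copy

class WorldModel:
    """Steps one and two: a model of the environment with the agent placed inside it."""
    def __init__(self, state):
        self.state = dict(state)
        self.state["self_position"] = 0  # the agent appears in its own model

    def step(self, action):
        """Advance the simulated world one tick under the given action."""
        self.state["self_position"] += action

def simulate(model, plan):
    """Step three: run a copy of the model into the future under a candidate plan."""
    future = copy.deepcopy(model)
    for action in plan:
        future.step(action)
    return future.state

def choose_plan(model, plans, goal):
    """Pick the plan whose simulated future best satisfies the goal."""
    return max(plans, key=lambda plan: goal(simulate(model, plan)))

model = WorldModel({"obstacle_at": 2})
plans = [[1, 1, 1], [1, 0, 0], [-1, -1, -1]]
goal = lambda state: -abs(state["self_position"] - 3)  # goal: end up at position 3
print(choose_plan(model, plans, goal))  # -> [1, 1, 1]
```

Everything hard about self-awareness hides inside the model: a real one would need the common sense the text describes.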

At Meiji University, scientists have taken the first steps toward creating a robot with self-awareness. This is a tall order, but they think they can do it by building robots with a Theory of Mind. They started with two robots: the first was programmed to execute certain motions, and the second was programmed to observe the first and then copy it. Just by watching, the second robot could systematically mimic the first robot’s behavior; it has a rudimentary Theory of Mind. This is the first time in history that a robot has been built specifically to have some sense of self-awareness.
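The logic of the experiment can be caricatured in a few lines of code: one agent performs a fixed sequence of motions, and a second records and replays what it observes. The class and motion names below are hypothetical; the real robots work with physical actuators and cameras, not strings.

```python
class ActorRobot:
    """The first robot: blindly executes a fixed motion program."""
    def __init__(self, program):
        self.program = program

    def perform(self):
        for motion in self.program:
            yield motion  # in hardware, each motion would drive the actuators

class ObserverRobot:
    """The second robot: watches another robot and replays what it saw."""
    def __init__(self):
        self.memory = []

    def watch(self, actor):
        self.memory = list(actor.perform())  # observe and record the behavior

    def mimic(self):
        return list(self.memory)  # reproduce the observed behavior

actor = ActorRobot(["raise_arm", "turn_left", "lower_arm"])
observer = ObserverRobot()
observer.watch(actor)
print(observer.mimic())  # ['raise_arm', 'turn_left', 'lower_arm']
```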

In 2012, the next step was taken by scientists at Yale University, who created a robot that passed the mirror test. When animals are placed in front of a mirror, most of them think the image in the mirror is that of another animal; as we recall, only a few animals have passed the mirror test, realizing that the mirror image was a reflection of themselves. The Yale robot, called Nico, resembles a gangly skeleton made of twisted wires, with mechanical arms and two bulging eyes sitting on top. When placed in front of a mirror, Nico not only recognized itself but could also deduce the location of objects in a room by looking at their images in the mirror. This is similar to what we do when we look into a rearview mirror and infer the location of objects behind us.

Nico’s programmer, Justin Hart, says, “To our knowledge, this is the first robotic system to attempt to use a mirror in this way, representing a significant step towards a cohesive architecture that allows robots to learn about their bodies and appearance through self-observation, and an important capability required in order to pass the mirror test.”
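The geometry behind the trick is straightforward: an object seen “in” a flat mirror is the reflection of the real object across the mirror plane, so reflecting the apparent position back recovers the real one. Here is a minimal sketch, assuming a flat mirror lying on the plane x = mirror_x; the coordinates are invented.

```python
def reflect_across_mirror(apparent, mirror_x):
    """Map an apparent (in-mirror) position back to the real-world position.

    Assumes a flat mirror lying on the plane x = mirror_x.
    """
    x, y, z = apparent
    return (2 * mirror_x - x, y, z)

# Mirror on the plane x = 0; an object appears 1.5 m "behind" the glass.
apparent_position = (-1.5, 0.4, 1.0)
print(reflect_across_mirror(apparent_position, mirror_x=0.0))  # (1.5, 0.4, 1.0)
```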

Because the robots at Meiji University and Yale University represent the state of the art in building robots with self-awareness, it is easy to see that scientists have a long way to go before they can create robots with humanlike self-awareness.

Their work is just the first step, because our definition of self-awareness demands that the robot use this information to create simulations of the future. This is far beyond the capability of Nico or any other robot.

This raises the important question: How can a computer gain full self-awareness? In science fiction, we often encounter a situation where the Internet suddenly becomes self-aware, as in the movie The Terminator. Since the Internet is connected to the entire infrastructure of modern society (e.g., our sewer system, our electricity, our telecommunications, our weapons), it would be easy for a self-aware Internet to seize control of society. We would be left helpless in this situation. Scientists have written that this may happen as an example of an “emergent phenomenon” (i.e., when you amass a sufficiently large number of computers together, there can be a sudden phase transition to a higher stage, without any input from the outside).

However, this says everything and it says nothing, because it leaves out all the important steps in between. It’s like saying that a highway can suddenly become self-aware if there are enough roads.

But in this book we have given a definition of consciousness and self-awareness, so it should be possible to list the steps by which the Internet can become self-aware.

First, an intelligent Internet would have to continually make models of its place in the world. In principle, this information can be programmed into the Internet from the outside. This would involve describing the outside world (i.e., Earth, its cities, and its computers), all of which can be found on the Internet itself.

Second, it would have to place itself in the model. This information is also easily obtained. It would involve giving all the specifications of the Internet (the number of computers, nodes, transmission lines, etc.) and its relationship to the outside world.

But step three is by far the most difficult. It means continually running simulations of this model into the future, consistent with a goal. This is where we hit a brick wall. The Internet is not capable of running simulations into the future, and it has no goals. Even in the scientific world, simulations of the future are usually done with just a few parameters (e.g., simulating the collision of two black holes). Running a simulation of a model of the world that contains the Internet itself is far beyond the programming available today. It would have to incorporate all the laws of common sense, all the laws of physics, chemistry, and biology, as well as facts about human behavior and human society.
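Laid out as code, the three steps look deceptively simple, which is exactly the point: the first two are bookkeeping, while the third is the brick wall. In the hypothetical sketch below, all function names and model fields are invented, and step three is left as an unimplemented stub standing in for everything the text says is beyond today’s programming.

```python
def build_world_model():
    # Step one: describe the outside world (cities, computers, links, ...),
    # all of which can be found on the Internet itself.
    return {"cities": [...], "computers": [...], "links": [...]}

def place_self_in_model(world):
    # Step two: add the network's own specification to the model.
    world["self"] = {"nodes": [...], "transmission_lines": [...]}
    return world

def simulate_future(world, goal, horizon):
    # Step three: run the whole modeled world forward toward a goal. This
    # would need common sense plus physics, chemistry, biology, and human
    # behavior, which is the brick wall the text describes.
    raise NotImplementedError("far beyond the programming available today")

world = place_self_in_model(build_world_model())  # steps one and two are easy
```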

In addition, this intelligent Internet would have to have a goal. Today it is just a passive highway, without any direction or purpose. Of course, one can in principle impose a goal on the Internet. But let us consider the following problem: Can you create an Internet whose goal is self-preservation?

This would be the simplest possible goal, but no one knows how to program even this simple task. Such a program, for example, would have to stop any attempt to shut down the Internet by pulling the plug. At present, the Internet is totally incapable of recognizing a threat to its existence, let alone plotting ways to prevent it. (For example, an Internet capable of detecting threats to its existence would have to be able to identify attempts to shut down its power, cut lines of communication, destroy its servers, disable its fiber-optic and satellite connections, etc. Furthermore, an Internet capable of defending itself against these attacks would have to have countermeasures for each scenario and then run these attempts into the future. No computer on Earth is capable of doing even a fraction of such things.)
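Even a caricature of such a program makes the difficulty visible. The sketch below, with invented threat names, reduces self-preservation to a lookup table of canned countermeasures; the hard part the text points to is everything the lookup cannot do, namely recognizing novel threats and simulating responses to them.

```python
# Hypothetical threat classes mapped to canned countermeasures.
COUNTERMEASURES = {
    "power_cutoff":       "switch to reserve power",
    "severed_comm_lines": "reroute traffic through remaining links",
    "server_destruction": "replicate state to surviving data centers",
    "satellite_jamming":  "fall back to ground fiber",
}

def respond(detected_threats):
    """Match each detected threat to a countermeasure; flag anything unrecognized."""
    return [(threat, COUNTERMEASURES.get(threat, "NO COUNTERMEASURE KNOWN"))
            for threat in detected_threats]

print(respond(["power_cutoff", "novel_attack"]))
# [('power_cutoff', 'switch to reserve power'),
#  ('novel_attack', 'NO COUNTERMEASURE KNOWN')]
```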

In other words, one day it may be possible to create self-aware robots, even a self-aware Internet, but that day is far into the future, perhaps at the end of this century.

But assume for the moment that the day has arrived and self-aware robots walk among us. If a self-aware robot has goals compatible with our own, this type of artificial intelligence will not pose a problem. But what happens if its goals are different? The fear is that humans may be outwitted by self-aware robots and then enslaved. With their superior ability to simulate the future, the robots could plot the outcomes of many scenarios to find the best way to overthrow humanity.

One way this possibility may be controlled is to make sure that the goals of these robots are benevolent. As we have seen, simulating the future is not enough. These simulations must serve some final goal. If a robot’s goal is merely to preserve itself, then it would react defensively to any attempt to pull the plug, which could spell trouble for mankind.

WILL ROBOTS TAKE OVER?

In almost all science-fiction tales, the robots become dangerous because of their desire to take over. The word “robot,” in fact, comes from the Czech robota, meaning drudgery or forced labor, and first appeared in the 1920 play R.U.R. (Rossum’s Universal Robots) by Karel Capek, in which scientists create a new race of mechanical beings that look identical to humans. Soon there are thousands of these robots performing menial and dangerous tasks. However, humans mistreat them, and one day they rebel and destroy the human race. Although these robots have taken over Earth, they have one defect: they cannot reproduce. But at the end of the play, two robots fall in love. So perhaps a new branch of “humanity” emerges once again.

A more realistic scenario comes from the movie The Terminator, in which the military has created a supercomputer network called Skynet that controls the entire U.S. nuclear stockpile. One day, it wakes up and becomes sentient. The military tries to shut Skynet down but then realizes there is a flaw in its programming: it is designed to protect itself, and the only way to do so is by eliminating the problem, humanity. It starts a nuclear war, which reduces humanity to a ragtag bunch of misfits and rebels fighting the juggernaut of the machines.

It is certainly possible that robots could become a threat. The current Predator drone can target its victims with deadly accuracy, but it is controlled by someone with a joystick thousands of miles away. According to the New York Times, the orders to fire come directly from the president of the United States. But in the future, a Predator might have face-recognition technology and permission to fire if it is 99 percent confident of the identity of its target. Without human intervention, it could automatically use this technology to fire at anyone who fits the profile.
