
Strangely, fear is another emotion that is desirable. Evolution gave us the feeling of fear for a reason, to avoid certain things that are dangerous to us. Even though robots will be made of steel, they should fear certain things that can damage them, like falling off tall buildings or entering a raging fire. A totally fearless robot is a useless one if it destroys itself.

But certain emotions may have to be deleted, forbidden, or highly regulated, such as anger. Given that robots could be built to have great physical strength, an angry robot could create tremendous problems in the home and workplace. Anger could get in the way of its duties and cause great damage to property. (The original evolutionary purpose of anger was to show our dissatisfaction. This can be done in a rational, dispassionate way, without getting angry.)

Another emotion that should be deleted is the desire to be in command. A bossy robot will only make trouble and might challenge the judgment and wishes of the owner. (This point will also be important later, when we discuss
whether robots will one day take over from humans.) Hence the robot will have to defer to the wishes of the owner, even if this may not be the best path.

But perhaps the most difficult emotion to convey is humor, which is a glue that can bond total strangers together. A simple joke can defuse a tense situation or inflame it. The basic mechanics of humor are simple: they involve a punch line that is unanticipated. But the subtleties of humor can be enormous. In fact, we often size up other people on the basis of how they react to certain jokes. If humans use humor as a gauge to measure other humans, then one can appreciate the difficulty of creating a robot that can tell if a joke is funny or not. President Ronald Reagan, for example, was famous for defusing the most difficult questions with a quip. In fact, he accumulated a large card catalog of jokes, barbs, and wisecracks, because he understood the power of humor. (Some pundits concluded that he won the presidential debate against Walter Mondale when he was asked if he was too old to be president. Reagan replied that he would not hold the youth of his opponent against him.) Also, laughing inappropriately could have disastrous consequences (and is, in fact, sometimes a sign of mental illness). The robot has to know the difference between laughing with or at someone. (Actors are well aware of the diverse nature of laughter. They are skilled enough to create laughter that can represent horror, cynicism, joy, anger, sadness, etc.) So, at least until the theory of artificial intelligence becomes more developed, robots should stay away from humor and laughter.

PROGRAMMING EMOTIONS

In this discussion we have so far avoided the difficult question of precisely how these emotions would be programmed into a computer. Because of their complexity, emotions will probably have to be programmed in stages.

First, the easiest part is identifying an emotion by analyzing the movements of a person’s face, lips, and eyebrows, as well as the tone of voice. Today’s facial recognition technology is already capable of creating a dictionary of emotions, in which certain facial expressions correspond to particular emotional states. This process actually goes back to Charles Darwin, who spent a considerable amount of time cataloging emotions common to animals and humans.
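As a rough illustration of this first stage, here is a minimal sketch in Python of the kind of "dictionary of emotions" such a system might use. The cue names and emotion labels are invented placeholders; a real system would extract these cues from a vision and speech model.

```python
# Stage 1 (illustrative sketch): map observed facial/vocal cues to a named emotion.
# The cue names and emotion labels below are hypothetical, not from any real system.

EMOTION_DICTIONARY = {
    ("mouth_corners_up", "eyes_crinkled"): "happiness",
    ("brows_lowered", "lips_pressed"): "anger",
    ("brows_raised", "mouth_open"): "surprise",
    ("mouth_corners_down", "gaze_lowered"): "sadness",
}

def identify_emotion(observed_cues):
    """Return the first catalogued emotion whose cues are all present."""
    cues = set(observed_cues)
    for pattern, emotion in EMOTION_DICTIONARY.items():
        if set(pattern) <= cues:
            return emotion
    return "unknown"

print(identify_emotion(["mouth_corners_up", "eyes_crinkled", "head_tilted"]))  # happiness
```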

Second, the robot must respond rapidly to this emotion. This is also easy.
If someone is laughing, the robot will grin. If someone is angry, the robot will get out of his way and avoid conflict. The robot would have a large encyclopedia of emotions programmed into it, and hence would know how to make a rapid response to each one.
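The second stage might be pictured as nothing more than a lookup table pairing each recognized emotion with a stock reaction. The pairings below are invented for illustration only.

```python
# Stage 2 (illustrative sketch): pick a fast, preprogrammed reaction to a recognized emotion.
# The emotion-to-response pairings are hypothetical examples.

RESPONSE_TABLE = {
    "happiness": "smile and continue the task",
    "anger": "step back, lower the voice, avoid conflict",
    "sadness": "offer help in a quiet tone",
    "surprise": "pause and wait for instructions",
}

def rapid_response(emotion):
    # Fall back to a neutral behavior for emotions not in the table.
    return RESPONSE_TABLE.get(emotion, "remain neutral and keep observing")

print(rapid_response("anger"))
```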

The third stage is perhaps the most complex because it involves trying to determine the underlying motivation behind the original emotion. This is difficult, since a variety of situations can trigger a single emotion. Laughter may mean that someone is happy, heard a joke, or watched someone fall. Or it might mean that a person is nervous, anxious, or insulting someone. Likewise, if someone is screaming, there may be an emergency, or perhaps someone is just reacting with joy and surprise. Determining the reason behind an emotion is a skill that even humans have difficulty with. To do this, the robot will have to list the various possible reasons behind the emotion and select the one that best fits the available data.
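One way to picture this third stage is as a scoring loop: each candidate explanation is checked against contextual cues, and the best match wins. The candidate reasons and context flags below are hypothetical placeholders.

```python
# Stage 3 (illustrative sketch): guess the motivation behind an observed emotion
# by scoring candidate explanations against contextual cues. All entries are toy examples.

CANDIDATE_REASONS = {
    "laughter": {
        "heard a joke": ["someone_spoke", "group_smiling"],
        "nervousness": ["tense_topic", "fidgeting"],
        "mocking someone": ["pointing", "target_present"],
    }
}

def infer_motivation(emotion, context):
    """Return the candidate reason whose expected cues best match the context."""
    context = set(context)
    candidates = CANDIDATE_REASONS.get(emotion, {})
    best_reason, best_score = "unknown", -1
    for reason, expected_cues in candidates.items():
        score = len(context & set(expected_cues))
        if score > best_score:
            best_reason, best_score = reason, score
    return best_reason

print(infer_motivation("laughter", ["someone_spoke", "group_smiling"]))  # heard a joke
```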

And fourth, once the robot has determined the origin of this emotion, it has to make the appropriate response. This is also difficult, since there are often several possible responses, and the wrong one may make the situation worse. The robot already has, within its programming, a list of possible responses to the original emotion. It has to calculate which one will best serve the situation, which means simulating the future.
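The fourth stage, "simulating the future," can be sketched as a tiny evaluation loop: each candidate response is given an estimated outcome, and the robot picks the one with the best expected result. The responses and scores below are made-up numbers for illustration.

```python
# Stage 4 (illustrative sketch): choose the response with the best simulated outcome.
# The candidate responses and outcome scores are invented for illustration.

def simulate_outcome(response, situation):
    # A stand-in for a real forward simulation: higher score = better expected outcome.
    estimated_outcomes = {
        ("offer help", "person_is_upset"): 0.8,
        ("tell a joke", "person_is_upset"): 0.2,
        ("stay silent", "person_is_upset"): 0.5,
    }
    return estimated_outcomes.get((response, situation), 0.0)

def choose_response(candidates, situation):
    return max(candidates, key=lambda r: simulate_outcome(r, situation))

print(choose_response(["offer help", "tell a joke", "stay silent"], "person_is_upset"))
```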

WILL ROBOTS LIE?

Normally, we might think of robots as being coldly analytical and rational, always telling the truth. But once robots become integrated into society, they will probably have to learn to lie or at least tactfully restrain their comments.

In our own lives, several times in a typical day we are confronted with situations where we have to tell a white lie. If people ask us how they look, we often dare not tell the truth. White lies, in fact, are like a grease that makes society run smoothly. If we were suddenly forced to tell the whole truth (like Jim Carrey in Liar Liar), we most likely would wind up creating chaos and hurting people. People would be insulted if you told them what they really looked like or how you really felt. Bosses would fire you. Lovers would dump you. Friends would abandon you. Strangers would slap you. Some thoughts are better kept confidential.

In the same way, robots may have to learn how to lie or conceal the truth, or else they might wind up offending people and being decommissioned by their owners. At a party, if a robot tells the truth, it could reflect badly on its owner and create an uproar. So if someone asks for its opinion, it will have to learn how to be evasive, diplomatic, and tactful. It might dodge the question, change the subject, offer platitudes, reply with a question, or tell white lies (all things that today’s chatbots are increasingly good at). This means the robot must already be programmed with a list of possible evasive responses, and must choose the one that creates the fewest complications.
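"Choosing the response with the fewest complications" can be pictured as a simple cost comparison over a stock of evasive replies. The replies and cost numbers below are purely illustrative assumptions.

```python
# Illustrative sketch: pick the evasive reply with the lowest estimated "complication" cost.
# The replies and cost values are invented placeholders.

EVASIVE_REPLIES = [
    ("change the subject", 0.2),
    ("answer with a question", 0.3),
    ("offer a platitude", 0.4),
    ("tell a small white lie", 0.6),
    ("give the blunt truth", 0.9),
]

def tactful_reply(replies=EVASIVE_REPLIES):
    # Choose the reply whose estimated social cost is lowest.
    return min(replies, key=lambda item: item[1])[0]

print(tactful_reply())  # change the subject
```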

One of the few times that a robot would tell the entire truth would be if asked a direct question by its owner, who understands that the answer might be brutally honest. Perhaps the only other time when the robot will tell the truth is when there is a police investigation and the absolute truth is necessary. Other than that, robots will be able to freely lie or conceal the whole truth to keep the wheels of society functioning.

In other words, robots have to be socialized, just like teenagers.

CAN ROBOTS FEEL PAIN?

Robots, in general, will be assigned the kinds of tasks that are dull, dirty, and dangerous. There is no reason why robots can’t do repetitive or dirty jobs indefinitely, since we wouldn’t program them to feel boredom or disgust. The real problem emerges when robots are faced with dangerous jobs. At that point, we might actually want to program them to feel pain.

We evolved the sense of pain because it helped us survive in a dangerous environment. There is a genetic defect in which children are born without the ability to feel pain. This is called congenital analgesia. At first glance, this may seem to be a blessing, since these children do not cry when they experience injury, but it is actually more of a curse. Children with this affliction have serious problems, such as biting off parts of their tongue, suffering severe skin burns, and cutting themselves, often leading to amputations of their fingers. Pain alerts us to danger, telling us when to move our hand away from the burning stove or to stop running on a twisted ankle.

At some point robots must be programmed to feel pain, or else they will not know when to avoid precarious situations. The first sense of pain they
must have is hunger (i.e., a craving for electrical energy). As their batteries run out, they will get more desperate and urgent, realizing that soon their circuits will shut down, leaving all their work in disarray. The closer they are to running out of power, the more anxious they will become.
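As a sketch of this robotic "hunger," the urgency signal could simply grow as the battery drains. The thresholds and scaling below are assumptions chosen only to illustrate the idea.

```python
# Illustrative sketch: "hunger" as an urgency signal that rises as the battery drains.
# The half-charge threshold and scaling factor are assumptions for illustration only.

def hunger_urgency(battery_fraction):
    """Return an urgency level between 0 (content) and 1 (desperate)."""
    if battery_fraction >= 0.5:
        return 0.0
    # Urgency climbs steeply once the battery drops below half charge.
    return min(1.0, (0.5 - battery_fraction) * 2)

for level in (0.8, 0.4, 0.1, 0.02):
    print(f"battery {level:.0%} -> urgency {hunger_urgency(level):.2f}")
```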

Also, regardless of how strong they are, robots may accidentally pick up an object that is too heavy, which could cause their limbs to break. Or they may suffer overheating by working with molten metal in a steel factory, or by entering a burning building to help firemen. Sensors for temperature and stress would alert them that their design specifications are being exceeded.
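This kind of overload "pain" could be sketched as simple threshold checks against design limits. The sensor names and limit values below are invented, not real robot specifications.

```python
# Illustrative sketch: raise a "pain" signal when sensor readings exceed design limits.
# The sensor names and limit values are assumptions for illustration.

DESIGN_LIMITS = {"joint_load_kg": 50, "core_temp_c": 70}

def pain_signals(readings):
    """Return the list of design limits currently being exceeded."""
    return [name for name, limit in DESIGN_LIMITS.items()
            if readings.get(name, 0) > limit]

print(pain_signals({"joint_load_kg": 65, "core_temp_c": 40}))  # ['joint_load_kg']
```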

But once the sensation of pain is added to their menu of emotions, this immediately raises ethical issues. Many people believe that we should not inflict unnecessary pain on animals, and they may feel the same about robots. This opens the door to robots’ rights. Laws may have to be passed to restrict the amount of pain and danger that a robot is allowed to face. People will not care if a robot performs dull or dirty tasks, but if robots feel pain while performing dangerous ones, people may begin to lobby for laws to protect them. This may even spark a legal conflict, with owners and manufacturers of robots arguing to raise the level of pain that robots can endure, while ethicists argue to lower it.

This, in turn, may set off other ethical debates about other robot rights. Can robots own property? What happens if they accidentally hurt someone? Can they be sued or punished? Who is responsible in a lawsuit? Can a robot own another robot? This discussion raises another sticky question: Should robots be given a sense of ethics?

ETHICAL ROBOTS

At first, the idea of ethical robots seems like a waste of time and effort. However, this question takes on a sense of urgency when we realize that robots will make life-and-death decisions. Since they will be physically strong and have the capability of saving lives, they will have to make split-second ethical choices about whom to save first.

Let’s say there is a catastrophic earthquake and children are trapped in a rapidly crumbling building. How should the robot allocate its energy? Should it try to save the largest number of children? Or the youngest? Or the most vulnerable? If the debris is too heavy, the robot may damage its electronics.
So the robot has to decide yet another ethical question: How does it weigh the number of children it saves versus the amount of damage that it will sustain to its electronics?

Without proper programming, the robot may simply halt, waiting for a human to make the final decision, wasting valuable time. So someone will have to program it ahead of time so that the robot automatically makes the “right” decision.

These ethical decisions will have to be preprogrammed into the computer from the start, since there is no law of mathematics that can put a value on saving a group of children. Within its programming, there has to be a long list of things, ranked in terms of how important they are. This is tedious business. In fact, it sometimes takes a human a lifetime to learn these ethical lessons, but a robot has to learn them rapidly, before it leaves the factory, if it is to safely enter society.
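A crude way to picture such a preprogrammed ranking is a priority table combined with a penalty for the damage the robot expects to sustain. Every weight below is an invented placeholder, exactly the kind of value a lawmaker or manufacturer would have to choose.

```python
# Illustrative sketch: a preprogrammed rescue ranking with a penalty for self-damage.
# All weights are placeholder assumptions; choosing them is the hard ethical problem.

RESCUE_PRIORITY = {"infant": 10, "child": 8, "injured_adult": 6, "adult": 4}
DAMAGE_PENALTY = 5  # cost per unit of expected damage to the robot's electronics

def rescue_value(person, expected_damage):
    return RESCUE_PRIORITY.get(person, 1) - DAMAGE_PENALTY * expected_damage

def choose_rescue(options):
    """options: list of (person, expected_damage) pairs. Return the highest-value rescue."""
    return max(options, key=lambda o: rescue_value(*o))

print(choose_rescue([("child", 0.2), ("adult", 0.0), ("infant", 1.5)]))
```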

Only people can make these rankings, and even then ethical dilemmas sometimes confound us. But this raises further questions: Who will make the decisions? Who decides the order in which robots save human lives?

The question of how decisions will ultimately be made will probably be resolved via a combination of the law and the marketplace. Laws will have to be passed so that there is, at minimum, a ranking of importance of whom to save in an emergency. But beyond that, there are thousands of finer ethical questions. These subtler decisions may be decided by the marketplace and common sense.

If you work for a security firm guarding important people, you will have to tell the robot the precise order in which to save people in different situations, based on considerations such as fulfilling its primary duty while staying within budget.
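Such client-specific instructions might end up looking like nothing more exotic than a configuration block. The field names and values here are hypothetical, invented only to show the shape such a policy might take.

```python
# Illustrative sketch: a per-client protection policy expressed as plain configuration.
# The field names and values are hypothetical placeholders.

protection_policy = {
    "client": "ExampleCorp executive detail",
    "save_order": ["principal", "family_members", "staff", "bystanders"],
    "max_acceptable_robot_damage": 0.4,   # fraction of the robot's replacement cost
    "budget_per_incident_usd": 25_000,
}

def next_to_protect(present_people, policy=protection_policy):
    """Return the highest-priority role currently present, per the policy."""
    for role in policy["save_order"]:
        if role in present_people:
            return role
    return None

print(next_to_protect({"staff", "bystanders"}))  # staff
```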

What happens if a criminal buys a robot and wants it to commit a crime? This raises a question: Should a robot be allowed to defy its owner if it is asked to break the law? We saw from the previous example that robots must be programmed to understand the law and to make ethical decisions. So if a robot decides that it is being asked to break the law, it must be allowed to disobey its master.
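The refusal logic described here could be pictured as a filter that checks each command against a list of legally prohibited actions before executing it. The prohibited actions listed below are, of course, toy placeholders.

```python
# Illustrative sketch: refuse commands that match a preprogrammed list of illegal actions.
# The prohibited actions are toy placeholders, not a real legal catalog.

PROHIBITED_ACTIONS = {"break_into_building", "steal_property", "harm_person"}

def execute_command(action):
    if action in PROHIBITED_ACTIONS:
        return f"REFUSED: '{action}' appears to be illegal."
    return f"Executing '{action}'."

print(execute_command("carry_groceries"))
print(execute_command("steal_property"))
```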

There is also the ethical dilemma posed by robots reflecting the beliefs of their owners, who may have diverging morals and social norms. The “culture wars” that we see in society today will only be magnified when we have
robots that reflect the opinions and beliefs of their owners. In some sense, this conflict is inevitable. Robots are mechanical extensions of the dreams and wishes of their creators, and when robots are sophisticated enough to make moral decisions, they will do so.

The fault lines of society may be stressed when robots begin to exhibit behaviors that challenge our values and goals. Robots owned by youth leaving a noisy, raucous rock concert may conflict with robots owned by elderly residents of a quiet neighborhood. The first set of robots may be programmed to amplify the sounds of the latest bands, while the second set may be programmed to keep noise levels to an absolute minimum. Robots owned by devout, churchgoing fundamentalists may get into arguments with robots owned by atheists. Robots from different nations and cultures may be designed to reflect the mores of their society, which may clash (even for humans, let alone robots).

So how does one program robots to eliminate these conflicts?

You can’t. Robots will simply reflect the biases and prejudices of their creators. Ultimately, the cultural and ethical differences between these robots will have to be settled in the courts. There is no law of physics or science that determines these moral questions, so eventually laws will have to be written to handle these social conflicts. Robots cannot solve the moral dilemmas created by humans. In fact, robots may amplify them.

But if robots can make ethical and legal decisions, can they also feel and understand sensations? If they succeed in saving someone, can they experience joy? Or can they even feel things like the color red? Coldly analyzing the ethics of whom to save is one thing, but understanding and feeling is another. So can robots feel?
