Emotional Design

Donald A. Norman
Many people in the robotics and computer research community believe that the way to display emotions is to have a robot decide whether it is happy or sad, angry or upset, and then display the appropriate face, usually an exaggerated parody of a person in those states. I argue strongly against this approach. It is fake, and, moreover, it looks fake. This is not how people operate. We don't decide that we are happy and then put on a happy face, at least not normally. That is what we do when we are trying to fool someone. But think about all those professionals who are forced to smile no matter what the circumstances: they fool no one—they look exactly as if they are forcing a smile, as indeed they are.
The way humans show facial expression is by automatic innervation of the large number of muscles involved in controlling the face and body. Positive affect leads to relaxation of some muscle groups, automatic pulling up of many facial muscles (hence the smile, raised eyebrows and cheeks, etc.), and a tendency to open up and draw closer to the positive event or thing. Negative affect has the opposite impact, causing withdrawal, a tendency to push away. Some muscles are tensed, and some of the facial muscles pull downward (hence the frown). Most affective states are complex mixtures of positive and negative valence, at differing levels of arousal, with some residue of the immediately previous states. The resulting expressions are rich and informative. And real.
Fake emotions look fake: we are very good at detecting false attempts to manipulate us. Thus, many of the computer systems we interact with—the ones with cute, smiling helpers and artificially sweet voices and expressions—tend to be more irritating than useful. “How do I turn this off?” is a question often asked of me, and I have become adept at disabling these helpers, both on my own computers and on those of others who seek to be released from the irritation.
I have argued that machines should indeed both have and display emotions, the better for us to interact with them. This is precisely why the emotions need to appear as natural and ordinary as human emotions. They must be real, a direct reflection of the internal states and processing of a robot. We need to know when a robot is confident or confused, secure or worried, understanding our queries or not, working on our request or ignoring us. If the facial and body expressions reflect the underlying processing, then the emotional displays will seem genuine precisely because they are real. Then we can interpret their state, they can interpret ours, and the communication and interaction will flow ever more smoothly.
I am not the only person to have reached this conclusion. MIT Professor Rosalind Picard once said, talking about whether robots should have emotions, “I wasn't sure they had to have emotions until I was writing up a paper on how they would respond intelligently to our emotions without having their own. In the course of writing that paper, I realized it would be a heck of a lot easier if we just gave them emotions.”
Once robots have emotions, then they need to be able to display them in a way that people can interpret—that is, as body language and facial expressions similar to human ones. Thus, the robot's face and body should have internal actuators that act and react like human muscles according to the internal states of the robot. People's faces are richly endowed with muscle groups in chin, lips, nostrils, eyebrows, forehead, cheeks, and so on. This complex of muscles makes for a sophisticated signaling system, and if robots were created in a similar way, the features of the face would naturally smile when things are going well and frown when difficulties arise. For this purpose, robot designers need to study and understand the complex workings of human expressions, with their very rich set of muscles and ligaments tightly intertwined with the affective system.
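If expression is computed directly from internal state, even a toy model makes the point. Here is a minimal sketch in Python, assuming a hypothetical two-dimensional affective state (valence and arousal) and invented actuator names and mappings; nothing in it is chosen from a menu of canned faces.

```python
# A minimal, illustrative sketch: expressions driven directly by internal
# affective state rather than selected as prebuilt "faces." The actuator
# names and linear mappings below are hypothetical.

from dataclasses import dataclass

@dataclass
class AffectiveState:
    valence: float  # -1.0 (strongly negative) .. +1.0 (strongly positive)
    arousal: float  #  0.0 (calm) .. 1.0 (highly aroused)

def drive_face(state: AffectiveState) -> dict[str, float]:
    """Map internal affect onto facial actuator targets in 0.0-1.0.

    Positive valence pulls the mouth corners, cheeks, and brows up;
    negative valence furrows the brow and pulls the mouth down. Arousal
    scales intensity, so mixed states yield blended, graded expressions.
    """
    intensity = 0.3 + 0.7 * state.arousal
    return {
        "mouth_corners": 0.5 + 0.5 * state.valence * intensity,
        "cheek_raise":   max(0.0, state.valence) * intensity,
        "brow_raise":    max(0.0, state.valence) * 0.6 * intensity,
        "brow_furrow":   max(0.0, -state.valence) * intensity,
        "eye_openness":  0.5 + 0.5 * state.arousal,
    }

# A confused robot (mildly negative valence, moderate arousal) frowns
# without ever "deciding" to frown: the display simply tracks the state.
print(drive_face(AffectiveState(valence=-0.4, arousal=0.5)))
```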
Displaying full facial emotions is actually very difficult. Figure 6.4 shows Leonardo, Professor Cynthia Breazeal's robot at the MIT Media Laboratory, designed to control a vast array of facial features, neck, body, and arm movements, all the better to interact socially and emotionally with us. There is a lot going on inside our bodies, and much the same complexity is required within the faces of robots.
But what of the underlying emotional states? What should these be? As I've discussed, at the least, the robot should be cautious of heights, wary of hot objects, and sensitive to situations that might lead to hurt or injury. Fear, anxiety, pain, and unhappiness might all be appropriate states for a robot. Similarly, it should have positive states, including pleasure, satisfaction, gratitude, happiness, and pride, which would enable it to learn from its actions, to repeat the positive ones, and to improve where possible.
FIGURE 6.4
The complexity of robot facial musculature.
MIT Professor Cynthia Breazeal with her robot Leonardo.
(Photograph by author.)
Surprise is probably essential. When what happens is not what is expected, the surprised robot should interpret this as a warning. If a room unexpectedly gets dark, or maybe the robot bumps into something it didn't expect, a prudent response is to stop all movement and figure out why. Surprise means that a situation is not as anticipated, and that planned or current behavior is probably no longer appropriate—hence, the need to stop and reassess.
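Surprise, in this sense, is just an expectation violation. A minimal sketch, with hypothetical sensor names and an invented mismatch threshold: the robot compares what it predicted its sensors would read against what they actually read, and a large enough divergence halts current behavior.

```python
# A sketch of surprise as prediction error, assuming the robot keeps a
# simple model of what its sensors should read next. Sensor names and
# the threshold are hypothetical.

def surprised(expected: dict, observed: dict, threshold: float = 0.3) -> bool:
    """True when any observation diverges too far from its expectation."""
    return any(abs(observed[k] - expected[k]) > threshold for k in expected)

expected = {"ambient_light": 0.8, "forward_range_m": 0.5}
observed = {"ambient_light": 0.1, "forward_range_m": 0.5}  # lights went out

if surprised(expected, observed):
    # Planned behavior may no longer be appropriate: stop and reassess.
    print("Surprise! Halting all movement and re-planning.")
```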
Some states, such as fatigue, pain, or hunger, are simpler, for they do not require expectations or predictions, but rather simple monitoring of internal sensors. (Fatigue and hunger are technically not affective states, but they can be treated as if they were.) In humans, sensors of physical states signal fatigue, hunger, or pain. Actually, in people, pain is a surprisingly complex system, still not well understood. There are millions of pain receptors, plus a wide variety of brain centers involved in interpreting the signals, sometimes enhancing sensitivity, sometimes suppressing it. Pain serves as a valuable warning system, preventing us from damaging ourselves and, if we are injured, acting as a reminder not to stress the damaged parts further. Eventually it might be useful for robots to feel pain when motors or joints are strained. This would lead robots to limit their activities automatically, and thus protect themselves against further damage.
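Monitoring of that kind is the easiest part to build. As an illustration, here is a sketch with hypothetical motor names and a made-up safe-current limit: strain above the limit registers as "pain," and the robot backs off.

```python
# "Pain" as simple monitoring of internal sensors: no expectations or
# predictions, just thresholds. Motor names and the limit are hypothetical.

SAFE_CURRENT_AMPS = 2.0

def motor_pain(currents: dict[str, float]) -> dict[str, float]:
    """Return a pain level per motor; 0.0 while within the safe limit."""
    return {name: max(0.0, amps / SAFE_CURRENT_AMPS - 1.0)
            for name, amps in currents.items()}

readings = {"left_shoulder": 1.4, "right_elbow": 3.1}  # amps drawn now
for motor, pain in motor_pain(readings).items():
    if pain > 0:
        # A strained joint "hurts": reduce torque to prevent further
        # damage, and keep favoring the joint until the strain subsides.
        print(f"{motor}: pain {pain:.2f}, limiting activity")
```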
Frustration would be a useful affect, preventing a servant robot from getting stuck on one task to the neglect of its other duties. Here is how it would work. I ask the servant robot to bring me a cup of coffee. Off it goes to the kitchen, only to have the coffee robot explain that it can't provide any because it lacks clean cups. The coffeemaker might then ask the pantry robot for more cups, but suppose that it, too, didn't have any. The pantry would have to pass the request on to the dishwasher robot. And now suppose that the dishwasher didn't have any dirty cups it could wash. The dishwasher would ask the servant robot to search for dirty cups so that it could wash them and give them to the pantry, which would feed them to the coffeemaker, which in turn would give the coffee to the servant robot. Alas, the servant would have to decline the dishwasher's request to wander about the house: it is still busy at its main task—waiting for coffee.
This situation is called “deadlock.” In this case, nothing can be done, because each machine is waiting for the next, and the final machine is waiting for the first. This particular problem could be solved by giving the robots more and more intelligence, so that they learn how to handle each new difficulty, but problems always arise faster than designers can anticipate them. These deadlock situations are difficult to eliminate because each one arises from a different set of circumstances. Frustration provides a general solution.
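The circular wait is easy to see once each robot's “waiting on” relation is written down explicitly. A small sketch (the robot names come from the story above; the cycle check is the standard follow-the-chain test):

```python
# Each robot blocks on the next, and the last blocks on the first, so no
# amount of waiting resolves anything. Detecting the cycle shows why.

WAITING_ON = {
    "servant":     "coffeemaker",  # wants coffee poured
    "coffeemaker": "pantry",       # wants clean cups
    "pantry":      "dishwasher",   # wants cups washed
    "dishwasher":  "servant",      # wants dirty cups collected
}

def find_deadlock(start: str) -> list[str] | None:
    """Follow the wait-for chain; revisiting a robot means a cycle."""
    seen, current = [], start
    while current not in seen:
        seen.append(current)
        current = WAITING_ON.get(current)
        if current is None:
            return None  # the chain ends: someone can make progress
    return seen

print(find_deadlock("servant"))
# ['servant', 'coffeemaker', 'pantry', 'dishwasher'] -- a complete cycle
```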
Frustration is a useful affect for both humans and machines, for when things reach that point, it is time to quit and do something else.
The servant robot should get frustrated waiting for the coffee, so it should temporarily give up. As soon as the servant robot gives up the quest for coffee, it is free to attend to the dishwasher's request and go off to find the dirty coffee cups. This would automatically resolve the deadlock: the servant robot would find some dirty cups and deliver them to the dishwasher, which would eventually let the coffeemaker make the coffee and let me get my cup, although with some delay.
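In computational terms, frustration behaves like a level that rises while a task stays blocked, with a give-up threshold. A minimal sketch, with an invented class name and a hypothetical patience value:

```python
# Frustration as a general escape hatch: a level that grows while a task
# stays blocked, and a threshold at which the robot shelves the task and
# services other requests instead. The patience value is hypothetical.

import time

class Task:
    def __init__(self, name: str, patience_s: float):
        self.name = name
        self.patience_s = patience_s
        self.started = time.monotonic()

    def frustration(self) -> float:
        """Grows from 0.0 toward 1.0 the longer the task stays blocked."""
        return min(1.0, (time.monotonic() - self.started) / self.patience_s)

task = Task("fetch coffee", patience_s=5.0)
while task.frustration() < 1.0:
    time.sleep(1.0)  # still blocked, waiting on the coffeemaker

# Give up *temporarily*: requeue the coffee errand and take the next
# pending request (the dishwasher's plea for dirty cups), breaking the cycle.
print(f"Shelving '{task.name}'; collecting dirty cups instead.")
```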
Could the servant robot learn from this experience? It should add to its list of activities the periodic collection of dirty dishes, so that the dishwasher/pantry would never run out again. This is where some pride would come in handy. Without pride, the robot doesn't care: it has no incentive to learn to do things better. Ideally, the robot would take pride in avoiding difficulties, in never getting stuck at the same problem more than once. This attitude requires that robots have positive emotions, emotions that make them feel good about themselves, that cause them to get better and better at their jobs, to improve, perhaps even to volunteer to do new tasks, to learn new ways of doing things. Pride in doing a good job, in pleasing their owners.
Machines That Sense Emotion
The extent to which emotional upsets can interfere with mental life is no news to teachers. Students who are anxious, angry, or depressed don't learn; people who are caught in these states do not take in information efficiently or deal with it well.
—Daniel Goleman, Emotional Intelligence
Suppose machines could sense the emotions of people. What if they were as sensitive to the moods of their users as a good therapist might be? What if an electronic, computer-controlled educational system could sense when the learner was doing well, was frustrated, or was proceeding appropriately? Or what if the home appliances and robots of the future could change their operations according to the moods of their owners? What then?
FIGURE 6.5
MIT's Affective Computing program.
The diagram indicates the complexity of the human affective system and the challenges involved in monitoring affect properly. From the work of Prof. Rosalind Picard of MIT.
(Drawing courtesy of Roz Picard and Jonathan Klein.)
Professor Rosalind Picard at the MIT Media Laboratory leads a research effort entitled “Affective Computing,” an attempt to develop machines that can sense the emotions of the people with whom they are interacting, and then respond accordingly. Her research group has made considerable progress in developing measuring devices to sense fear and anxiety, unhappiness and distress. And, of course, satisfaction and happiness. Figure 6.5 is taken from their web site and demonstrates the variety of issues that must be addressed.
How are someone's emotions sensed? The body displays its emotional state in a variety of ways. There are, of course, facial expressions and body language. Can people control their expressions? Well, yes, but the visceral level works automatically, and although the behavioral and reflective levels can try to inhibit visceral reactions, complete suppression does not appear to be possible. Even the most controlled person, the so-called poker face who keeps a neutral display of emotional responses no matter what the situation, still has micro-expressions—short, fleeting expressions that can be detected by trained observers.
In addition to the responses of one's musculature, there are many physiological responses. For example, although the size of the eye's pupil is affected by light intensity, it is also an indicator of emotional arousal. Become interested or emotionally aroused, and the pupil widens. Work hard on a problem, and it widens. These responses are involuntary, so it is difficult—probably impossible—for a person to control them. One reason professional gamblers sometimes wear tinted eyeglasses even in dark rooms is to prevent their opponents from detecting changes in the size of their pupils.
Heart rate, blood pressure, breathing rate, and sweating are common measures used to derive affective state. Even amounts of sweating so small that the person is unaware of them can trigger a change in the skin's electrical conductivity. All of these measures can readily be detected by the appropriate electronics.
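As an illustration only, here is a crude sketch of how such channels might be folded into a single arousal estimate. The baselines, scales, and the simple averaging are invented for the example and bear no relation to the actual methods of Picard's group.

```python
# A deliberately crude arousal estimate from the physiological channels
# listed above. All baselines, scales, and the averaging are hypothetical.

BASELINE = {"heart_rate": 70.0, "breath_rate": 14.0, "skin_conductance": 2.0}
SCALE    = {"heart_rate": 30.0, "breath_rate": 8.0,  "skin_conductance": 3.0}

def arousal(sample: dict[str, float]) -> float:
    """Average each channel's normalized elevation above its baseline."""
    elevations = [max(0.0, (sample[ch] - BASELINE[ch]) / SCALE[ch])
                  for ch in BASELINE]
    return min(1.0, sum(elevations) / len(elevations))

# An anxious user: elevated heart rate, faster breathing, sweatier skin.
sample = {"heart_rate": 95.0, "breath_rate": 20.0, "skin_conductance": 4.5}
print(f"estimated arousal = {arousal(sample):.2f}")  # ~0.81
```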
