The Design of Future Things
Author: Don Norman
Shoshana Zuboff, a social psychologist at the Harvard Business School, has analyzed the impact of automation on a factory floor. The automatic equipment completely changed the social structure of the workers. On the one hand, it removed the operators from directly experiencing the production process. Whereas before they had felt the machines, smelled the fumes, and heard the sounds so that they could tell through their perceptions just how the procedure was going, now they were located in air-conditioned, sound-deadened control rooms, trying to imagine the state of affairs through dials, meters, and other indicators provided by the instrumentation. Although
this change did speed up the process and increase uniformity, it also isolated the workers from the work and prevented the factory from making use of their years of experience in anticipating and correcting problems.
On the other hand, the use of computerized control equipment empowered the workers. Before, they were given only limited knowledge of the plant's operation and of how their activities affected the performance of the company. Now, the computers helped keep them informed about the entire state of the plant, allowing them to understand the larger context to which their activities contributed. As a result, they could interact with middle and higher management on their own terms, combining their knowledge of shop-floor operations with the information gleaned from their automation. Zuboff coined the term informate to describe the impact of the increased access to information that automation afforded the workers: the workers were informated.
People have many unique capabilities that cannot be replicated in machines, at least not yet. As we introduce automation and intelligence into the machines we use today, we need to be humble and recognize the problems and the potential for failure. We also need to recognize the vast discrepancy between the workings of people and of machines. On the whole, these responsive systems are valuable and helpful. But they can fail when they come up against the fundamental limitations of human-machine interaction, most especially the lack of common ground that was discussed so extensively in chapter 2.
Autonomous, intelligent devices have proven invaluable in situations that are too dangerous for people; occasional failures are still far better than the risk to human life. Similarly, many intelligent devices have taken over the dull, routine tasks of maintaining our infrastructure, continually adjusting operating parameters and checking conditions in situations that are simply too tedious for people.
Augmentative technology has proven its worth. The recommender systems of many internet shopping sites are providing us with sensible suggestions, but because they are optional, they do not disrupt. Their occasional successes suffice to keep us content with their operation. Similarly, the augmentative technologies now being tested in smart homes, some described in this chapter, provide useful aids to everyday problems. Once again, their voluntary, augmentative status makes them palatable.
The future of design clearly lies in the development of smart devices that drive cars for us, make our meals, monitor our health, clean our floors, and tell us what to eat and when to exercise. Despite the vast differences between people and machines, if the task can be well specified, if the environmental conditions are reasonably well controlled, and if the machines and people can limit their interactions to the bare minimum, then intelligent, autonomous systems are valuable. The challenge is to add intelligent devices to our lives in a way that supports our activities, complements our skills, and adds to our pleasure, convenience, and accomplishments, but not to our stress.
The whistling of the kettle and the sizzling of food cooking on the stove are reminders of an older era when everything was visible, everything made sounds, which allowed us to create mental models, conceptual models, of their operations. These models provided us with clues to help us troubleshoot when things did not go as planned, to know what to expect next, and to allow us to experiment.
Mechanical devices tend to be self-explaining. Their moving parts are visible and can be watched or manipulated. They make natural sounds that help us understand what is happening, so that even when we are not watching the machinery, we can often infer its state just from these sounds. Today, however, many of these powerful indicators are hidden from sight and sound, taken over by silent, invisible electronics. As a result, many devices operate silently and efficiently, and aside from the occasional clicking of a hard drive or the noise of a fan, they do not reveal much of their internal operations. We are left to the mercy of the designers for any information about the device's internal workings, about what is happening within it.
Communication, explanation, and understanding: these are the keys to working with intelligent agents, whether they are other people, animals, or machines. Teamwork requires coordination and communication, plus a good sense of what to expect, a good understanding of why things are, or are not, happening. This is true whether the team is composed of people, a skilled rider and horse, a driver and automobile, or a person and automated equipment. With animate beings, the communication is part of our biological heritage. We signal our emotional state through body language, posture, and facial expressions. We use language. Animals use body language and posture, as well as facial expressions. We can read the state of our pets through the way they hold their bodies, their tails, and their ears. A skilled rider can feel the horse's state of tension or relaxation.
Machines, though, are artificially created by people who often assume perfect performance on their part and, moreover, fail to understand the critical importance of a continuing dialogue between cooperating entities. If the machine is working perfectly, they tend to believe, why does anyone have to know what is happening? Why? Let me tell you a story.
I am seated in the fancy auditorium of IBM's Almaden Research Laboratories, situated in the beautiful, rolling hills just south of San Jose, California. The speaker at this conference, a professor of computer science at MIT, let me call him “Prof. M,” is extolling the virtues of his new program. After describing his work, Prof. M proudly starts to demonstrate it. First, he
brings up a web page on the screen. Then, he does some magic with his mouse and keyboard, and after a few clicks and a little typing here and there, a new button appears on the page. “Ordinary people,” explains the professor, “can add new controls to their web pages.” (He never explains why anyone would want to.) “Now, watch as I show you that it works,” he proudly announces. He clicks and we watch. And wait. And watch. Nothing happens.
Prof. M is puzzled. Should he restart the program? Restart the computer? The audience, filled with Silicon Valley's finest technocrats, shouts advice. IBM research scientists scurry back and forth, peering at his computer, getting down on hands and knees to follow the wiring. The seconds stretch into minutes. The audience starts to giggle.
Prof. M was so enamored of his technology that he never considered what would happen if it failed. It hadn't occurred to him to provide feedback for reassurance that things were working, or, in this case, to provide clues when things didn't work. Later on, we discovered that the program was actually working perfectly, but there was no way of knowing this. The problem was that the security controls on IBM's internal network were not letting him gain access to the internet. Without feedback, however, without reassurance about the state of the program, nobody could tell just where the problem lay. The program lacked simple feedback to indicate that the click on the button had been detected, that the program was carrying out several steps of its internal instructions, that it had initiated an internet search, and that it was still waiting for the results to come back from that search.
Without feedback it wasn't possible to create the appropriate conceptual model. Any one of a dozen things could have failed: without evidence, there was no way to know. Prof. M had violated a fundamental design rule: provide continual awareness, without annoyance.
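The kind of stage-by-stage feedback Prof. M's program lacked can be sketched in a few lines of code. This is a hypothetical illustration, not the program from the story; the function and stage names are invented. The point is simply that a long-running task which announces each stage lets an observer localize a stall instead of staring at a silent screen.

```python
# Illustrative sketch: a staged operation that reports its progress,
# so a stalled step is visible rather than silent.
# All names here are invented for illustration.

def run_search(fetch, report):
    """Run a staged operation, calling report(stage) at each step.

    fetch stands in for the network request; it may hang or fail,
    but the stages reported before it tell the observer how far
    the program got.
    """
    report("click detected")
    report("preparing request")
    report("waiting for network")
    result = fetch()  # the step that stalled silently in the story
    report("results received")
    return result

# Collect the reported stages instead of printing them.
stages = []
run_search(fetch=lambda: "ok", report=stages.append)
print(stages)
```

Had `fetch()` hung, the last reported stage would read "waiting for network," pointing directly at the blocked internet access that the audience at Almaden spent long minutes hunting for.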
“I'm at a meeting in Viña del Mar, Chile,” starts an email from a colleague, “at a nice new Sheraton Hotel perched on the seawall. A lot of design effort went into it, including the elevators. A bank of them with up-down buttons at either end. The doors are glass and slide silently open and closed, with no sound to signal arrival or departure. With the typical ambient noise, you can't hear them, and unless standing close to an arriving elevator, can hardly see it move and can't tell when one is open. The only sign that an elevator is present is that the up-down signal light goes out, but you can't see that from the center of the elevator bank either. In my first day here, I missed elevators that came and went three times.”
Feedback provides informative clues about what is happening, clues about what we should do. Without it, many simple operations fail, even one as simple as getting into an elevator. Proper feedback can make the difference between a pleasurable, successful system and one that frustrates and confuses. If the inappropriate use of feedback is frustrating with simple devices such as elevators, what will it be like with the completely automatic, autonomous devices of our future?
When we interact with people, we often form mental models of their internal thoughts, beliefs, and emotional states. We like
to believe we know what they are thinking. Recall how frustrating it can be to interact with people who show no facial expressions and give no verbal responses. Are they even listening? Do they understand? Agree? Disagree? The interaction is strained and unpleasant. Without feedback, we can't operate, whether it is with an elevator, a person, or a smart machine.
Actually, feedback is probably even more essential when we interact with our machines than with other people. We need to know what is happening, what the machine has detected, what its state is, and what actions it is about to take. Even when everything is working smoothly, we need reassurance that this is the case.
This applies to everyday things such as home appliances. How do we know they are working well? Fortunately, many appliances make noises: the hum of the refrigerator, the sounds of the dishwasher, clothes washer, and dryer, and the whir of the fan for home heating and cooling systems all provide useful, reassuring knowledge that the systems are on and operating. The home computer has fans, and the hard drive makes clicking noises when active, once again providing some reassurance. Notice that all these sounds are natural: they were not added artificially to the system by a designer or engineer but are natural side effects of the working of physical devices. This very naturalness is what makes them so effective: differences in operation are often reflected in subtle differences in the sounds, so not only is it possible to tell whether something is operating, but usually one can also tell what operation is being done and whether the sounds are normal or possibly signify problems.
Newer systems have tried to reduce noise for good reason: the background level of noise in our homes and offices is disturbing. Yet, when systems make no sounds at all, it isn't possible to know whether they are working. Just as with the elevators of the opening quotation, sound can be informative. Quiet is good; silence may not be.
If sound is intrusive and annoying, even as a feedback mechanism, why not use lights? One problem is that a light, all by itself, is just as meaningless as the beeps that seem to erupt continually from my appliances. Natural sounds are created by the internal operation of the system, whereas added-on lights and beeps are artificial, signifying whatever arbitrary information the designer thought appropriate. Added-on lights almost always signify only some simple binary state: working or not, trouble or not, plugged in or not. There is no way for a person to know their meaning without recourse to a manual. There is no richness of interpretation, no subtlety: the light or beep means that maybe things are good, or maybe bad, and all too often the person has to guess which.
Every piece of equipment has its own code for beeps, its own code for lights. A small red light visible on an appliance could mean that electric power is being applied, even though the appliance is off. Or it could mean that the unit is turned on, that it is working properly. Then again, red could signal that it is having trouble, and green could mean it is working properly. Some lights blink and flash; some change color. Different devices can use the same signals to indicate quite different things. Feedback is meaningless if it does not precisely convey a message.