

All automobile manufacturers are concerned about these issues. In addition to addressing actual safety in this modern, litigious society, they worry that even the slightest problem may cause massive lawsuits against them. So, how do they respond? Cautiously, very cautiously.

Driving a vehicle at high speeds over crowded highways is hazardous: there are over 1.2 million deaths and 50 million injuries each year in the world. This is truly a situation where our reliance on a machine exposes all of us to unnecessary risk: the automobile is at once helpful, invaluable to the population of the world, and deadly.

Yes, we could train drivers better, but part of the problem is that driving is inherently dangerous. When problems arise, they do so quickly, with little time to respond. Every driver experiences wavering levels of attention—a natural human condition. Even in the best of cases, driving is a dangerous activity.

If one cannot automate fully, then the automation that is possible must be applied with great care, sometimes not being invoked, sometimes requiring more human participation than is really needed in order to keep the human drivers informed and attentive. Full manual control of automobiles is dangerous. Fully automatic control will be safer. The difficulty lies in the transition toward full automation, when only some things will be automated, when different vehicles will have different capabilities, and when even the automation that is installed will be limited in capability. I fear that while the partial automation of driving will lead to fewer accidents, the accidents that do happen will be greater in magnitude, involve more cars, and exact a higher toll. The joint relationship between machines and their humans must be approached with caution.

 

CHAPTER FIVE
The Role of Automation

Why do we need automation? Many technologists cite three major reasons: to eliminate the dull, the dangerous, and the dirty. It is difficult to argue with this answer, but many things are automated for other reasons—to simplify a complex task, to reduce the work force, to entertain—or simply because it can be done.

Even successful automation always comes at a price, for in the process of taking over one set of tasks, it invariably introduces a new set of issues. Automation often performs its task satisfactorily but increases the need for maintenance. Some automation replaces the need for skilled laborers with the need for caretakers. In general, whenever any task is automated, the impact is felt far beyond that one task: the application of automation is a system issue, changing the way work is done, restructuring jobs, shifting the required tasks from one portion of the population to another, and, in many cases, eliminating the need for some functions while adding the need for others. For some people, automation is helpful; for others, especially those whose jobs have been changed or eliminated, it can be terrible.

The automation of even simple tasks has an impact. Consider the mundane task of making a cup of coffee. I use an automated machine that makes coffee at the push of a button, automatically heating the water, grinding the beans, brewing the coffee, and disposing of the grounds. The result is that I have replaced the mild tedium of making coffee each morning with the more onerous need to maintain my machine. The water and bean containers must be filled, the inner parts of the machine must be disassembled and cleaned periodically, and all areas in contact with liquid must be cleaned both of coffee residue and calcium deposits (then the machine must be cleaned again to remove all vestiges of the cleaning solution used to dissolve the calcium deposits). Why all this effort to minimize the difficulty of a task that isn’t really very difficult in the first place? The answer, in this case, is that the automation allows me to time-shift the demand on my attention: I trade a little bit of work at an inconvenient time, when I have just awakened and am still somewhat sleepy and in a rush, for considerable work later, which I can schedule at my convenience.

The trend toward increasing automation seems unstoppable in terms of both the sheer number of tasks and activities that are becoming automated and the intelligence and autonomy of the machines that are taking over these tasks. Automation is not inevitable, however. Moreover, there is no reason why automation must present us with so many deficiencies and problems. It should be possible to develop technology that truly minimizes the dull, the dangerous, and the dirty, without introducing huge negative side effects.

Smart Things
Smart Homes

It’s late in the evening in Boulder, Colorado, and Mike Mozer is sitting in his living room, reading. After a while he yawns, stretches, then stands up and wanders toward his bedroom. The house, ever alert to his activity, decides that he is going to bed, so it turns off the living room lights and turns on the lights in the entry, the master bedroom, and the master bath. It also turns the heat down. Actually, it is the computer system in his house that continually monitors Mozer’s behavioral patterns and adjusts the lighting, heating, and other aspects of the home to prepare for his anticipated behavior. This is no ordinary program. It operates through what is called a “neural network” designed to mimic the pattern-recognition and learning abilities of human neurons and, thus, of the human brain. Not only does it recognize Mozer’s activity patterns, but it can anticipate his behavior appropriately most of the time. A neural network is a powerful pattern recognizer, and because it examines the sequence of his activities, including the time of day at which they occur, it predicts both what he will do and when. As a result, when Mozer leaves the house to go to work, it turns off the heat and hot water heater in order to save energy, but when its circuits anticipate his return, it turns them back on again so that the house will be comfortable when he enters.
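For readers who want a concrete picture, here is a toy sketch in Python of the kind of prediction involved. It is emphatically not Mozer’s actual neural-network software: it substitutes a simple frequency count for the neural network, and all of the names and activities are invented, but it shows the essential idea of predicting the next action from the time of day and the current activity.

from collections import Counter, defaultdict

class ActivityPredictor:
    def __init__(self):
        # counts[(hour, current_activity)] maps each observed next
        # action to how often it followed in that context.
        self.counts = defaultdict(Counter)

    def observe(self, hour, activity, next_action):
        self.counts[(hour, activity)][next_action] += 1

    def predict(self, hour, activity):
        seen = self.counts[(hour, activity)]
        return seen.most_common(1)[0][0] if seen else None

predictor = ActivityPredictor()
predictor.observe(23, "reading in living room", "go to bedroom")
predictor.observe(23, "reading in living room", "go to bedroom")
predictor.observe(23, "reading in living room", "go to kitchen")
print(predictor.predict(23, "reading in living room"))  # "go to bedroom"

A real system faces the harder problems of deciding when a prediction is reliable enough to act on and what to do when it turns out to be wrong.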

Is this house smart? Intelligent? The designer of this automated system, Mike Mozer, doesn’t think so: he calls it “adaptive.” It is instructive to look at Mozer’s experience as we try to understand just what it means to be intelligent. The house has over seventy-five sensors that measure each room’s temperature, ambient light, sound levels, door and window positions, the weather outside and amount of sunlight, and any movements by inhabitants. Actuators control the heating of the rooms and the hot water, lighting, and ventilation. The system contains more than five miles of cabling. Neural network computer software can learn, so the house is continually adapting its behavior according to Mozer’s preferences. If it selects a setting that is not appropriate, Mozer corrects the setting, and the house then changes its behavior. One journalist described how this happens:

 

Mozer demonstrated the bathroom light, which turned on to a low intensity as he entered. “The system picks the lowest level of the light or heat it thinks it can get away with in order to conserve energy, and I need to complain if I am not satisfied with its decision,” he said. To express his discomfort, he hit a wall switch, causing the system to brighten the light and to “punish itself” so that the next time he enters the room, a higher intensity will be selected.
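A minimal sketch of that complaint-driven adjustment might look like the following in Python. The names and level values are invented for illustration, not taken from the actual system: the controller starts with the most frugal setting it thinks it can get away with, and each complaint pushes its future choices brighter.

LEVELS = [0.2, 0.4, 0.6, 0.8, 1.0]  # fractions of full brightness

class LightController:
    def __init__(self):
        self.index = 0  # start with the most energy-frugal setting

    def on_entry(self):
        # Offer the lowest level it thinks it can get away with.
        return LEVELS[self.index]

    def on_complaint(self):
        # The occupant hit the wall switch: "punish" the current
        # choice so a brighter setting is selected next time.
        self.index = min(self.index + 1, len(LEVELS) - 1)
        return LEVELS[self.index]

lamp = LightController()
print(lamp.on_entry())      # 0.2: the dim, energy-saving guess
print(lamp.on_complaint())  # 0.4: brighter, after one complaint

The actual system is, of course, far more sophisticated, learning a setting for each context; this sketch shows only the direction of the adjustment.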

The house trains its owner as much as the owner trains the house. When working late at night at the university, Mozer would sometimes realize that he had to get home: his house was expecting him, dutifully turning on the heat and hot water, getting ready for his arrival. This raises an interesting question: why can’t he just call his home and tell it that he is going to be late? Similarly, his attempt to discover and fix some faulty hardware led to a system that also could detect when someone dawdled too long in the bathroom. “Long after the hardware problem was resolved,” said Mozer, “we left the broadcast message in the system, because it provided useful feedback to the inhabitants about how their time was being spent.” So, now the house warns inhabitants when they spend too much time in the bathroom? This home sounds like a real nag.

Is this an intelligent house? Here are some more comments by Mozer himself on the limits to the control system's intelligence:

 

The Adaptive House project has inspired much brainstorming about ways to extend the project further, most of which seem entirely misguided. One idea often mentioned is controlling home entertainment systems—stereos, TVs, radios, etc. The problem with selection of video and audio in the home is that the inhabitants' preferences will depend on state of mind, and few cues are directly available from the environment—even using machine vision—that correlate with state of mind. The result is likely to be that the system mispredicts often and annoys the inhabitants more than it supports them. The annoyance is magnified by the fact that when inhabitants seek audio or video entertainment, they generally have an explicit intention to do so. This intention contrasts with, say, temperature regulation in a home, where the inhabitants do not consciously consider the temperature unless it becomes uncomfortable. If inhabitants are aware of their goals, achieving the goal is possible with a simple click of a button, and errors—such as blasting the stereo when one is concentrating on a difficult problem—are all but eliminated. The benefit/cost trade-off falls on the side of manual control.

If only the house could read the mind of its owner. It is this inability to read minds, or, as the scientists prefer to say, to infer a person’s intentions, that defeats these systems. Here the problem goes far beyond the lack of common ground, as anyone who has ever lived with another person knows. There may be much sharing of knowledge and activities, but it is still difficult to know exactly what another person intends to do. In theory, the mythical British butler could anticipate the wants and desires of his master, although my knowledge of how well this succeeds comes from novels and television—not the most reliable sources. Even here, much of the butler’s success comes about because his employers’ lives are well regulated by the pace of social events, so that the schedule dictates which tasks need doing.

Automatic systems that decide whether or not to take some action can, of course, be right or wrong. Failures come in two forms: misses and false alarms. A miss means that the system has failed to detect a situation and, therefore, has failed to perform the desired action. A false alarm means that the system has acted when it shouldn’t have. Think of an automated fire detection system. A miss is a failure to signal a fire when one happens. A false alarm is the signaling of a fire even though none is present. These two forms of error have different costs.

A failure to detect a fire can have disastrous consequences, but false detections can also create problems. If the only action taken by the fire detector is to sound an alarm, a false alarm is mostly just a nuisance, but it also diminishes trust in the system. But what if the false alarm turns on the sprinkler system and notifies the fire department? Here the cost can be enormous, especially if the water damages valuable objects. If a smart home misreads the intentions of its occupants, the costs of misses and false alarms are usually small. If the music system suddenly comes on because the house thinks the resident would like to hear music, it is annoying but not dangerous. If the system diligently turns up the heat every morning, even though the inhabitants are away on vacation, there are no serious consequences. In an automobile, however, if the driver relies on the car to slow down every time it gets too close to the car in front, a miss can be life threatening. And a false alarm, where the car veers because it thinks the driver is wandering out of his lane or brakes because it incorrectly thinks something is in front of it, can be life threatening if nearby vehicles are surprised by the action and fail to respond quickly enough.
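One way to make these differences concrete is to compare expected costs. The back-of-the-envelope Python sketch below uses entirely invented numbers, but it shows why a detector that merely sounds a buzzer can tolerate far more false alarms than one that turns on the sprinklers and calls the fire department.

def expected_cost(p_miss, cost_miss, p_false_alarm, cost_false_alarm):
    # Expected cost of operating the detector: each error type's
    # probability weighted by what that error costs when it occurs.
    return p_miss * cost_miss + p_false_alarm * cost_false_alarm

# Same detector, two different responses to an alarm (invented figures).
buzzer_only = expected_cost(p_miss=0.001, cost_miss=1_000_000,
                            p_false_alarm=0.05, cost_false_alarm=10)
sprinklers  = expected_cost(p_miss=0.001, cost_miss=1_000_000,
                            p_false_alarm=0.05, cost_false_alarm=50_000)
print(buzzer_only, sprinklers)  # 1000.5 versus 3500.0

Because a designer can trade misses for false alarms by adjusting the detector’s sensitivity, the right setting depends on which error is costlier, which is exactly why the automobile is a much harder case than the smoke alarm.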

Whether false alarms are dangerous or simply annoying, they diminish trust. After a few false alarms, the alarm system will be disregarded. Then, if there is a real fire, the inhabitants are apt to ignore the warning as “just another false alarm.” Trust develops over time and is based on experience, along with continual reliable interaction.

The Mozer home system works for its owner because he is also the scientist who built it, so he is more forgiving of problems. Because he is a research scientist and an expert on neural networks, his home serves as a research laboratory. It is a wonderful experiment and would be great fun to visit, but I don't think I would want to live there.

Homes That Make People Smart

In sharp contrast to the fully automated home that tries to do things automatically, a group of researchers at Microsoft Research Cambridge (England) designs homes with devices that augment human intelligence. Consider the problem of coordinating the activities of a home’s inhabitants—say, a family with two working adults and two teenagers. This presents a daunting problem. The technologist’s traditional approach in dealing with multiple agendas is to imagine intelligent calendars. For example, the home could match up the schedules of every house member to determine when meals should be scheduled and who should drive others to and from their activities. Just imagine your home continually communicating with you—emailing, instant messaging, text messaging, or even telephoning—reminding you of your appointments, when you need to be home for dinner, when to pick up other family members, or even when to stop at the market on the way home.
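The matching step itself is simple enough to sketch. Here is a hypothetical Python illustration (the names and schedules are invented, and real calendar systems are far more elaborate) that finds an evening hour when every family member is free for dinner.

def free_hours(busy_hours, evening=range(17, 22)):
    # Hours of the evening (5 p.m. to 9 p.m.) not already booked.
    return {hour for hour in evening if hour not in busy_hours}

family_busy = {
    "parent_1": {17, 18},  # working late
    "parent_2": {20},      # gym class
    "teen_1":   {17},      # soccer practice
    "teen_2":   set(),     # free all evening
}
common = set.intersection(*(free_hours(b) for b in family_busy.values()))
print(sorted(common))  # [19, 21]: candidate dinner hours

The difficulty is not this computation but everything around it: keeping the calendars accurate and deciding when a reminder helps rather than nags.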

