The Glass Cage: Automation and Us

The game I was playing, an exquisitely crafted, goofily written open-world shooter called Red Dead Redemption, is set in the early years of the last century, in a mythical southwestern border territory named New Austin. Its plot is pure Peckinpah. When you start the game, you assume the role of a stoic outlaw-turned-rancher named John Marston, whose right cheek is riven by a couple of long, symbolically deep scars. Marston is being blackmailed into tracking down his old criminal associates by federal agents who are holding his wife and young son hostage. To complete the game, you have to guide the gunslinger through various feats of skill and cunning, each a little tougher than the one preceding it.

After a few more tries, I finally did make it over that bridge, grisly cargo in tow. In fact, after many mayhem-filled hours in front of my Xbox-connected flat-screen TV, I managed to get through all of the game’s fifty-odd missions. As my reward, I got to watch myself—John Marston, that is—be gunned down by the very agents who had forced him into the quest. Gruesome ending aside, I came away from the game with a feeling of accomplishment. I had roped mustangs, shot and skinned coyotes, robbed trains, won a small fortune playing poker, fought alongside Mexican revolutionaries, rescued harlots from drunken louts, and, in true Wild Bunch fashion, used a Gatling gun to send an army of thugs to Kingdom Come. I had been tested, and my middle-aged reflexes had risen to the challenge. It may not have been an epic win, but it was a win.

Video games tend to be loathed by people who have never played them. That’s understandable, given the gore involved, but it’s a shame. In addition to their considerable ingenuity and occasional beauty, the best games provide a model for the design of software. They show how applications can encourage the development of skills rather than their atrophy. To master a video game, a player has to struggle through challenges of increasing difficulty, always pushing the limits of his talent. Every mission has a goal, there are rewards for doing well, and the feedback (an eruption of blood, perhaps) is immediate and often visceral. Games promote a state of flow, inspiring players to repeat tricky maneuvers until they become second nature. The skill a gamer learns may be trivial—how to manipulate a plastic controller to drive an imaginary wagon over an imaginary bridge, say—but he’ll learn it thoroughly, and he’ll be able to exercise it again in the next mission or the next game. He’ll become an expert, and he’ll have a blast along the way.*

When it comes to the software we use in our personal lives, video games are an exception. Most popular apps, gadgets, and online services are built for convenience, or, as their makers say, “usability.” Requiring only a few taps, swipes, or clicks, the programs can be mastered with little study or practice. Like the automated systems used in industry and commerce, they’ve been carefully designed to shift the burden of thought from people to computers. Even the high-end programs used by musicians, record producers, filmmakers, and photographers place an ever stronger emphasis on ease of use. Complex audio and visual effects, which once demanded expert know-how, can be achieved by pushing a button or dragging a slider. The underlying concepts need not be understood, as they’ve been incorporated into software routines. This has the very real benefit of making the software useful to a broader group of people—those who want to get the effects without the effort. But the cost of accommodating the dilettante is a demeaning of expertise.

Peter Merholz, a respected software-design consultant, counsels programmers to seek “frictionlessness” and “simplicity” in their products. Successful devices and applications, he says, hide their technical complexity behind user-friendly interfaces. They minimize the cognitive load they place on users: “Simple things don’t require a lot of thought. Choices are eliminated, recall is not required.”1 That’s a recipe for creating the kinds of applications that, as Christof van Nimwegen’s Cannibals and Missionaries experiment demonstrated, bypass the mental processes of learning, skill building, and memorization. The tools demand little of us and, cognitively speaking, give little to us.

What Merholz calls the “it just works” design philosophy has a lot going for it. Anyone who has struggled to set the alarm on a digital clock or change the settings on a WiFi router or figure out Microsoft Word’s toolbars knows the value of simplicity. Needlessly complicated products waste time without much compensation. It’s true we don’t need to be experts at everything, but as software writers take to scripting processes of intellectual inquiry and social attachment, frictionlessness becomes a problematic ideal. It can sap us not only of know-how but of our sense that know-how is something important and worth cultivating. Think of the algorithms for reviewing and correcting spelling that are built into virtually every writing and messaging application these days. Spell checkers once served as tutors. They’d highlight possible errors, calling your attention to them and, in the process, giving you a little spelling lesson. You learned as you used them. Now, the tools incorporate autocorrect functions. They instantly and surreptitiously clean up your mistakes, without alerting you to them. There’s no feedback, no “friction.” You see nothing and learn nothing.
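
The gap between the two approaches is small in code and large in consequence. The sketch below, a toy in Python, puts them side by side; the word list, the closest_known_word helper, and the function names are all invented for illustration, not drawn from any real spell checker.

    # A toy illustration of the two spell-checking philosophies described above.
    # The word list, helper, and example words are hypothetical.
    KNOWN_WORDS = {"separate", "definitely", "restaurant", "received"}

    def closest_known_word(word):
        """Return the known word closest to `word` by edit distance."""
        def distance(a, b):
            # Plain recursive Levenshtein distance; fine for short words.
            if not a:
                return len(b)
            if not b:
                return len(a)
            if a[0] == b[0]:
                return distance(a[1:], b[1:])
            return 1 + min(distance(a[1:], b), distance(a, b[1:]), distance(a[1:], b[1:]))
        return min(KNOWN_WORDS, key=lambda w: distance(word, w))

    def tutor_style_check(word):
        """Older approach: flag the error, suggest a fix, leave the text alone."""
        if word not in KNOWN_WORDS:
            print(f"Possible misspelling: '{word}'. Did you mean '{closest_known_word(word)}'?")
        return word

    def autocorrect(word):
        """Newer approach: silently substitute the best guess, with no feedback."""
        return word if word in KNOWN_WORDS else closest_known_word(word)

    print(tutor_style_check("recieved"))  # flags the mistake, returns 'recieved'
    print(autocorrect("recieved"))        # quietly returns 'received'

The tutor-style function leaves the mistake in place and points it out; the autocorrect function repairs it and says nothing. The code is nearly identical. The learning is not.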

Or think of Google’s search engine. In its original form, it presented you with nothing but an empty text box. The interface was a model of simplicity, but the service still required you to think about your query, to consciously compose and refine a set of keywords to get the best results. That’s no longer necessary. In 2008, the company introduced Google Suggest, an autocomplete routine that uses prediction algorithms to anticipate what you’re looking for. Now, as soon as you type a letter into the search box, Google offers a set of suggestions for how to phrase your query. With each succeeding letter, a new set of suggestions pops up. Underlying the company’s hyperactive solicitude is a dogged, almost monomaniacal pursuit of efficiency. Taking the misanthropic view of automation, Google has come to see human cognition as creaky and inexact, a cumbersome biological process better handled by a computer. “I envision some years from now that the majority of search queries will be answered without you actually asking,” says Ray Kurzweil, the inventor and futurist who in 2012 was appointed Google’s director of engineering. The company will “just know this is something that you’re going to want to see.”2 The ultimate goal is to fully automate the act of searching, to take human volition out of the picture.
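
For readers curious about the mechanics, a caricature of the idea can be written in a few lines of Python: rank the logged queries that begin with whatever has been typed so far and surface the most popular ones. The query log and the suggest function below are invented; Google’s real system draws on personal history, location, and far more, but the basic move is the same.

    # Toy autocomplete: suggest the most frequent logged queries that start
    # with whatever the user has typed so far. The query log is invented.
    QUERY_LOG = {
        "weather tomorrow": 9200,
        "weather radar": 5400,
        "west elm": 3100,
        "red dead redemption": 2800,
        "recipes for dinner": 2600,
    }

    def suggest(prefix, limit=3):
        """Return up to `limit` logged queries beginning with `prefix`, most popular first."""
        matches = [q for q in QUERY_LOG if q.startswith(prefix.lower())]
        return sorted(matches, key=lambda q: QUERY_LOG[q], reverse=True)[:limit]

    print(suggest("w"))    # ['weather tomorrow', 'weather radar', 'west elm']
    print(suggest("we"))   # the same three, still in popularity order
    print(suggest("wea"))  # ['weather tomorrow', 'weather radar']

Each keystroke narrows the predictions, which is why the engine can so often finish a thought before the searcher does.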

Social networks like Facebook seem impelled by a similar aspiration. Through the statistical “discovery” of potential friends, the provision of “Like” buttons and other clickable tokens of affection, and the automated management of many of the time-consuming aspects of personal relations, they seek to streamline the messy process of affiliation. Facebook’s founder, Mark Zuckerberg, celebrates all of this as “frictionless sharing”—the removal of conscious effort from socializing. But there’s something repugnant about applying the bureaucratic ideals of speed, productivity, and standardization to our relations with others. The most meaningful bonds aren’t forged through transactions in a marketplace or other routinized exchanges of data. People aren’t nodes on a network grid. The bonds require trust and courtesy and sacrifice, all of which, at least to a technocrat’s mind, are sources of inefficiency and inconvenience. Removing the friction from social attachments doesn’t strengthen them; it weakens them. It makes them more like the attachments between consumers and products—easily formed and just as easily broken.

Like meddlesome parents who never let their kids do anything on their own, Google, Facebook, and other makers of personal software end up demeaning and diminishing qualities of character that, at least in the past, have been seen as essential to a full and vigorous life: ingenuity, curiosity, independence, perseverance, daring. It may be that in the future we’ll only experience such virtues vicariously, through the exploits of action figures like John Marston in the fantasy worlds we enter through screens.

 

* In suggesting video games as a model for programmers, I’m not endorsing the voguish software-design practice that goes by the ugly name “gamification.” That’s when an app or a website uses a game-like reward system to motivate or manipulate people into repeating some prescribed activity. Building on the operant-conditioning experiments of the psychologist B. F. Skinner, gamification exploits the flow state’s dark side. Seeking to sustain the pleasures and rewards of flow, people can become obsessive in their use of the software. Computerized slot machines, to take one notorious example, are carefully designed to promote an addictive form of flow in their players, as Natasha Dow Schüll describes in her chilling book Addiction by Design: Machine Gambling in Las Vegas (Princeton: Princeton University Press, 2012). An experience that is normally “life affirming, restorative, and enriching,” she writes, becomes for gamblers “depleting, entrapping, and associated with a loss of autonomy.” Even when used for ostensibly benign purposes, such as dieting, gamification wields a cynical power. Far from being an antidote to technology-centered design, it takes the practice to an extreme. It seeks to automate human will.

YOUR INNER DRONE

It’s a cold, misty Friday night in mid-December and you’re driving home from your office holiday party. Actually, you’re being driven home. You recently bought your first autonomous car—a Google-programmed, Mercedes-built eSmart electric sedan—and the software is at the wheel. You can see from the glare of your self-adjusting LED headlights that the street is icy in spots, and you know, thanks to the continuously updated dashboard display, that the car is adjusting its speed and traction settings accordingly. All’s going smoothly. You relax and let your mind drift back to the evening’s stilted festivities. But as you pass through a densely wooded stretch of road, just a few hundred yards from your driveway, an animal darts into the street and freezes, directly in the path of the car. It’s your neighbor’s beagle, you realize—the one that’s always getting loose.

What does your robot driver do? Does it slam on the brakes, in hopes of saving the dog but at the risk of sending the car into an uncontrolled skid? Or does it keep its virtual foot off the brake, sacrificing the beagle to ensure that you and your vehicle stay out of harm’s way? How does it sort through and weigh the variables and probabilities to arrive at a split-second decision? If its algorithms calculate that hitting the brakes would give the dog a 53 percent chance of survival but would entail an 18 percent chance of damaging the car and a 4 percent chance of causing injury to you, does it conclude that trying to save the animal would be the right thing to do? How does the software, working on its own, translate a set of numbers into a decision that has both practical and moral consequences?
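
To make the arithmetic concrete, here is one hedged guess, in Python, at how such a weighing might be reduced to code. The probabilities are the ones imagined above; the costs assigned to each outcome are entirely invented, and choosing them is precisely where the moral judgment hides. No carmaker has published logic like this; it is a thought experiment, not a description of any real system.

    # A purely hypothetical sketch of how braking-or-not might be reduced to arithmetic.
    # The probabilities are the ones imagined in the text; the "costs" assigned to each
    # outcome are invented, and they are exactly where the moral judgment hides.
    OUTCOME_COSTS = {
        "dog_killed": 5_000,
        "car_damaged": 10_000,
        "passenger_injured": 250_000,
    }

    def expected_cost(probabilities):
        """Sum each outcome's cost weighted by its estimated probability."""
        return sum(OUTCOME_COSTS[outcome] * p for outcome, p in probabilities.items())

    brake = {                      # braking: 53 percent chance the dog survives,
        "dog_killed": 0.47,        # but some risk to the car and to you
        "car_damaged": 0.18,
        "passenger_injured": 0.04,
    }
    hold_course = {                # not braking all but guarantees the dog is hit
        "dog_killed": 0.99,
        "car_damaged": 0.0,
        "passenger_injured": 0.0,
    }

    decision = "brake" if expected_cost(brake) < expected_cost(hold_course) else "hold course"
    print(expected_cost(brake), expected_cost(hold_course), decision)

With these made-up weights, the math says to spare the car and sacrifice the beagle: braking carries an expected cost of 14,150 against 4,950 for holding course. Raise the number assigned to the dog’s life and the answer flips. The computation looks like engineering, but it is only as moral as the values someone chose to feed it.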

What if the animal in the road isn’t your neighbor’s pet but your own? What, for that matter, if it isn’t a dog but a child? Imagine you’re on your morning commute, scrolling through your overnight emails as your self-driving car crosses a bridge, its speed precisely synced to the forty-mile-per-hour limit. A group of schoolchildren is also heading over the bridge, on the pedestrian walkway that runs alongside your lane. The kids, watched by adults, seem orderly and well behaved. There’s no sign of trouble, but your car slows slightly, its computer preferring to err on the side of safety. Suddenly, there’s a tussle, and a little boy is pushed into the road. Busily tapping out a message on your smartphone, you’re oblivious to what’s happening. Your car has to make the decision: either it swerves out of its lane and goes off the opposite side of the bridge, possibly killing you, or it hits the child. What does the software instruct the steering wheel to do? Would the program make a different choice if it knew that one of your own children was riding with you, strapped into a sensor-equipped car seat in the back? What if there was an oncoming vehicle in the other lane? What if that vehicle was a school bus? Isaac Asimov’s first law of robot ethics—“a robot may not injure a human being, or, through inaction, allow a human being to come to harm”1—sounds reasonable and reassuring, but it assumes a world far simpler than our own.

The arrival of autonomous vehicles, says Gary Marcus, the NYU psychology professor, would do more than “signal the end of one more human niche.” It would mark the start of a new era in which machines will have to have “ethical systems.”2 Some would argue that we’re already there. In small but ominous ways, we have started handing off moral decisions to computers. Consider Roomba, the much-publicized robotic vacuum cleaner. Roomba makes no distinction between a dust bunny and an insect. It gobbles both, indiscriminately. If a cricket crosses its path, the cricket gets sucked to its death. A lot of people, when vacuuming, will also run over the cricket. They place no value on a bug’s life, at least not when the bug is an intruder in their home. But other people will stop what they’re doing, pick up the cricket, carry it to the door, and set it loose. (Followers of Jainism, the ancient Indian religion, consider it a sin to harm any living thing; they take great care not to kill or hurt insects.) When we set Roomba loose on a carpet, we cede to it the power to make moral choices on our behalf. Robotic lawn mowers, like Lawn-Bott and Automower, routinely deal death to higher forms of life, including reptiles, amphibians, and small mammals. Most people, when they see a toad or a field mouse ahead of them as they cut their grass, will make a conscious decision to spare the animal, and if they should run it over by accident, they’ll feel bad about it. A robotic lawn mower kills without compunction.

Up to now, discussions about the morals of robots and other machines have been largely theoretical, the stuff of science-fiction stories or thought experiments in philosophy classes. Ethical considerations have often influenced the design of tools—guns have safeties, motors have governors, search engines have filters—but machines haven’t been required to have consciences. They haven’t had to adjust their own operation in real time to account for the ethical vagaries of a situation. Whenever questions about the moral use of a technology arose in the past, people would step in to sort things out. That won’t always be feasible in the future. As robots and computers become more adept at sensing the world and acting autonomously in it, they’ll inevitably face situations in which there’s no one right choice. They’ll have to make vexing decisions on their own. It’s impossible to automate complex human activities without also automating moral choices.
