Autonomous automobiles have a ways to go before they start chauffeuring us to work or ferrying our kids to soccer games. Although Google has said it expects commercial versions of its car to be on sale by the end of the decade, that’s probably wishful thinking. The vehicle’s sensor systems remain prohibitively expensive, with the roof-mounted laser apparatus alone going for eighty thousand dollars. Many technical challenges remain to be met, such as navigating snowy or leaf-covered roads, dealing with unexpected detours, and interpreting the hand signals of traffic cops and road workers. Even the most powerful computers still have a hard time distinguishing a bit of harmless road debris (a flattened cardboard box, say) from a dangerous obstacle (a nail-studded chunk of plywood). Most daunting of all are the many legal, cultural, and ethical hurdles a driverless car faces. Where, for instance, will culpability and liability reside should a computer-driven automobile cause an accident that kills or injures someone? With the car’s owner? With the manufacturer that installed the self-driving system? With the programmers who wrote the software? Until such thorny questions get sorted out, fully automated cars are unlikely to grace dealer showrooms.

Progress will sprint forward nonetheless. Much of the Google test cars’ hardware and software will come to be incorporated into future generations of cars and trucks. Since the company went public with its autonomous vehicle program, most of the world’s major carmakers have let it be known that they have similar efforts under way. The goal, for the time being, is not so much to create an immaculate robot-on-wheels as to continue to invent and refine automated features that enhance safety and convenience in ways that get people to buy new cars. Since I first turned the key in my Subaru’s ignition, the automation of driving has already come a long way. Today’s automobiles are stuffed with electronic gadgetry. Microchips and sensors govern the workings of the cruise control, the antilock brakes, the traction and stability mechanisms, and, in higher-end models, the variable-speed transmission, parking-assist system, collision-avoidance system, adaptive headlights, and dashboard displays. Software already provides a buffer between us and the road. We’re not so much controlling our cars as sending electronic inputs to the computers that control them.

In coming years, we’ll see responsibility for many more aspects of driving shift from people to software. Luxury-car makers like Infiniti, Mercedes, and Volvo are rolling out models that combine radar-assisted adaptive cruise control, which works even in stop-and-go traffic, with computerized steering systems that keep a car centered in its lane and brakes that slam themselves on in emergencies. Other manufacturers are rushing to introduce even more advanced controls. Tesla Motors, the electric car pioneer, is developing an automotive autopilot that “should be able to [handle] 90 percent of miles driven,” according to the company’s ambitious chief executive, Elon Musk.[3]

The arrival of Google’s self-driving car shakes up more than our conception of driving. It forces us to change our thinking about what computers and robots can and can’t do. Up until that fateful October day, it was taken for granted that many important skills lay beyond the reach of automation. Computers could do a lot of things, but they couldn’t do everything. In an influential 2004 book, The New Division of Labor: How Computers Are Creating the Next Job Market, economists Frank Levy and Richard Murnane argued, convincingly, that there were practical limits to the ability of software programmers to replicate human talents, particularly those involving sensory perception, pattern recognition, and conceptual knowledge. They pointed specifically to the example of driving a car on the open road, a talent that requires the instantaneous interpretation of a welter of visual signals and an ability to adapt seamlessly to shifting and often unanticipated situations. We hardly know how we pull off such a feat ourselves, so the idea that programmers could reduce all of driving’s intricacies, intangibilities, and contingencies to a set of instructions, to lines of software code, seemed ludicrous. “Executing a left turn across oncoming traffic,” Levy and Murnane wrote, “involves so many factors that it is hard to imagine the set of rules that can replicate a driver’s behavior.” It seemed a sure bet, to them and to pretty much everyone else, that steering wheels would remain firmly in the grip of human hands.[4]

In assessing computers’ capabilities, economists and psychologists have long drawn on a basic distinction between two kinds of knowledge: tacit and explicit. Tacit knowledge, which is also sometimes called procedural knowledge, refers to all the stuff we do without thinking about it: riding a bike, snagging a fly ball, reading a book, driving a car. These aren’t innate skills—we have to learn them, and some people are better at them than others—but they can’t be expressed as a simple recipe. When you make a turn through a busy intersection in your car, neurological studies show, many areas of your brain are hard at work, processing sensory stimuli, making estimates of time and distance, and coordinating your arms and legs.[5] But if someone asked you to document everything involved in making that turn, you wouldn’t be able to, at least not without resorting to generalizations and abstractions. The ability resides deep in your nervous system, outside the ambit of your conscious mind. The mental processing goes on without your awareness.

Much of our ability to size up situations and make quick judgments about them stems from the fuzzy realm of tacit knowledge. Most of our creative and artistic skills reside there too. Explicit knowledge, which is also known as declarative knowledge, is the stuff you can actually write down: how to change a flat tire, how to fold an origami crane, how to solve a quadratic equation. These are processes that can be broken down into well-defined steps. One person can explain them to another person through written or oral instructions: do this, then this, then this.

Because a software program is essentially a set of precise, written instructions—do this, then this, then this—we’ve assumed that while computers can replicate skills that depend on explicit knowledge, they’re not so good when it comes to skills that flow from tacit knowledge. How do you translate the ineffable into lines of code, into the rigid, step-by-step instructions of an algorithm? The boundary between the explicit and the tacit has always been a rough one—a lot of our talents straddle the line—but it seemed to offer a good way to define the limits of automation and, in turn, to mark out the exclusive precincts of the human. The sophisticated jobs Levy and Murnane identified as lying beyond the reach of computers—in addition to driving, they pointed to teaching and medical diagnosis—were a mix of the mental and the manual, but they all drew on tacit knowledge.
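
The fit between explicit knowledge and software is exact. To see how direct the translation can be, take the quadratic equation from the list above and reduce it to a short routine. What follows is a minimal sketch in Python; the language is incidental, and the function is invented here purely for illustration:

    import math

    def solve_quadratic(a, b, c):
        # Solve ax^2 + bx + c = 0: explicit knowledge written down as
        # do this, then this, then this.
        if a == 0:
            raise ValueError("not a quadratic equation")
        discriminant = b * b - 4 * a * c   # first, compute the discriminant
        if discriminant < 0:
            return []                      # no real roots: stop here
        root = math.sqrt(discriminant)     # then take its square root
        return [(-b + root) / (2 * a),     # finally, apply the formula
                (-b - root) / (2 * a)]

    print(solve_quadratic(1, -3, 2))       # [2.0, 1.0]

No step consults intuition; the written recipe is the entire skill. Try to write down a left turn across oncoming traffic with the same completeness, and the gulf between the two kinds of knowledge becomes vivid.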

Google’s car resets the boundary between human and computer, and it does so more dramatically, more decisively, than have earlier breakthroughs in programming. It tells us that our idea of the limits of automation has always been something of a fiction. We’re not as special as we think we are. While the distinction between tacit and explicit knowledge remains a useful one in the realm of human psychology, it has lost much of its relevance to discussions of automation.

That doesn’t mean that computers now have tacit knowledge, or that they’ve started to think the way we think, or that they’ll soon be able to do everything people can do. They don’t, they haven’t, and they won’t. Artificial intelligence is not human intelligence. People are mindful; computers are mindless. But when it comes to performing demanding tasks, whether with the brain or the body, computers are able to replicate our ends without replicating our means. When a driverless car makes a left turn in traffic, it’s not tapping into a well of intuition and skill; it’s following a program. But while the strategies are different, the outcomes, for practical purposes, are the same. The superhuman speed with which computers can follow instructions, calculate probabilities, and receive and send data means that they can use explicit knowledge to perform many of the complicated tasks that we do with tacit knowledge. In some cases, the unique strengths of computers allow them to perform what we consider to be tacit skills better than we can perform them ourselves. In a world of computer-controlled cars, you wouldn’t need traffic lights or stop signs. Through the continuous, high-speed exchange of data, vehicles would seamlessly coordinate their passage through even the busiest of intersections—just as computers today regulate the flow of inconceivable numbers of data packets along the highways and byways of the internet. What’s ineffable in our own minds becomes altogether effable in the circuits of a microchip.
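
To make that coordination scenario a little more concrete, here is a toy sketch, in Python, of the sort of slot-reservation logic it might involve. Every name and number in it is invented for illustration; no actual vehicle-to-vehicle protocol is being described:

    class IntersectionManager:
        """Hypothetical scheduler: grants each approaching car a
        non-overlapping window of time in which to cross."""

        def __init__(self):
            self.reserved = []  # (start, end) windows already granted

        def request_slot(self, arrival, crossing_time):
            # Find the earliest start, at or after the car's arrival,
            # that does not overlap any window already handed out.
            start = arrival
            for s, e in sorted(self.reserved):
                if start + crossing_time <= s:
                    break              # fits before this window
                start = max(start, e)  # otherwise wait until it ends
            self.reserved.append((start, start + crossing_time))
            return start

    manager = IntersectionManager()
    print(manager.request_slot(arrival=0.0, crossing_time=2.0))  # 0.0
    print(manager.request_slot(arrival=1.0, crossing_time=2.0))  # 2.0: it waits

The deliberation a human driver performs at a four-way stop becomes, in this rendering, nothing more than interval arithmetic carried out at machine speed.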

Many of the cognitive talents we’ve considered uniquely human, it turns out, are anything but. Once computers get quick enough, they can begin to mimic our ability to spot patterns, make judgments, and learn from experience. We were first taught that lesson back in 1997 when IBM’s Deep Blue chess-playing supercomputer, which could evaluate a billion possible moves every five seconds, beat the world champion Garry Kasparov. With Google’s intelligent car, which can process a million environmental readings a second, we’re learning the lesson again. A lot of the very smart things that people do don’t actually require a brain. The intellectual talents of highly trained professionals are no more protected from automation than is the driver’s left turn. We see the evidence everywhere. Creative and analytical work of all sorts is being mediated by software. Doctors use computers to diagnose diseases. Architects use them to design buildings. Attorneys use them to evaluate evidence. Musicians use them to simulate instruments and correct bum notes. Teachers use them to tutor students and grade papers. Computers aren’t taking over these professions entirely, but they are taking over many aspects of them. And they’re certainly changing the way the work is performed.
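
The brute-force idea behind that kind of play can be caricatured in a few lines of Python. The game below, invented purely for illustration (take one or two stones from a pile; whoever takes the last stone wins), stands in for chess, and the exhaustive search stands in, very crudely, for the machinery of a real engine:

    def best_outcome(stones, my_turn=True):
        # Score a position by trying every legal move to the end of the
        # game: +1 means the original player can force a win, -1 means
        # the opponent can.
        if stones == 0:
            # Whoever just took the last stone has won.
            return -1 if my_turn else 1
        outcomes = [best_outcome(stones - take, not my_turn)
                    for take in (1, 2) if take <= stones]
        return max(outcomes) if my_turn else min(outcomes)

    print(best_outcome(3))  # -1: a three-stone pile is a forced loss
    print(best_outcome(4))  # +1: take one stone, leave the losing pile

Nothing in the search resembles judgment or experience. It simply tries everything, and at the speeds a machine like Deep Blue reached, trying everything turns out to be a workable substitute for both.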

It’s not only vocations that are being computerized. Avocations are too. Thanks to the proliferation of smartphones, tablets, and other small, affordable, and even wearable computers, we now depend on software to carry out many of our daily chores and pastimes. We launch apps to aid us in shopping, cooking, exercising, even finding a mate and raising a child. We follow turn-by-turn GPS instructions to get from one place to the next. We use social networks to maintain friendships and express our feelings. We seek advice from recommendation engines on what to watch, read, and listen to. We look to Google, or to Apple’s Siri, to answer our questions and solve our problems. The computer is becoming our all-purpose tool for navigating, manipulating, and understanding the world, in both its physical and its social manifestations. Just think what happens these days when people misplace their smartphones or lose their connections to the net. Without their digital assistants, they feel helpless. As Katherine Hayles, a literature professor at Duke University, observed in her 2012 book How We Think, “When my computer goes down or my Internet connection fails, I feel lost, disoriented, unable to work—in fact, I feel as if my hands have been amputated.”[6]

Our dependency on computers may be disconcerting at times, but in general we welcome it. We’re eager to celebrate and show off our whizzy new gadgets and apps—and not only because they’re so useful and so stylish. There’s something magical about computer automation. To watch an iPhone identify an obscure song playing over the sound system in a bar is to experience something that would have been inconceivable to any previous generation. To see a crew of brightly painted factory robots effortlessly assemble a solar panel or a jet engine is to view an exquisite heavy-metal ballet, each movement choreographed to a fraction of a millimeter and a sliver of a second. The people who have taken rides in Google’s car report that the thrill is almost otherworldly; their earth-bound brain has a tough time processing the experience. Today, we really do seem to be entering a brave new world, a Tomorrowland where computers and automatons will be at our service, relieving us of our burdens, granting our wishes, and sometimes just keeping us company. Very soon now, our Silicon Valley wizards assure us, we’ll have robot maids as well as robot chauffeurs. Sundries will be fabricated by 3-D printers and delivered to our doors by drones. The world of the Jetsons, or at least of Knight Rider, beckons.

It’s hard not to feel awestruck. It’s also hard not to feel apprehensive. An automatic transmission may seem a paltry thing beside Google’s tricked-out, look-ma-no-humans Prius, but the former was a precursor to the latter, a small step along the path to total automation, and I can’t help but remember the letdown I felt after the gear stick was taken from my hand—or, to put responsibility where it belongs, after I begged to have the gear stick taken from my hand. If the convenience of an automatic transmission left me feeling a little lacking, a little underutilized, as a labor economist might say, how will it feel to become, truly, a passenger in my own car?
