Everything Is Obvious
Duncan J. Watts

Finally, people digest new information in ways that tend to reinforce what they already think. In part, we do this by noticing information that confirms our existing beliefs more readily than information that does not. And in part, we do it by subjecting disconfirming information to greater scrutiny and skepticism than confirming information. Together, these two closely related tendencies—known as confirmation bias and motivated reasoning, respectively—greatly impede our ability to resolve disputes, from petty disagreements over domestic duties to long-running political conflicts like those in Northern Ireland or Israel-Palestine, in which the different parties look at the same set of “facts” and come away with completely different impressions of reality. Even in science, confirmation bias and motivated reasoning play pernicious roles. Scientists, that is, are supposed to follow the evidence, even if it contradicts their own preexisting beliefs; and yet, more often than they should, they question the evidence instead. The result, as the physicist Max Planck famously acknowledged, is often that “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die.”[15]

WHAT IS RELEVANT?

Taken together, the evidence from psychological experiments makes clear that there are a great many potentially relevant factors that affect our behavior in very real and tangible ways but that operate largely outside of our conscious awareness. Unfortunately, psychologists have identified so many of these effects—priming, framing, anchoring, availability, motivated reasoning, loss aversion, and so on—that it’s hard to see how they all fit together. By design, experiments emphasize one potentially relevant factor at a time in order to isolate its effects. In real life, however, many such factors may be present to varying extents in any given situation; thus it’s critical to understand how they interact with one another. It may be true, in other words, that holding a green pen makes you think of Gatorade, or that listening to German music predisposes you to buy German wine, or that thinking of your social security number affects how much you will bid for something. But what will you buy, and how much will you pay for it, when you are exposed to many, possibly conflicting, subconscious influences at once?

It simply isn’t clear. Nor is the profusion of unconscious psychological biases the only problem. To return to the ice cream example from before, although it may be true that I like ice cream as a general rule, how much I like it at a particular point in time might vary considerably, depending on the time of day, the weather, how hungry I am, and how good I expect the ice cream to be. My decision, moreover, doesn’t depend just on how much I like ice cream, or even just on the trade-off between how much I like it and how much it costs. It also depends on whether or not I know the location of the nearest ice cream shop, whether or not I have been there before, how much of a rush I’m in, who I’m with and what they want, whether or not I have to go to the bank to get money, where the nearest bank is, whether or not I just saw someone else eating an ice cream, or just heard a song that reminded me of a pleasurable time when I happened to be eating an ice cream, and so on. Even in the simplest situations, the list of factors that might turn out to be relevant can get very long very quickly. And with so many factors to worry about, even very similar situations may differ in subtle ways that turn out to be important. When trying to understand—or better yet, predict—individual decisions, how are we to know which of these many factors are the ones to pay attention to, and which can be safely ignored?

The ability to know what is relevant to a given situation is of course the hallmark of commonsense knowledge that I discussed in the previous chapter. And in practice, it rarely occurs to us that the ease with which we make decisions disguises any sort of complexity. As the philosopher Daniel Dennett points out, when he gets up in the middle of the night to make himself a midnight snack, all he needs to know is that there is bread, ham, mayonnaise, and beer in the fridge, and the rest of the plan pretty much works itself out. Of course he also knows that “mayonnaise doesn’t dissolve knives on contact, that a slice of bread is smaller than Mount Everest, that opening the refrigerator doesn’t cause a nuclear holocaust in the kitchen” and probably trillions of other irrelevant facts and logical relations. But somehow he is able to ignore all these things, without even being aware of what it is that he’s ignoring, and focus on the few things that matter.[16]

But as Dennett argues, there is a big difference between knowing what is relevant in practice and being able to explain how it is that we know it. To begin with, it seems clear that what is relevant about a situation is just those features that it shares with other comparable situations—for example, we know that how much something costs is relevant to a purchase decision because cost is something that generally matters whenever people buy something. But how do we know which situations are comparable to the one we’re in? Well, that also seems clear: Comparable situations are those that share the same features. All “purchase” decisions are comparable in the sense that they involve a decision maker contemplating a number of options in terms of attributes such as cost, quality, availability, and so on. But now we encounter the problem. Determining which features of a situation are relevant requires us to associate it with some set of comparable situations. Yet determining which situations are comparable depends on knowing which features are relevant.
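To make the circularity concrete, here is a minimal sketch (mine, not the book's) in which situations are just sets of feature labels and the two definitions above become two functions that each call the other. Everything in it, from the feature names to the depth cutoff and the helper names relevant_features and comparable_situations, is an illustrative assumption; the point is only that, taken literally, the definitions never bottom out.

```python
# A toy rendering of the frame problem's circular definitions (illustrative only).
ALL_SITUATIONS = [
    {"purchase", "has_cost", "in_a_rush"},
    {"purchase", "has_cost", "with_friends"},
    {"commute", "in_a_rush"},
]

def relevant_features(situation, depth=0):
    """Relevant features are whatever the situation shares with comparable ones."""
    if depth > 5:  # without this arbitrary cutoff the mutual recursion never ends
        return set()
    shared = set()
    for other in comparable_situations(situation, depth + 1):
        shared |= situation & other
    return shared

def comparable_situations(situation, depth=0):
    """Comparable situations are those that share the relevant features."""
    features = relevant_features(situation, depth + 1)
    return [s for s in ALL_SITUATIONS if features and features <= s]

# Chasing the definitions yields nothing: each one is waiting on the other.
print(relevant_features({"purchase", "has_cost", "hot_day"}))  # prints set()
```

A person somehow supplies the missing stopping rule without noticing; a program like this one has to have it imposed from outside, and even then it gets nowhere.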

This inherent circularity poses what philosophers and cognitive scientists call the frame problem, and they have been beating their heads against it for decades. The frame problem was first noticed in the field of artificial intelligence, when researchers started trying to program computers and robots to solve relatively simple everyday tasks like, say, cleaning a messy room. At first they assumed that it couldn’t be that hard to write down everything that was relevant to a situation like this. After all, people manage to clean their rooms every day without even really thinking about it. How hard could it be to teach a robot? Very hard indeed, as it turned out. As I discussed in the last chapter, even the relatively straightforward activity of navigating the subway system requires a surprising amount of knowledge about the world—not just about subway doors and platforms but also about maintaining personal distance, avoiding eye contact, and getting out of the way of pushy New Yorkers. Very quickly AI researchers realized that virtually every everyday task is difficult for essentially the same reason—that the list of potentially relevant facts and rules is staggeringly long. Nor does it help that most of this list can be safely ignored most of the time—because it’s generally impossible to know in advance which things can be ignored and which cannot. So in practice, the researchers found that they had to wildly overprogram their creations in order to get them to perform even the most trivial tasks.[17]

The intractability of the frame problem effectively sank the original vision of AI, which was to replicate human intelligence more or less as we experience it ourselves. And yet there was a silver lining to this defeat. Because AI researchers had to program every fact, rule, and learning process into their creations from scratch, and because their creations failed to behave as expected in obvious and often catastrophic ways—like driving off a cliff or trying to walk through a wall—the frame problem was impossible to ignore. Rather than trying to crack the problem, therefore, AI researchers took a different approach entirely—one that emphasized statistical models of data rather than thought processes. This approach, which nowadays is called machine learning, was far less intuitive than the original cognitive approach, but it has proved to be much more productive, leading to all kinds of impressive breakthroughs, from the almost magical ability of search engines to complete queries as you type them to building autonomous robot cars, and even a computer that can play Jeopardy![18]
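As a small illustration of what "statistical models of data rather than thought processes" means in practice, here is a sketch of query completion done the statistical way. It is my own toy example, not anything from the book or from a real search engine: the query log, its counts, and the complete function are made-up names and made-up data. The program encodes no understanding of what the user wants; it simply ranks previously seen queries that share the typed prefix.

```python
# Complete a partly typed query by ranking past queries that share its prefix,
# rather than encoding any rules about what the user "means".
from collections import Counter

# Hypothetical log of past queries with counts (illustrative data only).
query_log = Counter({
    "weather today": 120,
    "weather tomorrow": 80,
    "weather radar": 45,
    "wealth tax": 30,
})

def complete(prefix, top_n=3):
    """Return the most frequent logged queries that start with the prefix."""
    matches = [(q, n) for q, n in query_log.items() if q.startswith(prefix)]
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [q for q, _ in matches[:top_n]]

print(complete("wea"))  # ['weather today', 'weather tomorrow', 'weather radar']
```

Feed such a model more logged queries and its completions improve, with no one ever writing down what is "relevant" about any particular search.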

WE DON’T THINK THE WAY WE THINK WE THINK

The frame problem, however, isn’t just a problem for artificial intelligence—it’s a problem for human intelligence as well. As the psychologist Daniel Gilbert describes in Stumbling on Happiness, when we imagine ourselves, or someone else, confronting a particular situation, our brains do not generate a long list of questions about all the possible details that might be relevant. Rather, just as an industrious assistant might use stock footage to flesh out a drab PowerPoint presentation, our “mental simulation” of the event or the individual in question simply plumbs our extensive database of memories, images, experiences, cultural norms, and imagined outcomes, and seamlessly inserts whatever details are necessary in order to complete the picture. Survey respondents leaving restaurants, for example, readily described the outfits of the waiters inside, even in cases where the waitstaff had been entirely female. Students asked about the color of a classroom blackboard recalled it as being green—the normal color—even though the board in question was blue. In general, people systematically overestimate both the pain they will experience as a consequence of anticipated losses and the joy they will garner from anticipated gains. And when matched online with prospective dates, subjects report greater levels of liking for their matches when they are given less information about them. In all of these cases, a careful person ought to respond that he can’t answer the question accurately without being given more information. But because the “filling in” process happens instantaneously and effortlessly, we are typically unaware that it is even taking place; thus it doesn’t occur to us that anything is missing.[19]

The frame problem should warn us that when we do this, we are bound to make mistakes. And we do, all the time. But unlike the creations of the AI researchers, humans do not surprise us in ways that force us to rewrite our whole mental model of how we think. Rather, just as Paul Lazarsfeld’s imagined reader of the American Soldier found every result and its opposite equally obvious, once we know the outcome we can almost always identify previously overlooked aspects of the situation that then seem relevant. Perhaps we expected to be happy after winning the lottery, and instead find ourselves depressed—obviously a bad prediction. But by the time we realize our mistake, we also have new information, say about all the relatives who suddenly appeared wanting financial support. It will then seem to us that if we had only had that information earlier, we would have anticipated our future state of happiness correctly, and maybe never bought the lottery ticket. Rather than questioning our ability to make predictions about our future happiness, therefore, we simply conclude that we missed something important—a mistake we surely won’t make again. And yet we do make the mistake again. In fact, no matter how many times we fail to predict someone’s behavior correctly, we can always explain away our mistakes in terms of things that we didn’t know at the time. In this way, we manage to sweep the frame problem under the carpet—always convincing ourselves that this time we are going to get it right, without ever learning what it is that we are doing wrong.

Nowhere is this pattern more evident, and more difficult to expunge, than in the relationship between financial incentives and performance. It seems obvious, for example, that employee performance can be improved through the application of financial incentives, and in recent decades performance-based pay schemes have proliferated in the workplace, most notably in terms of executive compensation tied to stock price.[20] Of course, it’s also obvious that workers care about more than just money—factors like intrinsic enjoyment, recognition, and a feeling of advancement in one’s career might all affect performance as well. All else equal, however, it seems obvious that one can improve performance with the proper application of financial rewards. And yet, the actual relationship between pay and performance turns out to be surprisingly complicated, as a number of studies have shown over the years.

Recently, for example, my Yahoo! colleague Winter Mason and I conducted a series of Web-based experiments in which subjects were paid at different rates to perform a variety of simple repetitive tasks, like placing a series of photographs of moving traffic into the correct temporal sequence, or uncovering words hidden in a rectangular grid of letters. All our participants were recruited from a website called Amazon’s Mechanical Turk, which Amazon launched in 2005 as a way to identify duplicate listings among its own inventory. Nowadays, Mechanical Turk is used by hundreds of businesses looking to “crowd-source” a wide range of tasks, from labeling objects in an image to characterizing the sentiment of a newspaper article or deciding which of two explanations is clearer. However, it is also an extremely effective way to recruit subjects for psychology experiments—much as psychologists have done over the years by posting flyers around college campuses—except that because workers (or “turkers”) are usually paid on the order of a few cents per task, it can be done for a fraction of the usual cost.[21]

In total, our experiments involved hundreds of participants who completed tens of thousands of tasks. In some cases they were paid as little as one cent per task—for example, sorting a single set of images or finding a single word—while in other cases they were paid five or even ten cents to do the same thing. A factor of ten is a pretty big difference in pay—by comparison, the average hourly rate of a computer engineer in the United States is only six times the federal minimum wage—so you’d expect it to have a pretty big effect on how people behave. And indeed it did. The more we paid people, the more tasks they completed before leaving the experiment. We also found that for any given pay rate, workers who were assigned “easy” tasks—like sorting sets of two images—completed more tasks than workers assigned medium or hard tasks (three and four images per set respectively). All of this, in other words, is consistent with common sense. But then the kicker: in spite of these differences, we found that the quality of their work—meaning the accuracy with which they sorted images—did not change with pay level at all, even though they were paid only for the tasks they completed correctly.[22]
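To make the shape of that finding concrete, here is a short sketch of the kind of comparison involved. It is not the authors' code, and the numbers in it are invented for illustration only: a handful of hypothetical workers grouped by pay rate, with average output rising as pay rises while average accuracy stays flat.

```python
# Group hypothetical worker records by pay rate and compare quantity vs. quality.
from statistics import mean

# Illustrative records only: (pay_cents_per_task, tasks_completed, accuracy)
workers = [
    (1, 12, 0.81), (1, 15, 0.78), (1, 10, 0.80),
    (5, 25, 0.79), (5, 30, 0.82), (5, 28, 0.80),
    (10, 44, 0.81), (10, 40, 0.79), (10, 47, 0.80),
]

for rate in sorted({w[0] for w in workers}):
    group = [w for w in workers if w[0] == rate]
    print(f"{rate}c/task: "
          f"mean tasks completed = {mean(w[1] for w in group):.1f}, "
          f"mean accuracy = {mean(w[2] for w in group):.2f}")
```

In made-up data like this, as in the experiments described above, the quantity column climbs with pay while the accuracy column barely moves.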
