Priceless: The Myth of Fair Value (and How to Take Advantage of It)

Author: William Poundstone

In one of the 2006 “Tom Sawyer” experiments, the researchers tried to interest marketing students in a poetry reading (Walt Whitman’s Leaves of Grass) that Ariely was supposedly going to give on the Berkeley campus. One group was asked whether they would be willing to pay $2 to hear Dan Ariely recite poetry. The answer was a pretty firm no. A scant 3 percent said they’d be willing to pay.

After the answers were all collected, the students were informed that in fact Ariely’s reading was going to be free. They were asked to indicate whether they wanted to be notified by e-mail of the time and location. Now 35 percent said yes, they wanted to be informed.

That’s as you’d expect. More were open to attending an event, provided it was free. A second group of students was asked a different question: Would you be willing to listen to Ariely recite poetry if we paid you $2? This time 59 percent said yes. Then these students, like the first group, were told that the reading was going to be free (forget about that $2 payment). When asked whether they wanted to be informed of the specifics, only 8 percent indicated they were still interested.

Up to 35 percent of the first group thought the free recital was worth attending—a positive experience with greater-than-zero value. Only 8 percent of the second group thought that way. The only difference was that the first group had been led to think the recital was worth money, and the second had been told it was a chore meriting pay.

In another variation, the researchers asked two groups of MIT students whether they would pay, or demand to be paid, $10 to hear Ariely recite poetry for 10 minutes. They then asked the same students to name prices for 6, 3, and 1 minutes of poetry reading. As with the annoying-sound experiments, the average prices were scaled to the duration. But this time, one group was assigning positive prices (money they were willing to pay for the pleasure of hearing Ariely’s vocal interpretations) and the other was naming negative prices (wages, to put up with the recital). On the whole, the MIT students had no conviction as to whether they should be paying or be paid.

The Tom Sawyer experiments refute the common sense that every experience can be sorted as a positive or a negative. Yes, there are dreadful experiences and glorious ones. Most experiences are distinctly mixed. Is a trip to Paris a good thing? Well, sure, everyone immediately says yes. That’s because everyone else says yes, and not incidentally because it costs a lot of money. Suppose trips to Paris were free, and would always be free from now on. Would you go there this weekend? How about the weekend after that?

Tom Sawyer’s innocent con game has become the first big business model of the twenty-first century. It’s called Web 2.0. Google, YouTube, Facebook, and Twitter have become multimillion-dollar businesses with what is respectfully called user-generated content. All are founded on the premise that users will do worthwhile “work” (journalism, filmmaking, political commentary) for free. Someone is making a lot of money—someone, but not the folks whitewashing the Internet’s fences.

Thirty-six
Reality Constraint

One of Margaret Neale’s most famous experiments infuriated real estate agents and even her own mother. Neale wanted to see whether anchoring would work in the real estate market.

She arrived at the University of Arizona in 1982, with an interest in the psychology of bargaining. “Negotiation at the time was relatively moribund,” she said. Psychologists and economists “weren’t speaking to each other.” Neale immersed herself in the work of Kahneman and Tversky, Hillel Einhorn, and Robin Hogarth. She realized that the psychology of decision making could be a powerful tool for negotiators. “The argument that we were making at the time was there’s not a lot to be changed in negotiation,” she explained. “You’re faced with the situation as it exists. We know people behave differently when there’s a future” (when they know they will have further dealings with a bargaining partner). “But when you get in a negotiation, you don’t get to choose whether there’s a future. You don’t get to choose the personality of your counterpart. It’s already there, it’s already set. What you can change is the cognitive perspective that you take.”

“Maggie and I used to have lunch together every day,” said colleague Gregory Northcraft. “We sat down, and we’d start seeing connections between what was going on in our lives and what was going on in our research.” One connection involved anchoring and home prices. Northcraft and Neale were each buying their first house. “We both had the experience that when we were looking at houses, it was hard to know what to think of a house until we saw the listing price,” said Northcraft. “When the price was higher, we tended to focus on the things that made it a higher priced house, and if it was lower, we tended to focus on the things that explained why the price was lower.”

They recognized this as anchoring. They also knew that economists had doubted whether Tversky and Kahneman’s findings would apply to major financial decisions. Market forces would mandate reasonable prices, it was claimed.

“There’s really two ways of looking at this,” Northcraft told me. “One is that heuristics and biases make a huge impact when there’s very little information. If you don’t have any other information, you go to your bag of tricks and pull out something. But a lot of people were saying, yeah, when you get into a rich, real-world setting, then there’s lots of other things to pay attention to, and you don’t need the shortcuts.

“The flip side of that is that when you get into a sufficiently rich setting, the amount of information available can become overwhelming. That provides a secondary route for heuristics and biases to come into play. When you have too much information, they’re there to sort that information out.”

Northcraft and Neale applied to the National Science Foundation for a grant to test heuristics and biases in the real world. They sketched three likely domains of research: real estate, business negotiations, and legal judgments. They got the grant and started with real estate.

Their goal was to see whether anchoring could affect the perceived value of actual houses on the market in Tucson. To do that, they needed a real estate agent to lend them a house to use in the experiment. Neale asked her mother, a real estate broker, for advice. She advised playing up the networking possibilities. Agents would welcome the chance to make some connections with the faculty, she said. Agent Katherine Martin of Tucson Realty and Trust agreed to let them use one of her listings.

The experimental subjects were 54 junior and senior undergraduate business students and 47 local real estate agents. For those real estate professionals, the Tucson market was their bread and butter. On average, they bought or sold 16 properties a year and had been selling real estate in Tucson for more than eight years.

Northcraft drove the participants to the home, and all were free to inspect it, to “kick the tires,” just like a buyer. The subjects were given all the information a buyer would normally have, including a list of comps for nearby houses that had recently sold and a packet containing Multiple Listing Service sheets for the house and for all the nearby houses then on sale. The subjects were then asked to estimate what the home was worth. The one experimental variable was the listing price. Each of four groups was told a different price.

 

“Science is often portrayed as this very systematic, clean, sterile process,” said Northcraft, “and this study proved that good science is often nothing of the sort.” Just as Northcraft was driving the subjects to the house, a desert cloudburst began. It was as though someone were heaving buckets of water at the windows. The subjects refused to get out of the van. On the drive back, the streets were flooding to hubcap level.

They tried again on a sunny day, but the home sold before they had all the data they wanted. They had to get permission to use a second house. The results for both homes were similar. I’ll describe the second house, as they collected more data for it. This home had been appraised at $135,000 the previous year and was listed at $134,900. No one in the experiment saw this price, though. The subjects heard one of four fictitious prices: $119,900, $129,900, $139,900, and $149,900.

Both the real estate experts and the student amateurs were asked to price the home in four distinct ways. They were to play home appraiser and give a fair appraisal value; to pretend they were a listing agent and suggest a proper listing price; to assume the role of buyer and name a reasonable price to pay; and finally, to play seller and give the lowest offer that they would be willing to accept. All four measures showed similar anchoring. I’ll give the estimates for buyer’s reasonable purchase price.

 

 

 

Estimated Purchase Price (average)

Listing Price    Amateurs     Experts
$119,900         $107,916     $111,454
$129,900         $120,457     $123,209
$139,900         $123,785     $124,653
$149,900         $138,885     $127,318

Now remember, all these figures apply to the same house. For the student amateurs, raising the listing price $30,000 (from $119,900 to $149,900) increased their average estimate of the home’s value by nearly $31,000. They understood that the purchase price would be less than the listing price. But every dollar added to the listing price added a dollar to what they thought the house was worth.

Those who cherish faith in licensed professionals will be pleased to learn that the pros were less influenced by the fake listing prices. For the pros, raising the listing price by $30,000 raised their estimate by “only” $16,000. Listing prices shouldn’t make any difference at all to a professional. Agents are the first to say that the market, not the seller, determines value. The seller is usually a nonexpert who may have completely unrealistic expectations. Part of an agent’s job is to know the market price and (as buyer’s agent) to steer clients away from overpriced properties.

How could working real estate agents be so fallible? “I think there are a lot of areas where people who have experience think they’re experts,” Northcraft said. “But the difference is that experts have predictive models, and people who have experience have models that aren’t necessarily predictive.”

Experience is useful only to the extent that there is feedback. An agent who sells a home at a price that is a little too high or low will rarely be confronted with wiggle-proof evidence that she mispriced the property. “For these judgments,” Northcraft and Neale wrote, “expertise may amount to little more than knowledge of relevant accepted conventions, and feedback may correct descriptions of the judgment process (so that the descriptions conform to convention) rather than the accuracy of the judgments themselves. For such judgment tasks, we might expect experts to talk a better game than amateurs, but to produce (on average) similar judgments.”

There was one telling difference between the experts and the amateurs. Thirty-seven percent of amateurs admitted that they considered the listing price. Only 19 percent of the experts said they did. “It remains an open question,” Northcraft and Neale archly observed, “whether experts’ denial of the use of listing price as a consideration in valuing property reflects a lack of awareness of their use of listing price as a consideration, or simply an unwillingness to acknowledge publicly their dependence on an admittedly inappropriate piece of evidence.”

Before the experiment began, a consulting group of agents had told Northcraft and Neale that there is a “zone of credibility.” Any listing price that differed from the appraisal value by more than 5 percent would stand out as “obviously deviant.”

The experiment’s two middle prices ($129,900 and $139,900 in the table above) were just within the zone. Each was about 4 percent off the appraised price. The two more extreme prices were 12 percent off and should have raised a red flag.

Only they didn’t. The agents thought the house was worth nearly $3,000 more when listed at the deviant price of $149,900 rather than the more credible $139,900. The amateurs thought the house was worth $15,000 more at the higher price.

“At issue here is just how malleable decision processes might be, and whether there is some reality constraint on the extent to which such processes can be influenced,” Northcraft and Neale wrote. “For instance, can just any listing price really influence the perceived value of a piece of real estate, or does the listing price need to be credible to be considered, and therefore to influence value estimates? This study provided only limited support for a reality constraint . . .”

This experiment, published in a 1987 issue of Organizational Behavior and Human Decision Processes, brought intense reaction. “Back in those days, the economists weren’t doing much reading of the organizational literature,” said Neale. That was to change, at least for this paper. It supplied needed evidence of the practical reality of anchoring, resulting in more than two hundred citations in scholarly papers.
