Super Crunchers
Ian Ayres

All of industry, worldwide, is being remade around the database capacities of modern computers. The expectation (and fear) of the 1950s and '60s—in books like Vance Packard's The Hidden Persuaders—that sophisticated social engineering, at the behest of big government and big corporations, was about to take over the world has been suddenly resurrected for a new generation. But where we once expected big government to solve all human problems by command and control, we now observe something similar arising in the form of massive data networks.

Why Me?

I'm a number cruncher myself. Even though I teach law at Yale, I learned econometrics while studying for a Ph.D. at MIT. I've crunched numbers on everything from bail bonds and kidney transplantation to concealed handguns and reckless sex. You might think that your basic ivory-tower egghead is completely disconnected from real-world decision making (and yes, I am the kind of absentminded professor who was once so engrossed in writing an article on a train that I went to Poughkeepsie instead of New Haven). Still, even data mining by eggheads can sometimes have an impact on the world.

A few years back Steve Levitt and I teamed up to figure out something very practical—the impact of LoJack on auto theft. LoJack is a small radio transmitter that is hidden in one of many possible locations within a car. When the car is reported stolen the police remotely activate the transmitter, and then specially equipped police cars can track the precise location of the stolen vehicle. LoJack is highly effective as a vehicle-recovery device. LoJack Corporation knew this and proudly advertised a 95 percent recovery rate. But Steve and I wanted to test whether LoJack helped reduce auto theft generally. The problem with lots of anti-theft devices is that they might just shift crime around. If you use "the Club" on your car, it probably doesn't stop crime; it just causes the thief to walk down the street and steal another car. The cool thing about LoJack is that it's hidden. In a city covered by LoJack, a thief doesn't know whether a particular car has it or not.

This is just the kind of perversity that Levitt likes to explore. Freakonomics reviewers really got it right when they said that Steve looks at things differently. Several years ago, I had an extra ticket and invited Steve to come with me to see Michael Jordan play with the Chicago Bulls. Steve figured he'd enjoy the game more if he was invested in it, but (in sharp contrast to me) he didn't care that much about whether the Bulls won or lost. So just before the game, he hopped online and placed a substantial bet that Chicago would win. Now he really was invested in the game. The online bet changed his incentives.

In an odd way, LoJack is also a device for changing incentives. Before LoJack, many professional thieves were almost untouchable. LoJack changed all that. With LoJack, cops not only recover the vehicle, they often catch the thief. In Los Angeles alone, LoJack has broken up more than 100 chop shops. If you steal 100 cars in a LoJack town, you're almost certain to steal some that have LoJack in them. We wanted to test whether LoJack scared thieves from taking cars generally. If it does, LoJack creates what economists call a “positive externality.” When you put the Club on your car, you probably are increasing the chance the next guy's car will be stolen. If enough people put LoJack in their cars, however, Steve and I thought that they might be helping their neighbors by scaring professional car thieves from taking any cars.

Our biggest problem was convincing LoJack to share any of its sales data with us. I remember repeatedly calling and trying to convince them that if Steve and I were right, it would provide another reason for people to buy LoJack. If LoJack reduces the chance that thieves will take other people's cars, then LoJack might be able to convince insurance companies to give LoJack users more substantial discounts. A junior executive finally did send us tons of helpful data. But to be honest, LoJack just wasn't that interested in the research at first.

All that changed when they saw the first draft of our paper. After looking at auto theft in fifty-six cities over fourteen years, we found that LoJack had a huge positive benefit for other people. In high-crime areas, a $500 investment in LoJack reduced the car theft losses of non-LoJack users by $5,000. Because we had LoJack sales broken down by both year and city, we could generate a pretty accurate estimate about the proportion of cars with LoJack that were on the road. (For example, in Boston, where the state mandated the largest insurance discount, over 10 percent of the cars had LoJack.) We looked to see what happened to auto theft in the city as a whole as the number of LoJack users increased. Since LoJack service began in different cities in different years, we could estimate the impact of LoJack separate from the general level of crime in that year. In city after city, as the percentage of cars with LoJack increased, the rate of auto theft fell dramatically. Insurance companies weren't giving nearly big enough discounts for LoJack, because they weren't taking into account how much LoJack reduced payouts on even unprotected cars.
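
The published study's actual specification isn't reproduced in this chapter, but the basic design described above is a panel regression with city and year fixed effects: city dummies absorb each city's baseline level of crime, year dummies absorb nationwide trends, and what remains identifies how theft moves as LoJack penetration rises within a city. A minimal sketch under those assumptions, with invented data and variable names:

```python
# A minimal sketch of a city/year fixed-effects regression, in the spirit
# of the Ayres-Levitt LoJack design. All data and variable names here are
# invented for illustration; the actual study's specification differs.
import pandas as pd
import statsmodels.formula.api as smf

# Toy panel: theft rate and LoJack market share by city and year.
panel = pd.DataFrame({
    "city":         ["Boston", "Boston", "Newark", "Newark", "Miami", "Miami"],
    "year":         [1990, 1994, 1990, 1994, 1990, 1994],
    "lojack_share": [0.02, 0.10, 0.00, 0.04, 0.00, 0.00],
    "theft_rate":   [14.1, 9.8, 16.3, 13.9, 12.2, 12.5],  # thefts per 1,000 cars
})

# City and year dummies absorb each city's baseline crime level and each
# year's nationwide trend, so the lojack_share coefficient picks up how
# theft changes as LoJack penetration rises within a city.
model = smf.ols("theft_rate ~ lojack_share + C(city) + C(year)", data=panel).fit()
print(model.params["lojack_share"])  # negative => more LoJack, less theft
```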

Steve and I never bought LoJack stock (because we didn't want to change our own incentives, to tell the truth), but we knew we were sitting on valuable information. When our working paper went public, the stock jumped 2.4 percent. Our study has helped convince other cities to adopt the LoJack technology and has spurred slightly higher insurance discounts (but they're still not nearly large enough!).

The bottom line here is that I care passionately about number crunching. I have been a cook myself in the data-mining café. Like Ashenfelter, I am the editor of a serious journal, the Journal of Law, Economics, and Organization, where I have to evaluate the quality of statistical papers all the time. I'm well placed to explore the rise of data-based decision making because I have been both a participant and an observer. I know where the bodies are buried.

Plan of Attack

The next five chapters will detail the rise of Super Crunching across society. The first three chapters will introduce you to two fundamental statistical techniques—regressions and randomized trials—and show how the art of quantitative prediction is reshaping business and government. We'll explore the debate over "evidence-based" medicine in Chapter 4. And Chapter 5 will look at hundreds of tests evaluating how data-based decision making fares in comparison with experience- and intuition-based decisions.

The second part of the book will step back and assess the significance of this trend. We'll explore why it's happening now and whether we should be happy about it. Chapter 7 will look at who's losing out—in terms of both status and discretion. And finally, Chapter 8 will look to the future. The rise of Super Crunching doesn't mean the end of intuition or the unimportance of on-the-job experience. Rather, we are likely to see a new era where the best and the brightest are comfortable with both statistics and ideas.

In the end, this book will not try to bury intuition or experiential expertise as norms of decision making, but will show how intuition and experience are evolving to interact with data-based decision making. In fact, there is a new breed of innovative Super Crunchers—people like Steve Levitt—who toggle between their intuitions and number crunching to see farther than either intuitivists or gearheads ever could before.

CHAPTER 1

Who's Doing Your Thinking for You?

Recommendations make life a lot easier. Want to know what movie to rent? The traditional way was to ask a friend or to see whether reviewers gave it a thumbs-up.

Nowadays people are looking for Internet guidance drawn from the behavior of the masses. Some of these "preference engines" are simple lists of what's most popular. The New York Times lists the "most emailed articles." iTunes lists the top downloaded songs. Del.icio.us lists the most popular Internet bookmarks. These simple filters often let surfers zero in on the greatest hits.

Some recommendation software goes a step further and tries to tell you what people like you enjoyed. Amazon.com tells you that people who bought The Da Vinci Code also bought Holy Blood, Holy Grail. Netflix gives you recommendations that are contingent on the movies that you yourself have recommended in the past. This is truly "collaborative filtering," because your ratings of movies help Netflix make better recommendations to others and their ratings help Netflix make better recommendations to you. The Internet is a perfect vehicle for this service because it's really cheap for an Internet retailer to keep track of customer behavior and to automatically aggregate, analyze, and display this information for subsequent customers.
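
Netflix and Amazon keep their production systems private, but the core collaborative-filtering idea fits in a few lines: measure how similarly two items are rated across all customers, then score an unseen item for you by how much it resembles the items you have already rated. A minimal sketch, with an invented ratings matrix:

```python
# A minimal item-based collaborative filtering sketch (not Netflix's or
# Amazon's actual algorithm). Rows are users, columns are movies; the
# ratings are invented for illustration.
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],   # user 0 (0 = not rated)
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine(a, b):
    """Cosine similarity between two rating columns."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def predict(user, item):
    """Predict a rating as a similarity-weighted average of the user's
    ratings on the other items."""
    rated = [j for j in range(ratings.shape[1]) if j != item and ratings[user, j] > 0]
    sims = np.array([cosine(ratings[:, item], ratings[:, j]) for j in rated])
    return sims @ ratings[user, rated] / sims.sum()

print(predict(0, 2))  # low: movie 2 is loved by users who dislike user 0's favorites
```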

Of course, these algorithms aren't perfect. A bachelor buying a one-time gift for a baby could, for example, trigger the program into recommending more baby products in the future. Wal-Mart had to apologize when people who searched for Martin Luther King: I Have a Dream were told they might also appreciate a Planet of the Apes DVD collection. Amazon.com similarly offended some customers who searched for "abortion" and were asked "Did you mean adoption?" The adoption question was generated automatically simply because many past customers who searched for abortion had also searched for adoption.

Still, on net, collaborative filters have been a huge boon for both consumers and retailers. At Netflix, nearly two-thirds of the rented films are recommended by the site. And recommended films are rated half a star higher (on Netflix's five-star ranking system) than films that people rent outside the recommendation system.

While lists of most-emailed articles and best-sellers tend to concentrate usage, the great thing about the more personally tailored recommendations is that they diversify usage. Netflix can recommend different movies to different people. As a result, more than 90 percent of the titles in its 50,000-movie catalog are rented at least monthly. Collaborative filters let sellers access what Chris Anderson calls the “long tail” of the preference distribution. The Netflix recommendations let its customers put themselves in rarefied market niches that used to be hard to find.

The same thing is happening with music. At Pandora.com, users can type in a song or an artist that they like and almost instantaneously the website starts streaming song after song in the same genre. Do you like Cyndi Lauper and Smash Mouth? Voilà, Pandora creates a Lauper/Smash Mouth radio station just for you that plays these artists plus others that sound like them. As each song is playing, you have the option of teaching the software more about what you like by clicking "I really like this song" or "Don't play this type of song again."

It's amazing how well this site works for both me and my kids. It not only plays music that each of us enjoys, but it also finds music that we like by groups we've never heard of. For example, because I told Pandora that I like Bruce Springsteen, it created a radio station that started playing the Boss and other well-known artists, but after a few songs it had me grooving to "Now" by Keaton Simons (and thanks to handy quick links, it's easy to buy the song or album on iTunes or Amazon). This is the long tail in action because there's no way a nerd like me would have come across this guy on my own. A similar preference system lets Rhapsody.com play more than 90 percent of its catalog of a million songs every month.

MSNBC.com has recently added its own "recommended stories" feature. It uses a cookie to keep track of the sixteen articles you've most recently read and uses automated text analysis to predict what new stories you'll want to read. It's surprising how accurate a sixteen-story history can be in kickstarting your morning reading. It's also a bit embarrassing: in my case, American Idol articles are automatically recommended.
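
MSNBC.com hasn't published how its feature works. A standard way to do this kind of automated text analysis, though, is to turn each article into a TF-IDF word-weight vector and recommend the candidate stories closest to your recent reading history. A minimal sketch under that assumption, with invented headlines:

```python
# A hedged sketch of recommending stories by text similarity (MSNBC's
# actual method is unpublished). The articles are invented one-liners.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

history = [
    "idol singer voted off after shaky performance",
    "judges split over idol finale song choices",
]
candidates = [
    "new idol contestants audition in chicago",
    "fed raises interest rates a quarter point",
]

vec = TfidfVectorizer()
tfidf = vec.fit_transform(history + candidates)
# Score each candidate by its average similarity to the reading history.
scores = cosine_similarity(tfidf[len(history):], tfidf[:len(history)]).mean(axis=1)
print(candidates[scores.argmax()])  # the Idol story wins
```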

Still, Chicago law professor Cass Sunstein worries that there's a social cost to exploiting the long tail. The more successful these personalized filters are, the more we as a citizenry are deprived of a common experience. Nicholas Negroponte, MIT professor and guru of media technology, sees in these "personalized news" features the emergence of the "Daily Me"—news publications that expose citizens only to information that fits with their narrowly preconceived preferences. Of course, self-filtering of the news has been with us for a long time. Vice President Cheney only watches Fox News. Ralph Nader reads Mother Jones. The difference is that now technology is creating listener censorship that is diabolically more powerful. Websites like Excite.com and Zatso.net started to allow users to produce "the newspaper of me" and "a personalized newscast." The goal is to create a place "where you decide what's the news." Google News allows you to personalize your newsgroups. Email alerts and RSS feeds allow you to select "This Is the News I Want." If we want, we can now be relieved of the hassle of even glancing at those pesky news articles about social issues that we'd rather ignore.

All of these collaborative filters are examples of what James Surowiecki called “The Wisdom of Crowds.” In some contexts, collective predictions are more accurate than the best estimate that any member of the group could achieve. For example, imagine that you offer a $100 prize to a college class for the student with the best estimate of the number of pennies in a jar. The wisdom of the group can be found simply by calculating their average estimate. It's been shown repeatedly that this average estimate is very likely to be closer to the truth than any of the individual estimates. Some people guess too high, and others too low—but collectively the high and low estimates tend to cancel out. Groups can often make better predictions than individuals.
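
The penny-jar experiment is easy to simulate. In the sketch below the numbers are arbitrary: each of 500 guessers can err by as much as 60 percent in either direction, yet the group average lands near the truth and beats nearly every individual guess.

```python
# Simulating the penny-jar experiment: individual guesses are noisy,
# but their average tends to land near the truth. Numbers are arbitrary.
import random

random.seed(1)
truth = 817                       # actual pennies in the jar
guesses = [truth * random.uniform(0.4, 1.6) for _ in range(500)]

crowd = sum(guesses) / len(guesses)
# Count how many individuals came closer to the truth than the average did.
beaten = sum(abs(g - truth) < abs(crowd - truth) for g in guesses)
print(round(crowd), "crowd estimate;", beaten, "of 500 individuals beat it")
```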

On the TV show Who Wants to Be a Millionaire, "asking the audience" produces the right answer more than 90 percent of the time (while phoning an individual friend produces the right answer less than two-thirds of the time). Collaborative filtering is a kind of tailored audience polling. People who are like you can make pretty accurate guesses about what types of music or movies you'll like. Preference databases are powerful ways to improve personal decision making.

eHarmony Sings a New Tune

There is a new wave of prediction that utilizes the wisdom of crowds in a way that goes beyond conscious preferences. The rise of eHarmony is the discovery of a new wisdom of crowds through Super Crunching. Unlike traditional dating services that solicit and match people based on their conscious and articulated preferences, eHarmony tries to find out what kind of person you are and then matches you with others who the data say are most compatible. eHarmony looks at a large database of information to see what types of personalities actually are happy together as couples.

Neil Clark Warren, eHarmony's founder and driving force, studied more than 5,000 married people in the late 1990s. Warren patented a predictive statistical model of compatibility based on twenty-nine different variables related to a person's emotional temperament, social style, cognitive mode, and relationship skills.

eHarmony's approach relies on the mother of Super Crunching techniques—the regression. A regression is a statistical procedure that takes raw historical data and estimates how various causal factors influence a single variable of interest. In eHarmony's case the variable of interest is how compatible a couple is likely to be. And the causal factors are twenty-nine emotional, social, and cognitive attributes of each person in the couple.

The regression technique was developed more than 100 years ago by Francis Galton, a cousin of Charles Darwin. Galton estimated the first regression line way back in 1877. Remember Orley Ashenfelter's simple equation to predict the quality of wine? That equation came from a regression. Galton's very first regression was also agricultural. He estimated a formula to predict the size of sweet pea seeds based on the size of their parent seeds. Galton found that the offspring of large seeds tended to be larger than the offspring of average or small seeds, but they weren't quite as large as their large parents.

Galton calculated a different regression equation and found a similar tendency for the heights of sons and fathers. The sons of tall fathers were taller than average but not quite as tall as their fathers. In terms of the regression equation, this means that the formula predicting a son's height will multiply the father's height by some factor less than one. In fact, Galton estimated that every additional inch that a father was above average only contributed two-thirds of an inch to the son's predicted height.
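
Galton's two-thirds figure is just the slope of a least-squares line, and it's easy to watch a slope below one emerge from data built that way. A minimal sketch, with fabricated heights:

```python
# Fitting a Galton-style regression line: son_height = a + b * father_height.
# The heights are fabricated so that the true slope is about 2/3.
import numpy as np

rng = np.random.default_rng(0)
mean = 69.0                                      # average height in inches
fathers = rng.normal(mean, 2.5, size=1000)
sons = mean + (2/3) * (fathers - mean) + rng.normal(0, 1.5, size=1000)

slope, intercept = np.polyfit(fathers, sons, 1)  # least-squares line
print(round(slope, 2))                           # ~0.67: tall fathers, less-tall sons

# A 75-inch father (6 inches above average) predicts roughly
# 69 + (2/3) * 6 = 73 inches for his son.
print(round(intercept + slope * 75, 1))
```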

He found the pattern again when he calculated the regression equation estimating the relationship between the IQ of parents and children. The children of smart parents were smarter than the average person but not as smart as their folks. The very term "regression" doesn't have anything to do with the technique itself. Galton just called the technique a regression because the first things that he happened to estimate displayed this tendency—what he called "regression toward mediocrity"—and what we now call "regression toward the mean."

The regression literally produces an equation that best fits the data. Even though the regression equation is estimated using historical data, the equation can be used to predict what will happen in the future. Galton's first equation predicted seed and child size as a function of their progenitors' size. Orley Ashenfelter's wine equation predicted how temperature and rain would impact wine quality.

eHarmony produced a formula to predict preference. Unlike the Netflix or Amazon preference engines, the eHarmony regression is trying to match compatible people by using personality and character traits that people may not even know they have or be able to articulate. Indeed, eHarmony might match you with someone you might never have imagined you could like. This is the wisdom of crowds that goes beyond the conscious choices of individual members to see what works at unconscious, hidden levels.

eHarmony is not alone in trying to use data-driven matching. Perfectmatch matches users based on a modified version of the Myers-Briggs personality test. In the 1940s, Isabel Briggs Myers and her mother Katharine Briggs developed a test based on psychiatrist Carl Jung's theory of personality types. The Myers-Briggs test classifies people into sixteen different basic types. Perfectmatch uses this M-B classification to pair people who have personalities that historically have the highest probability of forming lasting relationships.

Not to be outdone, True.com collects data from its clients on ninety-nine relationship factors and feeds the results into a regression formula to calculate the compatibility index score between any two members. In essence, True.com will tell you the likelihood you will get along with anyone else.
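
True.com's ninety-nine factors and their weights are proprietary, so any concrete formula is guesswork. Still, the idea of a regression-based compatibility index can be sketched as a weighted sum of per-factor agreement between two members; the factor scales, weights, and 0-to-100 scaling below are all invented:

```python
# A hypothetical compatibility index in the spirit described: a regression-
# style weighted sum over per-factor agreement between two members.
# True.com's actual factors and weights are proprietary; everything here
# is invented for illustration.
import numpy as np

rng = np.random.default_rng(7)
n_factors = 99
weights = rng.uniform(0.5, 1.5, size=n_factors)   # stand-in regression coefficients

def compatibility(a, b):
    """Score two members' factor profiles (each a length-99 vector of
    answers on a 1-5 scale): closer answers on heavily weighted factors
    raise the score."""
    agreement = 1 - np.abs(a - b) / 4              # 1 = identical, 0 = opposite
    return 100 * weights @ agreement / weights.sum()

alice = rng.integers(1, 6, size=n_factors)
bob = rng.integers(1, 6, size=n_factors)
print(round(compatibility(alice, bob)))            # 0-100 index
```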

While all three services crunch numbers to make their compatibility predictions, their results are markedly different. eHarmony believes in finding people who are a lot like you. “What our research kept saying,” Warren has observed, “is [to] find somebody whose intelligence is a lot like yours, whose ambition is a lot like yours, whose energy is a lot like yours, whose spirituality is a lot like yours, whose curiosity is a lot like yours. It was a similarity model.”

Perfectmatch and True.com in contrast look for complementary personalities. “We all know, not just in our heart of hearts, but in our experience, that sometimes we're attracted [to], indeed get along better with, somebody different from us,” says Pepper Schwartz, the empiricist behind Perfectmatch. “So the nice thing about the Myers-Briggs was it's not just characteristics, but how they fit together.”

This disagreement over results isn't the way data-driven decision making is supposed to work. The data should be able to adjudicate whether similar or complementary people make better matches. It's hard to tell who's right, because the industry keeps its analysis and the data on which the analysis is based a tightly held secret. Unlike the data from a bunch of my studies (on taxicab tipping, affirmative action, and concealed handguns) that anyone can freely download from the Internet, the data behind the matching rules at the Internet dating services are proprietary.

Mark Thompson, who developed Yahoo! Personals, says it's impractical to apply social science standards to the market. “The peer-review system is not going to apply here,” Thompson says. “We had two months to develop the system for Yahoo! We literally worked around the clock. We did studies on 50,000 people.”
