They gained access to live weather feeds through the National Oceanic and Atmospheric Administration (NOAA) and the National Climatic Data Center (NCDC). Khaliq tapped his network at Stanford and Friedberg looked into climate data sources from Berkeley. The data had to be up to the minute and constantly streaming; telemetric, not numbers on tables. And there had to be a lot of it. They began aggregating data going back three decades from two hundred weather stations, and quickly doubled that number. Ultimately the product they would be selling was a climate model, just like those pesky IPCC models that prove so inconvenient for politicians. But these models would tell you the probability of a specific place getting too much sun, too much rain, and the like, within a time frame that mattered to a particular stakeholder. That's a very different product from a big projection about the global climate a hundred years hence.

But which industries were the most vulnerable to financial loss from weather events? Neighborhood bike shops, it turned out, didn't constitute a large enough market on which to found an insurance business. On the other end of the spectrum they considered the travel industry, but this field was dominated by large airlines that could afford to self-insure. Energy was another option, but that market had too few players as well. Friedberg and Khaliq soon realized that what they were selling wasn't exactly new; it had existed prior to 2007 in the form of the infamous derivative contract.

Unlike the rest of the derivatives market, which has become synonymous with shady and complicated backroom deals that produce money but not value, derivative contracts in the context of weather insurance are actually straightforward. Let's say you are a big energy company and you do deep-water drilling off the coast of Mexico. You have offshore rigs that are vulnerable to big weather incidents. Massive storms can cause production delays, extensive damage to equipment, and worse. To insure—or “hedge”—against financial loss from a big weather event, you can go to a financial services firm like Goldman Sachs and ask to buy insurance. Goldman Sachs then does what Friedberg was trying to do: build a working model to figure out what sorts of weather events are most likely and, from that, how much it would cost to insure against weather loss. The way Goldman Sachs does this is extremely expensive; they build these models for a very small customer base. For Friedberg that was not an option. Agriculture was the largest, most underserved market. Individual farmers needed a way to insure against crop loss but couldn't afford the hedges Goldman Sachs was selling in the form of derivative contracts.

Here then was the opportunity to do the same thing big Wall Street banks charged huge service fees for, but Friedberg was going to do it for millions of people on demand (at scale), automatically, and at a much lower cost, effectively scooping up the business of small farmers that such companies as Goldman Sachs didn't care to reach. That they had no experience in agriculture (or insurance, for that matter) didn't seem to matter.

They could limit the variables the system had to consider by focusing on specific crops such as corn, soybeans, and wheat. Each crop needed a certain amount of sun, water, and growing time, and could fetch a particular price at market. They next had to figure out what the weather parameters, and thus the potential loss, were going to be for the farmer growing each crop. “We realized we had to measure the weather much more locally. No one wanted coverage for weather a hundred and fifty miles away,” Friedberg told the Stanford students.

They expanded their weather data to millions of daily measurements. Currently they use that data to build simulations for each potential location. They run the simulation ten thousand times, a process that takes several days. The company generates 25 terabytes of data every two weeks. If you ran that simulation on a conventional server, it would take forty years. If you ran it on the ENIAC, it would take centuries. Today, they farm the heavy processing out to the cloud, specifically Amazon's cloud services. They are now one of the largest users of MapReduce.

The result: the Climate Corporation can output the distribution of possible weather outcomes for any 2.5-by-2.5-mile area in the United States, on any given day, over the next two years (730 days). “It is the probabilistic distribution of things that might happen that allows us to figure out what price to charge for the insurance that we are selling,” Friedberg told MSNBC reporter John Roach.
16
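
To make the pipeline concrete, here is a minimal sketch, in Python, of the general technique: simulate a growing season many times with a toy weather generator, then read the price of coverage off the resulting distribution. Every name and number below (season length, rain odds, drought threshold, payout, the 20 percent loading) is invented for illustration; Climate Corporation's actual models ingest millions of measurements and run across Amazon's cloud.

```python
import random

# Toy Monte Carlo weather model. All parameters are hypothetical, chosen
# only to illustrate "run the simulation ten thousand times."
SEASON_DAYS = 120          # assumed growing-season length (days)
DROUGHT_THRESHOLD = 250.0  # assumed minimum seasonal rainfall (mm)
PAYOUT_PER_ACRE = 50.0     # assumed payout if the threshold is missed
N_SIMULATIONS = 10_000

def simulate_season(rng: random.Random) -> float:
    """Draw one plausible season of rainfall (mm): most days are dry,
    wet days drop an exponentially distributed amount of rain."""
    total = 0.0
    for _ in range(SEASON_DAYS):
        if rng.random() < 0.3:               # assume 30% chance of rain
            total += rng.expovariate(1 / 8)  # assume mean 8 mm on wet days
    return total

rng = random.Random(42)
outcomes = [simulate_season(rng) for _ in range(N_SIMULATIONS)]

# The distribution of outcomes is the product: from it we can estimate the
# probability of a loss event and price the insurance accordingly.
p_drought = sum(t < DROUGHT_THRESHOLD for t in outcomes) / N_SIMULATIONS
expected_payout = p_drought * PAYOUT_PER_ACRE
premium = expected_payout * 1.2  # assumed 20% loading for costs and margin

print(f"P(drought) ~ {p_drought:.3f}; "
      f"fair premium ~ ${expected_payout:.2f}/acre; "
      f"quoted premium ~ ${premium:.2f}/acre")
```

The scale of the real system, which simulates every 2.5-by-2.5-mile cell in the country rather than one toy grid cell, is what pushes the work onto MapReduce; the arithmetic of pricing off a distribution stays this simple.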

In other words, they're assessing the risk of financial loss from weather, but they're doing so with a level of precision that surpasses what your local weatherperson can do. They're not predicting the weather once; they're predicting it ten thousand times per customer, learning more and more about the sort of weather that particular customer will experience every time they run the simulation, in much the same way that if you were to sit down and watch ten thousand games of tennis, your ability to predict the next Wimbledon champion would be better than average. It's not magic. It's a statistical trick with an enormous number of data points, including not only what's happening in the clouds but what's happening in the soil, what stage of growth the crops are in, and what seed a farmer planted.
17

Are the estimates good? Ultimately, the market will be the judge. The government already offers farmers some coverage against loss. Climate Corporation's Total Weather Insurance (TWI) is designed to complement what the government does already. Farmers don't tend to have a lot of discretionary income for supplemental insurance, so if the company's product is too expensive, it will lose customers fast. Tellingly, TWI has become one of the most expensive insurance products on the market for farmers. And as drought conditions worsen year after year, the price is going up. In 2012, throughout parts of Kansas that were experiencing extreme drought, Climate Corporation charged $30 to cover an acre that would only yield $50 in profit.

Climate Corporation doesn't have to put any of its or its backers' money down to settle claims. The money the firm pays out comes from reinsurance, which, in effect, is insurance for insurance companies. The private reinsurers that the company deals with are multibillion-dollar outfits; they won't go belly-up based on claims from a few unlucky winter wheat growers, but that doesn't mean that those contracts, if they're poorly designed, won't lead reinsurers to raise prices on Climate Corporation, which in turn will force Climate Corporation to raise premiums on its customers.

Part of the reason for the already big bill is that the company sends a lot of money back to its policyholders. In 2012, 80 percent of its policyholders who were corn farmers received a payout from Climate Corporation for crop loss. In the states of Illinois, Nebraska, Colorado, Kentucky, Missouri, Oklahoma, and Tennessee, where the 2012 drought hit particularly hard, virtually every Climate Corporation customer got a check.
18

More remarkable is how these checks are issued. When the company detects that water, heat, or other conditions have reached a point where the crop will suffer damage, the system calculates the cost and sends the payout. The policyholder doesn't lift a finger. This is Climate Corporation's edge: it pays off far faster than the government insurance it's supplementing.
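
The trigger logic itself is easy to sketch. Below is a toy version of that parametric payout check, with hypothetical policy fields and hand-set thresholds; in the real system the triggers come out of the simulations described above, not numbers typed in by hand.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    acres: float
    payout_per_acre: float
    max_heat_days: int      # assumed: crop suffers beyond this many hot days
    min_rainfall_mm: float  # assumed: ...or below this much seasonal rain

def assess(policy: Policy, heat_days: int, rainfall_mm: float) -> float:
    """Return the check to cut; the policyholder never files a claim."""
    triggered = (heat_days > policy.max_heat_days
                 or rainfall_mm < policy.min_rainfall_mm)
    return policy.acres * policy.payout_per_acre if triggered else 0.0

# Observed conditions come from the weather feed, not from the farmer.
policy = Policy(acres=200, payout_per_acre=40.0,
                max_heat_days=12, min_rainfall_mm=300.0)
print(f"automatic payout: ${assess(policy, heat_days=17, rainfall_mm=210.0):,.2f}")
# -> automatic payout: $8,000.00
```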

It sounds like a business model that could never endure. Yet it's booming, precisely because the company is able to adapt the price of a policy to reflect the distribution of loss risk. Friedberg claims that, far from damaging his business, the major drought of 2012 fell within the distribution of outcomes the company's models projected. The company is roughly doubling the number of counties it covers each year. In 2013 it doubled its number of agents from the prior year. Climate Corporation believes it is first in line to protect the $3.8 trillion agriculture market.

Is Climate Corporation an infinite forecast machine? No. The weather is growing less predictable, but our modeling abilities are advancing more quickly. In this way, Climate Corporation is much closer to von Neumann's dream of influencing the weather than is the IPCC. Whereas the importance of the IPCC is waning precisely because it can offer nothing but a number of scenarios, Climate Corporation has learned how to make money by predicting what the weather will cost. Unless you're fighting a war, knowing what the weather will cost you is as valuable as knowing what the weather conditions will be. That's the difference between the big data present and the naked future where people have telemetric data to make individual decisions about what to do next. Are we any closer to controlling the weather? The answer is both yes and no. The ability to mitigate risk is a form of control.

It is worth noting that Climate Corporation was recently bought by Monsanto, the controversial company most closely associated with genetically modified foods and with a number of patent lawsuits against farmers. Monsanto may use Climate Corporation's data to engineer new, genetically novel seeds that are more resistant to heat and water stress, which could be a boon to the fight against global hunger. But not all of Monsanto's business practices are in line with the public interest, and the company may take the same protective approach to climate data as it has taken to seeds, restricting access to this important resource.

Because we live in the age of the vaunted entrepreneur, when even our most nominally right-leaning politicians make a frequent habit of praising the free market and all its wondrous efficiencies while denigrating government as bloated and inefficient, we may draw from the story of Friedberg and von Neumann the simple yet wrong conclusion that business was able to adapt to our rapidly changing climate where government failed to arrest it because the business mentality is inherently superior to that of the public servant.

We would do well to remember that Climate Corporation does not have to take a direct stance on man-made climate change to sell its product. It's a company that provides a real and valuable service but it isn't fixing climate change so much as profiting from a more advanced understanding of it. Higher corn prices and decreased crops can create profit but they do not—in themselves—create value. Ultimately, we will have to fix this problem, and government will have to be part of that solution.
19

If not for the wisdom, creativity, and genius of people who weren't afraid to be labeled public servants, there would be no international satellite data for NOAA to help Climate Corporation improve its models. There may not even be computers, as we understand them, on which to write code or do calculations.

Today, in many respects, we are moving backward on climate change even as we have learned to profit by it. But we are finally just beginning to understand what climate change means to us as individuals, which, perhaps ironically, could be the critical step in addressing the greatest problem we have ever faced. As the big data present becomes the naked future, we may still be able to save our species as well as many others from the worst consequences of our excess.

CHAPTER 5

Unities of Time and Space

The date is February 29, 2012. The setting is the O'Reilly Strata Conference in Santa Clara, California. Xavier Amatriain, engineering manager at Netflix, is concluding his presentation on how the company recommends movies to users. In 2006 Netflix launched a $1 million prize for a better recommendation engine. The conditions for the award: the winning algorithm had to predict 2.8 million withheld ratings, around 6 per user, 10 percent more accurately than Netflix's existing system (with the 10 percent measured by root-mean-square deviation).

In 2009 Netflix awarded the prize to a team called BellKor's Pragmatic Chaos, whose composite algorithm performed just over 10 percent better than the system the company was using at the time. Netflix has since stopped using it.
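
For readers curious about the yardstick: root-mean-square deviation is simple to compute. The sketch below scores a made-up contest entry against a made-up baseline over five invented ratings; the real qualifying set held about 2.8 million of them.

```python
import math

def rmse(predicted: list[float], actual: list[float]) -> float:
    """Root-mean-square error between predicted and actual ratings."""
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
    )

actual   = [4.0, 3.0, 5.0, 2.0, 4.0]  # invented star ratings
baseline = [3.5, 3.5, 4.0, 3.0, 3.5]  # stand-in for Netflix's old system
entrant  = [3.8, 3.1, 4.6, 2.4, 3.9]  # stand-in for a contest submission

improvement = (rmse(baseline, actual) - rmse(entrant, actual)) / rmse(baseline, actual)
print(f"improvement over baseline: {improvement:.1%}")  # prize required >= 10%
```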

I am in the audience for this presentation. Like a lot of people, I use Netflix begrudgingly, as I rarely, if ever, like the movies the site recommends for me. I have no attachment to Gritty Sci-Fi Fantasy, Dark Independent Thrillers, Heartfelt Movies, or Controversial Documentaries. I don't like movies because they share some cobbled-together thematic resemblance to a movie I just watched. I enjoy the cinema offerings that I enjoy because, quite simply, they're good. I can predict that I'm going to like a particular movie almost as soon as I begin watching it.

Today, Netflix has more than 5 billion ratings and receives 4 million new ratings every day. It streams billions of hours of content every three months. Millions of users now stream movies from services like Netflix, and in so doing they create unique telemetric data: ratings, and even the points at which people start and stop watching particular movies—data that go toward revealing not only how Netflix users on average react to different scripts but also how individual tastes change from viewer to viewer and movie to movie. What I want to know from Amatriain is: Why can't Netflix predict I'll like a movie any better than I can?

I approach the microphone.

“In that ninety-nine percent of movies are crap, shouldn't there be more of a research effort to separate movies that are actually good from movies that are bad—research beyond genre and adjective tagging? And aren't you the best person in the world to lead that effort?” I ask.

He stares at me coldly. “I would argue that the concept of quality is very subjective,” he answers. He goes on to explain that because of this subjectivity there is no way to predict what any individual person will actually like in a movie. The most we can hope for is a system to predict the types of movies someone might like.

If we assume the human response to art lends itself to some sort of analysis, then what precisely can be measured and how?

At the same time that I was having this conversation with Amatriain, Netflix was in the process of doing exactly what I was daring it to do: using its storehouse of user-generated telemetric data to predict and respond to its viewers. In February 2013 Netflix debuted its second original series, a political-suspense drama called House of Cards that was based on an old British television show. The new piece was set in Washington, D.C., that hotbed of Macbethian intrigue. House of Cards is an example of what will probably be called “optimized television.” It represents all the important bits of information Netflix has been able to glean from silently observing the digital-viewing habits of its customers.

As Andrew Leonard noted in his write-up of the show on Salon.com, “Netflix's data indicated that the same subscribers who loved the original BBC production also gobbled down movies starring Kevin Spacey or directed by David Fincher. Therefore, concluded Netflix executives, a remake of the BBC drama with Spacey and Fincher attached was a no-brainer, to the point that the company committed $100 million for two 13-episode seasons.”
1

Netflix, like Amazon, knows that correlations across data sets don't offer scientific certainty, but they are enough for sales. Netflix isn't out to answer why people like to binge on political-suspense TV, why Kevin Spacey appeals to those audiences, or, indeed, why certain plots hold interest better than others. The question of what causes a film or television drama to be good can be left to the art majors; Netflix has got subscriptions to sell. Every time we see a notice saying “People who bought this item also bought . . .” and we succumb to the urge to follow the herd and click “Buy,” we show that this strategy works.
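
A bare-bones version of that "also bought" logic fits in a few lines. The sketch below counts how often titles co-occur in invented viewing histories; production recommenders add normalization, weighting, and vastly larger matrices, but the correlation-not-causation core is the same.

```python
from collections import Counter
from itertools import combinations

# Invented viewing histories, one set of titles per account.
baskets = [
    {"house_of_cards", "the_west_wing", "damages"},
    {"house_of_cards", "the_west_wing", "se7en"},
    {"the_west_wing", "veep"},
    {"house_of_cards", "se7en", "the_social_network"},
]

# Count every ordered pair of titles watched by the same account.
cooccur: Counter[tuple[str, str]] = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        cooccur[(a, b)] += 1
        cooccur[(b, a)] += 1

def also_watched(item: str, k: int = 3) -> list[str]:
    """Rank other titles by how often they co-occur with `item`."""
    scores = Counter({b: n for (a, b), n in cooccur.items() if a == item})
    return [title for title, _ in scores.most_common(k)]

print(also_watched("house_of_cards"))
# -> ['the_west_wing', 'se7en', 'damages']
```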

Back to the question of what can and cannot be predicted about individual taste from telemetric data. There are two ways of considering this problem. The first approach is Amatriain's: “The concept of quality is very subjective.” According to this line of thinking, with rankings, recorded viewings, and friend recommendations, you can make a number of determinations about the type of movie or television millions of people will watch, how they will watch it, and how to keep them watching. But an algorithm can't begin to figure out why a person likes the movies that she likes.

The second line of thinking can be summed up thusly: all of the above is bullshit.

The Engineering Professor

The year is 1996. University of Pennsylvania marketing professor Jehoshua Eliashberg is at a movie multiplex in downtown Philadelphia. He finds all the movies on offer unappealing. This is hardly an unusual experience, but Jehoshua Eliashberg is a rather unusual man.

Born in Israel, the son of a prosperous executive, Eliashberg became obsessed with logical problems at a young age. He studied electrical engineering at the Technion–Israel Institute of Technology but longed for a human challenge; he went into marketing and excelled, eventually landing a permanent faculty position at Wharton, consistently ranked among the most prestigious business schools in the world.

Standing outside that movie theater in 1996, it occurred to him that there had to be a more scientific way to help the theater manager pick better movies. Eliashberg did some digging and discovered that all the decisions about what would show at his local theater came out of a central office in New York. In fact, the head of this office made all the booking decisions for theaters in this multiplex-movie chain across multiple states. This was the person who was going to pick what would play in Eliashberg's local theater. From a management perspective, this seemed terribly removed. Obviously, the guy was not in close touch with customers.

Eliashberg concluded that to make better decisions about what movies to book and how long to book them, the regional manager would need estimates of three things: the national box-office revenue for particular movies, the expected revenue for individual screens, and the weekly box-office revenue for particular films, since a typical movie run lasted less than fifteen weeks but could vary.
2

The model that he and his colleagues Berend Wierenga and Chuck Weinberg went on to publish was useful for multiplex managers who already had three weeks of box-office data on a movie they were showing and were looking to decide whether to let the movie continue to play into the later stages of its life cycle or to replace it with something else. It was a good result. But he wasn't satisfied. There had to be a way to apply some scientific knowledge to the actual making of a movie, to improve not just the array of products on offer but to do so before millions of dollars had been lost in production, through a better green-lighting process. There had to be a way to use statistics to make better films.
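
A crude illustration of the kind of decision rule such a booking model supports (this is not the published model, and every figure is hypothetical): fit an exponential decay to three weeks of grosses, forecast week four, and compare it with what a replacement booking might earn.

```python
import math

weekly_gross = [48_000.0, 31_000.0, 20_500.0]  # assumed weeks 1-3, one screen

# Least-squares fit of log(gross) = log(a) + b * week gives the decay rate.
weeks = range(len(weekly_gross))
logs = [math.log(g) for g in weekly_gross]
n = len(weekly_gross)
mean_w = sum(weeks) / n
mean_l = sum(logs) / n
b = (sum((w - mean_w) * (l - mean_l) for w, l in zip(weeks, logs))
     / sum((w - mean_w) ** 2 for w in weeks))
a = math.exp(mean_l - b * mean_w)

forecast_week4 = a * math.exp(b * 3)  # week index 3 = fourth week
replacement_week1 = 26_000.0          # assumed opening for a new booking

decision = ("hold the print" if forecast_week4 > replacement_week1
            else "book the replacement")
print(f"forecast week 4: ${forecast_week4:,.0f} -> {decision}")
```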

The first step was to establish the parameters of the problem: What were the components of a good movie? Here is where a scientist interested only in a pure quantification experiment would have thrown up his hands, as taste is subjective after all. But Eliashberg believes there are certain universal truths to storytelling and that these truths are accessible. He and his colleagues began reading the works of prominent film critics, including Irwin Blacker's Elements of Screenwriting, Syd Field's Screenplay: The Foundations of Screenwriting, and James Monaco's classic How to Read a Film.
In so doing, they took what some scientists might consider to be too great a liberty by basing their theory on the writings of experts, people who had deep experience in the areas of theater, film, and storytelling. The model would be built around their (gasp!) opinions. Eliashberg wasn't interested in Moneyballing the movies. He didn't want to throw out insight just because it had never been quantified. He just wanted to understand how different expert insights could fit into a working model.

“I'm always looking to solve problems in a more rigorous way,” he explained to me, his voice anchored by his deep Israeli accent. “However, my experience has taught me that the best solution is usually a combination of intuition and science, human decision making guided by a formal model. Scientific methodology by itself doesn't do very well, but human intuition by itself also doesn't do well.”

Many of the critical opinions were related to best practices for narrative: how to build tension, craft sympathetic characters, create drama, time a joke, and so on. Other suggestions were specific to an uncanny degree: How long should a flashback be in an action movie? Does a topical and important premise play better in a suspense film than in a comedy? How should dialogue be distributed among the characters in a family film, a political thriller, a horror movie? How many scenes should be set inside or outside (“interior” versus “exterior” in script parlance)? What is the optimal title length? One of the most important considerations is genre. Some genres consistently perform better than others, but any film can perform well if it hews close to the combinations of elements that are key to how its genre functions.

This idea actually forms the basis for all film criticism, but it predates Monaco and any other critic. It was invented before film and even before what we today call theater. It comes from a time when performance was understood to be the recitation of poetry.

It was Aristotle who first established the idea that different genres may have different rules. In his seminal work on art, Poetics, Aristotle singles out three genres of poetry: comedies, tragedies, and epics. Epics were the blockbusters of the Hellenistic period. They involved flights of fancy, special effects, and the impossible. Comedies were works about stupid, base, or low men. The most important genre for Aristotle was tragedy, which he defined not as we do today—stories with sad endings possibly involving teenagers who kill themselves—but as “an imitation of an action that is serious, complete, and of a certain magnitude.” Aristotle lays down a series of guidelines for how to craft these certain-magnitude stories. The most important of these insights are the three unities:

Unity of place: The action should be confined to a specific locale, a town, a castle, et cetera.

Unity of time: One revolution of the sun or a single day is the optimal time span for a dramatic work.

Unity of action: Basically, all action should move the plot forward in some way. Cut out gratuitous action if at all possible. This rule is the most closely related to the certain-magnitude portion of the description of tragedy.
3

Carl Foreman's script for the 1952 Western High Noon (for which Foreman earned an Academy Award nomination) observes all three unities perfectly. The stage is set in the one-road town of Hadleyville in New Mexico Territory. It opens with newly retired marshal Will Kane marrying Amy Fowler, a beautiful young Quaker woman played by Grace Kelly. When Kane learns that his nemesis, the outlaw Frank Miller, is due in Hadleyville on the noon train accompanied by three roughnecks, Kane attempts to rally the town to mount a defense. But he's abandoned and betrayed by the people he serves at every turn. In the span of a single morning, Kane goes from happy newlywed to town pariah. No one expects him to survive the day. The action culminates in a gunfight. Kane emerges victorious, but he and his young wife have been transformed by the ordeal, and the people of Hadleyville, whom Kane has sacrificed everything to protect, have been revealed as cowards. The movie ends before the sun sets on the day.

Aristotle saw very different rules for each of the ancient genres. An epic was successful if it delighted the audience. Epics were often about the past, episodes that could be embellished for the glory of the historical figures depicted therein. The action could be random and disconnected so long as the ultimate effect was maximum entertainment. Today, the Pirates of the Caribbean and Star Wars franchises could both be considered Aristotelian epics, and likely would not be improved by any observance of the unities.

Action in tragedy, according to Aristotle, should be “comprised within such limits, that the sequence of events, according to the law of probability or necessity, will admit of a change from bad fortune to good, or from good fortune to bad” (emphasis added). This is an important point that strikes directly at what Aristotle saw as the purpose of poetry and of storytelling. A work of tragedy presents a hypothesis about causality. As a method for exploring consequences it asks: How does the world work? How do humans operate within it? Why does misfortune befall the good? The credibility of the storyteller is supremely important in broaching these weighty subjects. Giant leaps across time and space damage the inherent believability of a narrative, even if these jumps are entertaining.

A theatrical work of any genre, whether comic, epic, or tragic, can be successful so long as its author understands its true nature, according to Aristotle. This is why there is such a thing as art that is good and art that is bad, quantifiably. One consumer may prefer an artistic-suspense piece to a family-adventure flick, but surely everyone prefers a good family adventure to a bad suspense film. More important, this system can be used to calculate how much money a particular script will make when produced.
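
Taken to its logical end, the idea looks something like the toy model below: code a script as a vector of expert-derived features and map it to a revenue estimate with a linear score. The features and weights here are invented for illustration; the actual models were fit to historical box-office data and expert codings of real scripts.

```python
# Hypothetical weights, in millions of dollars per unit of each feature.
FEATURE_WEIGHTS = {
    "is_family_adventure": 18.0,
    "clear_three_act_structure": 11.0,
    "dialogue_evenly_distributed": 4.0,
    "interior_scene_ratio": -6.0,  # per 10% above the genre norm (assumed)
    "title_length_words": -1.5,    # per word past the "optimal" length (assumed)
}
BASELINE_MILLIONS = 22.0  # assumed gross for an average release

def predict_gross(script_features: dict[str, float]) -> float:
    """Linear score: baseline plus the weighted sum of coded features."""
    return BASELINE_MILLIONS + sum(
        FEATURE_WEIGHTS[name] * value for name, value in script_features.items()
    )

script = {
    "is_family_adventure": 1.0,
    "clear_three_act_structure": 1.0,
    "dialogue_evenly_distributed": 1.0,
    "interior_scene_ratio": 2.0,  # 20% more interiors than its genre norm
    "title_length_words": 1.0,    # one word over the "optimal" length
}
print(f"estimated gross: ${predict_gross(script):.1f}M")  # -> $41.5M
```

Whether the weights are right is exactly the empirical question Eliashberg's combination of expert opinion and historical data was designed to answer.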
