Black holes trap anything nearby and transform it through strong internal forces. Because black holes are characterized entirely by their mass, charge, and a quantity called angular momentum, they don’t keep track of what went in or how it got there—the information that went in appears to be lost. Black holes release that information only slowly, through subtle correlations in the radiation that leaks out. Furthermore, large black holes decay slowly whereas small ones disappear right away. This means that whereas small black holes don’t last very long, large ones are essentially too big to fail. Any of this ring a bell? Information—plus debts and derivatives—that went into banks became trapped and was transformed into indecipherable, complicated assets. And after that, information—and everything else that went in—was only slowly released.

With too many global phenomena today, we really are doing uncontrolled experiments on a grand scale. Once, on the radio show Coast to Coast, I was asked whether I would proceed with an experiment—no matter how potentially interesting—if it had a chance of endangering the entire world. To the chagrin of the mostly conservative radio audience, my response was that we are already doing such an experiment with carbon emissions. Why aren’t more people worried about that?

As with scientific advances, rarely do abrupt changes happen without any advance indicators. We don’t know that climate will change cataclysmically, but we have already seen indications of melting glaciers and changing weather patterns. The economy might have suddenly failed in 2008, but many financiers knew enough to leave the markets in advance of the collapse. New financial instruments and high carbon levels have the potential to precipitate radical changes. In such real-world situations, the question isn’t whether risk exists. The question is how much caution to exercise if we are to properly account for possible dangers and decide on an acceptable level of risk.

CALCULATING RISK

Ideally, one of the first steps would be to calculate risks. Sometimes people simply get the probabilities wrong. When John Oliver interviewed Walter Wagner, one of the LHC litigants, about black holes on The Daily Show, Wagner forfeited any credibility he might have had when he said the chance of the LHC destroying the Earth was 50-50 since it either will happen or it won’t. John Oliver incredulously responded that he “wasn’t sure that’s how probability works.” Happily, John Oliver is correct, and we can make better (and less egalitarian) probability estimates.
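As a purely illustrative contrast to the “50-50 because it either happens or it won’t” reasoning, here is a minimal sketch, not taken from the book, of one standard statistical device, the so-called rule of three: if an event has never occurred in n independent trials, its per-trial probability is, with roughly 95 percent confidence, below about 3/n. The trial count below is hypothetical.

# A minimal sketch (not from the book): bounding the probability of an
# event that has never been observed, rather than calling it 50-50.
# If an event has not occurred in n independent trials, the roughly 95
# percent confidence upper bound on its per-trial probability is about 3/n.

def rule_of_three_upper_bound(n_trials):
    """Approximate 95% upper bound on a probability after n_trials with zero occurrences."""
    return 3.0 / n_trials

# Hypothetical trial count, purely for illustration:
print(rule_of_three_upper_bound(1_000_000))  # 3e-06, nowhere near 0.5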

But it’s not always easy. Consider the probability of detrimental climate change—or the probability of a bad situation in the Middle East, or the fate of the economy. These are much more complex situations. It’s not merely that the equations that describe the risks are difficult to solve. It’s that we don’t even necessarily know what the equations are. For climate change, we can do simulations and study the historical record. For the other two, we can try to find analogous historical situations, or make simplified models. But in all three cases, huge uncertainties plague any predictions.

Accurate and trustworthy predictions are difficult. Even when people do their best to model everything relevant, the inputs and assumptions that enter any particular model might significantly affect a conclusion. A prediction of low risk is meaningless if the uncertainties associated with the underlying assumptions are much greater. It’s critical to be thorough and straightforward about uncertainties if a prediction is to have any value.

Before considering other examples, let me recount a small anecdote that illustrates the problem. Early in my physics career, I observed that the Standard Model allowed for a much wider range of values for a particular quantity of interest than had been previously predicted, due to a quantum mechanical contribution whose size depended on the (then) recently measured and surprisingly large value of the top quark mass. When presenting my result at a conference, I was asked to plot my new prediction as a function of top quark mass. I refused, knowing there were several different contributions and the remaining uncertainties allowed for too broad a range of possibilities to permit such a simple curve. However, an “expert” colleague underestimated the uncertainties and made such a plot (not unlike many real-world predictions made today), and—for a while—his prediction was widely referenced. Eventually, when the measured quantity didn’t fall within his predicted range, the disagreement was correctly attributed to his overly optimistic uncertainty estimate. Clearly, it’s better to avoid such embarrassments, both in science and in any real-world situation. We want predictions to be meaningful, and they will be only if we are careful about the uncertainties that we enter.
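To make the moral of that anecdote concrete, here is a minimal sketch with invented numbers, not the actual top-quark calculation, of how an understated uncertainty turns a perfectly consistent measurement into an apparent contradiction.

# A minimal sketch with invented numbers (not the actual physics) of why
# understated uncertainties make a prediction look falsely precise.

def consistent(prediction, uncertainty, measurement, n_sigma=2.0):
    """True if the measurement lies within n_sigma uncertainties of the prediction."""
    return abs(measurement - prediction) <= n_sigma * uncertainty

central_prediction = 10.0     # predicted value, arbitrary units
honest_uncertainty = 3.0      # includes every known contribution
optimistic_uncertainty = 0.5  # several contributions ignored
measured_value = 12.0         # hypothetical later measurement

print(consistent(central_prediction, honest_uncertainty, measured_value))      # True
print(consistent(central_prediction, optimistic_uncertainty, measured_value))  # False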

Real-world situations present even more intractable problems, requiring us to be still more careful about uncertainties and unknowns. We have to be cautious about the utility of quantitative predictions that cannot or do not take account of these issues.

One stumbling block is how to properly account for systemic risks, which are almost always difficult to quantify. In any big interconnected system, the large-scale elements, whose multiple failure modes arise from the many interconnections among the smaller pieces, are often the least supervised. Information can be lost in transitions or never attended to in the first place. And such systemic problems can amplify the consequences of any other potential risks.

I saw this kind of structural issue firsthand when I was on a committee addressing NASA safety. To accommodate the necessity of appeasing diverse congressional districts, NASA sites are spread throughout the country. Even if any individual site takes care of its piece of equipment, there is less institutional investment in the connections. This then becomes true for the larger organization as well. Information can easily get lost in reporting between different sublayers. As the NASA and aerospace industry risk analyst Joe Fragola, who ran the study, wrote to me in an email: “My experience indicates that risk analyses performed without the joint activity between the subject matter experts, the system integration team and the risk analysis team are doomed to be inadequate. In particular, so called ‘turn-key’ risk analyses become so much actuarial exercise and are only of academic interest.” Too often there is a trade-off between breadth and detail, but both are essential in the long term.

One dramatic consequence of such a failure (among others) was the BP incident in the Gulf of Mexico. In a talk at Harvard in February 2011, Cherry Murray, a Harvard dean and member of the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling, cited management failure as one major contributor to the BP incident. Richard Sears, the commission’s senior science and engineering adviser and former vice president for Deepwater Services at Shell Oil Co., described how BP management addressed one problem at a time, without ever formulating the big picture in what he called “hyper-linear thinking.”

Although particle physics is a specialized and difficult enterprise, its goal is to isolate its simple underlying elements and make clear predictions based on our hypotheses. The challenge is to access small distances and high energies, not to address complicated interconnections. Even though we don’t necessarily know which underlying model is correct, we can predict—given a particular model—what sorts of events should occur when, for instance, protons collide with each other at the LHC. When small scales get absorbed into larger ones, effective theories appropriate to the larger scales tell us exactly how the smaller scales enter, as well as the errors we can make by ignoring small-scale details.
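Schematically, and only as an illustration in symbols of what the previous paragraph says in words, an observable computed in an effective theory at an energy E receives corrections suppressed by powers of E divided by the heavy mass scale M of the small-distance physics that was left out; the order-one coefficients c_i below are generic placeholders, not quantities defined in the text:

\[
\mathcal{O}(E) \;\simeq\; \mathcal{O}_{\text{eff}}(E)\left[\,1 + c_{1}\,\frac{E}{M} + c_{2}\left(\frac{E}{M}\right)^{2} + \cdots\,\right],
\qquad E \ll M .
\]

Truncating the series is what makes the error of ignoring small-scale details controlled: each neglected term is smaller by a further power of the small ratio E/M.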

In most situations, however, this neat separation by scale that we introduced in Chapter 1 doesn’t readily apply. Despite the sometimes shared methods, in the words of more than one New York banker, “Finance is not a branch of physics.” In climate or banking, knowledge of small-scale interactions can often be essential to determining large-scale results.

This lack of scale separation can have disastrous consequences. Take as an example the collapse of Barings Bank. Before its failure, Barings, founded in 1762, was Britain’s oldest merchant bank. It had financed the Napoleonic wars, the Louisiana Purchase, and the Erie Canal. Yet in 1995, the bad bets made by a sole rogue trader at a small office in Singapore brought it to financial ruin.

More recently, the machinations of Joseph Cassano at AIG led to its near destruction and the threat of major financial collapse for the world as a whole. Cassano headed a relatively small (400-person) unit within the company called AIG Financial Products, or AIGFP. AIG had made reasonably stable bets until Cassano started employing credit-default swaps (a complex investment vehicle promoted by various banks) to hedge the bets made on collateralized debt obligations.

In what seems in retrospect to be a pyramid scheme of hedging, his group ratcheted up $500 billion in credit-default swaps, more than $60 billion of which were tied to subprime mortgages.[41] If subunits had been absorbed into larger systems as they are in physics, the smaller piece would have yielded information or activity at a higher level in a controlled manner that a midlevel supervisor could readily handle. But in an unfortunate and unnecessarily excessive violation of separation of scales, Cassano’s machinations went virtually unsupervised and infiltrated the entire operation. His activities weren’t regulated as securities, they weren’t regulated as gaming, and they weren’t regulated as insurance. The credit-default swaps were distributed all over the globe, and no one had worked through the potential implications. So when the subprime mortgage crisis hit, AIG wasn’t prepared and it imploded with losses. American taxpayers subsequently were left to bail the company out.

Regulators attended to conventional safety issues (to some extent) concerning the soundness of individual institutions, but they didn’t assess the system as a whole, or the interconnected risks built into it. More complex systems with overlapping debts and obligations call for a better understanding of these interconnections and a more comprehensive way of evaluating, comparing, and deciding among risks and the tradeoffs for possible benefits.[42] This challenge applies to most any large system—as does the time frame that is deemed relevant.

This brings us to a further factor that makes calculating and dealing with risk difficult: our psyches and our market and political systems apply different logic to long-term risks and short-term ones—sometimes sensibly, but often greedily. Most economists and some in the financial markets understood that market bubbles don’t continue indefinitely. The risk wasn’t that the bubble would burst—did anyone really think that housing prices would continue doubling within short time frames forever?—but that the bubble would burst in the imminent future. Riding or inflating a bubble, even one that you know is unsustainable, isn’t necessarily shortsighted if you are prepared at any point to take your profits (or bonuses) and close up shop.

In the case of climate change, we don’t actually know how to assign a number to the melting of the Greenland ice cap. The probabilities are even less certain if we ask for the likelihood that it will begin to melt within a definite time frame—say in the next hundred years. But not knowing the numbers is no reason to bury our heads in the ice—or the proto-cold water.

We have trouble finding consensus on the risks from climate change and how and when to avert them when the possible environmental consequences arise relatively slowly. And we don’t know how to estimate the cost of action or inaction. Were there to be a dramatic climate-driven event, we would be much more likely to take action immediately. Of course, no matter how fast we were, at that point it would be too late. This means that non-cataclysmic climate changes are worth attending to as well.

Even when we do know the likelihood of certain outcomes, we tend to apply different standards to low-probability events with catastrophic outcomes than to high-probability events with less dramatic results. We hear a lot more about airplane crashes and terrorist attacks than we do about car accidents, even though car accidents kill far more people every year. People talked about black holes even without understanding probabilities because the consequences of the disaster scenario seemed so dire. On the other hand, many small (and not so small) probabilities are neglected altogether when their low visibility keeps them under the radar. Even offshore drilling was considered completely safe by many until the Gulf of Mexico disaster actually occurred.[43]

A related problem is that sometimes the greatest benefits or costs arise from the tails of distributions—the events that are the least likely and that we know least well.[44] Ideally, we’d like our calculations to be objectively determined by midrange estimates or averages of preexisting related situations. But we don’t have these data if nothing similar ever occurred or if we ignore the possibility altogether. If the costs or benefits are sufficiently high at these tail ends, they dominate the predictions—assuming that you know in advance what they are in the first place. In any case, traditional statistical methods don’t apply when the rates are too low for averages to be meaningful.
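A minimal numerical sketch, with invented probabilities and costs, of how a tail can dominate an expectation that a short record of typical outcomes would never reveal:

# A minimal sketch with invented numbers of how the tail of a distribution
# can dominate an expected cost even though the tail event is very unlikely.

outcomes = [
    # (probability, cost in arbitrary units)
    (0.989, 1.0),        # routine losses
    (0.010, 10.0),       # bad but survivable outcome
    (0.001, 50_000.0),   # rare catastrophic event in the tail
]

expected_cost = sum(p * cost for p, cost in outcomes)
typical_cost = outcomes[0][1]   # what an "average" outcome looks like

print(expected_cost)  # about 51: dominated by the 0.1 percent tail event
print(typical_cost)   # 1.0: what a short, uneventful record would suggest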

The financial crisis happened because of events that were outside the range of what the experts had taken into account. Lots of people made money based on the predictable aspects, but supposedly unlikely events determined some of the more negative developments. When modeling the reliability of financial instruments, most applied the data for the previous few years without allowing for the possibility that the economy might turn down, or turn down at a far more dramatic rate. Assessments about whether to regulate financial instruments were based on a short time frame during which markets had only increased. Even when the possibility of a market drop was admitted, the assumed values for the drop were too low to accurately predict the true cost of lack of regulation to the economy. Virtually no one paid attention to the “unlikely” events that precipitated the crisis. Risks that might otherwise have been apparent therefore never came up for consideration. But even unlikely events need to be considered when they can have significant enough impact.[45]
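And as a minimal sketch of the data-window problem just described, using synthetic prices rather than anything from the actual crisis: a model calibrated only on a few recent, rising years contains no down year at all, so it cannot anticipate one.

# A minimal sketch with synthetic data: a model calibrated only on a short
# window of rising prices has never seen a downturn, so it cannot price one.

def worst_annual_return(prices):
    """Most negative year-over-year fractional change in the price series."""
    returns = [(later - earlier) / earlier for earlier, later in zip(prices, prices[1:])]
    return min(returns)

recent_window = [100, 105, 111, 118, 126]                   # only rising years
full_history = [80, 60, 75, 90, 100, 105, 111, 118, 126]    # includes a crash

print(worst_annual_return(recent_window))  # +0.05: no losses anywhere in the window
print(worst_annual_return(full_history))   # -0.25: the downturn the short window misses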
