
We can look at other aspects of the problem; think of someone involved in scientific research. Day after day, he will engage in dissecting mice in his laboratory, away from the rest of the world. He could try and try for years and years without anything to show for it. His significant other might lose patience with the loser who comes home every night smelling of mice urine. Until bingo, one day he comes up with a result. Someone observing the time series of his occupation would see absolutely no gain, while every day would bring him closer in probability to the end result.

The same with publishers; they can publish dog after dog without their business model being the least questionable, if once every decade they hit on a Harry Potter string of super-bestsellers—provided of course that they publish quality work that has a small probability of being of very high appeal. An interesting economist, Art De Vany, manages to apply these ideas to two fields: the movie business and his own health and lifestyle. He figured out the skewed properties of movie payoffs and brought them to another level: the wild brand of nonmeasurable uncertainty we discuss in Chapter 10. What is also interesting is that he discovered that we are designed by mother nature to have an extremely skewed physical workout: Hunter-gatherers had idle moments followed by bursts of intense energy expenditure. At sixty-five, Art is said to have the physique of a man close to half his age.

In the markets, there is a category of traders who have inverse rare events, for whom volatility is often a bearer of good news. These traders lose money frequently, but in small amounts, and make money rarely, but in large amounts. I call them crisis hunters. I am happy to be one of them.
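
To make the crisis-hunter payoff concrete, here is a minimal Monte Carlo sketch. It is not from the book; the one-percent crisis frequency, the size of the small loss, and the size of the rare gain are illustrative assumptions. It shows a strategy that loses a little most of the time, gains a lot rarely, and still carries a positive expectation.

```python
import random

def crisis_hunter_year(p_crisis=0.01, small_loss=-1.0, big_gain=150.0):
    """One year of an illustrative 'inverse rare event' strategy:
    bleed a small premium most years, collect a large payoff in a rare crisis.
    All numbers are made up for illustration."""
    return big_gain if random.random() < p_crisis else small_loss

random.seed(1)
years = [crisis_hunter_year() for _ in range(100_000)]
win_rate = sum(y > 0 for y in years) / len(years)
average = sum(years) / len(years)
print(f"fraction of winning years: {win_rate:.3f}")   # roughly 0.01
print(f"average yearly payoff:     {average:.2f}")    # positive despite losing ~99% of years
```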

Why Don’t Statisticians Detect Rare Events?

Statistics to the layman can appear rather complex, but the concept behind what is used today is so simple that my French mathematician friends call it deprecatorily “cuisine.” It is all based on one simple notion: the more information you have, the more you are confident about the outcome. Now the problem: by how much? Common statistical method is based on the steady augmentation of the confidence level, in nonlinear proportion to the number of observations. That is, for an n times increase in the sample size, we increase our knowledge by the square root of n. Suppose I am drawing from an urn containing red and black balls. My confidence level about the relative proportion of red and black balls after 20 drawings is not twice the one I have after 10 drawings; it is merely multiplied by the square root of 2 (that is, 1.41).
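
A quick simulation makes the square-root-of-n point concrete. The sketch below is illustrative only; the urn's true red fraction of 0.5 is an assumption. It estimates the red-ball proportion from samples of different sizes and shows that the spread of the estimate shrinks by roughly a factor of the square root of 2 when the sample size doubles.

```python
import random
import statistics

def estimate_spread(n_draws, p_red=0.5, trials=20_000, seed=0):
    """Standard deviation of the estimated red-ball proportion
    across many repeated experiments of n_draws each."""
    rng = random.Random(seed)
    estimates = [
        sum(rng.random() < p_red for _ in range(n_draws)) / n_draws
        for _ in range(trials)
    ]
    return statistics.pstdev(estimates)

s10, s20 = estimate_spread(10), estimate_spread(20)
print(f"spread with 10 draws: {s10:.4f}")
print(f"spread with 20 draws: {s20:.4f}")
print(f"ratio: {s10 / s20:.2f}  (close to sqrt(2) = 1.41)")
```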

Where statistics becomes complicated, and fails us, is when we have distributions that are not symmetric, like the urn above. If there is a very small probability of finding a red ball in an urn dominated by black ones, then our knowledge about the absence of red balls will increase very slowly—more slowly than at the expected square root of n rate. On the other hand, our knowledge of the presence of red balls will dramatically improve once one of them is found. This asymmetry in knowledge is not trivial; it is central in this book—it is a central philosophical problem for such people as the ancient skeptics, David Hume, and Karl Popper (on that, later).
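
The asymmetry can be seen in a small simulation; the one-in-a-thousand red fraction below is an assumption chosen for illustration. For a long stretch of draws the observed count of red balls is zero and the sample teaches us nothing decisive, then a single red ball settles the question of presence at once.

```python
import random

# Illustrative urn: red balls are rare (1 in 1,000); the exact fraction is an assumption.
P_RED = 0.001
rng = random.Random(42)

reds_seen = 0
first_red_at = None
for draw in range(1, 100_001):
    if rng.random() < P_RED:
        reds_seen += 1
        if first_red_at is None:
            first_red_at = draw

# Knowledge of the *presence* of red balls arrives all at once, with a single observation.
print(f"first red ball observed on draw {first_red_at}")
# Knowledge of their *absence* never quite arrives: every all-black draw before that point
# was equally compatible with 'no red balls' and 'very few red balls'.
print(f"red balls seen in 100,000 draws: {reds_seen}")
```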

To assess an investor’s performance, we either need more astute, and less intuitive, techniques or we may have to limit our assessments to situations where our judgment is independent of the frequency of these events.

A Mischievous Child Replaces the Black Balls

But there is even worse news. In some cases, if the incidence of red balls is itself randomly distributed, we will never get to know the composition of the urn. This is called “the problem of stationarity.” Think of an urn that is hollow at the bottom. As I am sampling from it, and without my being aware of it, some mischievous child is adding balls of one color or another. My inference thus becomes insignificant. I may infer that the red balls represent 50% of the urn while the mischievous child, hearing me, would swiftly replace all the red balls with black ones. This makes much of our knowledge derived through statistics quite shaky.
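
Here is a minimal sketch of the mischievous child at work; the 50-percent-to-zero switch point and the sample sizes are illustrative assumptions. The running estimate looks as if it is converging, and then the composition of the urn changes underneath it.

```python
import random

rng = random.Random(7)

def draw(i, switch_at=5_000):
    """Draw from an urn whose composition is silently changed partway through:
    red balls are 50% of the urn at first, then the 'child' swaps them all out."""
    p_red = 0.5 if i < switch_at else 0.0
    return rng.random() < p_red

reds = 0
for i in range(10_000):
    reds += draw(i)
    if i + 1 in (1_000, 5_000, 10_000):
        print(f"estimated red fraction after {i + 1:>6} draws: {reds / (i + 1):.2f}")

# The early estimate (~0.50) was 'confirmed' by thousands of observations,
# yet it says nothing about the urn we are drawing from now.
```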

The very same effect takes place in the market. We take past history as a single homogeneous sample and believe that we have considerably increased our knowledge of the future from the observation of the sample of the past. What if vicious children were changing the composition of the urn? In other words, what if things have changed?

I have studied and practiced econometrics for more than half my life (since I was nineteen), both in the classroom and in the activity of a quantitative derivatives trader. The “science” of econometrics consists of the application of statistics to samples taken at different periods of time, which we call “time series.” It is based on studying the time series of economic variables, data, and other matters. In the beginning, when I knew close to nothing (that is, even less than today), I wondered whether the time series reflecting the activity of people now dead or retired should matter for predicting the future. Econometricians who knew a lot more than I did about these matters asked no such question; this hinted that it was in all likelihood a stupid inquiry. One prominent econometrician, Hashem Pesaran, answered a similar question by recommending to do “more and better econometrics.” I am now convinced that, perhaps, most of econometrics could be useless—much of what financial statisticians know would not be worth knowing. For a sum of zeros, even repeated a billion times, remains zero; likewise an accumulation of research and gains in complexity will lead to naught if there is no firm ground beneath it. Studying the European markets of the 1990s will certainly be of great help to a historian; but what kind of inference can we make now that the structure of the institutions and the markets has changed so much?

Note that the economist Robert Lucas dealt a blow to econometrics by arguing that if people were rational then their rationality would cause them to figure out predictable patterns from the past and adapt, so that past information would be completely useless for predicting the future (the argument, phrased in a very mathematical form, earned him the Swedish Central Bank Prize in honor of Alfred Nobel). We are human and act according to our knowledge, which integrates past data. I can translate his point with the following analogy. If rational traders detect a pattern of stocks rising on Mondays, then, immediately such a pattern becomes detectable, it would be ironed out by people buying on Friday in anticipation of such an effect. There is no point searching for patterns that are available to everyone with a brokerage account; once detected, they would be self-canceling.

Somehow, what came to be known as the Lucas critique was not carried through by the “scientists.” It was confidently believed that the scientific successes of the industrial revolution could be carried through into the social sciences, particularly with such movements as Marxism. Pseudoscience came with a collection of idealistic nerds who tried to create a tailor-made society, the epitome of which is the central planner. Economics was the most likely candidate for such use of science; you can disguise charlatanism under the weight of equations, and nobody can catch you since there is no such thing as a controlled experiment. Now, the spirit of such methods, called scientism by its detractors (like myself), continued past Marxism, into the discipline of finance as a few technicians thought that their mathematical knowledge could lead them to understand markets. The practice of “financial engineering” came along with massive doses of pseudoscience. Practitioners of these methods measure risks, using the tool of past history as an indication of the future. We will just say at this point that the mere possibility of the distributions not being stationary makes the entire concept seem like a costly (perhaps very costly) mistake. This leads us to a more fundamental question: The problem of induction, to which we will turn in the next chapter.

Seven


THE PROBLEM OF INDUCTION

On the chromodynamics of swans. Taking Solon’s warning into some philosophical territory. How Victor Niederhoffer taught me empiricism; I added deduction. Why it is not scientific to take science seriously. Soros promotes Popper. That bookstore on Eighteenth Street and Fifth Avenue. Pascal’s wager.

FROM BACON TO HUME

Now we discuss this problem viewed from the broader standpoint of the philosophy of scientific knowledge. There is a problem in inference well known as the problem of induction. It is a problem that has been haunting science for a long time, but hard science has not been as harmed by it as the social sciences, particularly economics, even more the branch of financial economics. Why? Because the randomness content compounds its effects. Nowhere is the problem of induction more relevant than in the world of trading—and nowhere has it been as ignored!

Cygnus Atratus

In his Treatise on Human Nature, the Scots philosopher David Hume posed the issue in the following way (as rephrased in the now famous black swan problem by John Stuart Mill):

No amount of observations of white swans can allow the inference that all swans are white, but the observation of a single black swan is sufficient to refute that conclusion.

Hume had been irked by the fact that science in his day (the eighteenth century) had experienced a swing from scholasticism, entirely based on deductive reasoning (no emphasis on the observation of the real world) to, owing to Francis Bacon, an overreaction into naive and unstructured empiricism. Bacon had argued against “spinning the cobweb of learning” with little practical result (science resembled theology). Science had shifted, thanks to Bacon, into an emphasis on empirical observation. The problem is that, without a proper method, empirical observations can lead you astray. Hume came to warn us against such knowledge, and to stress the need for some rigor in the gathering and interpretation of knowledge—what is called epistemology (from episteme, Greek for learning). Hume is the first modern epistemologist (epistemologists operating in the applied sciences are often called methodologists or philosophers of science). What I am writing here is not strictly true, for Hume said things far worse than that; he was an obsessive skeptic and never believed that a link between two items could be truly established as being causal. But we will tone him down a bit for this book.

Niederhoffer

The story of Victor Niederhoffer is both sad and interesting insofar as it shows the difficulty of merging extreme empiricism and logic in one single person—pure empiricism implies necessarily being fooled by randomness. I am bringing up his example because, in a way similar to Francis Bacon, Victor Niederhoffer stood against the cobweb of learning of the University of Chicago and the efficient-market religion of the 1960s when they were at their worst. In contrast to the scholasticism of financial theorists, his work looked at data in search of anomalies and found some. He also figured out the uselessness of the news, as he showed that reading the newspaper did not confer a predictive advantage to its readers. He derived his knowledge of the world from past data stripped of preconceptions, commentaries, and stories. Since then, an entire industry of such operators, called statistical arbitrageurs, has flourished; some of the successful ones were initially his trainees. Niederhoffer’s story illustrates how empiricism cannot be divorced from methodology.

At the center of his modus is Niederhoffer’s dogma that any “testable” statement should be tested, as our minds make plenty of empirical mistakes when relying on vague impressions. His advice is obvious, but it is rarely practiced. How many of the effects we take for granted might not be there? A testable statement is one that can be broken down into quantitative components and subjected to statistical examination. For instance, a conventional-wisdom, empirical-style statement like

automobile accidents happen closer to home

can be tested by taking the average distance between the accident and the domicile of the driver (if, say, about 20% of accidents happen within a twelve-mile radius). However, one needs to be careful in the interpretation; a naive reader of the result would tell you that you are more likely to have an accident if you drive in your neighborhood than if you did so in remote places, which is a typical example of naive empiricism. Why? Because accidents may happen closer to home simply because people spend their time driving close to home (if people spend 20% of their time driving in a twelve-mile radius).
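
A sketch of the exposure-adjusted version of the test follows; every number in it is an illustrative assumption, not data. The idea is to compare the share of accidents near home with the share of driving hours spent near home: if the two match, proximity by itself explains nothing.

```python
import random

# Illustrative assumptions, not data: 20% of driving hours are spent within a
# twelve-mile radius of home, and accident risk per hour is the same everywhere.
NEAR_HOME_TIME_SHARE = 0.20
P_ACCIDENT_PER_HOUR = 0.001

rng = random.Random(3)
near_accidents = far_accidents = 0
for _ in range(1_000_000):                       # driving hours, pooled over many drivers
    hour_is_near_home = rng.random() < NEAR_HOME_TIME_SHARE
    if rng.random() < P_ACCIDENT_PER_HOUR:       # same per-hour risk near and far
        if hour_is_near_home:
            near_accidents += 1
        else:
            far_accidents += 1

share_near = near_accidents / (near_accidents + far_accidents)
print(f"share of accidents within twelve miles of home: {share_near:.2f}")
# Roughly 0.20: the 'accidents happen closer to home' figure is produced entirely
# by where the driving hours are spent, not by any special danger near home.
```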

But there is a more severe aspect of naive empiricism. I can use data to disprove a proposition, never to prove one. I can use history to refute a conjecture, never to affirm it. For instance, the statement

The market never goes down 20% in a given three-month period

can be tested but is completely meaningless if verified. I can quantitatively reject the proposition by finding counterexamples, but it is not possible for me to accept it simply because, in the past, the market never went down 20% in any three-month period (you cannot easily make the logical leap from “has never gone down” to “never goes down”). Samples can be greatly insufficient; markets may change; we may not know much about the market from historical information.

You can more safely use the data to reject than to confirm hypotheses. Why? Consider the following statements:

Statement A: No swan is black, because I looked at four thousand swans and found none.

Statement B: Not all swans are white.

I cannot logically make Statement A, no matter how many successive white swans I may have observed in my life and may observe in the future (except, of course, if I am given the privilege of observing with certainty all available swans). It is, however, possible to make Statement B merely by finding one single counterexample. Indeed, Statement A was disproved by the discovery of Australia, as it led to the sighting of the Cygnus atratus, a swan variety that was jet black! The reader will see a hint of Popper’s ideas, as there is a strong asymmetry between the two statements; and, furthermore, such asymmetry lies in the foundations of knowledge. It is also at the core of my operation as a decision maker under uncertainty.

I said that people rarely test testable statements; this may be better for those who cannot handle the consequence of the inference. The following inductive statement illustrates the problem of interpreting past data literally, without methodology or logic:

I have just completed a thorough statistical examination of the life of President Bush. For fifty-eight years, close to 21,000 observations, he did not die once. I can hence pronounce him as immortal, with a high degree of statistical significance.

Niederhoffer’s publicized hiccup came from his selling naked options based on his testing and assuming that what he saw in the past was an exact generalization about what could happen in the future. He relied on the statement “The market has never done this before,” so he sold puts that made a small income if the statement was true and lost hugely in the event of it turning out to be wrong. When he blew up, close to a couple of decades of performance were overshadowed by a single event that only lasted a few minutes.

Another logical flaw in this type of historical statement is that often when a large event takes place, you hear the “it never happened before,” as if it needed to be absent from the event’s past history for it to be a surprise. So why do we consider the worst case that took place in our own past as the worst possible case? If the past, by bringing surprises, did not resemble the past previous to it (what I call the past’s past), then why should our future resemble our current past?

There is another lesson to his story, perhaps the greatest one: Niederhoffer appears to approach markets as a venue from which to derive pride, status, and wins against “opponents” (such as myself), as he would in a game with defined rules. He was a squash champion with a serious competitive streak; it is just that reality does not have the same closed and symmetric laws and regulations as games. This competitive nature got him into ferocious fighting to “win.” As we saw in the last chapter, markets (and life) are not simple win/lose types of situations, as the cost of the losses can be markedly different from that of the wins. Maximizing the probability of winning does not lead to maximizing the expectation from the game when one’s strategy may include skewness, i.e., a small chance of large loss and a large chance of a small win. If you engaged in a Russian roulette–type strategy with a low probability of large loss, one that bankrupts you every several years, you are likely to show up as the winner in almost all samples—except in the year when you are dead.
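
The point about winning often while losing overall can be made with a short simulation; the one-percent blowup odds and the payoff sizes below are illustrative assumptions, not a reconstruction of any actual trading record. The strategy "wins" in the vast majority of years, yet its expectation is negative, and any long enough sample eventually contains the ruinous year.

```python
import random

def skewed_year(p_blowup=0.01, small_win=1.0, large_loss=-200.0):
    """One year of an illustrative 'pick up pennies' strategy:
    a small gain almost every year, a ruinous loss on rare occasions.
    All numbers are made up for illustration."""
    return large_loss if random.random() < p_blowup else small_win

random.seed(2)
years = [skewed_year() for _ in range(100_000)]
winning_fraction = sum(y > 0 for y in years) / len(years)
average_payoff = sum(years) / len(years)
print(f"fraction of winning years: {winning_fraction:.3f}")   # about 0.99
print(f"average yearly payoff:     {average_payoff:.2f}")     # negative despite 'winning' most years
```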

I remind myself never to fail to acknowledge the insights of the 1960s empiricist and his early contributions. Sadly, I learned quite a bit from Niederhoffer, mostly by contrast, and particularly from the last example: not to approach anything as a game to win, except, of course, if it is a game. Even then, I do not like the asphyxiating structure of competitive games and the diminishing aspect of deriving pride from a numerical performance. I also learned to stay away from people of a competitive nature, as they have a tendency to commoditize and reduce the world to categories, like how many papers they publish in a given year, or how they rank in the league tables. There is something nonphilosophical about investing one’s pride and ego into a “my house/library/car is bigger than that of others in my category”—it is downright foolish to claim to be first in one’s category all the while sitting on a time bomb.

To conclude, extreme empiricism, competitiveness, and an absence of logical structure to one’s inference can be a quite explosive combination.

SIR KARL’S PROMOTING AGENT

Next I will discuss how I discovered Karl Popper via another trader, perhaps the only one I have ever truly respected. I do not know if it applies to other people, but, in spite of my being a voracious reader, I have rarely been truly affected in my behavior (in any durable manner) by anything I have read. A book can make a strong impression, but such an impression tends to wane after some newer impression replaces it in my brain (a new book). I have to discover things by myself (recall the “Stove Is Hot” section in Chapter 3). These self-discoveries last.

One exception is the set of ideas of Sir Karl that stuck with me; I discovered (or perhaps rediscovered) them through the writings of the trader and self-styled philosopher George Soros, who seemed to have organized his life by becoming a promoter of the ideas of Karl Popper. What I learned from George Soros was not quite what he perhaps intended us to learn from him. I disagreed with his statements when it came to economics and philosophy. First, although I admire him greatly, I agree with professional thinkers that Soros’ forte is not in philosophical speculation. Yet he considers himself a philosopher—which makes him endearing in more than one way. Take his first book, The Alchemy of Finance. On the one hand, he seems to discuss ideas of scientific explanation by throwing in big names like “deductive-nomological,” something always suspicious, as it is reminiscent of postmodern writers who play philosophers and scientists by using complicated references. On the other hand, he does not show much grasp of the concepts. For instance, he conducts what he calls a “trading experiment,” and uses the success of the trade to imply that the theory behind it is valid. This is ludicrous: I could roll the dice to prove my religious beliefs and show the favorable outcome as evidence that my ideas are right. The fact that Soros’ speculative portfolio turned a profit proves very little of anything. One cannot infer much from a single experiment in a random environment—an experiment needs repeatability showing some causal component. Second, Soros indicts wholesale the science of economics, which may be very justified, but he did not do his homework. For instance, he writes that the people he lumps together as “economists” believe that things converge to equilibrium, when that only applies to some cases of neoclassical economics. There are plenty of economic theories in which a departure from a price level can cause further divergence and cascading feedback loops. There has been considerable research to that effect in, say, game theory (the works of Harsanyi and Nash) or information economics (the works of Stiglitz, Akerlof, and Spence). Lumping all economics in one basket shows a bit of unfairness and lack of rigor.

But in spite of some of the nonsense in his writing, probably aimed at convincing himself that he was not just a trader, or because of it, I succumbed to the charm of this Hungarian man who, like me, is ashamed of being a trader and prefers his trading to be a minor extension of his intellectual life, even if there is not much scholarship in his essays. Having never been impressed by people with money (and I have met plenty of those throughout my life), I did not look at any of them as remotely a role model for me. Perhaps the opposite effect holds, as I am generally repelled by the wealthy, mostly because of the attitude of epic heroism that usually accompanies rapid enrichment. Soros was the only one who seemed to share my values. He wanted to be taken seriously as a Middle European professor who happened to have gotten rich owing to the validity of his ideas (it was only by failing to gain acceptance by other intellectuals that he would try to gain alpha status through his money, sort of like a seducer who, after trying hard, would end up using such an appendage as the red Ferrari to seduce the girl). In addition, although Soros did not deliver anything meaningful in his writings, he knew how to handle randomness, by keeping a critical open mind and changing his opinions with minimal shame (which carries the side effect of making him treat people like napkins). He walked around calling himself fallible, but was so potent because he knew it while others had loftier ideas about themselves. He understood Popper. Do not judge him by his writings: He lived a Popperian life.
