What do we have here, other than a thicket of parentheses? Amazingly, a description of the effect of experience on opinion.
Look: if we call A our hypothesis and B the evidence, this equation says that the truth of our hypothesis given the evidence (the term to the left of the equals sign) can be determined by its previous probability (the first term on the right) times a "learning factor" (the remaining, thickety term). If we can find probabilities to define our original state of mind and estimate the probabilities for the evidence appearing given our hypothesis, we now have a method for tracking our reasons to believe in guilt or innocence as each new fact appears before us.
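For reference, the equation being described is Bayes' theorem in its standard form:

P(A|B) = P(A) × [P(B|A) / P(B)], where P(B) = P(A) × P(B|A) + P(Ā) × P(B|Ā)

Here P(A) is the prior probability, Ā is the hypothesis that A is false, and the bracketed term is the "learning factor" just described.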
How could this work in practice? The old woman's body was found in her apartment, brutally hacked; we know the student, Raskolnikov, had been quarreling with her—something to do with money. Then again, she was a pawnbroker: she could have had many enemies among the poor in the neighborhood—all desperate, all in her debt—many no doubt rough men in hard trades. The boy stands in the dock; he seems more pitiable than frightening, his hands none too clean, but soft. Maybe he did it; maybe he didn't—our opinion is evenly balanced.
Then the forensic expert testifies about the ax: the latent fingerprints on it are similar to Raskolnikov's. But are they his? The expert, a scrupulous scientist, will not say; all that his statistics can justify is a statement that such a match would appear by chance only one time in a thousand.
We slot our probabilities into Bayes' formula: P(A|B) is our new hypothesis about the suspect's guilt, given the fingerprint evidence; P(A) is our previous view (.5); P(B|A) is the chance of a fingerprint match, given he was guilty (1); P(B|Ā) is the chance of a fingerprint match given he was not guilty (.001); and P(Ā) is our previous view of Raskolnikov's innocence (.5). Put it all together:

P(A|B) = [P(A) × P(B|A)] / [P(A) × P(B|A) + P(Ā) × P(B|Ā)] = (.5 × 1) / (.5 × 1 + .5 × .001) ≈ .999
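As a minimal sketch, the same update in Python, using only the figures just given:

```python
# Bayes' theorem: update the prior on guilt with the fingerprint evidence.
prior_guilt = 0.5            # P(A): opinion evenly balanced
p_match_if_guilty = 1.0      # P(B|A): a guilty man's prints would match
p_match_if_innocent = 0.001  # P(B|Ā): a chance match, one in a thousand

p_match = prior_guilt * p_match_if_guilty + (1 - prior_guilt) * p_match_if_innocent
posterior_guilt = prior_guilt * p_match_if_guilty / p_match
print(f"P(guilt | match) = {posterior_guilt:.4f}")  # 0.9990
```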
The iron door swings shut and the haggard figure joins the chain of convicts heading for Siberia.
And the sunrise? Bayes' theorem tells you that you can go to bed confident, if not certain, that it will rise tomorrow.
 
About Bayes' time, a judge, so the story goes, warned the voluble man in the witness box: "I must ask you to tell no unnecessary lies; the lies in which you have been instructed by counsel are required to support his fraudulent case—further untruths are a needless distraction."
Nicholas Bernoulli felt that the world would be a better place if we could compile statistics on people's veracity. Certainly, it helps to begin with an estimate. Rogue X and fool Y stand up in succession, not knowing each other and with no reason to be in collusion. Rogue and fool each affirm that statement S is true. X is so shifty it would be hard to consider him truthful any more than about a third of the time; give him a credibility rating of .3. Y is so dense that his credibility is little better—say, .4. Moreover, S—if it is a lie—is only one of the five or so unnecessary lies they each could tell, so we should multiply the probability of their coming up with this particular lie by 1/5. How then, all in all, does their joint testimony affect our impression of the truthfulness of S? Our belief swings back and forth—they are two weak reeds; but they support each other; but . . .
Bayes can help us evaluate dubious testimony. The Honorable Sir Richard Eggleston, one of Australia's most prominent jurists, plugged these numbers into Bayes' theorem to show how the stories of two independent but only partly credible witnesses should affect our confidence in the truth of the statement to which they both have sworn. His equation looks daunting, but it shows that given the combined testimony of X and Y, the probability that the statement is true is more than 7 times greater than it was before. A rogue may have his uses and a fool be a present help in trouble.
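The arithmetic behind that factor can be reconstructed from the figures in the text (a sketch, not Eggleston's own notation): if S is true, both witnesses told the truth; if S is false, both lied and both happened to pick this same lie out of five.

```python
cred_x, cred_y = 0.3, 0.4  # credibility ratings of rogue X and fool Y
same_lie = 1 / 5           # chance of hitting on this particular lie out of five

p_if_true = cred_x * cred_y                             # both told the truth: 0.12
p_if_false = (1 - cred_x) * (1 - cred_y) * same_lie**2  # both told the same lie: 0.0168

print(f"Support for S multiplied by {p_if_true / p_if_false:.1f}")  # 7.1
```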
 
Plugging numbers into a machine and turning the handle seems a high-handed approach to delicate matters of judgment and belief, and some of those numbers seem rather arbitrary. Can we assume even chances of guilt and innocence? Can we assign credibility ratings? How can any of this be justified?
Ever since it appeared, there have been loud voices raised against the legitimacy of this “inverse probability.” Bayes himself spoke in terms of expectation, as if experience were a game on which we had placed a bet. But what bookie offers the starting price? Where do we get the prior probability that evidence modifies? Laplace offered a grand-sounding justification, the Principle of Insufficient Reason: if there is nothing to determine what the prior probability of two events might be, you can assign them equal probability. This, to many, is heresy; it made Fisher plunge and bound; it made von Mises' smile ever tighter and more frosty.
The opposing argument, expressed by the so-called subjectivist school, gained its point by sacrificing rigor. What, it asked, are we really measuring here? Degrees of ignorance. Evidence slowly clears ignorance away, but it does so only through repeated experience and repeated re-assessment of your hypotheses—what some Bayesian commentators call "conditioning your priors." Without evidence, what better describes your state of ignorance than an inability to decide between hypotheses? Of course you give them equal weight, because you know nothing about them. You are testing not the innate qualities of things, nor the repeatability of experiment, but the logic of your statements and the consistency of your expectations.
 
Law can be unclear, inconsistent, and partial—but only the facts are uncertain. We have leading cases (and, of course, legislation) to correct law, to patch or prop up the palace of justice where its builders have economized or worms attacked its timbers. To improve our understanding of legal fact, we ought to have probability—but it hasn't had much success in the courts. After all, few people choose to study law because they want to deal with arithmetic: shining oratory and flashes of forensic deduction are the glory of the courtroom. Formal probability is left to the expert witness; and all too often his expertise is used to baffle.
In the 1894 Dreyfus treason trial, experts were brought in to prove that the reason the handwriting in the suspect documents looked nothing like Dreyfus' was precisely his deliberate effort to make them look like a forgery. The experts showed that the documents and Dreyfus' correspondence had words of similar lengths; they pointed out four graphological "coincidences" and—assigning a probability of .2 to these coincidences—calculated the likelihood of finding these four by chance at .0016. But here they made a mistake in elementary probability: .0016 is the chance of finding four coincidences in four tries; in fact they had found four in thirteen possible locations, so the probability that this would occur by chance was a very generous .7. Both Dreyfus' counsel and the Government commissioner admitted they had not understood a single term of the mathematical demonstration—but everyone was impressed by its exquisitely pure incomprehensibility. Dreyfus was condemned to Devil's Island, and despite widespread agitation for his release, remained there for four wretched years.
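The distinction is easy to check. A hedged sketch, treating each of the thirteen locations as an independent one-in-five chance of a "coincidence" (the exact tail probability depends on how the opportunities are tallied, which the text does not spell out; the qualitative point, that many opportunities make a few matches unremarkable, stands under any reasonable counting):

```python
from math import comb

p = 0.2  # assigned chance of a graphological "coincidence" at any one location

def p_at_least(k: int, n: int, p: float) -> float:
    """Probability of at least k successes in n independent trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"{p**4:.4f}")                  # 0.0016: four coincidences in exactly four tries
print(f"{p_at_least(4, 13, p):.2f}")  # 0.25: four or more somewhere among thirteen locations
```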
The textbook example of probabilistic clumsiness remains People v. Collins, a seemingly simple case of mugging in Los Angeles in 1964. Juanita Brooks was coming home from shopping, her groceries in a wheeled wicker basket with her purse perched on top. As she came along the alley behind her house, she stooped to pick up an empty carton and was suddenly pushed over by someone she had neither seen nor heard approaching. Although stunned by her fall, she could still say that she saw a young woman running away, who weighed about 145 pounds, had hair "between a dark blond and a light blond," and was wearing "something dark." When she got up, Mrs. Brooks found that her purse, containing around $40, was missing.
John Bass lived at the end of the alley; he was watering his lawn, heard the commotion, and saw a woman with a blond ponytail run out of the alley and jump into a yellow car that, he said, was being driven by a black man with a mustache and beard.
Later that day, Malcolm Ricardo Collins and his wife, Janet Collins, were arrested for the robbery. She worked as a housemaid in San Pedro and had a blond ponytail; he was black, and had a mustache and beard; they drove a yellow car. They were short of money. Their alibi was far from watertight.
Nevertheless, the prosecution had a difficult time identifying them as the robbers. Neither Mrs. Brooks nor Mr. Bass had had a good look at the woman who took the purse, nor had Mr. Bass been able to pick out Malcolm Collins in a police lineup. There was also evidence that Janet Collins had worn light clothing that day—not “something dark.”
So, at an impasse, the prosecution brought in an instructor in mathematics from the local state college. He explained that he could prove identity through probability, by multiplying together the individual probabilities of each salient characteristic. The prosecution started with these assumptions, written out in a table:

A. Partly yellow automobile: 1 in 10
B. Man with mustache: 1 in 4
C. Girl with ponytail: 1 in 10
D. Girl with blond hair: 1 in 3
E. Negro man with beard: 1 in 10
F. Interracial couple in car: 1 in 1,000
Where do these figures come from? You may well wonder. Could anyone honestly say that exactly this proportion of each population of possible cars, wearers or non-wearers of mustaches, or girls with hair in ponytails would have been likely to pass through San Pedro that morning? And, incidentally, that figure for "interracial couple in car"—was this in distinction to intraracial couples in cars or to interracial couples on the sidewalk? No such questions were raised.
When the premises are flawed, only worse can follow, and it did. The prosecution represented A through F as independent events; but could you, for instance, genuinely assume that “man with mustache” and “Negro man with beard” are independent? Having assumed independence, the prosecution merrily multiplied all these probabilities, coming up with a chance of 1 in 12 million that all these characteristics would occur together.
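Taking the table's figures at face value, the multiplication itself is trivial; a sketch:

```python
from fractions import Fraction

# The six characteristics, A through F, with the prosecution's assumed odds
odds = [
    Fraction(1, 10),    # A: partly yellow automobile
    Fraction(1, 4),     # B: man with mustache
    Fraction(1, 10),    # C: girl with ponytail
    Fraction(1, 3),     # D: girl with blond hair
    Fraction(1, 10),    # E: Negro man with beard
    Fraction(1, 1000),  # F: interracial couple in car
]

product = Fraction(1)
for p in odds:
    product *= p
print(product)  # 1/12000000: the prosecution's "1 in 12 million"
```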
Even taking the individual probabilities to be correct and the assumption of their independence to be justified, the prosecution is still not on firm ground. If the chance is 1 in 12 million that any one couple will have this combination of characteristics, does that mean you would need to go through 12 million other couples to find a match? Not quite. Let's look at the problem: we're trying to establish the chance of a match of all these characteristics between two couples drawn at random out of a population—that is, the chance of a random event occurring at least twice.
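A sketch of where that question leads, under the illustrative assumption of a pool of 12 million couples, each independently showing all six characteristics with probability 1 in 12 million:

```python
# If at least one couple in the pool matches, how likely is a second match?
p = 1 / 12_000_000   # chance any one couple shows all six characteristics
n = 12_000_000       # illustrative size of the pool of couples

p_none = (1 - p) ** n               # no matching couple at all
p_one = n * p * (1 - p) ** (n - 1)  # exactly one matching couple
p_two_plus = 1 - p_none - p_one     # the event occurring "at least twice"

print(f"{p_two_plus / (1 - p_none):.2f}")  # 0.42: the match is far from unique
```

In other words, even granting the prosecution every number and every assumption, the chance that the Collinses were not the only such couple is roughly two in five.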
