
Let’s think about violence. The best predictive tool for psychiatric violence has a sensitivity of 0.75 and a specificity of 0.75. It’s tougher to be accurate when we predict an event in humans, with human minds and changing human lives. Let’s say 5 percent of patients seen by a community mental health team will be involved in a violent event in a year. If we use the same math as we did for the HIV tests, your “0.75” predictive tool would be wrong eighty-six times for every hundred people it flags as dangerous. For serious violence, occurring at 1 percent a year, with our best “0.75” tool, you inaccurately finger your potential perpetrator ninety-seven times out of a hundred. Will you preventively detain ninety-seven people to prevent three violent events? And will you apply that rule to alcoholics and assorted nasty antisocial types as well?

For murder, the extremely rare crime in question in this report, for which more action was demanded, occurring at one in ten thousand a year among patients with psychosis, the false positive rate is so high that the best predictive test is entirely useless.
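If you want to check that arithmetic for yourself, here is a minimal sketch in Python (my choice of language, purely for illustration); it applies nothing more than the sensitivity, specificity, and prevalence figures quoted above to a notional population.

```python
# A minimal sketch of the arithmetic above: of every hundred people the
# "0.75" tool flags as dangerous, how many are actually false positives?
def wrong_per_hundred_flagged(prevalence, sensitivity=0.75, specificity=0.75):
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    positive_predictive_value = true_positives / (true_positives + false_positives)
    return round(100 * (1 - positive_predictive_value))

print(wrong_per_hundred_flagged(0.05))    # violence at 5% a year     -> 86
print(wrong_per_hundred_flagged(0.01))    # serious violence at 1%    -> 97
print(wrong_per_hundred_flagged(0.0001))  # murder at 1 in 10,000     -> 100
```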

This is not a counsel of despair. There are things that can be done, and you can always try to reduce the number of actual stark cock-ups, although it’s difficult to know what proportion of the “one murder a week” represents a clear failure of a system, since when you look back in history, through the retrospectoscope, anything that happens will look as if it were inexorably leading up to your one bad event. I’m just giving you the math on rare events. What you do with it is a matter for you.

Locking You Up

 

In 1999 British lawyer Sally Clark was put on trial for murdering her two babies. In the U.K. this was a major trial, with a successful appeal, and although many have a dim awareness that there was a statistical error in the prosecution case, few know the true story or the phenomenal extent of the statistical ignorance that went on in the case.

At her trial, Professor Sir Roy Meadow, an expert in parents who harm their children, was called to give expert evidence. Meadow famously quoted “one in seventy-three million” as the chance of two children in the same family dying of sudden infant death syndrome (SIDS).

This was a very problematic piece of evidence for two very distinct reasons: one is easy to understand; the other is an absolute mind bender. Because you have the concentration span to follow the next two pages, you will come out smarter than Professor Sir Roy, the judge in the Sally Clark case, her defense teams, the appeal court judges, and almost all the journalists and legal commentators reporting on the case. We’ll do the easy reason first.

The Ecological Fallacy

 

The figure of “one in seventy-three million” itself is iffy, as everyone now accepts. It was calculated as 8,543×8,543, taking the estimated chance of a single SIDS death in a family like the Clarks, roughly one in 8,543, and squaring it, as if the chances of two SIDS episodes in this one family were independent of each other. This feels wrong from the outset, and anyone can see why: there might be environmental or genetic factors at play, both of which would be shared by the two babies. But forget how pleased you are with yourself for understanding that fact. Even if we accept that two SIDS in one family is much more likely than one in seventy-three million—say, one in ten thousand—any such figure is still of dubious relevance, as we shall now see.
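To see the shape of that calculation, and of the error, here is a rough sketch; the one-in-8,543 figure is the one used at trial, and the “shared risk” multiplier below is an invented number, there purely to show how quickly the odds collapse once you drop the independence assumption.

```python
p_one_sids = 1 / 8543            # estimated chance of a single SIDS death

# Meadow's calculation: treat the two deaths as independent coin tosses.
p_two_independent = p_one_sids ** 2
print(round(1 / p_two_independent))       # about 73 million to one

# If shared genetic or environmental factors made a second SIDS death, say,
# a hundred times more likely once a first has occurred (an invented figure,
# only to illustrate the point), the odds fall dramatically.
p_second_given_first = 100 * p_one_sids
p_two_dependent = p_one_sids * p_second_given_first
print(round(1 / p_two_dependent))         # about 730,000 to one
```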

The Prosecutor’s Fallacy

 

The real question in this case is: What do we do with this spurious number? Many press reports at the time stated that one in seventy-three million was the likelihood that the deaths of Sally Clark’s two children were accidental—that is, the likelihood that she was innocent. Many in the court process seemed to share this view, and the factoid certainly sticks in the mind. But this is an example of a well-known and well-documented piece of flawed reasoning known as the prosecutor’s fallacy.

Two babies in one family have died. This in itself is very rare. Once this rare event has occurred, the jury needs to weigh up two competing explanations for the babies’ deaths: double SIDS or double murder. Under normal circumstances—before any babies have died—double SIDS is very unlikely, and so is double murder. But now that the rare event of two babies dying in one family has occurred, the two explanations—double murder or double SIDS—are suddenly both very likely. If we really wanted to play statistics, we would need to know which is relatively more rare: double SIDS or double murder. People have tried to calculate the relative risks of these two events, and one paper says it comes out at around two to one in favor of double SIDS.
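Put as a sketch, with invented base rates chosen only to show the logic, the comparison the court needed looks like this; the real estimate, as mentioned above, came out at roughly two to one in favor of double SIDS.

```python
# Hypothetical base rates, per family: these numbers are placeholders,
# invented only to show the shape of the reasoning.
p_double_sids = 1 / 100_000
p_double_murder = 1 / 200_000

# Given that two babies in one family have died, the relevant question is
# the relative likelihood of the two competing explanations, not how rare
# either explanation is in absolute terms.
odds = p_double_sids / p_double_murder
print(odds)   # 2.0 -> double SIDS twice as likely as double murder here
```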

Not only was this crucial nuance of the prosecutor’s fallacy missed at the time—by everyone in the court—but it was also clearly missed in the appeal, at which the judges suggested that instead of “one in seventy-three million,” Meadow should have said “very rare.” They recognized the flaws in its calculation, the ecological fallacy, the easy problem above, but they still accepted his number as establishing “a very broad point, namely the rarity of double SIDS.”

That, as you now understand, was entirely wrongheaded; the rarity of double SIDS is irrelevant, because double murder is rare too. An entire court process failed to spot the nuance of how the figure should be used. Twice.

Meadow was foolish, and has been vilified (some might say this process was exacerbated by the witch hunt against pediatricians who work on child abuse), but if it is true that he should have spotted and anticipated the problems in the interpretation of his number, then so should the rest of the people involved in the case: a pediatrician has no more unique responsibility to be numerate than a lawyer, a judge, journalist, jury member, or clerk. The prosecutor’s fallacy is also highly relevant in DNA evidence, for example, in which interpretation frequently turns on complex mathematical and contextual issues. Anyone who is going to trade in numbers, and use them, and think with them, and persuade with them, let alone lock people up with them, also has a responsibility to understand them. All you’ve done is read a popular science book on them, and already you can see it’s hardly rocket science.

Losing the Lottery

 

You know, the most amazing thing happened to me tonight. I was coming here, on the way to the lecture, and I came in through the parking lot. And you won’t believe what happened. I saw a car with the license plate ARW 357. Can you imagine? Of all the millions of license plates in the state, what was the chance that I would see that particular one tonight? Amazing…

—Richard Feynman

 

It is possible to be very unlucky indeed. A nurse named Lucia de Berk has been in prison for six years in Holland, convicted of seven counts of murder and three of attempted murder. An unusually large number of people died when she was on shift, and that, essentially, along with some very weak circumstantial evidence, is the substance of the case against her. She has never confessed, she has continued to protest her innocence, and her trial has generated a small collection of theoretical papers in the statistics literature.

The judgment was largely based on a figure of “one in 342 million against.” Even if we found errors in this figure—and believe me, we will—as in our previous story, the figure itself would still be largely irrelevant. Because, as we have already seen repeatedly, the interesting thing about statistics is not the tricky math, but what the numbers mean.

There is also an important lesson here from which we could all benefit: unlikely things do happen. Somebody wins the lottery every week; children are struck by lightning. It’s only weird and startling when something very, very specific and unlikely happens if you have specifically predicted it beforehand.

Here is an analogy.

Imagine I am standing near a large wooden barn with an enormous machine gun. I place a blindfold over my eyes, and laughing maniacally, I fire off many thousands and thousands of bullets into the side of the barn. I then drop the gun, walk over to the wall, examine it closely for some time, all over, pacing up and down. I find one spot where there are three bullet holes close to one another, then draw a target around them, announcing proudly that I am an excellent marksman.

You would, I think, disagree with both my methods and my conclusions. But this is exactly what has happened in Lucia’s case: the prosecutors found seven deaths, on one nurse’s shifts, in one hospital, in one city, in one country, in the world, and then drew a target around them.

This breaks a cardinal rule of any research involving statistics: you cannot find your hypothesis in your results. Before you go to your data with your statistical tool, you have to have a specific hypothesis to test. If your hypothesis comes from analyzing the data, then there is no sense in analyzing the same data again to confirm it.
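Here, as a toy simulation with invented numbers, is what that looks like in practice: a thousand nurses, deaths occurring entirely at random at the same rate for everyone, and yet somebody always ends up with a record that looks alarming if you single them out after the fact.

```python
import random

random.seed(1)

n_nurses = 1000
shifts_per_nurse = 300
p_death_per_shift = 0.01   # same random death rate for every nurse

deaths_per_nurse = [
    sum(random.random() < p_death_per_shift for _ in range(shifts_per_nurse))
    for _ in range(n_nurses)
]

print("expected deaths per nurse:", shifts_per_nurse * p_death_per_shift)
print("worst-looking nurse:", max(deaths_per_nurse), "deaths")
# Somebody always sits at the top of a purely random distribution; picking
# that person out afterward and asking "what were the odds?" is drawing the
# target around the bullet holes.
```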

This is a rather complex, philosophical, mathematical form of circularity, but there were also very concrete forms of circular reasoning in the case. To collect more data, the investigators went back to the wards to see if they could find more suspicious deaths. But all the people who were asked to remember “suspicious incidents” knew that they were being asked because Lucia might be a serial killer. There was a high risk that “an incident was suspicious” became synonymous with “Lucia was present.” Some sudden deaths when Lucia was not present would not be listed in the calculations, by definition: they are in no way suspicious, because Lucia was not present.

It gets worse. “We were asked to make a list of incidents that happened during or shortly after Lucia’s shifts,” said one hospital employee. In this manner more patterns were unearthed, and so it became even more likely that investigators would find more suspicious deaths on Lucia’s shifts. Meanwhile, Lucia waited in prison for her trial.

This is the stuff of nightmares.

At the same time, a huge amount of corollary statistical information was almost completely ignored. In the three years before Lucia worked on the ward in question, there were seven deaths. In the three years that she did work on the ward, there were six deaths. Here’s a thought: it seems odd that the death rate should go down on a ward at the precise moment that a serial killer—on a killing spree—arrives. If Lucia killed them all, then there must have been no natural deaths on that ward at all in the whole of the three years that she worked there.

Ah, but on the other hand, as the prosecution revealed at her trial, Lucia did like tarot. And she does sound a bit weird in her private diary, excerpts from which were read out. So she might have done it anyway.

But the strangest thing of all is this. In generating his obligatory, spurious, Meadowesque figure, which this time was “one in 342 million,” the prosecution’s statistician made a simple, rudimentary mathematical error. He combined individual statistical tests by multiplying p-values, the mathematical description of chance, or statistical significance. This bit’s for the hard-core science nerds, and will be edited out by the publisher, but I intend to write it anyway: you do not just multiply p-values together; you weave them with a clever tool, like maybe “Fisher’s method for combination of independent p-values.”

If you multiply p-values together, then harmless and probable incidents rapidly appear vanishingly unlikely. Let’s say you worked in twenty hospitals, each with a harmless incident pattern, say, p=0.5. If you multiply those harmless p-values, of entirely chance findings, you end up with a final p-value of 0.5 to the power of twenty, which is p < 0.000001, which is extremely, very, highly statistically significant. With this mathematical error, by his reasoning, if you change hospitals a lot, you automatically become a suspect. Have you worked in twenty hospitals? For God’s sake, don’t tell the Dutch police if you have.
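For the hard-core nerds still reading, here is a rough sketch of the difference, using the twenty hospitals with p = 0.5 from the example above; the chi-squared distribution from scipy does the work, and the numbers themselves are just the ones in the text.

```python
import math
from scipy.stats import chi2

p_values = [0.5] * 20   # twenty entirely unremarkable results

# The error: simply multiplying the p-values together.
naive = math.prod(p_values)
print(naive)        # about 0.00000095 -- looks "highly significant"

# Fisher's method for combining independent p-values: under the null
# hypothesis, -2 * sum(ln(p)) follows a chi-squared distribution with
# 2k degrees of freedom, where k is the number of p-values combined.
statistic = -2 * sum(math.log(p) for p in p_values)
combined_p = chi2.sf(statistic, df=2 * len(p_values))
print(combined_p)   # about 0.93 -- entirely unremarkable
```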

12
 
The Media’s MMR Hoax
 

In the previous chapter we looked at individual cases. They may have been egregious, and in some respects absurd, but the scope of the harm they can do is limited. We have already seen, with the example of Dr. Spock’s advice to parents on how their babies should sleep, that when your advice is followed by a very large number of people, if you are wrong, even with the best of intentions, you can do a great deal of harm: because the effects of modest tweaks in risk are magnified by the size of the population changing its behavior.

It’s for this reason that journalists have a special responsibility, and that’s also why we will devote the last chapter of this book to examining the processes behind one very illustrative scare story: the MMR vaccine. But as ever, as you know, we are talking about much more than just that single tale, and there will be many distractions along the way.

In the United States, you are currently having your own vaccine scare. Jim Carrey and friends appear regularly on TV and in the newspapers to tell the nation of their concerns on complex matters of epidemiology and immunology. Foreigners—let me tell you a secret—like to sneer at Americans sometimes, for their crude popular debate. This is not justified. In the U.K., our vaccine scare was epic. It is coming to a close, but in the hope that you can learn something from it (and because one of its leading figures, Dr. Andrew Wakefield, has now moved to Texas and become your problem) here is the abysmal tale of MMR, the prototypical health scare, by which all others must be judged and understood.

Even now, it is with great trepidation that I dare mention it by name, because at the quietest hint of a discussion on the subject, an army of campaigners and columnists will still, even today, hammer on editors’ doors, demanding the right to a lengthy, misleading, and emotive response in the name of “balance,” in a world where their demands are always, without exception, accommodated.

At the beginning of this story, way back in 1998, is a man named Andrew Wakefield, who wrote a paper that linked the MMR vaccine to autism and set off a wave of antivaccine sentiment. This very month, as I write this chapter, the General Medical Council in the U.K. has found, after a two-year hearing, that he was “misleading,” “dishonest,” and “irresponsible” in the way he described where the children in his 1998 paper came from, by implying that they were routine clinic referrals. As the GMC also found, these children were subjected to a program of unpleasant and invasive tests that were performed not in their own clinical interest, but rather for research purposes, and these tests were conducted without ethics committee approval. It’s plainly undesirable for doctors to go around conducting tests like colonoscopy on children for their own interest.

But as we shall see, Dr. Wakefield cannot carry the blame for this scare alone, however much the news media may now try to imply that he should; the blame lies instead with the hundreds of journalists, columnists, editors, and executives, in every single news outlet in the U.K., who drove this story cynically, irrationally, and willfully onto the front pages for nine solid years. As we shall also see, they overextrapolated from one study into absurdity, while studiously ignoring all reassuring data and all subsequent refutations. They quoted “experts” as authorities instead of explaining the science, they ignored the historical context, they set idiots to cover the facts, they pitched emotive stories from parents against bland academics (whom they smeared), and most bizarrely of all, in some cases they simply made stuff up.

Journalists frequently flatter themselves with the fantasy that they are unveiling vast conspiracies, that the entire medical establishment has joined hands to suppress an awful truth. In reality I would guess that the 150,000 doctors in the U.K. could barely agree on second-line management of hypertension, but no matter: this fantasy was the structure of the MMR story, and of many others, and a similar grandiosity drove many of the earlier examples in this book in which a journalist concluded that he knew best, including “Cocaine use doubles in the playground.”

In some respects, this reflects changes in the environment for investigative journalism; this kind of work is expensive and risks expensive legal cases from the powerful people you investigate. Concocting a health scare is attractive, because it gives the appearance of challenging power and authority, but with none of the work, and none of the litigation risk if you’re wrong.

But can they ever do good? Undoubtedly there must be some examples, but the imperfect systems of medicine catch errors with far greater frequency. Often, to my surprise, journalists will cite “thalidomide” as if this were investigative journalism’s greatest triumph in medicine, in which they bravely exposed the risks of the drug in the face of medical indifference. It comes up almost every time I lecture on the media’s crimes in science, and that is why I will explain the story in some detail here, because in reality—sadly, really—this finest hour never occurred.

In 1957, a baby was born with no ears to the wife of an employee at Grünenthal, the German drug company. He had taken its new antinausea drug home for his wife to try while she was pregnant, a full year before it went on the market. This is an illustration both of how slapdash things were and of how difficult it is to spot a pattern from a single event.

The drug went to market, and between 1958 and 1962 around ten thousand children all around the world were born with severe malformations, caused by this same drug, thalidomide. Because there was no central monitoring of malformations or adverse reactions, the pattern was missed. An Australian obstetrician called William McBride first raised the alarm in a medical journal, publishing a letter in The Lancet in December 1961. He ran a large obstetric unit, seeing a great number of cases, and he was rightly regarded as a hero; but it’s sobering to think that he was in such a good position to spot the pattern only because he had prescribed so much of the drug, without knowing its risks, to his patients.
By the time his letter was published, a German pediatrician had noted a similar pattern, and the results of his study had been described in a German Sunday newspaper a few weeks earlier.

Almost immediately afterward, the drug was taken off the market, and pharmacovigilance began in earnest, with notification schemes set up around the world, however imperfect you may find them to be. If you ever suspect that you’ve experienced an adverse drug reaction, I would regard it as your duty as a member of the public to report it (in the United States anyone, including patients, can report an adverse event at the FDA MedWatch site). These reports can be collated and monitored as an early warning sign, and are a part of the imperfect, pragmatic monitoring system for picking up problems with medications.

Now the media claim that the original 1998 Wakefield research has been “debunked” (it was never anything compelling in the first place), and you will be able to watch this year as they try to pin the whole scare onto one man. I’m a doctor too, and I don’t imagine for one moment that I could stand up and create a nine-year-long news story on a whim. It is because of the media’s blindness—and their unwillingness to accept their responsibility—that they will continue to commit the same crimes in the future. There is nothing you can do about that, so it might be worth paying attention now.

To remind ourselves, here is the story of MMR as it appeared in the British news media from 1998 onward:

 
  • Autism is becoming more common, although nobody knows why.
  • A doctor called Andrew Wakefield has done scientific research showing a link between the MMR triple jab and autism.
  • Since then, more scientific research has been done confirming this link.
  • There is evidence that single jabs might be safer, but government doctors and those in the pay of the pharmaceutical industry have simply rubbished these claims.
  • Tony Blair probably didn’t give his young son the vaccine.
  • Measles isn’t so bad.
  • And vaccination didn’t prevent it very well anyway.
 

I think that’s pretty fair. The central claim for each of these bullet points was either misleading or downright untrue, as we shall see.

Vaccine Scares in Context

 

Before we begin, it’s worth taking a moment to look at vaccine scares around the world, because I’m always struck by how circumscribed these panics are and how poorly they propagate themselves in different soils. Before celebrities got their hands on it, a decade later, the MMR and autism scare, for example, was practically nonexistent outside Britain, even in Europe and the United States. But throughout the 1990s France was in the grip of a scare that hepatitis B vaccine caused multiple sclerosis (it wouldn’t surprise me if I were the first person to tell you that).

In the United States, at the time, the major vaccine fear had been around the use of a preservative called thimerosal, although somehow this hasn’t caught on in the U.K., even though that same preservative was used in Britain. And in the 1970s—since the past is another country too—there was a widespread concern in the U.K., driven again by a single doctor, that whooping cough vaccine was causing neurological damage.

To look even farther back, there was a strong anti–smallpox vaccine movement in Leicester well into the 1930s, despite its demonstrable benefits, and in fact, anti-inoculation sentiment goes right back to its origins: when James Jurin studied inoculation against smallpox (finding that it was associated with a lower death rate than the natural disease), his newfangled numbers and statistical ideas were treated with enormous suspicion. Indeed smallpox inoculation remained illegal in France until 1769.
Even when Edward Jenner introduced the much safer vaccination for protecting people against smallpox at the turn of the nineteenth century, he was strongly opposed by the London cognoscenti.

And in an article from Scientific American in 1888 you can find the very same arguments that modern antivaccination campaigners continue to use today:

The success of the anti-vaccinationists has been aptly shown by the results in Zurich, Switzerland, where for a number of years, until 1883, a compulsory vaccination law obtained, and smallpox was wholly prevented—not a single case occurred in 1882. This result was seized upon the following year by the anti-vaccinationists and used against the necessity for any such law, and it seems they had sufficient influence to cause its repeal. The death returns for that year (1883) showed that for every 1,000 deaths two were caused by smallpox; In 1884 there were three; in 1885, 17, and in the first quarter of 1886, 85.

 

Meanwhile, WHO’s highly successful global polio eradication program was on target to have eradicated this murderous disease from the face of the earth by now—a fate that has already befallen the smallpox virus, excepting a few glass vials—until local imams in Kano, a state in northern Nigeria, claimed that the vaccine was part of a U.S. plot to spread AIDS and infertility in the Islamic world and organized a boycott that rapidly spread to five other states in the country. This was followed by a large outbreak of polio in Nigeria and surrounding countries and tragically even farther afield. There have now been outbreaks in Yemen and Indonesia, causing lifelong paralysis in children, and laboratory analysis of the genetic code has shown that these outbreaks were caused by the same strain of the polio virus, exported from Kano.

After all, as any trendy MMR-dodging North London middle-class humanities graduate couple with children would agree, just because vaccination has almost eradicated polio—a debilitating disease that as recently as 1988 was endemic in 125 countries—doesn’t necessarily mean it’s a good thing.

The diversity and isolation of these antivaccination panics help illustrate the way in which they reflect local political and social concerns more than a genuine appraisal of the risk data: because if the vaccine for hepatitis B, or MMR, or polio is dangerous in one country, it should be equally dangerous everywhere on the planet, and if those concerns were genuinely grounded in the evidence, especially in an age of the rapid propagation of information, you would expect the concerns to be expressed by journalists everywhere. They’re not.

Andrew Wakefield and His Lancet Paper

 

In February 1998 a group of researchers and doctors led by a surgeon called Andrew Wakefield from the Royal Free Hospital in London published a research paper in The Lancet that by now stands as one of the most misunderstood and misreported papers in the history of academia. In some respects it did itself no favors: it is badly written and has no clear statement of its hypothesis, or indeed of its conclusions (you can read it free online if you like). It has since been fully retracted by The Lancet, whose editor explained it was “utterly clear, without any ambiguity at all, that the statements in the paper were utterly false.”

The paper described twelve children who had bowel problems and behavioral problems (mostly autism) and mentioned that the parents or doctors of eight of these children believed that their children’s problems had started within a few days of their being given the MMR vaccine. It also reported various blood tests and tests on tissue samples taken from the children. The results of these were sometimes abnormal, but varied between children.

12 children, consecutively referred to the department of pediatric gastroenterology with a history of a pervasive developmental disorder with loss of acquired skills and intestinal symptoms (diarrhea, abdominal pain, bloating and food intolerance), were investigated.

…In eight children, the onset of behavioral problems had been linked, either by the parents or by the child’s physician, with measles, mumps, and rubella vaccination…In these eight children the average interval from exposure to first behavioral symptoms was 6.3 days (range 1–14).
