Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients

We could tolerate some of these problems, but enduring all of them at once creates a dangerous situation, in which patients are routinely harmed for lack of knowledge. It wouldn’t matter, for example, that the market is flooded with drugs that are of little benefit, or are worse than their competitors, if doctors and patients knew this, could find out immediately and conveniently which are the best options, and could change their behaviour to reflect that. But this is not possible when we are deprived of existing information on risks and benefits by secretive regulators, or where good-quality trial data is not even collected.

In my view, fixing this situation requires a significant cultural shift in how we approach new medicines; but before we get to that, there are several small, obvious steps which should go without saying.

 
  1. Drug companies should be required to provide data showing how their new drug compares against the best currently available treatment, for every new drug, before it comes onto the market. It’s fine that sometimes drugs will be approved despite showing no benefit over current treatments, because if a patient has an idiosyncratic reaction to the current common treatment, it is useful to have other inferior options available in your medical arsenal. But we need to know the relative risks and benefits, if we are to make informed decisions.
  2. Regulators and healthcare funders should use their influence to force companies to produce more informative trials. The German government have led the field here, setting up an agency in 2010 called IQWiG, which looks at the evidence for all newly approved drugs, to decide if they should be paid for by Germany’s healthcare providers. IQWiG has been brave enough to demand good-quality trials, measuring real-world outcomes, and has already refused to approve payments for new drugs where the evidence provided is weak. As a result, companies have delayed marketing new drugs in Germany while they try to produce better evidence that they really do work:[55] patients don’t lose out, since there’s no good evidence that these new drugs are useful. Germany is the largest market in Europe, at 80 million patients, and it’s not a poor country. If all purchasers around the world held the line, and refused to buy drugs presented with weak evidence, then companies would be forced to produce meaningful trials much more quickly.
  3. All information about safety and efficacy that passes between regulators and drug companies should be in the public domain, as should all data held by national and international bodies about adverse events on medications, unless there are significant privacy concerns on individual patient records. This has benefits that go beyond immediate transparency. Where there is free access to information about a treatment, we benefit from ‘many eyes’ on the problems around it, analysing them more thoroughly, and from more perspectives. Rosiglitazone, the diabetes drug, was removed from the market because of problems with heart failure, but those problems weren’t identified and acted on by a regulator: they were spotted by an academic, working on data that was, unusually, made more publicly available as the result of a court case. The problems with the pain drug Vioxx were spotted by independent academics outside the regulator. The problems with the diabetes drug benfluorex were spotted, again, by independent academics outside the regulator. Regulators should not be the only people who have access to this data.
  4. We should aim to create a better market for communicating the risks and benefits of medications. The output of regulators is stuffy, legalistic and impenetrable, and reflects the interests of regulators, not patients or doctors. If all information is freely available, then it can be repurposed by those who have access to it, and précised into better forms. These could be publicly funded and given away, or privately funded and sold, depending on business models.

This is all simple. But there is a broader issue that no government has ever satisfactorily addressed, bubbling under in the culture of medicine: we need more trials. Wherever there is true uncertainty about which treatment is best, we should simply compare them, see which is best at treating a condition, and which has worse side effects.

This is entirely achievable, and at the end of the next chapter I will outline a proposal for how we can carry out trials cheaply, efficiently and almost universally, wherever there is true uncertainty. It could be used at the point of approval of every new drug, and it could be used throughout all routine treatment.

But first, we need to see just how rubbish some trials can be.

4

Bad Trials

So far I’ve taken the idea of a clinical trial for granted, as if there was nothing complicated about it: you just take some patients; split them in half; give one treatment to one group, another to the other; and then, a while later, you see if there is any difference in outcomes between your two groups.

We’re about to see the many different ways in which trials can be fundamentally flawed, by both design and analysis, in ways that exaggerate benefits and underplay harms. Some of these quirks and distortions are straightforward outrages: fraud, for example, is unforgivable, and dishonest. But some of them, as we will see, are grey areas. There can be close calls in hard situations, to save money or to get a faster result, and we can only judge each trial on its own merits. But it is clear, I think, that in many cases corners are cut because of perverse incentives.

We should also remember that many bad trials (including some of the ones discussed in the pages to follow) are conducted by independent academics. In fact, overall, as the industry is keen to point out, where people have compared the methods of independently-sponsored trials against industry-sponsored ones, industry-sponsored trials often come out better. This may well be true, but it is almost irrelevant, for one simple reason: independent academics are bit players in this domain. Ninety per cent of published clinical trials are sponsored by the pharmaceutical industry. They dominate this field, they set the tone, and they create the norms.

Lastly, before we get to the meat, here is a note of caution. Some of what follows is tough: it’s difficult science that anyone can understand, but some examples will take more mental horsepower than others. For the complicated ones I’ve added a brief summary at the beginning, and then the full story. If you find it hard going, you could skip the details and take the summaries on trust. I won’t be offended, and the final chapter of the book – on dodgy marketing – is filled with horrors that you mustn’t miss.

To the bad trials.

Outright fraud

Fraud is an insult. In the rest of this chapter we will see wily tricks, close calls, and elegant mischief at the margins of acceptability. But fraud disappoints me the most, because there’s nothing clever about it: nothing methodologically sophisticated, no plausible deniability, and no argument about whether it breaks the data. Somebody just made the results up, and that’s that. Delete, ignore, start again.

So it’s lucky – for me and for patients – that fraud is also fairly rare, as far as anyone can tell. The best current estimate of its prevalence comes from a systematic review in 2009, bringing together the results of survey data from twenty-one studies, asking researchers from all areas of science about malpractice. Unsurprisingly, people give different responses to questions about fraud depending on how you ask them. Two per cent admitted to having fabricated, falsified or modified data at least once, but this rose to 14 per cent when they were asked about the behaviour of colleagues. A third admitted other questionable research practices, and this rose to 70 per cent, again, when they were asked about colleagues.

We can explain at least part of the disparity between the ‘myself’ and ‘others’ figures by the fact that you are one person, whereas you know lots of people, but since these are sensitive issues, it’s probably safe to assume that all responses are an underestimate. It’s also fair to say that sciences like medicine or psychology lend themselves to fabrication, because so many factors can vary between studies, meaning that picture-perfect replication is rare, and as a result nobody will be very suspicious if your results conflict with someone else’s. In an area of science where the results of experiments are more straightforwardly ‘yes/no’, failed replication would expose a fraudster much more quickly.

All fields are vulnerable to selective reporting, however, and some very famous scientists have manipulated their results in this way. The American physicist Robert Millikan won a Nobel Prize in 1923 after demonstrating with his oil-drop experiment that electricity comes in discrete units – electrons. Millikan was mid-career (the peak period for fraud) and fairly unknown. In his famous paper from Physical Review he wrote: ‘This is not a selected group of drops, but represents all of the drops experimented on during sixty consecutive days.’ That claim was entirely untrue: the paper reported fifty-eight droplets, but his notebooks record 175, annotated with phrases like ‘publish this beautiful one’ and ‘agreement poor, will not work out’. A debate has raged in the scientific literature for many years over whether this constitutes fraud, and to an extent, Millikan was lucky that his results could be replicated. But in any case, his selective reporting – and his misleading description of it – lies on a continuum with all sorts of research activity that can feel perfectly innocent, if it’s not closely explored. What should a researcher do with the outliers on a graph that is otherwise beautifully regular? When they drop something on the floor? When the run on the machine was probably contaminated? For this reason, many experiments have clear rules about excluding data.

Then there is outright fabrication. Dr Scott Reuben was an American anaesthetist working on pain who simply never conducted at least twenty of the clinical trials he published over the previous decade.[1] In some cases, he didn’t even pretend to get approval for conducting studies on patients in his institution, and simply presented the results of trials that were conjured out of nothing. Data in medicine, as we should keep remembering, is not abstract or academic. Reuben claimed to have found that non-opiate medications were as effective as opiates for the management of pain after surgical operations. This pleased everyone, as opiates are generally addictive, and have more side effects. Practice in many places was changed, and now that field is in turmoil. Of all the corners in medicine where you could perpetrate fraud, and change the decisions that doctors and patients make together, pain is one area that really matters.

There are various ways that fraudsters can be caught, but constant vigilant monitoring by the medical and academic establishment is not one of them, as that doesn’t happen to any sufficient extent. Often detection is opportunistic, accidental or the result of local suspicions. Malcolm Pearce, for example, was a British obstetric surgeon who published a case report claiming he had reimplanted an ectopic pregnancy, and furthermore that this had resulted in the successful delivery of a healthy baby. An anaesthetist and a theatre technician in his hospital thought this was unlikely, as they’d have heard if such a remarkable thing had happened; so they checked the records, found no trace of any such event, and things collapsed from there.[2] Notably, in the same issue of the same journal, Pearce had also published a paper reporting a trial of two hundred women with polycystic ovary syndrome whom he treated for recurrent miscarriage. The trial never happened: not only had Pearce invented the patients and the results, he had even concocted a fictitious name for the sponsoring drug company, a company that never existed. In the era of Google, that lie might not survive for very long.
