
There was, however, an equally important explanation for the ‘dearth of new drugs'. The most extraordinary aspect of the post-war therapeutic revolution is how it occurred in the absence of the most basic understanding of disease processes – of what, for example, was happening to cause the airways to constrict during an attack of asthma, or the functioning of the neurotransmitters in the brains of patients with schizophrenia. This ocean of ignorance had been bridged by the facility with which pharmaceutical research chemists could synthesise chemical compounds in their millions, which could then be investigated for any potential therapeutic effect.

But the pharmaceutical companies realised that sooner or later they would start to run out of new chemicals to test in this way. From the mid-1960s onwards there was a hope that it would be possible to replace this rather crude method of drug discovery with something altogether more elegant and ‘scientific'. Certainly pharmaceutical researchers now knew much more about the biochemical workings of the cell and had identified many of the chemical transmitters in the brain and elsewhere by which one cell communicated with another. So it seemed much better, rather than stumbling around in the dark hoping to chance upon some unexpected discovery, to exploit this new-found knowledge and deliberately design drugs to fulfil a defined function. This approach was not exactly new: George Hitchings and Gertrude Elion had discovered a whole string of drugs, such as azathioprine, by purposefully designing compounds to interfere with the synthesis of DNA. But it was Sir James Black's two classic discoveries, first of propranolol (which blocked the beta receptors in the heart, thus relieving the symptoms of angina) and then of cimetidine (which blocked the histamine receptors in the gut, thus reducing acid secretion and allowing ulcers to heal), that convinced many that the future lay with ‘designing drugs'.4

In a curious paradox, this ‘scientific' approach to drug discovery has turned out to be much less fruitful than was hoped, particularly when compared to the blind, random methods it was intended to replace. The philosophical rationale was that if the problems of human disease could be explained at the most fundamental level of the cell and its genes and proteins, it should then be possible to correct whatever was wrong. Though intuitively appealing, this approach presupposes that it is actually possible, given the complexity of biology, to ‘know' enough to be able to achieve this. By contrast, the earlier mode of drug discovery, blind and dependent on chance as it might be, did at least allow for the possibility of the unexpected. Or, put another way, this scientific approach to drug discovery could never have led to penicillin or cortisone.

It would be wrong to suggest that the scientific road to discovery from the mid-1970s onwards has not produced some genuinely useful drugs. Its successes include, most recently, a vaccine against the chronic liver infection hepatitis B, and ‘triple therapy' for the treatment of AIDS.5 But by the mid-1990s the list of the top ten ‘blockbuster' drugs – the ones that generate the billions of dollars of revenue that sustain the industry's profitability – featured, for the most part, new or more expensive variants of the antibiotics, anti-inflammatories and antidepressants originally introduced twenty or more years ago.6 They might well be more effective, have fewer side-effects or be easier to take, but with the occasional exception none can be described as making a significant inroad into previously uncharted therapeutic areas in the way that the discovery of chlorpromazine, for example, transformed the treatment of schizophrenia. There was enormous optimism that biotechnology might generate a further cornucopia of new drugs but, again with the occasional exception, these compounds – insulin, growth hormone, factor VIII – turned out to be no better therapeutically than those they replaced. They are certainly a lot more expensive.

The most striking feature of many of the most recently introduced drugs is that there is considerable doubt about whether they do any good at all. Thus there was much hope that the drug finasteride, ‘scientifically designed' to block the metabolism of testosterone and thus shrink the prostate, would reduce the need for an operation in those in whom the gland is enlarged. This would indeed have been a significant breakthrough but, as an editorial in the New England Journal of Medicine observed, ‘the magnitude of the change in symptoms [of patients] is not impressive'.7 Similarly, a new generation of drugs for the treatment of epilepsy, based on interfering with the neurotransmitter GABA, was dismissed by an editorial in the British Medical Journal as ‘poorly assessed', with no evidence that they were any better than the anti-epileptic drugs currently in use.8 New treatments for multiple sclerosis and Alzheimer's disease appear to offer such marginal benefits that their ‘clinical cost-effectiveness falls at the first hurdle'.9

Frustrated at the failure to find cures for serious diseases like cancer and dementia, the pharmaceutical industry has been forced to look elsewhere for profitable markets for its products. This explains the rise of so-called ‘lifestyle' drugs, whose prime function is to restore those social faculties or attributes that tend to diminish with age: Regaine for the treatment of baldness, Viagra for male impotence, Xenical for obesity and Prozac for depression. The pharmaceutical industry may have blamed the ‘dearth of new drugs' on over-regulation, but the problem seems to run much deeper. It should still have been able to come up with genuine breakthrough drugs irrespective of the new stringent regulatory requirements, but despite investment in research on a scale greater by orders of magnitude than that of the halcyon days of the 1950s and 1960s, they have not materialised. This dispiriting analysis is vulnerable to the charge of oversimplification, but it is confirmed by the one truly objective measurement of the fortunes of the pharmaceutical industry – its performance in the marketplace. Thanks to the ‘blockbuster drugs', the industry remained profitable, but the twin pressures of massive research costs (£6 billion was spent by the top ten companies in 1994 alone) and the imminent prospect that the patent protection on many of the more profitable products would expire around the time of the millennium undermined the viability of many previously gilt-edged companies, leaving them no alternative but to submerge their identity in a rash of massive billion-pound mergers: Glaxo with Wellcome, Smith, Kline & French (SKF) with Beechams, Upjohn of the United States with Pharmacia of Sweden, Sandoz with Ciba, and so on.10

Reflecting on this merger mania, John Griffin, formerly director of the Association of the British Pharmaceutical Industry, has observed: ‘These companies are “ideas poor”, resorting to finding new uses and novel delivery systems for established active products whose patent expiry is imminent . . . real innovations are very obviously not coming from those companies involved in merger mania, whose management currently appears unable to think radically or constructively.'11

The contrasting fortunes of the pharmaceutical industry before and after the 1970s are underpinned by the profound paradox of an apparent inverse relationship between the scale of investment in research and drug innovation. Recognising this, the pharmaceutical industry in the early 1990s decided to reorient its approach to drug discovery, using automated techniques to screen millions of chemical compounds for their biological activity, hoping to identify the ‘lead compounds' that might have the sort of genuinely novel therapeutic effect that could form the basis for new drugs. This reversion – albeit with techniques much more sophisticated than in the past – to the process by which the important drugs of the 1940s and 1950s were discovered is obviously highly significant, though whether it will ‘deliver the goods' remains to be seen.12

3
Technology's Failings

Fire was the ‘original technology', acquired for man by Prometheus, who had stolen it from the gods. Zeus was not amused and directed that Prometheus be bound to a rock with chains, to be visited there daily by an eagle who fed off his liver. The punishment may seem a bit harsh, but in one sense Zeus was right: technology is double-edged. It confers prodigious powers, yet such power can also be enslaving, controlling the actions of those who possess it.

Technology was out of step with the major trends of the End of the Age of Optimism. The 1980s were an important decade: for diagnostic imaging (with important developments in CT and MRI scanning, ultrasound and similar techniques);1 for ‘interventional radiology' (with angioplasty, the dilation of narrowed arteries with plastic catheters);2 and for ever more sophisticated methods of endoscopy, culminating in the remarkable technical achievement of minimally invasive surgery.3

Nonetheless, against the background of these innovations, the general and probably correct perception of medical technology is that it is out of control. The discussion that follows examines the consequences with three examples in ascending order of seriousness: firstly, ‘over-investigation' (the overuse of diagnostic technology); secondly, the false premises, and promises, of foetal monitoring; and lastly, the role of intensive care in needlessly prolonging the process of dying.

The Misuse of Diagnostic Technology

The ever-perceptive Peter Medawar, Nobel Prize winner for his contribution to transplantation, observed that when people spoke about the ‘art and science' of medicine they invariably got them the wrong way round, presuming the ‘art' to be those aspects that involved being sympathetic and talking to the patient, and the ‘science' to be the difficult bit of interpreting the results of sophisticated tests that permits the correct diagnosis to be made. The reverse is the case, argued Medawar. The real ‘science' in medicine is the thorough understanding of the nature of a medical problem that comes from talking at length to the patient, and performing a physical examination to elicit the relevant signs of disease. From this old-fashioned, Tommy Horder style of medicine it is usually possible to infer precisely what is wrong in 90 per cent of cases. By contrast, the technological gizmos and arcane tests that pass for the ‘science' of medicine can frequently be quite misleading. The logic of Medawar's argument leads to the playful paradox that the more tests doctors can do, the less ‘scientific' (in the sense of generating reliable knowledge) medicine becomes. And throughout the 1970s doctors did ‘do' more tests, twice as many at the end of the decade as at the beginning, resulting in the description of an entirely new syndrome of ‘medical vampirism', where so much blood was taken for tests from patients while in hospital that they became anaemic, requiring in some instances a blood transfusion.4

‘The comforting, if spurious, precision of laboratory results has the same appeal as a lifebelt to the weak swimmer,' an editorial in The Lancet noted in 1981, before going on to enumerate the several reasons why doctors performed so many unnecessary tests: there was the ‘just-in-case test' requested by junior doctors ‘just in case' the consultant might ask for the result, the ‘routine test' whose results hardly ever contributed to the diagnosis, and the ‘ah-ha test' whose results were known to be abnormal in certain conditions and which was ordered ‘to advertise the cleverness of the clinician'.5

This fetishisation of technical data was part of a more generalised phenomenon whereby the modern physician had become a doctor with technically specialised diagnostic skills. Thus it was no longer sufficient for the gastroenterologist to know a lot about gut diseases; he had also to be skilled in passing the endoscope down into the stomach and up into the colon. Nor was it sufficient for the cardiologist to rely on his traditional skills with the stethoscope, as he also had to acquire the necessary manipulative skills of the ‘catheter laboratory', passing catheters into veins and arteries to measure the pressures within the heart.

There is, of course, no reason why gastroenterologists or cardiologists should not possess these skills, but they can easily become an end in themselves, a means of gathering information that might be gleaned in simpler ways. There is, for example, little difficulty in establishing the diagnosis of a peptic ulcer by the traditional clinical methods of taking a history and examining the patient, but for the modern gastroenterologist any patient with stomach pains merits an endoscopy to visualise the ulcer, as well as a further endoscopy after treatment to see if it has healed. This inappropriate use of investigational techniques was, argued one of their number, Michael Clark of St Bartholomew's Hospital, a sign of intellectual degeneration. ‘The young men of the 1960s became gastroenterologists because it was an expanding speciality with an intellectual challenge to understand more about the gut and apply this to clinical practice,' he wrote, ‘but the young gastroenterologist of today is only happy if he can learn another endoscopic technique: the excitement of the 1960s has been replaced by the decade of the Peeping Tom.'6

The great virtue of endoscopy for gastroenterologists was that it earned them a lot of money. The standard fee for a private consultation in Britain is around £100, but if the specialist throws in an endoscopy, graded by the insurance companies as an ‘intermediate operation', he can make four times that sum. (In private medical systems such as that of the United States, the endoscope and ‘catheter lab' generate 80 per cent of the specialist's income.) This phenomenon of ‘over-investigation' – the performing of large numbers of tests on patients whose medical problems are quite straightforward – may seem a fairly trivial matter, but it is costly and, more seriously, it introduces an alien element into the medical encounter, downgrading the importance of wisdom and experience in favour of spurious objectivity.
