Authors: John Abramson
In retrospect one wonders why the NIH and FDA continued to support Rezulin long after it was known to be associated with so many deaths. One particularly troubling aspect of Rezulin’s seemingly privileged treatment was provided by David Willman’s series in the Los Angeles Times: Dr. Eastman, while in charge of diabetes research at the NIH and overseeing the $150 million study in which Rezulin was included, was receiving $78,455 from Warner-Lambert on top of his $144,000 annual salary from the NIH. Between 1991 and 1997, Dr. Eastman had received, according to the Los Angeles Times, “at least $260,000 in consulting-related fees from a variety of outside sources, including six drug manufacturers.” None of this was part of the public record, but the financial relationship with Warner-Lambert had been approved by two of Dr. Eastman’s superiors. And Dr. Eastman was by no means alone. In fact, the Los Angeles Times reported that no fewer than 12 of the 22 researchers who were overseeing the $150 million government-sponsored diabetes study as “principal investigators” were receiving fees or research grants from Warner-Lambert.
One would think that, once these drug companies’ lucrative consulting contracts with high-ranking NIH officials with direct responsibility for the companies’ products had been brought to the light of day, a firewall would have been quickly erected. Hardly. In December 2003, David Willman wrote an article titled “Stealth Merger: Drug Companies and Government Medical Research,” in which he identified multiple examples of NIH officials receiving payments of hundreds of thousands of dollars from drug companies.
“Subject No. 4” died while participating in a drug study at the National Institutes of Health on June 14, 1999. She was Jamie Ann Jackson, a 42-year-old registered nurse, married and a mother of two. Mrs. Jackson was the second person who had died while participating in NIH studies of a drug named Fludara, marketed by Berlex Laboratories. This drug, which had been used to treat leukemia since 1991, was being tested to see if it helped patients with autoimmune diseases. No more patients were enrolled in the study after the second death, but the study continued with the patients already enrolled for another nine months, and terminated only when five of the remaining 12 patients developed abnormalities in their blood tests. Dr. Stephen I. Katz was the director of the NIH’s National Institute of Arthritis and Musculoskeletal and Skin Diseases, which was conducting the study. According to the Los Angeles Times, between 1996 and 2002 Dr. Katz received more than $170,000 in consulting fees from the German drug manufacturer Schering AG. (It was during this time period that the fatal study of Berlex’s drug Fludara was being conducted.) These details are important because Berlex is a wholly owned subsidiary of Schering AG, described as its “U.S. business unit.” Dr. Katz told the Los Angeles Times that he had been “unaware of any relationship between Berlex and Schering AG,” and therefore unaware of a potential conflict of interest. But, according to the Los Angeles Times, “Katz declined to identify when he learned that Berlex was the U.S. affiliate of Schering AG.”
Drs. Eastman and Katz were certainly not the only high-ranking officials at the NIH to receive consulting fees from the drug industry. Another official had accepted $1.4 million plus stock options over an 11-year period, while at least one of the companies for whom he was consulting was involved with the work of the laboratory he directs at the National Institute of Allergy and Infectious Diseases.
The financial conflicts of interest at the NIH are by no means isolated examples of drug company influence on the government oversight of the drug industry. Because crucial recommendations about drug approval and drug labeling are made at the FDA’s Advisory Committee meetings, federal law “generally prohibits” the participation of experts who have financial ties to the products being presented on these committees. An article in USA Today in September 2000 shows, however, that the FDA granted so many waivers—800 between 1998 and 2000—that 54 percent of the experts on these all-important Advisory Committees had “a direct financial interest in the drug or topic they are asked to evaluate.” And this 54 percent figure does not take into account that FDA rules do not even require an Advisory Committee member to declare receipt of amounts less than $50,000 per year from a drug company as long as the payment is for work not related to the drug being discussed.
The storm clouds grew even darker as the government institutions responsible for protecting the public’s interest became dependent on drug company largesse.
None of this would have been possible, of course, without the insatiable appetite of politicians for industry dollars. Lobbying efforts on behalf of the drug industry are unrivaled. It spent $177 million on lobbying in 1999 and 2000, $50 million more than the next closest industry, insurance. The drug industry hires 625 lobbyists, more than one for each member of the House and Senate. The drug industry’s $20 million in campaign contributions for the 2000 election seems downright stingy compared with the insurance industry’s $40 million. (Could this be playing any role in President Bush’s desire to privatize Medicare?) The $20 million, however, doesn’t include the approximately $65 million for so-called issue ads aired by Citizens for Better Medicare. Though this organization appeared to be a grassroots movement, it was in fact funded primarily, if not exclusively, by the drug industry, and its ads tended to benefit candidates who supported the drug industry’s legislative goals.
Money from the drug industry has been pouring into politics, with the balance of support tipping progressively more toward the Republicans, who received about 76 percent of the drug industry’s financial largesse in the 1999–2000 election cycle. It’s not often that we get to see what this money actually buys, the actual quid pro quo laid out in black and white. But a letter from Jim Nicholson, the chairman of the Republican National Committee, to Charles Heimbold, chairman and CEO of Bristol-Myers Squibb, made public as a result of legal challenges to the constitutionality of the McCain-Feingold campaign finance reform law, shows how this can work. The letter, written in April 1999, was delivered at a time when pressure for a bill to provide prescription drug benefits to senior citizens was beginning to mount. The drug industry was jockeying for a bill that would enhance its bottom line by providing Medicare funds to purchase its drugs, while at the same time blocking the federal government from using its purchasing power to negotiate lower prices (as Medicare has done so successfully with payments to doctors and hospitals).
In the letter, Nicholson expresses his approval of “forming a pharmaceutical coalition” that will provide the “perfect vehicle for the Republican Party to reach out to the health care community and discuss their legislative needs.” The letter goes on to say, “We must keep the lines of communication open if we want to continue passing legislation that will benefit your industry.” The penultimate paragraph describes just how to keep those lines open, including a request for a $250,000 donation from Bristol-Myers Squibb to the Republican National Committee. With tens of billions of dollars a year on the line for the drug industry, what was a mere $250,000?
Perhaps the storm clouds were being actively seeded.
Drug companies, government, doctors, patients, insurers. Health care costs keep rising, with no end in sight, and despite the myths about the excellence of our medical care, we are not realizing commensurate improvements in our health. The American health care system keeps edging ever closer to the breaking point. Many factors are contributing, but in the eye of the storm is a single factor: the transformation of medical knowledge from a public good, measured by its potential to improve our health, into a commodity, measured by its commercial value. This transformation is the result of the commercial takeover of the process by which “scientific evidence” is produced. How this takeover occurred, and how it affects the quality of the medical information that well-informed, dedicated doctors rely on to make clinical decisions, is the subject of the next chapter.
From their first day of training, medical students are taught to trust the research published in peer-reviewed medical journals. They learn to take for granted that publication of research findings in these journals ensures that the principles of rigorous science have been followed: that the research has been properly designed to answer the question in a way that can be translated into clinical practice; that the data have been analyzed fairly and completely; that the conclusions drawn are justified by the research findings; and that the scientific evidence that has been published constitutes our best medical knowledge. This medical literature then serves as the source that enables doctors to keep current with new developments in medicine.
As part of my fellowship in the early 1980s, I spent many hours with some very smart people, meticulously analyzing and critiquing scientific articles. Of course there were flaws and limitations in virtually every study, but I can’t remember a single instance when the validity of a study was called into question because of manipulation of the data or compromise of the rules of science to gain commercial advantage. That vision of the medical literature now seems as quaint as Norman Rockwell’s painting of the boy standing on a chair, bending forward slightly, about to get an injection in his backside from his trusted doctor.
It’s not news that medical research has become big business, often with billions of dollars on the line. The problem is that the search for scientific truth is, by its very nature, unpredictable, and this uncertainty is hardly optimal from a business point of view. There is far too much at stake to leave this process to the uncertainties of science. In this context, the role of the drug and medical-device companies has evolved so that their most important products are no longer the things they make. Now their most important product is “scientific evidence.” This is what drives sales. In this commercial context, the age-old standards of good science are being quietly but radically weakened, and in some cases abandoned. Here’s how it works.
Prior to 1970, medical researchers had relatively little problem obtaining funding from the National Institutes of Health, and few medical studies were sponsored solely by drug companies. An article published in the journal Science in 1982 describes medical scientists thumbing “their academic noses at industrial money” in the 1970s. But as government support for medical research started to decline, scientists and universities were forced to look for alternative sources of support for their research. The medical industry was more than willing to step in and lend a helping hand. Universities had no choice, and researchers’ attitudes about commercial funding changed. Government funding continued to decline, so that by 1990 almost two-thirds of requests for research funds from the NIH were not granted. Meanwhile, between 1977 and 1990, drug company expenditures on research and development increased sixfold, and much of the money went to support university-based clinical research.
This shift in the source of funding set the stage for what was to follow.
In 1991, four out of five commercially sponsored clinical drug studies were still being conducted by universities and academic medical centers. Academic researchers still played key roles in all phases of the research, from designing studies to recruiting patients to analyzing data to writing the articles and submitting them for publication. This may have been good for medical science and good for universities, but it was certainly not optimal for the drug and medical-device companies. Research done in university medical centers cost more and involved more administrative hoops and delays. Most important, the checks and balances present in an academic environment could be sidestepped if the research dollars were taken elsewhere.
As the drug and biotech industries assumed an ever-larger role in funding clinical trials (reaching 80 percent by 2002), they increasingly exercised the power of their purse. Control over clinical research changed—quietly at first, but very quickly, and with profound effects on medical practice. The role of academic medical centers in clinical research diminished precipitously during the 1990s as the drug industry turned increasingly to new independent, for-profit medical research companies that emerged in response to commercial funding opportunities. These companies could gain access to patients for clinical research through community-based doctors, and play a larger role in research design, data analysis, and even writing up the findings and submitting complete articles to journals for publication.
By 2000, only one-third of clinical trials were being done in universities and academic medical centers, and the rest were being done by for-profit research companies that were paid directly by the drug companies.
Increased reliance on private research companies allowed the drug industry to kill two birds with one stone: It could now call the shots on most of the studies that were evaluating its own products without having to accept input from academics who were grounded in traditional standards of medical science. And the increasing competition for commercial research dollars put academic centers under even more pressure to accept the terms offered by the commercial sponsors of research, threatening the independence and scientific integrity that had been the hallmark of the academic environment. In 1999 Dr. Drummond Rennie, deputy editor of the Journal of the American Medical Association, characterized the response of academic institutions to this changing climate: “They are seduced by industry funding, and frightened that if they don’t go along with these gag orders, the money will go to less rigorous institutions. It’s a race to the ethical bottom.”