Statistics for Dummies
Author: Deborah Jean Rumsey

Chapter 17:
Experiments—Medical Breakthroughs or Misleading Results?

Medical breakthroughs seem to come and go quickly in today's age of information. One day, you hear about a promising new treatment for a disease, only to find out later that the drug didn't live up to expectations in the last stage of testing. Pharmaceutical companies bombard TV viewers with commercials for pills, sending millions of people to their doctors clamoring for the latest and greatest cures for their ills, sometimes without even knowing what the drugs are for. Anyone can search the Internet for details about any type of ailment, disease, or symptom and come up with tons of information and advice. But how much can you really believe? And how do you decide which options are best for you if you get sick, need surgery, or have an emergency?

In this chapter, you get behind the scenes of experiments, the driving force of medical studies and other investigations in which comparisons are made — comparisons that test, for example, which building materials are best, which soft drink teens prefer, which SUV is safest in a crash, and so on. You find out the difference between experiments and observational studies and discover what experiments can do for you, how they're supposed to be done, how they can go wrong, and how you can spot misleading results. With so many news headlines, sound bites, and pieces of "expert advice" coming at you from all directions, you need to use all of your critical thinking skills to evaluate the sometimes-conflicting information you're presented with on a regular basis.

Determining What Sets Experiments Apart

Although many different types of studies exist, you can boil them all down to basically two different types: experiments and observational studies. This section examines what, exactly, makes experiments different from other studies.

An observational study is just what it sounds like: a study in which the researcher merely observes the subjects and records the information. No intervention takes place, no changes are introduced, and no restrictions or controls are imposed. An experiment is a study that doesn't simply observe subjects in their natural state, but deliberately applies treatments to them in a controlled situation and records the outcomes.

Examining experiments

The basic goal of an experiment is to find out whether a particular treatment causes a change in the response. (The operative word here is "cause.") The way an experiment does this is by creating a very controlled environment — so controlled that the researcher can pinpoint whether a certain factor or combination of factors causes a change in the response variable, and if so, the extent to which that factor (or that combination of factors) influences the response.

For example, in order to gain government approval for a proposed drug, pharmaceutical researchers set up experiments to determine whether that drug helps lower blood pressure, what dosage level is most appropriate for each different population of patients, what side effects (if any) occur, and to what extent those side effects occur in each population.

Observing observational studies

In certain situations, observational studies are the optimal way to go. The most common observational studies are polls and surveys (see Chapter 16). When the goal is simply to find out what people think and to collect some demographic information (such as gender, age, income, and so on), surveys and polls can't be beat, as long as they're designed and conducted correctly.

In other situations, especially those looking for cause-and-effect relationships (discussed in detail in Chapter 18), observational studies aren't appropriate. For example, suppose you took a couple of Vitamin C pills last week; is that what helped you avoid getting that cold that's going around the office? Maybe the extra sleep you got recently or the extra hand-washing you've been doing helped you ward off the cold. Or maybe you just got lucky this time. With so many variables in the mix, how can you tell which one had an influence on the outcome of your not getting a cold?

HEADS UP 

When looking at the results of any study, first determine what the purpose of the study was and whether the type of study fits the purpose. For example, if an observational study was done instead of an experiment in order to establish a cause-and-effect relationship (see Chapter 18), any conclusions that are drawn should be carefully scrutinized.

Respecting ethical issues

The trouble with experiments is that some experimental designs are not always ethical. That's why so much evidence was needed to show that smoking causes lung cancer, and why the tobacco companies only recently had to pay huge penalties to victims. You can't force research subjects to smoke in order to see what happens to them. You can only look at people who have lung cancer and work backward to see what factors (variables being studied) may have caused the disease. But because you can't control for the various factors you're interested in — or for any other variables, for that matter — singling out any one particular cause becomes difficult with observational studies.

Although the causes of cancer and other diseases can't be determined ethically by conducting experiments on humans, treatments for cancer can be (and are) tested using experiments. Medical studies that involve experiments are called clinical trials. Check out http://www.clinicaltrials.gov for more information.

REMEMBER 

Surveys, polls, and other observational studies are fine if you want to know people's opinions, examine their lifestyles without intervention, or examine some demographic variables. If you want to try to determine the cause of a certain outcome or behavior (that is, a reason why something happened), an experiment is a much better way to go. If an experiment isn't possible (because it's unethical, too expensive, or otherwise unfeasible), a large body of observational studies examining many different factors and coming up with similar conclusions is the next best thing. (See Chapter 18 for more about cause-and-effect relationships.)

 

Designing a Good Experiment

How an experiment is designed can mean the difference between good results and garbage. Because most researchers are going to write the most glowing press releases that they can about their experiments, you have to be able to sort through the hype in order to determine whether to believe the results you're being told.

To decide whether an experiment is credible, check to see whether it follows these steps for a good experiment:

  1. Includes a large enough sample size so that the results are accurate.

  2. Chooses subjects that most accurately represent the target population.

  3. Assigns subjects randomly to the treatment group(s) and the control group.

  4. Controls for possible confounding variables.

  5. Double-blinds the study to avoid bias.

  6. Collects good data.

  7. Contains the proper data analysis.

  8. Doesn't make conclusions that go beyond the scope and limitations of the study.

In the following sections, each of these criteria is explained in more detail and illustrated using various examples.

Selecting the sample size

The size of a sample greatly affects the accuracy of the results. The larger the sample size, the more accurate the results and the more powerful the statistical tests (in terms of being able to detect real results when they exist). See Chapters 10 and 14 for more details.
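To make that idea concrete, here's a minimal Python sketch of the usual margin-of-error formula for a sample proportion (the Chapter 10/12 formula, with 1.96 as the 95% critical value); the sample sizes are just illustrative numbers:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# The margin shrinks as the sample grows (worst case is p_hat = 0.5):
# n = 30 gives about 0.18, n = 100 about 0.10, n = 1000 about 0.03.
for n in (30, 100, 1000):
    print(n, round(margin_of_error(0.5, n), 3))
```

Notice that quadrupling the sample size only cuts the margin of error in half — accuracy improves with the square root of n, not with n itself.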

Understanding that small samples don't make for big conclusions

You may be surprised at the number of research headlines that have been made that were based on very small samples. This is an issue of great concern to statisticians, who know that in order to detect most differences between groups you need sample sizes that are large (at least larger than 30; see Chapter 10). When sample sizes are small and big conclusions have been made, either the researchers didn't use the right hypothesis test to analyze their data (they often should be using the t-distribution rather than the Z-distribution; see Chapter 14) or the difference was so large that a small sample size was all that was needed to detect that difference. The latter usually isn't the case, however.
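To see why the t-versus-Z distinction matters, here's a minimal Python sketch of the one-sample t statistic from Chapter 14; the small sample of blood-pressure drops is made up for illustration:

```python
import math
import statistics

def t_statistic(sample, mu0):
    """One-sample t statistic. It uses the sample's own standard
    deviation, which is exactly why small samples call for the wider
    t-distribution (n - 1 degrees of freedom) rather than the Z table."""
    n = len(sample)
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)  # sample standard deviation
    return (xbar - mu0) / (s / math.sqrt(n))

# Hypothetical sample of n = 8 blood-pressure drops (in mm Hg):
drops = [4.1, 5.3, 3.8, 6.0, 4.7, 5.1, 3.9, 5.5]
t = t_statistic(drops, mu0=0)
# This t is compared to a t-distribution cutoff with 7 degrees of
# freedom, which is larger than the Z cutoff of 1.96.
```

With only 8 subjects, using the Z table here would overstate how significant the result is; the t-distribution's wider tails build in the extra uncertainty of a small sample.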

HEADS UP 

Be wary of research conclusions that find significant results based on small sample sizes (especially samples much smaller than 30). If the results are important to you, ask for a copy of the research report and look to see what type of analysis was done on the data. Also look at the sample of subjects to see whether this sample truly represents the population about which the researchers are drawing conclusions.

Checking your definition of "sample size"

When asking questions about sample sizes, be specific about what you mean by sample size. For example, you can ask how many subjects were selected to participate and also ask for the number who actually completed the experiment; these two numbers can be very different. Make sure the researchers can explain any situations in which the research subjects decided to drop out or were unable (for some reason) to finish the experiment.

For example, an article in the New York Times entitled "Marijuana Is Called an Effective Relief in Cancer Therapy" says in the opening paragraph that marijuana is "far more effective" than any other drug in relieving the side effects of chemotherapy. When you get into the details, you find out that the results are based on only 29 patients (15 on the treatment, 14 on a placebo). To add to the confusion, you find out that only 12 of the 15 patients in the treatment group actually completed the study; so what happened to the other three subjects?

HEADS UP 

Sometimes, researchers draw their conclusions based on only those subjects who completed the study. This can be misleading, because the data don't include information about those who dropped out (and why). This can lead to biased data. For a discussion of the sample size you need in order to achieve a certain level of accuracy, see Chapter 12.

HEADS UP 

Accuracy isn't the only issue in terms of having "good" data. You still need to worry about eliminating bias by selecting a random sample (see Chapter 3 for more on how random samples are taken).

Choosing the subjects

The first step in designing an experiment is selecting the sample of participants, called the research subjects. Although researchers would like for their subjects to be selected randomly from their respective populations, in most cases, this just isn't feasible. For example, suppose a group of eye researchers wants to test out a new laser surgery on nearsighted people. They need a random sample of subjects, so they randomly select various eye doctors from across the country and randomly select nearsighted patients from these doctors' files. They call up each person selected and say, "We're experimenting with a new laser surgery treatment for nearsightedness, and you've been selected at random to participate in our study. When can you come in for the surgery?"

Something tells me that this approach wouldn't go over very well with many people receiving the call (although some would probably jump at the chance, especially if they didn't have to pay for the procedure). The point is, getting a truly random sample of people to participate in an experiment is generally more difficult than getting a random sample of folks to participate in a survey.

Volunteering can have side effects

In order to find subjects for their experiments, researchers often advertise for volunteers and offer them incentives such as money, free treatments, or follow-up care for their participation. Medical research on humans is complicated and difficult, but it's necessary in order to really know whether a treatment works, how well it works, what the dosage should be, and what the side effects are. In order to prescribe the right treatments in the right amounts in real-life situations, doctors and patients depend on these studies being representative of the general population. In order to recruit such representative subjects, researchers have to do a broad advertisement campaign and select enough participants with enough different characteristics to represent a cross-section of the populations of folks who will be prescribed these treatments in the future.

The U.S. National Institutes of Health has a Web site (http://www.clinicaltrials.gov) providing current information about clinical research studies.

Randomly assigning subjects to groups

After the sample has been selected, the researchers divide the research subjects into different groups. The subjects are assigned to one or more treatment groups, which receive various dosage levels of the drug or treatment being studied, and a control group, which receives either no treatment or a fake treatment.

Realizing the importance of random assignment

Suppose a researcher wants to determine the effects of exercise on heart rate. The subjects in his treatment group run five miles and have their heart rates measured before and after the run. The subjects in his control group will sit on the couch the whole time and watch reruns of The Simpsons. Which group would you rather be in? Some of the health nuts out there would no doubt volunteer for the treatment group. If you're not crazy about the idea of running five miles, you may opt for the easy way out and volunteer to be a couch potato. (Or maybe you hate The Simpsons so much that you'd even run five miles to avoid watching an episode.) What impact would this selective volunteering have on the results of the study? If only the health nuts (who probably already have excellent heart rates) volunteer to be in the treatment group, the researcher will be looking only at the effect of the treatment (running five miles) on very healthy and active people. He won't see the effect that running five miles has on the heart rates of couch potatoes. This non-random assignment of subjects to the treatment and control group could have a huge impact on the conclusions he draws from this study.

REMEMBER 

In order to avoid major bias in the results of an experiment, subjects must be randomly assigned to treatments and not allowed to choose which group they will be in. Keep this in mind when you evaluate the results of an experiment.
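Random assignment itself is simple to carry out; here's a minimal Python sketch (the subject names and the 50/50 split are just illustrative assumptions):

```python
import random

def randomize(subjects, seed=None):
    """Randomly split subjects into a treatment group and a control
    group, so nobody can self-select into the easier group."""
    rng = random.Random(seed)  # seeding makes the split reproducible
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical roster of six subjects:
treatment, control = randomize(["Ann", "Ben", "Cal", "Dee", "Eli", "Fay"],
                               seed=42)
```

Because every subject is equally likely to land in either group, health nuts and couch potatoes alike get spread across both groups, and any difference in heart rates can be pinned on the treatment rather than on who volunteered for what.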

Controlling for the placebo effect

A fake treatment takes into account what researchers call the placebo effect. The placebo effect is a response that people have (or think they're having) just because they believe they're getting some sort of "treatment" (even if that treatment is a fake treatment, such as sugar pills). See Chapter 3 for more on the placebo effect.

When you see an ad for a drug in a magazine, look for the fine print. Embedded there, you'll see a table that lists the side effects reported by a group who took the drug, compared to the side effects reported by the control group. If the control group is on a placebo, you may expect them not to report any side effects, but you would be wrong. Placebo groups often report side effects in percentages that seem quite high; this is because their minds are playing tricks on them and they're experiencing the placebo effect. If you want to be fair about examining the reported side effects of a treatment, you have to also take into account the side effects that the control group reports — side effects that are due to the placebo effect only.
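That fair comparison amounts to subtracting the placebo group's rate from the treatment group's rate. Here's a minimal Python sketch; the helper name and the fine-print counts are made up for illustration:

```python
def excess_rate(x_treat, n_treat, x_placebo, n_placebo):
    """Side-effect rate in the treatment group minus the rate in the
    placebo group -- roughly, the part attributable to the drug itself."""
    return x_treat / n_treat - x_placebo / n_placebo

# Hypothetical fine-print table: 24 of 150 people on the drug report
# headaches, but so do 15 of 150 on the sugar pill (the placebo effect
# at work).
extra = excess_rate(24, 150, 15, 150)  # 0.16 - 0.10 = 0.06
```

In this made-up example, the headline number "16% reported headaches" overstates the drug's effect; only about 6 percentage points of it go beyond what the sugar-pill group reported.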

HEADS UP 

In some situations, such as when the subjects have very serious diseases, offering a fake treatment as an option may be unethical. In 1997, the U.S. government was harshly criticized for financing an HIV study that examined dosage levels of AZT, a drug known at that time to cut the risk of HIV transmission from pregnant mothers to their babies by two-thirds. This particular study, in which 12,000 pregnant women with HIV in Africa, Thailand, and the Dominican Republic participated, had a deadly design. Researchers gave half of the women various dosages of AZT, but the other half of the women received sugar pills. Of course, had the U.S. government realized that a placebo was being given to half of the subjects, they wouldn't have supported the HIV study.

Here's the way these situations are supposed to be handled. When ethical reasons bar the use of fake treatments, the new treatment is compared to an existing or standard treatment that is known to be an effective treatment. After researchers have enough data to see that one of the treatments is working better than the other, they will generally stop the experiment and put everyone on the better treatment, again for ethical reasons.

REMEMBER 

When examining the results of an experiment, make sure the researchers compared a treatment group to a control group, in order to make sure the experiences of the treatment group go beyond what the control group experienced. The control group may receive either a fake treatment or a standard treatment, depending on the situation.

Controlling for confounding variables

Suppose you're participating in a research study that looks at factors influencing whether or not you catch a cold. If a researcher records only whether you got a cold after a certain period of time and asks questions about your behavior (how many times per day you washed your hands, how many hours of sleep you get each night, and so on), the researcher is conducting an observational study. The problem with this type of observational study is that without controlling for other factors that may have had an influence and without regulating which action you were taking when, the researcher won't be able to single out exactly which of your actions (if any) actually impacted the outcome.

The biggest limitation of observational studies is that they can't really show true cause-and-effect relationships, due to what statisticians call confounding variables. A confounding variable is a variable or factor that was not controlled for in the study, but can have an influence on the results.

For example, one news headline boasted, "Study links older mothers, long life." The opening paragraph said that women who have a first baby after age 40 have a much better chance of living to be 100, compared to women who have a first baby at an earlier age. When you get into the details of the study (done in 1996) you find out, first of all, that it was based on 78 women in suburban Boston who had lived to be at least 100, compared to 54 women who were born at the same time (1896), but died in 1969 (the earliest year the researchers could get computerized death records). This so-called "control group" lived to be exactly 73, no more and no less. Of the women who lived to be at least 100 years of age, 19% had given birth after age 40, whereas only 5.5% of the women who died at age 73 had given birth after age 40.

I have a real problem with these conclusions. What about the fact that the "control group" was based only on mothers who died in 1969 at age 73? What about all of the other mothers who died before age 73, or who died between the ages of 73 and 100? Maybe the control group (being so limited in scope) included women who had some sort of connection; maybe that connection caused many of them to die in the same year, and maybe that connection is further linked to why more of them had babies earlier in life. Who knows? What about other variables that may affect both mothers' ages at the births of their children and longer lifespans — variables such as financial status, marital stability, or other socio-economic factors? The women in this study were 33 years old during the Depression; this may have influenced both their life span and if or when they had children.

How do researchers handle confounding variables? The operative word is "control." They control for as many confounding variables as they can anticipate. In experiments involving human subjects, researchers have to battle many confounding variables. For example, in a study trying to determine the effect of different types and volumes of music on the amount of time grocery shoppers spend in the store (yes, they do think about that), researchers have to anticipate as many possible confounding variables as they can ahead of time, and then control for them. What other factors besides volume and type of music could influence the amount of time you spend in a grocery store? I can think of several factors: gender, age, time of day, whether I have children with me, how much money I have, the day of the week, how clean and inviting the store is, how nice the employees are, and (most importantly) what my motive is — am I shopping for the whole week, or am I just running in to grab a candy bar?

How can researchers begin to control for so many possible confounding factors? Some of them can be controlled for in the design of the study, such as the time of the day, day of the week, and reason for shopping. But other factors (such as the perception of the store environment) depend totally on the individual in the study. The ultimate form of control for those person-specific confounding variables is to use pairs of people that are matched according to important variables, or to just use the same person twice: once with the treatment and once without. This type of experiment is called a matched-pairs design. (See Chapter 14 for more on this.)

HEADS UP 

Before believing any medical headlines (or any headlines for that matter), look to see how the study was conducted. Observational studies can't control for confounding variables, so their results are not as statistically meaningful (no matter what the statistics say) as the results of a well-designed experiment. In cases where an experiment can't be done (after all, no one can force you to have a baby after or before age 40), make sure the observational study is based on a large enough sample that represents a cross-section of the population.
