Statistics Essentials For Dummies

Author: Deborah Rumsey

Saying that "these results aren't scientific, but . . ." and then presenting the results as if they are scientific

 

To avoid common errors made when drawing conclusions:

1. Check whether the sample was selected properly and that the conclusions don't go beyond the population represented by that sample.

 

2. Look for disclaimers about surveys before reading the results, if you can.

 

That way, you'll be less likely to be influenced by the results if, in fact, the results aren't based on a scientific survey. Now that you know what a scientific survey (the media's term for an accurate and unbiased survey) actually involves, you can use those criteria to judge whether survey results are credible.

 

3. Be on the lookout for statistically incorrect conclusions.

 

If someone reports a difference between two groups based on survey results, be sure the difference is larger than the reported margin of error. If the difference is within the margin of error, you should expect the sample results to vary by that much just by chance, and the so-called "difference" can't really be generalized to the entire population; see Chapter 7.

 

4. Tune out anyone who says, "These results aren't scientific, but. . . ."

 

Know the limitations of any survey and be wary of any information coming from surveys in which those limitations aren't respected. A bad survey is cheap and easy to do, but you get what you pay for. Before looking at the results of any survey, investigate how it was designed and conducted, so that you can judge the quality of the results.
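The margin-of-error check in step 3 boils down to simple arithmetic. Here's a minimal sketch of the rule of thumb as stated above, using made-up poll numbers for illustration:

```python
# Rule-of-thumb check from step 3: a reported difference between two
# groups is only meaningful if it exceeds the survey's reported
# margin of error. All numbers here are invented for illustration.

def difference_is_meaningful(pct_a, pct_b, margin_of_error):
    """Return True if the gap between two reported percentages
    exceeds the survey's reported margin of error."""
    return abs(pct_a - pct_b) > margin_of_error

# Candidate A at 48%, candidate B at 45%, margin of error +/- 3 points:
print(difference_is_meaningful(48, 45, 3))  # gap of 3 does NOT exceed 3 -> False
print(difference_is_meaningful(48, 44, 3))  # gap of 4 exceeds 3 -> True
```

If the gap is within the margin of error, the "difference" could easily be chance variation in the sample, so it shouldn't be generalized to the population.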

Chapter 13: A Checklist for Judging Experiments

In This Chapter

The added value of experiments

Criteria for a good experiment

Action items for evaluating an experiment

In this chapter, you go behind the scenes of experiments — the driving force of medical studies and other investigations in which comparisons are made. You find out the difference between experiments and observational studies and discover what experiments can do for you, how they're supposed to be done, and how you can spot misleading results.

Experiments versus Observational Studies

Although many different types of studies exist, you can boil them all down to basically two different types: experiments and observational studies. An observational study is just what it sounds like: a study in which the researcher merely observes the subjects and records the information. No intervention takes place, no changes are introduced, and no restrictions or controls are imposed. For example, a survey is an observational study. An experiment is a study that doesn't simply observe subjects in their natural state, but deliberately applies treatments to them in a controlled situation and records the outcomes (for example, medical studies done in a laboratory). Experiments are generally more powerful than observational studies; for example, an experiment can identify a cause-and-effect relationship between two variables, whereas an observational study can only point out a connection.

Criteria for a Good Experiment

To decide whether an experiment is credible, check the following items:

1. Is the sample size large enough to yield precise results?

 

2. Do the subjects accurately represent the intended population?

 

3. Are the subjects randomly assigned to the treatment and control groups?

 

4. Was the placebo effect measured (if applicable)?

 

5. Are possible confounding variables controlled for?

 

6. Is the potential for bias minimized?

 

7. Was the data analyzed correctly?

 

8. Are the conclusions appropriate?

 

In the following sections I present action items for evaluating an experiment based on each of the above criteria.

Inspect the Sample Size

The size of a sample greatly affects the accuracy of the results. The larger the sample size, the more accurate the results are, and the more powerful the statistical analysis will be at detecting real differences due to treatments.
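The link between sample size and precision can be seen with the standard formula for a proportion's margin of error, which shrinks like one over the square root of the sample size (this sketch assumes 95% confidence and the worst-case proportion of 0.5):

```python
import math

# Why bigger samples give more precise results: the margin of error
# for a sample proportion shrinks roughly like 1/sqrt(n).
# Assumes 95% confidence (z = 1.96) and worst-case p = 0.5.

def margin_of_error(n, z=1.96, p=0.5):
    """Approximate 95% margin of error for a sample proportion of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1600):
    print(n, round(margin_of_error(n), 3))
# n = 100  -> about 0.098 (roughly +/- 10 points)
# n = 400  -> about 0.049 (roughly +/- 5 points)
# n = 1600 -> about 0.025 (roughly +/- 2.5 points)
```

Notice that quadrupling the sample size only cuts the margin of error in half, which is why precision gets expensive fast.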

Small samples — small conclusions

You may be surprised at the number of research headlines that were based on very small samples. If the results are important to you, ask for a copy of the research report and find out how many subjects were involved in the study.

Also be wary of research that finds significant results based on very small sample sizes (especially those much smaller than 30). It could be a sign of what statisticians call data fishing, where someone fishes around in their data set using many different kinds of analyses until they find a significant result (which is not repeatable because it was just a fluke).
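You can see why data fishing produces flukes with a toy simulation: run many tests on pure noise and, at the usual 5% significance level, roughly 5% of them come up "significant" by chance alone, even though no real effect exists in any of them.

```python
import random

# Toy illustration of data fishing: test pure noise many times and
# count how often the result looks "significant" at the 5% level.
# No real effect exists anywhere in this simulation.

random.seed(42)  # fixed seed so the run is repeatable

def fake_p_value():
    # Under the null hypothesis (no real effect), p-values are
    # uniformly distributed between 0 and 1.
    return random.random()

num_tests = 1000
false_positives = sum(1 for _ in range(num_tests) if fake_p_value() < 0.05)
print(false_positives)  # expect roughly 50 of the 1000 tests to look "significant"
```

A researcher who runs dozens of analyses and reports only the one that "worked" is, in effect, publishing one of these flukes.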

Original versus final sample size

Be specific about what a researcher means by sample size. For example, ask how many subjects were selected to participate in an experiment and then ask for the number who actually completed the experiment — these two numbers can be very different. Make sure the researchers can explain any situations in which the research subjects decided to drop out or were unable (for some reason) to finish the experiment.

An article in the New York Times entitled "Marijuana Is Called an Effective Relief in Cancer Therapy" says in the opening paragraph that marijuana is "far more effective" than any other drug in relieving the side effects of chemotherapy. When you get into the details, you find out that the results are based on only 29 patients (15 on the treatment, 14 on a placebo). To add to the confusion, you find out that only 12 of the 15 patients in the treatment group actually completed the study; so what happened to the other three subjects?

Examine the Subjects

An important step in designing an experiment is selecting the sample of participants, called the research subjects. Although researchers would like for their subjects to be selected randomly from their respective populations, in most cases this just isn't possible. For example, suppose a group of eye researchers wants to test out a new laser surgery on nearsighted people. To select their subjects, they randomly select various eye doctors from across the country and randomly select nearsighted patients from these doctors' files. They call up each person selected and say, "We're experimenting with a new laser surgery treatment for nearsightedness, and you've been selected at random to participate in our study. When can you come in for the surgery?" This may sound like a good random sampling plan, but it doesn't make for an ethical experiment.

The point is, getting a truly random sample of people to participate in an experiment would be great, but is typically not feasible or ethical to do. Rather than select people at random, experimenters do the best they can to gather volunteers that meet certain criteria so they're doing the experiment on an appropriate cross-section of the population. The randomness part comes in when individuals are assigned to the groups (treatment group, control group, and so forth) in a random fashion, as explained in the next section.

Check for Random Assignments

After the sample has been selected, the subjects are assigned to either a treatment group, which receives a certain level of some factor being studied, or a control group, which receives either no treatment or a fake treatment. How the subjects are assigned to their respective groups is extremely important.

Suppose a researcher wants to determine the effects of exercise on heart rate. The subjects in his treatment group run five miles and have their heart rates measured before and after the run. The subjects in his control group will sit on the couch the whole time and watch reruns of The Simpsons. If only the health nuts (who probably already have excellent heart rates) volunteer to be in the treatment group, the researcher will be looking only at the effect of the treatment (running five miles) on very healthy and active people. He won't see the effect that running five miles has on the heart rates of couch potatoes. This nonrandom assignment of subjects to the treatment and control groups can have a huge impact on his conclusions.

To avoid bias, subjects must be assigned to treatment/control groups at random. This results in groups that are more likely to be fair and balanced, yielding more credible results.
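Random assignment is simple to carry out in practice: shuffle the volunteer list and split it, so that nothing about the subjects (fitness level, motivation, and so on) drives who ends up in which group. A minimal sketch, with placeholder subject IDs:

```python
import random

# Minimal sketch of random assignment: shuffle the volunteers, then
# split the shuffled list in half. Subject IDs are placeholders.

random.seed(1)  # fixed seed so the example is repeatable

subjects = ["s01", "s02", "s03", "s04", "s05", "s06", "s07", "s08"]
random.shuffle(subjects)                      # randomize the order in place

half = len(subjects) // 2
treatment = subjects[:half]                   # these subjects run five miles
control = subjects[half:]                     # these subjects stay on the couch

print("treatment:", treatment)
print("control:  ", control)
```

Because the split depends only on the shuffle, each volunteer is equally likely to land in either group, which is exactly what makes the groups fair to compare.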

Gauge the Placebo Effect

A fake treatment takes into account what researchers call the placebo effect. The placebo effect is a response that people have (or think they're having) because they know they're getting some sort of "treatment" (even if that treatment is a fake treatment, aka placebo, such as sugar pills).

If the control group is on a placebo, you may expect them not to report any side effects, but you would be wrong. Placebo groups often report side effects in percentages that seem quite high; this is because the knowledge that some treatment is being taken (even if it's a fake treatment) can have a psychological (even a physical) effect. If you want to be fair about examining the side effects of a treatment, you have to take into account the side effects that the control group reports; that is, side effects that are due to the placebo effect.
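The fair comparison described above is just a subtraction: the side-effect rate attributable to the drug itself is roughly the treatment group's rate minus the placebo group's rate. A quick sketch with invented rates:

```python
# Sketch of adjusting for the placebo effect. All counts are invented
# for illustration; real studies report their own rates.

treated_with_headache = 30   # out of 100 subjects on the drug
placebo_with_headache = 18   # out of 100 subjects on the placebo

drug_rate = treated_with_headache / 100      # 30% report headaches
placebo_rate = placebo_with_headache / 100   # 18% report them anyway

# The excess over the placebo group is a rough estimate of the
# side effects actually attributable to the drug itself.
excess = drug_rate - placebo_rate
print(f"attributable to drug: {excess:.0%}")  # prints "attributable to drug: 12%"
```

Reporting the raw 30% without the placebo baseline would make the drug look worse than it is; the placebo group's 18% is the background noise you have to subtract out.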

In some situations, such as when the subjects have very serious diseases, offering a fake treatment as an option may be unethical. When ethical reasons bar the use of fake treatments, the new treatment is compared to an existing or standard treatment that is known to be effective. After researchers have enough data to see that one of the treatments is working better than the other, they will generally stop the experiment and put everyone on the better treatment, again for ethical reasons.
