Double-blinding the experiment

Well-designed experiments are done in a double-blind fashion. Double-blind means that neither the subjects nor the researchers know who got what treatment or who is in the control group. The research subjects need to be oblivious to which treatment they're getting so that the researchers can measure the placebo effect. But why shouldn't the researcher know who got what treatment? So that she doesn't treat subjects differently by expecting (or not expecting) certain responses from certain groups. For example, if a researcher knows you're in the treatment group to study the side effects of a new drug, she may expect you to get sick and pay more attention to you than if she knew you were in the control group. This can result in biased data and misleading results.

If the researcher knows who got what treatment but the subjects don't know, the study is called a blind study (rather than a double-blind study). Blind studies are better than nothing, but double-blind studies are best. In case you're wondering: In a double-blind study, does anyone know which treatment was given to which subjects? Relax; typically a third-party lab assistant handles that part.
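To make the third-party arrangement concrete, here's a minimal sketch (my own illustration, not from the book) of how a lab assistant might set up blinded assignment in Python: subjects are randomized into two groups, the assignment key is stored where only the third party can see it, and everyone else works from coded IDs alone. The group labels, file name, and subject count are assumptions made up for the example.

```python
import csv
import random

# Minimal sketch of third-party blinding for a two-group experiment.
# Subject count, group names, and file name are illustrative assumptions.
subjects = [f"S{i:03d}" for i in range(1, 21)]  # 20 coded subject IDs
random.shuffle(subjects)

half = len(subjects) // 2
key = {sid: ("treatment" if i < half else "control")
       for i, sid in enumerate(subjects)}

# Only the third party keeps this file; researchers and subjects never see it.
with open("blinding_key.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["subject_id", "group"])
    for sid, group in sorted(key.items()):
        writer.writerow([sid, group])

# Researchers receive only the coded IDs, with no group labels attached.
print(sorted(key))
```

The design point is simply that the randomization and the key live with someone who isn't measuring outcomes, so neither subjects nor researchers can be influenced by knowing the assignments.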

REMEMBER 

When analyzing an experiment, look to see whether it was a double-blind study. If not, the results can be biased.

Collecting good data

What constitutes "good" data? Statisticians use three criteria for evaluating data quality; each criterion relates most strongly to the quality of the measurement instrument used in the process of collecting the data.

To decide whether you're looking at good data from an experiment, look for these characteristics (a small simulation following this list illustrates the first two):

  • Reliable (you can get repeatable results with subsequent measurements):
    Many bathroom scales give unreliable data. You get on the scale, and it gives you one number. You don't believe the number, so you get off, get back on, and get a different number. (If the second number is lower, you'll most likely quit at this point; if not, you may continue getting on and off until you see a number you like.)

    The point is, unreliable data come from unreliable measurement instruments. These instruments can go beyond scales to more intangible measurement instruments, like survey questions, which can give unreliable results if they're written in an ambiguous way (see Chapter 16 for more on this).

    HEADS UP 

    Find out how the data were collected when examining the results of an experiment. If the measurements are unreliable, the data could be inaccurate.

  • Unbiased (the data contain no systematic errors that either add to or subtract from the true values):
    Biased data systematically overmeasure or undermeasure the true result. Bias can occur almost anywhere during the design or implementation of a study. It can be caused by a bad measurement instrument (like a bathroom scale that's "always" five pounds over), by survey questions that lead participants in a certain way, or by researchers who know what treatment each subject received and have preconceived expectations.

    Bias is probably the number-one problem in data. And worse yet, it can't really be measured. (For example, the margin of error doesn't measure bias. See Chapter 10 for details on the margin of error.) However, steps can be taken to minimize bias, as discussed in Chapter 16 and in the "Randomly assigning subjects to groups" section earlier in this chapter.

    HEADS UP 

    Be aware of the many ways that bias can come into play during the design or implementation of any study, and evaluate the study with an eye toward detecting bias. If a study contains a great deal of bias, you have to ignore the results.

  • Valid (the data measure what they're supposed to measure):
    Checking the validity of data requires you to step back and look at the big picture. You have to ask the question: Do these data measure what they should be measuring? Or should the researchers have been collecting altogether different data? The appropriateness of the measurement instrument used is important. For example, asking students to report their high school math grades may not be a valid measure of actual grades. A more valid measure would be to look at each student's transcript. Measuring the prevalence of crime by the number of crimes is not valid; the crime rate (number of crimes per capita) should be used.

    HEADS UP 

    Before accepting the results of an experiment, find out what data were measured and how they were measured. Be sure the researchers are collecting data that are appropriate for the goals of the study.
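Here's a minimal simulation (my own sketch, not from the book) that makes the first two criteria concrete. Repeated readings from two imaginary bathroom scales show how the spread of readings reflects reliability, while a systematic offset, like a scale that's "always" five pounds over, shows up as bias in the average. The true weight, offset, and noise levels are made-up values for illustration.

```python
import random
import statistics

random.seed(1)
TRUE_WEIGHT = 150.0  # pounds; made-up value for the example

def weigh(true_weight, bias, noise_sd):
    """One reading from a scale with a systematic offset plus random noise."""
    return true_weight + bias + random.gauss(0, noise_sd)

# Scale A: reliable (low noise) but biased ("always" five pounds over).
# Scale B: unbiased on average but unreliable (high noise).
scale_a = [weigh(TRUE_WEIGHT, bias=5.0, noise_sd=0.2) for _ in range(100)]
scale_b = [weigh(TRUE_WEIGHT, bias=0.0, noise_sd=4.0) for _ in range(100)]

for name, readings in [("A (biased, reliable)", scale_a),
                       ("B (unbiased, unreliable)", scale_b)]:
    mean_error = statistics.mean(readings) - TRUE_WEIGHT  # estimates bias
    spread = statistics.stdev(readings)  # small spread = reliable readings
    print(f"Scale {name}: mean error {mean_error:+.2f} lb, "
          f"spread {spread:.2f} lb")
```

Notice that no amount of averaging fixes Scale A's five-pound bias, which is exactly why bias is more dangerous than unreliability.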

Analyzing the data properly

After the data have been collected, they're put into that mysterious box called the statistical analysis. The choice of analysis is just as important (in terms of the quality of the results) as any other aspect of a study. A proper analysis should be planned in advance, during the design phase of the experiment. That way, after the data are collected, you won't run into any major problems during the analysis.

Here's the bottom line when selecting the proper analysis. Ask yourself, "After the data are analyzed, will I be able to answer the question that I set out to answer?" If the answer is "no," that analysis isn't appropriate.

The basic types of statistical analyses include confidence intervals (used when you're trying to estimate a population value, or the difference between two population values); hypothesis tests (used when you want to test a claim about one or two populations, such as the claim that one drug is more effective than another); and correlation and regression analyses (used when you want to show if and/or how one variable can predict or cause changes in another variable). See Chapters 13, 15, and 18, respectively, for more on each of these types of analyses.
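As a rough illustration (my own sketch, not from the book), here's what each of these three analyses looks like in Python using the SciPy library; all of the data are made up for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=100, scale=15, size=30)  # made-up measurements

# 1. Confidence interval: estimate a population mean from the sample.
ci = stats.t.interval(0.95, len(sample) - 1,
                      loc=sample.mean(), scale=stats.sem(sample))
print("95% confidence interval for the mean:", ci)

# 2. Hypothesis test: test the claim that the population mean is 100.
t_stat, p_value = stats.ttest_1samp(sample, popmean=100)
print("one-sample t-test p-value:", p_value)

# 3. Correlation and regression: does x predict y?
x = rng.uniform(0, 10, size=30)
y = 2 * x + rng.normal(0, 2, size=30)  # y depends on x, plus noise
fit = stats.linregress(x, y)
print("slope:", fit.slope, "correlation r:", fit.rvalue)
```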

TECHNICAL STUFF 

When choosing how you're going to analyze your data, you have to make sure that the data and your analysis will be compatible. For example, if you want to compare a treatment group to a control group in terms of the amount of weight lost on a new (versus an existing) diet program, you need to collect data on how much weight each person lost (not just each person's weight at the end of the study).
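For instance, here's a minimal sketch (my own, with made-up numbers) of the compatible version of that analysis: compute the weight each person lost, then compare the two groups on those losses.

```python
import numpy as np
from scipy import stats

# Made-up start and end weights (pounds) for each person in each group.
treatment_start = np.array([210, 195, 188, 220, 205, 198])
treatment_end = np.array([198, 186, 180, 205, 196, 190])
control_start = np.array([208, 192, 185, 218, 200, 196])
control_end = np.array([204, 190, 182, 214, 198, 193])

# The analysis needs weight LOST per person, not just each final weight.
treatment_loss = treatment_start - treatment_end
control_loss = control_start - control_end

t_stat, p_value = stats.ttest_ind(treatment_loss, control_loss)
print("mean loss (treatment):", treatment_loss.mean())
print("mean loss (control):", control_loss.mean())
print("two-sample t-test p-value:", p_value)
```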

Drawing appropriate conclusions

In my opinion, the biggest mistakes researchers make when drawing conclusions about their studies are:

  • Overstating their results

  • Making connections or giving explanations that aren't backed up by the statistics

  • Going beyond the scope of the study in terms of who the results apply to

Each of these problems is discussed in the following sections.

Overstating the results

Many times, the headlines in the media will overstate actual research results. When you read a headline or otherwise hear about a study, be sure to look further to find out the details of how the study was done and exactly what the conclusions were.

Press releases often overstate results, too. For example, in a recent press release by the National Institute on Drug Abuse, the researchers claimed that Ecstasy use was down in 2002 from 2001. However, when you look at the actual statistical results in the report, you find that the percentage of teens in the sample who said they'd used Ecstasy was lower in 2002 than it was in 2001, but this difference was not found to be statistically significant (even though many other results were found to be statistically significant). This means that although fewer teens in the sample used Ecstasy in 2002, the difference wasn't large enough to rule out chance variability from sample to sample. (See Chapter 14 for more about statistical significance.)
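To see how a claim like that gets checked, here's a two-proportion z-test sketch. The report's actual sample sizes and counts aren't given here, so the numbers below are invented purely for illustration.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical counts, NOT the actual survey figures: suppose 70 of 2,000
# teens sampled in 2001 and 60 of 2,000 teens sampled in 2002 reported use.
x1, n1 = 70, 2000  # 2001
x2, n2 = 60, 2000  # 2002

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under "no change"
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided test

print(f"z = {z:.2f}, p-value = {p_value:.3f}")
# With these made-up numbers, the p-value is well above 0.05, so the drop
# is within chance variability: lower in the sample, but not significant.
```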

Headlines and leading paragraphs in press releases and newspaper articles often overstate the actual results of a study. Big results, spectacular findings, and major breakthroughs are what make the news these days, and reporters and others in the media are constantly pushing the envelope in terms of what is and is not newsworthy. How can you sort out the truth from exaggeration? The best thing to do is to read the fine print.

Taking the results one step beyond the actual data

A study that links having children later in life to longer lifespans illustrates another point about research results. Do the results of this observational study mean that having a baby later in life can make you live longer? "No," said the researchers. Their explanation of the results was that having a baby later in life may be due to women having a "slower" biological clock, which presumably would then result in the aging process being slowed down.

My question to these researchers is, "Then why didn't you study that, instead of just looking at their ages?" I don't see any data in this study that would lead me to conclude that women who had children after age 40 aged at a slower rate than other women, so in my view, the researchers shouldn't draw that conclusion yet. Or the researchers should state clearly that this view is only a theory and requires further study. Based on the data in this study, the researchers' theory seems like a leap of faith (although as a 41-year-old new mom, I'll hope for the best!).

HEADS UP 

Frequently, in a press release or news article, the researcher will give an explanation about why he thinks the results of the study turned out the way they did and what implications these results have for society as a whole. (These explanations may have been in response to a reporter's questions about the research, questions that were later edited out of the story, leaving only the juicy quotes from the researcher.) Many of these after-the-fact explanations are no more than theories that have yet to be tested. In such cases, you should be wary of conclusions, explanations, or links drawn by researchers that aren't backed up by their studies.

Generalizing results to people beyond the scope of the study

You can make conclusions only about the population that's represented by your sample. If you sample men only, you can't make conclusions about women. If you sample healthy young people, you can't draw conclusions about everyone. But many researchers try to do just that, a common practice that can give misleading results. Watch out for this one!

Here's how you can determine whether a researcher's conclusions measure up:

  • Find out what the target population is (that is, the group that the researcher wants to make conclusions about).

  • Find out how the sample was selected and see whether the sample is representative of that target population (and not some more narrowly defined population).

  • Check the conclusions made by the researchers; make sure they're not trying to apply their results to a broader population than they actually studied.

 

Making Informed Decisions about Experiments

Just because someone says they conducted a "scientific study" or a "scientific experiment" doesn't mean it was done right or that the results are credible. Unfortunately, I've come across a lot of bad experiments in my days as a statistical consultant. The worst part is that if an experiment was done poorly, you can't do anything about it after the fact except ignore the results — and that's exactly what you need to do.

Here are some tips that help you make an informed decision about whether to believe the results of an experiment, especially one whose results are very important to you.

  • When you first hear or see the result, grab a pencil and write down as much as you can about what you heard or read, where you heard or read it, who did the research, and what the main results were. (I keep pencil and paper in my TV room and in my purse just for this purpose.)

  • Follow up on your sources until you find the person who did the original research, and then ask them for a copy of the report or paper.

  • Go through the report and evaluate the experiment according to the eight steps for a good experiment described in the "Designing a Good Experiment" section of this chapter. (You really don't have to understand everything written in a report in order to do that.)

  • Carefully scrutinize the conclusions that the researcher makes regarding his or her findings. Many researchers tend to overstate results, make conclusions beyond the statistical evidence, or try to apply their results to a broader population than the one they studied.

  • Never be afraid to ask questions of the media, the researchers, and even your own experts. For example, if you have a question about a medical study, ask your doctor. He or she will be glad that you're an empowered and well-informed patient!

 

Chapter 18: Looking for Links—Correlations and Associations

Overview

Everyone seems to want to tell you about the latest relationships, correlations, associations, or links they've found. Many of these links come from medical research, as you may expect. The job of medical researchers is to tell you what you should and shouldn't be doing in order to live a longer and healthier life.

Here are some recent news releases provided by the National Institutes of Health (NIH):

  • Sedentary activities (like TV watching) are associated with an increase in obesity and an increase in the risk of diabetes in women.

  • Anger expression may be inversely related to the risk of heart attack and stroke. (Those who express anger have a decreased risk.)

  • Light-to-moderate drinking reduces the risk of heart disease in men.

  • Immediate treatment helps delay the progression of glaucoma.

Reporters love to tell people about the latest links, because these stories can make big news. Some of the recommendations seem to change from time to time, though; for example, one minute, zinc is recommended to prevent colds, and the next minute it's "out in the cold." Many of the relationships you see in the media are touted as cause-and-effect relationships, but can you believe these reports? (For example, does having her first baby later in life cause a woman to live longer?) Are you so skeptical that you just don't believe much of anything anymore?

If you're a confused information consumer when you hear about links and correlations, take heart; this chapter can help. You discover what it truly means for two factors to be correlated, associated, or have a cause-and-effect relationship, and when and how to make predictions based on those relationships. You also gain the skills to dissect and evaluate research claims and to make your own decisions about those headlines and sound bites alerting you to the latest correlation.
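As a preview of what "correlated" means in practice, here's a minimal sketch (my own, with made-up numbers) that computes a correlation coefficient in Python; the rest of this chapter explains how to interpret numbers like these.

```python
import numpy as np
from scipy.stats import pearsonr

# Made-up data: weekly hours of TV watching and a health-risk score.
tv_hours = np.array([2, 5, 8, 10, 14, 20, 25, 30])
risk_score = np.array([1.1, 1.3, 1.8, 2.0, 2.6, 3.1, 3.9, 4.2])

r, p_value = pearsonr(tv_hours, risk_score)
print(f"correlation r = {r:.2f}, p-value = {p_value:.4f}")
# A strong positive r means the two variables move together; by itself it
# does NOT show that TV watching causes the higher risk score.
```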
