Statistics for Dummies

Authors: Deborah Jean Rumsey
Counting on the sample size

Sample size isn't everything, but it does count for a great deal in terms of surveys and studies. If the study is designed and conducted correctly, and if the participants are selected randomly (that is, with no bias; see Chapter 3 for more on random samples), sample size is an important factor in determining the accuracy and repeatability of the results. (See Chapters 16 and 17 for more information on designing and carrying out studies.)

You may think that all studies are based on large numbers of participants. This is true for most surveys, but it isn't always true for other types of research, such as studies involving carefully controlled experiments. Experiments can be very time consuming; sometimes they take months or years to conduct in a variety of situations. Experimental studies can also be costly. Some experiments involve examining not people but products, such as computer chips or military equipment costing thousands or even millions of dollars. If the experiment involves destroying the product in the process of testing it, the cost of each experiment can be quite high. Because of the high cost of some types of research, some studies are based on a small number of participants or products. But fewer participants in a study (or fewer products tested) means less information overall, so studies with small numbers of participants (or products) are generally less accurate than similar studies with larger sample sizes.

Most researchers try to include the largest sample size they can afford, and they balance the cost of the sample size with the need for accuracy. Sometimes, though, people are just lazy and don't want to bother with a large enough sample. Sometimes, those researchers don't really understand the ramifications of having a small sample. And some folks hope you won't understand the importance of sample size, but now you do.

HEADS UP 

The worst examples of woefully inadequate sample size I've seen are TV ads where the sample size is only one. Usually, these commercials present what look like experiments to try to persuade the viewers that one product is superior to another. You've probably seen the TV commercial pitting one paper towel brand against another, where one piece of each type of paper towel is used to try to absorb the same amount of red juice. These examples may sound silly, but anyone can easily fall into the trap of drawing conclusions based on a sample size of one. (Have you ever told someone not to buy a product because you had a bad experience with it?) Remember that an anecdote (or story) is really a study with a sample size of only one.

REMEMBER 

Check the sample size to be sure the researchers have enough information on which to base their results. The margin of error (see Chapter 10) also gives you an idea of the sample size, because a small margin of error most likely means that a large sample was used.
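The link between sample size and margin of error can be sketched with a quick calculation. Assuming the common conservative approximation for a 95% margin of error on a proportion, roughly 1 divided by the square root of the sample size (the function name below is mine, not from the book):

```python
import math

def approx_margin_of_error(n):
    """Conservative 95% margin of error for a sample proportion: 1 / sqrt(n)."""
    return 1 / math.sqrt(n)

# Quadrupling the sample size only halves the margin of error.
for n in (100, 400, 1600):
    print(f"n = {n:4d}  ->  margin of error ~ {approx_margin_of_error(n):.1%}")
```

Note the diminishing returns: going from 100 to 1,600 participants shrinks the margin of error only from about 10% to about 2.5%, which is one reason researchers balance sample size against cost.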

Your doctor's time: Quantity or quality?

Headlines are the media's bread and butter, but headlines can also be misleading. Oftentimes, the headlines are more grandiose than the "real" information, especially when the stories involve statistics and the studies that generated the statistics. In fact, you'll often see a real gap between a headline and the "fine print" in such media stories.

A study conducted a few years back evaluated videotaped sessions of 1,265 patient appointments with 59 primary-care physicians and 6 surgeons in Colorado and Oregon. This study found that physicians who had not been sued for malpractice spent an average of 18 minutes with each patient, compared to 16 minutes for physicians who had been sued for malpractice.

Wow, is two minutes really that important? When the media reported the study with the headline "Bedside manner fends off malpractice suits," the story seemed to say that if you're a doctor who gets sued, all you have to do is spend more time with your patients, and you're off the hook.

What's really going on? Am I supposed to believe that a doctor who has been sued need only add a couple more minutes of time with each patient, and he can avoid being sued? Think about some of the other possibilities that may be involved here. It could be the case that doctors who don't get sued are just better doctors, ask more questions, listen more, and tell their patients more of what to expect, thereby taking more time; if so, what a doctor does during that time counts much more than how much time the doctor actually spends with each patient. But what about this possibility: Maybe the doctors who get sued are doing more difficult types of operations, or maybe they're specialists of some kind. Unfortunately, the article doesn't give you this information. Another possibility is that the doctors who don't get sued have fewer patients and, therefore, are able to spend more time with each patient and keep better track of them. At any rate, the fine print here doesn't quite match the hype, and when you read or hear about stories like these, watch out for similar gaps between what the headline claims and what the study actually found.

Reporting beyond the scope

You may wonder how political candidates find out about how their constituents feel. They conduct polls and surveys. Many surveys are done by an independent group, such as The Gallup Organization; others are done by representatives of the candidates themselves, and their methods can differ greatly from candidate to candidate, or from survey to survey.

In the 1992 presidential election, Ross Perot made quite a splash on the political scene. His group, United We Stand America, gained momentum and, ultimately, Ross Perot and his supporters made an impact on the election results. Often during debates and campaign speeches, Perot would give statistics and make conclusions about how Americans felt about certain issues. But was Mr. Perot always clear on how "Americans" felt, or was he simply clear about how his supporters felt? One of the vehicles Ross Perot used to get a handle on the opinions of Americans was to publish a questionnaire in the March 21, 1992, TV Guide, asking people to fill it out and send it to the address provided. Then he compiled the results of the survey and made them part of his campaign platform. From these results, he concluded that over 80% of the American people agreed with him on these issues. (Note, however, that he received only 18.91% of the vote in 1992.)

Part of the trouble with Mr. Perot's claims is the way the survey was conducted. In order to respond, you had to purchase the TV Guide, you had to have strong enough feelings about the survey to fill it out, and you had to mail it in yourself with your own stamp. Who is most likely to do this? Those who have strong opinions. In addition, the wording of the questions in this survey probably encouraged the people who agreed with Ross Perot to fill out the survey and send it in; those who didn't agree with him were more likely to ignore the survey.

Tip 

If you can tell, based on the wording of the question, how the researcher wants you to respond to it, you know you're looking at a leading question. (See Chapter 16 for more information on how to spot this and other problems with surveys.)

Here is a sampling of some questions Mr. Perot used in his questionnaire. I paraphrased them, but the original intent is intact. (And this is not to pick on Mr. Perot; many political candidates and their supporters do the same type of thing.)

  • Should the line-item veto be able to be used by the president to eliminate waste?

  • Should Congress exclude itself from legislation it passes for us?

  • Should major new programs be first presented to the American people in detail?

The opinions of the people who knew about the survey and chose to participate in it were more likely to be those who agreed with Mr. Perot. This is one example where the conclusions of a study went beyond the scope of the study, because the results didn't represent the opinions of "all Americans" as some voters were led to believe. How can you get the opinions of all Americans? You need to conduct a well-designed and well-implemented survey based on a random sample of individuals. (See Chapter 16 for more information about conducting a survey.)

HEADS UP 

When examining the conclusions of any study, look closely at both the group that was actually studied (or the group that actually participated) and the larger group of people (or lab mice, or fleas, depending on the study) that the studied group is supposed to represent. Then look at the conclusions that are made. See whether they match. If not, make sure you understand what the real conclusions are, and be realistic about the claims being made before you make any decisions for yourself.

Looking for lies in all the right places

You've seen examples of honest errors that lead to problems and how stretching, inflating, or exaggerating the truth can lead to trouble. Occasionally, you may also encounter situations in which statistics are simply made up, fabricated, or faked. This doesn't happen very often, thanks to peer-reviewed journals, oversight committees, and government rules and regulations.

But every once in a while, you hear about someone who faked his or her data, or "fudged the numbers." Probably the most common lie involving statistics and data occurs when people throw out data that don't fit their hypothesis, don't fit the pattern, or appear to be "outliers." In cases when someone has clearly made an error (for example, someone's age is recorded as being 200 years), it makes sense to try to clean up the data by either removing that erroneous data point or by trying to correct the error. But just because the data don't go your way, you can't throw out some portion of them. Eliminating data (except in the case of a documented error) is ethically wrong; yet, it happens.

Regarding missing data from experiments, a commonly used phrase is "Among those who completed the study. . . ." What about those who didn't complete the study, especially a medical one? Did they die? Did they get tired of the side effects of the experimental drug and quit? Did they feel pressure to give certain answers or to conform to the researcher's hypothesis? Did they get frustrated with the length of the study, feel they weren't getting any better, and give up?

Not everyone responds to surveys, and even people who generally try to take part in surveys sometimes find that they don't have the time or interest to respond to every single survey that they're bombarded with. American society today is survey-crazy, and hardly a month goes by when you aren't asked to do a phone survey, an Internet survey, or a mail survey on topics ranging from your product preferences to your opinion on the new dog-barking ordinance for the neighborhood. Survey results are only reported for the people who actually responded, and the opinions of those who chose to respond may be very different from the opinions of those who chose not to respond. Whether the researchers make a point of telling you this, though, is another matter.

For example, someone can say that he or she sent out 5,000 surveys, received 1,000 surveys back, and based the results on those responses. You may then think, "Wow, 1,000 responses. That's a lot of data; that must be a pretty accurate survey." Wrong. The problem is, 4,000 people who were selected to participate in the survey chose not to, and you have no idea what they would have said if they had responded. You have no guarantee that the opinions of these 4,000 people are represented by the folks who responded. In fact, the opposite could be true.

Tip 

What constitutes a high response rate (that is, the number of respondents divided by the number of surveys sent out)? Some statisticians would settle for nothing less than 70%, but as TV's Dr. Phil would say, statisticians need to "get real." Rarely does a survey achieve that high a response rate. But in general, the lower the response rate, the less credible the results and the more the results will favor the opinions of those who responded. (And keep in mind that respondents tend to have stronger opinions about the issues they choose to respond to.)
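The definition above is simple arithmetic, and applying it to the hypothetical 5,000-survey example makes the problem concrete (the function name here is mine, used only for illustration):

```python
def response_rate(responses_received, surveys_sent):
    """Response rate = number of respondents / number of surveys sent out."""
    return responses_received / surveys_sent

# 1,000 responses out of 5,000 surveys sent: a 20% response rate,
# far below the 70% that some statisticians would like to see.
rate = response_rate(1000, 5000)
print(f"Response rate: {rate:.0%}")
```

A 20% rate means the views of 4 out of every 5 people who were asked are simply unknown, which is why a raw count of 1,000 responses can still be misleading.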

REMEMBER 

To watch for fake or missing data, look for information about the study, including how many people were chosen to participate, how many finished the study, and what happened to all the participants, not just the ones who experienced a positive result.

 

Feeling the Impact of Misleading Statistics

How do misleading statistics affect your life? They can affect you in small ways or in large ways, depending on the type of statistics that cross your path and what you choose to do with the information that you're given. The most important way that statistics (good or bad) affect you is in your everyday decision making.

Think about the examples discussed throughout this chapter and how they could affect your decision making. You probably won't stay up at night wondering whether the remaining 14% of those surveyed actually microwave their leftovers. But you may run into other situations involving statistics that can affect you greatly, and you need to be ready and able to sort it all out. Here are some examples:

  • Someone may try to tell you that four out of five people surveyed agree that taxes should be raised, so you should too! Will you feel the pressure, or will you try to find out more information first? (Were you one of those kids that lived on the phrase "everyone else is doing it"?)

  • A political candidate sends you a newsletter giving campaign information based on statistics. Can you believe what he/she is telling you?

  • If you're ever chosen to be on a jury, chances are that somewhere along the line, you'll see a lawyer use statistics as part of an argument. You have to sort through all of the information and determine whether the evidence is convincing "beyond a reasonable doubt." In other words, what's the chance that the defendant is guilty? (For more on how to interpret probabilities, see Chapters 7 and 8.)

  • The radio news on the top of the hour says cellphones cause brain tumors. Your spouse uses his or her cellphone all the time. Should you be concerned?

  • What about those endless drug company advertisements? Imagine the pressure doctors must feel from their patients who come in convinced by advertisements that they need to take certain medications now. Being informed is one thing, but feeling informed because of an ad sponsored by the maker of the product is another.

  • If you have a medical problem, or know someone who does, you may be on the lookout for new treatments or therapies that could help. The world of medical results is full of statistics that can be very confusing.

In life, you come across everything from honest arithmetic errors to exaggerations and stretches of the truth, data fudging, data fishing (fishing for results), and reports that conveniently leave out information or communicate only those parts of the results that the researcher wants you to hear. While I need to stress that not all statistics are misleading and not everyone is out to get you, you do need to be vigilant. Sort out the good information from the suspicious and bad information, and you can steer clear of statistics that go wrong.

Sneaking statistics into everyday life

You make decisions every day based on statistics and statistical studies that you've heard about or seen, many times without even realizing it. Here are some examples of the types of decisions you may be making as you go through the day.

  • "Should I wear boots today? What did the weather report say last night? Oh yeah, a 30% chance of snow."

  • "How much water should I be drinking? I used to think eight eight-ounce glasses was the best thing to do, but now I hear that too much water could be bad for me!"

  • "Should I go out and buy some vitamins today? Mary says they work for her, but vitamins upset my stomach." (When is the best time to take a vitamin, anyway?)

  • "I'm getting a headache; maybe I should take an aspirin. Maybe I should try being out in the sun more, I've heard that helps stop migraines."

  • "Gee, I hope Rex doesn't chew up my rugs again while I'm at work. I heard somewhere that dogs on Prozac deal better with separation anxiety. A dog on Prozac? How would they get the dosage right? And what would I tell my friends?"

  • "Should I just do the drive-through again for lunch? I've heard of something called ‘bad cholesterol.’ But I suppose all fast food is the same — bad for you, right?"

  • "I wonder whether the boss is going to start cracking down on employees who send e-mail. I heard about a study that showed that people spend on average two hours a day checking and sending personal e-mails from work. No way do I spend that much time doing that!"

  • "Not another guy weaving in and out of traffic talking on his cellphone! I wonder when they're going to outlaw cellphones! I'm sure they cause a huge number of accidents!"

Not all of the examples involve numbers, yet they all involve the subject called statistics. Statistics is really about the process of making decisions, testing theories, comparing groups or treatments, and asking questions. The number crunching goes on behind the scenes, leaving you with lasting impressions and conclusions that ultimately get embedded into your daily decisions.
