Good Follow-Up Minimizes Non-Response

After the sample size has been chosen and the sample of individuals has been randomly selected from the target population, you have to get the information you need from the people in the sample. If you've ever thrown away a survey or refused to answer a few questions over the phone, you know that getting people to participate in a survey isn't easy. If a researcher wants to minimize bias, the best way to handle non-response is to "hound" the people in the sample: Follow up once, twice, or even three times, offering dollar bills, coupons, self-addressed stamped return envelopes, chances to win prizes, and so on. Note that offering more than a small token of incentive and appreciation for participating would create bias as well, because then people who really need the money are more likely to respond than those who don't.

Consider what motivates you to fill out a survey. If the incentive provided by the researcher doesn't get you, maybe the subject matter piques your interest. Unfortunately, this is where bias comes in. If only those folks who feel very strongly respond to a survey, only their opinions will count; because the other people who don't really care about the issue don't respond, each "I don't care" vote goes uncounted. And when people do care but don't take the time to complete the survey, those votes don't count, either.

For example, suppose 1,000 people are given a survey on whether to change the park rules to allow dogs. Who will respond? Most likely, the respondents will be those who feel strongly for allowing dogs and those who feel strongly against dogs in the park. Suppose 100 of each are the only respondents and the other 800 surveys aren't returned. This means that 800 opinions aren't counted. If none of those 800 people really cares about the issue either way, and if you could count their opinions, the results would be 800 ÷ 1,000 = 80% saying "no opinion," with 10% (100 ÷ 1,000) supporting the issue and 10% (100 ÷ 1,000) against it. Without the votes of the 800 non-respondents, however, researchers can report, "Of the people who responded, 50% were for the issue, and 50% were against it." This gives the impression of a very different (and biased!) result.
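
To make the arithmetic concrete, here's a minimal Python sketch (the variable names are mine, purely for illustration) comparing the true breakdown of all 1,000 opinions with the breakdown among respondents only:

```python
# Opinions of the full sample of 1,000 people
favor, against, no_opinion = 100, 100, 800

# True percentages if every opinion were counted
total = favor + against + no_opinion
print(favor / total * 100)       # 10.0
print(against / total * 100)     # 10.0
print(no_opinion / total * 100)  # 80.0

# Reported percentages when only the 200 strong-opinion holders respond
respondents = favor + against
print(favor / respondents * 100)    # 50.0
print(against / respondents * 100)  # 50.0 -- looks evenly split, yet 80% had no opinion
```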

HEADS UP 

The response rate of a survey is a percentage found by dividing the number of respondents by the total sample size and multiplying by 100%. A good response rate is anything over 70%. However, most response rates fall well short of that, unless the survey is done by a very reputable organization, such as The Gallup Organization. Look for the response rate when examining survey results. If the response rate is too low (much less than 70%), the results may be biased and should be ignored.
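
As a quick illustration, here's the response-rate calculation in Python (a minimal sketch; the numbers are hypothetical, not from any actual survey):

```python
def response_rate(respondents, sample_size):
    """Response rate = (number of respondents / total sample size) * 100%."""
    return respondents / sample_size * 100

# Hypothetical example: 620 completed surveys from an initial sample of 1,000
print(response_rate(620, 1000))  # 62.0 -- short of the 70% benchmark
```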

REMEMBER 

Selecting a smaller initial sample and following up aggressively is better than selecting a bigger sample that ends up with a low response rate. Aggressive follow-up reduces bias.

Tip 

The next time you're asked to participate in a good survey (according to the criteria listed in this chapter), consider responding. You'll be doing your part to rid the world of bias!

 

The Type of Survey Used Is Appropriate

Surveys come in many types: mail surveys, telephone surveys, Internet surveys, house-to-house interviews, and man-on-the-street surveys (in which someone comes up to you with a clipboard and asks, "Do you have a few minutes to participate in a survey?"). One very important yet sometimes overlooked criterion of a good survey is whether the type of survey being used is appropriate for the situation.

For example, if the target population is the population of people who are visually impaired, sending them a survey in the mail that has a tiny font isn't a good idea (yes, this has happened!). If you want to conduct a survey of victims of domestic violence, interviewing them in their homes isn't appropriate.

Suppose your target population is the homeless people in your city. How do you contact them? They don't have addresses or phones, so neither a mail survey nor a telephone survey is appropriate. (This is a very difficult problem, and one that the government wrestles with when census time comes around.) You can physically go and talk to people one on one, wherever they are located, but finding out where they're located is no easy task. Asking local shelters, churches, or other groups that help the homeless may give you some good leads to start the search.

REMEMBER 

When looking at the results of a survey, be sure to find out what type of survey was used and reflect on whether this type of survey was appropriate.

 

The Questions Are Well Worded

The way a question is worded in a survey can affect the results. For example, while President Bill Clinton was in office and the Monica Lewinsky scandal broke, a CNN/Gallup poll conducted August 21–23, 1998, asked respondents to judge Clinton's favorability, and about 60% gave him a positive rating. (The sample size for this survey was 1,317, and the margin of error was reported to be plus or minus 3 percentage points.) When CNN/Gallup reworded the question to ask respondents to judge Clinton's favorability "as a person," the results changed: Only about 40% gave him a positive rating.
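
You can roughly check the reported margin of error with the quick rule of thumb of 1 divided by the square root of the sample size (an approximation at a 95% confidence level, not necessarily the poll's exact method). Here's a minimal sketch in Python:

```python
import math

# Rough margin of error via the 1/sqrt(n) rule of thumb
n = 1317  # the poll's reported sample size
moe_percent = 1 / math.sqrt(n) * 100
print(round(moe_percent, 1))  # about 2.8 -- consistent with the reported +/- 3 points
```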

The next night, CNN/Gallup conducted another survey on the same topic. Here are some of the questions and responses:

  • Do you approve or disapprove of the way President Clinton is handling his job? (Sixty-two percent approved and 35% disapproved.)

  • Do you have a favorable or unfavorable overall opinion of Clinton? (Fifty-four percent said favorable, 43% said unfavorable.)

  • All things considered, are you glad Clinton is president? (Fifty-six percent said yes, 42% said no.)

  • If you could vote again for the 1996 candidates for president, who would you vote for? (Forty-six percent said they would vote for Bill Clinton, 34% for Bob Dole, 13% for Ross Perot.)

These questions all get at the same issue: how people felt about President Clinton during the Monica Lewinsky scandal. Although the questions are similar, each is worded slightly differently, and you can see how different the results are. So question wording does matter.

Probably the biggest problem with question wording, though, is the use of leading questions: questions worded in such a way that you know how the researcher wants you to answer. Leading questions produce biased results that give too much weight to a certain answer because of the wording of the question, not because people actually hold that opinion.

Many surveys contain leading questions (either unintentionally or by design) to get you to say what the pollster wants you to say. Here are a few examples of leading questions similar to ones I've seen in print:

  • Which position most agrees with yours? Democrats favor responsible, realistic fiscal planning to balance the budget in a reasonable period of time, while still meeting their responsibilities to the most vulnerable Americans. Republicans propose to enforce a mandatory balanced budget while allowing for severe cuts in education and health care.

  • Do you agree that the president should have the power of a line-item veto to eliminate waste?

  • Do you think that Congress and White House officials should set a good example by eliminating the perks and special privileges they currently receive at each taxpayer's expense?

HEADS UP 

When you see the results of a survey that's important to you, ask for a copy of the questions that were asked and analyze them to ensure that they were neutral and minimized bias.

 

The Survey Is Properly Timed

The timing of a survey is everything. Current events shape people's opinions, and while some pollsters try to determine how people really feel, others take advantage of these situations, especially the negative ones. For example, when shootings have occurred in schools, the issue of gun control has often been raised in surveys. Of course, during the time immediately following such a tragedy, more people are in favor of gun control than before; in other words, the results spike upward. After a period of time, however, opinions go back to what they were before; meanwhile, the pollsters project the results of the survey as if the public feels that way all the time.

The timing of any survey, regardless of the subject matter, can still cause bias. For example, suppose your target population is people who work full time. If you conduct a telephone survey by calling their homes between the hours of 9 a.m. and 5 p.m., you're going to have a lot of bias in your results, because those are the hours when the majority of full-time workers are at work!

HEADS UP 

Check the date when a survey was conducted and see whether you can determine any relevant events that may have temporarily influenced the results. Also look at what time of the day or night the survey was conducted: Was it during a time that is most convenient for the target population to respond?

 

The Survey Personnel Are Well Trained

The people who actually carry out surveys have tough jobs. They have to deal with hang-ups, take-us-off-your-list responses, and answering machines. And then, after they do get a live respondent on the other end of the phone or face to face, their job becomes even harder. That's when they have to collect data in an accurate and unbiased way.

Here are some problems that can come up during the survey process:

  • The respondent doesn't understand the question and needs more information. How much do you tell that person, while still remaining neutral?

  • Information gets miscoded. For example, I tell the survey person I'm 40 years old, but he accidentally writes down 60.

  • The person carrying out the survey has to make a judgment call. For example, suppose the question asks how many people are living in the house, and the respondent asks, "Do I count my cousin Bob who is just staying with us while he looks for a job?" A decision needs to be made.

  • Respondents may give misinformation. Some people hate surveys so much that they go beyond refusing to do them: They complete them but give misleading answers. For example, when asked how old she is, a woman may say 101 years old.

How do the survey personnel handle these and a host of other challenges that come up during the survey process? The key is to be clear and consistent about every possible scenario, discuss how each should be handled, and have this discussion well before participants are ever contacted. That means the personnel need to be well trained.

Tip 

You can also avoid problems by running a pilot study (a practice run with only a few respondents) that's recorded, so that personnel can practice and be evaluated on how accurately and consistently they collect their data. In this way, researchers can anticipate problems before the survey process starts and put policies in place for how they are to be handled.

Surveys should avoid unclear, ambiguous, or leading questions. A pilot study can also screen potentially difficult questions by gauging how the small group responds and what questions they ask. And to avoid miscoding the information (that is, recording the information incorrectly), the survey needs to have all possible answers clearly marked. For example, if "strongly disagree" is to be coded with a 1 and "strongly agree" is to be coded with a 5 (and not the other way around), the survey needs to clearly specify that. Taping the interview for a crosscheck later would also help.
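
Here's a minimal sketch of such an explicit coding scheme in Python (the answer labels and codes are illustrative, not taken from any particular survey):

```python
# Explicit answer-to-code mapping, written down once, so responses can't be miscoded
LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def code_answer(answer):
    """Look up the numeric code for an answer; fail loudly rather than guess."""
    return LIKERT_CODES[answer.strip().lower()]

print(code_answer("Strongly Agree"))  # 5
```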

HEADS UP 

Be sure you've decided how to handle prank respondents, people who are merely joking around. One suggestion is to flag that person's response as possibly unusable, and then call the number back later and try again.

 

The Survey Answers the Original Question

Suppose a researcher starts out with a statement like, "I want to find out about shoppers' buying habits." That sounds good, right? But then you look at the survey, and the questions are all about how people feel about shopping ("What do you like best/least about shopping?" or "On a scale of 1–10, how much do you enjoy shopping?"). The questions don't ask about buying behavior. While attitudes toward shopping do influence buying habits, the real measure of buying habits is how shoppers behave: what they buy, how much they spend, where they shop, with whom they shop, when they shop, how often they shop, and so on. Sometimes, researchers don't realize that they've missed the boat until after their survey results are in. After the fact (and too late to do anything about it), they see that they can't answer their research questions with the data they collected, which is not good!

HEADS UP 

Before participating in a survey, ask what the researcher is trying to find out — what the purpose of the survey is. Then as you read or listen to the questions, if they appear to have an agenda or to be leading you in a different direction, my advice is to stop your participation and explain why, either in writing or in person.

REMEMBER 

Before designing any survey, first write down the goals of the survey. What do you want to know? Then design the questions to meet those goals. That way, you're sure to get your questions answered.

 

Chapter 21: Ten Common Statistical Mistakes

This book is about not only understanding the statistics that you come across in the media and in your workplace, but also digging deeper to examine whether those statistics are correct, reasonable, and fair. You have to be vigilant — and a bit skeptical — to deal with today's information explosion, because many of the statistics you come across are wrong or misleading, either by error or by design. If you don't critique the information you're consuming, who will? In this chapter, I outline some common statistical mistakes made by researchers and by the media, and I share ways to recognize and avoid those mistakes.
