
Beware of surveys that have a large sample size that isn't randomly selected; Internet surveys are the biggest culprits. A company can say that 50,000 people logged on to its Web site to answer a survey, but that information is biased, because it represents the opinions of only those who had access to the Internet, went to the Web site, and chose to complete the survey.

Nonresponse Is Minimized

After the sample size has been chosen and the sample of individuals has been randomly selected from the target population, you have to get the information you need from the people in the sample. If you've ever thrown away a survey or refused to answer a few questions over the phone, you know that getting people to participate in a survey isn't easy.

The importance of following up

If a researcher wants to minimize bias, the best way to handle nonresponse is to "hound" the people in the sample: follow up once, twice, or even three times, offering dollar bills, coupons, self-addressed stamped return envelopes, chances to win prizes, and so on. Note that offering more than a small token of incentive and appreciation for participating can create bias as well, because people who really need the money are more likely to respond than those who don't.

Consider what motivates you to fill out a survey. If the incentive provided by the researcher doesn't get you, maybe the subject matter piques your interest. Unfortunately, this is where bias comes in. If only the folks who feel very strongly about an issue respond to a survey, only their opinions count; the people who don't really care about the issue don't respond, so each "I don't care" vote goes uncounted. And when people do care but don't take the time to complete the survey, those votes don't count, either.

The response rate of a survey is a percentage found by dividing the number of respondents by the total sample size and multiplying by 100%. According to statisticians, the ideal response rate is anything over 70%. However, most response rates fall well short of that, unless the survey is done by a very reputable organization, such as Gallup.
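To illustrate with made-up numbers: if a researcher mails surveys to a random sample of 500 people and 350 of them respond, the response rate is 350 ÷ 500 × 100% = 70%, which just reaches the benchmark; if only 150 respond, the rate drops to 30%, and nonresponse bias becomes a real concern.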

Look for the response rate when examining survey results. If the response rate is too low (much less than 70%), the results may be biased and should be ignored. Selecting a smaller initial sample and following up aggressively is better than selecting a bigger sample that ends up with a low response rate. Plan several follow-up calls or mailings to reduce bias. It also helps increase the response rate to let people know up front whether or not their responses will be shared.

Anonymity versus confidentiality

If you were to conduct a survey to determine the extent of personal email usage at work, the response rate would probably be low because many people are reluctant to disclose their use of personal email in the workplace. You could encourage people to respond by letting them know that their privacy would be protected during and after the survey.

When you report the results of a survey, you generally don't tie the information collected to the names of the respondents, because doing so would violate the privacy of the respondents. You've probably heard the terms anonymous and confidential before, but you may not realize that they have totally different meanings in terms of privacy issues. Keeping results confidential means that I could tie your information to your name in my report, but I promise that I won't do that. Keeping results anonymous means that I have no way of tying your information to your name in my report, even if I wanted to.

If you're asked to participate in a survey, be sure you're clear about what the researchers plan to do with your responses and whether or not your name can be tied to the survey. (Good surveys always make this issue very clear for you.) Then make a decision as to whether you still want to participate.

The Survey Is of the Right Type

Surveys come in many types: mail surveys, telephone surveys, Internet surveys, house-to-house interviews, and man-on-the-street surveys (in which someone comes up to you with a clipboard and asks, "Do you have a few minutes to participate in a survey?"). One very important yet sometimes overlooked criterion of a good survey is whether the type of survey being used is appropriate for the situation. For example, if the target population is people who are visually impaired, mailing them a survey printed in a tiny font isn't a good idea (yes, this has happened!).

When looking at the results of a survey, be sure to find out what type of survey was used and reflect on whether this type of survey was appropriate.

Questions Are Well Worded

The way in which a question is worded in a survey can affect the results. For example, while President Bill Clinton was in office and the Monica Lewinsky scandal broke, a CNN/Gallup Poll conducted August 21-23, 1998, asked respondents to judge Clinton's favorability, and about 60% gave him a positive rating. When CNN/Gallup reworded the question to ask respondents to judge Clinton's favorability "as a person," only about 40% gave him a positive rating. Both questions were getting at the same issue, yet even a slight difference in wording produced very different results. Question wording matters.

One huge problem is the use of misleading questions (in other words, questions that are worded in such a way that you know how the researcher wants you to answer). An example of a misleading question is, "Do you agree that the president should have the power of a line-item veto to eliminate waste?" This question should be worded in a neutral way, such as "What is your opinion about the line-item veto ability of a president?" Then give a scale from 1 to 5 where 1 = strongly disagree and 5 = strongly agree.

When you see the results of a survey that's important to you, ask for a copy of the questions that were asked and analyze them to ensure that they were neutral and minimized bias.

The Timing Is Appropriate

The timing of a survey is everything. Current events shape people's opinions, and while some pollsters try to determine how people really feel, others take advantage of these situations, especially the negative ones. For example, polls regarding gun control often come out right after a shooting that is reported by the national media. Timing of any survey, regardless of the subject matter, can still cause bias. Check the date when a survey was conducted and see whether you can determine any relevant events that may have temporarily influenced the results.

Personnel Are Well Trained

The people who actually carry out surveys have tough jobs. They have to deal with hang-ups, take-us-off-your-list responses, and answering machines. After they do get a live respondent on the other end of the phone or face to face, the job becomes even harder. For example, if the respondent doesn't understand a question and needs more information, how much can the interviewer say while still remaining neutral?

For a survey to be successful, the survey personnel must be trained to collect data in an accurate and unbiased way. The key is to be clear and consistent: think through every scenario that may come up, discuss how each should be handled, and do so well before participants are ever contacted.

You can also avoid problems by running a pilot study (a practice run with only a few respondents) to make sure the survey is clear and consistent and that the personnel are handling responses appropriately. Any problems identified can be fixed before the real survey starts.

Proper Conclusions Are Made

Even if a survey is done correctly, researchers can misinterpret or over-interpret results so that they say more than they really should. Here are some of the most common errors made in drawing conclusions from surveys:

Making projections to a larger population than the study actually represents

Claiming a difference exists between two groups when a difference isn't really there
