Statistics for Dummies

Author: Deborah Jean Rumsey
Tip 

While performing any calculations involving sample percentages, you must use the decimal form. After the calculations are finished, you may convert to percentages by multiplying by 100. To avoid round-off error, keep at least 2 decimal places throughout.

Suppose you work for the Las Vegas Chamber of Commerce, and you want to estimate with 95% confidence the difference between the proportion of females who have ever gone to see an Elvis impersonator and the proportion of males who have ever gone to see an Elvis impersonator, in order to help determine how you should market your entertainment offerings.

Because you want a 95% confidence interval, your Z-value is 1.96.

Suppose your random sample of 100 females includes 53 females who have seen an Elvis impersonator, so the female sample proportion is 53 ÷ 100 = 0.53. Suppose also that your random sample of 110 males includes 37 males who have ever seen an Elvis impersonator, so the male sample proportion is 37 ÷ 110 = 0.34.

The difference between these sample proportions (females minus males) is 0.53 - 0.34 = 0.19.

Take 0.53 times (1 - 0.53) and divide that by 100 to get 0.2491 ÷ 100 = 0.0025. Then take 0.34 times (1 - 0.34) and divide that by 110 to get 0.2244 ÷ 110 = 0.0020. Add these two results to get 0.0025 + 0.0020 = 0.0045; the square root is 0.0671.

1.96 × 0.0671 gives you 0.13, or 13%, which is the margin of error.

Your 95% confidence interval for the difference between the percentage of females who have seen an Elvis impersonator and the percentage of males who have seen an Elvis impersonator is 0.19 or 19% (which you got in Step 3), plus or minus 13%. The lower end of the interval is 0.19 - 0.13 = 0.06 or 6%; the upper end is 0.19 + 0.13 = 0.32 or 32%. So you can say with 95% confidence that a higher percentage of females than males have seen an Elvis impersonator, and the difference in these percentages is somewhere between 6% and 32%, based on your sample. Now would the guys actually admit they'd ever seen an Elvis impersonator? This may create some bias in the results. (The last time I was in Vegas, I thought I really saw Elvis; he was driving a van taxi to and from the airport.)
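The steps above can be sketched in Python. This is an illustrative script, not code from the book; the sample counts are the ones used in the example:

```python
import math

# Sketch of the two-proportion confidence interval worked above.
n1, x1 = 100, 53   # females sampled; number who saw an impersonator
n2, x2 = 110, 37   # males sampled; number who saw an impersonator
z = 1.96           # z-value for 95% confidence

p1 = x1 / n1                 # female sample proportion, 0.53
p2 = x2 / n2                 # male sample proportion, about 0.34
diff = p1 - p2               # about 0.19

# Standard error of the difference between two sample proportions
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
moe = z * se                 # margin of error, about 0.13

lower, upper = diff - moe, diff + moe
print(f"{diff:.2f} plus or minus {moe:.2f}: ({lower:.2f}, {upper:.2f})")
```

If you carry full precision instead of rounding each step to two decimal places, the upper end comes out near 0.33 rather than 0.32; small differences like this are just round-off.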

Tip 

Notice that you could get a negative value for the difference between the two sample proportions. For example, if you had switched the males and females, you would have gotten -0.19 for this difference. A positive difference means the first group has a larger value than the second group; a negative difference means the first group has a smaller value than the second group. You can avoid negative differences by always having the group with the larger value serve as the first group.

 

Part VI:
Putting a Claim to the (Hypothesis) Test
Chapter List
Chapter 14:
Claims, Tests, and Conclusions
Chapter 15:
Commonly Used Hypothesis Tests—Formulas and Examples

Many statistics form the basis of claims, like "Four out of five dentists surveyed recommend this gum" or "Our diapers are 25 percent more absorbent than the leading brand." How can you tell whether the claim is true? Researchers (who know what they're doing) use what's called a hypothesis test.

In this part, you explore the basics of hypothesis tests, determining how to set them up, carry them out, and interpret the results (all the while knowing that you're trying to make a statement about an entire population based on only a sample). You also get quick references and examples for the most commonly used hypothesis tests.

 

Chapter 14:
Claims, Tests, and Conclusions
Overview

You hear claims involving statistics all the time; the media has no shortage of them:

  • Twenty-five percent of all women in the United States have varicose veins. (Wow, are some claims better left unsaid, or what?)

  • Ecstasy use in teens dropped for the first time in recent years. The one-year decline ranged from about one-tenth to nearly one-third, depending on what grade they were in.

  • A 6-month-old baby sleeps an average of 14 to 15 hours in a 24-hour period. (Yeah, right!)

  • A name-brand ready-mix pie takes only 5 minutes to make.

Many claims involve numbers that seem to come out of thin air. Some claims make comparisons between one product or group and another. You may wonder whether such claims are valid, and you should. Not all claims are life changing (after all, what's the harm in using a soap that isn't 99.99 percent pure?) but some claims are — for example, which cancer treatment works best, which minivan is the safest, or whether certain drugs should be approved. While many claims are backed up by solid scientific (and statistically sound) research, other claims are not. In this chapter, you find out how to use statistics to determine whether a claim is actually valid and get the lowdown on the process that researchers should be using to validate every claim they make.

 

Responding to Claims: Some Do's and Don'ts

In today's age of information (and big money), a great deal rides on being able to back up your claims. Companies that say their products are better than the leading brand better be able to prove it, or they could face lawsuits. Drugs that are approved by the FDA have to show strong evidence that their products actually work without producing life-threatening side effects. Manufacturers have to make sure their products are being produced according to specifications to avoid recalls, customer complaints, and loss of business.

Research can also result in claims that can mean the difference between life and death, such as which cancer treatment is best, which side effects of a type of surgery are most common, what the survival rate of a certain treatment is, and whether or not a new experimental drug increases life expectancy. The research that goes into answering these questions needs to be sound, so that the right decision (at least the most statistically informative decision) can be made. If not, researchers can lose their reputations, credibility, and funding. (And sometimes, they feel pressure to produce results, which can lead to other problems, as well.)

Knowing your options

As a consumer in this age of information, when you hear a claim being made (for example, "Our ice cream was the top choice of 80% of taste testers"), you basically have three options:

  • Believe it automatically (or go the other way and reject it outright)

  • Conduct your own test to verify or refute the claim

  • Dig deeper for more information so you can make your own decision

Believing results without question (or rejecting them out of hand) isn't wise; the only times you may want to do this are when the source has already established a good (or bad) name with you or the result simply isn't that important (after all, you can't go around checking every single claim that comes your way). More on the other two options in the two following sections.

Steering clear of anecdotes

The second option for responding to a claim, the test-it-yourself approach, is one that is taken by many organizations, such as The Gallup Organization, which conducts its own polls; the Insurance Institute for Highway Safety, which crash tests and reports on the safety of vehicles; Consumer Reports, which tests and reports on product quality and value; and the Good Housekeeping Institute, which tests products before giving them its Seal of Approval.

The test-it-yourself approach can be effective if done correctly, with data that are based on well-designed studies that collect accurate, unbiased data (see Chapters 16 and 17 for more on study designs).

This approach is often taken in the workplace; for example, a competitor may make claims about its product that you think are untrue and should be tested. Or, you may think your product does a better job than a competitor's product, and you want to put the products to the test. Many manufacturers also do their own quality control (see Chapter 19), so they make a practice of testing their products to see whether they are within specifications.

Yet while this option is viable for groups that have the resources and the knowledge to undertake a proper study to test a claim, it can lead to misleading results if handled improperly.

One way that the media tests product claims is by sending people out into the field to check products out for themselves. This is an overused and unscientific (yet fun) method for testing a hypothesis. For example, suppose some TV show has determined that the world has to know: Does it really take five minutes to make a certain name-brand five-minute pie? Maybe it actually takes more, maybe it takes less. Statistically speaking, the variable of interest here is numerical — preparation time — and the population is all of the pies made using that name-brand recipe. The parameter of interest is the average preparation time for all pies made with that recipe. (A parameter is a single number that summarizes the population and is typically what the claim is about.) The claim here is that the average preparation time is 5 minutes. Their mission: Test this claim. How many pies will they use? Take a guess — it's just one!

They'd have cameras rolling, and the co-hosts would banter about how much fun it is to make the pie, how good it looks, keeping an eye on the time to prepare it (after all, they have to go to a commercial break soon). In the end, they'd report that it took them, say, 5.5 minutes — pretty close to the claim, but not exactly. And they'd end with a comment that using Snickers bars on top of this name-brand pie is a good candy bar choice (it is, by the way).

If these TV shows ever had a resident statistician who could give the statistical breakdown of the results (a real ratings booster, I know), I'd jump at that chance. The main idea I'd want to get across to the audience is that sample results vary (from person to person, from pie to pie) — see Chapter 9 for more on this. Measuring and understanding this variability is where the real statistics comes in. The bottom line is: Getting credible, conclusive results about any claim requires real data (that is, more than a single observation). Many people don't realize that properly testing a claim takes much more than a sample size of 1 (or even 2 or 3), because sample results vary.

You can't (or at least shouldn't) build any kind of lasting conclusions based on an anecdote, which is what a sample size of 1 really is. In statistics, a sample size of 1 doesn't make any sense. You can't measure any amount of variability with only one value (see Chapter 5 for the standard deviation formula to see what I mean). That's the trouble with most TV segments that show people testing claims by testing 1 or 2 individual products; they aren't doing a scientific test, and they send the wrong message about the way you test a hypothesis. Now while making conclusions about five-minute pies without sufficient data doesn't seem earth shattering, think about how many times hearing one person's single experience has influenced a decision you've made in your life.
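The point about sample size can be made concrete with a small simulation. The pie-time model below is invented purely for illustration (the true mean and spread are assumptions, not figures from the book):

```python
import random

# Invented model: suppose the claim is true and the average prep time
# really is 5.0 minutes, with some pie-to-pie spread (0.8 minutes here).
random.seed(42)  # fixed seed so the sketch is reproducible

def one_prep_time():
    return random.gauss(5.0, 0.8)

# A single pie (the TV-show approach) can land well away from 5 minutes
# even though the claim holds:
single = one_prep_time()

# The average of many pies settles near the true value, and the spread
# across the sample lets you measure the variability that a sample of
# size 1 cannot:
times = [one_prep_time() for _ in range(200)]
mean = sum(times) / len(times)

print(f"one pie: {single:.1f} min; average of 200 pies: {mean:.1f} min")
```

Run it a few times with different seeds: the single observation bounces around quite a bit, while the average of 200 stays close to 5 minutes.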

HEADS UP 

Beware of any study results based on extremely small sample sizes, especially those based on a sample size of 1. For example, if a study sends an individual out to test one package of meat, examine one child's toy, or test the accuracy of one individual pharmacy filling one prescription on one particular day, steer clear. These make for interesting stories and may uncover problems to be more fully investigated, but these results alone aren't scientific, and you shouldn't make any conclusions based on them.

Digging deeper

Digging deeper to get more information is the way you want to respond to claims that are important to you. Digging deeper gives you the information you need to ask the hard questions and make an informed decision.

The biggest difference between a statistically sound test of a claim and the man-on-the-street test of a claim is that a good test uses data that have been collected in a scientific, unbiased way, based on random samples that are large enough to get accurate information. (See Chapter 2 for more on this.) Most scientific research, including medical, pharmaceutical, engineering, and government research, is based on using statistical studies to test, verify, or refute claims of various types. Being a consumer of much of this information, oftentimes just from tiny sound bites on TV, you need to know what to look for to evaluate the study, understand the results, and make your own decisions about the claims being made.

HEADS UP 

You may wonder how much protection you have as a consumer regarding claims that researchers make. The U.S. government regulates and monitors a great deal of the research and production that goes on (for example, the FDA regulates drug research and distribution, the USDA monitors food production, and so on). But some areas, such as dietary supplements (vitamins, herbal and mineral supplements, and so on), aren't as rigorously regulated.

As a consumer of all the results thrown at you in today's society, you need to be armed with information to make good decisions. A good first step is to contact the researcher (or the journalist) to see whether any scientific studies back up his or her claim. If he or she says yes, ask whether you can see the descriptions and results of those studies, and then evaluate that information critically (see Chapters 16 and 17 for more on this).
