Sample This: Making Sense of Surveys
There are a lot of shoddy polls out there. Some are frank about their shortcomings and some aren’t. Here are some ideas for getting an accurate picture of what a poll can tell you.
“But mom, everyone is going. Why can’t I?”
The anxious parent typically responds with numerous reasons why going to the big party is not going to happen, especially since it’s at a friend’s house while the parents are out of town. Yet, it’s the critical-thinking parent who might instead reply: “Everyone? Did you collect data to support your position? Let’s see your sampling methodology.”
OK, not all skeptical parents will pose such a geeky response, but I’m sure they know that not every teen is going to the bash. Making generalized claims based on limited samples of people is a major problem, and not only in parent-child relationships. Indeed, learning to evaluate the quality of public opinion polls, scientific research and proclamations by politicians and pundits involves understanding some basic principles of random sampling. It’s an especially important lesson in the U.S. today as Americans prepare for midterm elections.
Many times we make decisions about everyday issues based on a non-scientific survey of friends. Think about how you get recommendations for movies, restaurants or travel tips. You ask people you know, or maybe depend on a Zagat guide or a website like TripAdvisor. None of these is any more representative of opinions than the “polls” on various news websites that draw hundreds of thousands of voters. CNN even reminds you, “This is not a scientific poll,” when you click to see the results of its daily “Quick Vote.”
Asking a few people’s opinions is a reasonable way to decide where to forage for the perfect pizza, but not a method to rely on exclusively when making life-or-death decisions about health issues or public policy.
As we enter the election season, we might ask how political polls like Gallup’s can be so accurate when they survey only about 2,000 people. The answer can be traced back to an infamous event: the Literary Digest announced that Republican candidate Alf Landon would win the 1936 presidential election against Democrat Franklin Roosevelt in a landslide. The magazine’s claim was based on around 2.3 million responses out of 10 million ballots mailed.
Roosevelt won that election (and two more after it); the Literary Digest was out of business by 1938.
Explanations include that the magazine used car registration lists and phone books to construct the sample, biasing the results toward the better off; that fewer than 25 percent of those receiving ballots responded; and that nonresponse bias occurred — those who did not return their ballots were more likely to be Roosevelt supporters. Pollsters today also make a distinction between the general adult population, registered voters and likely voters when conducting election surveys, something the Literary Digest did not do. Larger sample sizes do not, on their own, guarantee accurate results.
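The nonresponse mechanism described above can be sketched with a few lines of arithmetic. The numbers below are invented purely for illustration; they are not the Digest’s actual support levels or response rates:

```python
# Illustrative sketch of nonresponse bias. The rates below are invented
# assumptions for illustration -- NOT the Literary Digest's real figures.
# Suppose 55% of the mailing list truly favors Roosevelt, but Landon
# supporters are twice as likely to mail their ballots back.

true_roosevelt_share = 0.55          # assumed true support in the frame
response_rate_roosevelt = 0.15       # hypothetical response rates
response_rate_landon = 0.30

roosevelt_returns = true_roosevelt_share * response_rate_roosevelt
landon_returns = (1 - true_roosevelt_share) * response_rate_landon

polled_roosevelt_share = roosevelt_returns / (roosevelt_returns + landon_returns)

# The returned ballots now show Roosevelt losing even though he leads.
print(f"True Roosevelt share:   {true_roosevelt_share:.0%}")    # 55%
print(f"Polled Roosevelt share: {polled_roosevelt_share:.0%}")  # about 38%
```

Even with millions of ballots, the tally of those who bothered to respond can point to the wrong winner, which is why a huge sample is no cure for a biased one.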
When the original population from which a subset of respondents is drawn is not clearly defined, or the sample does not reflect its diversity, samples are unlikely to be good predictors of opinion or behavior. Statisticians have developed random probability techniques to ensure that samples are representative of the population, so that pollsters can reasonably conclude that polling 2,000 people will yield findings accurate to within, say, plus or minus 4 percent.
Sure, once we’re dealing with genuinely representative samples, doubling the sample size to 4,000 improves accuracy, but only modestly: the margin of error shrinks in proportion to the square root of the sample size, so doubling the sample cuts the error by a factor of about 1.4, and halving a 4 percent margin to 2 percent would actually require quadrupling the sample to 8,000. Is that extra precision worth the added cost and time of surveying so many more people?
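The tradeoff can be sketched with the standard worst-case margin-of-error formula for a simple random sample (real polls adjust for design effects and response rates, so published margins differ somewhat):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error at roughly 95% confidence for a
    simple random sample of size n; p=0.5 maximizes the margin."""
    return z * math.sqrt(p * (1 - p) / n)

# Diminishing returns: each halving of the error needs 4x the sample.
for n in (500, 2000, 4000, 8000):
    print(f"n = {n:5d}  ->  +/- {margin_of_error(n):.1%}")
```

Note the pattern: going from 2,000 to 8,000 respondents, a fourfold increase in cost and effort, only halves the margin of error.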
There will always be some margin of error, but thanks to these random sampling techniques, results like Gallup’s presidential polls come remarkably close to how the population actually votes.
When interpreting findings from polls and research studies, assess whether the data are based on at least a representative sample and preferably a random sample in which each person had an equal chance of being selected.
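That “equal chance of being selected” property is exactly what sampling without replacement from a complete list provides. A minimal sketch, using a made-up population standing in for a real sampling frame such as a voter list:

```python
import random

# A toy population; in a real poll this would be a complete sampling
# frame (e.g., a list of registered voters), not generated names.
population = [f"person_{i}" for i in range(10000)]

# random.sample draws without replacement, giving every member of the
# frame the same chance of selection -- the "equal chance" property.
respondents = random.sample(population, k=2000)

print(len(respondents))        # 2000 respondents drawn
print(len(set(respondents)))   # 2000: no one is selected twice
```

Contrast this with a self-selected web poll, where whoever feels like clicking is the sample; there is no frame, so the equal-chance property (and any meaningful margin of error) is lost.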
Sometimes it’s difficult to tell: The New York Times printed a correction in 2006 when it included data from an American Medical Association report on binge drinking, stated to be based on a random sample of 644 college women on spring break. The Times later concluded: “The sample, it turned out, was not random. It included only women who volunteered to answer questions — and only a quarter of them had actually ever taken a spring break trip. They hardly constituted a reliable cross section, and there is no way to calculate a margin of sampling error for such a ‘sample.’”
Also use your critical skills and constructive skepticism when evaluating sampling in everyday situations. For example, when an advertisement brags about a survey showing how much audiences loved a particular movie, ask which people leaving the theater were actually surveyed. The results apply only to those respondents; you cannot know what the people who left right away, or walked out in the middle of the movie, thought of it. Such surveys typically rely on convenience sampling, with the same nonresponse-bias risk that misled the Literary Digest.
Like the mountebanks of old pushing their wondrous elixirs, and today’s charlatans with their magical diet techniques, testimonial ads and the anecdotes of a few end up substituting for the scientific sampling of the many. Obviously, it’s not practical to generate a random sample each time you need to find a great hotel or restaurant; just be careful in interpreting all those website opinions.
Then maybe you can skeptically ask a friend whose tastes are similar to your own impeccable ones about where to eat or what party to attend. Or if their kids are attending that awesome party.
Peter M. Nardi, Ph.D., is an emeritus professor of sociology at Pitzer College, a member of the Claremont Colleges. He is the author of “Doing Survey Research: A Guide to Quantitative Methods.”