Brian J. Gaines is a professor of political science at the University of Illinois at Urbana-Champaign who specializes in elections, political behavior and political institutions.
Updated: October 10, 2012 6:18AM
During election season, poll results are everywhere.
Did the Democratic National Convention give President Barack Obama a bump in the polls?
How did Mitt Romney’s latest ad play among swing voters?
Even in a state such as Illinois, where the presidential election result is not in doubt, polls indicate that several legislative races are close.
Here are a few tips for reading these useful but imperfect devices for measuring public opinion.
† Pay little attention to “point estimates”: Suppose a poll finds that Candidate X leads Y 52 to 48 percent. Those estimates come with a margin of error, usually reported as plus or minus three or four percentage points. It is tempting to ignore this complication and read 52 to 48 as a small lead, even when the poll does say “+/- 4” somewhere. But when the margin covers the gap, the appropriate conclusion is “too close to call.”
† Even taking the margins of error into account does not guarantee accurate estimates: 52 percent +/- 4 percent is the interval 48 to 56 percent.
Are we positive that the true percentage planning to vote for Candidate X is in that range? No.
When we measure the attitudes of millions by contacting only hundreds, there is no escaping uncertainty. Usually, we compute intervals that will be wrong five times out of 100, simply by chance.
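For readers who want to see the arithmetic, the textbook calculation behind those intervals can be sketched in a few lines of Python. This assumes the classic formula for a simple random sample; the 1.96 multiplier is what produces the 95 percent intervals just described, the kind that are wrong five times out of 100 by chance. The sample size and percentages are hypothetical.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Classic margin of error for a sample proportion.

    p: observed proportion (0.52 for 52 percent)
    n: sample size
    z: critical value; 1.96 yields a 95 percent confidence
       interval, which misses the truth 5 times in 100 by chance
    """
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 600 people finding 52 percent support:
moe = margin_of_error(0.52, 600)
print(f"52% +/- {moe * 100:.1f} points")  # prints "52% +/- 4.0 points"
```

Note that a poll of roughly 600 respondents already gives the familiar plus-or-minus 4 points, which is why most media polls settle for samples in the hundreds rather than the thousands.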
† Reported margins of error are often underestimated, so it is prudent to regard even gaps that slightly exceed a poll’s reported margin as negligible.
Calculating margins is not hard, but the classic method works only if strong assumptions are met.
In particular, pollsters assume that every member of their target population is equally likely to be included in the small sample that is actually contacted — typically 500 to 2,000 people. This assumption is never really met.
For instance, it is difficult to include cellphones in telephone polls, for both legal and logistical reasons. If those without landlines differ from everyone else in political preferences, that’s a problem the pollster has to try to fix.
Some polls are conducted online, with respondents matched to a truly random sample by traits such as sex, age and location. But if people willing to participate in online polls are different from those who are unwilling, that’s a problem too.
No matter how the poll is conducted, there will be multiple sources of error, and adjusting for these errors is difficult.
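To make the coverage problem concrete, here is a small simulation with entirely made-up numbers: suppose 30 percent of voters are cell-only, that they favor Candidate X more than everyone else does, and that a landline-heavy poll reaches them at a much lower rate. The estimate then drifts away from the true value even though the formulaic margin of error is unchanged. This is an illustrative sketch, not a model of any actual poll.

```python
import random

random.seed(0)

# Hypothetical electorate: 30 percent are "cell-only" voters who
# support Candidate X at 60 percent; everyone else supports X at
# 45 percent. True support for X: 0.3*0.60 + 0.7*0.45 = 0.495.
def draw_respondent(cell_reach_rate):
    """Sample one respondent; cell-only voters are contacted at a
    reduced rate, mimicking a landline-heavy telephone poll."""
    while True:
        cell_only = random.random() < 0.30
        if cell_only and random.random() > cell_reach_rate:
            continue  # could not reach this voter; dial someone else
        return random.random() < (0.60 if cell_only else 0.45)

n = 1000
sample = [draw_respondent(cell_reach_rate=0.2) for _ in range(n)]
print(f"estimated support: {sum(sample) / n:.3f}  (true value: 0.495)")
```

With these assumed numbers the poll tends to land around 46 percent rather than 49.5 percent, a bias of several points that no reported margin of error accounts for. Pollsters try to correct for this by weighting, but the correction is only as good as their guesses about who was missed.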
† How a question is worded matters: Even small nuances in wording can affect results. Two polls that disagree will often differ in question or response wording. For instance, including the names of minor-party candidates as a response option can change the estimates of how the front-runners compare.
† When it comes to the presidency, national polls answer the wrong question: After the chaotic resolution of the 2000 election, most now realize that the presidency is won in electoral votes, not national popular votes. So if you must forecast the presidential race, there is no getting away from juggling many different polls from the few competitive states, and these may well err in different ways or different directions.
† Exit polls are a different animal: The tips so far have had in mind pre-election surveys featuring questions about vote intentions. Polls that follow voting pose distinct problems, but, in brief, they too can go awry in many ways.
When official results do not match exit polls, do not assume that only fraud could explain the difference.
Erroneous and even crooked vote tabulation sometimes occurs, but in the United States, flawed polling methods are far more common.