Covering the Election

How to read the polls

November 3, 2020


In the aftermath of the 2016 elections, the public viewed the polling industry with a jaundiced eye. Pollsters were blamed for leading everyone astray; journalists, in turn, had used polling data to trumpet misleading forecasts. Some of us are still feeling traumatized. Drew Linzer—the cofounder, director, and chief scientist of Civiqs, an online polling company—has not lost faith in numbers, though he believes we should treat them as “only one part of understanding and explaining the current political environment.”

Linzer, who is forty-four, has been studying public opinion for some twenty years. In 2013, when he was teaching political science at Emory University, he noticed that response rates to traditional telephone polling were on the decline and decided that his field needed to embrace the internet. So he left to start Civiqs, based in Oakland, where he conducts political polls online using a nationally representative panel of people who have agreed to answer questions periodically. With hundreds, sometimes thousands, of daily surveys, he said, “a picture emerges that’s really unlike anything else that could be achieved by talking to five hundred people every month.”

Civiqs didn’t release presidential polls in 2016, but after the election, everyone in the industry gave themselves a long, hard look. The data on the nationwide popular vote had been largely accurate, they reasoned, but in a handful of battleground states, surveys slightly overestimated Hillary Clinton’s vote share and underestimated Donald Trump’s; the margin of error tipped the result against expectation. A postmortem by the American Association for Public Opinion Research found that the worst miss involved education: voters with higher levels of education were more likely both to respond to surveys and to support Clinton, and many state polls did not adjust for the imbalance. The same evaluation also identified the problem of late-deciding voters, the undecideds who “broke for Trump” in the week just before the election. This time around, pollsters revised their methodology; Linzer did some tinkering with the process at Civiqs. Nobody wanted to screw up. “This is a profession that people take seriously,” Linzer said. “And the primary motivating factor here is towards accuracy.”

Journalists have also had to rethink how they handle polls. To identify the most trustworthy data, Linzer advised looking at the date on which a survey was conducted—and how it was executed. Were people called on their landline or their cellphone? Was the poll conducted by live interviewers or by automated recordings, known as interactive voice response? If a survey was done online, where did the respondents come from—people clicking on Facebook ads? Reporters should also check the number of people interviewed, and who they are. Are they all adults? Registered voters? Likely voters? People under a certain age, or people who have a particular racial identity?

When analyzing surveys, Linzer said, consider whether the pollster weighted the results in any way, and if so, by which demographic characteristics (age, gender, race, education, party). “We just want people who are reading the results to be able to see the results through the lens of the methodological choices that the survey organizations have made,” he explained. That includes a look at the wording of the questionnaire. Reporters should cultivate “an awareness of potential differences in the result as a consequence of survey techniques,” he said. “They can compare those to other results in an informed way.”
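To make that concrete, here is a minimal sketch of what weighting does, written in Python. Every number in it is invented, and it balances a single variable, education; it is not Civiqs’s actual method, which, like most pollsters’, adjusts several characteristics at once with more elaborate techniques.

```python
# A simplified sketch of survey weighting, with invented numbers.
# Each respondent is weighted by (population share of their group) /
# (sample share of their group), so underrepresented groups count for
# more and overrepresented groups count for less.

population_share = {"college": 0.35, "no_college": 0.65}  # hypothetical
sample_share     = {"college": 0.50, "no_college": 0.50}  # hypothetical

weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical respondents: (education group, supports Candidate A?)
respondents = [
    ("college", True), ("college", True), ("college", False),
    ("no_college", True), ("no_college", False), ("no_college", False),
]

raw = sum(1 for _, yes in respondents if yes) / len(respondents)
weighted = (sum(weights[g] for g, yes in respondents if yes)
            / sum(weights[g] for g, _ in respondents))

print(f"unweighted support: {raw:.0%}")      # 50%
print(f"weighted support:   {weighted:.0%}")  # 45%
```

The same six answers produce a different topline once the sample is rebalanced to look like the population—which is why knowing what a pollster weighted on matters when comparing two surveys.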

Once you’ve got a poll in hand, Linzer continued, avoid cherry-picking. “There have been times when we’ve seen reports where the information is selectively reported or people misrepresent what we’ve put out,” he said. (When asked for examples, he demurred.) To Linzer, the “number one issue” with coverage of polls is that journalists report dramatic transformations in public opinion, rather than use surveys to show “a combination of changes in people’s attitudes.” Shifts in public opinion tend to be small and gradual, and much poll-to-poll movement is simply sampling noise, despite our tendency to describe big swings. “Reporters write about changes in polling results over time,” he said, “that treat every single poll as an exact measure of truth.”

As we’re scanning the numbers on Election Day, Linzer’s tip was to bear in mind the margin of error—one to three percentage points, which, as we’ve learned, can be decisive. Even a well-executed poll is fallible. “It’s a very challenging job,” he said. “There is uncertainty involved. It’s fair to be skeptical of any poll result. I think that the responsible thing to do is to look at the aggregate of the polling evidence.”
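For the quantitatively inclined, here is a rough sketch of that arithmetic in Python. The formula assumes simple random sampling, which understates the uncertainty of real weighted polls, and every poll number below is invented for illustration.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error, in percentage points, for a
    proportion p estimated from a simple random sample of n people."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Typical national samples land in the range Linzer describes.
print(f"n=500:  +/-{margin_of_error(500):.1f} points")   # about 4.4
print(f"n=1000: +/-{margin_of_error(1000):.1f} points")  # about 3.1

# Why a single poll isn't an exact measure of truth: the noise on the
# difference between two polls is sqrt(2) times each poll's margin, so
# a 2-point "shift" between two n=1,000 polls is inside the noise.
moe = margin_of_error(1000)
diff_moe = math.sqrt(2) * moe
shift = 2.0
verdict = "real movement" if shift > diff_moe else "indistinguishable from noise"
print(f"2-point shift vs. +/-{diff_moe:.1f} on the difference: {verdict}")

# "Look at the aggregate": averaging independent, equal-size polls acts
# roughly like one poll with the pooled sample size.
polls = [51.0, 48.5, 50.2, 49.4]  # hypothetical toplines, n=1,000 each
average = sum(polls) / len(polls)
print(f"average of {len(polls)} polls: {average:.1f} "
      f"+/-{margin_of_error(1000 * len(polls)):.1f} points")
```

Averaging four such polls cuts the margin roughly in half, which is the statistical case behind Linzer’s advice to read the aggregate rather than any one survey.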



Shinhee Kang is a freelance journalist and former CJR fellow.