
Hanging off the Line

Election polls were largely accurate. But the stories we tell about them are often distorting.

November 15, 2024


Now that the smoke has cleared from Election Day, it appears that the polls and statistical models mostly got the story right. But if it doesn’t quite feel that way—the race seemed to be a nail-biter, then Donald Trump won decisively—that may have been a result of how the numbers were presented, and the conclusions that journalists (and news consumers) drew from the data. “If we want to minimise the risk of nasty shocks,” John Burn-Murdoch, a chief data reporter for the Financial Times, wrote last week, “and we want pollsters to get a fair hearing when the results are in, both sides need to accept that polls deal in fuzzy ranges, not hard numbers.”

Burn-Murdoch, who writes regularly about election polling, believes that recalibrating our understanding of the data starts with presentation. “If you keep putting out two nice, neat, thin lines, and one number above the other, then you’re still making it very, very easy for people to think, ‘Well, it’s going to go this way,’ even though it has all those caveats in the footnotes,” he told me. “Research pretty consistently shows that if you do put a central point to a line, people are going to focus mainly on that point.” The FT, in its poll tracker during this past election cycle, showed a “range of likely values.” The graph included distinct lines—red for Trump, blue for Kamala Harris, representing the midpoint of their respective ranges—but it also depicted the reach of each range and the wide brush of purple where the two overlapped.
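
Burn-Murdoch’s argument translates directly into chart code. Below is a minimal sketch in Python with matplotlib of the design he describes: the fuzzy ranges carry the visual weight, the midpoint lines are thin overlays, and the overlap reads as purple. The numbers, the plus-or-minus three-point band, and the sixty-day window are all invented for illustration, not drawn from the FT’s actual tracker.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
days = np.arange(60)

# Hypothetical polling-average midpoints (illustrative, not real data).
trump = 47.5 + rng.normal(0, 0.15, days.size).cumsum()
harris = 48.0 + rng.normal(0, 0.15, days.size).cumsum()
band = 3.0  # assumed +/- 3-point uncertainty around each average

fig, ax = plt.subplots()
# Draw the ranges first, with partial opacity, so the red and blue
# bands blend to purple automatically wherever they overlap.
ax.fill_between(days, trump - band, trump + band, color="red", alpha=0.3, label="Trump range")
ax.fill_between(days, harris - band, harris + band, color="blue", alpha=0.3, label="Harris range")
# The thin midpoint lines are deliberately understated: the bands,
# not these lines, should carry the visual weight.
ax.plot(days, trump, color="red", linewidth=0.8)
ax.plot(days, harris, color="blue", linewidth=0.8)
ax.set_xlabel("Days into tracking period")
ax.set_ylabel("Support (%)")
ax.legend()
plt.show()
```

Rendering the bands at partial opacity is what produces the purple blend without any extra logic; a toggle for the midpoint lines would be a small interactive addition on top of this.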

For Burn-Murdoch, the ideal would be a visual that shows just ranges, with a toggle allowing someone to see the central lines only if they want to. But that’s a tough sell in newsrooms where simple story lines are prized. “I don’t think that would necessarily be an easy conversation,” he said.

That tension—between the editorial desire for a straightforward narrative and a blurrier reality—is made more complicated by herding, the practice by which polling firms suppress or adjust results that stray too far from the dominant plotline. The 2024 election had “an implausibly small number of outliers,” per Mark P. Jones, who works on opinion polling as a senior research fellow at the University of Houston’s Hobby School of Public Affairs and as a codirector of the Baker Institute Presidential Elections Program at Rice University. If the data showed Trump winning Wisconsin, say, by a single percentage point, “we should have seen far more polls with Trump winning by four and with Harris winning by three than we actually did, as opposed to everyone being at zero or one point to either side.”
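
The arithmetic behind Jones’s complaint is easy to check. As a rough illustration (the Trump-plus-one margin, the eight-hundred-person samples, and the simulated poll count below are our assumptions, not his figures), honest, independent polling of a one-point race should scatter widely:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assume the true result is Trump +1 in two-party terms (50.5% to 49.5%)
# and each poll draws n = 800 respondents; both numbers are illustrative.
true_trump_share = 0.505
n_respondents = 800
n_polls = 10_000

# Each simulated poll is a binomial draw; its reported margin is
# Trump's share minus Harris's share, in percentage points.
trump_counts = rng.binomial(n_respondents, true_trump_share, n_polls)
margins = 100 * (2 * trump_counts / n_respondents - 1)

print(f"Trump +4 or better:      {np.mean(margins >= 4):.0%}")
print(f"Harris +3 or better:     {np.mean(margins <= -3):.0%}")
print(f"within a point of a tie: {np.mean(np.abs(margins) <= 1):.0%}")
```

Under these assumptions, roughly a fifth of polls should show Trump up four or more, about an eighth should show Harris up three or more, and only about a fifth should land within a point of a tie. A published field where nearly every result hugs zero is the statistical tell Jones describes.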

There was at least one notable pollster—Ann Selzer—who presented outlier numbers, just ahead of the election, showing Harris leading in Iowa by three points. She’s since faced intense media scrutiny, of the kind that could drive her peers away from reporting results outside the herd in years to come. “There are pollsters that are doing this to make a living, but also to try to gain future business,” Jones told me. “There is a real potential cost to getting it wrong.” To Jones, polling aggregators—such as Nate Silver, FiveThirtyEight, and RealClearPolitics—are partly to blame for the rise of herding. “If you have people who are grading pollsters based on their accuracy in an election result, you’ve created an incentive for those pollsters to ensure that they don’t take a position that could cause them to fall into ridicule.” (Silver, on his Substack, Silver Bulletin, observed that the likelihood of so many polls being as close as they appeared was one in nine and a half trillion. “There’s more herding in swing state polls than at a sheep farm in the Scottish Highlands,” he wrote.)
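
Silver’s one-in-trillions figure comes out of his own model, but the shape of the calculation is a simple binomial tail. Here is a back-of-envelope version; the one-in-four chance, the thirty polls, and the twenty-eight “hits” are stand-in assumptions for illustration, not his actual inputs:

```python
from math import comb

# If sampling error alone gives each poll roughly a 1-in-4 chance of
# landing within a point of a tie, what are the odds that 28 of 30
# independent polls do so anyway? (All three numbers are illustrative.)
p, n, k = 0.25, 30, 28
prob = sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))
print(f"P(at least {k} of {n} polls that tight by chance) = {prob:.1e}")
# ~3.5e-15, or about one chance in three hundred trillion.
```

Even with these made-up inputs, the probability lands in the same vanishingly small territory Silver describes, which is why aggregators treat a too-tidy cluster of polls as evidence of herding rather than consensus.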

When it comes time for reporters to cover what the polls mean, the data can distort in another way. Take broad national polls, which break results down by demographic group. “People are naturally curious, and so they dig in,” Burn-Murdoch said. “What do the numbers look like for Hispanic Americans? What do they look like for Black Americans?” But the data—often drawn from subgroup sample sizes too small to support meaningful conclusions, or insufficiently representative on factors such as age or level of education—are “not meant to be representative of those groups.”
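
There’s a simple reason those crosstabs are shaky: a poll’s margin of error scales with one over the square root of the sample size, so a subgroup of a hundred-odd respondents carries error bars roughly three times wider than the full poll’s. A quick sketch, with hypothetical subgroup counts, makes the gap concrete:

```python
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a simple random sample."""
    return 100 * z * sqrt(p * (1 - p) / n)

# A 1,000-person national poll versus the subgroups buried inside it.
# The subgroup sizes are hypothetical, and the simple formula understates
# the problem: real crosstabs also suffer the weighting and
# representativeness issues Burn-Murdoch flags.
for label, n in [("full sample", 1000),
                 ("Hispanic respondents", 120),
                 ("Black respondents", 110)]:
    print(f"{label:22s} n={n:5d}  MOE = +/- {margin_of_error(n):.1f} points")
```

The full sample’s margin of error comes out near three points; the subgroups’ approach nine, wide enough to swallow most of the demographic shifts that headlines get written about.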

The same can apply to exit polls, even with their larger scale. “We still see national media outlets writing these big splashy headlines about what those polls, which have long been seen as flawed, tell us,” Marcia Robiou, who worked as a producer on a PBS documentary, Latino Vote 2024, told me. For groups with relatively small populations but significant potential impact—Arab, Muslim, and Jewish voters, for instance—demographic-specific polling would help fill in the picture. And in some communities, the language in which a poll is conducted could influence the results. “My mother speaks Spanish and English, but she feels much more comfortable speaking in Spanish,” Robiou said. “If a pollster called her and asked her questions about the election in English, I don’t think she would want to respond.”

In making her documentary, Robiou referred to polls, but she and her team then investigated on the ground—speaking with Latino voters in Arizona, Wisconsin, and around the country. “The Latinos in Pennsylvania, for the most part, are Puerto Ricans, who tend to be quite left of center,” she observed. “That’s just very different from Latinos in Florida, who are overwhelmingly Cuban and are right of center.” For a holistic view, she said, it’s essential to look beyond the numbers and beyond a handful of battleground states; focusing too narrowly “could really skew the conversation as to what Latino voters think.”

That’s not to suggest it’s time to disregard polls, especially where they may shed light on demographic trends. “In the absence of good polling on these groups,” Jones said, “you get narratives that might not be accurate.”


Meghnad Bose is a Delacorte fellow at CJR.