Now that the smoke has cleared from Election Day, it appears that the polls and statistical models mostly got the story right. But if it doesn't quite feel that way (the race seemed to be a nail-biter, then Donald Trump won decisively), that may have been a result of how the numbers were presented, and the conclusions that journalists (and news consumers) drew from the data. "If we want to minimise the risk of nasty shocks," John Burn-Murdoch, the chief data reporter for the Financial Times, wrote last week, "and we want pollsters to get a fair hearing when the results are in, both sides need to accept that polls deal in fuzzy ranges, not hard numbers."
Burn-Murdoch, who writes regularly about election polling, believes that recalibrating our understanding of the data starts with presentation. "If you keep putting out two nice, neat, thin lines, and one number above the other, then you're still making it very, very easy for people to think, 'Well, it's going to go this way,' even though it has all those caveats in the footnotes," he told me. "Research pretty consistently shows that if you do put a central point to a line, people are going to focus mainly on that point." The FT, in its poll tracker during this past election cycle, showed a "range of likely values." The graph included distinct lines (red for Trump, blue for Kamala Harris, representing the midpoint of their respective ranges) but also depicted their overlap and reach: the wide brush of purple where their chances overlapped.
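That range-first presentation is straightforward to prototype. Here is a minimal sketch, not the FT's actual code, using invented numbers: each candidate gets a shaded band roughly the width of real-world polling error, and the thin central lines are drawn almost as an afterthought. Where the red and blue bands overlap, the blending produces exactly the purple wash described above.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
days = np.arange(90)

# Invented smoothed polling averages for two candidates.
trump = 47.0 + 0.01 * days + 0.2 * rng.normal(0, 0.1, days.size).cumsum()
harris = 48.5 - 0.01 * days + 0.2 * rng.normal(0, 0.1, days.size).cumsum()

band = 3.0  # a rough +/-3-point band, on the order of real polling error

fig, ax = plt.subplots(figsize=(8, 4))
# The shaded ranges carry the message: they overlap almost everywhere.
ax.fill_between(days, trump - band, trump + band, color="red", alpha=0.25, label="Trump range")
ax.fill_between(days, harris - band, harris + band, color="blue", alpha=0.25, label="Harris range")
# The thin central lines are what readers tend to fixate on.
ax.plot(days, trump, color="red", linewidth=1)
ax.plot(days, harris, color="blue", linewidth=1)
ax.set_xlabel("Days into the campaign")
ax.set_ylabel("Support (%)")
ax.legend(loc="lower right")
plt.show()
```

Drawing the bands before the lines is the point of the exercise: the eye meets the uncertainty first and the point estimate second.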
For Burn-Murdoch, the ideal would be a visual that shows just ranges, with a toggle allowing someone to see the central lines only if they want to. But that's a tough sell in newsrooms where simple story lines are prized. "I don't think that would necessarily be an easy conversation," he said.
That tension, between editorial desire for a straightforward narrative and blurry reality, is made more complicated by herding, when polling firms toss results that don't align with a dominant plotline. The 2024 election had "an implausibly small number of outliers," per Mark P. Jones, who works on opinion polling as a senior research fellow at the University of Houston's Hobby School of Public Affairs and as a codirector of the Baker Institute Presidential Elections Program at Rice University. If the data showed Trump winning Wisconsin, say, by a single percentage point, "we should have seen far more polls with Trump winning by four and with Harris winning by three than we actually did, as opposed to everyone being at zero or one point to either side."
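A quick simulation makes Jones's arithmetic concrete. The parameters here are assumptions for illustration, not his: polls of 800 voters and a true two-party Trump share of 50.5 percent, which implies a one-point lead.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 800             # assumed sample size per poll
true_trump = 0.505  # two-party Trump share implying a true +1-point margin
sims = 100_000

# Simulate honest polls: draw n two-party voters, report the margin.
trump_share = rng.binomial(n, true_trump, sims) / n
margin = (2 * trump_share - 1) * 100  # Trump minus Harris, in points

print(f"Trump +4 or better:  {np.mean(margin >= 4):.1%}")
print(f"Harris +3 or better: {np.mean(margin <= -3):.1%}")
print(f"within 1 point of the true +1 margin: {np.mean(abs(margin - 1) <= 1):.1%}")
```

Under those assumptions, sampling error alone puts roughly a fifth of honest polls at Trump +4 or better and about one in eight at Harris +3 or better, while only around a fifth land within a point of the truth. A published record clustered at zero or one is the signature of herding, not of precision.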
There was at least one notable pollster, Ann Selzer, who presented outlier numbers just ahead of the election, showing Harris leading in Iowa by three points. She's since faced intense media scrutiny, of the kind that could drive her peers away from reporting results outside the herd in years to come. "There are pollsters that are doing this to make a living, but also to try to gain future business," Jones told me. "There is a real potential cost to getting it wrong." To Jones, polling aggregators (such as Nate Silver, FiveThirtyEight, and RealClearPolitics) are partly to blame for the rise of herding. "If you have people who are grading pollsters based on their accuracy in an election result, you've created an incentive for those pollsters to ensure that they don't take a position that could cause them to fall into ridicule." (Silver, on his Substack, Silver Bulletin, observed that the likelihood of so many polls being as close as they appeared was one in nine and a half trillion. "There's more herding in swing state polls than at a sheep farm in the Scottish Highlands," he wrote.)
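Silver's exact model isn't reproduced here, but the scale of such a number is less mysterious than it looks: independent probabilities multiply. A toy calculation, with an assumed per-poll chance of landing inside a narrow band around the average, shows how fast the odds collapse toward one in trillions.

```python
# Not Silver's actual math, just the flavor of the arithmetic.
p = 0.25  # assumed chance a single honest poll lands inside the band
for k in (10, 20, 30):
    print(f"{k} polls all inside the band: about 1 in {1 / p**k:,.0f}")
```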
When it comes time for reporters to cover what the polls mean, that can have a distorting effect. Take broad national polls, which cite demographic information. "People are naturally curious, and so they dig in," Burn-Murdoch said. "What do the numbers look like for Hispanic Americans? What do they look like for Black Americans?" But the data, often drawn from community sample sizes too small to support meaningful conclusions, or insufficiently representative based on factors such as age or level of education, are "not meant to be representative of those groups."
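The sample-size half of that warning reduces to the standard 95 percent margin-of-error formula, 1.96 times the square root of p(1-p)/n. The sketch below uses hypothetical sizes: a national poll of a thousand respondents might contain only a hundred-odd members of any one group, and the error scales with the size of the slice, not of the whole.

```python
import math

def margin_of_error(n: int, p: float = 0.5) -> float:
    """95 percent margin of error, in points, for a simple random sample."""
    return 1.96 * math.sqrt(p * (1 - p) / n) * 100

# Hypothetical sizes: a 1,000-person national poll and two subgroups.
for label, n in [("full sample", 1000), ("subgroup of 120", 120), ("subgroup of 60", 60)]:
    print(f"{label:>15}: +/- {margin_of_error(n):.1f} points")
```

Roughly plus or minus 3 points for the full sample becomes about 9 for a 120-person slice and nearly 13 for a 60-person one, and that is before the weighting and representativeness problems Burn-Murdoch flags, which typically widen subgroup error further.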
The same can apply to exit polls, even with their larger scale. "We still see national media outlets writing these big splashy headlines about what those polls, which have long been seen as flawed, tell us," Marcia Robiou, who worked as a producer on a PBS documentary, Latino Vote 2024, told me. For groups with relatively small populations but significant potential impact (Arab, Muslim, and Jewish voters, for instance), demographic-specific polling would help fill the picture. And in some communities, the language in which a poll is conducted could influence the results. "My mother speaks Spanish and English, but she feels much more comfortable speaking in Spanish," Robiou said. "If a pollster called her and asked her questions about the election in English, I don't think she would want to respond."
In making her documentary, Robiou referred to polls, but she and her team then investigated on the ground, speaking with Latino voters in Arizona, Wisconsin, and around the country. "The Latinos in Pennsylvania, for the most part, are Puerto Ricans, who tend to be quite left of center," she observed. "That's just very different from Latinos in Florida, who are overwhelmingly Cuban and are right of center." For a holistic view, she said, it's essential to look beyond the numbers and certain battleground states. "That could really skew the conversation as to what Latino voters think."
That's not to suggest it's time to disregard polls, especially where they may shed light on demographic trends. "In the absence of good polling on these groups," Jones said, "you get narratives that might not be accurate."