United States Project

What we can learn from the factcheckers’ ratings

Sure, the factcheckers have their biases. It still means something that Republicans get the worst scores
June 4, 2013


What should we make of the latest tally showing that Republicans fare worse with factcheckers than Democrats do? Last week the Center for Media and Public Affairs, a nonpartisan research group based at George Mason University, reported that, so far during Obama’s second term, GOP statements were three times as likely as claims from Democrats to earn “False” and “Pants on Fire!” verdicts from PolitiFact’s Truth-O-Meter–and only half as likely to be rated “True.” The lead of a brief write-up by Alex Seitz-Wald at Salon.com seemed to take the results at face value:

Many politicians stretch the truth or obfuscate to some degree or another — but does one party do it more than the other? According to a new study from the Center for Media and Public Affairs at George Mason University the answer is an unequivocal yes.

Or, maybe, not so unequivocal: As conservative media watchdog NewsBusters was quick to point out (and as Seitz-Wald acknowledges), the results can also be read as evidence of selection bias at PolitiFact. The press release from the CMPA hints at this interpretation; it notes that the GOP fared worse even in May, despite “controversies over Obama administration statements regarding Benghazi, the IRS, and the Associated Press.” A quote from the group’s president, Robert Lichter, sounds the note again: “While Republicans see a credibility gap in the Obama administration, PolitiFact rates Republicans as the less credible party.”

PolitiFact itself, meanwhile, did its best to stay out of the fray. A brief letter from founder Bill Adair noted simply that the factchecking outlet rates individual statements and doesn’t claim to gauge which party lies more. “We are journalists, not social scientists,” Adair wrote. “We select statements to fact-check based on our news judgment–whether a statement is timely, provocative, whether it’s been repeated and whether readers would wonder if it is true.”

This story has a familiar ring by now. In 2009, political scientist John Sides tallied a few dozen Truth-O-Meter verdicts on claims about healthcare reform and found that Republican statements earned the two worst ratings almost three times as often as Democratic ones. He noted the potential for selection bias but concluded, “the data accord with what casual observation would suggest: opponents of health care reform have been more dishonest than supporters.” In 2011 another political scientist, Eric Ostermeier, found the same three-to-one ratio after counting up more than 500 PolitiFact rulings over 13 months. He drew the opposite conclusion: “it appears the sport of choice is game hunting–and the game is elephants.”

Whatever the reason, a similar pattern seems to hold at The Washington Post’s Fact Checker blog, where, by his own counts, Glenn Kessler hands out more Pinocchios, on average, to Republican statements. The differences tend to be slight–e.g., a 2.5-Pinocchio average for the GOP versus 2.1 for Democrats in the first half of 2012–and Kessler attributes them to electoral dynamics rather than to any difference between the parties. But an analysis of more than 300 Fact Checker rulings through the end of 2011, by Chris Mooney, found a telling detail: Republicans received nearly three times as many four-Pinocchio rulings. Even controlling for the number of statements checked, they earned the site’s worst rating at twice the rate of Democrats.


These tallies cover different periods and weren’t compiled according to a single methodology. Still, the broad pattern is striking: Republican statements evaluated by factcheckers are consistently two to three times as likely as Democratic ones to earn their harshest ratings.

So–for the proverbial engaged citizen (or journalist, or political scientist) who’s looking for clues about the nature of our political discourse, is there any meaning in that pattern? Obviously, the issue of selection bias can’t be ignored, since factcheckers don’t pick statements at random. Does that mean, as Sides wrote last week (seeming to depart from his earlier view), that the data simply don’t “say all that much about the truthfulness of political parties”? Or even, as Jonathan Bernstein added in the Post, that while we should be grateful for the research factcheckers assemble, we should throw out their conclusions altogether?

Or to put the question another way: Does it say nothing that retiring Rep. Michele Bachmann–the Minnesota Republican who is famous for claiming, for instance, that the HPV vaccine causes mental retardation–compiled almost unbelievably bad records with the factcheckers during her years in the House? Bachmann set a new bar for four-Pinocchio statements in Kessler’s column, and as a presidential contender in 2012 averaged 3.08 Pinocchios across 13 checked statements, the worst of all the candidates. Meanwhile, her first 13 statements checked by PolitiFact all earned “False” or “Pants on Fire” verdicts; in all, a remarkable 60 percent of her 59 Truth-O-Meter rulings fall into those two categories. No doubt Bachmann has made many true statements in office, but she kept the factcheckers well-supplied with irresistible falsehoods (as other journalists have pointed out).

Cases like Bachmann’s show why general acknowledgments of “selection bias” are so unsatisfying. Her extraordinarily bad ratings, compiled over so many statements, offer a window onto the particular ways in which factcheckers deviate from random selection as they choose claims to check every day. (It’s important to remember that this is part of a work routine. In my own experience watching and working with factcheckers as field research for my dissertation, finding claims worth investigating day after day took real digging.)

There are a few obvious ways the factcheckers behave differently from a computer algorithm plucking claims at random from political discourse. First, they ignore statements that are self-evidently true. Second, they try to stay away from things that aren’t “checkable,” like statements of opinion. (Critics often accuse them of failing this test.) And finally, the factcheckers are susceptible to a constellation of biases tied up in journalistic “news sense.” They want to be relevant, to win awards, to draw large audiences. They pick statements that seem important, or interesting, or outlandish. They have a bias toward things that stand out.

In practice, then, while factchecking is non-random, it’s non-random in ways that do tend to support certain inferences–cautious, qualified inferences–about the state of public discourse. Factcheckers don’t reliably index the truth of all political speech. Some kinds of dishonesty won’t show up in their ratings at all. A Republican (or a Democrat, for that matter) could argue, for instance, that the president’s rhetoric misrepresents his policies in a way that’s far more significant than anything a Michele Bachmann might say.

But the factcheckers’ actual, working biases act as a reliable filter for a certain kind of crazy: a flagrant disregard for established facts. Collectively and over time, their ratings seem to offer a mechanism for identifying patterns of political deception at the extremes. If a cluster of prominent Republicans consistently draws the worst ratings, we can start to ask questions and draw conclusions about political discourse on the right. And if the counter-argument is that the factcheckers consistently ignore or downplay outrageous claims from Democrats, that case needs to be made on the merits.

It’s clear why the factcheckers don’t make pronouncements about which party is more deceptive. To do so would invite charges of bias, and run the risk of coloring their judgment of individual claims. But that doesn’t mean we should dismiss their data outright, or that we can’t draw reasonable conclusions from it over time.


Lucas Graves is an assistant professor in the school of journalism and mass communication at the University of Wisconsin. Follow him on Twitter at @gravesmatter.