I love the idea of organizations like Politifact. These organizations fact-check the statements of politicians and other entities to explain the truth or fiction behind their claims. I think that people make better decisions when they’re better informed, and having a convenient way to cut through the fog of lies and misstatements can add some grease to a healthy democratic process.
Unfortunately, Politifact makes representations that distort, rather than expose, the truth. Politifact keeps “scorecards” of many prominent figures, summarizing the level of truth for all of the statements from that individual that Politifact has investigated. From this, it’s easy (if not expected) to infer the general trustworthiness of the individual.
Take Obama’s scorecard. It seems to weigh a bit more heavily on the true side, with a notable pile of “pants on fire” statements. But this tells us neither that Obama is trustworthy, nor that he’s a big fat liar. Here are 3 reasons why you can’t (or at least shouldn’t) make decisions about candidates based on their scorecards.
1. Selection Bias
Under-reporting
Politifact (and similar organizations) can’t investigate everything. Thousands upon thousands of words slip out of the mouths of the many politicians during and between each election cycle, and many of their claims would require the full time of several people to adequately vet.
For this reason alone the scorecards should be viewed with suspicion. Politifact may try to capture as many statements as possible, but it couldn’t possibly vet them all. This can leave out heavy quantities of truths (or lies) that a candidate actually says.
But even if Politifact could investigate every claim, it might choose not to. Fallible humans still steer the ship, and humans will always be subject to biases and preferences. It's probable (even likely) that the writers' political preferences slip into their decisions about which statements to vet (and writers who insist their preferences never leak into those decisions probably leak more than the ones self-aware enough to admit it).
To illustrate this, take a look at Bernie Sanders' Twitter feed. It's a constant barrage of statements, claims, assertions, ideas, values, arguments, and pictures. With 10-20 tweets a day, about 50% of which tend to contain some sort of verifiable assertion, an individual working 8 hours a day would need to fact-check roughly one statement per hour from Sanders' Twitter feed alone. Of course, some assertions will repeat and you could reuse a prior fact check, but even the process of finding and matching statements with prior checks and consolidating them onto a scorecard takes time, effort, and money. Doing that for all of the candidates all of the time is unrealistic.
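The back-of-the-envelope math above can be sketched in a few lines. The tweet volume (10-20 a day), the verifiable fraction (~50%), and the 8-hour workday are the rough figures assumed in the text, not measured data:

```python
# Sketch of the fact-checking workload estimate above. All inputs are
# the rough assumptions from the text, not measured data.

def checks_per_hour(tweets_per_day: float, verifiable_fraction: float,
                    work_hours: float = 8) -> float:
    """Fact checks per working hour needed to keep up with one feed."""
    verifiable_claims = tweets_per_day * verifiable_fraction
    return verifiable_claims / work_hours

low = checks_per_hour(10, 0.5)   # 0.625 checks/hour
high = checks_per_hour(20, 0.5)  # 1.25 checks/hour
print(f"{low:.3f} to {high:.2f} claims per hour, from one feed alone")
```

And that is before multiplying across every candidate, every platform, and every speech.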
In other words, using Politifact's scorecard to evaluate a candidate's level of honesty is like your mom making a judgement concerning your honesty based on that one time you stole a piece of chocolate, without considering the many times you left the chocolate alone.
Over-reporting
If under-reporting is an issue, so is over-reporting. This happens when an individual gets multiple marks down (or up) for saying the same statement multiple times.
There’s certainly room to argue that a candidate who repeats a lie should be repeatedly penalized, but since most repetition isn’t recorded, and there aren’t clear standards for when and when not to double-count, duplicative marks end up skewing scorecards.
For example, Ted Cruz recently got hit twice for saying that he was beating Hillary in comparative polls. Politifact was open about the fact that it had recently rated a similar (basically the same) claim already, but both claims still contribute to his overall scorecard. This would be fine if all candidates were hit for every time that they repeated a lie, but that’s not how the system works.
In other words, using Politifact's scorecard to evaluate a candidate's level of honesty is like your mom making a judgement concerning your honesty compared to your brother's based on those two times you stole a piece of chocolate, without considering the many times you left the chocolate alone, or considering that your brother stole pieces, like, a bazillion times.
Whether they count too little or count too much, Politifact’s scorecards misrepresent reality.
2. Substance
Quantity issues
The amount of content on a candidate's scorecard will presumably depend somewhat on how much the candidate says. This could be impacted by a number of factors, such as how long the candidate has been in the public eye, how much time they get at debates, their frequency of tweets, the quantity of ads they run, the number of interviews and rallies they agree to do, etc. Notably, it also depends on how interesting their statements would be to fact-check, already setting up a system subject to the same problems as other news outlets.
Fortunately, presidential candidates are typically in the limelight long and brightly enough to get a substantial amount of statements rated. For example, here’s how many statements each of these current and former candidates had on their scorecards as of April 16, 2016:
Clinton: 190
Rubio: 139
Trump: 126
Cruz: 105
Sanders: 88
Bush: 79
Kasich: 63
Carson: 27
O’Malley: 14
How many statements does it take to become statistically relevant? Not sure, but I wouldn't be surprised if even Obama's 581 statements don't give an accurate picture, due to the selection bias we discussed above.
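For a rough sense of scale, here's a hypothetical sketch of the standard 95% margin of error for a proportion at these sample sizes. It assumes, unrealistically, that the rated statements were a random sample of everything a candidate says, which the selection-bias problem above rules out; treat it as a best-case bound on how precise a scorecard could ever be:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion, assuming a random sample.

    This is the textbook formula z * sqrt(p(1-p)/n); p=0.5 gives the
    worst-case (widest) margin.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Scorecard sizes from the list above (O'Malley, Sanders, Clinton, Obama)
for n in [14, 88, 190, 581]:
    print(f"n={n:3d}: +/-{margin_of_error(n):.1%}")
# Even 581 statements leaves a margin of roughly +/-4 points --
# and only under the random-sampling assumption that doesn't hold here.
```

The margin shrinks slowly with sample size, and selection bias means the true uncertainty is larger still.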
In other words, using Politifact's scorecard to evaluate a candidate's level of honesty is like your mom making a judgement concerning your honesty based on those two times you were even around a chocolate box, without considering that you might be very honest despite lots of exposure to interesting chocolates.
Significance issues
The quality of graded statements probably matters more than the quantity. Sadly, all statements are counted equally on Politifact's scorecard.
Taken to its extreme, you could imagine that if Politifact gave someone a false rating for saying that "everyone loves Spongebob Squarepants," or a true rating for saying that "Spiderman is the best superhero," those ratings shouldn't be weighted the same as statements about Obamacare, ISIS, unemployment, racism, etc.
Sure, Politifact isn't rating statements about Spiderman, but that doesn't mean that all of the rated statements are equally important. For example, Bernie landed a healthy "true" rating for accurately stating that military spending by the Saudi government is the third highest in the world. Does anyone really care that he got that right? Will that make him a good, honest president? Nope. And yet it shows up on his scorecard with the same weight as statements like "when you're white, you don't know what it's like to be poor" (false (why isn't this one pants on fire?)) and "Hillary Clinton voted for virtually every trade agreement that has cost the workers of this country millions of jobs" (half true).
It would probably be impossible to give an accurate weighting to statements, or to fact-check a comparable balance of significance across candidate and truth-level categories. As such, even if Politifact could fact check everything out of a candidate’s mouth, the scorecard still wouldn’t be a great representation of the functional honesty of a candidate.
In other words, using Politifact's scorecard to evaluate a candidate's level of honesty is like your mom making a judgement concerning your honesty based on the one time you stole a piece of chocolate, without considering all the times you didn't steal a car (or, more significantly, steal her chocolate).
3. Information availability
The final problem I'll address with the scorecards involves the availability of evidence. The truth of some things that candidates say simply can't be known. But just because the truth can't be known by the public doesn't mean that the candidate isn't lying, or that the candidate doesn't know he or she is lying, or that the truthfulness of his or her statement isn't important.
For example, Rand Paul got bashed for saying that Hillary Clinton knew about pre-attack security requests from Benghazi. Politifact reasoned that since it couldn't find evidence that Hillary saw the requests, she must not have seen them, so Paul got a "mostly false" rating. However, a lack of evidence is not the same thing as falsehood. Such statements should get some sort of "inconclusive" rating, rather than codifying a fallacious "argument from ignorance" (assuming that Hillary's statement that she did not see the requests is true because it hasn't been proven false).
It should be clear from this that a scorecard heavy on the truth side may indicate that the person is a good liar, not that he or she is particularly honest. A good cover-up can register on the Politifact ratings.
In other words, using Politifact's scorecard to evaluate a candidate's level of honesty is like your mom making a judgement concerning your honesty based on your excellent cover-up of the bazillion times you stole pieces of chocolate and framed your brother.
Politifalse
Hopefully you now look at Politifact scorecards with a healthy dose of skepticism. I haven't even gotten into the ways Politifact makes errors within specific rulings (there are whole blogs, like Politifact Bias, that cover that issue). Fact-checking sites like Politifact are useful, but you should never outsource your own thinking.
Keep seeking truth.