In the final hours before polls open on Election Day, a large body of hard data stretching back at least to the 2018 midterms strongly indicates that President Trump is headed for a decisive defeat. Not a historic landslide, but not a close race either. Long-term polling trends and analysis of early voting patterns suggest that Trump is on track to lose all the states he lost in 2016, plus most of the battleground states close enough to go either way. In FiveThirtyEight’s model, Biden’s chances of winning in a landslide are nearly three times Trump’s chances of winning at all.
And yet, a whole lot of voters on both sides are either certain that Trump is going to win, or very uncertain about Biden’s chances. Trump’s surrogates and a few maverick pollsters are also showing up everywhere in the media, confidently predicting another surprise victory for the president. A Gallup poll from October showed 56% of Americans expecting Trump to win, at a time when polling averages had Biden ahead by eight points.
When one candidate is this far ahead in the last few days before an election, both sides have a vested interest in downplaying the polling data. The frontrunner needs to motivate turnout to avoid a last-minute collapse, and the underdog needs to motivate turnout to preserve any hope of catching up. But what we’re seeing right now in the public perceptions of the race goes far beyond that. There’s been a crisis of confidence in the science of polling itself, ever since Trump won four years ago. But it’s based more or less entirely on misperceptions and bad reasoning.
In no particular order, here are the top five voter fallacies of the 2020 election.
#1 — The polls predicted a Hillary victory in 2016. Trump won, so the polls were wrong.
This one relies on two false assumptions at once. The first is that polls predict anything at all. Polls don’t make predictions; they report data. Analysts and models make predictions, using data from polls. If predictions were wrong in 2016, it was because the models were wrong, or the analysts were wrong, or both.
It’s likely that some models actually were wrong, or at least inaccurate. A lot of models gave Clinton better than a 90% chance of winning. But FiveThirtyEight’s model gave Trump about a 1-in-3 shot, and that turned out to be closer to the truth.
The second false assumption is that the polls were wrong at all. By the time Election Day arrived, the national polling averages were essentially dead-on: Clinton was up by three to four points, and she won the popular vote by 2.1%. An analysis by FiveThirtyEight found that the 2016 polls were no better or worse than in earlier election years. Several polling averages in key battleground states were off by a wider margin, anywhere from five to seven points, but that was partly because state-level polling was scarcer and didn’t capture the last-minute shift of undecided voters toward Trump. The general consensus now is that most polls in the Midwestern battleground states also underestimated Trump’s support because they didn’t weight for educational attainment — after the election, those pollsters changed their methodology to fix the issue.
#2 — The polls were wrong in 2016, so we can’t trust what they say now.
This sounds a lot like the first one, but it’s a different fallacy. Because polling results are based on small samples of voters, every poll necessarily includes a margin of error, usually somewhere between two and six percentage points. And the error can swing in either direction. A poll in 2016 that was off by three points in Clinton’s favor might be off this year by three points in Trump’s favor, and that might be an entirely random result. Even if nothing changes about the way a poll is administered, a result from one year gives us essentially no predictive power as to whether that same poll will be more or less accurate in the future.
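The randomness of sampling error is easy to see in a simulation. The sketch below (hypothetical numbers throughout: a true support level of 52% and a sample of 800, typical of a phone poll) runs the "same" poll a thousand times against the same unchanging electorate, and checks how often the result lands within the standard 95% margin of error. Individual polls still scatter a few points high or low purely by chance.

```python
import random

random.seed(1)

TRUE_SUPPORT = 0.52   # hypothetical true share for one candidate
SAMPLE_SIZE = 800     # a typical phone-poll sample

def run_poll(n=SAMPLE_SIZE, p=TRUE_SUPPORT):
    """Simulate one poll: n random voters, each backing the candidate with probability p."""
    hits = sum(random.random() < p for _ in range(n))
    return hits / n

results = [run_poll() for _ in range(1000)]

# Standard 95% margin of error for a proportion: 1.96 * sqrt(p * (1 - p) / n).
moe = 1.96 * (TRUE_SUPPORT * (1 - TRUE_SUPPORT) / SAMPLE_SIZE) ** 0.5
within = sum(abs(r - TRUE_SUPPORT) <= moe for r in results) / len(results)

print(f"margin of error: ±{moe:.1%}")   # about ±3.5 points for n=800
print(f"polls inside the margin: {within:.0%}")
```

Roughly 95% of the simulated polls land inside the margin — which also means about one poll in twenty misses by more than its stated margin of error even when nothing about the electorate has changed.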
This fallacy also ignores the fact that all major polls have made adjustments in their methodology based on the final results from 2016 — adjustments intended to make the polls more accurate, given the new information they collected about American voters as a group. Even where the polls were wrong in 2016, they’re not likely to have the same problems again.
#3 — Some pollsters have biases. So their polls can’t be trusted.
Many of the best polls are administered by nonpartisan organizations, but that doesn’t mean the individual pollsters asking the questions have no biases of their own. Other good polls are administered by clearly partisan organizations that make no effort to hide their political leanings. But that doesn’t mean their biases necessarily infect their polling data. Fox News is known for its heavy conservative bias, but Fox News polls are typically highly reliable and methodologically sound. The same goes for CNN in the other direction. Not only that, but the candidates’ campaigns always conduct their own internal polling, and they make crucial decisions based on it. The campaigns obviously want to win, and expect to win, but they work hard not to let those biases get in the way of collecting good information. They need to know the truth — they can’t afford to act on a biased poll. What matters is not the personal preferences of the pollsters, but the scientific rigor of their studies.
#4 — Polls undersample Republicans, so they always show Democrats ahead.
It’s been a common complaint among Republicans in the past two years that polls systematically underrepresent Republicans relative to their numbers in the overall population. It’s largely a false complaint: the best polls work hard to get both Republicans and Democrats fairly represented, because that’s what accurate results require. And when it isn’t possible to start with a representative sample, pollsters weight the responses — a statistical adjustment that brings the results in line with what an ideally representative sample would have produced.
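Here is a minimal sketch of how that weighting adjustment works, using invented numbers: a sample where Republicans happen to be underrepresented, assumed true population shares, and hypothetical support levels for a "Candidate A." Each respondent is weighted so that their group counts in proportion to its share of the population, pulling the topline back toward what a representative sample would have shown.

```python
# Hypothetical raw sample of 1,000 respondents, with Republicans underrepresented.
sample = {"R": 300, "D": 450, "I": 250}          # respondents by party ID
population = {"R": 0.33, "D": 0.34, "I": 0.33}   # assumed true population shares

support = {"R": 0.10, "D": 0.95, "I": 0.50}      # hypothetical share backing Candidate A

n = sum(sample.values())

# Unweighted topline: skewed by the sample's partisan imbalance.
raw = sum(sample[g] * support[g] for g in sample) / n

# Weight each respondent so each group counts in proportion to the population.
weights = {g: population[g] * n / sample[g] for g in sample}
weighted = sum(sample[g] * weights[g] * support[g] for g in sample) / n

print(f"raw: {raw:.1%}, weighted: {weighted:.1%}")
```

With these made-up figures, the raw sample overstates Candidate A's support (58.3%) because Democrats are oversampled; weighting corrects the topline down to 52.1%. Real pollsters weight on many variables at once — party, age, region, and (since 2016) education — but the principle is the same.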
In general it’s important not to forget that the purpose of a poll is not to tell partisan voters what they want to hear. The purpose of a poll, and the goal of virtually all the trained professionals working in the field of opinion polling, is to be right. If your polls are consistently wrong, no one pays attention to them, and you lose your job. This is true not only for political polls but for market research and every other kind of polling: the only result that pays is an accurate result. There’s essentially never a reward for anything but accuracy.
#5 — If the “latest poll” is different from the previous averages, or from previous versions of the same poll, it shows a “shift” in the race.
One poll is never enough to hang any kind of conclusion on. A lot of attention has been focused this weekend on a new poll showing Trump leading by seven points in Iowa, where the polling averages had previously shown the race as basically even. It was widely reported as evidence of a dramatic shift in Iowa, because the previous edition of the same poll had the candidates dead even at 47% each. And this poll was conducted by Ann Selzer, who is widely regarded as one of the very best pollsters in the country. But the margin of error for the new poll was 3.4%, and the margin of error for the previous one was 3.5% — and those margins apply to each candidate’s share separately, so the uncertainty on the gap between the candidates is roughly twice as large. Taken together, the two polls are consistent with a scenario in which not a single Iowa voter has changed sides since September. That doesn’t mean we know no one has changed sides in Iowa; it only means we can’t rule it out. Without more polling, we still don’t know whether the race has truly shifted to that extent.
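The arithmetic behind that claim can be sketched in a few lines. This is a back-of-envelope check, assuming (as is standard) that each reported margin of error applies to a single candidate's share, that the margin of error on the *lead* is roughly double that, and that the uncertainty of a change between two independent polls adds in quadrature.

```python
import math

# Figures from the two Iowa polls discussed above (percentage points).
moe_sept, moe_oct = 3.5, 3.4     # reported margins of error
lead_sept, lead_oct = 0.0, 7.0   # Trump's lead in each poll

# MoE on the lead is roughly twice the MoE on one candidate's share.
moe_lead_sept = 2 * moe_sept
moe_lead_oct = 2 * moe_oct

# Uncertainty of the *change* between two independent polls adds in quadrature.
moe_change = math.sqrt(moe_lead_sept**2 + moe_lead_oct**2)

change = lead_oct - lead_sept    # apparent 7-point swing
consistent_with_no_change = change <= moe_change

print(f"apparent swing: {change:.1f} pts, uncertainty: ±{moe_change:.1f} pts")
```

The uncertainty on the change works out to roughly ±9.8 points, comfortably larger than the apparent 7-point swing — so by this rough calculation, the two polls alone cannot rule out a static race.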
These five fallacies are far from the only ways to be wrong about political polls. But if you’re a voter who is currently worried about the possibility of Joe Biden losing in a fair election this year, the chances are that you’ve been convinced by one or more of these arguments at some point in the past four years. Put it this way: if the polls are no less accurate than they were in 2016, then Joe Biden will win by a large margin.
Now if it turns out not to be a fair election, anything could still happen. But that would be no reason to doubt public polling in the future, although it could be a reason to doubt that polls will ever matter again.