George Rebane
We must always remember that, more often than not, the public mind is fickle, frenzied, and foolish.
There was no sign of the ‘Blue Wave’ predicted by the pollsters and ballyhooed by the lamestream. This is the second (third?) national election in a row in which the polls have been drastically off the mark. And, of course, the leftwing lamestream media has already demonstrated itself to be an extremely unreliable source of news, relegating itself to serving the DNC as its propaganda trumpet. Fox News didn’t become the top news channel through its own efforts alone; it has had a lot of help from the progressive journalism industry, which keeps pandering to its audience and shooting itself in the foot over and over again.
The bottom line is that even the prominent leftwing outlet The Atlantic admits the pollsters blew it, in a piece titled ‘The Polling Crisis Is a Catastrophe for American Democracy’. It correctly concludes that “If public-opinion data are unreliable, we’re all flying blind.” But the catastrophe is avoidable, since we know its cause and how to correct it. I explained all this in ‘Polling Phollies’, the article I wrote in September 2016 in the run-up to Trump’s unpredicted victory.
Polling has both obvious and subtle twists and turns. Bad results arise mainly from a few poll design factors: the sample size, the sampling method, the true shares of the population’s preferences, and the number of ‘bins’ into which the poll sorts those preferences. On the last factor, a poll may ask respondents to choose among two, three, or more candidates. A quick illustration of how the dominant factor, sample size, drives a poll’s error follows below.
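To see how sample size drives the error, consider the textbook binomial model of a simple random sample (a minimal sketch in Python, not any pollster’s actual weighting methodology): the standard error of an estimated share p is √(p(1−p)/n), which is largest at p = 0.5.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Two-sided 95% margin of error for a polled share p with sample
    size n, under the simple-random-sample (binomial) assumption."""
    return z * math.sqrt(p * (1 - p) / n)

# The error peaks when the true share is near 0.5 and shrinks only
# with the square root of the sample size.
for n in (500, 1000, 5000, 20000):
    print(f"n={n:>6}: +/-{margin_of_error(0.5, n):.1%} at p = 0.50")
```

At the usual n = 1,000, the margin is about ±3 points at p = 0.5, which already foreshadows the problem discussed below.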
(For those wanting more detailed and technical information on these factors and polling errors, here and here are good places to start.)
The punchline is that the polling errors are almost always due to a sample size too small to keep the error bounds from overlapping hugely, especially when the actual population shares for two alternatives/candidates are in the 0.5 region. As explained and illustrated in my 2016 article referenced above, that is the region where any given sample size produces its largest error bounds (the standard error √(p(1−p)/n) peaks at p = 0.5), and more often than not (as today between Trump and Biden) those bounds overlap significantly. For example, a sample size of 1,000 is far too small to separate the error bounds in a close race, as the sketch below shows. Such large overlaps also give rise to the back-and-forth changes of the lead as the vote count proceeds.
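To put numbers on that claim, here is a back-of-the-envelope check (again under the simple-random-sample assumption, with hypothetical 51/49 shares rather than data from any actual poll):

```python
import math

Z = 1.96  # two-sided 95% confidence

def ci(p, n):
    """95% confidence interval for a polled share p at sample size n
    (simple-random-sample assumption, as above)."""
    half = Z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# A hypothetical 51/49 race polled with n = 1,000: the intervals overlap,
# so the poll cannot distinguish the leader from the trailer.
n = 1000
a_lo, a_hi = ci(0.51, n)
b_lo, b_hi = ci(0.49, n)
print(f"Candidate A: [{a_lo:.3f}, {a_hi:.3f}]")
print(f"Candidate B: [{b_lo:.3f}, {b_hi:.3f}]")
print("Overlap!" if a_lo < b_hi else "Separated")

# Sample size needed before the two intervals separate:
# require lead > 2 * Z * sqrt(p(1-p)/n), i.e. n > (2*Z*sqrt(p(1-p))/lead)^2
lead, p = 0.02, 0.5
n_needed = (2 * Z * math.sqrt(p * (1 - p)) / lead) ** 2
print(f"Roughly n > {n_needed:,.0f} respondents needed for a {lead:.0%} lead")
```

Under these assumptions, roughly 9,600 respondents are needed before a two-point lead clears the overlapping 95% intervals, nearly ten times the typical 1,000-person poll.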
So before we get all worked up about voter fraud, it’s prudent to look at other viable causes for what looks like someone monkeying with the count as the hours pass, while pundits make their points, ranging from ‘I told you so’ to exquisite CYA explanations for piss-poor prognostications (more here).
Finally, I’m more than disappointed that no one has presented, in easy-to-understand graphic form (again, see my 2016 report), the reasons for all the polling follies and what’s required to eliminate them. The bottom line is that the pollsters must get their technical act together, explain the accessible particulars to their audiences, and work hard now to regain their credibility with the public.

