The other important thing to say is that if the Comey quote is true, then he really should have listened to, well, the election forecasts that put the figure closer to 70 percent. So that actually becomes an argument in favor of forecasts.
Well, what is a “good” forecast? If we go back to 2016, as you say, Nate Silver’s forecast gave Trump a 30 percent chance of winning. Other models put Trump’s chances at just 1 percent, or in the low single digits. The sense is that because Trump won, Nate Silver was therefore “right.” But of course we cannot really say that. If you say something has a 1-in-100 chance of happening and it does happen, it could mean you underestimated it, or it could just mean that the 1-in-100 shot came through.
This is the problem of validating whether election forecast models are well calibrated against real-world events. Going back to 1940, we have only 20 presidential elections in our sample. So there is no real statistical justification for an exact probability here. Ninety-seven versus 96 percent – with such a limited sample it is insanely difficult to know whether these things are calibrated correctly to within 1 percent. This whole exercise is much more uncertain than the press, I think, leads consumers of polls and forecasts to believe.
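[Editor’s note: as an illustration of how little 20 elections can tell us, here is a minimal sketch – not from the interview, and assuming forecasts reduce to a single “favorite wins” probability. It simulates many 20-election histories in which the favorite truly wins 97 percent of the time, then checks how often the data even favors a 97 percent model over a 96 percent one.]

```python
import numpy as np

rng = np.random.default_rng(0)

N_ELECTIONS = 20   # presidential elections since 1940, as in the interview
P_TRUE = 0.97      # suppose the favorite really wins 97 percent of the time
P_ALT = 0.96       # a rival model that says 96 percent

# Simulate many possible 20-election histories under the 97 percent model.
n_sims = 100_000
wins = rng.binomial(N_ELECTIONS, P_TRUE, size=n_sims)

# Log-likelihood of each simulated history under both models.
ll_true = wins * np.log(P_TRUE) + (N_ELECTIONS - wins) * np.log(1 - P_TRUE)
ll_alt = wins * np.log(P_ALT) + (N_ELECTIONS - wins) * np.log(1 - P_ALT)

# How often does 20 elections' worth of data even point toward the true model?
share = np.mean(ll_true > ll_alt)
print(f"Histories favoring the 97% model over the 96% model: {share:.2f}")
```

[In this toy setup, only about half of the simulated 20-election histories favor the correct 97 percent model over the 96 percent alternative – essentially a coin flip, which is the calibration problem described above.]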
In your book, you talk about Franklin Roosevelt’s pollster, who was an early genius in polling – but even his career eventually went up in flames, right?
This guy, Emil Hurja, was Franklin Roosevelt’s pollster and an extraordinary election forecaster. He devised the first real aggregate of polls and the first tracking poll. A truly fascinating character in the history of polling. He’s insanely accurate at first. In 1932, he predicts that Franklin Roosevelt will win by 7.5 million votes, even though other people predict that Roosevelt will lose. He wins by 7.1 million votes. So Hurja is better calibrated than the other pollsters at the time. But then he flops in 1940, and later he’s really no more accurate than your average pollster.
When investing, it is difficult to beat the market over a long period of time. In the same way, with polling, you have to constantly rethink your methods and your assumptions. Although Emil Hurja was called “The Wizard of Washington” and the “Crystal Gazer of Crystal Falls, Michigan” early on, his record slips over time. Or maybe he was just lucky early on. It’s hard to tell in hindsight whether he really was this ingenious predictor.
I’m bringing this up because – well, I’m not trying to scare you, but it could be that your biggest miss is still somewhere out there in the future, yet to come.
There is such a lesson here. What I want people to think about is that just because the polls were skewed in one direction in the last few elections does not mean they will be biased in the same way, for the same reasons, in the next election. The smartest thing we can do is read every single poll with an eye on how the data was generated. Are the questions worded properly? Does this poll reflect Americans across their demographic and political tendencies? Is this pollster a reputable pollster? Is there anything going on in the political environment that could get Democrats or Republicans to answer the phone or take online surveys at higher or lower rates than the other party? You need to think through all these possibilities before accepting the data. And so it is an argument for treating opinion polls with more uncertainty than we have treated them with in the past. I think that is a pretty self-evident conclusion from the last few elections. But more fundamentally, it is true because of how pollsters arrive at their estimates: they are uncertain estimates at the end of the day; they are not the truth of public opinion. And that’s how I want people to think about it.