I want to finish with Philip Tetlock's[1] ideas concerning our ability to predict the future in the realm of social and political developments. There are those among us who count on TV pundits and newspaper columnists to do that predicting for us, assuming a basic agreement exists between the consumer and provider of such a service. Those with a heightened interest in political and social affairs regularly take in the predictive insights of such commentators; the problem, according to Tetlock, is that these dispensers of wisdom do not enjoy very good "batting averages" when it comes to predicting accurately.
Lucky for the pundits, their listeners are not so much interested in accuracy as they are hungry for general rationales and talking points for their already-held beliefs. By the time a failure to forecast becomes evident, the attention of the audience has moved on to other matters, and so the mistakes go unnoticed. I saw a video the other day in which President Obama recited the predictions pundits had warned us about should he be reelected in 2012. To remind you, one was gas prices in the $5.00 to $6.00 range and another was unemployment nearing ten percent. He then juxtaposed those figures with the rates prevailing at the time he was making the comparison. The disparities made for a fairly humorous bit of comedy.
Tetlock's point, though, was not so much to castigate these predictors as to address how we might approach the fine skill of forecasting in order to be more successful at it.
For better prognostication, it turns out, being modest about our ability to predict is helpful. As I reported in my last posting, research indicates that being less than confident in this endeavor, unlike in playing golf, seems to up our chances of getting it right. This modesty is enhanced by
realizing that political realities are best anticipated with apprehension. Over what?
Over what is referred to as “black swans.”
Black swans are unexpected developments or events that
disrupt the equilibrium of forces that are holding sway at any given time. These swans appear recurrently, but without
any regularity in terms of their timing.
It’s as if we are going along within certain patterns – for example, the
patterns of our presidential campaign cycles – and boom, up comes a Donald
Trump to upend all of our expectations as to what will happen next – have you
picked out your Canadian domicile?
Tetlock goes on to describe this whole process by which one attempts to
responsibly forecast the political future as a cloudlike endeavor as opposed to
a clocklike endeavor. That is, it’s best
to see the whole movement of politics as clouds coming and going in no particular
pattern, here and there, and not to see it as the fine, mechanical movements of
a clock.
Tetlock next draws our attention to the types of mistakes these pundits can make. But before I share these with you, I want to remind you of what a former secretary of defense once said: there are things we know we know; there are things we know we don't know; and there are things we don't know we don't know.[2] Of course, black swans fall into the last category. With this as context, let's look at the two kinds of prediction mistakes Tetlock identifies: false-positive mistakes and false-negative mistakes. I will quickly add a third possibility, an accurate prediction. A false-positive mistake would be an erroneous prediction that something is going to happen. Around this time every year, for example, I predict
the Yankees will win the World Series. I
usually make a false-positive mistake, but my "batting average" is still higher than it would be for a similar prediction about any other professional franchise here in the US. It is a prediction situation in which I know what I don't know, namely how the baseball season will unfold. That is, if there is a World Series this year, there will be a winner, but I don't know which team it will be; I believe a particular team will win, but I can't know that it will. If I am wrong – heaven forbid – then I would
have committed a false-positive mistake.
Implied in this prediction is the assumption, as I just pointed out,
that there will be a World Series, but what if there is some development within
baseball that I am not aware of that threatens that eventuality? After all, I can remember a year when we didn’t
have a World Series due to a player work dispute. Now, I have no reason to believe that will
happen this year, but say it does. In my
prediction of who will win, I failed to account for a necessary precondition. In other words, I failed to predict that something would not happen – the playing of the World Series. That is a false-negative mistake.
It also demonstrates an unknown unknown. People who study this sort of thing point out
that sellers of health insurance face this lack of relevant information all the
time. In selling you a policy, they don’t
know what you are not telling them about your health or what you don’t even know
about your health.[3] As for what you know you know, even when the reality is not a hundred percent one way or the other, you do know the probabilities and can manage whatever risk the situation entails; but in the world of political developments, such probabilities are often not what one is dealing with, hence the high occurrence of false-positive and false-negative mistakes in predicting future events or developments.
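To put the taxonomy above in concrete terms, here is a minimal sketch in Python (my own illustration with made-up examples, not anything drawn from Tetlock) of how one might sort individual forecasts into false positives, false negatives, and accurate predictions:

```python
# Made-up forecasts: (description, predicted to happen?, actually happened?)
forecasts = [
    ("The Yankees win the World Series", True, False),
    ("A World Series is played at all", True, True),
    ("The season is lost to a player work dispute", False, True),
]

def classify(predicted, happened):
    """Sort a single forecast into the three outcomes described above."""
    if predicted and not happened:
        return "false-positive mistake"   # I said it would happen; it didn't
    if not predicted and happened:
        return "false-negative mistake"   # I said it wouldn't happen; it did
    return "accurate prediction"          # forecast and outcome agree

for description, predicted, happened in forecasts:
    print(f"{description}: {classify(predicted, happened)}")
```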
This gets us back to the poker player’s way of thinking about
this game of predictions. Probability
studies come into play and the use of math becomes useful – e.g., visualizing
the different possible hands one is confronting. The irony is that such probability studies
are highly dependent on a history of false-positive mistakes and false-negative
mistakes as well. It is the analysis of
these mistakes, of reviewing how and why mistakes were made and assigning numerical values to the relevant factors, that ups our chances (and let me
emphasize “chances”) of getting it right.
It turns out that mathematical algorithms have been developed and
programmed into computers to work on these types of predictive projects and
they actually do better, by a long shot, than their human counterparts. There are many ways to test this; in chess, for example, algorithm-based programs beat the best human players regularly. But as Tetlock points out, this need not be a
competition between humans and machines but a possible collaboration in which
the use of machines can up our chances to get it right. So while the political world is not
clocklike, the use of exacting mathematics can enhance our ability to foretell
where those clouds are apt to go and where those swans might be lurking.
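If you are curious what that sort of "exacting mathematics" might look like, here is a minimal sketch in Python of one common scoring rule, the Brier score, applied to invented probabilistic forecasts; the choice of scoring rule and the numbers are my own illustration, not a claim about Tetlock's particular methods:

```python
# Invented probabilistic forecasts scored with the Brier score, one common way
# of putting a numerical value on forecasting accuracy
# (0.0 is perfect, 1.0 is as wrong as possible).
forecast_probs = [0.9, 0.7, 0.2, 0.6]  # stated probability that each event occurs
outcomes = [1, 0, 0, 1]                # what actually happened (1 = it occurred)

def brier_score(probs, actuals):
    """Mean squared difference between forecast probability and outcome."""
    return sum((p - a) ** 2 for p, a in zip(probs, actuals)) / len(probs)

print(f"Brier score: {brier_score(forecast_probs, outcomes):.3f}")  # lower is better
```

Keeping score this way over many forecasts is the kind of record that makes reviewing one's false-positive and false-negative mistakes possible in the first place.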
Of course, the regular viewer of Fox News or MSNBC is not likely to shun the more flamboyant pundit in favor of the drier math "nerd" for a view of the future. I must say, I have seen the not-so-nerdy Nate Silver on MSNBC. He is one of the more mathematically based predictors, has been quite successful at predicting elections, and does, apparently, rely on the type of analyses indicated above. But of course, elections, with their counting of votes and responses to survey questionnaires, lend themselves readily to such algorithms.
But how about punditry that applies such rigor to foreign relations or
welfare policies? Can we stand watching
and listening to this type of predicting, where nothing is "guaranteed"? If we had measurable stakes, such as predicting the stock market and other financial developments with such accuracy, perhaps our appetite for such forecasting would be greater. Oh, I forgot, we do have a personal stake, but alas, those stakes are not so quantifiable.
[1] Tetlock, P. (2013). How to win at forecasting. In J. Brockman (Ed.), Thinking: The new science of decision-making, problem-solving, and prediction (pp. 18-38). New York, NY: Harper Perennial.
[2] The exact quote by Secretary Donald Rumsfeld is: "There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. There are things we don't know we don't know."
[3] For this limited point about health insurance, see Huettel, S. (2014). Behavioral economics: When psychology and economics collide [transcript]. Chantilly, VA: The Great Courses/The Teaching Company.