Prediction Follies
In fields such as politics and investing, many people are
in the business of making predictions, and, interestingly, their
reputations do not seem to depend on how right (or wrong) they are.
In the June 2011 Freakonomics podcast
The Folly of Prediction,
Stephen Dubner
summarizes one aspect of the problem nicely:
So, most predictions we remember are ones which were fabulously,
wildly unexpected and then came true. Now, the person who makes
that prediction has a strong incentive to remind everyone that
they made that crazy prediction which came true. If you look at
all the people, the economists, who talked about the financial crisis
ahead of time, those guys harp on it constantly. "I was right,
I was right, I was right." But if you're wrong, there's no person
on the other side of the transaction who draws any real benefit
from embarrassing you by bringing up the bad prediction over and over.
So there's nobody who has a strong incentive, usually, to go back
and say, "Here's the list of the 118 predictions that were false."
And without any sort of market mechanism or incentive for keeping
the prediction makers honest, there's lots of incentive to go out
and to make these wild predictions.
One participant in that podcast is
Philip Tetlock,
whose book,
Expert Political Judgment: How Good Is It? How Can We Know?,
was reviewed in
The New Yorker
back in December 2005.
A more recent conversation with him (December 2012)
can be found on
Edge.org:
How To Win At Forecasting.
Under the auspices of the
Intelligence Advanced Research Projects Activity (IARPA),
Tetlock is co-leader of
The Good Judgment Project,
one of five teams competing in the
Aggregative Contingent Estimation (ACE) Program,
whose aim is to benchmark the accuracy of predictions
and to discover ways of improving that accuracy
through level playing field forecasting tournaments.
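
Tournaments like these typically grade forecasters with a proper scoring rule; the classic choice for binary questions is the Brier score. As a rough illustration only (a minimal sketch in Python, not the ACE program's actual methodology, with made-up forecasts), here is how such a score could be computed:

    def brier_score(forecast_prob, outcome):
        # Squared error between the forecast probability and the actual
        # outcome (1 = event happened, 0 = it did not). Lower is better:
        # 0.0 is a perfect forecast; always answering 50% scores 0.25.
        return (forecast_prob - outcome) ** 2

    # Hypothetical forecasts: (probability assigned to the event, outcome).
    forecasts = [(0.9, 1), (0.7, 0), (0.2, 0), (0.5, 1)]

    # A forecaster's standing is the mean score across all questions.
    mean_score = sum(brier_score(p, o) for p, o in forecasts) / len(forecasts)
    print("Mean Brier score: %.4f" % mean_score)  # 0.1975 for this sample

Averaging a proper score over many resolved questions is what levels the playing field: vague verbiage earns no credit, and hedging everything at 50% pins a forecaster at 0.25, where anyone with genuine skill can beat them.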
To whet your appetite, here are three excerpts from the above
Edge link.
So, we found three basic things: many pundits were hard-pressed
to do better than chance, were overconfident, and were reluctant
to change their minds in response to new evidence. That combination
doesn't exactly make for a flattering portrait of the punditocracy.
. . .
One of the reactions to my work on expert political judgment was
that it was politically naive; I was assuming that political analysts
were in the business of making accurate predictions, whereas
they're really in a different line of business altogether.
They're in the business of flattering the prejudices of their base
audience and they're in the business of entertaining their base
audience and accuracy is a side constraint. They don't want to be
caught in making an overt mistake so they generally are pretty
skillful in avoiding being caught by using vague verbiage to disguise
their predictions.
. . .
The long and the short of the story is that it's very hard
for professionals and executives to maintain their status
if they can't maintain a certain mystique about their
judgment. If they lose that mystique about their judgment,
that's profoundly threatening. My inner sociologist says
to me that when a good idea comes up against entrenched
interests, the good idea typically fails. But this is
going to be a hard thing to suppress. Level playing field
forecasting tournaments are going to spread. They're going
to proliferate. They're fun. They're informative. They're
useful in both the private and public sector. There's going
to be a movement in that direction. How it all sorts out
is interesting. To what extent is it going to destabilize
the existing pundit hierarchy? To what extent is it going
to destabilize who the big shots are within organizations?