Superforecasters – Can Accurate Forecasting Be Learned?

March 8, 2016

by Michael Edesess


Forecasting the financial markets is incredibly difficult, despite what the pundits on CNBC would have us believe. Indeed, Philip Tetlock has documented the overwhelming futility of such efforts in his research. But what if some rare individuals are truly prescient forecasters? What can we learn from how those people think? That is the subject of Tetlock’s newest book.

Almost five years ago, I wrote an article titled “No More Stupid Forecasts!” It was about the work of Tetlock, a professor of management and psychology at Wharton. For more than 20 years, Tetlock studied the predictions of experts, collecting 27,450 well-defined predictions about “clear questions whose answers can later be shown to be indisputably true or false.” The result was unsurprising: for the most part, the experts performed no better than a dart-throwing chimpanzee.

So why has Tetlock written a new book (co-authored with Dan Gardner) about “superforecasters,” and how people can train to be superforecasters? Has he discarded his earlier skepticism in the interest of writing a “Freakonomics”-type bestseller?

Superforecasters – Are some forecasters better than others?

In his earlier book, Tetlock did note that some forecasters tended to do better than others. He characterized the good and bad forecasters, respectively, as “foxes” and “hedgehogs.”

The parable of the fox and the hedgehog comes from the Greek poet Archilochus, who said, “The fox knows many things, but the hedgehog knows one big thing.” The philosopher Isaiah Berlin later turned this line into a famous essay, “The Hedgehog and the Fox.”

Tetlock found that hedgehogs, who were certain about the one big thing they believed they knew, were worse forecasters – even (and, in fact, especially) about that one big thing – than foxes, who knew many little things but were uncertain about what they knew. The forecasters who were wracked with uncertainty did better at forecasting than those who were not in the least wracked with uncertainty.

When I taught statistics years ago in the business school of the University of Illinois at Chicago, I told the students that whenever they saw a statistic quoted, they should ask themselves, “How can they know that?” In this case it is worth asking, “How can Tetlock know that?” How does he know that foxes are better forecasters than hedgehogs?

In his book Superforecasting, he explains how he knows. When forecasters make a forecast, they are asked to assign it a probability. Consider, for example, weather forecasts, which come with a probability: “50% chance of rain.” If a forecaster repeatedly predicts a 50% chance of rain, and in fact it rains 50% of the time, then, Tetlock and Gardner explain, that forecaster gets a perfect score on calibration.

However, this forecaster does not get a very good score on a second measure, which Tetlock and Gardner call resolution. That’s because the forecaster was “playing it safe,” taking a non-committal 50/50, middle-of-the-road stance every time. A forecaster who predicted a 100% chance of rain, and was right every time, would get a perfect score on resolution as well.
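To make the calibration/resolution distinction concrete, here is a toy sketch (my own illustration; the book gives no formulas or code) that scores the two hypothetical weather forecasters using the standard Murphy decomposition, which splits the Brier score into reliability (the calibration error), resolution and uncertainty:

```python
# Toy illustration (not from the book): scoring two weather forecasters
# with the Murphy decomposition of the Brier score:
#   BS = reliability - resolution + uncertainty
import numpy as np

def murphy_decomposition(forecasts, outcomes):
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    n, base_rate = len(outcomes), outcomes.mean()
    reliability = resolution = 0.0
    for f in np.unique(forecasts):
        group = forecasts == f
        obs = outcomes[group].mean()  # how often it rained at this forecast level
        reliability += group.sum() / n * (f - obs) ** 2         # 0 = perfectly calibrated
        resolution += group.sum() / n * (obs - base_rate) ** 2  # higher = more informative
    uncertainty = base_rate * (1 - base_rate)
    return reliability, resolution, uncertainty

rain = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # it rains on half the days

# "Play it safe" forecaster: always says 50%. Perfectly calibrated, zero resolution.
print(murphy_decomposition(np.full(8, 0.5), rain))     # (0.0, 0.0, 0.25)

# Committed forecaster: says 100% or 0% and is always right. Calibrated AND resolved.
print(murphy_decomposition(rain.astype(float), rain))  # (0.0, 0.25, 0.25)
```

Both forecasters score zero on reliability (perfect calibration), but only the committed forecaster earns any resolution, because only his forecasts separate rainy days from dry ones.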

Calibration and resolution combine into a single overall measure, the Brier score, developed by Glenn W. Brier in 1950. Predictions of 100% chances that are correct 100% of the time yield a better (lower) Brier score than mere predictions of 50% chances that turn out to be correct 50% of the time. However, a perusal of the definitions of the Brier score on the Internet turns up different versions. Unfortunately, Tetlock and Gardner’s book does not specify which version of the Brier score they are using.
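To illustrate the point about differing versions, here is a short sketch (again mine, not from the book) of the two conventions one most often encounters:

```python
# Two common conventions for the Brier score (the book does not say which
# it uses). Forecasts are probabilities of rain; outcomes are 1 (rain) or 0.
import numpy as np

def brier_mse(forecasts, outcomes):
    """Widely used modern form: mean squared error, from 0 (best) to 1 (worst)."""
    f, o = np.asarray(forecasts, float), np.asarray(outcomes, float)
    return np.mean((f - o) ** 2)

def brier_original(forecasts, outcomes):
    """Brier's 1950 form sums the squared error over both categories
    (rain and no rain), so for binary events it runs from 0 to 2 and is
    exactly twice the mean-squared-error form."""
    f, o = np.asarray(forecasts, float), np.asarray(outcomes, float)
    return np.mean((f - o) ** 2 + ((1 - f) - (1 - o)) ** 2)

rain = np.array([1, 0, 1, 0, 1, 0, 1, 0])
hedged = np.full(8, 0.5)        # always says 50%
committed = rain.astype(float)  # says 100% or 0%, always right

print(brier_mse(hedged, rain), brier_original(hedged, rain))        # 0.25 0.5
print(brier_mse(committed, rain), brier_original(committed, rain))  # 0.0 0.0
```

For binary events these two versions differ only by a constant factor of two, so at least between them the ordering of forecasters is unchanged, even though the absolute scores are not.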

So we have to consider the possibility that the results are an artifact of the scoring technique. Nevertheless, let’s proceed on Tetlock’s implicit assumption that the better a forecaster’s Brier score, the better a forecaster she is. And let’s take it for granted that Tetlock was able to distinguish foxes from hedgehogs.

If one type of forecaster is better than another, then there must be a systematic way to be a better forecaster. Tetlock’s challenge in this project (which was sponsored by the U.S. intelligence research agency IARPA, the Intelligence Advanced Research Projects Activity) was to see if he could find out how the better forecasters did it: whether better forecasting could be learned and whether people could be trained to do it – indeed, to see whether some people were, or could learn to become, “superforecasters.”


Superforecasting: The Art and Science of Prediction by Philip E. Tetlock and Dan Gardner
