Michael Mauboussin is the author of The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing (Harvard Business Review Press, 2012), Think Twice: Harnessing the Power of Counterintuition (Harvard Business Press, 2009), and More Than You Know: Finding Financial Wisdom in Unconventional Places, Updated and Expanded (Columbia Business School Publishing, 2008). More Than You Know was named one of “The 100 Best Business Books of All Time” by 800-CEO-READ, one of the best business books of 2006 by BusinessWeek, and the best economics book of 2006 by Strategy+Business. He is also co-author, with Alfred Rappaport, of Expectations Investing: Reading Stock Prices for Better Returns (Harvard Business School Press, 2001).

Visit his site at: michaelmauboussin.com/

Michael Mauboussin: Sharpening Your Forecasting Skills

“Beliefs are hypotheses to be tested, not treasures to be protected.” – Philip E. Tetlock and Dan Gardner

  • Philip Tetlock’s study of hundreds of experts making thousands of predictions over two decades found that the average prediction was “little better than guessing.” That’s the bad news.
  • Tetlock, along with his colleagues, participated in a forecasting tournament sponsored by the U.S. intelligence community. That work identified “superforecasters,” people who consistently make superior predictions. That’s the good news.
  • The key to superforecasters is how they think. They are actively open-minded, intellectually humble, numerate, thoughtful updaters, and hardworking.
  • Superforecasters achieve better results when they are part of a team. But since there are pros and cons to working in teams, training is essential.
  • Instruction in methods to reduce bias in forecasts improves outcomes. There must be a close link between training and implementation.
  • The best leaders recognize that proper, even bold, action requires good thinking.
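The bullets above describe superforecasters as “thoughtful updaters.” The book does not reduce updating to a formula in this summary, but the standard model for revising a probability in light of new evidence is Bayes’ rule. A minimal sketch in Python (the function name and the numbers in the example are illustrative, not from the source):

```python
def update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """One round of Bayes' rule: revise the probability of a
    hypothesis H after observing a piece of evidence.

    prior                  -- P(H) before seeing the evidence
    p_evidence_given_h     -- P(evidence | H)
    p_evidence_given_not_h -- P(evidence | not H)
    Returns P(H | evidence).
    """
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Illustrative only: a forecaster who starts at 40% and sees evidence
# 3.5x more likely under H (0.7 vs. 0.2) should move up to 70%.
revised = update(prior=0.4, p_evidence_given_h=0.7, p_evidence_given_not_h=0.2)
```

The point of the sketch is the habit, not the arithmetic: good updaters move their estimates by an amount proportional to the diagnosticity of the news, rather than ignoring it or overreacting to it.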

Introduction: The Bad News and the Good News

What if you had the opportunity to improve the quality of your forecasts, measured as the distance between forecasts and outcomes, by 60 percent? Interested? Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner shows how a small number of “superforecasters” achieved that level of skill. If you are in the forecasting business, which is likely if you are reading this, you should take a moment to buy it now. You’ll find that it is a rare book that is both grounded in science and highly practical.
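The “distance between forecasts and outcomes” in Tetlock’s tournaments is measured with Brier scores: the mean squared difference between a probabilistic forecast and what actually happened. A minimal sketch for binary (yes/no) questions, where the function name is mine and the 0.25 benchmark assumes questions with balanced outcomes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared distance between probabilistic forecasts
    (each between 0 and 1) and binary outcomes (0 or 1).

    Lower is better: 0.0 is perfect foresight, and a constant
    50% guess scores 0.25 -- the "dart-throwing chimpanzee"
    baseline on balanced questions.
    """
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Illustrative only: a confident, well-calibrated forecaster
# (80% on an event that happened, 30% on one that didn't)
# beats the 0.25 coin-flip baseline.
score = brier_score([0.8, 0.3], [1, 0])
```

Framing accuracy this way is what makes a “60 percent improvement” a meaningful claim: it is a reduction in average squared error relative to a benchmark, not a count of right and wrong calls.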

Phil Tetlock is a professor of psychology and political science at the University of Pennsylvania who has spent decades studying the predictions of experts. Specifically, he enticed 284 experts to make more than 27,000 predictions on political, social, and economic outcomes over a 21-year span that ended in 2004. The period included six presidential elections and three wars. These forecasters had crack credentials, including more than a dozen years of relevant work experience and plenty of advanced degrees: nearly all had postgraduate training and half had PhDs.

Tetlock then did something very unusual: he kept track of their predictions. The results, summarized in his book Expert Political Judgment, were not encouraging. The predictions of the average expert were “little better than guessing,” which is a polite way of saying that “they were roughly as accurate as a dart-throwing chimpanzee.” When confronted with the evidence of their futility, the experts did what the rest of us do: they put up their psychological defense shields. They noted that they had almost called it right, or that their prediction carried so much weight that it affected the outcome, or that they were correct about the prediction but simply off on timing. Overall, Tetlock’s results provide lethal ammunition for those who debunk the value of experts.

Below the headline of expert ineffectiveness were some more subtle findings. One was an inverse correlation between fame and accuracy. While famous experts had among the worst records of prediction, they demonstrated “skill at telling a compelling story.” To gain fame it helps to tell “tight, simple, clear stories that grab and hold audiences.” These pundits are often wrong but never in doubt.

Another result, which is related to the first, was that what mattered in the quality of predictions was less what the expert thought and more how he or she thought. Tetlock categorized his experts as foxes or hedgehogs based on a famous essay on thinking styles by the philosopher Isaiah Berlin. Foxes know a little about a lot of things, and hedgehogs know one big thing. Foxes did better than the dart-throwing chimp, and hedgehogs did worse.

It’s not hard to see the link between these findings. Most topics of interest in the economic, social, and political realms defy tight, simple, and clear stories. But imagine you are the producer of a television show that covers politics. Who do you want to put on the air, the equivocal guest who constantly says “on the other hand,” or the one who confidently tells a crisp and controversial story? It’s not a hard decision, which is why many hedgehogs are both famous and poor predictors.

While the conclusions of Expert Political Judgment were nuanced, they were on balance bad news for pundits. Despite how some read his results, Tetlock never believed in the extreme point of view that forecasts are useless. That foxes were better forecasters than the average of all experts provided a strong clue that foresight might be a real skill that could be identified and cultivated. Tetlock marked himself as an “optimistic skeptic.”

Michael Mauboussin Forecasting Skills

Expert Political Judgment is excellent scholarly research but is written in, well, scholarly prose. In Superforecasting, Tetlock collaborates with Dan Gardner, a journalist and author of a book about the failure of prediction. The result is great research that is easy to read.

Naturally, Tetlock is not the only one interested in learning how to make effective forecasts. The United States intelligence community was also keen to improve the quality of predictions, especially in the wake of the failure to anticipate the terrorist attacks of September 11, 2001 and the overestimation of the probability that weapons of mass destruction existed in Iraq in 2003. An agency within the community, the Intelligence Advanced Research Projects Activity (IARPA), was created to pursue high-risk research into how to improve American intelligence. IARPA decided to create a forecasting tournament to see if there might be a way to sharpen forecasts.

Tetlock and some colleagues launched the Good Judgment Project (GJP), one of five scientific teams that would compete to answer questions accurately. The teams could use whatever approaches they wanted to generate the best possible answers. Starting in September 2011, IARPA asked nearly 500 questions about various political and economic outcomes. The tournament garnered more than one million individual forecasts in the following four years. It is important to note that the time frames for the questions in the IARPA tournament, generally one month to one year, were shorter than the three to five years that were common in Tetlock’s study of experts.

Now the good news: the GJP results beat the control group by 60 percent in year one. Results in year two were even better, trouncing the control group by almost 80 percent. In fact, the GJP did so well that IARPA dropped the other teams.

Of the 2,800 GJP volunteers in the first year
