Michael Mauboussin: Sharpening Your Forecasting Skills

Michael Mauboussin is the author of The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing (Harvard Business Review Press, 2012), Think Twice: Harnessing the Power of Counterintuition (Harvard Business Press, 2009), and More Than You Know: Finding Financial Wisdom in Unconventional Places, Updated and Expanded (Columbia Business School Publishing, 2008). More Than You Know was named one of “The 100 Best Business Books of All Time” by 800-CEO-READ, one of the best business books of 2006 by BusinessWeek, and the best economics book of 2006 by Strategy+Business. He is also co-author, with Alfred Rappaport, of Expectations Investing: Reading Stock Prices for Better Returns (Harvard Business School Press, 2001).

Visit his site at: michaelmauboussin.com/

“Beliefs are hypotheses to be tested, not treasures to be protected.” – Philip E. Tetlock and Dan Gardner

  • Philip Tetlock’s study of hundreds of experts making thousands of predictions over two decades found that the average prediction was “little better than guessing.” That’s the bad news.
  • Tetlock, along with his colleagues, participated in a forecasting tournament sponsored by the U.S. intelligence community. That work identified “superforecasters,” people who consistently make superior predictions. That’s the good news.
  • What distinguishes superforecasters is how they think. They are actively open-minded, intellectually humble, numerate, thoughtful updaters, and hard working.
  • Superforecasters achieve better results when they are part of a team. But since there are pros and cons to working in teams, training is essential.
  • Instruction in methods to reduce bias in forecasts improves outcomes. There must be a close link between training and implementation.
  • The best leaders recognize that proper, even bold, action requires good thinking.

Introduction: The Bad News and the Good News

What if you had the opportunity to learn how to improve the quality of your forecasts, measured as the distance between forecasts and outcomes, by 60 percent? Interested? Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner is a book that shows how a small number of “superforecasters” achieved that level of skill. If you are in the forecasting business, which is likely if you’re reading this, you should take a moment to buy it now. You’ll find that it’s a rare book that is both grounded in science and highly practical.
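
To make the phrase “distance between forecasts and outcomes” concrete, the sketch below scores probability forecasts with a Brier-style squared-error rule, the kind of scoring rule used in forecasting tournaments. The probabilities and outcomes are invented for illustration; the point is simply what a roughly 60 percent improvement over 50/50 guessing looks like under that assumption.

```python
# Illustrative sketch: scoring probability forecasts with a Brier-style
# squared-error rule (one way to measure "distance between forecast and outcome").
# The probabilities and outcomes below are invented for the example.

def brier_score(forecasts, outcomes):
    """Mean squared distance between probability forecasts and 0/1 outcomes.
    0.0 is a perfect score; constant 50/50 guessing earns 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.9, 0.2, 0.7, 0.6, 0.1]   # hypothetical probabilities assigned to five events
outcomes  = [1,   0,   1,   0,   0]     # 1 = the event happened, 0 = it did not

skill = brier_score(forecasts, outcomes)
guessing = brier_score([0.5] * len(outcomes), outcomes)   # dart-throwing baseline

improvement = 1 - skill / guessing
print(f"forecaster: {skill:.3f}  guessing: {guessing:.3f}  improvement: {improvement:.0%}")
# -> forecaster: 0.102  guessing: 0.250  improvement: 59%
```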

Phil Tetlock is a professor of psychology and political science at the University of Pennsylvania who has spent decades studying the predictions of experts. Specifically, he enticed 284 experts to make more than 27,000 predictions on political, social, and economic outcomes over a 21-year span ending in 2004. The period included six presidential elections and three wars. These forecasters had crack credentials, including more than a dozen years of relevant work experience and lots of advanced degrees—nearly all had postgraduate training and half had PhDs.

Tetlock then did something very unusual. He kept track of their predictions. The results, summarized in his book Expert Political Judgment, were not encouraging. The predictions of the average expert were “little better than guessing,” which is a polite way to say that “they were roughly as accurate as a dart-throwing chimpanzee.” When confronted with the evidence of their futility, the experts did what the rest of us do: they put up their psychological defense shields. They noted that they almost called it right, or that their prediction carried so much weight that it affected the outcome, or that they were correct about the prediction but simply off on timing. Overall, Tetlock’s results provide lethal ammunition for those who debunk the value of experts.

Below the headline of expert ineffectiveness were some more subtle findings. One was an inverse correlation between fame and accuracy. While famous experts had among the worst records of prediction, they demonstrated “skill at telling a compelling story.” To gain fame it helps to tell “tight, simple, clear stories that grab and hold audiences.” These pundits are often wrong but never in doubt.

Another result, which is related to the first, was that what mattered in the quality of predictions was less what the expert thought and more how he or she thought. Tetlock categorized his experts as foxes or hedgehogs based on a famous essay on thinking styles by the philosopher Isaiah Berlin. Foxes know a little about a lot of things, and hedgehogs know one big thing. Foxes did better than the dart-throwing chimp, and hedgehogs did worse.

It’s not hard to see the link between these findings. Most topics of interest in the economic, social, and political realms defy tight, simple, and clear stories. But imagine you are the producer of a television show that covers politics. Who do you want to put on the air, the equivocal guest who constantly says “on the other hand,” or the one who confidently tells a crisp and controversial story? It’s not a hard decision, which is why many hedgehogs are both famous and poor predictors.

While the conclusions of Expert Political Judgment were nuanced, they were on balance bad news for pundits. Despite how some read his results, Tetlock never believed in the extreme point of view that forecasts are useless. That foxes were better forecasters than the average of all experts provided a strong clue that foresight might be a real skill that could be identified and cultivated. Tetlock describes himself as an “optimistic skeptic.”

Expert Political Judgment is excellent scholarly research but is written in, well, scholarly prose. In Superforecasting, Tetlock collaborates with Dan Gardner, a journalist and author of a book about the failure of prediction. The result is great research that is easy to read.

Naturally, Tetlock is not the only one interested in learning how to make effective forecasts. The United States intelligence community was also keen to improve the quality of predictions, especially in the wake of the failure to anticipate the terrorist attacks of September 11, 2001 and the overestimation of the probability that Iraq possessed weapons of mass destruction in 2003. An agency within the community, the Intelligence Advanced Research Projects Activity (IARPA), was established to pursue high-risk research into how to improve American intelligence. IARPA decided to create a forecasting tournament to see if there might be a way to sharpen forecasts.

Tetlock and some colleagues launched the Good Judgment Project (GJP), one of five scientific teams that would compete to answer questions accurately. The teams could use whatever approaches they wanted to generate the best possible answers. Starting in September 2011, IARPA asked nearly 500 questions about various political and economic outcomes. The tournament garnered more than one million individual forecasts in the following four years. It is important to note that the time frames for the questions in the IARPA tournament, generally one month to one year, were shorter than the three to five years that were common in Tetlock’s study of experts.

Now the good news: the GJP results beat the control group by 60 percent in year one. Results in year two were even better, trouncing the control group by almost 80 percent. In fact, the GJP did so well that IARPA dropped the other teams.

Of the 2,800 GJP volunteers in the first year of the tournament, the top 2 percent were called “superforecasters.” To give you some sense of their acuity, the superforecasters performed about 30 percent better than the average for the intelligence community—people who had access to classified data—according to an editor at the Washington Post.

Encouraged by the GJP’s results, Tetlock came to a couple of conclusions. The first is that foresight is a real and measurable skill. One test of skill is persistence. High persistence means that you do consistently well over time and are not a one-hit wonder. About 70 percent of superforecasters remain in those elite ranks from one year to the next, vastly more than what chance would dictate.
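
To see how far 70 percent is from chance: with roughly 2,800 forecasters and a top-2-percent cutoff (the figures cited above), luck alone would return only about 2 percent of one year’s superforecasters to the group the next year. The simulation below is a rough sketch under that assumption, not the GJP’s actual methodology.

```python
# Rough sketch: if year-to-year performance were pure luck, how often would a
# "superforecaster" (top 2 percent of ~2,800 volunteers, per the figures above)
# requalify the following year? An illustration only, not the GJP methodology.
import random

N = 2_800            # forecasters in the pool
TOP = int(N * 0.02)  # size of the top-2-percent group

def luck_only_top(rng):
    """A random top group: what the cut would look like if scores were pure noise."""
    return set(rng.sample(range(N), TOP))

rng = random.Random(0)
repeat_rates = []
for _ in range(500):
    year1, year2 = luck_only_top(rng), luck_only_top(rng)
    repeat_rates.append(len(year1 & year2) / TOP)

print(f"chance repeat rate: {sum(repeat_rates) / len(repeat_rates):.1%} vs. observed ~70%")
# -> roughly 2 percent, versus the ~70 percent persistence Tetlock observed
```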

The second is that foresight “is the product of particular ways of thinking, of gathering information, of updating beliefs.” Importantly, the essential ingredients of being a superforecaster can be learned and cultivated. The beauty of the GJP is that it was carried out with scientific rigor, which allowed the researchers to distill the elements of success. We explore these elements in this report.

Even though most people can improve their thinking and forecasting, there has always been resistance to change based on what Tetlock and Gardner call “illusions of knowledge.” Intuition is one example. Intuition is a form of pattern recognition that works in settings with lots of “valid cues.” But intuition is notoriously unreliable in unstable or nonlinear environments. An overreliance on intuition leads to poor decisions.

Another case is insufficient self-reflection. This is in part prompted by a module in our brain that seeks to rapidly close cause-and-effect loops. We show you an outcome and your mind quickly comes up with an explanation for it. As Tetlock and Gardner write, “we move too fast from confusion and uncertainty to a clear and confident conclusion without spending any time in between.” This is related to the concept that Daniel Kahneman, an eminent psychologist, calls thinking fast.
