Wharton’s Philip Tetlock discusses his book, ‘Expert Political Judgment: How Good Is It? How Can We Know?’
Tune into any cable news program and chances are you’ll see pundits offering their insights and forecasting political outcomes. And oftentimes, they get things incredibly wrong: The 2016 presidential campaign, for example, was filled with forecasters who predicted a solid Hillary Clinton win.
Philip Tetlock, a social psychologist and Wharton management professor, looks at these kinds of failures in his book, Expert Political Judgment: How Good Is It? How Can We Know? (The book was first published in 2005, but Tetlock updated it for republication this year.) He also co-wrote Superforecasting: The Art and Science of Prediction, which was released in 2015. Tetlock, who is also a Penn Integrates Knowledge professor at the University of Pennsylvania, spoke with the Knowledge@Wharton radio show, which airs on SiriusXM channel 111, about the widening chasm between science-based political forecasting and the snappy sound bites that are just right for television but often miss the mark.
An edited transcript of the conversation follows.
Knowledge@Wharton: Let’s start with the idea to update your book, Expert Political Judgment.
Philip Tetlock: Princeton University Press was pretty happy with how well the book did in the first place. Then there was the superforecasting project that was sponsored by the U.S. intelligence community. That put the old research results from Expert Political Judgment in a new light. The superforecasting data suggested that it’s possible to spot talent and cultivate talent in ways that previous work hadn’t discovered. New discoveries, new book.
Knowledge@Wharton: Take us back to the original book, which was published in 2005. You talked about the war in Iraq and the political climate between Al Gore and George W. Bush in the 2000 campaign. What did you find?
Tetlock: The major finding in Expert Political Judgment was that political experts are pretty seriously overconfident. They think they know a lot of things that they don’t really know. On average, when they’re 90% confident that something’s going to happen, those things don’t happen 90% of the time. They happen more like 70% of the time.
Knowledge@Wharton: When the book was published, it received more attention than you expected, correct?
Tetlock: That’s true. Expert Political Judgment was published by a university press, whereas the Superforecasting book is a popular press book. It was surprising that 10, 12 years later, Expert Political Judgment was being cited in the middle of the debate over Brexit.
Knowledge@Wharton: We’re now coming off an election cycle that drew unbelievable attention from experts trying to predict who was going to win the presidential race.
Tetlock: In the 2016 election, political experts didn’t do very well. The very best political experts were assigning a probability of about 70% to a Hillary Clinton victory. The most overconfident were assigning probabilities toward 95% or 98%.
Knowledge@Wharton: Is there such a thing as expert political judgment, especially when there are so many factors that can’t be accounted for?
Tetlock: There is. Political experts are really good at doing certain things. They’re very good at posing insightful questions, imagining possible futures and sketching options. But they’re not very good at forecasting. They often act as if they are, and that creates misleading impressions.
Knowledge@Wharton: Is the attractiveness of forecasting the reason it seems to be happening so much now? Is it a key component of drawing ratings?
Tetlock: Yes. The media do push the process. If you’re an expert and want to get a lot of media attention, the right strategy is not to be diffident. The right strategy is to be strident, to claim to know more than you do. If you’re running a show, you’ve got a choice between two experts. One of them is going to tell you a story about why he believes there’s going to be a fundamentalist coup in Saudi Arabia in the next couple of years and makes the claim with great confidence. The other expert says, “On the one hand, there are these sources of instability in Saudi Arabia. On the other hand, there are really powerful equilibrium forces at work. So, it’s very unclear there’s going to be any major change. The best bet is probably continuation of the status quo, low probability of change.” Who are you going to pick? Are you going to pick the guy who’s telling you about the fundamentalist coup or the much more nuanced and intricately complex account? I think the question more or less answers itself.
Knowledge@Wharton: Some of this research and understanding about prediction and forecasting came from tournaments that were done years ago, which you were involved with, correct?
Tetlock: That’s true. The very first forecasting tournaments that we ran go all the way back to the mid-1980s, when the Soviet Union existed. When we did the very first work, Mikhail Gorbachev wasn’t even general secretary of the Communist Party of the Soviet Union. But by the time we finished that set of tournaments, he was. Later, he was no longer general secretary, and he was balancing the household budget by doing commercials for Pizza Hut.
Knowledge@Wharton: How often do those predictions end up coming out right?
Tetlock: We don’t measure predictions as yes or no. We measure predictions along a probability scale. Being right or wrong is a matter of degree. If you’re better at assigning higher probabilities to things that happen and lower probabilities to things that don’t happen, you’re going to get a better accuracy score. If you’re really seriously overconfident, and you sometimes say 90% and things don’t happen, or 10% and things do happen, you’re going to get a terrible accuracy score.
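The accuracy scoring Tetlock describes is typically done with a Brier score: the mean squared distance between the probabilities a forecaster assigned and what actually happened, where lower is better. A minimal sketch (the forecaster profiles here are illustrative, not data from the book):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between assigned probabilities (0-1) and
    binary outcomes (1 = happened, 0 = didn't). Lower is better;
    0.0 is perfect, and confident misses are punished heavily."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Three genuinely uncertain events; the middle one doesn't happen.
outcomes = [1, 0, 1]

# An overconfident pundit: says 90% and is wrong on event two.
overconfident = brier_score([0.9, 0.9, 0.1], outcomes)   # ~0.543

# A hedging, well-calibrated forecaster on the same events.
calibrated = brier_score([0.6, 0.6, 0.4], outcomes)      # ~0.293
```

The hedger wins on these uncertain events: saying 90% about things that fail to happen costs far more in squared error than the modest gain from saying 90% about things that do.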
Knowledge@Wharton: You wrote in the preface of your book, “Beware of sweeping generalizations.” Can you take us deeper into that idea?
Tetlock: This ties back to the preference the media have for a certain type of pundit — pundits who project a lot of confidence, who know how wonderful or disastrous the Trump presidency is going to be. They know what’s going to happen with Brexit, how good or bad it’s going to be. The pundits who project a lot of confidence and can construct a really powerful narrative behind that, those are the people the media flock to. And it turns out that those people, on average, are not nearly as good forecasters as the more intellectually honest, boring pundits who go, “on the one hand, on the other hand.” Harry Truman famously said he wanted a one-armed economist. Well, most people in the media want a one-armed economist, too. Or one-armed pundits. “On the one hand, on the other hand,” doesn’t go over very well in the modern media world.
Knowledge@Wharton: In the book, you describe some of the people doing this as hedgehogs. Explain what you mean.
Tetlock: The animal terms are the foxes versus the hedgehogs. It dates back 2,500 years to an ancient Greek epigram from the warrior poet, Archilochus. He wrote, “The fox knows many things, but the hedgehog knows one big thing.” So, the foxes are the more eclectic, nimble types who are wary of big ideas, and the hedgehogs embrace big ideas. There are many different ideological types of hedgehogs. You could be a libertarian hedgehog, a believer in pure free markets. You could be a socialist hedgehog. You could be an environmental doomster hedgehog who believes that we’re on the edge of apocalypse. Or you could be a boomster hedgehog. There are many types of hedgehogs. But there’s this one big similarity they have: They’re animated by one big idea. They have an infectious confidence and enthusiasm for that big idea, and that enthusiasm gets communicated in their speeches and writings. And that makes them very mediagenic.
Knowledge@Wharton: Do we have more hedgehogs or foxes today?
Tetlock: I don’t have the data that would allow us to say that. But I do have the sense that the world has been tipping somewhat in a hedgehog-ish direction. The rapidity of communication. Attention spans seem to be narrowing. Information load seems to be increasing. Tolerance for complexity seems to be decreasing. All that works to the advantage of hedgehog pundits.
Knowledge@Wharton: In this digital society, attention spans are not what they were several years ago. People want to know more about more things than ever before.
Tetlock: That’s right, and you do pay a price for having a short attention span. You’re going to be a less discriminating consumer in the marketplace of ideas. You’re going to buy more shoddy products, and you’re going to get worse forecasts.
Knowledge@Wharton: Looking back at when you published this book the first time, it drew a lot of attention. It’s being referenced again with Brexit.
Tetlock: British politicians who favored withdrawal from the European Union, like Michael Gove, were very frustrated that so many experts were in the other camp. Gove famously said that Britain has had enough of experts. He brought up the findings in Expert Political Judgment as the basis for doubting whether or not the experts knew what they were talking about.
Knowledge@Wharton: How much updating is there in doing a book like this?
Tetlock: Quite a bit, because so many things happened over the last 12 years. The really big thing that happened was the U.S. intelligence community paid attention. … They had suffered some major analytic setbacks, and they were doing some serious organizational introspection. They wanted to figure out ways of doing things better. An innovative research branch within the intelligence community, within the Office of the Director of National Intelligence, known as Intelligence Advanced Research Projects Activity (IARPA), decided in 2011 to launch a series of forecasting tournaments much larger than anything I’d ever done. My research group participated in those forecasting tournaments along with some others.
Knowledge@Wharton: What was the impact of those tournaments?
Tetlock: That’s to be determined. It’s evolving. The intelligence community is continually evolving. But I think there is growing interest in keeping score.
Knowledge@Wharton: How does analytics play into national security these days?
Tetlock: I’m a big believer in data-driven analytics, so I’m very supportive of that. The very best forecasters are also very much on board with that. One of the more common errors in analysis is that people are too slow to change their minds. They stick too long with their preconceptions, and they don’t change their minds in a timely way. I’ve never seen things quite as rigidly polarized as they are today.
Knowledge@Wharton: Will the importance of scientific forecasting continue to grow?
Tetlock: It is growing. There’s a new generation of forecasting tournaments, and they’re focusing on the combination of human judgment and artificial intelligence because it’s increasingly possible to use artificial intelligence to augment political forecasting. Some very simple machine models can often outperform humans. At a really elementary level, people over-predict change. People expect more change in the short term than there is, and they don’t expect as much change as there will be in the long term. You can design algorithms that correct for that.
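One way such a correction can work — and this is a hypothetical sketch, not a model from the book or the tournaments — is to shrink a human forecast of change toward a low status-quo base rate when the question horizon is short, since people systematically over-predict near-term change. All the parameter values below are illustrative assumptions:

```python
def adjust_change_forecast(p_change, horizon_days, base_rate=0.1,
                           short_horizon=90, shrink_weight=0.5):
    """Hypothetical debiasing rule: for short-horizon questions, pull the
    human's probability of change toward a low status-quo base rate
    (people over-predict near-term change); leave longer-horizon
    forecasts untouched. Parameters are assumptions for illustration."""
    if horizon_days <= short_horizon:
        # Weighted average of the human forecast and the base rate.
        return shrink_weight * base_rate + (1 - shrink_weight) * p_change
    return p_change

# A human says 60% chance of regime change within a month; the
# rule shrinks that toward the 10% base rate, yielding 35%.
adjusted = adjust_change_forecast(0.6, horizon_days=30)
```

The design choice is the same one behind simple statistical models beating intuitive judgment: a crude, consistent correction toward the base rate removes a known directional bias even though it ignores case-specific detail.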
There’s a big debate now about how far these artificial intelligence approaches can be taken. There’s a scenario that people talk about called the “Fourth Industrial Revolution” that will be driven by strong forms of artificial intelligence that displace jobs we previously thought only human beings could do. Intelligence analysis is one of those jobs. But there are lots of other jobs throughout, including professorships.
Vladimir Putin recently said, “Whoever dominates artificial intelligence this century will dominate the world.” So, you have a prediction from Vladimir Putin that we can monitor.
Knowledge@Wharton: In some ways, it felt as though the presidential campaign was so emotionally driven that it was keeping us from slowing down and thinking carefully about the data. There was this strategy to throw so much emotional content at us that we were exhausted. What are your thoughts on that?
Tetlock: What works in political persuasion doesn’t work very well in political forecasting, and emotion-laden appeals work well in political persuasion. I think we’ve seen that over and over again. Political forecasting is just about thinking slower, not faster. Emotions tell us a lot about what people want to be true. Whether they tell us what’s going to be true or not is another matter.
Article by Knowledge@Wharton