Are You A Superforecaster? What Good Decision-Makers Have In Common


Wharton’s Barbara Mellers and Michael Platt discuss their research on superforecasters.

While nobody can get forecasts right 100% of the time, research shows that certain kinds of people are better at forecasting outcomes than others. Wharton marketing professors Barbara Mellers and Michael Platt, who are also Penn Integrates Knowledge (PIK) professors at the University of Pennsylvania, examine the intersection of marketing, psychology and neuroscience to understand the traits that “superforecasters” share and that can lead to better decision-making. They recently talked about their research and its implications on the Knowledge@Wharton show on Wharton Business Radio on SiriusXM channel 111.


An edited transcript of the conversation follows.

Knowledge@Wharton: You’re doing a lot of research right now into the mindset of people and what goes into the decision-making process.

Michael Platt: Right. In fact, we are going beyond the mindset. We are going into the mind and into the brain. The whole purpose of our research program is to try to understand the process by which people make decisions. If we can understand how that process unfolds, all the myriad factors that go into it, we might be able to shape that decision process and help people make better decisions.

Knowledge@Wharton: But there’s so much forecasting done on a variety of different things. The forecasting that surrounded the presidential election a few months ago went one way, but the result went a different way.

Barbara Mellers: It sure did. I think people look at the forecasts and say, “How did they get it so wrong?” But there are only two ways to get a probabilistic forecast truly wrong: saying zero or one. In between, there’s a whole range of possibilities that don’t necessarily mean you’re wrong. Nate Silver gave some of the most accurate forecasts about whether Donald Trump would win. His estimates were around a 67% chance that Hillary Clinton would win, 33% for Trump.

Now, Trump wins. Is that wrong? No. He’s on the wrong side of maybe. But 33% of the time, if you ran these counterfactual trials in history, Trump would win, according to Nate Silver. So, it’s very tough to say somebody’s making the wrong prediction unless they go way out on the extremes.
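Mellers’s point, that a 33% forecast for an event that happens is not simply “wrong,” is what probability scoring rules such as the Brier score capture. A minimal sketch (the 67/33 numbers come from the conversation above; the code itself is illustrative, not from the research):

```python
def brier_score(probs, outcome_index):
    """Quadratic scoring rule: sum of squared differences between the
    forecast probabilities and the realized outcome (0 = perfect, 2 = worst)."""
    return sum((p - (1.0 if i == outcome_index else 0.0)) ** 2
               for i, p in enumerate(probs))

# Nate Silver's estimate: 67% Clinton, 33% Trump; Trump (index 1) won.
silver = brier_score([0.67, 0.33], outcome_index=1)        # ~0.898
certain_wrong = brier_score([1.0, 0.0], outcome_index=1)   # 2.0, the maximum penalty
coin_flip = brier_score([0.5, 0.5], outcome_index=1)       # 0.5

print(round(silver, 3), certain_wrong, coin_flip)
```

A maximally confident wrong forecast scores 2.0, the worst possible, while the hedged 33% forecast scores about 0.9: on the wrong side of maybe, but far from maximally wrong.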

Knowledge@Wharton: But these are people who have done this for a while, who are smart and probably more right than wrong. The job that they’re doing is a very important one, and they’re doing it properly for the most part, correct?

Mellers: I think they are. Let’s put it like this: The world is an incredibly difficult place to predict. We’ve got to give them that. I don’t think any of us realized how close the Trump election would be or how close Brexit would be. If it were easy, it would have been done before.

Knowledge@Wharton: What goes through the brain when these things are going on?

“The world is an incredibly difficult place to predict.” –Barbara Mellers

Platt: I’d like to return to this question overall. Looking at Trump vs. [Clinton], it seemed as though there should have been some very predictable outcome. But if you looked at what you might call the base rates, and the fact that this country is so evenly divided, in the end people really came home and voted according to their parties. I think that probably explains a lot of it.

When we think about these kinds of collective decisions, those are the most complicated ones that we make. Because we take into account not only what you might think of as the economic and rational impacts on ourselves, but also there are all these social factors that come into play. Emotional factors. I think that was really key during this election, where you’re not even aware of it…. Maybe you privately think you’re going to vote for Trump. Maybe you don’t even know until you go into the voting booth.

Mellers: That’s the “shy Trump voter” hypothesis.

Platt: That is, I think, a big part of it. But we’re speculating based on behavior and on what people say they do, or what they intend to do or how they feel about it. What we can do with neuroscience is maybe uncover the processes that are actually going into that decision. Many things have to come together. But in the end, you can only do one thing or another. Pull this lever or that one.

Knowledge@Wharton: The emotional part is maybe the key component, especially with what we saw a few months ago. Now, we have people who are emotional about the candidate who won, whether he is doing something good or bad. So, anger was a very powerful emotion in this, was it not?

Platt: Absolutely. It’s hard to distinguish many of these emotions. I think people are certainly very worked up. They are very keen to believe in their own side. I think that’s another thing. It’s very difficult to take the other side, to view things through the eyes of another individual. I think that’s something that Barb has worked on as well.

Mellers: When we look at the best forecasters, whom we call superforecasters in the research that we’ve done, they tend to be much more analytical, much more rational. They score higher on measures of actively open-minded thinking. These folks, who also were on the wrong side of maybe in our research when it came to Hillary and Trump, do step back and take an analytical look at it and try to keep emotions out of it. Maybe not out of it, but at least not getting in the way of it.

Knowledge@Wharton: Is it a hard thing to keep emotions out of some of these decisions?

Platt: Frankly, that goes against biology. Emotions evolved for a very important reason. It’s a simple and intuitive notion to think of our emotional self and our rational self as completely separate. In fact, our brains integrate those processes every time you make a decision. Emotions are important. They’re an important part of the forecasting process.

Essentially, you should think about your brain as not just making predictions about the election, but about everything that you do — every single event that might happen in the world. That is, are they more rewarding, more pleasant, more aversive than you might have expected? Social emotions, jealousy, fear, anger, etc. are all going to shape that process of making a prediction or responding to the outcome of a prediction. Emotions help us to learn from those outcomes and, hopefully, make better decisions in the future. That’s sort of my evolutionary psychology/neuroscience view on it. But then again, to the degree that you can potentially be aware of those emotions, you might be able to be a little more rational.

Mellers: They’re signals that we ought to be paying attention to something. That’s essential. We’re learning a lot right now about how to make better forecasts. That’s going to influence all aspects of our life because we’re constantly making predictions about who we want to spend time with, how we want to spend our money, whom we want to vote for. When we can get predictions that are more accurate, even slightly more accurate, and use those in our decisions, I think we’re heading in the right direction.

Knowledge@Wharton: Do you think it’s possible to get to a point where we can have people who are accurate 100% of the time?

Mellers: No. The world’s a complicated place. If that ever happens, we’re all going to be dead, I’m sure.

Knowledge@Wharton: When this was all playing out during the election, I think most people were watching it state by state as it went along. The forecasts didn’t change that much, even as some of the early states went for Trump rather than Clinton. I found it interesting that the predictions weren’t being adapted along the way.

Mellers: We had correlated errors. That’s what we call it when it comes to making predictions. We erred on one, and that had an effect on the next one and so forth. Anecdotally, I’ve heard that Trump told his family on election night, “Get ready for a rough night.” Even he wasn’t expecting a victory.

Platt: That evening was pretty interesting if you were watching the prediction meter, which was pointing toward Hillary. Then sometime between 8 and 9 o’clock, it made this very rapid switch as some data came in. The speed with which that happened was really surprising and shocking. That’s one of the factors that I think impacted a lot of people. That’s what makes it so emotional — the switch. Your prediction was completely wrong.

Mellers: It was a really close election. In some ways, you could say she won. I think people are thinking more along the lines of the popular vote than the Electoral College when they’re asked, “Who do you think is going to win the election?” That turns out to be a better question than, “Who are you going to vote for?” There has been some research at Wharton showing that when you try to predict what other people will do in an election, you could get a more accurate forecast [that way] than if you say, “What are you going to do?”

“It’s very difficult to take the other side, to view things through the eyes of another individual.” –Michael Platt

Knowledge@Wharton: Isn’t that a challenge to be able to gauge that?

Mellers: You live in a world that is clearly biased, but you talk to a lot of people. And if you ask a lot of people who are talking to a lot of other different people, maybe you’ll get a better global estimate of sentiment and thought.

Platt: It’s not directly related to the election, but I think there’s a very interesting connection to some of the work that’s been going on in decision neuroscience. In the last couple of years, there have been half a dozen, perhaps more, studies that have shown that if you take two dozen college students and put them in an MRI machine — you’re scanning their brains and taking snapshots of brain activity — that just by looking at that activity, you can predict market-level behavior in a way that goes well beyond what you can get from asking those same individuals, “What would you buy? What do you like? What do you want?” In fact, there’s a brain signal there that may be more accurate, that’s inaccessible to your own verbal report. You can’t put your finger on it. You can’t state it. But yet, you can aggregate it across a couple dozen individuals. And you can predict how many people will go to see a particular movie in the next six months.

Knowledge@Wharton: One of the areas you’re working on involves social interaction and how we can make better teams. That’s a challenge that a lot of corporations are having right now. They want to have better teams to be more successful and improve their bottom line.

Platt: This is a very nice correspondence between the work that Barb does and the work that we do. We are very interested in the brain processes that allow us to read the cues of other individuals and to connect with them, empathize with them, respond in a way that is synchronized and better functioning, which perhaps allows us to make better decisions.

We are examining this in very minute ways, using a whole suite of techniques — everything from measuring communication and voice to measuring peripheral indicators of arousal, like pupil dilation, how red your face might be, or the tone of your voice. In some cases, we might connect that to various kinds of brain signals. We hope that by doing this we can not only get a more accurate and biologically valid understanding of how we connect and how we might form better teams, but also then use that to evaluate different approaches for constructing teams. Most of those approaches, as I understand it, are based on intuition and experience. Whether it’s the military or first responders, they have a way of doing this. I’m not saying that it’s wrong; it’s just that we might be able to fine-tune it, or come up with better ways.

Mellers: That’s interesting to me for a lot of reasons. One of them is that in our research, we found with randomized controlled trials that people make significantly better forecasts when they’re working in teams than when they’re working by themselves. When you track people and put the high-accuracy folks together in their own teams, you get this surge of accuracy that goes way beyond what you’d expect.

One of the things that Michael and I have talked about is taking a look at the neuroscience of superforecasters relative to regular forecasters. Maybe it’s part of this social interaction that’s going on with the team. Not wanting to disappoint each other. Wanting to help each other. It’s this wonderful competition/cooperation arrangement with teams. You try to help your team, and then you’re also competing with all the other teams out there.

Knowledge@Wharton: When are you able to gauge when somebody falls into that category of superforecaster?

“We found that people make significantly better forecasts when they’re working in teams than when they’re working by themselves.” –Barbara Mellers

Mellers: We define superforecasters at the end of each year. They are the top 2% of the thousands of people who were forecasting for us. It turned out that we could predict relatively quickly who was going to be good and who wasn’t.

I looked at the first 25 questions over a two-year period. I took the 100 top scorers and the 100 bottom scorers on those first 25 questions and watched to see what happened to these groups over two years. The answer was, they stayed remarkably far apart over time. That tells us that maybe there’s something like an underlying forecasting skill, which we didn’t know existed until the last five years. We know there’s an underlying IQ. There are underlying personality traits. But forecasting skill? What’s with that?

Platt: I wonder whether that forecasting skill is very domain-specific to the kinds of problems you’re giving them, or whether it extends to other areas of daily life.

Mellers: Well, it’s interesting. The forecasting questions that we give our folks are all over the map. Elections, wars, international treaties, diseases, you name it. You cannot be an expert on all of these things. In a sense, you’re a super-generalist. That’s how I see the superforecasters. They aren’t subject matter experts in a particular domain, but they are wonderful at figuring out where to get good, esoteric information. Sharing it with each other. Dividing the labor. Figuring out how to come up with an aggregate, after discussion and so forth, that’s way above what we would have expected, based on the general population.

…I estimated that over five years, [the superforecasters] were on the right side of maybe 85% of the time. That’s across hundreds of questions and millions of forecasts. That’s pretty darn good. These superforecasters were actually better than intelligence analysts with access to classified information who were forecasting the exact same questions.
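The “right side of maybe” metric Mellers describes is simply the share of forecasts that put more than a 50% chance on the outcome that actually happened. A sketch with made-up numbers (illustrative only, not the tournament’s real data):

```python
# Each entry pairs the probability a forecaster assigned to an event
# with whether the event occurred. These numbers are invented for illustration.
forecasts = [
    (0.80, True),
    (0.30, False),
    (0.65, True),
    (0.10, False),
    (0.55, False),  # confident-ish, but on the wrong side of maybe
]

def right_side_of_maybe(forecasts):
    """Fraction of forecasts whose >50% side matched what occurred."""
    hits = sum(1 for p, occurred in forecasts
               if (p > 0.5) == occurred)
    return hits / len(forecasts)

print(right_side_of_maybe(forecasts))  # 0.8 for this toy sample
```

Unlike the Brier score, this measure is all-or-nothing per question, which is why Mellers frames it as being on the right or wrong “side of maybe” rather than as a graded error.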

Platt: I’d love to see who these people are. I’d love to get a look at their brains because they’re bound to be very interesting people.

One point that you made earlier that really stuck out to me is that [they score] high on open-mindedness. And there is something potentially social about the way they interact with teams. We now know there are overlapping circuits within the brain that deal with others, that allow us to respond to and connect with other people. Those circuits also seem to be important for exploration, creativity and open-mindedness. I wonder whether they’ve hit the sweet spot of interaction between those circuits.

Mellers: Good question. We should find out.

Knowledge@Wharton: What is the next step in the research? There are always more stories or events that can potentially be forecast.

Mellers: One of the projects that grew out of the tournament I’ve been talking about is a new project involving hybrid forecasting. We have lots and lots more data than ever before. We’re living in the big data world. And we now have fabulous forecasters on the human side. What’s the best way to put all of that together? We have clever ways of predicting diseases, purchases of Kleenex, the number of cars in the hospital parking lot. There ought to be a way to combine all of that and improve accuracy above and beyond what superforecasters or machine data can achieve by themselves.

Article by Knowledge@Wharton
