New Evidence: Simple Forecasts Beat Complex Forecasts

by David Foulke, Alpha Architect

Occam’s razor teaches us to cut away any factors that are not necessary to explain something. Stated another way, we should avoid adding predictive elements unless they are absolutely necessary and strongly enhance our prediction.

But why should we believe this is necessarily true? Where’s the evidence? Forecasting is a subtle art. Perhaps more complex models can capture a wider range of class frequencies and offer more granular forecasts. Thus, maybe in the realm of predictions, we should favor complexity over simplicity?

Wharton Professor of Marketing J. Scott Armstrong has set about trying to measure the effectiveness of simple versus complex approaches to forecasting. He recently had a great post on the Wharton Blog Network relating to some research he’s done on this question.

In a new article in the Journal of Business Research, Armstrong discussed a recent meta study, or study of studies, he undertook involving 32 papers covering a range of forecasting methods (judgmental, causal, etc.), which included 97 independent comparisons of the accuracy of simple versus complex approaches.

What did the evidence say?

In a slam dunk, Armstrong found that in an extraordinary 81 percent of these independent comparisons, the simple forecasts beat the complex ones. Moreover, the errors from complex forecasts were 27 percent greater than those from simple forecasts. So complexity reduced forecast accuracy and increased mistakes. In fact, in his survey of the research, Armstrong couldn’t find any papers arguing that complex predictive models beat simple ones.
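As a rough illustration of the kind of out-of-sample accuracy comparison the meta-analysis aggregates, here is a minimal sketch that pits a naive “carry the last value forward” forecast against an overfit polynomial model on synthetic data. The data, the two models, and the resulting error gap are illustrative assumptions of mine, not figures from Armstrong’s paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic series: a gentle trend plus noise.
t = np.arange(120)
y = 0.05 * t + rng.normal(0.0, 1.0, size=t.size)

# Hold out the last 20 points for out-of-sample evaluation.
x = t / 100.0  # rescale time so the polynomial fit stays well conditioned
x_train, x_test = x[:100], x[100:]
y_train, y_test = y[:100], y[100:]

# Simple forecast: carry the last observed value forward (naive forecast).
simple_pred = np.full(y_test.size, y_train[-1])

# "Complex" forecast: a high-degree polynomial fit, which tends to chase
# noise in-sample and extrapolate poorly out-of-sample.
coeffs = np.polyfit(x_train, y_train, deg=8)
complex_pred = np.polyval(coeffs, x_test)

mae_simple = np.mean(np.abs(y_test - simple_pred))
mae_complex = np.mean(np.abs(y_test - complex_pred))
print(f"simple MAE:  {mae_simple:.2f}")
print(f"complex MAE: {mae_complex:.2f}")
print(f"complex errors are {100 * (mae_complex / mae_simple - 1):.0f}% larger")
```

On data like this, the naive forecast usually wins: the polynomial’s extra flexibility buys nothing it can use out of sample, which is the intuition behind the 27 percent figure above.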

Despite the seemingly conclusive nature of this finding, there’s also plenty of evidence that people still prefer complexity over simplicity. Why? It may be that there’s something persuasive about complexity itself.

For instance, Armstrong describes the “Dr. Fox effect,” from a famous experiment from 1970.

In the experiment, researchers had an actor, whom they called “Dr. Fox,” and described as a legitimate and esteemed “expert,” deliver a lecture that was intentionally engineered to consist of contradictions, meaningless references, non sequiturs and unintelligible mumbo jumbo. Yet despite the nonsensical nature of the talk, subjects gave Dr. Fox high satisfaction ratings.

What’s going on here?

Subjects were told Dr. Fox was a noted expert. He was lively and charismatic. He was even funny! He appeared to have a deep command of the material. He seemed to understand it, and acted as if he did. Yet…the material was very complicated. Very dense. And so, although it was difficult to understand him, the subjects still thought he was competent. They might have been saying to themselves, “well, if I can’t understand it, then it must be really high quality material indeed!” In a sense, it was complexity itself that contributed to making Dr. Fox more credible.

Armstrong points out this effect may hold in academia, where the writing in highly regarded journals tends to be more complex. Want to get published? Perhaps you should make your papers more dense and impenetrable to give them the best shot at the big-name journals. If the writing is dense, then you must really know what you’re talking about. It’s harder to understand really smart people. By contrast, if your research is easy to read and accessible, then that might suggest you lack sophistication or your insights are too obvious. So, as with Dr. Fox, it is complexity itself that can contribute to credibility and positive assessments of competence.

Yet Armstrong’s meta study suggests this is the wrong intuition. We are falling victim to the siren song of complexity. We should do the opposite of what our gut tells us!

Instead, we should be suspicious of complexity, since it tends not to add predictive value, but rather to detract from the accuracy of forecasts. If we conclude the basis for a forecast is too complex, we should reject it on that basis alone. How might you go about doing this?

Armstrong suggests using a “simplicity checklist” (a copy is here) as a way to assess whether an approach really is simple. The checklist focuses on the prior knowledge used, the relationships among the model’s elements, and whether users can explain the forecasting process. If a model is not simple, and you can’t explain it, then you shouldn’t trust it.
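As a hypothetical sketch of how such a checklist might be applied, the snippet below encodes the three themes mentioned above as yes/no questions. The wording of the questions is my own paraphrase, not the checklist’s actual items.

```python
# Hypothetical paraphrase of the checklist's three themes; not the actual wording.
QUESTIONS = [
    "Is the prior knowledge the method relies on stated plainly?",
    "Can the relationships among the model's elements be explained in simple terms?",
    "Can the intended users describe, step by step, how the forecast is produced?",
]

def passes_simplicity_check(answers):
    """Return True only if every checklist question is answered 'yes' (True)."""
    if len(answers) != len(QUESTIONS):
        raise ValueError("Provide one answer per checklist question.")
    return all(answers)

# Example: a model whose inner workings no user can explain fails the check.
print(passes_simplicity_check([True, True, False]))  # False
```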

I think this is useful thinking to apply to the asset management industry, where there is always a tug of war between simplicity and complexity. The Dr. Fox study reminds me of some industry dynamics I’ve witnessed.

In asset management, there are plenty of reasons to avoid simplicity and add complexity. After all, if it’s simple, then there’s nothing special about it. A client might say, “if it’s so easy for me to understand, and I can explain it to you, then maybe I know as much as you do, and maybe you’re not adding any value!”

Meanwhile, complexity sells. The “wow” effect is similar to the effect in academic journals. “If I can’t understand it, then it must be really good!”

Complexity is also a great substitute for a lack of substance. If a client wants a justification for something, you can devise a complex, heavily data-mined solution that supports the decision, no matter what it is. Armstrong mentions the old saying, “if you can’t convince them, confuse them.”

So don’t be fooled by Dr. Fox! When in doubt, simplify, and avoid harmful complexity at all costs.

If you’d like to learn more about this in the context of finance, here is a post we wrote entitled, “Are you trying too hard?” The essence of the argument is to focus on simple robust processes and avoid complexity.
