Cantab Capital Partners: Algorithm Aversion

Algorithm Aversion by Dr Ewan Kirk, Cantab Capital Partners

Investors in general are sceptical of systematic trading. Why might this be? Long-term performance suggests that models perform at least as well as humans. So why the scepticism?

This short piece was prompted by some recently published research, which we will come to shortly; but first, let's set out the problem.

Aversion in action

Over the years we have met many investors and allocators, ranging from high net worth individuals to sovereign wealth funds and pension funds. No two investors are identical, and each potential investor's response to the systematic trading proposition is different. However, their view is often coloured by considerable scepticism. The questions that most investors ask about model-based investment processes are strikingly similar. How do we know that the models will work? What happens when they go wrong? How do we know if they have gone wrong?

Furthermore, when models or strategies have a period of poor performance(1), the scepticism and lack of confidence grow quickly. Is the model broken? Has the environment changed? Is it different this time?(2) Although systematic trading comes in many forms, the concern about things being broken or never working again seems to bedevil CTAs and other macro strategies more than others.

We would be the first to agree that scepticism is an extremely good heuristic(3) for investors to use when they are evaluating investment styles and funds. However, we are always somewhat taken aback by how much stronger the scepticism is regarding systematic methods compared to other investment approaches. Surely a robust, reproducible, disciplined process which has been tested on decades of data and hundreds of different markets should be more believable than a process which depends on the deep but often inexplicable insights of humans, who are prone to well-known behavioural biases and errors?

Experimental Evidence

Well, much to our relief, it turns out that the bias against systematic processes and algorithms is a bias in its own right! In “Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err”, Dietvorst, Simmons and Massey perform clever experiments to evaluate their subjects' reactions both to the performance of algorithms and to situations in which algorithms appear to underperform human forecasters.

We know that most readers of this post will have downloaded the paper and read it thoroughly before continuing, but for those who haven't, we have summarised it below.

Before starting the analysis of the results, the authors offer a little thought experiment(4). Imagine you are driving to work and unexpectedly encounter a traffic jam. You decide (predict) that an alternative route will be quicker. When you arrive at work 20 minutes later, a co-worker tells you that you mispredicted the effect of the traffic and that your normal route would in fact have been fine. This has happened to us all, whether driving or deciding to take a bus or the Tube. When it does, you are unlikely to stop trusting your own judgement of traffic conditions in the future.

However, imagine the scenario in which your traffic-aware GPS(5) had rerouted you to avoid a traffic jam, but the traffic cleared more quickly than expected. Many of us would lose confidence in the routing algorithm(6) and would become more reluctant to trust it in the future.

To quote the authors: “It seems that errors that we tolerate in humans become less tolerable when machines make them.”(7)

The authors perform experiments on undergraduate students, with Amazon's Mechanical Turk supplying a second cohort. Simplifying a little, both cohorts have an economic incentive to perform a forecasting task well. They may rely on either a human forecast or an algorithmic forecast, and they are told that the algorithm has been developed using statistical techniques. The participants are given some experience of either the human forecasting method (themselves) or the algorithm, and must then choose which of the two to rely on during the phase of the experiment in which they are paid on the basis of performance.

The cunning psychologists arranged the results so that the model outperformed the participants' forecasts by a considerable margin. Cleverly, the model didn't always outperform the human forecasters, just most of the time. Despite seeing the model outperform in most cases, the subjects chose to bet on the human forecasts far more often than they would have done if they were being rational. Moreover, those who initially selected the model were much quicker to abandon its predictions after seeing the algorithm make an error than they were to abandon a human forecaster who erred. The graph below shows the proportion of subjects choosing the statistical model, both where they did not see the model perform and where they did.(8)
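
To make the payoff structure concrete, here is a minimal simulation sketch of the choice the participants faced. It is our own stylised illustration, not the authors' actual design: the error scales, the payoff rule and the round count are all assumed, purely to show that betting on the model is the rational choice even though the model visibly errs.

```python
import random

random.seed(42)

N_ROUNDS = 10_000
MODEL_ERROR_SD = 1.0   # assumed: the model's forecast errors are smaller...
HUMAN_ERROR_SD = 1.5   # ...on average than the human's

def payoff(error: float) -> float:
    """Pay a bonus that shrinks as the absolute forecast error grows."""
    return max(0.0, 10.0 - abs(error))

model_payoffs = [payoff(random.gauss(0.0, MODEL_ERROR_SD)) for _ in range(N_ROUNDS)]
human_payoffs = [payoff(random.gauss(0.0, HUMAN_ERROR_SD)) for _ in range(N_ROUNDS)]

model_wins = sum(m > h for m, h in zip(model_payoffs, human_payoffs))

print(f"average payoff, betting on the model: {sum(model_payoffs) / N_ROUNDS:.2f}")
print(f"average payoff, betting on the human: {sum(human_payoffs) / N_ROUNDS:.2f}")
print(f"rounds in which the model beat the human: {model_wins / N_ROUNDS:.0%}")
# The model wins on average, yet it still loses a sizeable minority of
# individual rounds; it is those visible errors that drive the aversion.
```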

This is a very surprising result. For example, participants who were more confident in the human forecaster still chose the model 33% of the time when they had not seen it perform; but among those who had seen it perform, the proportion choosing the model dropped to 10%, even though they could presumably have worked out that the model outperformed. In other words, seeing the model perform better than a human made people less likely to choose it, not more.

It appears that the participants in the experiments overweighted the errors that the model made. Clearly, if the model predicted perfectly, it would take an outstandingly stubborn or Luddite human to believe that their own views were better than a perfect machine. But, of course, no statistical model is perfect, and when a good model errs, it is penalised significantly more heavily than a human who errs.

How does this affect investors?

As developers of algorithmic trading rules, we find this research fascinating. It appears not only that human investors suffer from significant behavioural biases(9) but also that there is a significant bias against algorithms themselves.

It is hard to see how one can guard against this bias when choosing an investment. As we said at the outset, surely a rule or process which can be demonstrated to have worked over 30 years of data and hundreds of markets should be more compelling than a discretionary track record which will span at most 10 or 15 years? When discussing the performance of CTAs in the 2011 to 2014 period, we have often mentioned that the track record of CTAs looks good relative to equities, which are the cornerstone of almost all investment portfolios. Discretionary hedge funds are not simple long-only equity positions(10), but the track record of the manager is often shorter than a decade, and ex-ante one would assume that investors would be more sceptical of discretionary styles of management than of systematic styles of investment(11). Logically, this should be true, but it may be that “algorithm aversion” is misleading investors into being more sceptical of systematic trading than is justified by either the historical statistics or realistic ex-ante expectations. In addition, investors are prone to giving up on systematic investment styles more quickly than they should, compared to discretionary styles.
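
As a back-of-the-envelope illustration of why the longer record should be more compelling, the sketch below uses the standard large-sample approximation for the standard error of a Sharpe ratio estimate, SE(SR) ≈ sqrt((1 + SR²/2) / T), over T years of roughly independent returns. The Sharpe ratio of 0.7 is our own illustrative assumption, not a figure from the text.

```python
import math

def sharpe_standard_error(sharpe: float, years: float) -> float:
    """Large-sample standard error of an annualised Sharpe ratio,
    assuming roughly i.i.d. annual returns."""
    return math.sqrt((1.0 + 0.5 * sharpe ** 2) / years)

SR = 0.7  # illustrative Sharpe ratio, assumed the same for both records
for years in (10, 30):
    se = sharpe_standard_error(SR, years)
    print(f"{years:>2}-year record: SR = {SR:.2f} +/- {se:.2f} (1 std error)")
# The 30-year estimate is sqrt(3) (about 1.7x) more precise than the
# 10-year one, so on the statistics alone scepticism should, if anything,
# run the other way.
```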

This is likely to have a negative impact on investors' portfolios in the long term, since investors are not only giving up their high water marks in drawdowns but also leaving their portfolios sub-optimal, because a valuable diversifying source of return has been removed.
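
A toy two-asset calculation makes the cost of removing a diversifier explicit. The return, volatility and correlation figures below are our own assumptions for illustration; the point is only that mixing equities with an uncorrelated return stream raises the portfolio's Sharpe ratio even when the stream is no better than equities on a stand-alone basis.

```python
import math

def portfolio_sharpe(w1: float, mu1: float, mu2: float,
                     sd1: float, sd2: float, corr: float) -> float:
    """Sharpe ratio of a two-asset portfolio with weight w1 in asset 1."""
    w2 = 1.0 - w1
    mu = w1 * mu1 + w2 * mu2
    var = (w1 * sd1) ** 2 + (w2 * sd2) ** 2 + 2.0 * w1 * w2 * sd1 * sd2 * corr
    return mu / math.sqrt(var)

# Assumed: equities and a systematic strategy, each with 6% excess return
# and 15% volatility, and zero correlation between the two.
print(f"equities only:          SR = {portfolio_sharpe(1.0, 0.06, 0.06, 0.15, 0.15, 0.0):.2f}")
print(f"50/50 with diversifier: SR = {portfolio_sharpe(0.5, 0.06, 0.06, 0.15, 0.15, 0.0):.2f}")
# With zero correlation, the 50/50 mix earns the same return at 1/sqrt(2)
# of the volatility, so its Sharpe ratio is about 41% higher.
```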
