Investors and managers are concerned with “fat tails”. In part one of a two-part article, we look at where fat tails come from and how they can be managed.
“Extreme events”, “nonlinear dynamics”, “power laws”, “flash crashes”, “fractal processes”… a lot of academic, journalistic and practitioner ink has been spilled about “fat tail events” in financial markets. As is so often the case, a great deal of confusion has been generated by the casual use of subtle statistical concepts. This article isn’t attempting to be a statistical primer, but it should help to illuminate some of these subtleties and to show how the risk of fat tail events can be managed.
The existence of fat tails in the financial markets results from the well-known fact that a Normal (or Gaussian) distribution doesn’t model market returns exactly. Despite strong statements in the press that quants in finance lack the real-world experience to know the limitations of their models(1) (see just about every article about LTCM ever written), it turns out that those of us in the industry with a more mathematical approach aren’t surprised that a Normal model doesn’t match markets perfectly. A Gaussian distribution is extremely good at modelling almost all of the returns of financial assets, but it is less good at characterizing the tails of the distribution. This is about the first thing you learn as a scientist when you arrive at a bank or hedge fund fresh-faced, enthusiastic and desperate to apply the sexy new statistical techniques you learned at university.
Quantitative Modelling of Markets
Unless we want to throw away all quantitative techniques and decide on investment strategies and risk allocation using chicken entrails, astrology or tarot cards, we are going to have to use some mathematical techniques to model the prices or returns of financial assets. When one compares the real returns of financial assets to the distributions shown on this page, it seems that the Normal distribution is a good place to start.
The Normal distribution is completely characterized by two parameters.(2) These are the location parameter $\mu$ and the shape parameter $\sigma$. Using these two numbers, the probability density function at $x$ (roughly(3) how likely it is that an event with value $x$ happens) is calculated from this equation:

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
More informally, we refer to the location parameter $\mu$ as the mean and the shape parameter $\sigma$ as the standard deviation; in the world of finance, standard deviation is known as volatility.(4) In finance it is returns which are modeled, not prices,(5) and it is the returns that we expect to be Normally distributed. So, how closely can we model returns using a Normal distribution?(6)
To model returns using a Normal distribution one has to estimate the standard deviation. One way to do this is to apply a standard maximum likelihood estimator to the history of the market’s returns, and then to assume that this standard deviation can be used in a Normal model for the market’s returns in the future. Since we didn’t actually know what the sample standard deviation would be until the end of the data, this model has a significant future-peeking problem, but let’s ignore that issue for the moment.
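As a sketch of this procedure, the maximum likelihood estimate of the standard deviation is just the root mean squared deviation of the returns. The data here is synthetic, standing in for a real price history, so the numbers are purely illustrative:

```python
import numpy as np

# Illustrative only: synthetic "daily returns" stand in for real data.
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0, scale=0.02, size=5000)

# Maximum likelihood estimate of the standard deviation (divide by N,
# i.e. ddof=0 -- the MLE, not the unbiased sample estimator).
sigma_hat = np.sqrt(np.mean((returns - returns.mean()) ** 2))

# Annualise assuming roughly 252 trading days per year.
annualised_vol = sigma_hat * np.sqrt(252)
print(f"daily sigma: {sigma_hat:.4f}, annualised: {annualised_vol:.1%}")
```

A Normal model for tomorrow’s return is then simply N(0, sigma_hat), which, as noted above, quietly peeks into the future because sigma_hat was computed from the whole history.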
What is rather surprising is that the Normal distribution is pretty good. Here is the return distribution for the crude oil futures market since 1983 using daily close data.(7)
The crude oil futures market has an annualised volatility of 34.3% over its lifetime. We have overlaid on the graph a Normal distribution with a zero mean and a daily standard deviation of 34.3%/√252 ≈ 2.2%. All the returns are scaled by this standard deviation, i.e. expressed as z-scores.
This is a reasonably good fit. Sure, there is more weight in the middle of the distribution, less in the shoulders and more of those nasty tail events, but it isn’t too bad. Although there are many more plus-four and minus-four standard deviation events than the Normal distribution predicts (and way too many small events), the Normal distribution is a good start.
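To put a number on “more four-sigma events than the Normal predicts”: under a Normal model the two-sided probability of a move beyond four standard deviations is erfc(4/√2) ≈ 6×10⁻⁵, i.e. well under one event in forty years of daily data. A quick sketch makes the gap concrete; the Student-t sample below is a common stand-in for fat-tailed returns and is our assumption, not the oil data:

```python
import math
import numpy as np

# Two-sided Normal probability of a move beyond 4 standard deviations.
p_tail = math.erfc(4.0 / math.sqrt(2.0))   # P(|Z| > 4), about 6.3e-5
n_days = 252 * 40                          # roughly 40 years of daily closes
expected = p_tail * n_days                 # well under one event

# Synthetic fat-tailed sample: Student-t with 4 degrees of freedom.
rng = np.random.default_rng(1)
sample = rng.standard_t(df=4, size=n_days)
sample = sample / sample.std()             # express as z-scores
observed = int(np.sum(np.abs(sample) > 4.0))

print(f"Normal model expects ~{expected:.2f} events beyond 4 sigma; "
      f"the fat-tailed sample produced {observed}")
```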
A lot of very smart mathematicians have made the rookie error of looking at this graph and saying “The Normal distribution obviously doesn’t work. How stupid those quantitative financial professionals are.” For an unusually insightful criticism of this problem see this letter in the FT. I wonder where that guy works now…
Can we do better? Yes, we can!
The fundamental assumption underlying this analysis, and a fundamental problem with it, is that we have assumed the market is stationary, or rather that it can be modeled using a stationary process. A stationary process is not one that doesn’t move.(8) It is a process whose describing parameters(9) are constant. In our example we are assuming that the underlying volatility of oil is constant.(10) It isn’t a terribly deep or insightful observation, but market volatility isn’t constant,(11) and thus it is unlikely that modelling with a stationary process is the best we can do.
However, we can use a little bit of “real world insight”. Even to the average non-financial person, it seems that financial markets have periods where not much is really happening, interspersed with bursts of fear.(12) We can incorporate this in our model by estimating the volatility of the market over some recent period and assuming that this recent volatility is a better estimate of tomorrow’s volatility than an estimate taken over the whole history.
A common way of doing this is to use an exponentially weighted moving average (EWMA) estimator of the volatility, which places more weight on recent observations than on older ones. These estimators are characterised by the length of time it takes for the weight of a point to decay to half its original value. We are going to choose a 100 day half-life, but across a wide range of parameters the choice of weighting window isn’t that important.
The figure supports our hypothesis that market volatility isn’t constant.(13) So, we can use this observation to turn the crude oil series into something which is more “stationary”. We take each daily return and divide it by the volatility of the returns up to the day before (as estimated by the EWMA estimator) and then multiply by 10%. This should give us a 10% volatility series which (we hope) will be better described by a Normal distribution.
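A minimal sketch of the whole rescaling step, assuming a 100 day half-life and a synthetic return series with a built-in volatility regime change (everything here is illustrative rather than the article’s actual data or code):

```python
import numpy as np

# Synthetic returns: a quiet regime followed by a turbulent one.
rng = np.random.default_rng(0)
returns = np.concatenate([
    rng.normal(0.0, 0.01, 2000),   # quiet: ~16% annualised vol
    rng.normal(0.0, 0.03, 2000),   # turbulent: ~48% annualised vol
])

half_life = 100                    # days for a weight to decay to one half
lam = 0.5 ** (1.0 / half_life)     # per-day decay factor of the EWMA

# Recursive EWMA estimate of the daily variance. var[t] only uses
# returns up to day t-1, so there is no peeking at today's return.
var = np.empty_like(returns)
var[0] = np.var(returns[:100])     # seed for the recursion (a choice, not canon)
for t in range(1, len(returns)):
    var[t] = lam * var[t - 1] + (1.0 - lam) * returns[t - 1] ** 2
vol = np.sqrt(var)

# Divide each return by yesterday's vol estimate and scale to a
# 10% annualised volatility target.
target_daily_vol = 0.10 / np.sqrt(252)
scaled = returns[1:] / vol[1:] * target_daily_vol

print(f"annualised vol of scaled series: {np.std(scaled) * np.sqrt(252):.1%}")
```

The scaled series comes out close to the 10% target despite the regime change; the residual excess comes from the lag while the EWMA catches up with the new volatility level.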
This is a much better fit to the distribution of real world returns. It’s true that if you look really closely, there are still a few more large returns in the tails, but re-scaling the returns using recent volatility seems to be a considerable improvement on the stationary distribution approach.
Where have we got to and where do we go from here?
We’ve shown that, using a very naïve approach in which we assume that volatility is constant, we get a reasonably good fit if we model a market with a Normal distribution. However, this naïve approach fails when one looks at the number of extreme moves. A significant improvement is to scale the returns of the market by recent volatility. This normalization process reduces the incidence of extreme return events considerably. Since, in principle, you could scale your position every day by the same measure of recent volatility, you can actually trade this normalized process, and trading it should reduce the incidence of fat tails in your realised returns.(14)
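“Trading the normalized process” just means holding a position inversely proportional to recent volatility. A hypothetical sketch, in which the function name and parameters are ours rather than anything from the article:

```python
def position_size(target_vol: float, recent_vol: float, capital: float) -> float:
    """Exposure such that the position runs at roughly target_vol.

    target_vol and recent_vol are annualised. Halving recent_vol
    doubles the position, so the realised P&L is proportional to the
    vol-scaled return series described above.
    """
    return capital * target_vol / recent_vol

# With a 10% target: calm markets get a bigger position...
print(position_size(0.10, 0.05, 1_000_000))
# ...and turbulent markets a smaller one.
print(position_size(0.10, 0.40, 1_000_000))
```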
We’ve made an improvement (which is always valuable), but there are still some extra extreme events.(15) To understand this higher incidence of extreme events, we need to introduce another statistical measure. Rather than just eyeballing a distribution to work out whether there are too many extreme events, we are going to use kurtosis to estimate the fatness of the tails.
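As a taster for part two: excess kurtosis is the standardized fourth moment minus three, which is zero for a Normal distribution and positive for fat-tailed ones. A sketch with synthetic samples (the Student-t choice is our assumption, used here only as a stock example of fat tails):

```python
import numpy as np

def excess_kurtosis(x: np.ndarray) -> float:
    """Standardized fourth moment minus 3 (0 for a Normal distribution)."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)

rng = np.random.default_rng(7)
thin = rng.normal(size=100_000)            # Normal: excess kurtosis near 0
fat = rng.standard_t(df=5, size=100_000)   # t(5): excess kurtosis = 6 in theory

print(f"Normal sample: {excess_kurtosis(thin):+.2f}")
print(f"Fat-tailed sample: {excess_kurtosis(fat):+.2f}")
```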
Unfortunately, covering all that would make this piece excessively long and so we will cover this in “Does My Tail Look Fat In This: Part 2”, which will be coming out shortly.