Cliff Asness – 2016 Beyond Equities: Still Boring

Writing my post titled “2016 Was Not a Particularly Volatile Year” on realized risk/variability turned out to be even more fun than usual as I got a lot of great comments on it. Of course, some didn’t buy it. Some said things like “yeah, but you have to admit this or that was crazy,” whatever this or that was. Sometimes I agreed, sometimes I didn’t. We all have a different definition and standard for crazy! By far the most common comment was “ok, you just looked at the S&P 500, what about…?” I can’t look at everything, or in fact anything more, but I can ask for help! To that end, my colleague, Ashwin Thapar, ran a very similar analysis to that in my original post on four more major markets (the two most requested were fixed income and volatility markets which many thought had a wild 2016; Ashwin looked at those and added commodities and an index of the Dollar because he’s just that kind of guy). So, enjoy, but be prepared to be absolutely floored with how bored you are (of 2016, not of Ashwin, of course!).

Extending Analysis to Other Asset Classes

As a reminder, Cliff’s original post focused on the S&P 500 (“SPX”) and put a variety of realized risk/variability measures for 2016 in terms of a longer historical context. The results were consistent across metrics showing that for equities, 2016 was plain, boring, and average.1 In this post, we extend Cliff’s analysis to include a broader set of markets: fixed income (Barclays Aggregate U.S. Index “Barclays Agg”); commodities (GSCI Index “GSCI”); currencies (Dollar Index “DXY”); and volatility itself (VIX Index “VIX”).2 Similar to what we observed in equities, when looking across realized risk/variability metrics, 2016 was pretty ordinary across the broad set of markets. While results vary by asset class – bonds and currencies were less risky than historical norms, commodities were riskier, and other asset classes neither more nor less risky – none hit a level that most would consider extreme. Overall, this is a picture that screams normalcy.

Like Cliff, we look at realized daily volatility, the maximum one month price move in the last year (considering all monthly moves at the rolling daily level), and the high divided by the low price on the year. For each we compare the measure during 2016 to the range we’d get calculating this same measure for all rolling years from 1990-2016 (the historical sample period is limited by data availability, but we think is still long enough to draw reasonable conclusions). We focus on results comparing 2016 to this full sample of data, though we also include results for a more recent comparison (5 years) in an appendix at the end of the post.3 To start, we repeat the results for the S&P 500 (SPX) from Cliff’s prior post.
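The three measures described above are straightforward to compute from a year of daily closes. Below is a minimal sketch; the function names, the ~21-trading-day "month" convention, and the percentile-rank helper are our illustrative assumptions, not AQR's actual code.

```python
import numpy as np

def risk_metrics(prices):
    """The three realized risk/variability measures, for one year of
    daily closing prices (~252 observations). Illustrative sketch."""
    returns = np.diff(prices) / prices[:-1]
    # 1) Realized daily volatility, annualized assuming ~252 trading days.
    ann_vol = returns.std(ddof=1) * np.sqrt(252)
    # 2) Maximum absolute one-month (~21 trading day) move, rolling daily.
    window = 21
    monthly_moves = np.abs(prices[window:] / prices[:-window] - 1.0)
    max_month_move = monthly_moves.max()
    # 3) The year's high price divided by its low price.
    high_over_low = prices.max() / prices.min()
    return ann_vol, max_month_move, high_over_low

def percentile_rank(value, history):
    """Where a reading falls within a set of historical readings (0-100)."""
    history = np.asarray(history)
    return 100.0 * (history < value).mean()
```

Each 2016 reading is then ranked against the same metric computed over all rolling years in the historical sample, which is what the percentile charts below report.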

Historical Percentiles of Risk Measures by Asset Classes since 1990

Source: AQR and Bloomberg.

Nothing looks quite as dead-on 50th percentile as equities. But we’re looking for extremes and the overall picture across markets doesn’t show that. Bonds came in relatively low on all three realized risk/variability measures (that is, 2016 was calmer than the median year from 1990-2016).4 Commodities were the one asset class that showed above normal risk/variability in 2016. Could the (moderately) elevated commodity risk be the piece of evidence that the “2016 was crazy” camp was searching for? Should elevated risk in one asset class count as crazy for markets in general? We think the answer is a definite no. If you look in enough places, you’d expect to see a couple of large or small observations – without anything being abnormal.5 Anyway, even for commodities we’re only talking about measures at or around the 70th percentile. If we start calling the 70th percentile wild and crazy we’re going to have a lot of wild and crazy! Finally, the dollar was calmer than normal on all measures, and the VIX was just about at the median (perhaps not surprisingly mirroring the equity index it’s based on – though Cliff received many questions suggesting that maybe the “vol of vol” was crazy even though equities themselves were not).

Bottom line, we still can’t find evidence of really crazy financial markets at the asset class level, and markets were even less compellingly crazy when you consider the whole set of five we examined.

Appendix: A more recent comparison (for those who insist…)

Comparing 2016’s risk to just a 5-year history (below) also appears fairly mundane. We see some very limited evidence of an elevated previous year’s high over low metric, as we did in equities, but still no readings above the 70th percentile. Moreover, if we use annualized volatility as the measure of risk, compared to the same 5-year history, four of the five markets are almost exactly at their historical median (between the 46th and 56th percentiles) — a remarkably strong vote in favor of business-as-usual (perhaps even worthy of the “amazingly normal” tag Cliff used in his post). Finally, on the “max one-month move” metric, the DXY was perhaps surprisingly low (comparing to the 5-year history, but also the longer sample). Do people write notes about within-year market gyrations when they are surprisingly low, and just for one asset and metric?

Historical Percentiles of Risk Measures by Asset Class since 2012


Source: AQR and Bloomberg.

1We’re using average here to mean more like “median” relative to some historical time period to account for the fact that, in general, these measures are positively skewed.
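The point about positive skew is easy to see in a toy example: occasional crisis years pull the mean of a volatility-like measure above the typical year, which is why the median is the better "average year" benchmark. The lognormal sample below is purely illustrative, not market data.

```python
import numpy as np

# Toy "annual volatility" readings drawn from a positively skewed
# (lognormal) distribution centered near a 15% median.
rng = np.random.default_rng(42)
vols = rng.lognormal(mean=np.log(15), sigma=0.5, size=10_000)

# Under positive skew the mean sits above the median, so judging a year
# against the mean would make most years look "calmer than average".
print(np.mean(vols) > np.median(vols))
```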

2The VIX index roughly measures the market price of volatility of the SPX index. The idea of volatility of a volatility index may seem confusing, but it is relatively intuitive: if risk measures how much prices tend to change, a natural follow up question is to ask how much risk itself changes. Did we stay in the same environment all year, or did we jump around? One unique feature of the VIX measure is that it is quoted in “volatility” terms (rather than price), so we measure the variability here using simple differences, as opposed to the percentage differences we use for the others.
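The distinction between simple and percentage differences can be sketched in a few lines. This is our own illustrative helper (not AQR's methodology code); the only substantive point it encodes is the one in the footnote: point changes for a series already quoted in volatility units, percentage changes for price series.

```python
import numpy as np

def variability(levels, use_simple_differences=False):
    """Annualized daily variability of a series.

    For price series (SPX, GSCI, DXY, ...) we use percentage changes;
    for the VIX, which is already quoted in volatility points, simple
    point differences are the more natural unit. Illustrative sketch.
    """
    levels = np.asarray(levels, dtype=float)
    if use_simple_differences:
        changes = np.diff(levels)                 # vol-point changes (VIX)
    else:
        changes = np.diff(levels) / levels[:-1]   # percentage changes (prices)
    return changes.std(ddof=1) * np.sqrt(252)     # annualize, ~252 trading days
```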

3Cliff only included this shorter 5-year comparison to see if it was one people were making in error. That is, if people were saying 2016 was a crazy ride, and it looked normal versus long-term history but high versus the last five years, perhaps this is what people meant? In that case we’d argue people were making an error as five years is too short and it might show people quickly forget the longer term past. It turned out that things looked pretty bland versus this 5-year period anyway.

4This defies many people’s intuition but there might be a good reason why bond volatility in 2016 was well below historical levels: many fixed income models posit that moves tend to be smaller when yields themselves are lower. Perhaps adjusted for that, 2016 would look more surprising. That is, perhaps moves were big in 2016 compared to an average year starting from such low yields (unfortunately, that is difficult to test for empirically, given there are not many historical observations of yields this low). Nonetheless, the absolute volatility of returns, which is what our unadjusted measure gets at, is still what ultimately impacts investor portfolios, and is therefore worthy of attention.

5In fact, if one ran 100 tests on a normally distributed variable, one would expect to see 5 “statistically significant” results, even in the absence of any true relationships. Reading too much into the most extreme results from multiple tests is a pitfall known as the multiple comparisons problem – one of several ways to lie with statistics, along with overemphasizing spurious correlations, or plain, old-fashioned making numbers up.
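The "5 significant results out of 100 tests on pure noise" claim in the footnote is easy to verify by simulation. This is a generic illustration of the multiple comparisons problem, not anything AQR ran; the sample size and seed are arbitrary.

```python
import numpy as np

# Run 100 independent tests on pure noise and count how many clear the
# usual 5% significance bar purely by chance.
rng = np.random.default_rng(0)
n_tests, n_obs = 100, 50

false_positives = 0
for _ in range(n_tests):
    sample = rng.standard_normal(n_obs)  # no true effect, by construction
    # One-sample t-statistic against the (true) mean of zero.
    t_stat = sample.mean() / (sample.std(ddof=1) / np.sqrt(n_obs))
    if abs(t_stat) > 1.96:               # ~5% two-sided threshold
        false_positives += 1

print(false_positives)  # typically around 5 of the 100 look "significant"
```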

Article by AQR