Logica Capital Advisers commentary discussing drawdowns in the quant/EMN space.
"Education is a refuge in adversity" ~ Aristotle
About five years ago, I had a relatively bad cycling accident: an elderly driver made a wider-than-they-should've turn into a parking lot, leaving me nowhere to go but straight into (and onto, and over) their vehicle. The moment my body slammed into the ground I could sense the damage that had been done, and over the following months I lived with severe back pain. As part of my healing, I started regular physical therapy, and as the sessions advanced, I learned more and more about strengthening key muscles and protecting my back from further harm. Today, years later, my back is not only better than it has ever been, but my cycling has reached a new level thanks to the specificity of training I learned as a result of this incident. The cloud had a silver lining -- there was an upside to the downside.
Much the same, I wonder about the upside to the literal downside of a difficult investment cycle. Simply: is there a gain to a drawdown? To answer that, I surmise, there are both good and bad drawdowns. The bad ones teach us nothing, leaving us dumbfounded as to what exactly happened and why. The good ones, on the other hand, teach us something, providing a specific lesson that revises -- more so, improves -- what we previously thought we knew. Thus, the drawdowns we can learn from are precisely the cycling accidents that, over the long run, make us better cyclists and healthier individuals overall.
I therefore pondered what, if anything, I learned from the recent drawdown in the quant/EMN space; was there a single valuable insight gained? To assess this, I surveyed the damage from the top of the mountain and saw that the timing was not tied to a specific event, geo-political or otherwise -- there was not yet "Turkey's Lira" to point to. More specifically, I noticed that the dislocation was most severe for quant/systematic managers, while the S&P, as well as a variety of absolute return styles, chugged along unscathed. This bifurcates the investment world: those generally engaged in empirical testing and applied data science were heavily affected, while the rest were generally okay. So was data crunching the culprit? And if so, to what end?
To even begin to answer these questions, I had to start at the top. As quant managers processing mounds of data, we rely upon the consistency of rational hypotheses supported by empirical methods. For example, markets tend to move between value and momentum (mean reversion vs. mean expansion), with each, therefore, offering a natural offset to the other; when momentum stocks break, investors rush back to value. We rely on this lack of correlation because 1) it makes some logical sense, and 2) it is rigorously supported in the data. But when the correlation "breaks", with multi-sigma moves in the "same" direction, we question whether our thesis was really that stable in the first place, or whether some new confounding variable has been introduced that hadn't arisen previously. Could it be the surge in passive investing driving factor-agnostic rebalancing? Or maybe the growth of risk parity forging a great divide between realized and unrealized volatility? Whatever it is, we are faced with a high-sigma event -- more so, a breakdown -- and must confront the legitimacy of our thesis.
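As an illustration of the kind of correlation "break" described above, here is a minimal, purely synthetic sketch: two anti-correlated factor return series whose relationship holds until a single day when both break the same way. The series, the 60-day window, and the 3-sigma threshold are all my own illustrative assumptions, not anyone's actual methodology.

```python
import numpy as np

def rolling_corr(x, y, window):
    """Rolling Pearson correlation between two return series."""
    out = np.full(len(x), np.nan)
    for i in range(window, len(x) + 1):
        out[i - 1] = np.corrcoef(x[i - window:i], y[i - window:i])[0, 1]
    return out

def joint_sigma_move(x, y, sigma=3.0):
    """Flag days where BOTH series move the same direction by > sigma std devs."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    return (np.abs(zx) > sigma) & (np.abs(zy) > sigma) & (np.sign(zx) == np.sign(zy))

# Synthetic data: a "value" series and an anti-correlated "momentum" series,
# with one engineered joint crash on the final day (the hypothesized "break").
rng = np.random.default_rng(0)
value = rng.normal(0.0, 0.01, 250)
momentum = -0.6 * value + rng.normal(0.0, 0.008, 250)
value[-1] = momentum[-1] = -0.05   # both factors break in the same direction

corr = rolling_corr(value, momentum, 60)   # normally comfortably negative
flags = joint_sigma_move(value, momentum)  # True only on the joint crash day
```

The point of the sketch is that the very statistic we rely on (the stable negative correlation) says nothing about the single day on which both legs of the offset fail together; only the joint-tail check catches it.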
This is where my high-level view collapses into everything below it. The data is definitely the culprit, and our reliance on it must be called into question. In these days of enormous computing power at ever-cheaper prices, and more programmers than bank tellers, the ultimate question arises: what are the probabilities of any single investor consistently gathering superior information when so many others are trying to do exactly the same thing? I say close to impossible. A data, or data-processing, edge is now an oxymoron. Literally all of us have access to the same data, and similar power to process it. I chuckle these days about the things I once had to go to my local library to find out. Ironically, my Google search just confirmed, in less than half a second, that there are still 119,487 libraries in the US.
So what now for quants? It remains fairly likely that low price-to-book stocks will rise, that excessive P/Es will fall, and that value and momentum will continue to yin and yang. But while all of that is being exploited with all the information there is, I personally will be looking for new methods of synthesizing information. Processing power is an arms race, and "proprietary" datasets are more available than iPhone apps, but thinking differently to uncover and utilize new analytical techniques will never converge. In data analysis, the common enemy is noise, and so the task at hand is simply improving methods of extracting signal or cancelling noise. According to Google, there were 36,348 new mathematics papers published in 2017. That's a lot of reading, but rigorous research into insights gained from behaviors exhibited in the drawdown is, in my view, the upside in the downside.
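On the "extracting signal or cancelling noise" point, the simplest possible instance of the idea is worth seeing in miniature: averaging n independent noisy observations cuts the noise variance roughly n-fold while leaving a slow-moving signal largely intact. This toy (the sine-wave signal, noise level, and 25-point window are invented for illustration) shows the effect:

```python
import numpy as np

def moving_average(x, window):
    """Each output is the mean of `window` consecutive inputs (a crude noise filter)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

rng = np.random.default_rng(42)
t = np.linspace(0, 4 * np.pi, 2000)
signal = np.sin(t)                        # slow "true" signal
observed = signal + rng.normal(0, 0.5, t.size)  # signal buried in iid noise

smoothed = moving_average(observed, 25)
aligned = signal[12:-12]                  # align signal with the 'valid' output

# Mean squared error before and after filtering: averaging 25 iid noise samples
# should shrink the noise variance by roughly a factor of 25.
raw_err = np.mean((observed - signal) ** 2)      # ~0.25 (the noise variance)
smooth_err = np.mean((smoothed - aligned) ** 2)  # much smaller
```

The arms race the paragraph describes is precisely about doing better than this crude average: everyone can run a moving average, so the differentiation has to come from the analytical technique, not the data or the hardware.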
Chief Investment Officer
Logica Capital Advisers, LLC