Every year we invite some of the investment industry’s most creative thinkers to speak about their work at the Research Affiliates Advisory Panel conference. Along with Nobel laureates Vernon Smith and Harry Markowitz, the speakers at our 14th annual meeting included Campbell Harvey, Richard Roll, Andrew Karolyi, Bradford Cornell, Andrew Ang, Charles Gave, Tim Jenkinson, and our very own Rob Arnott.1 The richness of the speakers’ presentations defies any attempt at a full summary; I’ll limit myself to the points I found most intriguing and illuminating. I also acknowledge that this account may reflect my own capacity for misinterpretation as much as the genius of the speakers’ actual research.
Cam Harvey of Duke University’s Fuqua School of Business and the Man Group, who recently completed a 10-year stint as editor of the Journal of Finance, spoke about revising the traditional t-statistic standard to counter the industry’s collective data-snooping for new factors. Dick Roll presented a protocol for factor identification that helps classify a factor as either behavioral or risk-based in nature. These two topics are at the center of our research agenda (Hsu and Kalesnik, 2014; Hsu, Kalesnik, and Viswanathan, 2015).
Cam has written about the factor proliferation that has resulted from extensive data-mining in academia and the investment industry (Harvey, Liu, and Zhu, 2015; Harvey and Liu, 2015). As of year-end 2014, he and his colleagues had turned up 316 supposed factors reported in top journals and selected working papers, with an accelerating pace of new discoveries (roughly 40 per year). Cam’s approach to adjusting the traditional t-stat is mathematically sophisticated but conceptually intuitive. When one runs a backtest to assess a signal that is, in fact, uncorrelated with future returns, the probability of observing a t-stat greater than 2 is 2.5%. When thousands upon thousands of such backtests are conducted, however, the probability of seeing at least one t-stat greater than 2 approaches 100%.
To establish a sensible criterion for hypothesis testing in the age of dirt-cheap computing power, we need to adjust the t-stat for the aggregate number of backtests that might be performed in any given year by researchers collectively. Recognizing that there are far more professors and quantitative analysts running far more backtests today than 20 years ago, Cam argued that a t-stat threshold of 3 is certainly warranted now. Applying this standard of significance, Cam also concluded that, beyond the market factor, the factors that seem pervasive and believable are the old classics: the value, low beta, and momentum effects. The newer anomalies are most likely the result of data-mining.
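A back-of-the-envelope calculation shows why the bar must rise with the number of backtests. The sketch below is my own simplification, not Cam’s formal procedure: it treats each null backtest’s t-stat as an independent standard normal draw and asks how often at least one test clears a given threshold by pure chance.

```python
from scipy.stats import norm

def prob_false_discovery(t_threshold: float, n_tests: int) -> float:
    """Chance that at least one of n_tests independent null backtests
    produces a t-stat above t_threshold (one-sided normal approximation)."""
    p_single = 1.0 - norm.cdf(t_threshold)      # false positive rate of one test
    return 1.0 - (1.0 - p_single) ** n_tests    # at least one false positive

for n in (1, 100, 1_000, 10_000):
    print(f"n = {n:>6}: P(any t > 2) = {prob_false_discovery(2, n):.3f}, "
          f"P(any t > 3) = {prob_false_discovery(3, n):.3f}")
```

Even at a threshold of 3, a sufficiently industrious research community will still manufacture false discoveries by sheer volume; a higher bar mitigates the problem but does not eliminate it.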
I am happy to note that at Research Affiliates we adopt an even more draconian approach to research. For example, Dr. Feifei Li requires a t-stat greater than 4 from our more overzealous junior researchers. Indeed, as we add to our research team and thus to the number of backtests we perform in aggregate, we recognize that our “false discovery” rate also rises meaningfully. We have accordingly developed procedures for establishing robustness that go beyond the simple t-stat.
Richard Roll, who was recently appointed Linde Institute Professor of Finance at Caltech, reminded us that there are essentially three types of factor strategies:
- Those that do not appear to be correlated with macro risk exposures yet generate excess returns
- Those that are correlated with macro risks and thus produce excess returns
- Those that seem to be correlated with sources of volatility but don’t give rise to excess returns
Dick proposed an identification scheme that first extracts the macro risk factors through a principal component approach and then determines whether known factor strategies belong to the first, second, or third group. The principal components should be derived from a large universe of tradable portfolios representing diverse asset classes and equity markets as well as proven systematic strategies. Think of the extracted principal components as the primary sources of systematic volatility in the economy. A modified Fama–MacBeth cross-sectional regression approach, which uses only “real” assets to span the cross-section, should then be applied to determine which principal components command a premium and which do not. We then examine the “canonical” correlation between the principal components and the various factor strategies of interest. This helps us identify which factor strategies derive greater returns than their exposure to systematic volatility would warrant, and which, in contrast, derive less return than their exposure would suggest. For instance, Dick concluded that momentum is almost certainly a free lunch: it creates excess returns without exhibiting any meaningful covariance with true underlying risks (Pukthuanthong and Roll, 2014).
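A stripped-down sketch of the three-step pipeline may make the logic concrete. This is my own simplification, not Dick’s actual protocol: it substitutes plain correlations for canonical correlations, omits the “real assets only” spanning refinement, and all function and variable names are hypothetical.

```python
import numpy as np

def classify_factor(asset_returns, strategy_returns, n_components=5):
    """Sketch: (1) extract principal components from a broad universe of
    tradable portfolio returns, (2) estimate which components command a
    premium via Fama-MacBeth regressions, (3) measure how strongly a
    candidate factor strategy loads on those components."""
    T, N = asset_returns.shape

    # Step 1: principal components as proxies for systematic volatility.
    demeaned = asset_returns - asset_returns.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(demeaned, rowvar=False))
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]        # N x k
    pc_returns = demeaned @ top                                       # T x k

    # Step 2: Fama-MacBeth. Time-series regressions give each asset's
    # betas; period-by-period cross-sectional regressions give premia.
    X_ts = np.column_stack([np.ones(T), pc_returns])
    betas = np.linalg.lstsq(X_ts, asset_returns, rcond=None)[0][1:].T # N x k
    X_cs = np.column_stack([np.ones(N), betas])
    lambdas = np.array(
        [np.linalg.lstsq(X_cs, asset_returns[t], rcond=None)[0][1:]
         for t in range(T)])                                          # T x k
    premia_t = lambdas.mean(0) / (lambdas.std(0, ddof=1) / np.sqrt(T))

    # Step 3: correlation of the strategy with each component. A strategy
    # earning excess returns with negligible loadings on the priced
    # components falls into the first group -- a candidate "free lunch."
    loadings = np.array(
        [np.corrcoef(strategy_returns, pc_returns[:, j])[0, 1]
         for j in range(n_components)])
    return premia_t, loadings
```

Under this scheme, momentum’s excess return paired with near-zero loadings on the priced components is precisely what earns it the “free lunch” label.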
The factor emphasis of the meeting continued with Andrew Ang, the Ann F. Kaplan Professor of Business at Columbia. Andrew presented a framework for factor investing that encourages investors to think more about factors and less about asset classes (Ang, 2014). Andrew argues that factors are to asset classes what nutrients are to meals. Ultimately, what we care about are the vitamins, amino acids, proteins, carbohydrates, and other nutrients we get from meals.
The beauty of this analogy is that it illustrates wonderfully both the power of the factor framework for helping investors invest better and the danger of a narrow focus on factor investing that ignores asset classes. The factor framework tells us that whether we invest in U.S., European, Japanese, or Chinese equities, we are exposed to the global growth factor and earn a risk premium associated with that exposure. This is similar to recognizing that whether we eat a steak, a duck breast, or a salmon fillet—seemingly very different meals—we are nonetheless eating protein, with few other nutrients such as fiber, vitamin C, or complex carbohydrates. This intuition helps us understand our portfolio diversification more scientifically.
However, there is a deeper intuition that is unfortunately missed by most proponents of factor investing. It is dangerous to assume that factor loadings are the only salient information in investing; I think it is a mistake to assume that portfolios with similar factor exposures are largely identical, irrespective of the prices charged. There are numerous combinations of different assets that result in similar factor exposures, just as a large variety of foods can be combined to create different meals providing similar nutrients. While my mother cares deeply about the nutrients in the meals she prepares, she cares just as much about the cost of the ingredients that go into her dishes. If salmon is on sale at the supermarket, Mom will prepare a meal based on salmon.
We need to remember that investors transact in the asset space and that there are often a dozen different asset mixes that provide exposure to the same factor. The successful investor will be the one who buys her factor exposures cheaply. For example, we can buy global growth through emerging market stocks or U.S. stocks. Currently, emerging market stocks have a cyclically adjusted P/E (CAPE) of about 12, and U.S. stocks, about 25. Does it not matter whether we purchase global growth through EM equities or U.S. equities?
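One crude way to express the price difference is the cyclically adjusted earnings yield, the reciprocal of CAPE. The arithmetic below is my own illustration using the round numbers above, not a forecast of returns:

```python
# Earnings yield (1 / CAPE) as a rough gauge of the price paid for the
# same global-growth exposure. CAPE values are the round figures cited above.
cape_ratios = {"Emerging market equities": 12.0, "U.S. equities": 25.0}

for market, cape in cape_ratios.items():
    print(f"{market}: CAPE {cape:.0f} -> earnings yield {1 / cape:.1%}")
# Emerging market equities: CAPE 12 -> earnings yield 8.3%
# U.S. equities: CAPE 25 -> earnings yield 4.0%
```

Roughly the same factor exposure at about twice the earnings yield is the sense in which a factor can be bought “cheaply.”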
I also wish to offer a note of caution on the emerging trend toward “pure” factor portfolios. Going back to the food/nutrient analogy: would one consider it wise to replace traditional home-cooked meals with a chemical cocktail of vitamins and nutritional supplements? Similarly, would factor portfolios constructed from long–short portfolios based on complex quantitative models provide more effective and complete access to the essential drivers of returns?