Advisor Perspectives welcomes guest contributions. The views presented here do not necessarily represent those of Advisor Perspectives.
Michael Edesess’ article, The Trend that is Ruining Finance Research, makes the case that financial research is flawed. In this two-part article series, I will examine the points that Edesess raised in some detail. His arguments have some merit. Importantly, however, his article fails to undermine the value of finance research in general. Rather, his points serve to highlight that finance is a real profession requiring skills, education, and experience that differentiate professionals from laymen.
Edesess’ case against evidence-based investing rests on three general assertions. First, there is a very real issue with using a static t-statistic threshold when the number of independent tests becomes very large. Second, financial research is often conducted on a universe of securities that includes a large number of micro-cap and nano-cap stocks. These stocks often do not trade regularly, and exhibit large overnight jumps in prices. They are also illiquid and costly to trade. Third, the regression models used in most financial research are poorly calibrated to draw conclusions from non-stationary financial data with large outliers.
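To make the first assertion concrete: under a Bonferroni-style correction for multiple comparisons, the t-statistic required for significance (approximated here by a normal z threshold) grows as the number of independent tests grows. This is a minimal sketch of that arithmetic; the 5% significance level and the test counts are illustrative assumptions, not figures from Edesess’ article.

```python
from statistics import NormalDist

def required_z(alpha: float, num_tests: int) -> float:
    """Two-sided z threshold for significance after a Bonferroni
    correction that divides alpha across num_tests independent tests."""
    adjusted_alpha = alpha / num_tests
    return NormalDist().inv_cdf(1 - adjusted_alpha / 2)

# With one test, the familiar ~1.96 threshold applies; with thousands
# of tests, the bar rises well past 4.
for m in (1, 100, 10_000):
    print(m, round(required_z(0.05, m), 2))
```

The point is qualitative: a factor that clears t = 2 in a single pre-specified test is far less impressive when it emerged from thousands of candidate strategies.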
This article will explore the issues around the latter two challenges. My next article will tackle the “p-hacking” issue in finance, and propose a framework to help those who embrace evidence-based investing to make judicious decisions based on a more thoughtful interpretation of finance research.
An un-investable investment universe
A large proportion of finance studies perform their analysis on a universe of stocks that is practically un-investable for most investors, because it includes stocks with very small market capitalizations. In fact, the top 1,000 stocks by market capitalization represent over 93% of the total aggregate market capitalization of all U.S. stocks, which means the bottom 3,000 or so stocks account for just 7% of total market capitalization. The median market cap of a stock in the bottom half of the market capitalization distribution is just over $1 billion.
Figure 1. Cumulative proportion of U.S. market capitalization
Mathematically, only a very small portion of investment capital can be deployed outside the top 1,000 or so stocks. Smaller stocks are also much less liquid, with less frequent trading, wider bid-ask spreads and larger overnight volatility. Moreover, these companies tend to trade at low prices, which means trading costs are larger for institutions that pay commissions on a per-share basis.
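The per-share commission point is simple arithmetic worth seeing explicitly: a fixed commission per share consumes a much larger fraction of trade value when the share price is low. The $0.005-per-share commission and the $200 and $2 prices below are hypothetical numbers chosen for illustration.

```python
def commission_drag(price: float, commission_per_share: float = 0.005) -> float:
    """Per-share commission as a fraction of trade value (one side of the trade)."""
    return commission_per_share / price

# Hypothetical prices: a $200 large-cap vs. a $2 nano-cap.
print(f"{commission_drag(200.0):.4%}")  # 0.0025% of trade value
print(f"{commission_drag(2.0):.4%}")    # 0.2500% of trade value
```

At a $2 share price the same nominal commission is 100x more costly in percentage terms, before accounting for bid-ask spreads and market impact, which also tend to be worse in small names.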
For these reasons, practitioner-oriented studies should report results separately for larger and smaller companies. And many do. In particular, many of the papers from AQR break down the performance of anomalies into effects among large (top 30% by market cap), mid (middle 40%) and small (bottom 30%) companies. The paper “The Role of Shorting, Firm Size, and Time on Market Anomalies” by Israel and Moskowitz at AQR focuses specifically on this topic. Figure 2 below shows the results for traditional value and momentum factor portfolios across five market capitalization buckets from 1926-2011.
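The 30/40/30 size split described above amounts to a simple bucketing rule on market-cap rank. This sketch expresses it in terms of a stock's market-cap percentile; the function name and percentile-based interface are my own illustration, not code from the Israel and Moskowitz paper.

```python
def size_bucket(cap_percentile: float) -> str:
    """Assign a stock to a size bucket given its market-cap percentile
    (0 = smallest stock in the universe, 100 = largest), using the
    top-30% / middle-40% / bottom-30% convention described in the text."""
    if cap_percentile >= 70:
        return "large"
    if cap_percentile >= 30:
        return "mid"
    return "small"

print(size_bucket(95))  # large
print(size_bucket(50))  # mid
print(size_bucket(10))  # small
```

Reporting anomaly returns within each bucket lets a reader judge whether an effect survives in the large-cap segment where most institutional capital can actually be deployed.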
Figure 2. Performance of value and momentum factor portfolios conditioned on market capitalization
Source: Israel, R., and T. Moskowitz. “The Role of Shorting, Firm Size, and Time on Market Anomalies.”
Journal of Financial Economics, Vol. 108, No. 2 (2013)
By Adam Butler, read the full article here.