The Cross-Section of Expected Returns
Duke University – Fuqua School of Business; National Bureau of Economic Research (NBER)
Texas A&M University, Department of Finance
Duke University – Fuqua School of Business
Hundreds of papers and hundreds of factors attempt to explain the cross-section of expected returns. Given this extensive data mining, it does not make any economic or statistical sense to use the usual significance criteria for a newly discovered factor, e.g., a t-ratio greater than 2.0. However, what hurdle should be used for current research? Our paper introduces a multiple testing framework and provides a time series of historical significance cutoffs from the first empirical tests in 1967 to today. Our new method allows for correlation among the tests as well as publication bias. We also project forward 20 years assuming the rate of factor production remains similar to the experience of the last few years. The estimation of our model suggests that today a newly discovered factor needs to clear a much higher hurdle, with a t-ratio greater than 3.0. Echoing a recent disturbing conclusion in the medical literature, we argue that most claimed research findings in financial economics are likely false.
The Cross-Section of Expected Returns – Introduction
Forty years ago, one of the first tests of the Capital Asset Pricing Model (CAPM) found that the market beta was a significant explanatory variable for the cross-section of expected returns. The reported t-ratio of 2.57 in Fama and MacBeth (1973) comfortably exceeded the usual cutoff of 2.0. Since that time, however, hundreds of papers have tried to explain the cross-section of expected returns. Given the known number of factors that have been tried, and the reasonable assumption that many more factors have been tried but did not make it to publication, the usual cutoff levels for statistical significance are not appropriate. We present a new framework that allows for multiple tests and derive recommended statistical significance levels for current research in asset pricing.
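The arithmetic behind this point can be sketched directly. A single t-ratio of 2.57 maps to a two-sided p-value of about 0.01 under a normal approximation, which looks comfortably significant. But if one accounts for many tests, say M = 313, using the number of papers sampled here purely for illustration, a simple Bonferroni correction raises the required hurdle sharply. This is only a back-of-the-envelope sketch, not the paper's own (more refined) procedure:

```python
import math

def norm_cdf(t):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def t_to_pvalue(t):
    # two-sided p-value under a standard-normal approximation
    return 2.0 * (1.0 - norm_cdf(abs(t)))

def pvalue_to_t(p):
    # invert the two-sided p-value by bisection on [0, 10]
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if t_to_pvalue(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_single = t_to_pvalue(2.57)           # ~0.0102: "significant" taken in isolation
M = 313                                # illustrative number of tests
p_bonferroni = 0.05 / M                # family-wise 5% level split across M tests
t_hurdle = pvalue_to_t(p_bonferroni)   # roughly 3.8: a much higher bar
```

Bonferroni is deliberately conservative because it treats the tests as a worst case; the hurdles derived in the paper also account for correlation among tests and for publication bias, so they differ from this illustration.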
We begin with 313 papers that study cross-sectional return patterns published in a selection of journals. We provide recommended p-values from the first empirical tests in 1967 through to the present day. We also project minimum t-ratios through 2032, assuming the rate of "factor production" remains similar to recent experience. We present a taxonomy of historical factors as well as definitions.1
Our research is related to a recent paper by McLean and Pontiff (2014), who argue that certain stock market anomalies are less anomalous after being published.2 Their paper tests the statistical biases emphasized in Leamer (1978), Ross (1989), Lo and MacKinlay (1990), Fama (1991) and Schwert (2003).
Our paper also adds to the recent literature on biases and inefficiencies in cross-sectional regression studies. Lewellen, Nagel and Shanken (2010) critique the usual practice of using cross-sectional R2s and pricing errors to judge the success of a model, and show that the explanatory powers of many previously documented factors are spurious.3 Balduzzi and Robotti (2008) challenge the traditional approach of estimating factor risk premia via cross-sectional regressions and advocate a factor projection approach. Our work focuses on evaluating the statistical significance of a factor given the previous tests on other factors. Our goal is to use a multiple testing framework both to re-evaluate past research and to provide a new benchmark for current and future research.
We tackle multiple hypothesis testing from the frequentist perspective. Bayesian approaches to multiple testing and variable selection also exist. However, the high dimensionality of the problem, combined with the fact that we do not observe all the factors that have been tried, poses a serious challenge for Bayesian methods. While our frequentist approach can accommodate this missing-data problem, it is unclear how to do so in a Bayesian framework. Nonetheless, we provide a detailed discussion of Bayesian methods in the paper.
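To make the frequentist multiple-testing idea concrete, one standard device is the Benjamini-Hochberg step-up procedure, which controls the false discovery rate rather than the family-wise error rate. This is a generic illustration on hypothetical factor p-values, not a reproduction of the paper's estimation:

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg step-up: return indices of rejected hypotheses
    while controlling the false discovery rate at level q."""
    m = len(pvalues)
    # sort hypothesis indices by ascending p-value
    order = sorted(range(m), key=lambda i: pvalues[i])
    # find the largest rank k with p_(k) <= q * k / m
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= q * rank / m:
            k_max = rank
    # reject the k_max hypotheses with the smallest p-values
    return sorted(order[:k_max])

# hypothetical p-values for ten candidate factors (illustrative only)
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.2, 0.4, 0.6, 0.9]
rejected = benjamini_hochberg(pvals, q=0.05)  # only the first two survive
```

Note that several p-values below the naive 0.05 cutoff fail to survive the step-up comparison: under multiple testing, marginal "discoveries" are exactly the ones most likely to be false. The classical Benjamini-Hochberg procedure assumes independent (or positively dependent) tests; handling general correlation among tests, as the paper does, requires further adjustment.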