Does Academic Research Destroy Stock Return Predictability?

R. David McLean

University of Alberta

Phone: 774-270-2300

Email: [email protected]

Jeffrey Pontiff

Boston College

Phone: 617-552-6786

Email: [email protected]

Abstract

We study the out-of-sample and post-publication return-predictability of 95 characteristics that published academic studies show to predict cross-sectional stock returns. We estimate an upper bound decline in predictability due to statistical bias of 25%, and a post-publication decline, which we attribute to both statistical bias and informed trading, of 56%. Our findings support the contention that investors learn about mispricing from publications. Post-publication declines are greater for predictors with larger in-sample returns, and returns are lower for predictors concentrated in stocks with low idiosyncratic risk and high liquidity. Post-publication, predictor portfolios exhibit increases in correlations with other portfolios that are based on published predictors.

Does Academic Research Destroy Stock Return Predictability? – Introduction

Finance research has uncovered many cross-sectional relations between predetermined variables and future stock returns. Beyond historical curiosity, these relations are relevant to the extent they provide insight into the future. Whether or not the typical relation continues outside of a study's original sample is an open question, the answer to which can shed light on why cross-sectional return predictability is observed in the first place. Although several papers note whether a specific cross-sectional relation continues, no study compares in-sample returns, post-sample returns, and post-publication returns among a large sample of predictors. Moreover, previous studies produce contradictory messages. For example, Jegadeesh and Titman (2001) show that the relative returns to high-momentum stocks increased after the publication of their 1993 paper, while Schwert (2003) argues that since the publication of the value and size effects, index funds based on these variables fail to generate alpha.

In this paper, we synthesize information from 95 predictors that have been shown to explain cross-sectional stock returns in peer-reviewed finance, accounting, and economics journals. Our goal is to better understand what happens to return-predictability outside of a study’s sample period. We compare each predictor’s returns over three distinct periods: (i) the original study’s sample; (ii) after the original sample but before publication; and (iii) post-publication.
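
To make the three-period comparison concrete, the sketch below partitions one predictor's long-short returns into the original sample, the post-sample pre-publication window, and the post-publication window, and averages each. It is an illustration only, written in Python with pandas on hypothetical inputs (a simulated return series and made-up sample-end and publication dates), not the authors' code or data.

```python
# Minimal sketch: average a predictor's long-short return in each of the
# three windows compared in the paper. Inputs here are hypothetical.
import pandas as pd


def mean_return_by_period(returns: pd.Series,
                          sample_end: str,
                          publication_date: str) -> pd.Series:
    """Mean monthly long-short return in-sample, post-sample pre-publication,
    and post-publication."""
    sample_end = pd.Timestamp(sample_end)
    publication_date = pd.Timestamp(publication_date)

    in_sample = returns[returns.index <= sample_end]
    post_sample = returns[(returns.index > sample_end)
                          & (returns.index <= publication_date)]
    post_pub = returns[returns.index > publication_date]

    return pd.Series({
        "in_sample": in_sample.mean(),
        "post_sample_pre_pub": post_sample.mean(),
        "post_publication": post_pub.mean(),
    })


if __name__ == "__main__":
    import numpy as np

    # Simulated monthly long-short returns for one predictor (illustration only)
    dates = pd.date_range("1980-01-31", "2010-12-31", freq="M")
    rng = np.random.default_rng(0)
    ls_returns = pd.Series(rng.normal(0.005, 0.03, len(dates)), index=dates)

    # Hypothetical sample-end and publication dates
    print(mean_return_by_period(ls_returns, "1995-12-31", "1998-06-30"))
```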

Previous studies contend that return-predictability is the outcome of a rational asset pricing model, statistical biases, or mispricing. By comparing return-predictability across these three distinct periods, we can shed light on what best explains the typical predictor's returns.

Pre-publication, out-of-sample predictability. If return-predictability in published studies is solely the result of statistical biases, then predictability should disappear out of sample. We use the term “statistical biases” to describe a broad array of biases that are inherent to research.

At least three statistical biases could affect observed stock return-predictability: specification selection bias, sample selection bias, and multiple testing bias. Leamer (1978) points out that a bias arises when the choice of a method is influenced by the method's result. Lo and MacKinlay (1990) study a version of the specification selection bias in finance, and refer to it as the "data snooping bias." The sample selection bias is studied in Heckman (1979); this bias arises when the sample construction is influenced by the result of the test. A multiple testing bias arises when researchers conduct multiple tests of the same hypothesis. This bias is applied to finance by Fama (1991), who notes that, "With clever researchers on both sides of the efficiency fence, rummaging for forecasting variables, we are sure to find instances of 'reliable' return predictability that are in fact spurious." Harvey, Liu, and Zhu (2013) argue that this bias has worsened over time due to the growth of finance research. To the extent that the results of the studies in our sample are caused by such biases, we should observe a decline in return-predictability out of sample.
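
The multiple-testing concern is easy to illustrate with a small simulation: generate many candidate signals that have no true relation to returns, keep those that look statistically significant in-sample, and observe that their apparent profitability disappears out-of-sample. The Python sketch below illustrates the bias only; the signal count, return volatility, and t-statistic cutoff are arbitrary assumptions, and this is not the paper's methodology.

```python
# Simulation of multiple-testing (data-snooping) bias: every candidate
# "predictor" is pure noise, yet searching over many of them turns up some
# that look significant in-sample and then earn roughly zero out-of-sample.
import numpy as np

rng = np.random.default_rng(42)
n_predictors = 200       # candidate signals, all unrelated to returns
n_in, n_out = 240, 240   # months in-sample and out-of-sample

# Simulated monthly long-short returns for each candidate signal
in_sample = rng.normal(0.0, 0.03, size=(n_predictors, n_in))
out_sample = rng.normal(0.0, 0.03, size=(n_predictors, n_out))

# t-statistic of the in-sample mean return for each candidate
t_stats = in_sample.mean(axis=1) / (in_sample.std(axis=1, ddof=1) / np.sqrt(n_in))

# "Publish" the candidates that clear |t| > 2 in-sample
selected = np.abs(t_stats) > 2.0
print(f"spurious 'predictors' found in-sample: {selected.sum()} of {n_predictors}")

# Trade each selected signal in its in-sample direction; the out-of-sample
# average return clusters around zero, i.e. the predictability is spurious.
signs = np.sign(in_sample.mean(axis=1)[selected])
print("avg in-sample return:     ", (signs * in_sample.mean(axis=1)[selected]).mean())
print("avg out-of-sample return: ", (signs * out_sample.mean(axis=1)[selected]).mean())
```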
