Does Academic Research Destroy Stock Return-Predictability? via The Ben Graham Centre For Value Investing

R. David McLean

University of Alberta and MIT Sloan School of Management

Phone: 774-270-2300

Jeffrey Pontiff

Boston College

Phone: 617-552-6786

May 16, 2013

Abstract

We study the out-of-sample and post-publication return-predictability of 82 characteristics that are identified in published academic studies. The average out-of-sample decay due to statistical bias is about 10%, but not statistically different from zero. The average post-publication decay, which we attribute to both statistical bias and price pressure from aware investors, is about 35%, and statistically different from both 0% and 100%. Our findings point to mispricing as the source of predictability. Post-publication, stocks in characteristic portfolios experience higher volume, variance, and short interest, and higher correlations with portfolios that are based on published characteristics. Consistent with costly (limited) arbitrage, post-publication return declines are greater for characteristic portfolios that consist of stocks with low idiosyncratic risk.

Does Academic Research Destroy Stock Return-Predictability? – Introduction

Finance research has uncovered many cross-sectional relations between predetermined variables and future stock returns. Beyond historical curiosity, these relations are relevant to the extent they provide insight into the future. Whether the typical relation continues outside of a study’s original sample is an open question, the answer to which can shed light on why cross-sectional return predictability is observed in the first place. Although several papers note whether a specific cross-sectional relation continues, no study compares in-sample returns, post-sample returns, and post-publication returns among a large sample of predictors. Moreover, previous studies produce contradictory messages. As examples, Jegadeesh and Titman (2001) show that the relative returns to high-momentum stocks increased after the publication of their 1993 paper, while Schwert (2003) argues that since the publication of the value and size effects, index funds based on these variables fail to generate alpha.

In this paper, we synthesize information from 82 characteristics that have been shown to explain cross-sectional stock returns in peer-reviewed finance, accounting, and economics journals. Our goal is to better understand what happens to return-predictability outside of a study’s sample period. We compare each characteristic’s return-predictability over three distinct periods: (i) the original study’s sample; (ii) after the original sample but before publication; and (iii) post-publication. Previous studies contend that return-predictability is the outcome of either a rational asset pricing model, statistical biases, or mispricing. By comparing return-predictability across these three distinct periods, we can shed light on what best explains the typical characteristic’s return-predictability.
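To make the three-period comparison concrete, here is a minimal sketch of how one characteristic portfolio’s long-short returns could be split into the in-sample, pre-publication, and post-publication windows and averaged. The return series, sample-end date, and publication date below are synthetic placeholders, not the paper’s actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly long-short returns for one characteristic portfolio.
# Assumed dates (illustrative only): sample ends Dec 1999, paper published Dec 2001.
months = np.arange("1980-01", "2010-01", dtype="datetime64[M]")
returns = rng.normal(0.005, 0.03, size=months.size)  # placeholder data

sample_end = np.datetime64("1999-12")
pub_date = np.datetime64("2001-12")

in_sample = returns[months <= sample_end]
pre_pub = returns[(months > sample_end) & (months <= pub_date)]
post_pub = returns[months > pub_date]

for label, r in [("in-sample", in_sample),
                 ("pre-publication", pre_pub),
                 ("post-publication", post_pub)]:
    print(f"{label}: mean monthly return = {r.mean():.4f} (n = {r.size})")

# Decay in the spirit of the paper: one minus the ratio of out-of-sample
# mean return to in-sample mean return.
print("post-publication decay:", 1 - post_pub.mean() / in_sample.mean())
```

With real data, the pre-publication window isolates statistical bias (the characteristic is out of sample but not yet widely known), while the post-publication window adds the effect of trading by investors who learned of the result.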

Pre-publication, out-of-sample predictability. If return-predictability in published studies is the result of statistical biases, then predictability should disappear out of sample. We use the term “statistical biases” to describe a broad array of biases that are inherent to research.

At least three statistical biases could affect observed stock return-predictability. First, Leamer (1978) shows the impact of “specification search” biases, which occur if the choice of model is influenced by the model’s result. Lo and MacKinlay (1990) examine a specific type of specification search bias found in finance, which they refer to as the “data snooping bias.” A second type of bias is sample selection bias, studied in Heckman (1979), where the sample construction is influenced by the result of the test. A third type of bias arises when researchers conduct multiple tests of the same hypothesis. This bias goes back to Bonferroni (1935) and is applied to finance by Fama (1991) when he notes that, “With clever researchers on both sides of the efficiency fence, rummaging for forecasting variables, we are sure to find instances of ‘reliable’ return predictability that are in fact spurious.” Harvey, Liu, and Zhu (2013) argue that this bias has worsened over time; as researchers mine an increasing number of characteristics, an increasing number of studies will be published that falsely reject the null. To the extent that the results of the studies in our sample are caused by such biases, we should observe a decline in return-predictability out-of-sample.
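The multiple-testing concern can be illustrated with a small Monte Carlo sketch: if many pure-noise “characteristics” are each tested against the usual |t| > 2 hurdle, roughly 5% clear it by chance, and any that are then published should show zero predictability out of sample. The counts below are from simulated noise, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 1,000 candidate characteristics, each with NO true
# relation to returns, tested on 300 "monthly" long-short returns drawn
# from a zero-mean distribution.
n_tests, n_months = 1000, 300
fake_returns = rng.normal(0.0, 0.03, size=(n_tests, n_months))

means = fake_returns.mean(axis=1)
ses = fake_returns.std(axis=1, ddof=1) / np.sqrt(n_months)
t_stats = means / ses

# Count "discoveries" that clear the conventional |t| > 2 bar.
spurious = np.abs(t_stats) > 2
print(f"spurious 'predictors': {spurious.sum()} of {n_tests}")
```

If only these chance discoveries reach publication, the published literature overstates true predictability, which is exactly why out-of-sample decay is informative about statistical bias.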

Return-Predictability
