Financial Analysts Were Only Wrong By 25%

Andrew Stotz

University of Science and Technology of China (USTC) – School of Management

Wei Lu

University of Science and Technology of China (USTC) – School of Management

November 24, 2015

Abstract:

Over the past decade, financial analysts worldwide have produced company earnings forecasts that were, on average, optimistic by 25%. This paper is a descriptive review of analyst performance in providing company earnings forecasts for all listed companies in the world. Developed-market companies make up 74% of the universe, with the balance from emerging markets. The average company size is US$5,541m.

Financial Analysts Were Only Wrong By 25% – Introduction

In this study, we try to answer the question: “Are financial analysts across the globe accurate in their earnings forecasts?” This is an important inquiry, as the fortunes of most individuals are now deeply tied to the predictions and actions of financial analysts.

While there are many papers on the forecast accuracy of such analysts, our research focuses on the whole data set of analysts across the globe over the past thirteen years. To arrive at our conclusion, we first gather data on analyst forecasts, apply a cut-off to remove extreme data points, and consider only companies covered by at least three analysts.

Though considerable work has been done on this, we are attempting to consolidate the research and terminology.

Literature Review

As early as the 1960s, studies were conducted to assess forecast accuracy in the stock market. The early studies mainly focused on how accurate an individual firm was at forecasting its own profits. Crichfield et al. (1978) moved the focus to financial analysts’ Earnings Per Share (EPS) forecast accuracy. The authors concluded that forecast accuracy increased as the earnings reporting date approached. A critique of this conclusion is that a forecast made 12 months prior incorporates considerably less information than a forecast made closer to the announcement date; a simple fix is to calculate a rolling 12-month accuracy. Almost every paper, new and old, uses the same definition, or approximately the same, but each words it differently. To make it more convenient for the reader, we have chosen to standardize the measures and definitions.

  • Forecast Error (FE) is the difference between the Forecast earnings (F) and Actual earnings (A) [FE=F-A]
  • Scaled Forecast Error (SFE) is the FE relative to something such as Share Price (P) or Actual earnings (A) [SFE=FE/|A|*100]. We use the absolute value of A in the denominator to ensure the correct sign emerges when A is negative. To illustrate, imagine three different companies had Actual earnings of -4, and Forecasts of -5, -3, or +5. For the first company, the analyst Forecast was below Actual, the second was slightly above, and the third, largely above. However, without taking the absolute value of -4, the SFE would be 25%, -25% and -225%: all incorrect. By taking the absolute value of -4, we show the correct SFE of -25%, 25%, and 225% respectively.
  • Absolute Forecast Error (AFE) is the absolute value of the difference between Forecast and Actual earnings [AFE=|F-A|]
  • Scaled Absolute Forecast Error (SAFE) is the AFE relative to something such as Share Price or Actual earnings [SAFE=AFE/|A|*100]
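The four measures above can be sketched in Python (the function names are ours, chosen for illustration; the three-company example with Actual earnings of -4 is taken from the SFE definition):

```python
def forecast_error(f, a):
    """FE = F - A: positive when the analyst was optimistic."""
    return f - a

def scaled_forecast_error(f, a):
    """SFE = FE / |A| * 100; |A| keeps the sign correct when A < 0."""
    return forecast_error(f, a) / abs(a) * 100

def absolute_forecast_error(f, a):
    """AFE = |F - A|: magnitude of the miss, direction ignored."""
    return abs(f - a)

def scaled_absolute_forecast_error(f, a):
    """SAFE = AFE / |A| * 100."""
    return absolute_forecast_error(f, a) / abs(a) * 100

# Three companies with Actual earnings of -4 and Forecasts of -5, -3, +5:
for f in (-5, -3, 5):
    print(scaled_forecast_error(f, -4))  # -25.0, 25.0, 225.0
```

Dividing by `abs(a)` rather than `a` is exactly the fix described above: with a plain `a` in the denominator the three SFEs would come out as 25%, -25% and -225%, reversing every sign.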

A review of analyst forecast accuracy shows that in most cases the authors agree that analysts are inaccurate and optimistic. In addition to calculating the degree of forecast error, the authors focus on two main issues: 1) explaining this inaccuracy and 2) identifying whether this forecast error follows a pattern. In most papers, instead of working with individual analyst forecast errors, the authors take an average at some level: for example, the average of forecast errors for all analysts following a firm in a particular year (usually referred to as the “consensus forecast”), the average of all analyst forecast errors for firms in a particular sector for a given year, or the average of all analyst forecast errors for companies subject to a particular accounting treatment.

Hong and Kubik (2003) investigated the relationship between individual analysts’ forecast accuracy and their career outcomes. The authors investigated analysts in the United States of America (USA) from 1993-2000 using Institutional Brokers’ Estimate System (I/B/E/S) data. They measured the accuracy of individual analysts in two ways: 1) SAFE (scaled by the share price at the time of the earnings announcement) across all firms covered by the analyst in a particular year, and 2) relative forecast accuracy, which ranks analysts against each other. Relative forecast accuracy among analysts is outside the scope of this paper, but relevant to Hong and Kubik’s conclusions. The authors found evidence that brokerage houses rewarded optimism rather than forecast accuracy, especially when the firms being forecasted were underwritten by the brokerage house that employed the analyst. The key takeaway from this paper is that the authors found a relation between analysts’ forecast biases and their career outcomes, and a probable conflict of interest arising from employment.
