Accounting Quality And Credit Ratings’ Ability To Measure Default Risk
Michael V Chin
May 20, 2016
This study examines whether the quality of borrowers’ accounting information affects the accuracy and timeliness of credit ratings issued by rating agencies. I consider two possible effects. The news effect posits that higher quality accounting provides better information to credit rating agencies, enabling them to develop better ratings. The discipline effect describes how the timely public disclosure of bad news can limit rating agencies’ ability and incentive to issue inflated ratings. I utilize rating data from two major agencies: Standard & Poor’s (S&P), an issuer-paid agency that obtains private information and may have incentives to cater to issuers; and Egan-Jones Ratings Company (EJR), an investor-paid agency that relies solely on public information to develop its ratings. The differences between these agencies make EJR an effective control group for the identification of the two accounting quality effects. I find that debt issuers with earnings that exhibit more timely loss recognition have credit ratings that predict default more accurately and are downgraded more promptly. I also find that issuers with upward-managed earnings have less timely rating downgrades. In most settings, these results are comparable for both rating agencies, consistent with the news effect. However, the results are more pronounced for EJR ratings relative to S&P ratings for firms near default and firms issuing restatements, when agency reputation costs are high and conflicts of interest are low. These findings provide evidence in support of the discipline effect of accounting quality.
Introduction
In this study, I examine how the quality of the accounting information that debt issuers provide influences the timeliness and accuracy of their credit ratings. Rating agencies are information intermediaries whose purpose is to reduce information asymmetry between issuers and market participants. Investors and the public expect ratings to provide a reliable and timely measure of a debt issuer’s “ability and willingness to meet its financial obligations,”1 but empirical and anecdotal evidence suggests that credit ratings do not always provide an accurate measure of credit risk. For example, the largest agencies were widely acknowledged to have failed to predict significant credit events, such as the Asian crisis of the late 1990s, the Enron and WorldCom bankruptcies, and the global financial crisis of the late 2000s. These events helped motivate academic research investigating why some credit ratings perform better than others (e.g., Beaver et al., 2006; Cheng and Neamtiu, 2009; Becker and Milbourn, 2011; Strobl and Xia, 2012; Bruno et al., 2013). Researchers, as well as the media and regulators, have focused primarily on rating agencies’ conflicts of interest or regulatory frictions as drivers of biased or sluggish ratings. This is the first study to examine whether rating quality is also a function of the quality of the public information available to rating agencies.
Although agencies use many sources of information to generate their credit ratings, I focus on borrowers’ financial accounting information, which is a particularly important source. Rating agencies incorporate information from the financial statements into default prediction models that are central to the development of ratings. Standard & Poor’s states that “a company’s financial reports are the starting point for the financial analysis of a rated entity.”2 Akins (2013) finds that high reporting quality reduces uncertainty among debt market participants, including credit rating agencies. If accounting provides incremental information to rating agencies, then higher quality accounting gives rating agencies better information that enables them to develop more timely and accurate ratings. I refer to this benefit of accounting quality as the news effect.
High quality accounting information can also lead to more accurate credit ratings because of rating agencies’ conflicting incentives. Analytical and empirical studies suggest that rating agencies compensated by debt issuers may issue inflated ratings and delay rating downgrades in order to satisfy their clients, who prefer ratings that are both high and stable (Cantor and Mann, 2007; Becker and Milbourn, 2011; Manso, 2013; Bruno et al., 2013). Further, as a nationally recognized statistical rating organization (NRSRO), S&P’s ratings are referenced in numerous federal and state regulations (Covitz and Harrison, 2003), perhaps most importantly to determine whether a security is considered “investment grade” or “speculative.” They are also widely used to determine the portfolio allocations of large investors, such as pensions, and in debt contract covenants and performance pricing provisions (Asquith et al., 2005; Ball et al., 2008). These additional uses of S&P ratings can magnify the real effects of rating changes, which may further compel S&P to maintain rating stability at the expense of timeliness. As long as the users who rely on ratings are unaware of this bias, agencies can continue the practice without harming their reputations. However, informative financial reporting helps investors perform an independent evaluation of credit risk and may allow them to recognize inflated ratings. In this way, high accounting quality may have a discipline effect, compelling rating agencies to issue more accurate ratings to avoid damage to their reputations.
In order to identify and distinguish the news and discipline effects, I use ratings data from two nationally recognized rating agencies: Standard & Poor’s (S&P) and Egan-Jones Ratings Company (EJR). S&P is compensated by debt issuers, while EJR is paid by outside investors. Its relationship with issuers gives S&P access to private information directly from management. In contrast, EJR relies solely on public information to develop its ratings. S&P’s fee arrangement potentially gives it incentives to issue inflated ratings, while EJR should be free of such conflicts.3 Because of these differences, EJR serves as an effective control group throughout my analysis. In addition, using a sample of firms that are rated by both agencies mitigates concerns about correlated omitted variables arising from unobservable differences between firms.