The Empirical Economist’s Toolkit: From Models To Methods


Matthew T. Panhans

Duke University, Department of Economics

John D. Singleton

Duke University, Department of Economics

May 27, 2015

Center for the History of Political Economy (CHOPE) Working Paper No. 2015-03

Abstract:

While historians of economics have noted the transition toward empirical work in economics since the 1970s, less understood is the shift toward “quasi-experimental” methods in applied microeconomics. Angrist and Pischke (2010) trumpet the wide application of these methods as a “credibility revolution” in econometrics that has finally provided persuasive answers to a diverse set of questions. Particularly influential in the applied areas of labor, education, public, and health economics, the methods shape the knowledge produced by economists and the expertise they possess. First documenting their growth bibliometrically, this paper aims to illuminate the origins, content, and contexts of quasi-experimental research designs, which seek natural experiments to justify causal inference. To highlight lines of continuity and discontinuity in the transition, the quasi-experimental program is situated in the historical context of the Cowles econometric framework and a case study from the economics of education is used to contrast the practical implementation of the approaches. Finally, significant historical contexts of the paradigm shift are explored, including the marketability of quasi-experimental methods and the 1980s crisis in econometrics.

The Empirical Economist’s Toolkit: From Models To Methods – Introduction

In 2010, the Journal of Economic Perspectives hosted a symposium revisiting Edward E. Leamer’s provocative 1983 article, “Let’s Take the Con out of Econometrics.” Taking aim at the econometric practice of the day, Leamer charged that econometricians projected a false image of themselves as “white coat” experimental scientists. More accurately,

The applied econometrician is like a farmer who notices that the yield is somewhat higher under trees where birds roost, and he uses this as evidence that bird droppings increase yields. However, when he presents this finding at the annual meeting of the American Ecological Association, another farmer in the audience objects that he used the same data but came up with the conclusion that moderate amounts of shade increase yields. A bright chap in the back of the room then observes that these two hypotheses are indistinguishable given the available data. He mentions the phrase “identification problem,” which, though no one knows quite what he means, is said with such authority that it is totally convincing. (Leamer 1983, 31)

In fact, empirical researchers’ inferences were troublingly subject to “whimsical assumptions” and subjective judgments.1 Ending with a plea for systematic examination of the sensitivity of econometric results, Leamer concluded: “If it turns out that almost all inferences from economic data are fragile, I suppose we shall have to revert to our old methods lest we lose our customers in government, business, and on the boardwalk at Atlantic City” (Leamer 1983, 43).
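Leamer’s proposed remedy is easy to make concrete. The sketch below (an illustration on simulated data, not code from the paper; all variable names and magnitudes are assumptions) re-estimates the coefficient on a regressor of interest under every subset of “doubtful” controls, in the spirit of Leamer’s extreme-bounds analysis; a wide range of estimates marks the inference as fragile.

```python
import itertools
import numpy as np

# A minimal sketch in the spirit of Leamer's sensitivity analysis:
# re-estimate the coefficient on a focus regressor under every subset
# of "doubtful" controls and report the range of estimates. A wide
# range signals a fragile inference. All data here are simulated.
rng = np.random.default_rng(0)
n = 500
focus = rng.normal(size=n)                  # regressor of interest
controls = rng.normal(size=(n, 4))          # doubtful control variables
y = 0.5 * focus + controls @ np.array([0.3, -0.2, 0.0, 0.1]) + rng.normal(size=n)

estimates = []
for k in range(controls.shape[1] + 1):
    for subset in itertools.combinations(range(controls.shape[1]), k):
        # OLS with an intercept, the focus regressor, and this control set
        X = np.column_stack([np.ones(n), focus, controls[:, list(subset)]])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        estimates.append(beta[1])           # coefficient on the focus regressor

print(f"focus coefficient ranges from {min(estimates):.3f} to {max(estimates):.3f}")
```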

In their contribution to the Journal of Economic Perspectives symposium, Joshua D. Angrist and Jörn-Steffen Pischke argue that, nearly thirty years after Leamer’s critique, “better research design is taking the con out of econometrics.” They identify instrumental variables, regression discontinuity, and difference-in-differences analyses as “quasi-experimental” methods to justify causal inference that have “grown and become more self-conscious and sophisticated since the 1970s” (Angrist and Pischke 2010, 12). Pointing to examples from the economics of crime, education, and health, Angrist and Pischke trumpet the application of these methods as a “credibility revolution” in econometrics that has finally provided persuasive answers to a diverse set of questions: “…it’s no longer enough to adopt the language of an orthodox simultaneous equations framework, labeling some variables endogenous and others exogenous, without offering strong institutional or empirical support for these identifying assumptions” (Angrist and Pischke 2010, 116).
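To fix ideas, the simplest of these research designs, difference-in-differences, compares the change in outcomes over time for a group exposed to a policy with the change for an unexposed group; the gap between the two changes is the causal effect, provided the groups would have trended in parallel absent the policy. A minimal sketch on simulated data (illustrative only; the numbers are assumptions):

```python
import numpy as np

# A stylized difference-in-differences illustration on simulated data
# (not from the paper): the estimate is the change over time in the
# treated group minus the change over time in the untreated group.
rng = np.random.default_rng(1)
n = 2000
treat = rng.integers(0, 2, size=n)          # 1 = treated group
post = rng.integers(0, 2, size=n)           # 1 = post-treatment period
true_effect = 2.0
y = 1.0 + 0.5 * treat + 0.3 * post + true_effect * treat * post + rng.normal(size=n)

def cell_mean(g, t):
    """Average outcome for group g (0/1) in period t (0/1)."""
    return y[(treat == g) & (post == t)].mean()

did = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
print(f"difference-in-differences estimate: {did:.2f} (true effect {true_effect})")
```

The same estimate could equivalently be read off the coefficient on the interaction term in a regression of y on treat, post, and treat × post.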

Angrist and Pischke single out the fields of industrial organization and macroeconomics as the sole exceptions to the shift. Their article not only picked up on points in an ongoing methodological debate within applied economics (Heckman 1997; Angrist and Imbens 1999; Rosenzweig and Wolpin 2000), but also precipitated a number of responses (Sims 2010; Nevo and Whinston 2010; Keane 2010a,b; Wolpin 2013). Nonetheless, the article was less an argument than a victory lap:

For instance, of the nine economists awarded the John Bates Clark Medal within the prior fifteen years, the research of at least five, most notably David Card and Steven Levitt, could be counted as quasi-experimental. The trend has only accelerated since, with four of the five most recent winners applying quasi-experimental methods in their work. Moreover, the central importance of “credibly” identified econometric work is attested to in the training of applied economists, where it is reinforced in textbooks and handbook chapters (Angrist and Pischke 2009, 2014; Imbens and Lemieux 2008; Lee and Lemieux 2010), and in the rhetoric and working discourse of applied practitioners.2 This paradigm shift has also been institutionalized in schools of public policy, applied economics, and public health, in turn influencing the way economics engages with neighboring disciplines and with policymakers (Hirschman and Berman 2014). The methods thereby constitute a key feature of the transition toward applied economics and shape the knowledge produced by economists and the expertise they possess (Fourcade et al. 2014).
