The Bayesian New Statistics: Two Historical Trends Converge

John K. Kruschke

Indiana University

Torrin M. Liddell

Indiana University Bloomington

May 13, 2015

Abstract:

There have been two historical shifts in the practice of data analysis. One shift is from hypothesis testing to estimation with uncertainty and meta-analysis, which among frequentists in psychology has recently been dubbed “the New Statistics” (Cumming, 2014). A second shift is from frequentist methods to Bayesian methods. We explain and applaud both of these shifts. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The two historical trends converge in Bayesian methods for estimation with uncertainty and meta-analysis.

Introduction

The New Statistics emphasizes a historical shift away from null hypothesis significance testing (NHST) to “estimation based on effect sizes, confidence intervals, and meta-analysis” (Cumming, 2014, p. 7). There are many reasons to eschew NHST, with its focus on black-and-white thinking about the presence or absence of effects, and to focus instead on cumulative science that incrementally improves estimates of magnitudes and uncertainty.

Recent decades have also seen repeated calls to shift away from frequentist methods to Bayesian analysis (e.g., Lindley, 1975). In the shift to Bayesian methods there have been different currents, with one current focused on hypothesis testing using Bayes factors (e.g., Edwards, Lindman, & Savage, 1963; Wagenmakers, 2007; Rouder, Speckman, Sun, Morey, & Iverson, 2009), and another current focused on magnitude estimation and assessment of uncertainty (e.g., Gelman et al., 2013; Kruschke, 2010, 2013; Kruschke, Aguinis, & Joo, 2012).

In this article, we review both of these recommended shifts in the practice of data analysis, and we promote their convergence in Bayesian methods for estimation. The implication is that the goals of the New Statistics are better achieved by Bayesian methods than by frequentist methods. In other words, anyone who upholds the goals of the New Statistics ought to be using Bayesian methods. In that sense, we recommend a Bayesian New Statistics. Moreover, within the domain of Bayesian methods, anyone who upholds the goals of the New Statistics ought to emphasize parameter estimation and uncertainty, not exclusively hypothesis testing. In that sense, we recommend a New Bayesian Statistics.

Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. We will recapitulate the goals of the New Statistics and the frequentist methods for addressing them, and we will describe Bayesian methods for achieving those goals. We will cover hypothesis testing, estimation of magnitude (e.g., of effect size), assessment of uncertainty (with confidence intervals or posterior distributions), meta-analysis, and power analysis. We hope to convince you that Bayesian approaches to all these goals are more direct, more intuitive, and more informative than frequentist approaches. We believe that the goals of the New Statistics, including meta-analytic thinking engendered by an estimation approach, are better realized by Bayesian methods.

Two historical trends in data analysis

We frame our exposition in the context of the two historical trends in data analysis mentioned earlier, which are illustrated in Figure 1. The trend from point-value hypothesis tests to estimation of magnitude and uncertainty is shown across the rows of Figure 1. The trend from frequentist to Bayesian analysis is shown across the columns of Figure 1. We will review the two trends in the next sections, but we must first explain what all the trends refer to, namely, formal models of data.

Data are described by formal models

In all of the data analyses that we consider, the data are described with formal (i.e., mathematical) models. The models have meaningful parameters. You can think of a mathematical model as a machine that generates random samples of data in a pattern that depends on the settings of its control knobs. For example, a shower head spews droplets of water (i.e., the data) in a pattern that depends on the angle of the shower head and the setting of the spray nozzle (i.e., the parameters). Different machines can make different patterns of data; for example, a lawn sprinkler can make different patterns of water than a bathroom shower. In data analysis, we describe the actually observed data in terms of a mathematical machine that has its parameters set to values that would generate simulated data that mimic the observed data. When we “fit” a model to data, we are figuring out the settings of the parameters (i.e., the control knobs) that would best mimic the observed data.
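To make the idea of fitting concrete, here is a minimal illustrative sketch in Python (our own example, not code from the article), assuming the “machine” is a normal distribution with a single control knob, its mean. Fitting amounts to finding the knob setting whose generated data would best mimic the observed data, scored here by the log-likelihood; a Bayesian analysis would retain the whole distribution over candidate settings rather than only the single best one.

```python
# Illustrative sketch: "fitting" means finding the parameter setting whose
# simulated data best mimic the observed data. The model is a normal
# distribution with one control knob, its mean mu (sd fixed at 1).
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(loc=3.0, scale=1.0, size=50)  # stand-in for real data

# Candidate settings of the control knob (the parameter mu).
mu_grid = np.linspace(0.0, 6.0, 601)

# For each candidate mu, score how well the machine mimics the observed data
# via the log-likelihood of the data under a normal model with sd = 1.
log_lik = np.array([
    np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (observed - mu) ** 2)
    for mu in mu_grid
])

best_mu = mu_grid[np.argmax(log_lik)]
print(f"Best-fitting mu: {best_mu:.2f} (sample mean: {observed.mean():.2f})")

# A Bayesian analysis keeps the whole distribution over candidate settings
# rather than only the best one (grid approximation with a flat prior):
posterior = np.exp(log_lik - log_lik.max())
posterior /= posterior.sum()
```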

