Volume 182, Issue 2
Original Article

Visualization in Bayesian workflow

Jonah Gabry

Corresponding Author

Columbia University, New York, USA

Address for correspondence: Jonah Gabry, Columbia University, 927 Social Work Building, 1255 Amsterdam Avenue, New York, NY 10027, USA. E-mail: jonah.sol.gabry@columbia.edu
Aki Vehtari

Aalto University, Espoo, Finland

Michael Betancourt

Columbia University, New York, USA

Symplectomorphic, New York, USA

Andrew Gelman

Columbia University, New York, USA

First published: 15 January 2019

Abstract

Bayesian data analysis is about more than just computing a posterior distribution, and Bayesian visualization is about more than trace plots of Markov chains. Practical Bayesian data analysis, like all data analysis, is an iterative process of model building, inference, model checking and evaluation, and model expansion. Visualization is helpful in each of these stages of the Bayesian workflow and it is indispensable when drawing inferences from the types of modern, high dimensional models that are used by applied researchers.

1 Introduction and running example

Visualization is a vital tool for data analysis, and its role is well established in both the exploratory and the final presentation stages of a statistical workflow. In this paper, we argue that the same visualization tools should be used at all points during an analysis. We illustrate this thesis by following a single real example, estimating the global concentration of a certain type of air pollution, through all of the phases of statistical workflow:
  • (a) exploratory data analysis to aid in setting up an initial model;
  • (b) computational model checks using fake data simulation and the prior predictive distribution;
  • (c) computational checks to ensure that the inference algorithm works reliably;
  • (d) posterior predictive checks and other juxtapositions of data and predictions under the fitted model;
  • (e) model comparison via tools such as cross‐validation.

The tools that are developed in this paper are implemented in the bayesplot R package (Gabry, 2017; R Core Team, 2017), which uses ggplot2 (Wickham, 2009) and is linked to—though not dependent on—Stan (Stan Development Team, 2017a, b): the general purpose Hamiltonian Monte Carlo (HMC) engine for Bayesian model fitting.

To discuss better the ways that visualization can aid a statistical workflow we consider a particular problem: the estimation of human exposure to air pollution from particulate matter measuring less than 2.5 μm in diameter, PM2.5. Exposure to PM2.5 is linked to a number of poor health outcomes, and a recent report estimated that PM2.5 is responsible for 3 million deaths worldwide each year (Shaddick et al., 2018).

For our running example, we use the data from Shaddick et al. (2018), aggregated to the city level, to estimate concentrations of ambient PM2.5 across the world. The statistical problem is that we have direct measurements of PM2.5 from only a sparse network of 2980 ground monitors with heterogeneous spatial coverage (Fig. 1(a)). This monitoring network has especially poor coverage across Africa, central Asia and Russia.

Fig. 1. Data displays for our running example of exposure to particulate matter: (a) satellite estimates of PM2.5 concentration (●, locations of the ground monitors); (b) scatter plot of log(PM2.5) versus log(satellite), with points coloured by super-region (eastern Europe–central Europe–central Asia; high income super-region; Latin America–Caribbean; north Africa–Middle East; south Asia; south-east Asia–east Asia–Oceania; sub-Saharan Africa)

To estimate the public health effect of PM2.5, we need estimates of its concentration at the same spatial resolution as the population data. To obtain these estimates, we supplement the direct measurements with a high resolution satellite data product that converts measurements of aerosol optical depth into estimates of PM2.5 concentration. The hope is that we can use the ground monitor data to calibrate the approximate satellite measurements and hence obtain estimates of PM2.5 concentration at the required spatial resolution.

The aim of this analysis is to build a predictive model of PM2.5 with appropriately calibrated prediction intervals. We shall not attempt a full analysis of these data, which was undertaken by Shaddick et al. (2018). Instead, we shall focus on three simple, but plausible, models for the data to show how visualization can be used to help to construct, sense‐check, compute and evaluate these models.

The data that are analysed in the paper and the programs that were used to analyse them can be obtained from

https://rss.onlinelibrary.wiley.com/hub/journal/1467985x/series-a-datasets

2 Exploratory data analysis goes beyond just plotting the data

An important aspect of formalizing the role of visualization in exploratory data analysis is to place it within the context of a particular statistical workflow. In particular, we argue that exploratory data analysis is more than simply plotting the data. Instead, we consider it a method to build a network of increasingly complex models that can capture the features and heterogeneities in the data (Gelman, 2004).

This ground‐up modelling strategy is particularly useful when the data that have been gathered are sparse or unbalanced, as the resulting network of models is built knowing the limitations of the design. A different strategy, which is common in machine learning, is to build a top‐down model that throws all available information into a complicated non‐parametric procedure. This works well for data that are a good representation of the population of interest but can be prone to overfitting or generalization error when used on sparse or unbalanced data. Using a purely predictive model to calibrate the satellite measurements would yield a fit that would be dominated by data in western Europe and north America, which have air pollution profiles that are very different from those of most developing nations. With this in mind, we use the ground‐up strategy to build a small network of three simple models for predicting PM2.5 concentrations on a global scale.

The simplest predictive model that we can fit assumes that the satellite data product is a good predictor of the ground monitor data after a simple affine adjustment. In fact, this was the model that was used by the global burden of disease project before the 2016 update (Forouzanfar et al., 2015). Fig. 1(b) shows that a straight line fits the data on the log–log scale reasonably well (R² ≈ 0.6). Discretization artefacts at the lower concentrations of PM2.5 are also clearly visible.
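As a quick sketch (not the authors' code), this first model can be examined with an ordinary least-squares fit on the log–log scale; the data frame pm and its columns pm25 and sat are hypothetical stand-ins for the ground monitor and satellite values.

    # Hypothetical data frame 'pm': one row per ground monitor, with columns
    # pm25 (measured PM2.5) and sat (satellite estimate of PM2.5)
    fit0 <- lm(log(pm25) ~ log(sat), data = pm)
    summary(fit0)$r.squared                       # roughly 0.6 for these data

    plot(log(pm$sat), log(pm$pm25),
         xlab = "log(satellite)", ylab = "log(PM2.5)")
    abline(fit0, lwd = 2)                         # the single straight line of Fig. 1(b)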

To improve the model, we need to think about possible sources of heterogeneity. For example, we know that developed and developing countries have different levels of industrialization and hence different air pollution. We also know that desert sand can be a large source of PM2.5. If these differences are not appropriately captured by the satellite data product, fitting only a single regression line could leave us in danger of falling prey to Simpson's paradox (that a trend can reverse when data are grouped).

To expand out our network of models, we consider two possible groupings of countries. The World Health Organization super‐regions (Fig. 2(a)) separate out rich countries and divide the remaining countries into six geographically contiguous regions. These regions have not been constructed with air pollution in mind, so we also constructed a different division based on a six‐component hierarchical clustering of ground monitor measurements of PM2.5 (Fig. 2(b)). The seventh region constructed this way is the collection of all countries for which we do not have ground monitor data.

Fig. 2. (a) World Health Organization super-regions (the pink super-region corresponds to wealthy countries; the remaining regions are defined on the basis of geographic contiguity); (b) super-regions found by clustering based on ground measurements of PM2.5 concentration (countries for which we have no ground monitor measurements are coloured red)

When the trends for each of these regions are plotted individually (Fig. 3), it is clear that some ecological bias would enter the analysis if we used only a single linear regression. We also see that some regions, particularly sub‐Saharan Africa (red in Fig. 3(a)) and clusters 1 and 6 (pink and yellow in Fig. 3(b)), do not have enough data to pin down the linear trend comprehensively. This suggests that some borrowing of strength through a multilevel model may be appropriate.

Fig. 3. Graphics in model building (here, evidence that a single linear trend is insufficient): (a) the same as Fig. 1(b), but also showing independent linear models fitted within each World Health Organization super-region; (b) the same as (a), but with the linear models fitted within each of the cluster regions shown in Fig. 2(b)

From this preliminary data analysis, we have constructed a network of three potential models. Model 1 is a simple linear regression. Model 2 is a multilevel model where observations are stratified by World Health Organization super‐region. Model 3 is a multilevel model where observations are stratified by clustered super‐region.
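One way to express this network of models in R is with the rstanarm package, which fits Stan models through the familiar formula syntax. The sketch below leaves all priors at their defaults (Section 3 discusses choosing them), and the column names super_region and cluster_region in the hypothetical data frame pm are our own placeholders.

    library(rstanarm)

    # Model 1: single linear regression on the log-log scale
    fit1 <- stan_glm(log(pm25) ~ log(sat), data = pm)

    # Model 2: intercepts and slopes varying by WHO super-region
    fit2 <- stan_lmer(log(pm25) ~ log(sat) + (1 + log(sat) | super_region),
                      data = pm)

    # Model 3: intercepts and slopes varying by clustered super-region
    fit3 <- stan_lmer(log(pm25) ~ log(sat) + (1 + log(sat) | cluster_region),
                      data = pm)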

These three models will be sufficient for demonstrating our proposed workflow, but this is a smaller network of models than we would use for a comprehensive analysis of the PM2.5 data. Shaddick et al. (2018), for example, also considered smaller regions, country level variation and a spatial model for the varying coefficients. Further calibration covariates can also be included.

3 Fake data can be almost as valuable as real data for building your model

The exploratory data analysis resulted in a network of three models: one linear regression model and two different linear multilevel models. To specify these models fully, we need to specify prior distributions on all the parameters. If we specify proper priors for all parameters in the model, a Bayesian model yields a joint prior distribution on parameters and data, and hence a prior marginal distribution for the data, i.e. Bayesian models with proper priors are generative models. The main idea in this section is that we can visualize simulations from the prior marginal distribution of the data to assess the consistency of the chosen priors with domain knowledge.

The main advantage to assessing priors based on the prior marginal distribution for the data is that it reflects the interplay between the prior distribution on the parameters and the likelihood. This is a vital component of understanding how prior distributions actually work for a given problem (Gelman et al., 2017). It also explicitly reflects the idea that we cannot fully understand the prior by fixing all except one parameter and assessing the effect of the unidimensional marginal prior. Instead, we need to assess the effect of the prior as a multivariate distribution.

The prior distribution over the data enables us to extend the concept of a weakly informative prior (Gelman et al., 2008) to be more aware of the role of the likelihood. In particular, we say that a prior leads to a weakly informative joint prior data‐generating process if draws from the prior data‐generating distribution p(y) could represent any data set that could plausibly be observed. As with the standard concept of weakly informative priors, it is important that this prior predictive distribution for the data has at least some mass around extreme but plausible data sets. However, there should be no mass on completely implausible data sets. We recommend assessing how informative the prior distribution on the data is by generating a ‘flip book’ (a series of visualizations to scroll through) of simulated data sets that can be used to investigate the variability and multivariate structure of the distribution.

To demonstrate the power of this approach, we return to the multilevel model for the PM2.5 data. Mathematically, the model is

yij ∼ N{β0 + β0j + (β1 + β1j)xij, σ²},
β0j ∼ N(0, τ0²),
β1j ∼ N(0, τ1²),

where yij is the logarithm of the observed concentration of PM2.5, xij is the logarithm of the estimate from the satellite model, i ranges over the observations in each super-region, j ranges over the super-regions and σ, τ0, τ1, β0 and β1 need prior distributions.

Consider some priors of the sort that are sometimes recommended as being vague: βk ∼ N(0, 100) and τk² ∼ Inv-Gamma(1, 100). The data that are generated by using these priors and shown in Fig. 4(a) are completely impossible for this application; note the y-axis limits and recall that the data are on the log-scale. This is primarily because the vague priors do not actually respect our contextual knowledge.

Fig. 4. Visualizing the prior predictive distribution: (a) and (b) show realizations from the prior predictive distribution using priors for the βs and τs that are vague and weakly informative respectively; the same N+(0, 1) prior is used for σ in both cases; simulated data are plotted on the y-axis and observed data on the x-axis; because the simulations under the vague and weakly informative priors are so different, the y-axis scales used in panels (a) and (b) also differ dramatically; (c) emphasizes the difference in the simulations by showing the red points from (a) and the black points from (b) plotted with the same y-axis

We know that the satellite estimates are reasonably faithful representations of the concentration of PM2.5, so a more sensible set of priors would be centred near models with intercept 0 and slope 1. An example of this would be β0 ∼ N(0, 1), β1 ∼ N(1, 1) and τk ∼ N+(0, 1), where N+ is the half-normal distribution. Data that are generated by this model are shown in Fig. 4(b). Although it is clear that this realization corresponds to quite a miscalibrated satellite model (especially when we remember that we are working on the log-scale), it is considerably more plausible than the model with vague priors.
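A minimal sketch of this prior predictive simulation in base R, using the hypothetical data frame pm introduced earlier; each call to prior_predict() yields one simulated data set, so repeated calls produce the 'flip book' recommended above.

    # One draw from the prior predictive distribution under the
    # weakly informative priors (everything on the log scale)
    prior_predict <- function(x, region) {
      region <- as.integer(factor(region))   # map regions to 1, ..., J
      J      <- max(region)
      sigma  <- abs(rnorm(1, 0, 1))          # sigma ~ N+(0, 1)
      tau0   <- abs(rnorm(1, 0, 1))          # tau0  ~ N+(0, 1)
      tau1   <- abs(rnorm(1, 0, 1))          # tau1  ~ N+(0, 1)
      beta0  <- rnorm(1, 0, 1)               # beta0 ~ N(0, 1)
      beta1  <- rnorm(1, 1, 1)               # beta1 ~ N(1, 1)
      beta0j <- rnorm(J, 0, tau0)            # super-region intercepts
      beta1j <- rnorm(J, 0, tau1)            # super-region slopes
      rnorm(length(x),
            mean = beta0 + beta0j[region] + (beta1 + beta1j[region]) * x,
            sd   = sigma)
    }

    # 'Flip book' of simulated data sets plotted against the satellite values
    for (s in 1:8) {
      y_sim <- prior_predict(log(pm$sat), pm$super_region)
      plot(log(pm$sat), y_sim,
           xlab = "log(satellite)", ylab = "simulated log(PM2.5)")
    }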

We argue that the tighter priors are still only weakly informative, in that the implied data-generating process can still generate data that are much more extreme than we would expect from our domain knowledge. In fact, when repeating the simulation that is shown in Fig. 4(b) many times, we found that these priors can produce data sets containing values of more than 22 000 μg m⁻³, which is still a very high value in this context.

The prior predictive distribution is a powerful tool for understanding the structure of our model before we make a measurement, but its density evaluated at the measured data also plays the role of the marginal likelihood which is commonly used in model comparison. Unfortunately the utility of the prior predictive distribution to evaluate the model does not extend to utility in selecting between models. For further discussion see Gelman et al. (2017).

4 Graphical Markov chain Monte Carlo diagnostics: moving beyond trace plots

Constructing a network of models is only the first step in the Bayesian workflow. Our next job is to fit them. Once again, visualizations can be a key tool in doing this well. Traditionally, Markov chain Monte Carlo (MCMC) diagnostic plots consist of trace plots and auto-correlation functions. We find that these plots can be helpful to understand problems that have been caught by numerical summaries such as the potential scale reduction factor R̂ (Stan Development Team (2017b), section 30.3), but they are not always needed as part of the workflow in the many settings where chains mix well.

For general MCMC methods it is difficult to do any better than between-chain and within-chain summary comparisons, following up with trace plots as needed. But, if we restrict our attention to HMC sampling and its variants, we can obtain much more detailed information about the performance of the Markov chain (Betancourt, 2017). We know that the success of HMC sampling requires that the geometry of the set containing the bulk of the posterior probability mass (which we call the typical set) is fairly smooth. It is not possible to check this condition mathematically for most models, but it can be checked numerically. It turns out that, if the geometry of the typical set is non-smooth, the path taken by the leapfrog integrator that defines the HMC proposal will rapidly diverge from the energy-conserving trajectory.

Diagnosing divergent numerical trajectories precisely is difficult, but it is straightforward to identify these divergences heuristically by checking whether the error in the Hamiltonian crosses a large threshold. Occasionally this heuristic falsely flags stable trajectories as divergent, but we can identify these false positive results visually by checking whether the samples that are generated from divergent trajectories are distributed in the same way as the non‐divergent trajectories. Combining this simple heuristic with visualization greatly increases its value.

A concentration of divergences in small neighbourhoods of the parameter space, however, indicates a region of high curvature in the posterior that obstructs exploration. These neighbourhoods will also impede any MCMC method that is based on local information, but to our knowledge only HMC sampling has enough mathematical structure to be able to diagnose these features reliably. Hence, when we are using HMC sampling for our inference, we can use visualization to assess the convergence of the MCMC method and also to understand the geometry of the posterior.

There are several plots that we have found useful for diagnosing troublesome areas of the parameter space, in particular bivariate scatter plots that mark the divergent transitions (Fig. 5(a)) and parallel co‐ordinate plots (Fig. 5(b)). These visualizations are sufficiently sensitive to differentiate between models with a non‐smooth typical set and models where the heuristic has given a false positive result. This makes them an indispensable tool for understanding the behaviour of an HMC algorithm when applied to a particular target distribution.

Fig. 5. Diagnostic plots for HMC sampling (models were fitted by using the RStan interface to Stan 2.17 (Stan Development Team, 2017a)): (a) for model 3, a bivariate plot of the log-standard-deviation of the cluster level slopes (y-axis) against the slope for the first cluster (x-axis); the green dots indicate starting points of divergent transitions; this plot can be made by using mcmc_scatter in bayesplot; (b) for model 3, a parallel co-ordinates plot showing the cluster level slope parameters and their log-standard-deviation log(τ1); the green lines indicate starting points of divergent transitions; this plot can be made by using mcmc_parcoord in bayesplot
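A sketch (not the paper's code) of how such plots can be made with bayesplot, assuming fit3 is the model 3 fit from the earlier sketch; the parameter names passed to pars and regex_pars are placeholders that must be replaced by the fitted model's actual parameter names.

    library(bayesplot)

    posterior3 <- as.array(fit3)      # posterior draws, chains kept separate
    np3        <- nuts_params(fit3)   # sampler diagnostics, including divergences

    # Bivariate scatter plot marking divergent transitions (cf. Fig. 5(a));
    # "beta1[1]" and "tau1" are hypothetical parameter names
    mcmc_scatter(posterior3, pars = c("beta1[1]", "tau1"),
                 transformations = list(tau1 = "log"), np = np3)

    # Parallel co-ordinates plot of the cluster level slopes (cf. Fig. 5(b))
    mcmc_parcoord(posterior3, regex_pars = "beta1", np = np3)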

If an HMC algorithm were struggling to fit model 3, the divergences would be clustered in the parameter space. Examining the bivariate scatter plot (Fig. 5(a)), however, we see no obvious pattern to the divergences. Similarly, the parallel co-ordinates plot (Fig. 5(b)) does not show any particular structure. This indicates that the divergences that are found are most probably false positive results. For contrast, the on-line supplementary material contains the same plots for a model where HMC sampling fails to compute a reliable answer. In this case, the clustering of divergences is pronounced and the parallel co-ordinates plot clearly indicates that all the divergent trajectories have the same structure.

5 How did we do?: posterior predictive checks are vital for model evaluation

The idea behind posterior predictive checking is simple: if a model is a good fit we should be able to use it to generate data that resemble the data that we observed. This is similar in spirit to the prior checks that were considered in Section 3, except now we have a data‐informed data‐generating model. This means that we can be much more stringent in our comparisons. Ideally, we would compare the model predictions with an independent test data set, but this is not always feasible. However, we can still do some checking and predictive performance assessments by using the data that we already have.

To generate the data that are used for posterior predictive checks we simulate from the posterior predictive distribution p(ỹ∣y) = ∫ p(ỹ∣θ) p(θ∣y) dθ, where y are our current data, ỹ are our new data to be predicted and θ are our model parameters. Posterior predictive checking is mostly qualitative. By looking at some important features of the data and the replicated data, which were not explicitly included in the model, we may find a need to extend or modify the model.

For each of the three models, Fig. 6 shows the distributions of many replicated data sets drawn from the posterior predictive distribution (thin light curves) compared with the empirical distribution of the observed outcome (the thick dark curve). From these plots it is evident that the multilevel models (models 2 and 3) can simulate new data that are more similar to the observed  log (PM2.5) values than the model without any hierarchical structure (model 1).

Fig. 6. Kernel density estimate of the observed data set y (dark curves), with density estimates for 100 simulated data sets yrep drawn from the posterior predictive distribution (thin, lighter curves) (these plots can be produced using ppc_dens_overlay in the bayesplot package): (a) model 1; (b) model 2; (c) model 3
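These overlays take the observed outcome and a matrix of posterior predictive draws; the sketch below obtains the draws from the hypothetical fit3 of the earlier sketch via rstanarm's posterior_predict().

    library(bayesplot)

    y     <- log(pm$pm25)                    # observed outcome on the log scale
    yrep3 <- posterior_predict(fit3)         # matrix: posterior draws x data points

    # Overlay density estimates of 100 replicated data sets on the data (cf. Fig. 6(c))
    ppc_dens_overlay(y, yrep3[1:100, ])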

Posterior predictive checking makes use of the data twice: once for the fitting and once for the checking. Therefore it is a good idea to choose statistics that are orthogonal to the model parameters. If the test statistic is related to one of the model parameters, e.g. if the mean statistic is used for a Gaussian model with a location parameter, the posterior predictive checks may be less able to detect conflicts between the data and the model. Our running example uses a Gaussian model so in Fig. 7 we investigate how well the posterior predictive distribution captures skewness. Model 3, which used data‐adapted regions, is best at capturing the observed skewness, whereas model 2 does a satisfactory job and the linear regression (model 1) totally fails.

Fig. 7. Histograms of the test statistic skew(yrep) computed from 4000 draws from the posterior predictive distribution (the dark vertical line is the value computed from the observed data; these plots can be produced using ppc_stat in the bayesplot package): (a) model 1; (b) model 2; (c) model 3
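The test statistic passed to ppc_stat can be any function of a data vector; the skew() definition below is our own sketch of a sample skewness, not code from the paper, and it reuses y and yrep3 from the earlier sketch.

    # Sample skewness, used as the posterior predictive test statistic
    skew <- function(x) mean((x - mean(x))^3) / sd(x)^3

    # Histogram of skew(yrep) with the observed skew(y) marked (cf. Fig. 7)
    ppc_stat(y, yrep3, stat = "skew")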

We can also perform similar checks within levels of a grouping variable. For example, in Fig. 8 we split both the outcome and the posterior predictive distribution according to region and check the median values. The two hierarchical models give a better fit to the data at the group level, which in this case is unsurprising.

Fig. 8. Checking posterior predictive test statistics, in this case the medians, within region (the vertical lines are the observed medians; the facets are labelled by number in (o)–(t) because they represent groups found by the clustering algorithm rather than actual super-regions; these grouped plots can be made using ppc_stat_grouped in the bayesplot package): (a)–(g) model 1; (h)–(n) model 2; (o)–(t) model 3; (a), (h) high income super-region; (b), (i) eastern Europe–central Europe–central Asia; (c), (j) Latin America–Caribbean; (d), (k) north Africa–Middle East; (e), (l) south Asia; (f), (m) south-east Asia–east Asia–Oceania; (g), (n) sub-Saharan Africa; (o) group 1; (p) group 2; (q) group 3; (r) group 4; (s) group 5; (t) group 6
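The grouped version takes, in addition, a vector assigning each observation to its region; cluster_region is again a hypothetical column name in the data frame pm.

    # Median within each clustered super-region (cf. Fig. 8, panels (o)-(t))
    ppc_stat_grouped(y, yrep3, group = pm$cluster_region, stat = "median")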

In cross-validation, double use of data is partially avoided and test statistics can be better calibrated. When performing leave-one-out (LOO) cross-validation we usually work with univariate posterior predictive distributions, and thus we cannot examine properties of the joint predictive distribution. To check specifically that predictions are calibrated, the usual test is to look at the LOO cross-validation predictive cumulative distribution function values, which are asymptotically uniform (for continuous data) if the model is calibrated (Gelfand et al., 1992; Gelman et al., 2013).

The plots that are shown in Fig. 9 compare the density of the computed LOO probability integral transforms (the thick dark curve) versus 100 simulated data sets from a standard uniform distribution (the thin light curves). We can see that, although there is some clear miscalibration in all cases, the hierarchical models are an improvement over the single‐level model.

Fig. 9. Graphical check of the LOO cross-validated probability integral transform (thin light curves, simulations from the standard uniform distribution; thick dark curve, density of the computed LOO probability integral transforms) (similar plots can be made using ppc_dens_overlay and ppc_loo_pit in the bayesplot package; the downward slope near 0 and 1 on the 'uniform' histograms is an edge effect due to the density estimator used and can be safely discounted): (a) model 1; (b) model 2; (c) model 3
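Computing the LOO probability integral transforms requires the pointwise log-likelihood matrix and the PSIS-smoothed importance weights from the loo package. The sketch below reuses y and yrep3 from the earlier sketches and uses ppc_loo_pit_overlay(), the overlay variant of the ppc_loo_pit plot named in the caption; the draw ordering and dimensions are assumed to match across the objects.

    library(loo)

    log_lik3 <- log_lik(fit3)            # matrix: posterior draws x data points
    psis3    <- psis(-log_lik3)          # Pareto-smoothed importance sampling
    lw3      <- weights(psis3)           # smoothed log importance weights

    # LOO-PIT values compared with simulations from the standard uniform (cf. Fig. 9(c))
    ppc_loo_pit_overlay(y, yrep3, lw = lw3)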

The shape of the miscalibration in Fig. 9 is also meaningful. The frown shapes that are exhibited by models 2 and 3 indicate that the univariate predictive distributions are too broad compared with the data, which suggests that further modelling will be necessary to reflect the uncertainty accurately. One possibility would be to subdivide the super‐regions further to capture within‐region variability better (Shaddick et al., 2018).

6 Pointwise plots for predictive model comparison

Visual posterior predictive checks are also useful for identifying unusual points in the data. Unusual data points come in two flavours: outliers and points with high leverage. In this section, we show that visualization can be useful for identifying both types of data point. Examining these unusual observations is a critical part of any statistical workflow, as these observations give hints about how the model may need to be modified. For example, they may indicate that the model should use non‐linear instead of linear regression, or that the observation error should be modelled with a heavier‐tailed distribution.

The main tool in this section is the one-dimensional cross-validated LOO predictive distribution p(yi∣y−i). Gelfand et al. (1992) suggested examining the LOO log-predictive density values (they called them conditional predictive ordinates) to find observations that are difficult to predict. This idea can be extended to model comparison by looking at which model best captures each left-out data point. Fig. 10(a) shows the difference between the expected log-predictive densities ELPD for the individual data points estimated by using Pareto-smoothed importance sampling (PSIS) (Vehtari et al., 2017a, b). Model 3 appears to be slightly better than model 2, especially for difficult observations like the station in Mongolia.

Fig. 10. Model comparisons by using LOO cross-validation: (a) the difference in pointwise values obtained from LOO PSIS for model 3 compared with model 2, coloured by World Health Organization super-region (see Fig. 1(b) for the key; positive values indicate that model 3 outperformed model 2); (b) k̂-diagnostics from LOO PSIS for model 2 (the 2674th data point (the only data point from Mongolia) is highlighted by the k̂-diagnostic as being influential on the posterior)
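The pointwise values behind Fig. 10(a) are stored in the objects returned by loo(); the plotting step below is a rough sketch rather than the figure's actual code, again using the hypothetical fits from the earlier sketch.

    loo2 <- loo(fit2)
    loo3 <- loo(fit3)

    # Pointwise ELPD difference: positive values favour model 3 (cf. Fig. 10(a))
    elpd_diff <- loo3$pointwise[, "elpd_loo"] - loo2$pointwise[, "elpd_loo"]

    plot(elpd_diff, xlab = "data point", ylab = "ELPD difference (model 3 - model 2)")
    abline(h = 0, lty = 2)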

In addition to looking at the individual LOO log‐predictive densities, it is useful to look at how influential each observation is. Some of the data points may be difficult to predict but not necessarily influential, i.e. the predictive distribution does not change much when they are left out. One way to look at the influence is to look at the difference between the full data log‐posterior predictive density and the LOO log‐predictive density.

We recommend computing the LOO log-predictive densities by using the PSIS LOO method as implemented in the loo package (Vehtari et al., 2017c). A key advantage of using the PSIS LOO method to compute the LOO densities is that it automatically computes an empirical estimate of how similar the full data predictive distribution is to the LOO predictive distribution for each left-out point. Specifically, it computes an empirical estimate k̂ of

k = inf{k′ > 0 : D_{1/k′}(p ∥ q) < ∞},

where

D_α(p ∥ q) = 1/(α − 1) log ∫_Θ p(θ)^α q(θ)^{1−α} dθ

is the α-Rényi divergence (Yao et al., 2018). If the jth LOO predictive distribution has a large k̂ value when used as a proposal distribution for the full data predictive distribution, it suggests that yj is a highly influential observation.

Fig. 10(b) shows the k̂-diagnostics from the PSIS LOO method for our model 2. The 2674th data point is highlighted by the k̂-diagnostic as being influential on the posterior. If we examine the data we find that this point is the only observation from Mongolia and corresponds to a measurement (x, y) = (log(satellite), log(PM2.5)) = (1.95, 4.32), which would look like an outlier if highlighted in the scatter plot in Fig. 1(b). By contrast, under model 3 the k̂ value for the Mongolian observation is significantly lower, indicating that that point is better resolved in model 3.
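The k̂ values of this kind can be extracted from a loo object with pareto_k_values(); the sketch below marks the 0.7 threshold that the loo package uses as its usual warning level.

    k2 <- pareto_k_values(loo2)    # one Pareto k-hat per left-out observation

    which(k2 > 0.7)                # observations flagged as highly influential

    # Index plot of the k-hat diagnostics (cf. Fig. 10(b))
    plot(k2, xlab = "data point", ylab = "Pareto k-hat")
    abline(h = 0.7, lty = 2)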

7 Discussion

Visualization is probably the most important tool in an applied statistician's toolbox and is an important complement to quantitative statistical procedures (Buja et al., 2009). In this paper, we have demonstrated that it can be used as part of a strategy to compare models, to identify ways in which a model fails to fit, to check how well our computational methods have resolved the model, to understand the model sufficiently well to be able to set priors and to improve the model iteratively.

The last of these tasks is a little controversial as using the measured data to guide model building raises the concern that the resulting model will generalize poorly to new data sets. A different objection to using the data twice (or even more) comes from ideas around hypothesis testing and unbiased estimation, but we are of the opinion that the danger of overfitting the data is much more concerning (Gelman and Loken, 2014).

In the visual workflow that we have outlined in this paper, we have used the data to improve the model in two places. In Section 3 we proposed prior predictive checks with the recommendation that the data‐generating mechanism should be broader than the distribution of the observed data in line with the principle of weakly informative priors. In Section 5 we recommended undertaking careful calibration checks as well as checks based on summary statistics, and then updating the model accordingly to cover the deficiencies that are exposed by this procedure. In both of these cases, we have made recommendations that aim to reduce the danger. For the prior predictive checks, we recommend aiming for a prior data‐generating process that can produce plausible data sets, not necessarily data sets that are indistinguishable from observed data. For the posterior predictive checks, we ameliorate the concerns by checking carefully for influential measurements and proposing that model extensions be weakly informative extensions that are still centred on the previous model (Simpson et al., 2017).

Regardless of concerns that we have about using the data twice, the workflow that we have described in this paper (perhaps without the stringent prior and posterior predictive checks) is common in applied statistics. As academic statisticians, we have a duty to understand the consequences of this workflow and offer concrete suggestions to make the practice of applied statistics more robust.

Acknowledgements

The authors thank Gavin Shaddick and Matthew Thomas for their help with the PM2.5 example, Ari Hartikainen for suggesting the parallel co‐ordinates plot, Ghazal Fazelnia for finding an error in our map of ground monitor locations, Eren Metin Elçi for alerting us to a discrepancy between our text and code, and the Sloan Foundation, Columbia University, US National Science Foundation, Institute for Education Sciences, Office of Naval Research and Defense Advanced Research Projects Agency for financial support.
