Brazilian Journal of Probability and Statistics

Subjective Bayesian testing using calibrated prior probabilities

Dan J. Spitzner



This article proposes a calibration scheme for Bayesian testing that coordinates analytically derived statistical performance considerations with expert opinion. That is, the scheme offers an effective and meaningful way to incorporate objective elements into subjective Bayesian inference. It explores a novel role for default priors as anchors for calibration rather than as substitutes for prior knowledge. Ideas are developed for use with multiplicity adjustments in multiple-model contexts and to address the prior sensitivity of Bayes factors. Along the way, the performance properties of an existing multiplicity adjustment related to the Poisson distribution are clarified theoretically. Connections of the overall calibration scheme to the Schwarz criterion are also explored. The proposed framework is examined and illustrated on a number of existing data sets related to problems in clinical trials, forensic pattern matching, and log-linear models methodology.
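As background to the abstract's mention of the Schwarz criterion, the following is a minimal sketch of the well-known large-sample approximation log BF₁₀ ≈ (BIC₀ − BIC₁)/2 for comparing two nested Gaussian linear models. The data and models here are illustrative assumptions, not taken from the article, and this is not the paper's calibration scheme.

```python
import math
import numpy as np

def bic(y, X):
    """BIC of a Gaussian linear model y = X @ b + e, with the error
    variance profiled out at its maximum-likelihood estimate."""
    n, p = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    sigma2 = float(resid @ resid) / n               # MLE of error variance
    loglik = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1.0)
    return -2.0 * loglik + (p + 1) * math.log(n)    # +1 counts sigma^2

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)              # true slope is nonzero

X0 = np.ones((n, 1))                                # M0: intercept only
X1 = np.column_stack([np.ones(n), x])               # M1: intercept + slope

# Schwarz approximation: log BF_10 ≈ (BIC_0 - BIC_1) / 2
log_bf_10 = 0.5 * (bic(y, X0) - bic(y, X1))
print(f"approximate log Bayes factor (M1 vs M0): {log_bf_10:.2f}")
```

A positive value of `log_bf_10` favors the larger model; with a genuinely nonzero slope and moderate sample size, the approximation favors M1 here.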

Article information

Braz. J. Probab. Stat., Volume 33, Number 4 (2019), 861-893.

Received: March 2018
Accepted: November 2018
First available in Project Euclid: 26 August 2019


Keywords: subjective Bayes; Bayesian testing; Bayes factors; default priors; high-dimensional statistics; multiplicity; variable selection; Schwarz criterion


Spitzner, Dan J. Subjective Bayesian testing using calibrated prior probabilities. Braz. J. Probab. Stat. 33 (2019), no. 4, 861--893. doi:10.1214/18-BJPS424.



  • Bartlett, M. S. (1957). Comment on D. V. Lindley’s statistical paradox. Biometrika 44, 533–534.
  • Bayarri, M. J., Berger, J. O., Forte, A. and García-Donato, G. (2012). Criteria for Bayesian model choice with application to variable selection. The Annals of Statistics 40, 1550–1577.
  • Berger, J. and Pericchi, L. (2004). Training samples in objective model selection. The Annals of Statistics 32, 841–869.
  • Berger, J. O. and Pericchi, L. (1996). The intrinsic Bayes factor for model selection and prediction. Journal of the American Statistical Association 91, 109–122.
  • Berry, D. A. and Hochberg, Y. (1999). Bayesian perspectives on multiple comparisons. Journal of Statistical Planning and Inference 82, 215–227.
  • Berry, S. M. and Berry, D. A. (2004). Accounting for multiplicities in assessing drug safety: A three-level hierarchical mixture model. Biometrics 60, 418–426.
  • Billingsley, P. (1995). Probability and Measure, 3rd ed. New York: Wiley.
  • Bollen, K., Ray, S., Zavisca, J. and Harden, J. J. (2012). A comparison of Bayes factor approximation methods including two new methods. Sociological Methods and Research. In press.
  • Box, G. E. P. and Tiao, G. C. (1992). Bayesian Inference in Statistical Analysis. Reading, MA: Addison-Wesley.
  • Casella, G., Girón, F. J., Martinez, M. L. and Moreno, E. (2009). Consistency of Bayesian procedures for variable selection. The Annals of Statistics 37, 1207–1228.
  • Castillo, I., Schmidt-Hieber, J. and van der Vaart, A. (2015). Bayesian linear regression with sparse priors. The Annals of Statistics 43, 1986–2018.
  • Dellaportas, P., Forster, J. J. and Ntzoufras, I. (2012). Joint specification of model space and parameter space prior distributions. Statistical Science 27, 232–246.
  • Fan, J. and Lv, J. (2008). Sure independence screening for ultrahigh dimensional feature space. Journal of the Royal Statistical Society, Series B 70, 849–911.
  • Fan, J. and Lv, J. (2010). A selective overview of variable selection in high dimensional feature space. Statistica Sinica 20, 101–148.
  • Fouskakis, D., Ntzoufras, I. and Perrakis, K. (2018). Power-expected-posterior priors for generalized linear models. Bayesian Analysis 13, 721–748.
  • Good, I. J. (1950). Probability and the Weighing of Evidence. London: Griffin.
  • Ibrahim, J. G. and Chen, M. H. (2000). Power prior distributions for regression models. Statistical Science 15, 46–60.
  • Jeffreys, H. (1961). Theory of Probability, 3rd ed. Oxford: Oxford University Press.
  • Kass, R. E. and Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical Association 90, 773–795.
  • Kass, R. E. and Wasserman, L. (1995). A reference Bayesian test for nested hypotheses and its relationship to the Schwarz criterion. Journal of the American Statistical Association 90, 928–934.
  • Lavine, M. and Schervish, M. J. (1999). Bayes factors: What they are and what they are not. American Statistician 53, 119–122.
  • Liang, F., Paulo, R., Molina, G., Clyde, M. A. and Berger, J. O. (2008). Mixtures of $g$ priors for Bayesian variable selection. Journal of the American Statistical Association 103, 410–423.
  • Lindley, D. V. (1957). A statistical paradox. Biometrika 44, 187–192.
  • Lindley, D. V. (1977). A problem in forensic science. Biometrika 64, 207–213.
  • Lund, S. P. and Iyer, H. (2017). Likelihood ratio as weight of forensic evidence: A closer look. Journal of Research of the National Institute of Standards and Technology 122, 1–32.
  • Moreno, E. and Pericchi, L. R. (2014). Intrinsic priors for objective Bayesian model selection. In Bayesian Model Comparison (I. Jeliazkov and D. J. Poirier, eds.) 279–300. Emerald Group Publishing Limited.
  • Müller, P., Parmigiani, G. and Rice, K. (2007). FDR and Bayesian multiple comparison rules. In Bayesian Statistics 8, 349–370. Oxford: Oxford University Press.
  • Narisetty, N. N. and He, X. (2014). Bayesian variable selection with shrinking and diffusing priors. The Annals of Statistics 42, 789–817.
  • O’Hagan, A. (1995). Fractional Bayes factors for model comparisons. Journal of the Royal Statistical Society, Series B 57, 99–138.
  • Pérez, J. M. and Berger, J. O. (2002). Expected-posterior prior distributions for model selection. Biometrika 89, 491–511.
  • Raftery, A. E. (1993). Approximate Bayes factors and accounting for model uncertainty in generalized linear models. Technical Report 255, Dept. Statistics, Univ. Washington.
  • Robert, C. P. (1993). A note on Jeffreys–Lindley paradox. Statistica Sinica 3, 603–608.
  • Robert, C. P. and Casella, G. (1999). Monte Carlo Statistical Methods. New York: Springer.
  • Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics 6, 461–464.
  • Scott, J. G. and Berger, J. O. (2010). Bayes and empirical Bayes multiplicity adjustment in the variable selection problem. The Annals of Statistics 38, 2587–2619.
  • Spitzner, D. J. (2008). An asymptotic viewpoint on high-dimensional Bayesian testing. Bayesian Analysis 3, 121–160.
  • Spitzner, D. J. (2011). Neutral-data comparisons for Bayesian testing. Bayesian Analysis 6, 603–638.
  • Tierney, L. and Kadane, J. B. (1986). Accurate approximations for posterior moments and marginal densities. Journal of the American Statistical Association 81, 82–86.
  • Wilson, M. A., Iversen, E. S., Clyde, M. A., Schmidler, S. C. and Schildkraut, J. M. (2010). Bayesian model search and multilevel inference for SNP association studies. Annals of Applied Statistics 4, 1342–1364.
  • Womack, A. J., Fuentes, C. and Taylor-Rodriguez, D. (2015). Model space priors for objective sparse Bayesian regression. Preprint.
  • Zellner, A. (1986). On assessing prior distributions and Bayesian regression analysis using g-prior distributions. In Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti (P. Goel and A. Zellner, eds.) 233–243. Amsterdam: North-Holland.