The Annals of Statistics

Uniformly most powerful Bayesian tests

Valen E. Johnson


Abstract

Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between $p$-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and $p$-values on sample size are discussed.

Article information

Source
Ann. Statist., Volume 41, Number 4 (2013), 1716–1741.

Dates
First available in Project Euclid: 5 September 2013

Permanent link to this document
https://projecteuclid.org/euclid.aos/1378386237

Digital Object Identifier
doi:10.1214/13-AOS1123

Mathematical Reviews number (MathSciNet)
MR3127847

Zentralblatt MATH identifier
1277.62084

Subjects
Primary: 62A01 (Foundations and philosophical topics); 62F03 (Hypothesis testing); 62F05 (Asymptotic properties of tests); 62F15 (Bayesian inference)

Keywords
Bayes factor; Jeffreys–Lindley paradox; objective Bayes; one-parameter exponential family model; Neyman–Pearson lemma; nonlocal prior density; uniformly most powerful test; Higgs boson

Citation

Johnson, Valen E. Uniformly most powerful Bayesian tests. Ann. Statist. 41 (2013), no. 4, 1716–1741. doi:10.1214/13-AOS1123. https://projecteuclid.org/euclid.aos/1378386237



References

  • Berger, J. (2006). The case for objective Bayesian analysis. Bayesian Anal. 1 385–402.
  • Berger, J. O. and Pericchi, L. R. (1996). The intrinsic Bayes factor for model selection and prediction. J. Amer. Statist. Assoc. 91 109–122.
  • Berger, J. O. and Sellke, T. (1987). Testing a point null hypothesis: Irreconcilability of $P$ values and evidence. J. Amer. Statist. Assoc. 82 112–122.
  • Berger, J. O. and Wolpert, R. L. (1984). The Likelihood Principle. Institute of Mathematical Statistics Lecture Notes—Monograph Series 6. IMS, Hayward, CA.
  • Edwards, W., Lindman, H. and Savage, L. (1963). Bayesian statistical inference for psychological research. Psychological Review 70 193–242.
  • Howson, C. and Urbach, P. (2005). Scientific Reasoning: The Bayesian Approach, 3rd ed. Open Court, Chicago, IL.
  • Jeffreys, H. (1939). Theory of Probability. Cambridge Univ. Press, Cambridge.
  • Johnson, V. E. (2005). Bayes factors based on test statistics. J. R. Stat. Soc. Ser. B Stat. Methodol. 67 689–701.
  • Johnson, V. E. (2008). Properties of Bayes factors based on test statistics. Scand. J. Stat. 35 354–368.
  • Johnson, V. E. and Rossell, D. (2010). On the use of non-local prior densities in Bayesian hypothesis tests. J. R. Stat. Soc. Ser. B Stat. Methodol. 72 143–170.
  • Johnson, V. E. and Rossell, D. (2012). Bayesian model selection in high-dimensional settings. J. Amer. Statist. Assoc. 107 649–660.
  • Lehmann, E. L. and Romano, J. P. (2005). Testing Statistical Hypotheses, 3rd ed. Springer, New York.
  • Lindley, D. (1957). A statistical paradox. Biometrika 44 187–192.
  • Mayo, D. G. and Spanos, A. (2006). Severe testing as a basic concept in a Neyman–Pearson philosophy of induction. British J. Philos. Sci. 57 323–357.
  • Neyman, J. and Pearson, E. (1928). On the use and interpretation of certain test criteria for purposes of statistical inference. Biometrika 20A 175–240.
  • Neyman, J. and Pearson, E. (1933). On the problem of the most efficient tests of statistical hypotheses. Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 231 289–337.
  • O’Hagan, A. (1995). Fractional Bayes factors for model comparison. J. R. Stat. Soc. Ser. B Stat. Methodol. 57 99–118.
  • Pitman, E. (1949). Lecture Notes on Nonparametric Statistical Inference. Columbia Univ., New York.
  • Popper, K. R. (1959). The Logic of Scientific Discovery. Hutchinson, London.
  • Prosper (2012). Personal communication to news@bayesian.org.
  • Robert, C. P., Chopin, N. and Rousseau, J. (2009). Harold Jeffreys’s theory of probability revisited. Statist. Sci. 24 141–172.
  • Rousseau, J. (2007). Approximating interval hypothesis: $p$-values and Bayes factors. In Proceedings of the 2006 Valencia Conference (J. Bernardo, M. Bayarri, J. Berger, A. Dawid, D. Heckerman, A. Smith and M. West, eds.) 1–27. Oxford Univ. Press, Oxford.