Bayesian Anal., Volume 12, Number 4 (2017), 1069-1103.
Inconsistency of Bayesian Inference for Misspecified Linear Models, and a Proposal for Repairing It
We empirically show that Bayesian inference can be inconsistent under misspecification in simple linear regression problems, both in a model averaging/selection and in a Bayesian ridge regression setting. We use the standard linear model, which assumes homoskedasticity, whereas the data are heteroskedastic (though, significantly, there are no outliers). As sample size increases, the posterior puts its mass on worse and worse models of ever higher dimension. This is caused by hypercompression, the phenomenon that the posterior puts its mass on distributions that have much larger KL divergence from the ground truth than their average, i.e. the Bayes predictive distribution. To remedy the problem, we equip the likelihood in Bayes’ theorem with an exponent called the learning rate, and we propose the SafeBayesian method to learn the learning rate from the data. SafeBayes tends to select small learning rates, and regularizes more, as soon as hypercompression takes place. Its results on our data are quite encouraging.
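The generalized posterior described above — the likelihood in Bayes' theorem raised to an exponent, the learning rate η — can be sketched concretely for the conjugate Bayesian ridge regression setting. The sketch below is illustrative only: the function names, the η grid, and the simplified sequential (prequential) log-loss criterion for picking η are assumptions of this sketch, not the paper's exact SafeBayes procedure.

```python
import numpy as np

def tempered_posterior(X, y, eta, sigma2=1.0, tau2=1.0):
    """Generalized ("tempered") posterior for Bayesian ridge regression.

    With a Gaussian likelihood raised to the power eta and prior
    w ~ N(0, tau2 * I), the posterior over w stays Gaussian:
        precision = I / tau2 + eta * X^T X / sigma2
        mean      = precision^{-1} (eta * X^T y / sigma2)
    eta = 1 recovers standard Bayes; eta -> 0 shrinks toward the prior,
    i.e. regularizes more.
    """
    d = X.shape[1]
    prec = np.eye(d) / tau2 + eta * (X.T @ X) / sigma2
    cov = np.linalg.inv(prec)
    mean = cov @ (eta * X.T @ y / sigma2)
    return mean, cov

def sequential_log_loss(X, y, eta, sigma2=1.0, tau2=1.0):
    """Cumulative sequential log loss of the tempered posterior
    predictive -- a simplified stand-in for the SafeBayes criterion:
    predict each point from the posterior fitted to all earlier points.
    """
    loss = 0.0
    for i in range(1, len(y)):
        mean, cov = tempered_posterior(X[:i], y[:i], eta, sigma2, tau2)
        pred = X[i] @ mean                      # predictive mean
        var = sigma2 + X[i] @ cov @ X[i]        # predictive variance
        loss += 0.5 * np.log(2 * np.pi * var) + 0.5 * (y[i] - pred) ** 2 / var
    return loss

def safe_bayes_eta(X, y, etas=(1.0, 0.5, 0.25, 0.125)):
    """Learn the learning rate from the data: pick the eta on the grid
    with the smallest cumulative sequential loss."""
    return min(etas, key=lambda eta: sequential_log_loss(X, y, eta))
```

Under hypercompression, the standard posterior (η = 1) predicts poorly in this sequential sense, so the criterion tends to favor a smaller η, which regularizes more — matching the behavior the abstract describes.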
First available in Project Euclid: 18 November 2017
Grünwald, Peter; van Ommen, Thijs. Inconsistency of Bayesian Inference for Misspecified Linear Models, and a Proposal for Repairing It. Bayesian Anal. 12 (2017), no. 4, 1069--1103. doi:10.1214/17-BA1085. https://projecteuclid.org/euclid.ba/1510974325
- Supplementary material of “Inconsistency of Bayesian Inference for Misspecified Linear Models, and a Proposal for Repairing It”. In this paper, we described a problem for Bayesian inference under misspecification and proposed the SafeBayes algorithm for solving it. The main appendix, Appendix B, places SafeBayes in proper context by giving a six-point overview of what can go wrong in Bayesian inference from a frequentist point of view, and what can be done about it, both in the well-specified and in the misspecified case. Specifically, we clarify the one other problem with Bayes under misspecification — interest in non-KL-associated tasks — and its relation to Gibbs posteriors. The remainder of the supplement discusses these six points in detail, explicitly stating several open problems, related work, and ideas for a general Bayesian misspecification theory along the way. We also provide further details on SafeBayes (Appendix C), report additional experiments (Appendix G), and refine and explain in more detail the notions of bad misspecification and hypercompression (Appendix D).