Abstract
Generalized Bayes posterior distributions are formed by raising the likelihood to a fractional power before combining it with the prior via Bayes's formula. This fractional power, often viewed as a remedy for bias due to model misspecification, is called the learning rate, and a number of data-driven learning rate selection methods have been proposed in the recent literature. Each of these proposals has a different focus and a different target it aims to achieve, which makes them difficult to compare. In this paper, we provide a direct head-to-head empirical comparison of these learning rate selection methods in various misspecified model scenarios, in terms of several relevant metrics, in particular, the coverage probability of the generalized Bayes credible regions. In some examples all the methods perform well, while in others the misspecification is too severe to be overcome, but we find that the so-called generalized posterior calibration algorithm tends to outperform the others in terms of credible region coverage probability.
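The construction described above can be written compactly as follows; the notation here (η for the learning rate, L_n for the likelihood, π for the prior density) is illustrative and not taken from the abstract itself:

```latex
% Generalized Bayes posterior with learning rate \eta > 0:
% the likelihood is tempered by the power \eta before applying Bayes's formula.
\pi_n^{(\eta)}(\theta) \;\propto\; L_n(\theta)^{\eta} \, \pi(\theta),
\qquad \eta > 0.
```

Setting η = 1 recovers the ordinary Bayes posterior, while η < 1 down-weights the (possibly misspecified) likelihood relative to the prior; the methods compared in the paper are different data-driven rules for choosing η.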
Funding Statement
This work is partially supported by the U.S. National Science Foundation, DMS–1811802.
Acknowledgments
The authors are grateful for the helpful suggestions from three anonymous reviewers on a previous version of the manuscript.
Citation
Pei-Shien Wu and Ryan Martin. "A Comparison of Learning Rate Selection Methods in Generalized Bayesian Inference." Bayesian Analysis 18(1): 105–132, March 2023. https://doi.org/10.1214/21-BA1302