Abstract
Bayesian analyses combine information represented by different terms in a joint Bayesian model. When one or more of the terms is misspecified, it can be helpful to restrict the use of information from suspect model components to modify posterior inference. This is called “cutting feedback”, and both the specification and computation of the posterior for such “cut models” are challenging. In this paper, we define cut posterior distributions as solutions to constrained optimization problems, and propose variational methods for their computation. These methods are faster than existing Markov chain Monte Carlo (MCMC) approaches by an order of magnitude. It is also shown that variational methods allow for the evaluation of computationally intensive conflict checks that can be used to decide whether or not feedback should be cut. Our methods are illustrated in a number of simulated and real examples, including an application that uses recent methodological advances combining variational inference and MCMC within the variational optimization.
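To make the idea of cutting feedback concrete, the sketch below works through a hypothetical two-module conjugate Gaussian model (not an example from the paper, and not its variational method): module 1 informs a parameter phi, module 2 involves both phi and a second parameter theta, and the cut posterior p(phi | y1) p(theta | phi, y2) blocks the possibly misspecified second module from influencing inference about phi. In this toy setting the cut posterior is available in closed form, and sampling it is a simple two-stage procedure.

```python
import numpy as np

# Toy two-module model (all names and settings here are illustrative):
#   Module 1: y1_i ~ N(phi, 1),          prior phi   ~ N(0, 1)
#   Module 2: y2_i ~ N(phi + theta, 1),  prior theta ~ N(0, 1)
# Cut posterior: p_cut(phi, theta | y) = p(phi | y1) * p(theta | phi, y2),
# i.e. module 2 gives no feedback to phi.

def cut_posterior(y1, y2):
    y1, y2 = np.asarray(y1, float), np.asarray(y2, float)
    # Stage 1: posterior for phi from module 1 only (conjugate Gaussian:
    # posterior precision = prior precision 1 + n1 unit-precision observations).
    v_phi = 1.0 / (1.0 + len(y1))
    m_phi = v_phi * y1.sum()
    # Stage 2: conditional posterior for theta given phi, from module 2.
    v_th = 1.0 / (1.0 + len(y2))
    # Marginal cut-posterior mean of theta: average of the conditional mean
    # v_th * (sum(y2) - n2 * phi) over p(phi | y1).
    m_th = v_th * (y2.sum() - len(y2) * m_phi)
    return m_phi, v_phi, m_th, v_th

def sample_cut(y1, y2, n=10_000, seed=0):
    # Two-stage sampling: phi ~ p(phi | y1), then theta ~ p(theta | phi, y2).
    rng = np.random.default_rng(seed)
    m_phi, v_phi, _, v_th = cut_posterior(y1, y2)
    phi = rng.normal(m_phi, np.sqrt(v_phi), size=n)
    y2 = np.asarray(y2, float)
    theta = rng.normal(v_th * (y2.sum() - len(y2) * phi), np.sqrt(v_th))
    return phi, theta

if __name__ == "__main__":
    m_phi, v_phi, m_th, v_th = cut_posterior([1.0, 1.0], [3.0])
    print(m_phi, m_th)  # cut-posterior means of phi and theta
```

Note that in the full (uncut) posterior, the observation y2 = 3 would pull phi upward through module 2; the cut posterior deliberately forgoes that information, which is the behavior one wants when module 2 is suspect. The paper's contribution is to characterize such cut posteriors as solutions to constrained optimization problems and to compute them variationally in models where no closed form exists.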
Acknowledgments
The authors thank Chris Carmona and Geoff Nicholls for sharing some details of their work with us, and the review team for helpful feedback that improved the paper. David Nott is affiliated with the Institute of Operations Research and Analytics, National University of Singapore.
Citation
Xuejun Yu, David J. Nott, Michael Stanley Smith. "Variational Inference for Cutting Feedback in Misspecified Models." Statist. Sci. 38 (3) 490–509, August 2023. https://doi.org/10.1214/23-STS886