Abstract
In Bayesian model selection, when prior information on the model parameters is vague, default priors should be used. Unfortunately, these priors are usually improper, yielding indeterminate Bayes factors that preclude comparison of the models. To calibrate the initial default priors, Cano et al. (2008) proposed integral priors as prior distributions for Bayesian model selection. These priors are defined as the solution of a system of two integral equations that, under some general assumptions, has a unique solution associated with a recurrent Markov chain. Later, in Cano et al. (2012b), integral priors were successfully applied in situations where they are known and unique, whether proper or not, and it was indicated how to deal with other situations. Here we present new situations that illustrate how this methodology works when the integral priors cannot be found explicitly but are known to be proper and unique (one-sided testing for the exponential distribution), and when recurrence of the associated Markov chain is difficult to check. To handle this latter scenario we impose a technical constraint on the space of imaginary training samples that virtually guarantees the existence and uniqueness of integral priors that are proper distributions. The improvement over other existing methodologies comes from the fact that this method is more automatic: we only need to simulate from the models involved and their posteriors to compute very well behaved Bayes factors.
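The simulation scheme behind this methodology can be sketched in code. The following is a minimal illustration, not one of the paper's examples: it uses two hypothetical normal location models M1: N(θ1, 1) and M2: N(θ2, 2) with flat default priors (so the default posterior given one imaginary training sample x is N(x, σi²)), and an arbitrary truncation bound K standing in for the constraint on the imaginary training sample space. One transition of the Markov chain draws a constrained imaginary sample from one model, updates the other model's parameter from its default posterior, and vice versa; under the constraint, the chain's stationary marginals play the role of the integral priors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) setup: two normal location models
# M1: N(theta1, SIG1^2) and M2: N(theta2, SIG2^2) with flat default priors,
# so pi_i^N(theta_i | x) = N(x, SIG_i^2) given one imaginary sample x.
SIG1, SIG2 = 1.0, np.sqrt(2.0)
K = 10.0  # arbitrary bound constraining the imaginary training sample space

def constrained_draw(mean, sd):
    """Draw an imaginary training sample from N(mean, sd) restricted to [-K, K]."""
    while True:
        x = rng.normal(mean, sd)
        if abs(x) <= K:
            return x

def integral_prior_chain(n_iter, theta2=0.0):
    """Simulate the Markov chain associated with the integral equations.

    One transition: x ~ f2(.|theta2) constrained to [-K, K], then
    theta1 ~ pi1^N(.|x); next y ~ f1(.|theta1) constrained, then
    theta2 ~ pi2^N(.|y). The stationary marginals of (theta1, theta2)
    approximate the integral priors in this toy setting.
    """
    t1 = np.empty(n_iter)
    t2 = np.empty(n_iter)
    for i in range(n_iter):
        x = constrained_draw(theta2, SIG2)   # imaginary sample under M2
        theta1 = rng.normal(x, SIG1)         # default posterior of M1
        y = constrained_draw(theta1, SIG1)   # imaginary sample under M1
        theta2 = rng.normal(y, SIG2)         # default posterior of M2
        t1[i], t2[i] = theta1, theta2
    return t1, t2

t1, t2 = integral_prior_chain(20_000)
```

Because the imaginary samples are confined to a compact set, the chain cannot drift off to infinity, which is the intuition behind the constraint making the integral priors proper; samples of `t1` and `t2` can then be fed into a Monte Carlo estimate of the marginal likelihoods and hence the Bayes factor.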
Citation
Juan Antonio Cano and Diego Salmerón. "Integral Priors and Constrained Imaginary Training Samples for Nested and Non-nested Bayesian Model Comparison." Bayesian Analysis 8(2): 361-380, June 2013. https://doi.org/10.1214/13-BA812