Bernoulli

On particle Gibbs sampling

Nicolas Chopin and Sumeetpal S. Singh


Abstract

The particle Gibbs sampler is a Markov chain Monte Carlo (MCMC) algorithm for sampling from the full posterior distribution of a state-space model. It does so by executing Gibbs sampling steps on an extended target distribution defined on the space of the auxiliary variables generated by an interacting particle system. This paper makes the following contributions to the theoretical study of this algorithm. Firstly, we present a coupling construction between two particle Gibbs updates from different starting points, and we show that the coupling probability may be made arbitrarily close to one by increasing the number of particles. We obtain as a direct corollary that the particle Gibbs kernel is uniformly ergodic. Secondly, we show that including an additional Gibbs sampling step that reselects the ancestor variables of the particle Gibbs extended target distribution, a popular approach in practice to improve mixing, does indeed yield a theoretically more efficient algorithm, as measured by the asymptotic variance. Thirdly, we extend particle Gibbs to work with lower-variance resampling schemes. A detailed numerical study demonstrates the efficiency of particle Gibbs and the proposed variants.
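
To make the abstract's description concrete, here is a minimal sketch of a single particle Gibbs update (a conditional sequential Monte Carlo sweep) for a univariate state-space model. It is an illustration under stated assumptions, not the paper's implementation: the callables sample_x0, sample_xt and log_g, the function name particle_gibbs_update, and the use of numpy.random.Generator are hypothetical placeholders supplied for the example; multinomial resampling is used, and the additional ancestor-reselection step analysed in the paper is omitted for brevity.

```python
# Minimal sketch of one particle Gibbs (conditional SMC) update.
# Assumed user-supplied callables (hypothetical, not from the paper):
#   sample_x0(rng)            -> draw from the initial state distribution
#   sample_xt(x_prev, t, rng) -> draw from the transition kernel at time t
#   log_g(y_t, x_t)           -> log observation density
# Particle 0 is pinned to the reference trajectory; the others are refreshed.
import numpy as np

def particle_gibbs_update(x_ref, y, N, sample_x0, sample_xt, log_g, rng):
    """One conditional SMC sweep returning a new trajectory.

    x_ref : array (T,)  reference trajectory retained from the previous sweep
    y     : array (T,)  observations
    N     : int         number of particles (particle 0 is the reference)
    rng   : numpy.random.Generator
    """
    T = len(y)
    X = np.empty((T, N))              # particle positions
    A = np.empty((T, N), dtype=int)   # ancestor indices
    # Time 0: pin the reference in slot 0, sample N-1 fresh particles.
    X[0, 0] = x_ref[0]
    X[0, 1:] = [sample_x0(rng) for _ in range(N - 1)]
    logw = np.array([log_g(y[0], X[0, n]) for n in range(N)])
    for t in range(1, T):
        # Normalise the time t-1 weights and resample ancestors for
        # particles 1..N-1; the reference particle keeps itself as ancestor.
        w = np.exp(logw - logw.max())
        w /= w.sum()
        A[t, 0] = 0
        A[t, 1:] = rng.choice(N, size=N - 1, p=w)
        # Propagate: the reference particle is forced to follow x_ref.
        X[t, 0] = x_ref[t]
        X[t, 1:] = [sample_xt(X[t - 1, A[t, n]], t, rng) for n in range(1, N)]
        logw = np.array([log_g(y[t], X[t, n]) for n in range(N)])
    # Draw the output trajectory: sample a terminal particle from the final
    # weights and trace its ancestral line back through A.
    w = np.exp(logw - logw.max())
    w /= w.sum()
    k = rng.choice(N, p=w)
    path = np.empty(T)
    for t in range(T - 1, -1, -1):
        path[t] = X[t, k]
        if t > 0:
            k = A[t, k]
    return path
```

In a full sampler this kernel would typically be alternated with, for example, parameter updates conditional on the sampled trajectory. The paper's coupling result concerns this type of update: increasing the number of particles N drives the coupling probability of two such updates towards one, which yields uniform ergodicity; the ancestor-reselection variant and lower-variance resampling schemes studied in the paper modify the resampling step shown above.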

Article information

Source
Bernoulli, Volume 21, Number 3 (2015), 1855-1883.

Dates
Received: January 2014
First available in Project Euclid: 27 May 2015

Permanent link to this document
https://projecteuclid.org/euclid.bj/1432732040

Digital Object Identifier
doi:10.3150/14-BEJ629

Mathematical Reviews number (MathSciNet)
MR3352064

Zentralblatt MATH identifier
1333.60164

Keywords
Feynman–Kac formulae; Gibbs sampling; particle filtering; particle Markov chain Monte Carlo; sequential Monte Carlo

Citation

Chopin, Nicolas; Singh, Sumeetpal S. On particle Gibbs sampling. Bernoulli 21 (2015), no. 3, 1855-1883. doi:10.3150/14-BEJ629. https://projecteuclid.org/euclid.bj/1432732040


