Advances in Applied Probability

Weak convergence rates of population versus single-chain stochastic approximation MCMC algorithms

Qifan Song, Mingqi Wu, and Faming Liang


Abstract

In this paper we establish the theory of weak convergence (toward a normal distribution) for both single-chain and population stochastic approximation Markov chain Monte Carlo (MCMC) algorithms (SAMCMC algorithms). Based on this theory, we give an explicit ratio of the convergence rates of the population SAMCMC algorithm and the single-chain SAMCMC algorithm. Our results provide a theoretical guarantee that population SAMCMC algorithms are asymptotically more efficient than single-chain SAMCMC algorithms when the gain factor sequence decreases more slowly than O(1/t), where t indexes the number of iterations. This is of interest for practical applications.
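The setting compared in the abstract can be illustrated with a toy sketch (not the paper's algorithm): a stochastic approximation recursion driven by Metropolis samples, with a gain sequence gamma_t = t^{-alpha} for alpha < 1, i.e. decreasing more slowly than O(1/t), and with either a single chain or a population of parallel chains feeding the update. All names and targets here are hypothetical choices for illustration.

```python
import math
import random

def mh_step(x, mu, rng, step=1.0):
    # One Metropolis step with a uniform random-walk proposal,
    # targeting the N(mu, 1) distribution.
    y = x + rng.uniform(-step, step)
    if math.log(rng.random()) < 0.5 * ((x - mu) ** 2 - (y - mu) ** 2):
        return y
    return x

def samcmc(n_chains, n_iter, mu=3.0, alpha=0.7, seed=1):
    # Toy stochastic approximation driven by MCMC samples:
    #   theta_{t+1} = theta_t + gamma_t * (chain average - theta_t),
    # with gain gamma_t = t^{-alpha}.  For alpha < 1 the gain decays
    # more slowly than O(1/t), the regime discussed in the abstract.
    # n_chains = 1 gives the single-chain version; n_chains > 1 gives
    # a population version, whose noisier term is averaged over chains.
    rng = random.Random(seed)
    xs = [0.0] * n_chains      # states of the parallel chains
    theta = 0.0                # parameter being estimated (true root: mu)
    for t in range(1, n_iter + 1):
        xs = [mh_step(x, mu, rng) for x in xs]
        gamma = t ** (-alpha)
        theta += gamma * (sum(xs) / n_chains - theta)
    return theta

single = samcmc(1, 20000)      # single-chain estimate of mu
population = samcmc(5, 20000)  # population (5-chain) estimate of mu
```

Both runs converge to the root mu = 3; the population version averages the noise over the parallel chains at each iteration, which is where its asymptotic efficiency gain arises in this gain-sequence regime.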

Article information

Source
Adv. in Appl. Probab., Volume 46, Number 4 (2014), 1059-1083.

Dates
First available in Project Euclid: 12 December 2014

Permanent link to this document
https://projecteuclid.org/euclid.aap/1418396243

Digital Object Identifier
doi:10.1239/aap/1418396243

Mathematical Reviews number (MathSciNet)
MR3290429

Zentralblatt MATH identifier
1305.60065

Subjects
Primary: 60J22: Computational methods in Markov chains [See also 65C40]
Secondary: 65C05: Monte Carlo methods

Keywords
Asymptotic normality; Markov chain Monte Carlo; stochastic approximation; Metropolis-Hastings algorithm

Citation

Song, Qifan; Wu, Mingqi; Liang, Faming. Weak convergence rates of population versus single-chain stochastic approximation MCMC algorithms. Adv. in Appl. Probab. 46 (2014), no. 4, 1059--1083. doi:10.1239/aap/1418396243. https://projecteuclid.org/euclid.aap/1418396243



References

  • Aldous, D., Lovász, L. and Winkler, P. (1997). Mixing times for uniformly ergodic Markov chains. Stoch. Process. Appl. 71, 165–185.
  • Andrieu, C. and Moulines, É. (2006). On the ergodicity properties of some adaptive MCMC algorithms. Ann. Appl. Prob. 16, 1462–1505.
  • Andrieu, C., Moulines, É. and Priouret, P. (2005). Stability of stochastic approximation under verifiable conditions. SIAM J. Control Optimization 44, 283–312.
  • Atchadé, Y. and Fort, G. (2010). Limit theorems for some adaptive MCMC algorithms with subgeometric kernels. Bernoulli 16, 116–154.
  • Atchadé, Y., Fort, G., Moulines, E. and Priouret, P. (2011). Adaptive Markov chain Monte Carlo: theory and methods. In Bayesian Time Series Models, Cambridge University Press, pp. 32–51.
  • Benveniste, A., Métivier, M. and Priouret, P. (1990). Adaptive Algorithms and Stochastic Approximations. Springer, Berlin.
  • Casella, G. and Berger, R. L. (2002). Statistical Inference, 2nd edn. Thomson Learning, Pacific Grove, CA.
  • Chen, H.-F. (2002). Stochastic Approximation and Its Applications. Kluwer, Dordrecht.
  • Cheon, S. and Liang, F. (2009). Bayesian phylogeny analysis via stochastic approximation Monte Carlo. Mol. Phylogenet. Evol. 53, 394–403.
  • Geman, S. and Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 6, 721–741.
  • Geyer, C. J. (1991). Markov chain Monte Carlo maximum likelihood. In Computing Science and Statistics: Proceedings of the 23rd Symposium on the Interface, Interface Foundation, Fairfax Station, VA, pp. 153–163.
  • Gilks, W. R., Roberts, G. O., and George, E. I. (1994). Adaptive direction sampling. J. R. Statist. Soc. Ser. D (The Statistician) 43, 179–189.
  • Gu, M. G. and Kong, F. H. (1998). A stochastic approximation algorithm with Markov chain Monte-Carlo method for incomplete data estimation problems. Proc. Nat. Acad. Sci. USA 95, 7270–7274.
  • Haario, H., Saksman, E. and Tamminen, J. (2001). An adaptive Metropolis algorithm. Bernoulli 7, 223–242.
  • Hall, P. and Heyde, C. C. (1980). Martingale Limit Theory and Its Applications. Academic Press, New York.
  • Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57, 97–109.
  • Liang, F. (2007). Continuous contour Monte Carlo for marginal density estimation with an application to a spatial statistical model. J. Comput. Graph. Statist. 16, 608–632.
  • Liang, F. (2009). Improving SAMC using smoothing methods: theory and applications to Bayesian model selection problems. Ann. Statist. 37, 2626–2654.
  • Liang, F. (2010). Trajectory averaging for stochastic approximation MCMC algorithms. Ann. Statist. 38, 2823–2856.
  • Liang, F., and Wong, W. H. (2000). Evolutionary Monte Carlo: applications to $C_p$ model sampling and change point problem. Statistica Sinica 10, 317–342.
  • Liang, F., and Wong, W. H. (2001). Real-parameter evolutionary Monte Carlo with applications to Bayesian mixture models. J. Amer. Statist. Assoc. 96, 653–666.
  • Liang, F. and Zhang, J. (2009). Learning Bayesian networks for discrete data. Comput. Statist. Data. Anal. 53, 865–876.
  • Liang, F., Liu, C. and Carroll, R. J. (2007). Stochastic approximation in Monte Carlo computation. J. Amer. Statist. Assoc. 102, 305–320.
  • Liu, J. S., Liang, F. and Wong, W. H. (2000). The multiple-try method and local optimization in Metropolis sampling. J. Amer. Statist. Assoc. 95, 121–134.
  • Marinari, E. and Parisi, G. (1992). Simulated tempering: a new Monte Carlo scheme. Europhys. Lett. 19, 451–458.
  • Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. and Teller, E. (1953). Equation of state calculations by fast computing machines. J. Chem. Phys. 21, 1087–1092.
  • Meyn, S. and Tweedie, R. L. (2009). Markov Chains and Stochastic Stability, 2nd edn. Cambridge University Press.
  • Nummelin, E. (1984). General Irreducible Markov Chains and Nonnegative Operators. Cambridge University Press.
  • Pelletier, M. (1998). Weak convergence rates for stochastic approximation with application to multiple targets and simulated annealing. Ann. Appl. Prob. 8, 10–44.
  • Robbins, H. and Monro, S. (1951). A stochastic approximation method. Ann. Math. Statist. 22, 400–407.
  • Roberts, G. O. and Rosenthal, J. S. (2009). Examples of adaptive MCMC. J. Comput. Graph. Statist. 18, 349–367.
  • Roberts, G. O. and Tweedie, R. L. (1996). Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms. Biometrika 83, 95–110.
  • Song, Q., Wu, M. and Liang, F. (2013). Supplementary material for `Weak convergence rates of population versus single-chain stochastic approximation MCMC algorithms'. Available at http://www.stat.tamu.edu/~fliang.
  • Tadić, V. (1997). On the convergence of stochastic iterative algorithms and their applications to machine learning. Technical report, Mihajlo Pupin Institute, Serbia, Yugoslavia. A short version of this paper was published in Proc. 36th Conf. Decision & Control, San Diego, CA, pp. 2281–2286.
  • Wang, F. and Landau, D. P. (2001). Efficient, multiple-range random walk algorithm to calculate the density of states. Phys. Rev. Lett. 86, 2050–2053.
  • Wong, W. H. and Liang, F. (1997). Dynamic weighting in Monte Carlo and optimization. Proc. Nat. Acad. Sci. USA 94, 14220–14224.
  • Younes, L. (1989). Parametric inference for imperfectly observed Gibbsian fields. Prob. Theory Relat. Fields 82, 625–645.
  • Ziedan, I. E. (1972). Explicit solution of the Lyapunov-matrix equation. IEEE Trans. Automatic Control 17, 379–381.