Electronic Journal of Probability

Can the Adaptive Metropolis Algorithm Collapse Without the Covariance Lower Bound?

Matti Vihola

Full-text: Open access

Abstract

The Adaptive Metropolis (AM) algorithm is based on the symmetric random-walk Metropolis algorithm. The proposal distribution has the following time-dependent covariance matrix at step $n+1$: $S_n=\mathrm{Cov}(X_1,\ldots,X_n)+\varepsilon I$, that is, the sample covariance matrix of the history of the chain plus a small constant $\varepsilon>0$ multiple of the identity matrix $I$. The lower bound on the eigenvalues of $S_n$ induced by the factor $\varepsilon I$ is theoretically convenient but practically cumbersome, as a good value for the parameter $\varepsilon$ may not always be easy to choose. This article considers variants of the AM algorithm that do not explicitly bound the eigenvalues of $S_n$ away from zero. The behaviour of $S_n$ is studied in detail, indicating that the eigenvalues of $S_n$ do not tend to collapse to zero in general. In dimension one, it is shown that $S_n$ is bounded away from zero if the logarithmic target density is uniformly continuous. For a modification of the AM algorithm including an additional fixed component in the proposal distribution, the eigenvalues of $S_n$ are shown to stay away from zero under a practically non-restrictive condition. This result implies a strong law of large numbers for super-exponentially decaying target distributions with regular contours.
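For orientation, the covariance construction described in the abstract can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the paper's implementation: the initial non-adaptive period `n_init`, the default value of `eps`, and the omission of a dimension-dependent proposal scaling are all assumptions of the sketch.

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_steps, eps=1e-6, n_init=100, seed=0):
    """Sketch of the AM algorithm: the Gaussian random-walk proposal at
    step n+1 has covariance S_n = Cov(X_1, ..., X_n) + eps * I."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    lp = log_target(x)
    history = [x.copy()]
    for n in range(n_steps):
        if n >= n_init:
            # sample covariance of the full chain history, plus eps * I;
            # the eps * I term keeps the proposal covariance positive definite
            S = np.cov(np.asarray(history).T).reshape(d, d) + eps * np.eye(d)
        else:
            # fixed proposal covariance during a short initial period,
            # before the history yields a usable covariance estimate
            S = np.eye(d)
        prop = rng.multivariate_normal(x, S)
        lp_prop = log_target(prop)
        # symmetric proposal: accept with probability min(1, pi(y)/pi(x))
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        history.append(x.copy())
    return np.asarray(history)
```

Setting `eps=0` here yields the unbounded variant the paper studies; the question is whether the eigenvalues of the sample covariance can then collapse to zero.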

Article information

Source
Electron. J. Probab., Volume 16 (2011), paper no. 2, 45-75.

Dates
Accepted: 2 January 2011
First available in Project Euclid: 1 June 2016

Permanent link to this document
https://projecteuclid.org/euclid.ejp/1464820171

Digital Object Identifier
doi:10.1214/EJP.v16-840

Mathematical Reviews number (MathSciNet)
MR2749772

Zentralblatt MATH identifier
1226.65007

Subjects
Primary: 65C40: Computational Markov chains
Secondary: 60J27: Continuous-time Markov processes on discrete state spaces; 93E15: Stochastic stability; 93E35: Stochastic learning and adaptive control

Keywords
adaptive Markov chain Monte Carlo; Metropolis algorithm; stability; stochastic approximation

Rights
This work is licensed under a Creative Commons Attribution 3.0 License.

Citation

Vihola, Matti. Can the Adaptive Metropolis Algorithm Collapse Without the Covariance Lower Bound? Electron. J. Probab. 16 (2011), paper no. 2, 45–75. doi:10.1214/EJP.v16-840. https://projecteuclid.org/euclid.ejp/1464820171


Export citation

References

  • Andrieu, Christophe; Moulines, Éric. On the ergodicity properties of some adaptive MCMC algorithms. Ann. Appl. Probab. 16 (2006), no. 3, 1462–1505.
  • Andrieu, Christophe; Robert, Christian. Controlled MCMC for optimal sampling. Technical Report Ceremade 0125, Universite Paris Dauphine (2001).
  • Andrieu, Christophe; Thoms, Johannes. A tutorial on adaptive MCMC. Stat. Comput. 18 (2008), no. 4, 343–373.
  • Atchadé, Yves; Fort, Gersende. Limit theorems for some adaptive MCMC algorithms with subgeometric kernels. Bernoulli 16 (2010), no. 1, 116–154. (Review)
  • Atchadé, Yves; Rosenthal, Jeffrey. On adaptive Markov chain Monte Carlo algorithms. Bernoulli 11 (2005), no. 5, 815-828.
  • Athreya, K. B.; Ney, P. A new approach to the limit theory of recurrent Markov chains. Trans. Amer. Math. Soc. 245 (1978), 493–501.
  • Bai, Yan; Roberts, Gareth; Rosenthal, Jeffrey. On the containment condition for adaptive Markov chain Monte Carlo algorithms. Preprint (2008).
  • Esseen, Carl-Gustav. On the Kolmogorov-Rogozin inequality for the concentration function. Z. Wahrscheinlichkeitstheorie verw. Gebiete 5 (1966), no. 3, 210-216.
  • Haario, Heikki; Saksman, Eero; Tamminen, Johanna. An adaptive Metropolis algorithm. Bernoulli 7 (2001), no. 2, 223-242.
  • Hall, P.; Heyde, C. C. Martingale limit theory and its application. Probability and Mathematical Statistics. Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York-London, 1980. xii+308 pp. ISBN: 0-12-319350-8
  • Jarner, Søren Fiig; Hansen, Ernst. Geometric ergodicity of Metropolis algorithms. Stochastic Process. Appl. 85 (2000), no. 2, 341–361.
  • Nummelin, Esa. A splitting technique for Harris recurrent Markov chains. Z. Wahrscheinlichkeitstheorie verw. Gebiete 43 (1978), no. 3, 309-318.
  • Roberts, Gareth O.; Rosenthal, Jeffrey S. General state space Markov chains and MCMC algorithms. Probab. Surv. 1 (2004), 20–71 (electronic).
  • Roberts, Gareth O.; Rosenthal, Jeffrey S. Coupling and ergodicity of adaptive Markov chain Monte Carlo algorithms. J. Appl. Probab., 44 (2007), no. 2, 458-475.
  • Roberts, Gareth O.; Rosenthal, Jeffrey S. Examples of adaptive MCMC. J. Comput. Graph. Statist. 18 (2009), no. 2, 349-367.
  • Rogozin, B. A. An estimate for concentration functions. Theory Probab. Appl. 6 (1961), no. 1, 94-97.
  • Saksman, Eero; Vihola, Matti. On the ergodicity of the adaptive Metropolis algorithm on unbounded domains. Ann. Appl. Probab. 20 (2010), no. 6, 2178-2203.
  • Vihola, Matti. On the stability and ergodicity of an adaptive scaling Metropolis algorithm. Preprint (2009), arXiv:0903.4061v2