The Annals of Applied Probability

Variance bounding Markov chains

Gareth O. Roberts and Jeffrey S. Rosenthal


We introduce a new property of Markov chains, called variance bounding. We prove that, for reversible chains at least, variance bounding is weaker than, but closely related to, geometric ergodicity. Furthermore, variance bounding is equivalent to the existence of usual central limit theorems for all L2 functionals. Also, variance bounding (unlike geometric ergodicity) is preserved under the Peskun order. We close with some applications to Metropolis–Hastings algorithms.
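The article itself contains no code, but the central objects — a Metropolis–Hastings chain and the asymptotic variance appearing in its central limit theorem — are easy to illustrate. The sketch below (an assumption for illustration; the function names and the batch-means estimator are ours, not from the paper) runs a random-walk Metropolis chain targeting N(0, 1) and estimates the CLT variance σ² for the identity functional:

```python
import random
import math

def metropolis_hastings(log_target, x0, proposal_sd, n):
    """Random-walk Metropolis sampler: propose a Gaussian step,
    accept with probability min(1, pi(y)/pi(x))."""
    x, chain = x0, []
    for _ in range(n):
        y = x + random.gauss(0.0, proposal_sd)
        if math.log(random.random()) < log_target(y) - log_target(x):
            x = y
        chain.append(x)
    return chain

def batch_means_variance(samples, n_batches=20):
    """Batch-means estimate of the asymptotic variance sigma^2 in the
    Markov chain CLT: sqrt(n) * (sample mean - pi(f)) => N(0, sigma^2)."""
    b = len(samples) // n_batches
    means = [sum(samples[i * b:(i + 1) * b]) / b for i in range(n_batches)]
    overall = sum(means) / n_batches
    return b * sum((m - overall) ** 2 for m in means) / (n_batches - 1)

random.seed(1)
# Target N(0,1); log-density up to an additive constant suffices.
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 2.4, 20000)
sigma2 = batch_means_variance(chain)
```

Variance bounding, in this vocabulary, is the property that σ² is finite for every L² functional f, which the paper shows is exactly the condition for such CLTs to hold.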

Article information

Ann. Appl. Probab., Volume 18, Number 3 (2008), 1201-1214.

First available in Project Euclid: 26 May 2008


Primary: 60J10: Markov chains (discrete-time Markov processes on discrete state spaces)
Secondary: 65C40: Computational Markov chains 47A10: Spectrum, resolvent

Keywords: Markov chain Monte Carlo; Metropolis–Hastings algorithm; central limit theorem; variance; Peskun order; geometric ergodicity; spectrum


Roberts, Gareth O.; Rosenthal, Jeffrey S. Variance bounding Markov chains. Ann. Appl. Probab. 18 (2008), no. 3, 1201--1214. doi:10.1214/07-AAP486.



  • [1] Baxter, J. R. and Rosenthal, J. S. (1995). Rates of convergence for everywhere-positive Markov chains. Statist. Probab. Lett. 22 333–338.
  • [2] Bradley, R. C. (2005). Basic properties of strong mixing conditions: A survey and some open questions. Probab. Surv. 2 107–144.
  • [3] Chan, K. S. and Geyer, C. J. (1994). Discussion to “Markov chains for exploring posterior distributions” by L. Tierney. Ann. Statist. 22 1747–1758.
  • [4] Conway, J. B. (1985). A Course in Functional Analysis. Springer, New York.
  • [5] Diaconis, P., Holmes, S. and Neal, R. M. (2000). Analysis of a non-reversible Markov chain sampler. Ann. Appl. Probab. 10 726–752.
  • [6] Fill, J. A. (1991). Eigenvalue bounds on convergence to stationarity for non-reversible Markov chains, with an application to the exclusion process. Ann. Appl. Probab. 1 62–87.
  • [7] Geyer, C. J. (1992). Practical Markov chain Monte Carlo. Statist. Sci. 7 473–483.
  • [8] Gilks, W. R. and Roberts, G. O. (1995). Strategies for improving MCMC. In MCMC in Practice (W. R. Gilks, D. J. Spiegelhalter and S. Richardson, eds.) 89–114. Chapman and Hall, London.
  • [9] Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57 97–109.
  • [10] Hobert, J. P., Jones, G. L., Presnell, B. and Rosenthal, J. S. (2002). On the applicability of regenerative simulation in Markov chain Monte Carlo. Biometrika 89 731–743.
  • [11] Hobert, J. P. and Rosenthal, J. S. (2007). Norm comparisons for data augmentation. Preprint.
  • [12] Ibragimov, I. A. and Linnik, Y. V. (1971). Independent and Stationary Sequences of Random Variables. Wolters-Noordhoff, Groningen.
  • [13] Jones, G. L. (2004). On the Markov chain central limit theorem. Probab. Surv. 1 299–320.
  • [14] Jones, G. L. and Hobert, J. P. (2001). Honest exploration of intractable probability distributions via Markov chain Monte Carlo. Statist. Sci. 16 312–334.
  • [15] Jones, G. L. and Hobert, J. P. (2004). Sufficient burn-in for Gibbs samplers for a hierarchical random effects model. Ann. Statist. 32 784–817.
  • [16] Kipnis, C. and Varadhan, S. R. S. (1986). Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions. Comm. Math. Phys. 104 1–19.
  • [17] Lawler, G. F. and Sokal, A. D. (1988). Bounds on the L2 spectrum for Markov chains and Markov processes: A generalization of Cheeger’s inequality. Trans. Amer. Math. Soc. 309 557–580.
  • [18] Liu, J. S., Wong, W. and Kong, A. (1994). Covariance structure of the Gibbs sampler with applications to the comparisons of estimators and augmentation schemes. Biometrika 81 27–40.
  • [19] Mengersen, K. L. and Tweedie, R. L. (1996). Rates of convergence of the Hastings and Metropolis algorithms. Ann. Statist. 24 101–121.
  • [20] Metropolis, N., Rosenbluth, A., Rosenbluth, M., Teller, A. and Teller, E. (1953). Equation of state calculations by fast computing machines. J. Chem. Phys. 21 1087–1091.
  • [21] Meyn, S. P. and Tweedie, R. L. (1993). Markov Chains and Stochastic Stability. Springer, London.
  • [22] Mira, A. (2001). Ordering and improving the performance of Monte Carlo Markov chains. Statist. Sci. 16 340–350.
  • [23] Mira, A. and Geyer, C. J. (2000). On non-reversible Markov chains. In Fields Institute Communications 26. Monte Carlo Methods (N. Madras, ed.) 93–108. Amer. Math. Soc., Providence, RI.
  • [24] Mira, A., Møller, J. and Roberts, G. O. (2001). Perfect slice samplers. J. Roy. Statist. Soc. Ser. B 63 593–606.
  • [25] Neal, R. M. (2003). Slice sampling (with discussion). Ann. Statist. 31 705–767.
  • [26] Peskun, P. H. (1973). Optimum Monte Carlo sampling using Markov chains. Biometrika 60 607–612.
  • [27] Roberts, G. O. (1999). A note on acceptance rate criteria for CLTs for Metropolis–Hastings algorithms. J. Appl. Probab. 36 1210–1217.
  • [28] Roberts, G. O. and Rosenthal, J. S. (1997). Geometric ergodicity and hybrid Markov chains. Electron. Comm. Probab. 2 13–25.
  • [29] Roberts, G. O. and Rosenthal, J. S. (1998). Markov chain Monte Carlo: Some practical implications of theoretical results (with discussion). Canad. J. Statist. 26 5–31.
  • [30] Roberts, G. O. and Rosenthal, J. S. (1999). Convergence of slice sampler Markov chains. J. Roy. Statist. Soc. Ser. B 61 643–660.
  • [31] Roberts, G. O. and Rosenthal, J. S. (2006). Examples of adaptive MCMC. Preprint.
  • [32] Roberts, G. O. and Tweedie, R. L. (1996). Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms. Biometrika 83 95–110.
  • [33] Roberts, G. O. and Tweedie, R. L. (1996). Exponential convergence of Langevin diffusions and their discrete approximations. Bernoulli 2 341–364.
  • [34] Rosenthal, J. S. (1995). Minorization conditions and convergence rates for Markov chain Monte Carlo. J. Amer. Statist. Assoc. 90 558–566.
  • [35] Rosenthal, J. S. (2002). Quantitative convergence rates of Markov chains: A simple account. Electron. Comm. Probab. 7 123–128.
  • [36] Rosenthal, J. S. (2003). Asymptotic variance and convergence rates of nearly-periodic MCMC algorithms. J. Amer. Statist. Assoc. 98 169–177.
  • [37] Häggström, O. and Rosenthal, J. S. (2007). On variance conditions for Markov chain CLTs. Electron. Comm. Probab. To appear.
  • [38] Rudin, W. (1991). Functional Analysis, 2nd ed. McGraw-Hill, New York.
  • [39] Smith, A. F. M. and Roberts, G. O. (1993). Bayesian computation via the Gibbs sampler and related Markov chain Monte Carlo methods (with discussion). J. Roy. Statist. Soc. Ser. B 55 3–24.
  • [40] Tierney, L. (1994). Markov chains for exploring posterior distributions (with discussion). Ann. Statist. 22 1701–1762.
  • [41] Tierney, L. (1998). A note on Metropolis–Hastings kernels for general state spaces. Ann. Appl. Probab. 8 1–9.