Annals of Applied Probability

Harris recurrence of Metropolis-within-Gibbs and trans-dimensional Markov chains

Gareth O. Roberts and Jeffrey S. Rosenthal



A ϕ-irreducible and aperiodic Markov chain with stationary probability distribution will converge to its stationary distribution from almost all starting points. The property of Harris recurrence allows us to replace “almost all” by “all,” which is potentially important when running Markov chain Monte Carlo algorithms. Full-dimensional Metropolis–Hastings algorithms are known to be Harris recurrent. In this paper, we consider conditions under which Metropolis-within-Gibbs and trans-dimensional Markov chains are or are not Harris recurrent. We present a simple but natural two-dimensional counter-example showing how Harris recurrence can fail, and also a variety of positive results which guarantee Harris recurrence. We also present some open problems. We close with a discussion of the practical implications for MCMC algorithms.
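To fix ideas, the class of algorithms studied can be sketched as follows: each coordinate is updated in turn by a random-walk Metropolis step with the other coordinates held fixed. This minimal illustration is not taken from the paper; the target density, proposal scale, and all names are illustrative choices.

```python
import math
import random

def metropolis_within_gibbs(log_target, x0, n_iter, step=1.0, seed=0):
    """Sample from exp(log_target) by updating one coordinate at a time
    with a symmetric random-walk Metropolis step (Metropolis-within-Gibbs)."""
    rng = random.Random(seed)
    x = list(x0)
    chain = []
    for _ in range(n_iter):
        for i in range(len(x)):
            proposal = list(x)
            proposal[i] += rng.gauss(0.0, step)  # symmetric proposal in coordinate i
            log_ratio = log_target(proposal) - log_target(x)
            # Metropolis accept/reject; min(0, .) guards exp() against overflow
            if rng.random() < math.exp(min(0.0, log_ratio)):
                x = proposal
        chain.append(list(x))
    return chain

# Illustrative target: a standard bivariate normal (up to an additive constant).
log_pi = lambda v: -0.5 * (v[0] ** 2 + v[1] ** 2)
samples = metropolis_within_gibbs(log_pi, [5.0, -5.0], n_iter=5000)
```

From ϕ-irreducibility and aperiodicity alone, such a chain converges from π-almost every starting point; the paper's question is when "almost every" can be strengthened to "every," which is exactly what matters when a run is started at an arbitrary point such as the (5.0, −5.0) above.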

Article information

Ann. Appl. Probab., Volume 16, Number 4 (2006), 2123-2139.

First available in Project Euclid: 17 January 2007


Primary: 60J05 Discrete-time Markov processes on general state spaces
Secondary: 65C05 Monte Carlo methods; 60J22 Computational methods in Markov chains [See also 65C40]; 62F15 Bayesian inference

Keywords: Harris recurrence; Metropolis algorithm; Markov chain Monte Carlo; phi-irreducibility; trans-dimensional Markov chains


Roberts, Gareth O.; Rosenthal, Jeffrey S. Harris recurrence of Metropolis-within-Gibbs and trans-dimensional Markov chains. Ann. Appl. Probab. 16 (2006), no. 4, 2123--2139. doi:10.1214/105051606000000510.



  • Billingsley, P. (1995). Probability and Measure, 3rd ed. Wiley, New York.
  • Brooks, S. P., Giudici, P. and Roberts, G. O. (2003). Efficient construction of reversible jump Markov chain Monte Carlo proposal distributions (with discussion). J. Roy. Statist. Soc. Ser. B 65 3–55.
  • Chan, K. S. and Geyer, C. J. (1994). Comment on “Markov chains for exploring posterior distributions” by L. Tierney. Ann. Statist. 22 1747–1758.
  • Gelfand, A. E. and Smith, A. F. M. (1990). Sampling-based approaches to calculating marginal densities. J. Amer. Statist. Assoc. 85 398–409.
  • Geyer, C. J. (2003). Personal communication.
  • Geyer, C. J. (1996). Harris Recurrence web page. Available at
  • Green, P. J. (1995). Reversible jump MCMC computation and Bayesian model determination. Biometrika 82 711–732.
  • Harris, T. E. (1956). The existence of stationary measures for certain Markov processes. In Proc. 3rd Berkeley Symp. Math. Statist. Probab. 2 113–124. Univ. California Press, Berkeley.
  • Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57 97–109.
  • Hoel, P. G., Port, S. C. and Stone, C. J. (1972). Introduction to Stochastic Processes. Waveland Press, Prospect Heights, IL.
  • Jones, G. L. and Hobert, J. P. (2001). Honest exploration of intractable probability distributions via Markov chain Monte Carlo. Statist. Sci. 16 312–334.
  • Metropolis, N., Rosenbluth, A., Rosenbluth, M., Teller, A. and Teller, E. (1953). Equation of state calculations by fast computing machines. J. Chem. Phys. 21 1087–1091.
  • Meyn, S. P. and Tweedie, R. L. (1993). Markov Chains and Stochastic Stability. Springer, London.
  • Norman, G. E. and Filinov, V. S. (1969). Investigations of phase transitions by a Monte Carlo method. High Temperature 7 216–222.
  • Nummelin, E. (1984). General Irreducible Markov Chains and Nonnegative Operators. Cambridge Univ. Press.
  • Preston, C. J. (1977). Spatial birth-and-death processes. Bull. Inst. Internat. Statist. 46 371–391.
  • Roberts, G. O. and Rosenthal, J. S. (2004). General state space Markov chains and MCMC algorithms. Probab. Surveys 1 20–71.
  • Roberts, G. O., Rosenthal, J. S. and Schwartz, P. O. (1998). Convergence properties of perturbed Markov chains. J. Appl. Probab. 35 1–11.
  • Roberts, G. O. and Tweedie, R. L. (1999). Bounds on regeneration times and convergence rates for Markov chains. Stochastic Process. Appl. 80 211–229. [Correction Stochastic Process. Appl. 91 (2001) 337–338.]
  • Rosenthal, J. S. (1995). Minorization conditions and convergence rates for Markov chain Monte Carlo. J. Amer. Statist. Assoc. 90 558–566.
  • Rosenthal, J. S. (1995). Convergence rates of Markov chains. SIAM Rev. 37 387–405.
  • Rosenthal, J. S. (2000). A First Look at Rigorous Probability Theory. World Scientific, River Edge, NJ.
  • Rosenthal, J. S. (2001). A review of asymptotic convergence for general state space Markov chains. Far East J. Theor. Stat. 5 37–50.
  • Rosenthal, J. S. (2002). Quantitative convergence rates of Markov chains: A simple account. Electron. Comm. Probab. 7 123–128.
  • Tierney, L. (1994). Markov chains for exploring posterior distributions (with discussion). Ann. Statist. 22 1701–1762.
  • Tierney, L. (1998). A note on Metropolis–Hastings kernels for general state spaces. Ann. Appl. Probab. 8 1–9.