## Bernoulli

Volume 19, Number 5A (2013), 2033–2066.

### Nonasymptotic bounds on the estimation error of MCMC algorithms

#### Abstract

We address the problem of upper bounding the mean square error of MCMC estimators. Our analysis is nonasymptotic. We first establish a general result valid for essentially all ergodic Markov chains encountered in Bayesian computation and for a possibly unbounded target function $f$. The bound is sharp in the sense that its leading term is exactly $\sigma_{\mathrm{as}}^{2}(P,f)/n$, where $\sigma_{\mathrm{as}}^{2}(P,f)$ is the CLT asymptotic variance. We then specialize to additional assumptions and give explicit, computable bounds for geometrically and polynomially ergodic Markov chains under quantitative drift conditions. As a corollary, we provide results on confidence estimation.
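The $\sigma_{\mathrm{as}}^{2}(P,f)/n$ leading term says the mean square error of an ergodic average decays at rate $1/n$. The following minimal sketch (not taken from the paper; the target, proposal, and all parameter values are illustrative assumptions) simulates a random-walk Metropolis chain targeting the standard normal with $f(x)=x$, so $\pi(f)=0$, and estimates the MSE empirically over independent replications:

```python
import math
import random

def metropolis_chain(n, step=1.0, seed=0):
    """Random-walk Metropolis chain targeting N(0,1) (illustrative target)."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        y = x + rng.uniform(-step, step)
        # Accept with probability min(1, pi(y)/pi(x)); for pi ∝ exp(-x^2/2)
        # the log acceptance ratio is (x^2 - y^2)/2.
        if rng.random() < math.exp(min(0.0, 0.5 * (x * x - y * y))):
            x = y
        out.append(x)
    return out

def mse_of_mean(n, reps=200):
    """Monte Carlo estimate of E[(n^{-1} sum f(X_i) - pi(f))^2] for f(x) = x."""
    errs = []
    for r in range(reps):
        xs = metropolis_chain(n, seed=r)
        est = sum(xs) / n            # ergodic average; pi(f) = 0 here
        errs.append(est * est)       # squared estimation error
    return sum(errs) / reps
```

Multiplying the empirical MSE by $n$ should stabilize near $\sigma_{\mathrm{as}}^{2}(P,f)$ as $n$ grows, which is exactly the behavior the paper's nonasymptotic bounds quantify.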

#### Article information

Source
Bernoulli, Volume 19, Number 5A (2013), 2033–2066.

Dates
First available in Project Euclid: 5 November 2013

Permanent link to this document
https://projecteuclid.org/euclid.bj/1383661213

Digital Object Identifier
doi:10.3150/12-BEJ442

Mathematical Reviews number (MathSciNet)
MR3129043

Zentralblatt MATH identifier
06254553

#### Citation

Łatuszyński, Krzysztof; Miasojedow, Błażej; Niemiro, Wojciech. Nonasymptotic bounds on the estimation error of MCMC algorithms. Bernoulli 19 (2013), no. 5A, 2033–2066. doi:10.3150/12-BEJ442. https://projecteuclid.org/euclid.bj/1383661213
