The Annals of Applied Probability

Stability of adversarial Markov chains, with an application to adaptive MCMC algorithms

Radu V. Craiu, Lawrence Gray, Krzysztof Łatuszyński, Neal Madras, Gareth O. Roberts, and Jeffrey S. Rosenthal

Full-text: Open access

Abstract

We consider whether ergodic Markov chains with bounded step size remain bounded in probability when their transitions are modified by an adversary on a bounded subset. We provide counterexamples to show that the answer is no in general, and prove theorems to show that the answer is yes under various additional assumptions. We then use our results to prove convergence of various adaptive Markov chain Monte Carlo algorithms.
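As illustrative context for the abstract, the following is a minimal Python sketch of the kind of adaptive MCMC algorithm referred to: a one-dimensional random-walk Metropolis sampler whose proposal scale is tuned on the fly with diminishing adaptation. This is not the paper's construction; the standard normal target, the Robbins–Monro-style tuning rule, and the 0.44 acceptance-rate goal are standard textbook choices used here only for illustration.

```python
import math
import random

def adaptive_metropolis(n_steps, seed=0):
    """One-dimensional random-walk Metropolis sampler for a standard
    normal target, with diminishing adaptation of the proposal scale
    (adaptation step sizes shrinking like 1/sqrt(n))."""
    rng = random.Random(seed)
    x = 0.0        # current state
    scale = 1.0    # proposal standard deviation, adapted on the fly
    accepts = 0
    samples = []
    for n in range(1, n_steps + 1):
        y = x + rng.gauss(0.0, scale)
        # Log acceptance ratio for the N(0,1) target: log pi(y) - log pi(x).
        log_ratio = 0.5 * (x * x - y * y)
        accepted = math.log(rng.random()) < log_ratio
        if accepted:
            x = y
            accepts += 1
        # Nudge the scale toward a ~0.44 acceptance rate; the adjustment
        # vanishes as n grows (the "diminishing adaptation" condition).
        scale *= math.exp(((1.0 if accepted else 0.0) - 0.44) / math.sqrt(n))
        samples.append(x)
    return samples, accepts / n_steps
```

Because the adaptation size decays, the chain behaves asymptotically like a fixed-kernel Metropolis sampler; results of the kind proved in this article give conditions under which such on-the-fly tuning preserves convergence to the target distribution.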

Article information

Source
Ann. Appl. Probab., Volume 25, Number 6 (2015), 3592–3623.

Dates
Received: March 2014
Revised: August 2014
First available in Project Euclid: 1 October 2015

Permanent link to this document
https://projecteuclid.org/euclid.aoap/1443703783

Digital Object Identifier
doi:10.1214/14-AAP1083

Mathematical Reviews number (MathSciNet)
MR3404645

Zentralblatt MATH identifier
1328.60169

Subjects
Primary: 60J05: Discrete-time Markov processes on general state spaces
Secondary: 60J22: Computational methods in Markov chains [See also 65C40] 62F10: Point estimation 62F15: Bayesian inference

Keywords
Markov chain; stability; convergence; ergodicity; perturbation; adaptive MCMC algorithms

Citation

Craiu, Radu V.; Gray, Lawrence; Łatuszyński, Krzysztof; Madras, Neal; Roberts, Gareth O.; Rosenthal, Jeffrey S. Stability of adversarial Markov chains, with an application to adaptive MCMC algorithms. Ann. Appl. Probab. 25 (2015), no. 6, 3592--3623. doi:10.1214/14-AAP1083. https://projecteuclid.org/euclid.aoap/1443703783

References

  • [1] Andrieu, C. and Moulines, É. (2006). On the ergodicity properties of some adaptive MCMC algorithms. Ann. Appl. Probab. 16 1462–1505.
  • [2] Atchadé, Y. and Fort, G. (2010). Limit theorems for some adaptive MCMC algorithms with subgeometric kernels. Bernoulli 16 116–154.
  • [3] Atchadé, Y. F. and Rosenthal, J. S. (2005). On adaptive Markov chain Monte Carlo algorithms. Bernoulli 11 815–828.
  • [4] Bai, Y., Craiu, R. V. and Di Narzo, A. F. (2011). Divide and conquer: A mixture-based approach to regional adaptation for MCMC. J. Comput. Graph. Statist. 20 63–79.
  • [5] Bai, Y., Roberts, G. O. and Rosenthal, J. S. (2011). On the containment condition for adaptive Markov chain Monte Carlo algorithms. Adv. Appl. Stat. 21 1–54.
  • [6] Borodin, A., Kleinberg, J., Raghavan, P., Sudan, M. and Williamson, D. P. (2001). Adversarial queuing theory. J. ACM 48 13–38.
  • [7] Brooks, S., Gelman, A., Jones, G. L. and Meng, X.-L., eds. (2011). Handbook of Markov Chain Monte Carlo. CRC Press, Boca Raton, FL.
  • [8] Craiu, R. V., Rosenthal, J. and Yang, C. (2009). Learn from thy neighbor: Parallel-chain and regional adaptive MCMC. J. Amer. Statist. Assoc. 104 1454–1466.
  • [9] Flegal, J. (2012). Documentation for the R package ‘mcmcse.’ Available at http://cran.r-project.org/web/packages/mcmcse/mcmcse.pdf.
  • [10] Giordani, P. and Kohn, R. (2010). Adaptive independent Metropolis–Hastings by fast estimation of mixtures of normals. J. Comput. Graph. Statist. 19 243–259.
  • [11] Haario, H., Saksman, E. and Tamminen, J. (2001). An adaptive Metropolis algorithm. Bernoulli 7 223–242.
  • [12] Haas, M. (1998). Value of IgG subclasses and ultrastructural markers in predicting latent membranous lupus nephritis. Mod. Pathol. 11 147A.
  • [13] Hajek, B. (1982). Hitting-time and occupation-time bounds implied by drift analysis with applications. Adv. in Appl. Probab. 14 502–525.
  • [14] Kac, M. (1947). On the notion of recurrence in discrete stochastic processes. Bull. Amer. Math. Soc. 53 1002–1010.
  • [15] Łatuszyński, K., Roberts, G. O. and Rosenthal, J. S. (2013). Adaptive Gibbs samplers and related MCMC methods. Ann. Appl. Probab. 23 66–98.
  • [16] Łatuszyński, K. and Rosenthal, J. S. (2014). The containment condition and AdapFail algorithms. J. Appl. Probab. 51 1189–1195.
  • [17] Meyn, S. P. and Tweedie, R. L. (1993). Markov Chains and Stochastic Stability. Springer, London.
  • [18] Miasojedow, B. (2011). Oszacowania błędów estymatorów stosowanych w markowowskich metodach Monte Carlo [Error bounds for estimators used in Markov chain Monte Carlo methods]. Ph.D. thesis, Univ. Warsaw (in Polish).
  • [19] Nummelin, E. (1984). General Irreducible Markov Chains and Nonnegative Operators. Cambridge Tracts in Mathematics 83. Cambridge Univ. Press, Cambridge.
  • [20] Orey, S. (1971). Lecture Notes on Limit Theorems for Markov Chain Transition Probabilities. Van Nostrand-Reinhold, London.
  • [21] Pemantle, R. and Rosenthal, J. S. (1999). Moment conditions for a sequence with negative drift to be uniformly bounded in $L^{r}$. Stochastic Process. Appl. 82 143–155.
  • [22] Roberts, G. O. and Rosenthal, J. S. (1998). Two convergence properties of hybrid samplers. Ann. Appl. Probab. 8 397–407.
  • [23] Roberts, G. O. and Rosenthal, J. S. (2004). General state space Markov chains and MCMC algorithms. Probab. Surv. 1 20–71.
  • [24] Roberts, G. O. and Rosenthal, J. S. (2007). Coupling and ergodicity of adaptive Markov chain Monte Carlo algorithms. J. Appl. Probab. 44 458–475.
  • [25] Roberts, G. O. and Rosenthal, J. S. (2009). Examples of adaptive MCMC. J. Comput. Graph. Statist. 18 349–367.
  • [26] Rosenthal, J. S. (2011). Optimal proposal distributions and adaptive MCMC. In Handbook of Markov Chain Monte Carlo 93–111. CRC Press, Boca Raton, FL.
  • [27] Rudin, W. (1976). Principles of Mathematical Analysis, 3rd ed. McGraw-Hill, New York.
  • [28] Saksman, E. and Vihola, M. (2010). On the ergodicity of the adaptive Metropolis algorithm on unbounded domains. Ann. Appl. Probab. 20 2178–2203.
  • [29] van Dyk, D. A. and Meng, X.-L. (2001). The art of data augmentation. J. Comput. Graph. Statist. 10 1–111.