Electronic Journal of Statistics

Geometric ergodicity of Rao and Teh’s algorithm for Markov jump processes and CTBNs

Błażej Miasojedow and Wojciech Niemiro

Full-text: Open access

Abstract

Rao and Teh (2012, 2013) introduced an efficient MCMC algorithm for sampling from the posterior distribution of a hidden Markov jump process. The algorithm is based on the idea of sampling virtual jumps. In the present paper we show that the Markov chain generated by Rao and Teh’s algorithm is geometrically ergodic. To this end we establish a geometric drift condition towards a small set. A similar result is also proved for a special version of the algorithm, used for probabilistic inference in Continuous Time Bayesian Networks.
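The "virtual jumps" the abstract refers to come from uniformization: given the current trajectory and a dominating rate Ω exceeding every exit rate of the generator Q, extra candidate jump times are drawn from a state-dependent Poisson process, producing a random grid on which the hidden path can be resampled. A minimal sketch of that virtual-jump step is below; the function name, arguments, and test values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sample_virtual_jumps(times, states, Q, Omega, T, rng):
    """Sketch of the uniformization step: thin a rate-Omega Poisson
    process on [0, T] so that, while the path sits in state i,
    virtual jumps occur at rate Omega - |Q[i, i]|.
    times/states describe the current trajectory (times[0] == 0)."""
    # candidate points of a homogeneous Poisson process with rate Omega
    n = rng.poisson(Omega * T)
    candidates = np.sort(rng.uniform(0.0, T, size=n))
    virtual = []
    for t in candidates:
        # state occupied at time t
        idx = np.searchsorted(times, t, side="right") - 1
        i = states[idx]
        # accept with probability (Omega - |Q[i,i]|) / Omega;
        # note Q[i, i] is minus the exit rate of state i
        if rng.uniform() < (Omega + Q[i, i]) / Omega:
            virtual.append(t)
    return np.array(virtual)
```

Merging these virtual times with the true jump times yields the discrete grid on which Rao and Teh resample the hidden trajectory by a forward-backward pass; the geometric ergodicity result of the paper concerns the Markov chain produced by iterating these two steps.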

Article information

Source
Electron. J. Statist., Volume 11, Number 2 (2017), 4629-4648.

Dates
Received: November 2016
First available in Project Euclid: 18 November 2017

Permanent link to this document
https://projecteuclid.org/euclid.ejs/1510974128

Digital Object Identifier
doi:10.1214/17-EJS1348

Mathematical Reviews number (MathSciNet)
MR3724970

Zentralblatt MATH identifier
06816627

Subjects
Primary: 65C40: Computational Markov chains
Secondary: 65C05: Monte Carlo methods; 60J27: Continuous-time Markov processes on discrete state spaces

Keywords
Continuous time Markov processes; MCMC; hidden Markov models; posterior sampling; geometric ergodicity; drift condition; small set; continuous time Bayesian network

Rights
Creative Commons Attribution 4.0 International License.

Citation

Miasojedow, Błażej; Niemiro, Wojciech. Geometric ergodicity of Rao and Teh’s algorithm for Markov jump processes and CTBNs. Electron. J. Statist. 11 (2017), no. 2, 4629--4648. doi:10.1214/17-EJS1348. https://projecteuclid.org/euclid.ejs/1510974128



References

  • Boys, R. J., Wilkinson, D. J. and Kirkwood, T. B. (2008). Bayesian inference for a discretely observed stochastic kinetic model. Statistics and Computing 18, 125–135.
  • Carter, C. K. and Kohn, R. (1994). On Gibbs sampling for state space models. Biometrika 81, 541–553.
  • El-Hay, T., Friedman, N. and Kupferman, R. (2008). Gibbs sampling in factorized continuous-time Markov processes. In Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence (UAI-08) 169–178. AUAI Press, Corvallis, Oregon.
  • Fan, Y. and Shelton, C. R. (2008). Sampling for approximate inference in continuous time Bayesian networks. In Tenth International Symposium on Artificial Intelligence and Mathematics.
  • Fan, Y., Xu, J. and Shelton, C. R. (2010). Importance sampling for continuous time Bayesian networks. Journal of Machine Learning Research 11, 2115–2140.
  • Frühwirth-Schnatter, S. (1994). Data augmentation and dynamic linear models. Journal of Time Series Analysis 15, 183–202.
  • Golightly, A., Henderson, D. and Sherlock, C. (2015). Delayed acceptance particle MCMC for exact inference in stochastic kinetic models. Statistics and Computing 25, 1039–1055.
  • Golightly, A. and Wilkinson, D. J. (2011). Bayesian parameter inference for stochastic biochemical network models using particle Markov chain Monte Carlo. Interface Focus.
  • Golightly, A. and Wilkinson, D. J. (2014). Bayesian inference for Markov jump processes with informative observations. arXiv e-prints.
  • Johnson, A. A. (2009). Geometric Ergodicity of Gibbs Samplers. PhD thesis, University of Minnesota.
  • Lauritzen, S. L. (2001). Causal inference from graphical models. Complex Stochastic Systems 63–107.
  • Miasojedow, B. and Niemiro, W. (2016). Geometric ergodicity of Rao and Teh's algorithm for homogeneous Markov jump processes. Statistics & Probability Letters 113, 1–6.
  • Miasojedow, B., Niemiro, W., Noble, J. and Opalski, K. (2014). Metropolis-type algorithms for continuous time Bayesian networks. arXiv preprint arXiv:1403.4035.
  • Nodelman, U., Shelton, C. R. and Koller, D. (2002a). Continuous time Bayesian networks. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence 378–387.
  • Nodelman, U., Shelton, C. R. and Koller, D. (2002b). Learning continuous time Bayesian networks. In Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence 451–458. Morgan Kaufmann Publishers Inc.
  • Pearl, J. (1994). A probabilistic calculus of actions. In Proceedings of the Tenth International Conference on Uncertainty in Artificial Intelligence.
  • Rao, V. and Teh, Y. W. (2012). MCMC for continuous-time discrete-state systems. In Advances in Neural Information Processing Systems 701–709.
  • Rao, V. and Teh, Y. W. (2013). Fast MCMC sampling for Markov jump processes and extensions. Journal of Machine Learning Research 14, 3207–3232.
  • Roberts, G. O. and Rosenthal, J. S. (2004). General state space Markov chains and MCMC algorithms. Probability Surveys 1, 20–71.
  • Schweder, T. (1970). Composable Markov processes. Journal of Applied Probability 7, 400–410.