## Bernoulli


### On the Poisson equation for Metropolis–Hastings chains

#### Abstract

This paper defines an approximation scheme for a solution of the Poisson equation of a geometrically ergodic Metropolis–Hastings chain $\Phi$. The scheme is based on the idea of weak approximation and gives rise to a natural sequence of control variates for the ergodic average $S_{k}(F)=(1/k)\sum_{i=1}^{k}F(\Phi_{i})$, where $F$ is the force function in the Poisson equation. The main results show that the sequence of asymptotic variances (in the CLTs for the control-variate estimators) converges to zero, and they establish the rate of this convergence. Numerical examples for a double-well potential are discussed.
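The variance-reduction mechanism behind the abstract can be seen in a toy setting. The sketch below (not from the paper; the 5-state chain, the potential, and the force function $F(x)=x$ are all illustrative assumptions) builds a finite-state Metropolis chain targeting a double-well distribution, solves its Poisson equation $(I-P)\hat{F} = F - \pi(F)$ exactly by linear algebra, and checks the identity $F - (\hat{F} - P\hat{F}) = \pi(F)$: with an exact Poisson solution, the control variate $\hat{F} - P\hat{F}$ removes all variability from the ergodic average, which is why a sequence of increasingly good approximate solutions drives the asymptotic variance to zero.

```python
import numpy as np

# Illustrative toy example (not from the paper): a 5-state Metropolis chain
# with a "double-well" stationary distribution pi ~ exp(-energy).
n = 5
energy = np.array([0.0, 2.0, 4.0, 2.0, 0.0])  # double-well potential
pi = np.exp(-energy)
pi /= pi.sum()

# Metropolis transition matrix with nearest-neighbour proposals.
P = np.zeros((n, n))
for x in range(n):
    for y in (x - 1, x + 1):
        if 0 <= y < n:
            P[x, y] = 0.5 * min(1.0, pi[y] / pi[x])
    P[x, x] = 1.0 - P[x].sum()

F = np.arange(n, dtype=float)  # force function, here simply F(x) = x
piF = pi @ F                   # stationary mean pi(F)

# Solve the Poisson equation (I - P) Fhat = F - pi(F); the solution is
# unique only up to an additive constant, so append the centering
# condition pi(Fhat) = 0 to pin it down.
A = np.vstack([np.eye(n) - P, pi])
b = np.append(F - piF, 0.0)
Fhat, *_ = np.linalg.lstsq(A, b, rcond=None)

# With the exact solution, F - (Fhat - P Fhat) is the constant pi(F),
# so the control-variate estimator has zero asymptotic variance.
residual = F - (Fhat - P @ Fhat)
print(np.allclose(residual, piF))  # True
```

In practice $\hat{F}$ is only an approximation of the Poisson solution, so the residual above is not exactly constant; the paper's contribution is a scheme producing approximations whose associated asymptotic variances converge to zero at a quantified rate.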

#### Article information

**Source**
Bernoulli, Volume 24, Number 3 (2018), 2401–2428.

**Dates**
Received: August 2016
Revised: October 2016
First available in Project Euclid: 2 February 2018

**Permanent link to this document**
https://projecteuclid.org/euclid.bj/1517540478

**Digital Object Identifier**
doi:10.3150/17-BEJ932

**Mathematical Reviews number (MathSciNet)**
MR3757533

**Zentralblatt MATH identifier**
06839270

#### Citation

Mijatović, Aleksandar; Vogrinc, Jure. On the Poisson equation for Metropolis–Hastings chains. Bernoulli 24 (2018), no. 3, 2401–2428. doi:10.3150/17-BEJ932. https://projecteuclid.org/euclid.bj/1517540478

#### References

• [1] Andradóttir, S., Heyman, D.P. and Ott, T.J. (1993). Variance reduction through smoothing and control variates for Markov chain simulations. ACM Trans. Model. Comput. Simul. 3 167–189.
• [2] Baxendale, P.H. (2005). Renewal theory and computable convergence rates for geometrically ergodic Markov chains. Ann. Appl. Probab. 15 700–738.
• [3] Dellaportas, P. and Kontoyiannis, I. (2012). Control variates for estimation based on reversible Markov chain Monte Carlo samplers. J. R. Stat. Soc. Ser. B. Stat. Methodol. 74 133–161.
• [4] Devraj, A. and Meyn, S. (2016). Differential TD learning for value function approximation. Available at arXiv:1604.01828v1.
• [5] Geyer, C.J. (1992). Practical Markov chain Monte Carlo. Statist. Sci. 7 473–483.
• [6] Glynn, P.W. and Meyn, S.P. (1996). A Liapounov bound for solutions of the Poisson equation. Ann. Probab. 24 916–931.
• [7] Hastings, W.K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57 97–109.
• [8] Henderson, S.G. (1997). Variance Reduction Via an Approximating Markov Process. Ph.D. thesis, Department of Operations Research, Stanford University.
• [9] Henderson, S.G. and Glynn, P.W. (2002). Approximating martingales for variance reduction in Markov process simulation. Math. Oper. Res. 27 253–271.
• [10] Henderson, S.G., Meyn, S.P. and Tadić, V.B. (2003). Performance evaluation and policy selection in multiclass networks. Discrete Event Dyn. Syst. 13 149–189.
• [11] Hernández-Lerma, O. and Lasserre, J.B. (1999). Further Topics on Discrete-Time Markov Control Processes. Applications of Mathematics (New York) 42. New York: Springer.
• [12] Hoekstra, A.H. and Steutel, F.W. (1984). Limit theorems for Markov chains of finite rank. Linear Algebra Appl. 60 65–77.
• [13] Jarner, S.F. and Hansen, E. (2000). Geometric ergodicity of Metropolis algorithms. Stochastic Process. Appl. 85 341–361.
• [14] Kipnis, C. and Varadhan, S.R.S. (1986). Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions. Comm. Math. Phys. 104 1–19.
• [15] Madras, N. and Randall, D. (2002). Markov chain decomposition for convergence rate analysis. Ann. Appl. Probab. 12 581–606.
• [16] Makowski, A.M. and Shwartz, A. (2002). The Poisson equation for countable Markov chains: Probabilistic methods and interpretations. In Handbook of Markov Decision Processes. Internat. Ser. Oper. Res. Management Sci. 40 269–303. Boston, MA: Kluwer Academic.
• [17] Mengersen, K.L. and Tweedie, R.L. (1996). Rates of convergence of the Hastings and Metropolis algorithms. Ann. Statist. 24 101–121.
• [18] Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H. and Teller, E. (1953). Equation of state calculations by fast computing machines. J. Chem. Phys. 21 1087–1092.
• [19] Meyn, S. (2008). Control Techniques for Complex Networks. Cambridge: Cambridge Univ. Press.
• [20] Meyn, S. and Tweedie, R.L. (2009). Markov Chains and Stochastic Stability, 2nd ed. Cambridge: Cambridge Univ. Press.
• [21] Meyn, S.P. and Tweedie, R.L. (1994). Computable bounds for geometric convergence rates of Markov chains. Ann. Appl. Probab. 4 981–1011.
• [22] Mijatović, A. (2007). Spectral properties of trinomial trees. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 463 1681–1696.
• [23] Mijatović, A. and Pistorius, M. (2013). Continuously monitored barrier options under Markov processes. Math. Finance 23 1–38.
• [24] Mijatović, A., Vidmar, M. and Jacka, S. (2014). Markov chain approximations for transition densities of Lévy processes. Electron. J. Probab. 19 no. 7, 37.
• [25] Roberts, G.O. and Rosenthal, J.S. (1997). Geometric ergodicity and hybrid Markov chains. Electron. Commun. Probab. 2 13–25.
• [26] Roberts, G.O. and Rosenthal, J.S. (2004). General state space Markov chains and MCMC algorithms. Probab. Surv. 1 20–71.
• [27] Roberts, G.O. and Tweedie, R.L. (1996). Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms. Biometrika 83 95–110.
• [28] Roberts, G.O. and Tweedie, R.L. (1996). Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli 2 341–363.
• [29] Rosenthal, J.S. (1992). Convergence of pseudo-finite Markov chains. Unpublished manuscript.
• [30] Runnenburg, J.T. and Steutel, F.W. (1962). On Markov chains, the transition function of which is a finite sum of products of functions on one variable: Preliminary report. Stichting Mathematisch Centrum. Statistische Afdeling S304 1–22.
• [31] Tierney, L. (1994). Markov chains for exploring posterior distributions. Ann. Statist. 22 1701–1762.