Open Access
Convergence rates for a class of estimators based on Stein’s method
Chris J. Oates, Jon Cockayne, François-Xavier Briol, Mark Girolami
Bernoulli 25(2): 1141-1159 (May 2019). DOI: 10.3150/17-BEJ1016

Abstract

Gradient information on the sampling distribution can be used to reduce the variance of Monte Carlo estimators via Stein’s method. An important application is that of estimating an expectation of a test function along the sample path of a Markov chain, where gradient information enables convergence rate improvement at the cost of a linear system which must be solved. The contribution of this paper is to establish theoretical bounds on convergence rates for a class of estimators based on Stein’s method. Our analysis accounts for (i) the degree of smoothness of the sampling distribution and test function, (ii) the dimension of the state space, and (iii) the case of non-independent samples arising from a Markov chain. These results provide insight into the rapid convergence of gradient-based estimators observed for low-dimensional problems, as well as clarifying a curse-of-dimension that appears inherent to such methods.
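The estimator class analysed in the paper uses the score (gradient of the log-density) to build control variates whose expectation vanishes under the target, then solves a least-squares linear system to fit them to the test function. The following is a minimal one-dimensional sketch of this idea, not the paper's kernel-based construction: the target is assumed to be N(0,1) with score u(x) = -x, the basis functions g_j are monomials, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)   # i.i.d. samples from the (assumed) target N(0,1)
f = x**2                 # test function; true expectation under N(0,1) is 1

# Score of N(0,1): u(x) = d/dx log p(x) = -x
u = -x

# Stein features: (A g)(x) = g'(x) + u(x) g(x) has zero mean under p.
# Illustrative polynomial basis g_j(x) = x^j, j = 1..3.
Psi = np.column_stack([j * x**(j - 1) + u * x**j for j in range(1, 4)])

# Least-squares regression of f on [1, Psi]; since the Stein features have
# zero expectation, the fitted intercept estimates E[f].
A = np.column_stack([np.ones(n), Psi])
coef, *_ = np.linalg.lstsq(A, f, rcond=None)
est = coef[0]

print(f.mean(), est)
```

Here f(x) = x^2 lies in the span of the constant and the first Stein feature (1 - x^2), so the fit is exact and the estimator recovers the true expectation essentially without Monte Carlo error, illustrating the fast convergence the paper establishes for smooth low-dimensional problems.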

Citation


Chris J. Oates, Jon Cockayne, François-Xavier Briol, Mark Girolami. "Convergence rates for a class of estimators based on Stein’s method." Bernoulli 25(2): 1141-1159, May 2019. https://doi.org/10.3150/17-BEJ1016

Information

Received: 1 March 2017; Revised: 1 August 2017; Published: May 2019
First available in Project Euclid: 6 March 2019

zbMATH: 07049402
MathSciNet: MR3920368
Digital Object Identifier: 10.3150/17-BEJ1016

Keywords: asymptotics, control functionals, reproducing kernel, scattered data, variance reduction

Rights: Copyright © 2019 Bernoulli Society for Mathematical Statistics and Probability
