Large deviation principle for invariant distributions of memory gradient diffusions
Sébastien Gadat, Fabien Panloup, Clément Pellegrini
Electron. J. Probab. 18: 1-34 (2013). DOI: 10.1214/EJP.v18-2031

Abstract

In this paper, we consider a class of diffusion processes based on a memory gradient descent, i.e. whose drift term is built as the average, along the trajectory, of the gradient of a coercive function U. Under some classical assumptions on U, this type of diffusion is ergodic and admits a unique invariant distribution. With a view to applications in optimization, we want to understand the behaviour of the invariant distribution when the diffusion coefficient goes to 0. In the non-memory case, the invariant distribution is explicit, and the so-called Laplace method shows that a Large Deviation Principle (LDP) holds with an explicit rate function, which leads to a concentration of the invariant distribution around the global minima of U. Here, except in the linear case, we have no closed formula for the invariant distribution, but we show that an LDP can still be obtained. Then, in the one-dimensional case, we derive bounds for the rate function that lead to concentration around the global minimum under some assumptions on the second derivative of U.
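To fix ideas, the following display is a minimal sketch of the dynamics described above, assuming a plain (unweighted) time average of the past gradients and additive noise of size epsilon on the position; the paper works with more general averaging weights, so this formulation is illustrative rather than exact. Introducing the averaged gradient Y_t as an auxiliary variable makes the pair (X_t, Y_t) Markovian and makes the degeneracy of the noise (hence the "hypoelliptic diffusions" keyword) apparent:

% Position X_t driven by the running average Y_t of the gradient of U,
% with small diffusion coefficient \varepsilon (sketch under the
% assumption of a uniform average; the paper's kernel may differ):
\begin{align*}
  dX_t^{\varepsilon} &= -Y_t^{\varepsilon}\,dt + \varepsilon\,dB_t,
  \qquad Y_t^{\varepsilon} = \frac{1}{t}\int_0^t \nabla U(X_s^{\varepsilon})\,ds,\\
  dY_t^{\varepsilon} &= \frac{1}{t}\bigl(\nabla U(X_t^{\varepsilon}) - Y_t^{\varepsilon}\bigr)\,dt.
\end{align*}
% For comparison, in the non-memory case
% dX_t = -\nabla U(X_t)\,dt + \varepsilon\,dB_t, the invariant
% distribution is explicit,
%   \nu_{\varepsilon}(dx) \propto \exp\bigl(-2U(x)/\varepsilon^2\bigr)\,dx,
% and the Laplace method gives an LDP at speed \varepsilon^{-2} with rate
% function 2(U(x) - \min U), hence concentration on the global minima.

In the memory case no such closed formula is available, which is what makes the LDP established in the paper non-trivial.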

Citation


Sébastien Gadat, Fabien Panloup, Clément Pellegrini. "Large deviation principle for invariant distributions of memory gradient diffusions." Electron. J. Probab. 18: 1-34, 2013. https://doi.org/10.1214/EJP.v18-2031

Information

Accepted: 6 September 2013; Published: 2013
First available in Project Euclid: 4 June 2016

zbMATH: 1286.60033
MathSciNet: MR3109620
Digital Object Identifier: 10.1214/EJP.v18-2031

Subjects:
Primary: 60F10
Secondary: 35H10, 60G10, 60J60, 60K35, 93D30

Keywords: Freidlin and Wentzell theory, Hamilton-Jacobi equations, hypoelliptic diffusions, large deviation principle, small stochastic perturbations
