Journal of Applied Probability
J. Appl. Probab., Volume 52, Number 2 (2015), 419-440.
Sample-path optimal stationary policies in stable Markov decision chains with the average reward criterion
This paper concerns discrete-time Markov decision chains with a denumerable state space and compact action sets. Besides standard continuity requirements, the main assumption on the model is that it admits a Lyapunov function ℓ. In this context the average reward criterion is analyzed from the sample-path point of view. The main conclusion is that if the expected average reward associated with ℓ² is finite under any policy, then a stationary policy obtained from the optimality equation in the standard way is sample-path average optimal in a strong sense.
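The paper itself works with denumerable state spaces under a Lyapunov stability condition; as an illustration only, the following sketch shows the standard route from the average-reward optimality equation to a stationary policy in the much simpler setting of a finite unichain MDP, via relative value iteration. All names, the toy transition/reward data, and the choice of algorithm are assumptions for this example, not the authors' construction.

```python
import numpy as np

def relative_value_iteration(P, r, ref=0, tol=1e-9, max_iter=1000):
    """Solve the average-reward optimality equation
        g + h(s) = max_a [ r(s,a) + sum_s' P(s'|s,a) h(s') ]
    for a finite unichain MDP by relative value iteration.

    P : array (n_s, n_a, n_s), transition probabilities P[s, a, s']
    r : array (n_s, n_a), one-step rewards
    Returns (gain g, bias h, stationary policy as argmax actions).
    """
    n_s, n_a = r.shape
    h = np.zeros(n_s)
    gain = 0.0
    for _ in range(max_iter):
        Q = r + P @ h              # Q[s, a] = r(s,a) + E[h(next state)]
        v = Q.max(axis=1)
        gain_new = v[ref]          # normalize at a reference state
        h_new = v - gain_new
        if np.max(np.abs(h_new - h)) < tol and abs(gain_new - gain) < tol:
            h, gain = h_new, gain_new
            break
        h, gain = h_new, gain_new
    policy = Q.argmax(axis=1)      # stationary policy from the optimality equation
    return gain, h, policy

# Toy 2-state, 2-action chain: action 0 stays put, action 1 switches state.
# Staying in state 1 pays 2, staying in state 0 pays 1, switching pays 0.
P = np.zeros((2, 2, 2))
P[0, 0, 0] = P[0, 1, 1] = P[1, 0, 1] = P[1, 1, 0] = 1.0
r = np.array([[1.0, 0.0],
              [2.0, 0.0]])

gain, h, policy = relative_value_iteration(P, r)
# Optimal stationary policy: switch in state 0, stay in state 1; gain = 2.
```

Under the unichain assumption the resulting stationary policy attains the optimal expected average reward; the paper's contribution is the stronger sample-path sense of optimality under the Lyapunov condition.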
First available in Project Euclid: 23 July 2015
Cavazos-Cadena, Rolando; Montes-de-Oca, Raúl; Sladký, Karel. Sample-path optimal stationary policies in stable Markov decision chains with the average reward criterion. J. Appl. Probab. 52 (2015), no. 2, 419--440. doi:10.1239/jap/1437658607. https://projecteuclid.org/euclid.jap/1437658607