Advances in Applied Probability
- Volume 45, Number 3 (2013), 837-859.
The expected total cost criterion for Markov decision processes under constraints
In this work, we study discrete-time Markov decision processes (MDPs) with constraints in which all the objectives take the same form: an expected total cost over the infinite time horizon. Our aim is to analyze this problem via the linear programming approach. Under some technical hypotheses, it is shown that if the associated linear program admits an optimal solution, then there exists a randomized stationary policy which is optimal for the MDP, and the optimal value of the linear program coincides with the optimal value of the constrained control problem. A second important result states that the set of randomized stationary policies is a sufficient class of policies for solving this MDP. It is important to note that, in contrast with the classical results in the literature, we do not assume the MDP to be transient or absorbing. More importantly, we do not require the cost functions to be non-negative or bounded below. Several examples are presented to illustrate our results.
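The linear programming approach described above can be illustrated on a small finite model. The sketch below is not the paper's construction: it uses a toy *absorbing* MDP with non-negative costs (assumptions the paper explicitly avoids) simply because the occupation-measure LP is then easy to state. The variables are the occupation measures mu(x, a); the objective and the constraint are linear in mu, and a randomized stationary policy is recovered by normalizing mu over actions. All numbers (transition probabilities, costs, the budget) are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy absorbing MDP (a simplifying assumption; the paper does NOT require
# the model to be absorbing or the costs to be non-negative).
# Transient states 0, 1; a third, absorbing state is cost-free and omitted.
S_t = 2                       # number of transient states
A = 2                         # actions per state
nu = np.array([1.0, 0.0])     # initial distribution on transient states

# P[x, a, y]: probability of moving to transient state y from (x, a);
# the remaining mass goes to the absorbing state.
P = np.zeros((S_t, A, S_t))
P[0, 0] = [0.0, 0.8]
P[0, 1] = [0.0, 0.3]
P[1, 0] = [0.5, 0.0]
P[1, 1] = [0.1, 0.0]

c = np.array([[2.0, 1.0], [1.0, 3.0]])   # objective cost c(x, a)
d = np.array([[0.0, 2.0], [1.0, 0.0]])   # constraint cost d(x, a)
budget = 2.5                             # require E[total d-cost] <= budget

# Decision variables: occupation measures mu(x, a), flattened row-major.
# Characteristic (flow) equations:
#   sum_a mu(y, a) - sum_{x, a} P(x, a, y) mu(x, a) = nu(y)  for each y.
n = S_t * A
A_eq = np.zeros((S_t, n))
for y in range(S_t):
    for x in range(S_t):
        for a in range(A):
            A_eq[y, x * A + a] = (1.0 if x == y else 0.0) - P[x, a, y]

res = linprog(c.ravel(), A_ub=[d.ravel()], b_ub=[budget],
              A_eq=A_eq, b_eq=nu, bounds=[(0, None)] * n)

mu = res.x.reshape(S_t, A)
# Recover a randomized stationary policy: pi(a | x) proportional to mu(x, a).
pi = mu / np.maximum(mu.sum(axis=1, keepdims=True), 1e-12)
print("optimal expected total cost:", res.fun)
print("policy pi(a|x):\n", pi)
```

In this toy instance the constraint is active at the optimum, so the LP solution mixes two actions in one state: this is exactly the situation in which a *randomized* stationary policy is needed, as in the sufficiency result stated in the abstract.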
First available in Project Euclid: 30 August 2013
Primary: 90C40: Markov and semi-Markov decision processes
Secondary: 60J10: Markov chains (discrete-time Markov processes on discrete state spaces) 90C90: Applications of mathematical programming
Dufour, François; Piunovskiy, A. B. The expected total cost criterion for Markov decision processes under constraints. Adv. in Appl. Probab. 45 (2013), no. 3, 837--859. doi:10.1239/aap/1377868541. https://projecteuclid.org/euclid.aap/1377868541