The Annals of Applied Probability
- Ann. Appl. Probab.
- Volume 21, Number 5 (2011), 2016-2049.
Discounted continuous-time constrained Markov decision processes in Polish spaces
This paper is devoted to studying constrained continuous-time Markov decision processes (MDPs) in the class of randomized policies depending on state histories. The transition rates may be unbounded, the reward and cost functions may be unbounded from above and from below, and the state and action spaces are Polish spaces. The optimality criterion to be maximized is the expected discounted reward, and the constraints are imposed on the expected discounted costs. First, we give conditions for the nonexplosion of the underlying processes and the finiteness of the expected discounted rewards/costs. Second, using a technique of occupation measures, we prove that the constrained optimality problem for continuous-time MDPs can be transformed into an equivalent optimization problem over a class of probability measures. Based on this equivalent problem and a so-called w̄-weak convergence of probability measures developed in this paper, we show the existence of a constrained optimal policy. Third, by providing a linear programming formulation of the equivalent problem, we show the solvability of constrained optimal policies. Finally, we use two computable examples to illustrate our main results.
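The occupation-measure and linear-programming ideas in the abstract can be illustrated on a toy finite, discrete-time analogue (the paper itself treats continuous-time MDPs on Polish spaces with unbounded rates; all numbers below are hypothetical). For a stationary randomized policy π, the state occupation measure solves μ = γ + αμP_π, and the state–action occupation measure η(x, a) = μ(x)π(a|x) satisfies the balance constraints of the LP formulation, with the discounted reward and cost recovered as linear functionals of η:

```python
import numpy as np

# Hypothetical 2-state, 2-action discounted MDP (discrete-time analogue).
alpha = 0.9                      # discount factor
gamma = np.array([1.0, 0.0])     # initial state distribution
# P[a][x, y]: probability of moving x -> y under action a
P = [np.array([[0.8, 0.2], [0.3, 0.7]]),
     np.array([[0.1, 0.9], [0.6, 0.4]])]
r = np.array([[1.0, 0.5], [0.2, 2.0]])   # reward r(x, a)
c = np.array([[0.0, 1.0], [1.0, 0.5]])   # cost   c(x, a)

# A randomized stationary policy pi(a|x)
pi = np.array([[0.5, 0.5], [1.0, 0.0]])

# Transition matrix induced by pi: P_pi[x, y] = sum_a pi(a|x) P[a][x, y]
P_pi = sum(pi[:, a][:, None] * P[a] for a in range(2))

# State occupation measure: mu solves mu = gamma + alpha * mu @ P_pi,
# i.e. mu = gamma @ inv(I - alpha * P_pi)
mu = np.linalg.solve((np.eye(2) - alpha * P_pi).T, gamma)

# State-action occupation measure eta(x, a) = mu(x) * pi(a|x)
eta = mu[:, None] * pi

# Discounted reward/cost are linear in eta -- the basis of the LP formulation
discounted_reward = (eta * r).sum()
discounted_cost = (eta * c).sum()
```

In the LP formulation, one maximizes the linear functional `(eta * r).sum()` over all nonnegative η satisfying the balance equations and the cost constraint `(eta * c).sum() <= kappa`, rather than fixing a policy first; the total mass of η is always 1/(1 − α).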
First available in Project Euclid: 25 October 2011
Guo, Xianping; Song, Xinyuan. Discounted continuous-time constrained Markov decision processes in Polish spaces. Ann. Appl. Probab. 21 (2011), no. 5, 2016--2049. doi:10.1214/10-AAP749. https://projecteuclid.org/euclid.aoap/1319576616