Annals of Applied Probability
- Ann. Appl. Probab.
- Volume 16, Number 2 (2006), 730-756.
Average optimality for continuous-time Markov decision processes in Polish spaces
This paper studies average optimality for continuous-time Markov decision processes with fairly general (Polish) state and action spaces. The criterion to be maximized is the expected average reward. The transition rates of the underlying continuous-time jump Markov processes may be unbounded, and the reward rates may have neither upper nor lower bounds. We first establish two optimality inequalities with opposite directions and give suitable conditions under which solutions to both inequalities exist. From these two inequalities we then prove the existence of optimal (deterministic) stationary policies by using the Dynkin formula. Moreover, we present a “semimartingale characterization” of an optimal stationary policy. Finally, we use a generalized Potlatch process with control to illustrate the difference between our conditions and those in the previous literature, and we further apply our results to average optimal control problems for generalized birth–death systems, upwardly skip-free processes and two queueing systems. The approach developed in this paper differs slightly from the “optimality inequality approach” widely used in the previous literature.
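For orientation, a pair of average-optimality inequalities with opposite directions typically takes the following form in average-reward CTMDP theory. This is a sketch with assumed notation (state space S, admissible actions A(x), reward rate r(x,a), transition rates q(dy | x,a), optimal average reward g*, and relative value functions h₁, h₂); the paper's exact assumptions and statements differ.

```latex
% Typical paired average-optimality inequalities (illustrative notation,
% not verbatim from the paper):
g^* \;\le\; \sup_{a \in A(x)} \Big[\, r(x,a) + \int_S h_1(y)\, q(dy \mid x, a) \Big],
\qquad x \in S,
\\
g^* \;\ge\; \sup_{a \in A(x)} \Big[\, r(x,a) + \int_S h_2(y)\, q(dy \mid x, a) \Big],
\qquad x \in S.
```

A stationary policy f attaining the supremum in the second inequality is then a natural candidate for average optimality, which is where an argument via the Dynkin formula typically enters.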
First available in Project Euclid: 29 June 2006
Guo, Xianping; Rieder, Ulrich. Average optimality for continuous-time Markov decision processes in Polish spaces. Ann. Appl. Probab. 16 (2006), no. 2, 730--756. doi:10.1214/105051606000000105. https://projecteuclid.org/euclid.aoap/1151592249