Advances in Applied Probability
- Adv. in Appl. Probab.
- Volume 47, Number 1 (2015), 106-127.
Impulsive control for continuous-time Markov decision processes
In this paper our objective is to study continuous-time Markov decision processes on a general Borel state space with both impulsive and continuous controls for the infinite-horizon discounted cost. The continuous-time controlled process is shown to be nonexplosive under appropriate hypotheses. The so-called Bellman equation associated with this control problem is studied. Sufficient conditions ensuring the existence and uniqueness of a bounded measurable solution to this optimality equation are provided. Moreover, it is shown that the value function of the optimization problem under consideration satisfies this optimality equation. Sufficient conditions are also presented to ensure, on the one hand, the existence of an optimal control strategy and, on the other hand, the existence of an ε-optimal control strategy. A decomposition of the state space into two disjoint subsets is exhibited where, roughly speaking, one should apply a gradual action on one subset and an impulsive action on the other in order to obtain an optimal or ε-optimal strategy. An interesting consequence of our previous results is as follows: the set of strategies that allow interventions at time t = 0 and only immediately after natural jumps is a sufficient set for the control problem under consideration.
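To illustrate the structure described above, the following is a minimal finite-state sketch (not the paper's general Borel setting) of a Bellman equation combining a gradual control with an impulsive one. All states, transition probabilities, costs, the discount rate, and the impulse target below are illustrative assumptions, not taken from the paper; the point is only how iterating the optimality operator splits the state space into a "gradual" region and an "impulse" region.

```python
import numpy as np

ALPHA = 0.5          # discount rate (illustrative)
RATE = 1.0           # uniform jump rate of the gradual dynamics (illustrative)
STATES = range(4)

# Gradual dynamics: embedded transition kernel and running cost (made up).
P = np.array([[0.0, 0.7, 0.2, 0.1],
              [0.1, 0.0, 0.8, 0.1],
              [0.2, 0.3, 0.0, 0.5],
              [0.4, 0.4, 0.2, 0.0]])
running_cost = np.array([1.0, 4.0, 6.0, 9.0])

# Impulsive action: instantly reset the process to a target state at a
# fixed intervention cost (both made up for this toy example).
IMPULSE_COST = 5.0
IMPULSE_TARGET = 0

def gradual_value(V):
    # Discounted value of continuing with the gradual control:
    # V_grad(x) = (c(x) + RATE * sum_y P[x, y] V(y)) / (ALPHA + RATE)
    return (running_cost + RATE * P @ V) / (ALPHA + RATE)

def bellman_step(V):
    # Optimality operator: at each state, take the cheaper of continuing
    # gradually or paying for an immediate impulse to the target state.
    V_imp = IMPULSE_COST + V[IMPULSE_TARGET]
    return np.minimum(gradual_value(V), V_imp)

# Iterate the operator to (numerically) approach its fixed point.
V = np.zeros(len(STATES))
for _ in range(500):
    V = bellman_step(V)

# The resulting strategy decomposes the state space into two disjoint
# subsets: impulse where intervening is at least as cheap as continuing.
V_imp = IMPULSE_COST + V[IMPULSE_TARGET]
impulse_region = [x for x in STATES if V_imp <= gradual_value(V)[x]]
gradual_region = [x for x in STATES if x not in impulse_region]
print("value function:", np.round(V, 3))
print("impulse region:", impulse_region)
print("gradual region:", gradual_region)
```

In this toy version the operator is a contraction through its gradual part, so the iteration stabilizes, and the impulse region is exactly the set of states where the minimum in the Bellman equation is attained by the intervention term.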
First available in Project Euclid: 31 March 2015
Primary: 90C40: Markov and semi-Markov decision processes
Secondary: 60J25: Continuous-time Markov processes on general state spaces
Dufour, François; Piunovskiy, Alexei B. Impulsive control for continuous-time Markov decision processes. Adv. in Appl. Probab. 47 (2015), no. 1, 106--127. doi:10.1239/aap/1427814583. https://projecteuclid.org/euclid.aap/1427814583