Open Access
Asymptotic optimality of tracking policies in stochastic networks
Nicole Bäuerle
Ann. Appl. Probab. 10(4): 1065-1083 (November 2000). DOI: 10.1214/aoap/1019487606

Abstract

Control problems in stochastic queuing networks are hard to solve. However, it is well known that the fluid model provides a useful approximation to the stochastic network. We formulate a general class of control problems in stochastic queuing networks and consider the corresponding fluid optimization problem ($F$), which is a deterministic control problem and often easy to solve. In contrast to the previous literature, our cost rate function is rather general. The value function of ($F$) provides an asymptotic lower bound on the value function of the stochastic network under fluid scaling. Moreover, from the optimal control of ($F$) we can construct a so-called tracking policy for the stochastic queuing network which achieves this lower bound as the fluid scaling parameter tends to $\infty$. In this case we say that the tracking policy is asymptotically optimal. This statement holds for multiclass queuing networks and for admission and routing problems. Under some convexity assumptions the convergence is monotone. The tracking policy approach also shows that a given fluid model solution can be attained as a fluid limit of the original discrete model.
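To make the fluid-scaling idea in the abstract concrete, here is a minimal illustrative sketch (not from the paper, which treats general multiclass networks): for a single M/M/1 queue with arrival rate $\lambda$, service rate $\mu > \lambda$, and $n$ initial jobs, the scaled process $Q(nt)/n$ converges as $n \to \infty$ to the deterministic fluid trajectory $q(t) = \max(0,\, 1 + (\lambda - \mu)t)$. All function names and parameter values below are hypothetical choices for the demonstration.

```python
import random

def scaled_queue_at(n, t, lam=1.0, mu=2.0, seed=0):
    """Fluid-scaled queue length Q(n*t)/n of an M/M/1 queue
    started with Q(0) = n jobs (illustrative simulation only).

    The queue is simulated event by event: transitions occur at
    total rate lam + mu while the queue is nonempty, and each
    transition is an arrival with probability lam / (lam + mu).
    """
    rng = random.Random(seed)
    clock, q, horizon = 0.0, n, n * t  # fluid scaling: run to time n*t
    while True:
        rate = lam + (mu if q > 0 else 0.0)
        clock += rng.expovariate(rate)
        if clock >= horizon:
            break
        if rng.random() < lam / rate:
            q += 1          # arrival
        elif q > 0:
            q -= 1          # service completion
    return q / n

def fluid_limit(t, lam=1.0, mu=2.0):
    """Fluid trajectory q(t) = max(0, 1 + (lam - mu) t) with q(0) = 1."""
    return max(0.0, 1.0 + (lam - mu) * t)
```

For large $n$ the simulated value `scaled_queue_at(n, 0.5)` concentrates near `fluid_limit(0.5) = 0.5`, with fluctuations of order $1/\sqrt{n}$; this is the sense in which the deterministic fluid problem ($F$) approximates the stochastic network.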

Citation


Nicole Bäuerle. "Asymptotic optimality of tracking policies in stochastic networks." Ann. Appl. Probab. 10(4): 1065-1083, November 2000. https://doi.org/10.1214/aoap/1019487606

Information

Published: November 2000
First available in Project Euclid: 22 April 2002

zbMATH: 1057.90003
MathSciNet: MR1810864
Digital Object Identifier: 10.1214/aoap/1019487606

Subjects:
Primary: 60K25
Secondary: 68M20

Keywords: fluid model, Markov decision process, stochastic network, stochastic orderings, weak convergence

Rights: Copyright © 2000 Institute of Mathematical Statistics
