Abstract
A fluid approximation gives the main term in the asymptotic expansion of the value function of a controllable stochastic network. Policies whose value functions have the same asymptotics as that of the optimal policy are called asymptotically optimal. We consider the problem of selecting, from this set of asymptotically optimal policies, a best one in the sense that the next term of its asymptotic expansion is minimal. The analysis of this problem is closely connected with large-deviations problems for random walks.
Citation
Alexander Gajrat, Arie Hordijk, Ad Ridder. "Large-deviations analysis of the fluid approximation for a controllable tandem queue." Ann. Appl. Probab. 13 (4): 1423-1448, November 2003. https://doi.org/10.1214/aoap/1069786504