The expected total cost criterion for Markov decision processes under constraints
François Dufour, A. B. Piunovskiy
Adv. in Appl. Probab. 45(3): 837-859 (September 2013). DOI: 10.1239/aap/1377868541


In this work, we study discrete-time Markov decision processes (MDPs) with constraints, where all the objectives take the same form of an expected total cost over the infinite time horizon. We analyze this problem using the linear programming approach. Under some technical hypotheses, it is shown that if the associated linear program admits an optimal solution, then there exists a randomized stationary policy which is optimal for the MDP, and the optimal value of the linear program coincides with the optimal value of the constrained control problem. A second important result states that the set of randomized stationary policies is a sufficient class of policies for solving this MDP. It is important to note that, in contrast with the classical results in the literature, we do not assume the MDP to be transient or absorbing. More importantly, we do not require the cost functions to be nonnegative or bounded below. Several examples are presented to illustrate our results.
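The occupation-measure linear program the abstract refers to can be illustrated on a toy example. The sketch below is a deliberately simple absorbing model with made-up states, costs, and constraint bound (note that a key point of the paper is precisely that absorption and lower-bounded costs need not be assumed in general); it shows how the LP variables are the occupation measures and how an optimal randomized stationary policy is read off the LP solution.

```python
# Toy constrained MDP solved via the occupation-measure linear program.
# States: s0 (initial), s1 (transient), Delta (absorbing, zero cost).
# Actions at s0: a -> s1 (cost 1, constraint cost 2); b -> Delta (cost 4, constraint cost 0).
# Action at s1:  a -> Delta (cost 0).
# All transitions, costs, and the bound kappa = 1 are hypothetical, for illustration only.
from scipy.optimize import linprog

# Decision variables: occupation measures mu = (mu(s0,a), mu(s0,b), mu(s1,a)).
c = [1.0, 4.0, 0.0]              # expected total cost to be minimized

# Characteristic (flow) equations: out-flow minus in-flow equals the initial mass.
A_eq = [[1.0, 1.0, 0.0],         # s0: mu(s0,a) + mu(s0,b) = 1   (initial state)
        [-1.0, 0.0, 1.0]]        # s1: mu(s1,a) - mu(s0,a) = 0
b_eq = [1.0, 0.0]

# Constraint functional: 2 * mu(s0,a) <= kappa = 1.
A_ub = [[2.0, 0.0, 0.0]]
b_ub = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)  # mu >= 0 by default
mu = res.x

# Randomized stationary policy at s0, read off the optimal occupation measure:
p_a = mu[0] / (mu[0] + mu[1])    # probability of playing action a in state s0
print(res.fun, p_a)
```

Without the constraint the LP would put all mass on action a at s0 (cost 1); the constraint caps mu(s0,a) at 1/2, so the optimal policy must randomize between a and b, which is exactly why randomized stationary policies form the natural sufficient class in the constrained setting.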




Published: September 2013
First available in Project Euclid: 30 August 2013

zbMATH: 1298.90126
MathSciNet: MR3102474
Digital Object Identifier: 10.1239/aap/1377868541

Primary: 90C40
Secondary: 60J10, 90C90

Keywords: constraints, expected total cost criterion, linear programming, Markov decision process, occupation measure

Rights: Copyright © 2013 Applied Probability Trust


