December 2011
Sensitivity analysis in Markov decision processes with uncertain reward parameters
Chin Hon Tan, Joseph C. Hartman
J. Appl. Probab. 48(4): 954-967 (December 2011). DOI: 10.1239/jap/1324046012

Abstract

Sequential decision problems can often be modeled as Markov decision processes. Classical solution approaches assume that the parameters of the model are known. However, model parameters are usually estimated and uncertain in practice. As a result, managers are often interested in how estimation errors affect the optimal solution. In this paper we illustrate how sensitivity analysis can be performed directly for a Markov decision process with uncertain reward parameters using the Bellman equations. In particular, we consider problems involving (i) a single stationary parameter, (ii) multiple stationary parameters, and (iii) multiple nonstationary parameters. We illustrate the applicability of this work through a capacitated stochastic lot-sizing problem.
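To make the setting concrete, the sketch below sets up a toy two-state, two-action MDP in which one reward entry is an uncertain stationary parameter θ, then re-solves the Bellman equations for several values of θ to see where the optimal policy changes. This is only an illustrative brute-force sweep under hypothetical numbers, not the paper's method, which performs the sensitivity analysis directly from the Bellman equations rather than by repeated re-solution.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Solve a finite MDP by value iteration.

    P[a] is the |S| x |S| transition matrix of action a;
    R[a] is the |S|-vector of expected one-step rewards of action a.
    Returns the optimal value vector and a greedy optimal policy.
    """
    n = P[0].shape[0]
    V = np.zeros(n)
    while True:
        # Q[a, s] = R[a][s] + gamma * sum_{s'} P[a][s, s'] * V[s']
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Hypothetical transition matrices for a 2-state, 2-action MDP.
P = [np.array([[0.8, 0.2], [0.3, 0.7]]),   # action 0
     np.array([[0.1, 0.9], [0.6, 0.4]])]   # action 1

# theta is the uncertain reward of taking action 1 in state 0.
for theta in [0.0, 0.5, 1.0, 1.5, 2.0]:
    R = [np.array([1.0, 0.5]),             # action 0 rewards
         np.array([theta, 0.8])]           # action 1 rewards
    V, policy = value_iteration(P, R)
    print(f"theta={theta:4.1f}  V={np.round(V, 3)}  policy={policy}")
```

In this toy instance the optimal action in state 0 switches from action 0 to action 1 as θ grows, and the optimal value increases with θ; the question the paper addresses is how large an estimation error in θ can be before such a switch occurs.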

Citation


Chin Hon Tan and Joseph C. Hartman. "Sensitivity analysis in Markov decision processes with uncertain reward parameters." J. Appl. Probab. 48 (4), 954-967, December 2011. https://doi.org/10.1239/jap/1324046012

Information

Published: December 2011
First available in Project Euclid: 16 December 2011

zbMATH: 1231.90374
MathSciNet: MR2896661
Digital Object Identifier: 10.1239/jap/1324046012

Subjects:
Primary: 90C40
Secondary: 90C31 , 90C39

Keywords: dynamic programming , Markov decision process , sensitivity analysis

Rights: Copyright © 2011 Applied Probability Trust

JOURNAL ARTICLE
14 PAGES


Vol. 48 • No. 4 • December 2011