Open Access
Discrete Dynamic Programming with Unbounded Rewards
J. Michael Harrison
Ann. Math. Statist. 43(2): 636-644 (April, 1972). DOI: 10.1214/aoms/1177692643

Abstract

Markov decision processes with countable state and action spaces are investigated, the objective being to maximize expected discounted reward. Well-known results of Maitra and Blackwell are generalized, their assumption of bounded rewards being replaced by weaker conditions, the most important of which is as follows: the expected reward to be received at time $n + 1$ minus the actual reward received at time $n$, viewed as a function of the state at time $n$, the action taken at time $n$, and the decision rule to be followed at time $n + 1$, is bounded. It is shown that an $\varepsilon$-optimal stationary policy exists for every $\varepsilon > 0$, and that an optimal stationary policy exists when the action sets are finite.
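As a rough illustration of this condition (the notation below is assumed for exposition and is not taken from the paper): writing $r(s, a)$ for the reward earned in state $s$ under action $a$, $p(s' \mid s, a)$ for the transition probabilities, and $f$ for a decision rule mapping states to actions, the condition can be read as asking for a constant $B < \infty$ such that
$$\Bigl|\, \sum_{s'} p(s' \mid s, a)\, r\bigl(s', f(s')\bigr) \;-\; r(s, a) \,\Bigr| \;\le\; B \qquad \text{for all } s,\ a,\ f$$
(the paper's precise statement fixes the exact one- or two-sided form of the bound). The point is that only the one-step change in expected reward need be bounded, not $r$ itself: the expected reward at time $n$ then grows at most linearly in $n$, so the geometric discounting still yields a finite expected discounted total reward.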

Citation


J. Michael Harrison. "Discrete Dynamic Programming with Unbounded Rewards." Ann. Math. Statist. 43(2): 636–644, April, 1972. https://doi.org/10.1214/aoms/1177692643

Information

Published: April, 1972
First available in Project Euclid: 27 April 2007

zbMATH: 0262.90064
MathSciNet: MR354023
Digital Object Identifier: 10.1214/aoms/1177692643

Rights: Copyright © 1972 Institute of Mathematical Statistics
