We consider the batch (off-line) policy learning problem in infinite-horizon Markov decision processes. Motivated by mobile health applications, we focus on learning a policy that maximizes the long-term average reward. We propose a doubly robust estimator of the average reward and show that it achieves semiparametric efficiency. Further, we develop an optimization algorithm to compute the optimal policy within a parameterized stochastic policy class. The performance of the estimated policy is measured by the difference between the optimal average reward attainable in the policy class and the average reward of the estimated policy, and we establish a finite-sample regret guarantee for this difference. The performance of the method is illustrated by simulation studies and by an analysis of a mobile health study promoting physical activity.
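For concreteness, the performance criterion described above can be sketched as follows; the notation ($\eta(\pi)$ for the average reward, $\Pi$ for the parameterized policy class, and $\hat\pi$ for the estimated policy) is an illustrative choice and may differ from the paper's own symbols.

% Illustrative notation (assumed for this sketch, not necessarily the paper's):
\[
  \eta(\pi) \;=\; \lim_{T \to \infty} \frac{1}{T}\,
    \mathbb{E}^{\pi}\!\Big[ \sum_{t=1}^{T} R_t \Big],
  \qquad
  \mathrm{Regret}(\hat\pi) \;=\; \sup_{\pi \in \Pi} \eta(\pi) \;-\; \eta(\hat\pi),
\]
where $R_t$ denotes the reward at decision time $t$ and the expectation is taken under the trajectory distribution induced by following policy $\pi$.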
Peng Liao was supported by NIH Grants P50DA039838, R01AA023187, and U01 CA229437.
Susan Murphy was supported by NIH Grants P50DA039838, R01AA023187, P50DA054039, P41EB028242, U01 CA229437, UG3DE028723, and UH3DE028723.
The first two authors contributed equally. The third author’s work started prior to joining Amazon. The authors would also like to thank two reviewers, the Associate Editor, and the Editor for their helpful comments and suggestions, which led to substantial improvement in the presentation.
"Batch policy learning in average reward Markov decision processes." Ann. Statist. 50 (6) 3364 - 3387, December 2022. https://doi.org/10.1214/22-AOS2231