Open Access | December 2022
Batch policy learning in average reward Markov decision processes
Peng Liao, Zhengling Qi, Runzhe Wan, Predrag Klasnja, Susan A. Murphy
Ann. Statist. 50(6): 3364-3387 (December 2022). DOI: 10.1214/22-AOS2231


We consider the batch (off-line) policy learning problem in infinite-horizon Markov decision processes. Motivated by mobile health applications, we focus on learning a policy that maximizes the long-term average reward. We propose a doubly robust estimator of the average reward and show that it achieves semiparametric efficiency. We further develop an optimization algorithm to compute the optimal policy within a parameterized stochastic policy class. The performance of the estimated policy is measured by the gap between the optimal average reward attainable in the policy class and the average reward of the estimated policy, and we establish a finite-sample regret guarantee. The method is illustrated by simulation studies and by an analysis of a mobile health study promoting physical activity.
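To give a sense of the doubly robust construction described above, the sketch below computes a bias-corrected average-reward estimate from batch data. It is a schematic illustration, not the authors' exact estimator: the inputs `omega` (estimated stationary density ratios), `q_sa` (estimated relative Q-values), `v_next` (policy-averaged Q-values at the next state), and `eta_plugin` (a plug-in average-reward estimate) are hypothetical names standing in for nuisance estimates that would be fit separately.

```python
import numpy as np

def dr_average_reward(omega, rewards, q_sa, v_next, eta_plugin):
    """Schematic doubly robust estimate of the long-term average reward.

    omega      : density-ratio estimates w(S_i, A_i) for the target policy
    rewards    : observed rewards R_i
    q_sa       : relative Q-value estimates Q(S_i, A_i)
    v_next     : policy-averaged Q at the next state,
                 sum_a pi(a | S_{i+1}) Q(S_{i+1}, a)
    eta_plugin : plug-in estimate of the average reward

    The augmentation term omega * (R - eta + V' - Q) is the weighted
    Bellman-error residual; it has mean zero when either the density-ratio
    model or the Q-function model is correctly specified, which is the
    source of the double robustness.
    """
    correction = omega * (rewards - eta_plugin + v_next - q_sa)
    return eta_plugin + np.mean(correction)

# Illustrative sanity check: with unit weights and a Q-model whose
# Bellman terms cancel, the estimator reduces to the empirical mean reward.
est = dr_average_reward(
    omega=np.ones(3),
    rewards=np.array([1.0, 2.0, 3.0]),
    q_sa=np.zeros(3),
    v_next=np.zeros(3),
    eta_plugin=0.0,
)
```

In practice the nuisance estimates would be obtained from the batch data (e.g. by function approximation for Q and for the density ratio), and the same correction structure is what yields semiparametric efficiency when both models converge fast enough.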

Funding Statement

Peng Liao was supported by NIH Grants P50DA039838, R01AA023187, and U01 CA229437.
Susan Murphy was supported by NIH Grants P50DA039838, R01AA023187, P50DA054039, P41EB028242, U01 CA229437, UG3DE028723, and UH3DE028723.


The first two authors contributed equally. The third author’s work started prior to joining Amazon. The authors would also like to thank two reviewers, the Associate Editor and the Editor for helpful comments and suggestions that led to substantial improvement in the presentation.




Received: 1 November 2021; Revised: 1 June 2022; Published: December 2022
First available in Project Euclid: 21 December 2022

MathSciNet: MR4524500
zbMATH: 07641129
Digital Object Identifier: 10.1214/22-AOS2231

Primary: 62G05

Keywords: Average reward, doubly robust estimator, Markov decision process, policy optimization

Rights: Copyright © 2022 Institute of Mathematical Statistics
