Abstract
In this paper, we study the non-asymptotic and asymptotic performance of the optimal robust policy and value function of robust Markov decision processes (MDPs), where the optimal robust policy and value function are estimated from a generative model. While prior work on the non-asymptotic performance of robust MDPs is restricted to the setting of the KL uncertainty set and the (s, a)-rectangular assumption, we improve their results and also consider other uncertainty sets, including the L1 and χ² balls. Our results show that when the uncertainty sets are (s, a)-rectangular, the sample complexity is about Õ(|S|²|A|/ε²). In addition, we extend our results from the (s, a)-rectangular assumption to the s-rectangular assumption. In this scenario, the sample complexity varies with the choice of uncertainty set and is generally larger than in the (s, a)-rectangular case. Moreover, we show that the optimal robust value function is asymptotically normal at the typical √n rate under both the (s, a)-rectangular and s-rectangular assumptions, from both theoretical and empirical perspectives.
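To make the setting concrete, the sketch below runs robust value iteration for a small tabular MDP under an (s, a)-rectangular L1 uncertainty ball around a nominal transition kernel. This is only an illustrative toy, not the paper's estimator or notation: the function names, the greedy inner minimization, and the toy parameters are our own assumptions.

```python
import numpy as np

def worst_case_value(p0, v, rho):
    """Inner minimization for an (s,a)-rectangular L1 ball:
    min_p p.v  s.t.  ||p - p0||_1 <= rho and p is a distribution.
    Greedy solution: shift up to rho/2 probability mass from the
    highest-value states onto the lowest-value state."""
    p = p0.copy()
    budget = rho / 2.0
    lo = np.argmin(v)  # mass receiver: the worst next state
    for i in np.argsort(v)[::-1]:  # donors: highest value first
        if i == lo or budget <= 0:
            continue
        move = min(p[i], budget)
        p[i] -= move
        p[lo] += move
        budget -= move
    return p @ v

def robust_value_iteration(P, R, gamma, rho, iters=200):
    """Robust Bellman iteration on the nominal kernel P (S x A x S)
    with rewards R (S x A); rho = 0 recovers standard value iteration."""
    S, A = R.shape
    v = np.zeros(S)
    for _ in range(iters):
        q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                q[s, a] = R[s, a] + gamma * worst_case_value(P[s, a], v, rho)
        v = q.max(axis=1)
    return v
```

Because the adversary can only lower the continuation value, the robust value function is pointwise below the nominal one, which is a quick sanity check for any such implementation.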
Funding Statement
This work has been supported by the National Key Research and Development Project of China (No. 2018AAA0101004).
Acknowledgments
The authors would like to thank the anonymous referees, the Associate Editor and the Editor for their detailed and constructive comments, which improved the quality of this paper. The authors would also like to thank Xiang Li and Dachao Lin for discussions related to DRO and some inequalities.
Citation
Wenhao Yang, Liangyu Zhang, Zhihua Zhang. "Toward theoretical understandings of robust Markov decision processes: Sample complexity and asymptotics." Ann. Statist. 50 (6): 3223-3248, December 2022. https://doi.org/10.1214/22-AOS2225