The Annals of Applied Statistics
- Ann. Appl. Stat.
- Volume 12, Number 3 (2018), 1914-1938.
Tree-based reinforcement learning for estimating optimal dynamic treatment regimes
Dynamic treatment regimes (DTRs) are sequences of treatment decision rules in which treatment is adapted over time in response to an individual's changing course. Motivated by a substance use disorder (SUD) study, we propose a tree-based reinforcement learning (T-RL) method to directly estimate optimal DTRs in a multi-stage, multi-treatment setting. At each stage, T-RL builds an unsupervised decision tree that directly handles the optimization problem with multiple treatment comparisons, through a purity measure constructed with augmented inverse probability weighted estimators. Across stages, the algorithm is applied recursively via backward induction. By combining semiparametric regression with flexible tree-based learning, T-RL is robust, efficient, and easy to interpret for the identification of optimal DTRs, as shown in simulation studies. With the proposed method, we identify dynamic SUD treatment regimes for adolescents.
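The following is a minimal illustrative sketch (not the authors' implementation, which is provided in Supplementary material B) of the single-stage augmented inverse probability weighted (AIPW) pseudo-outcomes that feed the purity measure described in the abstract. The function name aipw_pseudo_outcomes, the simulated data, and the closing purity summary are hypothetical, assuming a larger outcome is better; it uses only base R plus the nnet package for the multinomial propensity model.

library(nnet)  # multinom() for the multinomial propensity model

set.seed(1)
n <- 500
X <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
A <- sample(0:2, n, replace = TRUE)                        # three treatment options
Y <- 1 + X$x1 - 0.5 * X$x2 + (A == (X$x1 > 0)) + rnorm(n)  # larger Y is better

aipw_pseudo_outcomes <- function(Y, A, X) {
  trts <- sort(unique(A))
  dat  <- data.frame(Y = Y, A = factor(A), X)
  # Propensity model: P(A = a | X) via multinomial logistic regression
  ps_fit <- multinom(A ~ . - Y, data = dat, trace = FALSE)
  ps <- predict(ps_fit, newdata = dat, type = "probs")
  if (is.null(dim(ps))) ps <- cbind(1 - ps, ps)            # two-treatment case
  # Outcome regressions: one linear model fit within each treatment arm
  mu <- sapply(trts, function(a) {
    fit <- lm(Y ~ . - A, data = dat[A == a, ])
    predict(fit, newdata = dat)
  })
  # Doubly robust AIPW estimate of each subject's outcome under each treatment
  po <- sapply(seq_along(trts), function(j) {
    (A == trts[j]) / ps[, j] * (Y - mu[, j]) + mu[, j]
  })
  colnames(po) <- paste0("trt", trts)
  po
}

po <- aipw_pseudo_outcomes(Y, A, X)
head(po)
# In T-RL, a candidate split is scored with such pseudo-outcomes: each child
# node is credited with the mean pseudo-outcome of its best single treatment,
# and the split maximizing overall purity is chosen recursively.
colMeans(po)  # estimated mean outcome if everyone received each treatment

In the multi-stage case, the same construction would be applied recursively by backward induction, with the stage-specific outcome replaced by a pseudo-outcome reflecting optimal future treatment.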
Received: October 2016
Revised: August 2017
First available in Project Euclid: 11 September 2018
Tao, Yebin; Wang, Lu; Almirall, Daniel. Tree-based reinforcement learning for estimating optimal dynamic treatment regimes. Ann. Appl. Stat. 12 (2018), no. 3, 1914--1938. doi:10.1214/18-AOAS1137. https://projecteuclid.org/euclid.aoas/1536652980
- Supplementary material A for article “Tree-based reinforcement learning for estimating optimal dynamic treatment regimes”. Additional simulation results for the proposed method and competing methods.
- Supplementary material B for article “Tree-based reinforcement learning for estimating optimal dynamic treatment regimes”. R code and sample data to implement the proposed method.