Comment on “Dynamic treatment regimes: Technical challenges and applications”

Abstract: Inference for parameters associated with optimal dynamic treatment regimes is challenging because the corresponding estimators are nonregular when there are non-responders to treatment. In this discussion, we comment on three aspects of alleviating this nonregularity. We first discuss an alternative approach that smooths the quality functions. We then provide further details on our existing work on identifying non-responders through penalization. Third, we propose a clinically meaningful value assessment whose estimator does not suffer from nonregularity.


Introduction
The authors are to be congratulated for their excellent and thoughtful paper on statistical inference for dynamic treatment regimes. They have addressed several important and long-standing issues in this area. As discussed by the authors, nonsmoothness of the problem in some of the parameters of interest leads to estimators that are not smooth in the data, which in turn makes inference for these parameters challenging. In the following, we comment on a few additional strategies to alleviate the resulting nonregularity due to nonsmoothness. First, we discuss replacing the nonsmooth objective functions via a SoftMax Q-learning approach, which directly addresses the trade-off between bias and variance of the maximum operation in the local asymptotic framework. Proofs are given in the Appendix.
Nonregularity of the estimators for the parameters associated with the optimal treatment regimes is mainly due to the existence of non-responders to treatment. It would therefore be useful and important to identify these non-responders. In the second part, we review our existing work on non-responder identification via penalization. We also discuss how this penalization can alleviate, although not solve, some regularity issues.
For the third and final aspect, we note that in some public health settings the individual parameters of the dynamic treatment regime are less important than the value function, which reflects the overall population impact of the estimated regime and is perhaps the most important quantity for public health policy. We propose a truncated value function that focuses only on those subjects who are expected to have large treatment effects. We argue that this alternative value function is clinically meaningful and does not suffer from nonregularity.

SoftMax Q-learning
In this section we study the effect of replacing the max operator with a smoother version in the two-stage Q-learning algorithm discussed by Laber et al. We show that this smoothing can reduce the bias and that the bias can be controlled under local alternatives. The proposed SoftMax approach also sheds light on the bias/variance trade-off obtained by over- or under-smoothing. In what follows, we briefly describe the SoftMax Q-learning algorithm and then present some theoretical and simulation results.

Proposed algorithm
Consider the Q-learning algorithm discussed by Laber et al. in Section 2. In step 2 of the algorithm, the stage outcome is predicted by the pseudo-outcome Ỹ = Y + H'_{2,0} β̂_{2,0} + max(H'_{2,1} β̂_{2,1}, 0). We propose replacing Ỹ with a SoftMax version of it. Define the SoftMax function by SoftMax(x, α) = α⁻¹ log(1 + e^{αx}) (see Fig. 1), and let Ỹ^α = Y + H'_{2,0} β̂_{2,0} + SoftMax(H'_{2,1} β̂_{2,1}, α). The estimator β̂_1 of β*_1 is given by β̂_1 = Σ̂_1⁻¹ P_n B_1 Ỹ^α. We note that the algorithm discussed by Laber et al. is obtained as the limit, as α goes to infinity, of the SoftMax Q-learning algorithm discussed here.
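A minimal numerical sketch of the smoothing we have in mind, assuming the log-sum-exp form α⁻¹ log(1 + e^{αx}) for the SoftMax of max(x, 0) (the form matching the Hessian term in the Appendix); the pseudo-outcome line is schematic, with hypothetical names for the fitted second-stage quantities:

```python
import numpy as np

def softmax_max(x, alpha):
    """Smooth surrogate for max(x, 0): (1/alpha) * log(1 + exp(alpha * x)).
    Pointwise, softmax_max(x, alpha) -> max(x, 0) as alpha -> infinity."""
    # np.logaddexp(0, alpha * x) evaluates log(1 + exp(alpha * x)) stably
    return np.logaddexp(0.0, alpha * x) / alpha

# Schematic step 2 of Q-learning with the hard max replaced by SoftMax;
# H20, H21, beta20, beta21 are hypothetical stand-ins for the second-stage
# design matrices and fitted coefficients:
#   Y_soft = Y + H20 @ beta20 + softmax_max(H21 @ beta21, alpha)
```

As α grows, the surrogate approaches the hard max pointwise, so the original algorithm is recovered in the limit.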

Theory
In the following we briefly discuss the asymptotic properties of β̂_1. We first discuss the limiting distribution of √n(β̂_1 − β*_1), then this limiting distribution under local alternatives, and finally the asymptotic bias. The proofs appear in the Appendix.

Theorem 1. Assume (A1)–(A2) from Laber et al., and let …

For local alternatives the limiting distribution is given below.

Theorem 2. Assume (A1)–(A3) from Laber et al., and let …, where …

The bound on the bias, scaled by √n, under both standard and local-alternative asymptotics, is given below.

Corollary 1. Let Bias(β̂_1, c) and Bias(β̂_1, c, s) be defined as in Laber et al., and assume (A1)–(A2) from Laber et al. Then … When (A3) from Laber et al. also holds, then …

The above results show that by choosing the scale of α, the bias can be controlled. Theorem 2 shows that this control of the bias directly influences the variance, at least under local alternatives.
For inference, we need to consider two different settings. When α is held fixed as n goes to infinity, standard inference for the parameters is valid, since the problem becomes regular. However, this comes at the price that the bias does not vanish even asymptotically (see also the discussion in Section 4). As proved in Theorem 2, when α goes to infinity with n, the problem is nonregular. Thus, adaptive confidence intervals, such as the one suggested by Laber et al., are needed in order to perform valid inference.

Simulations for SoftMax
We compare the small-sample behaviour of SoftMax to that of soft-thresholding using the example setting discussed in Section 3 of Laber et al. Let θ* = max(μ*_0, μ*_1). The max estimator is defined by θ̂ = max(μ̂_0, μ̂_1). A soft-thresholding estimator, with tuning parameter σ, is defined by …
Finally, the SoftMax estimator is defined by θ̂ = SoftMax(μ̂_0, μ̂_1, α). Let Y | A = a ∼ N(μ_a, 1), a = 0, 1, and assume that the treatment assignment is perfectly balanced. We use 1000 Monte Carlo replicates to estimate the bias for each parameter setting. Figure 2 below shows the bias as a function of the treatment effect μ*_1 − μ*_0, with tuning parameters σ ∈ [0, 5] and α ∈ [1, 6] for the soft-thresholding and SoftMax estimators, respectively. The SoftMax estimator does not appear to suffer from large bias at points away from μ*_1 − μ*_0 = 0. Also, as expected from Theorem 1, the bias decreases as α increases.
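A minimal sketch of this Monte Carlo exercise, assuming the two-argument log-sum-exp form SoftMax(x, y, α) = α⁻¹ log(e^{αx} + e^{αy}) and simulating the two arm means directly (n = 10 split evenly, as in the figure):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_pair(m0, m1, alpha):
    # two-argument log-sum-exp smoothing of max(m0, m1)
    return np.logaddexp(alpha * m0, alpha * m1) / alpha

def mc_bias(mu0, mu1, alpha, n=10, reps=1000):
    """Monte Carlo estimate of the bias of the SoftMax estimator of
    max(mu0, mu1) when each arm receives n/2 unit-variance observations."""
    theta = max(mu0, mu1)
    m0 = mu0 + rng.standard_normal(reps) / np.sqrt(n / 2)
    m1 = mu1 + rng.standard_normal(reps) / np.sqrt(n / 2)
    return np.mean(softmax_pair(m0, m1, alpha)) - theta

# At a zero treatment effect (the worst case), the bias is positive and
# shrinks as alpha grows, consistent with the pattern in Figure 2.
b_small_alpha = mc_bias(0.0, 0.0, alpha=1.0)
b_large_alpha = mc_bias(0.0, 0.0, alpha=6.0)
```

The exact bias values depend on the seed, but the ordering (smaller bias at larger α near a null effect) is what Figure 2 displays.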

Penalized and adaptive Q-learning
In penalized Q-learning (Song et al., 2011) and adaptive Q-learning (Goldberg et al., 2013), penalties are imposed on the term H'_{2,1} β_{2,1} for each individual. This use of penalized estimation allows us to simultaneously estimate the second-stage parameters and select individuals whose value functions are not affected by treatment, i.e., those individuals whose true values of H'_{2,1} β*_{2,1} are zero. Although the penalized method does not solve the nonregularity issue in estimating the β's, our numerical studies have demonstrated that penalized Q-learning not only reduces bias but also provides better coverage of confidence intervals in a number of scenarios, compared with the hard-thresholding method of Moodie and Richardson (2010) and some soft-thresholding methods, including resampling approaches. Furthermore, the inference approach for penalized methods described in Zhang and Zhang (2014) appears to be able to handle diverging model perturbations. Finally, a nice feature of our penalized learning is that it enables us to identify non-responders, who may also have small treatment benefits even under a local alternative. Since it is clinically and practically most useful to target groups whose treatment benefit is large, identifying subjects with small treatment benefits is useful for better allocation of resources and for reducing costs.
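The selection idea can be illustrated with a deliberately simplified thresholding step; this is a schematic stand-in for the penalized estimation of Song et al. (2011), not their estimator, and `flag_nonresponders` and `lam` are hypothetical names:

```python
import numpy as np

def flag_nonresponders(H21, beta21, lam):
    """Schematic illustration: individuals whose estimated second-stage
    treatment contrast H21 @ beta21 is shrunk to exactly zero are flagged
    as (approximate) non-responders. lam is a tuning parameter playing the
    role of the penalty level."""
    contrast = H21 @ beta21
    # soft-threshold each individual's contrast at level lam
    shrunk = np.sign(contrast) * np.maximum(np.abs(contrast) - lam, 0.0)
    return shrunk == 0.0, shrunk
```

In the actual penalized Q-learning procedure the shrinkage happens jointly with estimation of the second-stage parameters; the point here is only that individual-level penalties produce exact zeros, which is what identifies the non-responders.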

Truncated value function
The nonregularity issue arises primarily in settings where some subjects do not respond to treatment at the second stage and where inference focuses on effect size. In the context of public health policy, we think that (i) the overall benefit (value) may be of greater interest than individual effect sizes, and (ii) subjects who are not sensitive to treatment (approximate non-responders) should not have a large impact on the overall decision-making process. Thus, we propose an appropriate alternative criterion, namely the ε-truncated value V_ε(d_1, d_2), for evaluating the optimal policy as follows: … where δ(X_1) and δ(X_2) denote the expected treatment effects at the first and second stages, respectively. Here, ε is a small constant indicating a clinically meaningful effect size.
Under a SMART trial with randomization probabilities π_k at stage k (k = 1, 2), this truncated value is equal to … Compared with the usual value function, V_ε(d_1, d_2) differs by at most O(ε). Using the Q-learning model, the above value function for the estimated rule is … One advantage of considering this value function is that nonregularity is no longer an issue, since the non-responders have been excluded from the statistic. One can easily show that √n(V̂_ε(d̂_1, d̂_2) − V_ε(d_1, d_2)) converges to the same normal distribution under local alternatives whether or not P(β'_2 X_2 = 0) > 0.
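One plausible plug-in form of this estimator, under the assumption that the truncation enters as an indicator on the estimated stage-wise effects inside the average (the function and variable names here are hypothetical, and the inverse-probability weighting from the SMART randomization is assumed to be folded into `Y_w` already):

```python
import numpy as np

def truncated_value(Y_w, delta1, delta2, eps):
    """Schematic eps-truncated value: average the weighted outcome only over
    subjects whose estimated stage-wise treatment effects delta1 and delta2
    both exceed the clinically meaningful threshold eps in absolute value."""
    keep = (np.abs(delta1) > eps) & (np.abs(delta2) > eps)
    # subjects with near-null effects contribute zero, so the statistic
    # never averages over the non-responder boundary
    return np.mean(Y_w * keep)
```

Because the approximate non-responders are excluded by the indicator, the resulting statistic is a smooth average and standard normal-based inference applies.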

Concluding remarks
We again thank the authors for their very interesting work, which will likely stimulate additional research on this important topic. It is clear that many fundamental and unresolved computational, methodological, and theoretical challenges remain, which will benefit from diverse problem-solving approaches. We look forward to seeing this intriguing research area continue to develop.

Appendix: Proofs
Sketch of proof of Theorem 1. Using the same arguments that lead to Eq. 2 in Laber et al., we have …, where S_n is smooth and asymptotically normal, and … Note that …, where the last equality follows by taking derivatives, and where … For the remainder term, note that Lemma B.6 shows the consistency of β̂_{2,1}, and that the expectation of the Hessian of α_n⁻¹ log{1 + e^{α_n H'β}} is bounded by Assumption (A1). Hence, applying Lemma B.5 to the matrix Σ̂_1 that appears in the remainder term, we conclude that √n(β̂_1 − β*_1) = S_n + T_n + W_n + o_P(1). Writing g(x, a) for the function equal to 1 if x > 0, a if x = 0, and 0 if x < 0, we obtain that, for a given h_{2,1}, … Define the function w : D_{p_1} × l^∞(F) × R^{p_{21}} × [0, 1] → R^{p_1} by w(Σ, μ, ν, a) = Σ⁻¹ μ(g(ν, B_1, H_{2,1}, a)), where …

Fig. 2. Left: bias for soft-thresholding. Right: bias for SoftMax. In both panels the bias is measured in units of 1/√n for n = 10, as a function of effect size and of the tuning parameter (σ for soft-thresholding, α for SoftMax).