UvA-DARE (Digital Academic Repository) A simulation-extrapolation approach for the mixture cure model with mismeasured covariates

Abstract: We consider survival data from a population with cured subjects in the presence of mismeasured covariates. We use the mixture cure model to account for the individuals that will never experience the event and, at the same time, distinguish between the effect of the covariates on the cure probabilities and on the survival times. In particular, for practical applications, it seems of interest to assume a logistic form for the incidence and a Cox proportional hazards model for the latency. To correct the estimators for the bias introduced by the measurement error, we use the simex algorithm, a very general simulation-based method. It essentially estimates this bias by introducing additional error to the data and then recovers bias-corrected estimators through an extrapolation approach. The estimators are shown to be consistent and asymptotically normally distributed when the true extrapolation function is known. We investigate their finite sample performance through a simulation study and apply the proposed method to analyse the effect of the prostate specific antigen (PSA) on patients with prostate cancer.


INTRODUCTION
Classical survival analysis methods are designed to deal with time-to-event data in the presence of censoring and covariates. However, they often fail to address various challenges presented by real-life problems. In recent times, significant advances have been made in adapting and extending traditional methods for handling data with more complex features. In this article we account simultaneously for a cure fraction of the population, referring to those subjects that are immune to the event of interest, and for covariates measured with error. Such situations arise frequently in practice. For instance, in cancer studies it is known that some of the patients will never experience recurrence or cancer-related death, and certain biomarker expressions such as the hemoglobin level or the tumor size cannot be measured precisely. The systolic blood pressure is also known to be an error-prone predictor for the development of coronary heart disease. Examples of variables that cannot be measured precisely and events that are not experienced by the whole population can also be found in economic and social studies. Ignoring both these characteristics in the statistical procedures would most probably lead to incorrect inferences.
Cure rate models were first introduced by Boag (1949) and Berkson & Gage (1952), but only quite recently have they attracted attention in the statistical literature and applications. The proposed models can be divided into two main categories: mixture cure models and promotion time models (see Amico & Van Keilegom (2018) for a detailed review). The former assume that the population consists of two subpopulations, the cured and the susceptible one, and model separately the incidence (the probability of being noncured) and the latency (the survival of the noncured subjects) using parametric or nonparametric models. The latter have a proportional hazards structure and extend the classical Cox regression model to allow the survival function to flatten at a level greater than zero. There is no clear indication of which approach is more appropriate but, in general, mixture cure models are preferred when one wants to distinguish between variables that affect the cure probability and those that affect the survival of the uncured subjects.
On the other hand, there is a vast literature on bias correction methods, mainly in regression models with covariates contaminated by measurement error (see Carroll et al. (2006)). The classical additive error model is generally accepted, and the most common methods to deal with it are the so-called functional ones, which do not make any assumption on the distribution of the unobserved true covariates. They can be divided into three large classes: regression calibration, score function methods and simulation-extrapolation (simex). The latter is particularly appealing because it is a simulation-based method that can be easily adapted to any kind of model. It only requires an estimation method for the setting without measurement error and can be easily implemented (though it is computationally more intensive). In survival analysis it has been applied to the semiparametric Cox model (Carroll et al. (2006)), the marginal hazards model for multivariate failure time data (Greene & Cai (2004)), the frailty model for clustered survival data (Li & Lin (2003)), etc.
However, there are only limited studies on cure rate models with measurement error. This problem was first addressed by Mizoi et al. (2007) and Ma & Yin (2008), who proposed a corrected score approach for the parametric and semiparametric promotion time models respectively. Afterwards, the simex procedure was introduced as an alternative estimation method for a more general version of promotion time models by Bertrand et al. (2017a), and an extensive simulation study was carried out by Bertrand et al. (2017b) to compare it with the corrected score approach and get a better understanding of the robustness of the method. In the context of mixture cure models, the simex algorithm has only been proposed for left-truncated right-censored data when a transformation model is assumed for the latency (Chen (2019)). However, Chen (2019) considers only the case in which the mismeasured covariate affects only the latency, and the theory is developed for one specific estimation method based on martingale integral representations. In particular, the most commonly used logistic/Cox mixture cure model for right-censored data and the maximum likelihood estimation method (based on the EM algorithm) have not been investigated in the presence of measurement error. The popularity of this model motivates us to search for solutions that correct the estimates for the biases induced by the measurement error.
Here we propose a simex approach for a general mixture cure model with a parametric form of the incidence and a semiparametric model for the latency. Any estimation method developed for the setting without measurement error can be used within the simex algorithm. We focus mainly on the logistic/Cox setting, given its practical relevance, but the proposed procedure and the asymptotic theory hold for other mixture cure models as well, provided that the considered estimation method in the absence of measurement error satisfies certain conditions. In particular, these conditions are satisfied by the maximum likelihood estimator introduced in Sy & Taylor (2000) and the presmoothing approach proposed by Musta et al. (2020). We use both these estimators in the simex procedure and compare them through a simulation study. In contrast to the previously considered promotion time models, here we find that if the mismeasured covariate affects only one of the two components (incidence or latency), the estimation of the other component remains undisturbed even if the variables are correlated. However, using the simex algorithm to correct for the bias does not always lead to better results in terms of mean squared error. The decision on whether to choose simex over the naive approach (ignoring the bias) depends on a number of factors. In particular, a large sample size, a strong effect of the covariate, a relatively large measurement error and low censoring favour the use of the simex approach.
The article is organized as follows. We start by describing a general parametric/semiparametric mixture cure model with measurement error in Section 2 and then explain the simex estimation procedure in Section 3. Asymptotic properties of the estimators are presented in Section 4, while their practical performance for the logistic/Cox mixture cure model is demonstrated through simulation studies in Section 5. Finally, in Section 6, we apply the proposed method to a prostate cancer dataset to account for measurement error in the values of the prostate specific antigen.

MIXTURE CURE MODEL WITH MEASUREMENT ERROR
Suppose we are interested in the time T until a certain event happens. In contrast to classical survival analysis, in cure models it is possible to have T = ∞ (the event never happens), reflecting the presence of a cure fraction. On the other hand, a finite survival time corresponds to susceptible subjects that will experience the event at some time point. If we indicate by B the uncured status, i.e. B = 1{T<∞}, then we can write

T = B T0 + (1 − B) · ∞,

with T0 representing the survival time for an uncured individual. The challenge of dealing with this type of model arises from the fact that, because of finite censoring times, it is impossible to completely separate the two groups. To be precise, if C denotes the censoring time, then we only observe the follow-up time Y = min(T, C) and the censoring indicator ∆ = 1{T≤C}. Hence, for the observations with ∆ = 0, we do not know whether they are cured or susceptible. In addition to the cure fraction and censoring, it is desirable to also account for the impact of certain covariates on the time-to-event variable. Let (XT, ZT)T be a (p + q)-dimensional vector of covariates, where xT denotes the transpose of the vector x. The advantage of mixture cure models with respect to promotion time models is that they can distinguish between the covariates X, which affect the cure rate, and Z, which affect the survival of the uncured subjects, i.e.

S(t|x, z) = P(T > t|X = x, Z = z) = 1 − φ(x) + φ(x) Su(t|z), (1)

where φ(x) = P(B = 1|X = x) is the incidence and Su(t|z) = P(T0 > t|B = 1, Z = z) is the latency.
However, it is possible for X and Z to be the same or to share some components. As commonly done in studies of cure models, we assume that the censoring time and the survival time are independent given the covariates, which is equivalent to requiring T0 ⊥ (C, X)|Z and B ⊥ (C, T0, Z)|X (see Lemma 1 in Appendix A of Musta et al. (2020)).
In this paper we deal with situations in which some of the continuous covariates included in X and/or Z are subject to measurement error. For ease of notation and interpretation we define the vector of unique covariates (E(1)T, E(2)T, E(3)T)T ∈ Rp+q1, where E(1) denotes the covariates in X that are not present in Z, E(2) denotes the common components of X and Z, E(3) denotes the covariates in Z that are not present in X, and q1 is the number of covariates in E(3). In other words, we are removing the repeated covariates from the vector (XT, ZT)T without losing any information. In the presence of measurement error, instead of (E(1)T, E(2)T, E(3)T)T we observe

W = (E(1)T, E(2)T, E(3)T)T + U, (2)

where U ∈ Rp+q1 is the vector of measurement errors. We assume that U is independent of (X, Z, T, C) and that it follows a continuous distribution with mean zero and known variance matrix V. The elements of V corresponding to covariates with no measurement error (including non-continuous covariates) are set to zero. However, no parametric assumption is made on the distribution of the errors. In particular, the measurement error is not required to be normally distributed.
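The classical additive error model above can be sketched as follows. This is an illustration with hypothetical covariates and variances, not code from the paper: error-free components of the covariate vector get zero entries in V, and the errors need not be Gaussian.

```python
# Illustrative sketch of the additive error model W = E + U, where U has mean
# zero and known covariance V. The third covariate is assumed error-free, so
# its entry of V is zero. All names and numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical true covariate vector E = (E1, E2, E3); E3 is error-free.
E = rng.normal(size=(n, 3))
V = np.diag([0.49, 0.16, 0.0])  # assumed known error variances

# U need not be normal: a centred uniform for E1, a normal for E2, no error for E3.
U = np.column_stack([
    rng.uniform(-np.sqrt(3 * 0.49), np.sqrt(3 * 0.49), size=n),  # variance 0.49
    rng.normal(0.0, 0.4, size=n),                                # variance 0.16
    np.zeros(n),                                                 # no error
])
W = E + U  # observed, error-prone covariates

# The observed variance is inflated: var(W) is approximately var(E) + diag(V).
print(np.round(W.var(axis=0) - E.var(axis=0), 2))
```

The printed differences approximate the diagonal of V, which is the inflation that the simex procedure later exploits.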
We consider a general mixture cure model with a parametric form of the incidence and a semiparametric model for the latency. To be precise, the cure probability of a subject with covariate x is 1 − φ(γ0, x) for some known function φ : Rp × Rp → [0, 1] and γ0 ∈ Rp, while the conditional survival function of the noncured subjects Su(·|z) depends on a parametric component β0 and a nonparametric non-decreasing function Λ0 (for example the cumulative baseline hazard). As a result, the conditional survival function corresponding to T is then

S(t|x, z) = 1 − φ(γ0, x) + φ(γ0, x) Su(t|z).

The logistic model, where

φ(γ, x) = exp(γT x) / {1 + exp(γT x)}, (3)

is perhaps the most common one when a parametric form of the cure probability is adequate. On the other hand, the Cox proportional hazards model (Cox (1972)),

Su(t|z) = exp{−Λ0(t) exp(βT z)}, (4)

and the accelerated failure time model, where Λ0 is the baseline cumulative hazard function, are both widely used semiparametric modelling approaches for the latency. However, our methodology applies more generally to parametric/semiparametric mixture cure models, provided that an estimation method for the case without measurement error is available. The goal is to estimate the true parameters γ0, β0 and Λ0 on the basis of n i.i.d. observations (Y1, ∆1, W1), ..., (Yn, ∆n, Wn), knowing the variance matrix V of the measurement error. In the next section we propose a simulation-extrapolation approach designed to reduce the bias due to the measurement error.
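The logistic/Cox specification above can be written out as a short sketch. The function names and the toy baseline hazard are ours, not the paper's; the point is only that the survival function flattens at the cure probability as t grows.

```python
# Minimal sketch of the logistic/Cox mixture cure survival function
# S(t|x,z) = 1 - phi(gamma, x) + phi(gamma, x) * Su(t|z); names are ours.
import numpy as np

def phi(gamma, x):
    """Logistic incidence: probability of being uncured (x includes intercept 1)."""
    return 1.0 / (1.0 + np.exp(-np.dot(gamma, x)))

def S_uncured(t, beta, z, Lambda0):
    """Cox latency: Su(t|z) = exp(-Lambda0(t) * exp(beta' z))."""
    return np.exp(-Lambda0(t) * np.exp(np.dot(beta, z)))

def S(t, gamma, x, beta, z, Lambda0):
    p = phi(gamma, x)
    return 1.0 - p + p * S_uncured(t, beta, z, Lambda0)

# Toy example with a Weibull-type baseline cumulative hazard (our assumption).
Lambda0 = lambda t: 1.5 * t ** 1.75
gamma, beta = np.array([0.5, 1.0]), np.array([1.0])
x, z = np.array([1.0, 0.3]), np.array([0.3])

# As t grows, S(t|x,z) flattens at the cure probability 1 - phi(gamma, x).
print(S(50.0, gamma, x, beta, z, Lambda0), 1.0 - phi(gamma, x))
```

The plateau of S at 1 − φ(γ, x) is precisely what distinguishes a cure model from a classical proportional hazards model.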

METHODOLOGY
The basic idea behind the simex algorithm is that we can gain insight into how the measurement error affects the estimators by creating artificial data with increasing levels of measurement error and estimating the parameters as if there were no error. The information obtained is then used in a second step to recover bias-corrected estimators through an extrapolation approach. Next we describe the details of this procedure.
Step 1. (Simulation) We choose K levels of added noise λ1, ..., λK ≥ 0 and for each of them we generate a large number B of artificially contaminated samples. To be precise, for each λ ∈ {λ1, ..., λK} and b ∈ {1, ..., B}, we simulate independent identically distributed variables {Ũb,i}ni=1, independently of the observed data and with distribution ND(0, ID), where D = p + q1 is the dimension of the vector W. Afterwards, we construct the new covariates

Wi,λ,b = Wi + (λV)1/2 Ũb,i,

where V is the covariance matrix of the error in (2). Distributions different from the Gaussian can be used too, but here we focus on normal errors. The mixture cure model satisfied by the new covariates Wi,λ,b is characterized by the parameters γλ, βλ and Λλ. Using {Yi, ∆i, Wi,λ,b}ni=1 we estimate γλ, βλ and Λλ, as if there were no measurement error, obtaining γ̂λ,b, β̂λ,b and Λ̂λ,b. The latter is an estimator of Λ0 over some compact interval [0, τ]. Any available estimation method can be used. For example, in the logistic/Cox mixture cure model, the maximum likelihood estimation (Sy & Taylor (2000); Cai et al. (2012)) or the presmoothing approach proposed by Musta et al. (2020) can be considered.
At the end, for each level of contamination, the average values of the B estimates are calculated:

γ̂_λ = (1/B) Σ_{b=1}^B γ̂_{λ,b},  β̂_λ = (1/B) Σ_{b=1}^B β̂_{λ,b},  Λ̂_λ(t) = (1/B) Σ_{b=1}^B Λ̂_{λ,b}(t). (5)

Note that, if the estimators Λ̂λ,b are piecewise constant with jumps at the observed event times, then Λ̂λ is also piecewise constant with jumps at the observed event times. The parameters to be chosen in this step are K, the λ's and B. Common values are K = 5, λ ∈ {0, 0.5, 1, 1.5, 2} and B = 50 (Carroll et al. (1996); Cook & Stefanski (1994)).
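The simulation step can be sketched generically. The `fit` argument below is a placeholder for any estimator applied as if there were no measurement error (in the paper's setting, a mixture cure estimator); for a self-contained illustration we plug in a naive least-squares slope, whose attenuation with growing λ is exactly the pattern simex exploits. All names here are ours.

```python
# Sketch of the simex simulation step for a single error-prone covariate,
# so V is a scalar variance. `fit` stands in for any estimation method used
# while ignoring the measurement error; names and values are hypothetical.
import numpy as np

def simex_step1(W, fit, V, lambdas=(0.0, 0.5, 1.0, 1.5, 2.0), B=50, seed=1):
    """For each noise level lambda, average the B estimates obtained on the
    artificially contaminated covariates W + sqrt(lambda * V) * U_tilde."""
    rng = np.random.default_rng(seed)
    n = len(W)
    averages = {}
    for lam in lambdas:
        estimates = []
        for _ in range(B):
            U_tilde = rng.normal(size=n)             # N(0, 1) pseudo-errors
            W_lam = W + np.sqrt(lam * V) * U_tilde   # contaminated covariate
            estimates.append(fit(W_lam))             # naive fit, ignoring error
        averages[lam] = np.mean(estimates, axis=0)   # average over the B fits
    return averages

# Toy illustration: a naive OLS slope of a response on W, attenuated as lambda grows.
rng = np.random.default_rng(0)
X = rng.normal(size=2000)
y = 1.0 * X + rng.normal(scale=0.2, size=2000)
W = X + rng.normal(scale=0.5, size=2000)             # error variance V = 0.25
fit = lambda w: np.polyfit(w, y, 1)[0]
avg = simex_step1(W, fit, V=0.25)
print(avg[0.0], avg[2.0])  # attenuation worsens with lambda
```

In the paper's setting `fit` would return (γ̂λ,b, β̂λ,b, Λ̂λ,b) from, say, the smcure-style EM algorithm or the presmoothing estimator; the averaging in (5) is the same.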
Step 2. (Extrapolation) Note that, by independence, the covariance matrix of the simulated covariates satisfies var(Wi,λ,b) = var(Wi) + λV = var(Ei) + (1 + λ)V, where Ei denotes the true covariate vector. This means that the error variance has been inflated by a factor 1 + λ and that the ideal case of no measurement error corresponds to λ = −1 (adding 'negative' variance). Hence, the idea is to model the relationship between λ and the estimators γ̂λ, β̂λ, Λ̂λ by fitting a regression function and then extrapolating to λ = −1. First, an extrapolant function needs to be chosen (e.g. linear, quadratic or fractional) for each component of γ̂λ, β̂λ, Λ̂λ as a function of λ. For example, for the quadratic case and λ ∈ {λ1, ..., λK}, we have

γ̂_{λ,j} = g_{γ,j}(a*_{γj}, λ) + ε_{γ,λ,j} = a*_{γj,1} + a*_{γj,2} λ + a*_{γj,3} λ² + ε_{γ,λ,j},  j = 1, ..., p,
β̂_{λ,j} = g_{β,j}(a*_{βj}, λ) + ε_{β,λ,j} = a*_{βj,1} + a*_{βj,2} λ + a*_{βj,3} λ² + ε_{β,λ,j},  j = 1, ..., q,
Λ̂_λ(t) = g_Λ(a*_t, λ) + ε_{Λ,λ,t} = a*_{t,1} + a*_{t,2} λ + a*_{t,3} λ² + ε_{Λ,λ,t},  t ∈ [0, τ],

where ε_{γ,λ,j}, ε_{β,λ,j} and ε_{Λ,λ,t} are the error terms in the extrapolant model, assumed to have mean zero and to be independent. Estimators â_{γj} = (â_{γj,1}, â_{γj,2}, â_{γj,3}), â_{βj} = (â_{βj,1}, â_{βj,2}, â_{βj,3}) and â_t = (â_{t,1}, â_{t,2}, â_{t,3}) of the unknown parameters of the extrapolant function are obtained by fitting the previous regression models using the method of least squares. Finally, the simex estimators are defined by

γ̂_{simex,j} = g_{γ,j}(â_{γj}, −1),  β̂_{simex,j} = g_{β,j}(â_{βj}, −1),  Λ̂_simex(t) = g_Λ(â_t, −1).

If the initial estimators Λ̂λ,b are piecewise constant with jumps at the observed event times, then the extrapolation procedure needs to be applied only at the observed event times t ∈ {T(1), ..., T(m)}. Equivalently, the procedure can be applied to the jump sizes, with different coefficients a* and a possibly different extrapolation function (if it is not polynomial). Even though this does not guarantee that the resulting estimator Λ̂simex is non-decreasing, in practice this is often the case. If one is interested in estimation of Λ0 on the whole support and Λ̂simex is not monotone, an isotonized version of it, using for example the pool-adjacent-violators algorithm (Robertson et al. (1988)), would be a more reasonable estimate. However, here we focus on estimation of the parameters γ, β and do not pursue this aspect further. Note also that different extrapolation functions lead to different results. Hence it is important to have a good approximation of the true extrapolation function.
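The quadratic extrapolation step can be sketched in a few lines. The attenuated-slope values below are illustrative (the closed-form attenuation of a naive regression slope), not results from the paper; they show how evaluating the fitted quadratic at λ = −1 removes most of the bias.

```python
# Sketch of the simex extrapolation step: fit a quadratic in lambda to the
# averaged estimates and evaluate it at lambda = -1 (the no-error case).
import numpy as np

def simex_step2(lambdas, averages):
    """Least-squares quadratic extrapolant g(a, lam) = a1 + a2*lam + a3*lam^2,
    evaluated at lam = -1 to obtain the simex estimate."""
    a = np.polyfit(lambdas, averages, 2)   # coefficients, highest degree first
    return np.polyval(a, -1.0)

# Naive slopes attenuated by the factor var(X) / (var(X) + (1 + lam) * V),
# with true slope 1, var(X) = 1 and V = 0.25 (illustrative numbers): the
# quadratic extrapolant at lam = -1 recovers most of the attenuation bias.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
attenuated = 1.0 / (1.0 + (1.0 + lambdas) * 0.25)
print(simex_step2(lambdas, attenuated))   # closer to 1 than the naive 0.8
```

Since the true attenuation curve is fractional rather than quadratic, a small extrapolation bias remains; this is the sensitivity to the extrapolant choice mentioned above.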

ASYMPTOTIC PROPERTIES

General results
In this section we establish some theoretical results regarding the large-sample properties of the proposed estimators. A drawback of the simex approach is that consistency and asymptotic normality of the estimators hold only if the true extrapolation function is known, which is usually not the case in practice. When the true extrapolant function is not known but an approximation of it is used, the results of Theorems 1 and 2 hold with γ0, β0, Λ0(t) replaced by limλ→−1 gγ(aγ, λ), limλ→−1 gβ(aβ, λ) and limλ→−1 gΛ(at, λ) respectively. Here gγ(aγ, λ) denotes the vector (gγ,1(aγ1, λ), ..., gγ,p(aγp, λ))T, and gβ(aβ, λ), gΛ(at, λ) are defined similarly. We first establish the asymptotic results in a general mixture cure model as described in Section 2, assuming that the estimation method used to obtain γ̂λ,b, β̂λ,b, Λ̂λ,b (ignoring the measurement error) satisfies certain conditions. Afterwards, we focus on two estimation methods for the logistic/Cox mixture cure model and show that the required conditions are met. All the proofs can be found in the Supplementary Material.
For a fixed λ > 0 consider observations (Y, ∆, Wλ), where Wλ = W + (λV)1/2 Ũ, and the mixture cure model with conditional survival function

Sλ(t|wλ) = 1 − φ(γλ, xλ) + φ(γλ, xλ) Su,λ(t|zλ),

where xλ collects the components of Wλ entering the incidence and zλ those entering the latency; as mentioned in Section 2, the decomposition of Wλ into three components corresponds to the covariates that influence only the cure probability, those that are common to the incidence and the latency, and those that affect only the latency. The survival of the uncured subjects Su,λ depends on the regression parameters βλ and the nonparametric function Λλ. Suppose we have an estimation method that provides estimates γ̂λ, β̂λ and Λ̂λ, the latter being a non-decreasing function. The following conditions are needed in order to establish the asymptotic results.
(A1) With probability one, for some τ > 0 and as n → ∞,

‖γ̂λ − γλ‖ + ‖β̂λ − βλ‖ + sup_{t∈[0,τ]} |Λ̂λ(t) − Λλ(t)| → 0,

i.e. the estimators are strongly consistent. By ‖·‖ we denote the Euclidean norm.

(A2) For m < ∞, the estimators admit an asymptotic i.i.d. representation of the form

n1/2 {h1T(γ̂λ − γλ) + h2T(β̂λ − βλ) + ∫ h3 d(Λ̂λ − Λλ)} = n−1/2 Σ_{i=1}^n Ψλ(Yi, ∆i, Wλ,i, h1, h2, h3) + oP(1),

for directions (h1, h2, h3), where the corresponding class of functions Ψλ is uniformly bounded and Donsker.
Moreover, in what follows, we assume that the extrapolant functions g(a, λ) are such that the matrix ġ(a, λ) of partial derivatives with respect to the elements of a is bounded and continuous at the true parameters a*, and has full rank, i.e. ġ(a*, λ)T ġ(a*, λ) is invertible.
THEOREM 1. Suppose that condition (A1) is satisfied and that Λ0 is continuous. If the measurement error variance and the true extrapolant functions are known then, with probability one,

γ̂simex → γ0,  β̂simex → β0  and  sup_{t∈[0,τ]} |Λ̂simex(t) − Λ0(t)| → 0.

THEOREM 2. Suppose that conditions (A1)-(A2) are satisfied and that Λ0 is continuous. If the measurement error variance and the true extrapolant functions are known, then n1/2(γ̂simex − γ0) converges in distribution to N(0, Σγ) and n1/2(β̂simex − β0) converges in distribution to N(0, Σβ), with Σγ and Σβ as in (S2) and (S3) in the Supplementary Material. Moreover, n1/2(Λ̂simex(t) − Λ0(t)), t ∈ [0, τ], converges weakly to a Gaussian process.

The proofs of Theorems 1 and 2 follow the usual arguments for simex estimators. In particular, consistency relies mainly on the consistency of the estimators for each λ and on the consistency of the estimated extrapolant functions. Moreover, the i.i.d. representation in condition (A2) and the expressions in (5) allow us to obtain convergence to a Gaussian process for any λ. Finally, the asymptotic normality of the simex estimators follows by the delta method. Details of the proofs can be found in the Supplementary Material.

Example: logistic/Cox mixture cure model
The logistic/Cox mixture cure model is perhaps the most commonly used one for studying survival data in the presence of a cure fraction. It assumes that the function φ(γ, x) is as in (3), where the first component of x is equal to one and γ1 corresponds to the intercept. On the other hand, the distribution of the uncured subjects follows a Cox proportional hazards model as in (4), where Λ0 is the baseline cumulative hazard, β0T Z does not contain an intercept and the matrix var(Z) is assumed to have full rank for the Cox model to be identifiable. The classical estimator in this setting is the maximum likelihood estimator proposed by Sy & Taylor (2000) and implemented in the R package smcure. The estimator is computed through the expectation-maximization algorithm because of the unobserved latent variable B, and its asymptotic properties are investigated in Lu (2008). Recently, an alternative estimation procedure relying on presmoothing was proposed by Musta et al. (2020). It uses a preliminary nonparametric estimator for the cure probabilities and ignores the Cox model when estimating γ0. It is shown through simulations that, if the interest is focused on estimation of the parameters of the incidence, this method usually performs better than the maximum likelihood estimator. However, both methods lead to very similar results when estimating the latency. Next we show that these two estimators satisfy our conditions (A1)-(A2) and, as a result, both procedures can be used in the simex algorithm, leading to consistent and square-root-n convergent estimators.

THEOREM 3. Consider the maximum likelihood estimation method proposed by Sy & Taylor (2000). Assume that conditions 1-4 in Lu (2008) are satisfied. Then our conditions (A1)-(A2) above hold with Ψλ(y, δ, w, h1, h2, h3) as in (S9) in the Supplementary Material.

THEOREM 4. Consider the estimation method proposed by Musta et al. (2020) and assume that their assumptions (C1)-(C4), (AC2), (AC5)-(AC7) are satisfied. Then our conditions (A1)-(A2) above hold with Ψλ(y, δ, w, h1, h2, h3) as in (S11) in the Supplementary Material.
In order for the mixture cure model to be identifiable, the conditional survival function of the uncured subjects is usually assumed to vanish beyond some finite time τ0, i.e. Su(t|z) = 0 for t ≥ τ0. In practice, cure rate models are used when there is a long follow-up beyond the largest observed event time T(m), and the zero-tail constraint is applied, i.e. the censored subjects with follow-up time larger than T(m) are considered cured. To be able to develop the asymptotic theory, Lu (2008) assumes that infz P(T0 = τ0|Z = z) > 0, while Musta et al. (2020) argue that this assumption can be avoided thanks to the presmoothing step.
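The zero-tail constraint mentioned above can be illustrated with a short sketch. This is our own illustration (the function name is hypothetical): events are certainly uncured, censored subjects beyond the last event time are labelled cured, and the status of the remaining censored subjects stays unknown.

```python
# Sketch of the zero-tail constraint: censored subjects with follow-up time
# beyond the largest observed event time T_(m) are treated as cured.
import numpy as np

def zero_tail_cure_labels(Y, Delta):
    """Return 1 (uncured) for observed events, 0 (cured) for censored subjects
    beyond the last event time, and NaN (unknown status) otherwise."""
    Y, Delta = np.asarray(Y, float), np.asarray(Delta, int)
    T_m = Y[Delta == 1].max()               # largest observed event time
    labels = np.full(len(Y), np.nan)
    labels[Delta == 1] = 1.0                # events are certainly uncured
    labels[(Delta == 0) & (Y > T_m)] = 0.0  # plateau of the follow-up: cured
    return labels

Y = [1.2, 3.4, 2.0, 6.0, 7.5]
Delta = [1, 1, 0, 0, 0]
print(zero_tail_cure_labels(Y, Delta))  # [1, 1, nan, 0, 0]
```

In EM-type estimation the unknown (NaN) statuses are the latent variables whose conditional expectations are computed in the E-step.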

NUMERICAL STUDY
Setup

In this section we investigate the finite-sample behaviour of the simex method in the logistic/Cox mixture cure model. The two estimation approaches considered in Section 4.2 are used within the simex algorithm and compared with each other in the context of mismeasured covariates. Results for a variety of models and scenarios are presented in the next subsections. We try to cover a wide range of situations and capture the effect of the cure rate, the censoring rate, the sample size and the measurement error variance. Unless stated otherwise, the error distribution is Gaussian and the extrapolation function used is quadratic, which seems to be a good compromise in terms of bias and variance (Cook & Stefanski (1994); Carroll et al. (2006); Li & Lin (2003); Bertrand et al. (2017a)). Finally, we also briefly investigate the robustness of the method with respect to the extrapolation function and to misspecification of the error distribution and variance. In all the simulation studies, for the simex method, we choose B = 50, K = 5, λ ∈ {0, 0.5, 1, 1.5, 2} (as these seem to be quite common choices in the literature) and, for each setting, 500 simulated datasets were used to compute the bias, variance and mean squared error (MSE) of the estimators. We also compare the bias-corrected estimators with the naive estimators, which do not take the measurement error into account. The bandwidth for the estimator based on presmoothing is chosen as in Musta et al. (2020), i.e. the cross-validation optimal bandwidth for estimation of the conditional distribution H(t|x) for t ≤ Y(m), truncated from above at 2, where Y(m) is the largest uncensored observation and x is the continuous covariate affecting the incidence (standardized). To reduce computational time, we compute this bandwidth only once for the initial dataset and use the same one for the data with added noise. We observed that not updating the bandwidth for each b ∈ {1, ..., B} and λ ∈ {0.5, 1, 1.5, 2} does not have a significant impact on the final results. Moreover, we assume that the standard deviation of the error is known, which is usually not the case in practice. In such situations, a preliminary step of variance estimation is required before applying the simex procedure (see for example Bertrand et al. (2019)).
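When the error variance is unknown, one standard possibility for the preliminary estimation step — not necessarily the procedure of Bertrand et al. (2019) — is to use replicate measurements of the error-prone covariate and pool the within-subject sample variances. A hedged sketch, with hypothetical names and numbers:

```python
# Replicate-based error variance estimation: with W_ij = E_i + U_ij for
# j = 1, ..., k replicates per subject, the pooled within-subject variance
# is an unbiased estimate of var(U). Names and values are hypothetical.
import numpy as np

def replicate_error_variance(W):
    """W: (n, k) array of k replicate measurements per subject.
    Returns the pooled within-subject variance estimate of var(U)."""
    W = np.asarray(W, float)
    n, k = W.shape
    within = W - W.mean(axis=1, keepdims=True)   # deviations from subject means
    return (within ** 2).sum() / (n * (k - 1))   # pooled unbiased estimate

rng = np.random.default_rng(0)
E = rng.normal(size=(4000, 1))
U = rng.normal(scale=0.7, size=(4000, 2))        # true error sd 0.7
v2_hat = replicate_error_variance(E + U)
print(round(v2_hat, 2))  # close to the true variance 0.49
```

The estimate v2_hat would then play the role of the known V in the simex procedure.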

One mismeasured covariate
We start by considering a simplified model in which there is only one covariate of interest, measured with error, affecting both the cure probability and the survival of the uncured subjects.
Model 1. Both the incidence and the latency depend on one covariate X, which is a standard normal random variable. We generate the cure status B as a Bernoulli random variable with success probability φ(γ, x) = 1/(1 + exp(−γ1 − γ2x)). The survival times for the uncured observations are generated according to the Weibull proportional hazards model

Su(t|x) = exp(−µ tρ exp(βx)), (6)

truncated at τ0 = 7, with ρ = 1.75, µ = 1.5 and β = 1. The censoring times are independent of X and T. They are generated from the exponential distribution with parameter λC and are truncated at τ = 9. Various choices of the parameters γ and λC with the corresponding cure and censoring rates can be found in Table 1. Here and in what follows, the truncation of the survival times and censoring times is done in such a way that τ0 < τ and it is unlikely to observe an event time at τ0. This mimics real-life situations in which cure models are adequate. X is measured with error, i.e. instead of X we observe W = X + U, where U ∼ N(0, v2).
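A data-generating scheme in the spirit of Model 1 can be sketched as follows. This is our own hedged illustration: the Weibull proportional hazards parameterization with conditional cumulative hazard µ t^ρ exp(βx) is one common convention, and the function name and default values are ours.

```python
# Hedged sketch of a Model-1-style data generator: logistic incidence,
# Weibull PH latency (cumulative hazard mu * t**rho * exp(beta * x), our
# assumed convention), truncated exponential censoring, additive error on X.
import numpy as np

def generate_model1(n, gamma1, gamma2, lam_C, v=0.7,
                    rho=1.75, mu=1.5, beta=1.0, tau0=7.0, tau=9.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=n)
    p_uncured = 1.0 / (1.0 + np.exp(-gamma1 - gamma2 * X))
    B = rng.binomial(1, p_uncured)                        # uncured status
    # Weibull PH times for the uncured, by inverting the cumulative hazard.
    T0 = (-np.log(rng.uniform(size=n)) / (mu * np.exp(beta * X))) ** (1 / rho)
    T0 = np.minimum(T0, tau0)                             # truncate at tau0
    T = np.where(B == 1, T0, np.inf)                      # cured: T = infinity
    C = np.minimum(rng.exponential(1 / lam_C, size=n), tau)
    Y, Delta = np.minimum(T, C), (T <= C).astype(int)
    W = X + rng.normal(0.0, v, size=n)                    # error-prone covariate
    return Y, Delta, W

Y, Delta, W = generate_model1(n=200, gamma1=1.0, gamma2=0.5, lam_C=0.2)
print(Delta.mean())  # observed event proportion = 1 - censoring rate
```

Cured subjects get T = ∞ and are therefore always censored, which produces the plateau in the follow-up that cure models require.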
Results for sample size n = 200 (n = 400) and measurement error variance v2 = 0.72 are given in Table 2 (Table S1 in the Supplementary Material). This corresponds to a large-error situation, since the ratio between the standard deviation of the error and the standard deviation of the covariate is 0.7. Below we also consider settings with smaller measurement error.
First of all, we observe that in the presence of measurement error there is usually no advantage in using the presmoothing approach instead of maximum likelihood estimation. In particular, when the bias induced by the measurement error is large, the estimator based on presmoothing seems more affected, for both the naive and the simex method. Moreover, most of the time the bias is observed only for the coefficients that correspond to the variables measured with error. As expected, in all cases the simex algorithm reduces this bias at the price of a larger variance. In terms of mean squared error, it is better to use the naive approach for coefficients that are small in absolute value (the case of γ2 in setting 1), while the simex method is preferred when the absolute value of the coefficient is large (i.e. the covariate has a greater effect on the cure/survival). In this setting, for n = 200, γ2 = 0.5 seems to be a borderline case, meaning that the simex method performs better when the censoring rate is low, while the naive method has smaller MSE when the censoring rate is high. In addition, the results show that when the coefficient of a mismeasured covariate is large, there might be induced bias even for the intercept, which is also corrected by the simex algorithm. As the sample size increases, the bias created by the measurement error increases but the variance decreases, for both the naive and the simex estimators. Furthermore, the advantage of using simex instead of ignoring the bias becomes more significant. At the same time, the threshold absolute value of a coefficient for which bias correction leads to a better MSE decreases (simex is preferred for γ2 = 0.5 in setting 2, which was a borderline case for n = 200).

More realistic scenarios
Through the following four models we try to cover more realistic situations and investigate the effect of the measurement error on the naive and bias-corrected estimators.
Model 2. Both the incidence and the latency depend on two independent covariates: X1 has a uniform distribution on the interval [−1, 1] and X2 is a Bernoulli random variable with success probability 0.5. We generate the cure status B as a Bernoulli random variable with success probability φ(γ, x) = 1/(1 + exp(−γ1 − γ2x1 − γ3x2)). The survival times for the uncured observations are generated according to a Weibull proportional hazards model and are truncated at τ0, with ρ = 1.75 and µ = 1.5. The censoring times are independent of (X, T). They are generated from the exponential distribution with parameter λC and are truncated at τ. Instead of X1 we observe W = X1 + U, where U ∼ N(0, v2). We consider v ∈ {0.2, 0.4}, corresponding to small and large error settings respectively.

Model 3. For the incidence we consider two independent covariates: X1 has a uniform distribution on the interval [−1, 1] and X2 is a Bernoulli random variable with success probability 0.5. The latency also depends on two covariates: Z1 = X1 and Z2, which is independent of the previous ones and normally distributed with mean zero and standard deviation 0.3. We generate the cure status B as a Bernoulli random variable with success probability φ(γ, x) = 1/(1 + exp(−γ1 − γ2x1 − γ3x2)). The survival times for the uncured observations are generated according to a Weibull proportional hazards model and are truncated at τ0, with ρ = 1.75 and µ = 1.5. The censoring times are independent of (T, X, Z). They are generated from the exponential distribution with parameter λC and are truncated at τ. The mismeasured covariate is Z2, i.e. we only observe W = Z2 + U, where U ∼ N(0, v2) and v ∈ {0.1, 0.2}, corresponding to small and large error settings respectively.

Model 4. The incidence depends on one covariate X, which is a standard normal random variable. The latency depends on two covariates: Z1 = X and Z2, which is independent of X and uniformly distributed on [−1, 1]. We generate the cure status B as a Bernoulli random variable with success probability φ(γ, x) = 1/(1 + exp(−γ1 − γ2x)). The survival times for the uncured observations are generated according to a Weibull proportional hazards model and are truncated at τ0, with ρ = 1.75 and µ = 1.5. The censoring times are independent of the vector (X, Z, T). They are generated from the exponential distribution with parameter λC and are truncated at τ. Instead of X and Z2 we observe W1 = X + U1 and W2 = Z2 + U2, where the error terms U1 ∼ N(0, v12) and U2 ∼ N(0, v22) are independent. We consider (v1, v2) = (0.35, 0.2) and (v1, v2) = (0.7, 0.4), corresponding to small and large error settings respectively.
Model 5. The incidence depends on one covariate X, which is a standard normal random variable. The latency depends on two correlated covariates: Z1 = X and Z2 = −X + N, where N is a normal random variable with mean zero and standard deviation 0.5, independent of X. We generate the cure status B as a Bernoulli random variable with success probability φ(γ, x) = 1/(1 + exp(−γ1 − γ2x)). The survival times for the uncured observations are generated according to the Weibull proportional hazards model in (6) and are truncated at τ0, with ρ = 1.75 and µ = 1.5. The censoring times are independent of the vector (X, Z, T). They are generated from the exponential distribution with parameter λC and are truncated at τ. The covariate Z2 is measured with error, i.e. instead of Z2 we observe W = Z2 + U, where U ∼ N(0, v2) is independent of the previous variables. We consider v = 0.39 and v = 0.78, corresponding to small and large error settings respectively.

Table 3: Parameter values and model characteristics for each scenario in Models 2-5.
For the four models, various choices of the parameters γ, β, λC and (τ0, τ) are considered, in such a way that we obtain three scenarios for the cure rate (20%, 30% and 50%) and different levels of censoring (see Table 3). The sample size is fixed at n = 200, while the variance of the measurement error is chosen as described in each model, corresponding to a ratio between the standard deviation of the error and the standard deviation of the covariate equal to 0.35 and 0.7. Some of the results are given in Table 4 and the rest can be found in Tables S2-S3 in the Supplementary Material.
Table 4: Bias, variance and MSE of γ and β for the naive and simex method based on the maximum likelihood (1) or the presmoothing (2) approach in Models 2-5 (n = 200). The first column gives the model, scenario and the standard deviation of the measurement error. All numbers were multiplied by 100.
Once more we observe that the maximum likelihood estimator and the estimator based on presmoothing give comparable results for both the naive and the simex method. As expected, the measurement error mainly affects the estimators of the coefficients corresponding to the mismeasured covariates. However, it also induces bias in the coefficients of variables that are correlated with the mismeasured covariate within the same component. For example, in Model 5 the measurement error in Z2 leads to biased estimators of β1 and β2, but does not affect the estimation of γ2 even though Z1 = X. In all settings, the simex method corrects for the bias due to the measurement error. Nevertheless, in terms of mean squared error, the naive approach is still preferred when the measurement error is small and the absolute value of the coefficient of the standardized covariate is small (i.e. the covariate has a weak effect on cure or survival). On the contrary, a strong effect (large coefficient) and a large measurement error favour the use of the simex method.

Robustness of the method
Here we investigate the robustness of the simex approach with respect to the choice of the extrapolation function and to misspecification of the error distribution and of the error standard deviation. We focus on Model 2, where the mismeasured covariate is X1 = Z1, affecting both the cure probability and the survival. The sample size is n = 200 and the error standard deviation is v = 0.2 or v = 0.4.
In addition to the quadratic extrapolant used in Table 4, we also consider a linear and a cubic extrapolant. The results in Table 5 show that, as the order of the extrapolation function increases, the difference between the maximum likelihood estimators and the estimators based on presmoothing becomes more pronounced. In particular, it favours the former method over the latter, mainly because of a smaller variance. As expected, the choice of the extrapolation function has a stronger effect on the coefficients corresponding to the mismeasured covariates and when the error is large. For v = 0.4, the bias decreases as the extrapolation order increases, while there is no clear conclusion when v is small. In terms of mean squared error, linear extrapolation is preferred when the measurement error variance is low or, more generally, in situations where the naive method would do better than the simex approach. In cases where simex outperforms the naive estimators, the quadratic extrapolant seems to be the best choice.
To understand what happens if the error distribution is misspecified, we generate the measurement error from three other distributions: a uniform distribution U ∼ Unif(−a, a), a Student-t distribution with k degrees of freedom (a^{-1} U ∼ t_k) and a chi-squared distribution with k degrees of freedom (a^{-1} U + k ∼ χ^2_k). The constant a is chosen in such a way that the standard deviation of U is v = 0.2 or v = 0.4. In all three cases we still use the Gaussian distribution in the simex procedure. Results are given in Table 6. We observe that, when the true distribution is uniform or Student-t, the method still behaves quite well and there is little impact on the estimators. However, when the true distribution is not symmetric (χ^2), there is a significant increase in mean squared error, in particular for large v.
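The role of the extrapolant can be made concrete with a generic simex sketch: add extra noise at a grid of λ values, average the re-estimates, fit a polynomial in λ, and extrapolate to λ = −1. For transparency the sketch below applies it to the slope of a simple linear regression (where the attenuation bias is visible in closed form) rather than to the cure model; all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simex(estimator, w, args, v, lambdas=(0.5, 1.0, 1.5, 2.0), B=50, degree=2):
    """Generic simex: re-estimate with error variance inflated to (1 + lam) v^2,
    fit a polynomial of the given degree in lam, extrapolate to lam = -1."""
    lam_grid = [0.0] + list(lambdas)
    means = []
    for lam in lam_grid:
        if lam == 0.0:
            means.append(estimator(w, *args))   # naive estimate, no extra noise
        else:
            ests = [estimator(w + rng.normal(0.0, v * np.sqrt(lam), len(w)), *args)
                    for _ in range(B)]
            means.append(np.mean(ests))
    coefs = np.polyfit(lam_grid, means, degree)
    return np.polyval(coefs, -1.0)              # extrapolate to the no-error point

# Toy illustration: attenuated slope in simple linear regression.
def slope(w, y):
    return np.cov(w, y, bias=True)[0, 1] / np.var(w)

n, v = 2000, 0.5
x = rng.normal(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 0.5, n)
w = x + rng.normal(0.0, v, n)                   # mismeasured covariate

naive = slope(w, y)                             # biased towards zero
corrected = simex(slope, w, (y,), v)            # quadratic extrapolation
```

The linear and cubic extrapolants of Table 5 correspond to degree = 1 or 3, and misspecifying the error standard deviation (as studied below) corresponds to passing a wrong v.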
Table 6: Bias, variance and MSE of γ and β for the simex method based on the maximum likelihood (1) or the presmoothing (2) approach when the error distribution is misspecified. All numbers were multiplied by 100.
Finally, we investigate the effect of misspecifying the error variance. We simulate the error from a normal distribution with standard deviation v = 0.2 or v = 0.4, but in the estimation process the standard deviation is misspecified as vE ∈ {v − 0.1, v + 0.1}. The results reported in Table 7 show that the misspecification affects the estimation of all the parameters, but the difference is larger for those corresponding to the mismeasured covariates. As expected, increasing the specified variance vE^2 leads to an increased variance of the simex estimators. For small v, the lowest bias is obtained when v is correctly specified, while for large v the bias decreases as the specified variance increases. In terms of mean squared error, in situations where simex performs worse than the naive approach, underspecifying the variance works better. On the other hand, when simex outperforms the naive estimators, overspecifying the error variance is preferred over underspecification.

APPLICATION: PROSTATE CANCER STUDY
In this section we illustrate the practical use of the proposed simex procedure on a medical dataset concerning patients with prostate cancer. According to the American Cancer Society, prostate cancer is the second most common cancer among American men (after skin cancer) and it is estimated that about 1 man in 9 is diagnosed with prostate cancer during his lifetime. Even though most men diagnosed with prostate cancer do not die from it, it can sometimes be a serious disease. The 5-year survival rate based on the stage of the cancer at diagnosis is almost 100% for localized or regional stage and drops to 31% for distant stage. Among other factors, the prostate-specific antigen (PSA) blood level is a good indicator of the presence of the cancer and is used as a tool to both diagnose and monitor the development of the disease. In most cases, elevated PSA levels indicate a poor prostate cancer prognosis. Even though most studies do not take it into consideration, PSA measurements are not error-free because of the inaccuracy of the measuring technique and the natural fluctuations of PSA levels. Here we analyse the effect of PSA on cure probability and survival while accounting for measurement error.
Table 7: Bias, variance and MSE of γ and β for the simex method based on the maximum likelihood (1) or the presmoothing (2) approach when the error variance is misspecified. All numbers were multiplied by 100.
We obtain the data from the Surveillance, Epidemiology, and End Results (SEER) database, which is a collection of cancer incidence data from population-based cancer registries in the US. We select the database 'Incidence - SEER 18 Regs Research Data' and extract the prostate cancer data for the county of San Bernardino in California during the period 2004-2014. We restrict attention to white patients, aged 35-65 years, with stage at diagnosis localized, regional or distant, and follow-up time greater than zero. Since a PSA level smaller than 4 ng/ml of blood is considered normal and a PSA value between 4 and 10 ng/ml is considered borderline, we focus only on patients with PSA level greater than 10 ng/ml. The event time is death because of prostate cancer. This cohort consists of 726 observations, out of which 654 do not experience cancer-related death (i.e. around 90% are censored). The follow-up time ranges from 2 to 155 months. For most of the patients the cancer was diagnosed at an early stage (localized), while for 228 of them the stage at diagnosis is 'regional' and only for 51 it is 'distant'. The PSA level varies from 10.1 to 94 ng/ml, with median value 15.4 ng/ml, mean value 21.9 ng/ml and standard deviation 16 ng/ml. We use a logistic/Cox mixture cure model to analyse this dataset; the covariates of interest are the PSA level (a continuous variable centered at the mean and measured with error) and the stage at diagnosis. The latter is encoded by two dummy Bernoulli variables S1 and S2, indicating distant and regional stage respectively. The use of a cure model is justified by the presence of a long plateau, containing around 18% of the observations, visible in the Kaplan-Meier curve (Kaplan & Meier (1958)) in Figure 1. Moreover, the Kaplan-Meier curves by stage at diagnosis in Figure 1 confirm that being in the distant stage significantly affects the probability of being cured. We first estimate the model ignoring the measurement
error ('naive') and then apply the simex procedure with a quadratic extrapolation function for two levels of measurement error, namely with standard deviation v = 4.8 and v = 8, corresponding to a ratio between the standard deviation of the error and the standard deviation of the covariate equal to 0.3 and 0.5 (we considered slightly smaller errors than in the simulation setting in order to be closer to real-life scenarios). In all three cases we use both the maximum likelihood estimation method and the presmoothing-based method. The standard deviations of the estimates are computed through 1000 bootstrap samples. We consider such a large number of bootstrap samples because we noted that the estimated standard deviation for γ3 (distant stage) is not very stable due to the small sample size of that category. The results are reported in Table 8. First of all, we observe that, independently of the estimation method, the PSA level and being in the distant stage are significant for the cure probability, while only the latter is significant for the survival of uncured patients (at the 5% level). The positive sign of the coefficients confirms that a high PSA level and distant stage are related to low cure probability and poor survival. Note that the estimated coefficient for the PSA value seems very small, but it corresponds to a coefficient of around 0.5 for the standardized variable. Given that the sample size is also large, we expect that, if the measurement error is relatively large, the use of the simex procedure would give more accurate results. Moreover, since there is some correlation between the PSA level and the stage of the cancer, the measurement error might induce bias also in the other coefficients. For the maximum likelihood estimator, the estimated effect of the PSA level on the cure probability is slightly stronger when taking into account the measurement error, while the effect of the distant stage is slightly weakened. The opposite happens with the estimation based on presmoothing.
To understand what these differences in the estimates mean in practical terms, we compute the cure probability for patients with distant or localized stage and three different PSA levels: 10 ng/ml, 22 ng/ml (the mean value) and 34 ng/ml (see Table 9). Contrary to our expectation, we see that, in this example, there is not much difference between the naive and the simex approach. We observed in the simulation study that, when the bias induced by the measurement error is large, it is significantly reduced by the simex procedure, and otherwise simex has little effect (see for example the estimation of γ2 in Model 1 and in Model 2, Scenario 1 with v = 0.2, or the estimation of β2 in Model 5, Scenario 2 with v2 = 0.2). Hence, we can conclude that in this example the bias induced by the mismeasured PSA value is small. This is probably due to the fact that the effect of the PSA value on survival is weak (the absolute value of its coefficient is small compared to the intercept and the coefficient of S1). The very high cure and censoring rates might also play a role. On the other hand, correlation between PSA and the stage of the cancer would induce bias even in the coefficients corresponding to S1 and S2. From the simulation study (see Model 5) we expect this bias to be of the same order as for the mismeasured covariate. Thus, since here the bias for the coefficient of the PSA value is small, even for the coefficients of S1 and S2 we do not observe much difference between the naive and the simex method. Finally, we find that the estimated cure probabilities are larger when using the estimators based on presmoothing. Based again on the simulation study (cases with small bias), it is likely that presmoothing behaves better than the maximum likelihood approach.
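The cure probabilities in Table 9 follow directly from the fitted logistic incidence: with φ(γ, x) the probability of being uncured, the cure probability is 1 − φ(γ, x). A minimal sketch of this computation is given below; the coefficient values in gamma_hat are hypothetical, chosen only to mirror the signs reported above, and are NOT the fitted values of Table 8.

```python
import numpy as np

def cure_probability(gamma, psa, distant, regional, psa_mean=21.9):
    """Cure probability 1 - phi(gamma, x) from the logistic incidence;
    PSA is centered at the sample mean (21.9 ng/ml), as in the application."""
    lin = (gamma[0] + gamma[1] * (psa - psa_mean)
           + gamma[2] * distant + gamma[3] * regional)
    return 1.0 - 1.0 / (1.0 + np.exp(-lin))    # phi = P(uncured)

# Hypothetical coefficients for illustration only: positive signs on PSA and
# distant stage reproduce the qualitative finding that both lower the cure rate.
gamma_hat = np.array([-2.0, 0.03, 2.5, 0.8])
p_localized = cure_probability(gamma_hat, 22.0, 0, 0)
p_distant = cure_probability(gamma_hat, 22.0, 1, 0)
```

With positive coefficients, the computed cure probability decreases in PSA and is lower for distant than for localized stage, matching the pattern of Table 9.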
A simulation-extrapolation approach for the mixture cure model with mismeasured covariates Supplementary Material Eni Musta and Ingrid Van Keilegom
Consistency of βsimex can be proven in the same way. For the function Λsimex we suppose that, for every t ∈ [0, τ], Λλ(t) can be specified by a function g_{Λ,t}(a_t, λ) depending on a parameter vector a_t, and that Λ0(t) = g_{Λ,t}(a_t, −1). Hence, arguing as above, for any fixed t ∈ [0, τ0] we can show pointwise consistency. Uniform consistency on the compact [0, τ] then follows from the fact that Λ0 is continuous and ΛSIMEX is non-decreasing.

B. ADDITIONAL SIMULATION RESULTS
In this section we report the simulation results for sample size n = 400 in Model 1, and results for Models 2-5 (n = 200) that were omitted from the main paper.
Table S3: Bias, variance and MSE of γ and β for the naive and simex method based on the maximum likelihood (1) or the presmoothing (2) approach in Models 4 and 5 (n = 200). The first column gives the model, scenario and the standard deviation of the measurement error. All numbers were multiplied by 100.

Fig. 1: Left panel: Kaplan-Meier survival curve for the prostate cancer data. Right panel: Kaplan-Meier survival curves based on cancer stage at diagnosis, localized (solid), regional (dotted) and distant (dashed).

Table 1: Parameter values and characteristics of each scenario for Model 1. The columns give the setting, γ2, scenario, cure rate, γ1, censoring rate λC, censoring level and plateau.

Table 2: Bias, variance and MSE of γ and β for the naive and simex method based on the maximum likelihood (1) or the presmoothing (2) approach for Model 1 (n = 200). The first column gives the setting/scenario/censoring level. All numbers were multiplied by 100.

Table 5: Bias, variance and MSE of γ and β for the simex method based on the maximum likelihood (1) or the presmoothing (2) approach with three different extrapolation functions. All numbers were multiplied by 100.

Table 8: Coefficient estimates, estimated standard deviations and p-values for the prostate cancer data using the naive and the simex method based on the maximum likelihood (1) and the presmoothing (2) approach.

Table 9: Estimated cure probability for a given PSA level and stage. The naive and simex estimators are computed using the maximum likelihood (1) or the presmoothing (2) approach.

Table S1: Bias, variance and MSE of γ and β for the naive and simex method using the maximum likelihood (1) and the presmoothing (2) approach for Model 1 (n = 400). The first column gives the setting/scenario/censoring level. All numbers were multiplied by 100.

Table S2: Bias, variance and MSE of γ and β for the naive and simex method based on the maximum likelihood (1) or the presmoothing (2) approach in Models 2 and 3 (n = 200). The first column gives the model, scenario and the standard deviation of the measurement error. All numbers were multiplied by 100.