Delete or merge regressors for linear model selection

We consider a problem of linear model selection in the presence of both continuous and categorical predictors. Feasible models consist of subsets of numerical variables and partitions of levels of factors. A new algorithm called delete or merge regressors (DMR) is presented, which is a stepwise backward procedure involving ranking the predictors according to squared t-statistics and choosing the final model minimizing BIC. In the article we prove consistency of DMR when the number of predictors tends to infinity with the sample size and describe a simulation study using the accompanying R package. The results indicate a significant advantage of our algorithm in time complexity and selection accuracy over the Lasso-based methods described in the literature. Moreover, a version of DMR for generalized linear models is proposed.


Introduction
Model selection is usually understood as selection of continuous explanatory variables. However, when a categorical predictor is considered, we can reduce the model's complexity either by excluding the whole factor or by merging its levels.
A traditional method of examining the relationship between a continuous response and categorical variables is analysis of variance (ANOVA). After detecting the overall importance of a factor, pairwise comparisons of group means are used to test the significance of differences between its levels. Typically, post-hoc analysis such as Tukey's honestly significant difference (HSD) test or multiple comparison adjustments (Bonferroni, Scheffé) are used. A drawback of pairwise comparisons is the non-transitivity of conclusions.
For example, let us consider the data set barley from the R library lattice, already discussed in Bondell and Reich (2009). Total yield of barley for 5 varieties at 6 sites in each of two years is modeled. Examining the dependence between the response and the varieties variable with the use of Tukey's HSD analysis (Figure 1) gives inconclusive answers. In this work we introduce a novel procedure called delete or merge regressors (DMR), which enables an efficient search among partitions of factor levels and for which the issue of non-transitivity does not occur. If we apply DMR to the barley data, we get the following partition of varieties: {{S, M, V, P}, {T}}. A detailed description of the data set and the characteristics of the chosen model can be found in Section 5.5.
The idea of partitioning a set of levels of a factor into non-overlapping groups has already been discussed in the literature. In Tukey (1949) a stepwise backward procedure based on the studentized range, which gives a grouping of means for samples from normal distributions, was proposed. Other methods of clustering of sample means were described in Scott and Knott (1974), where the set of means is partitioned from coarsest to finest, and in Caliński and Corsten (1985), whose algorithm adapts hierarchical clustering to the problem. In the more recent articles Porreca and Ferrari-Trecate (2010) and Ciampi et al. (2008) efficient algorithms for data set partitioning using the generalized likelihood ratio test can be found. However, all the mentioned methods assume an arbitrary choice of the significance level for the underlying tests. In our procedure we avoid this problem by selecting the final partition according to the minimal value of an information criterion.
An information criterion as an objective function for partition selection is used in the procedures described in Dayton (2003). Dayton's SAS procedure, called paired comparisons information criteria (PCIC), computes AIC and BIC values for all ordered subsets of independent means for both homogeneous and heterogeneous models. In contrast to DMR, these methods do not allow for simultaneous factor partitioning and selection of continuous variables.
A method introduced in Bondell and Reich (2009), called collapsing and shrinkage ANOVA (CAS-ANOVA), solves the same problem as DMR with the use of the least absolute shrinkage and selection operator (Lasso; Tibshirani (1996)), where the L1 penalty is imposed on differences between parameters corresponding to levels of each factor. This algorithm can be interpreted as a generalization of the fused Lasso (Tibshirani et al. (2004)) to data with categorical variables. In Gertheiss and Tutz (2010) one can find a modification of CAS-ANOVA which is more computationally efficient because it uses the least angle regression algorithm (LARS; Efron et al. (2004)). Another algorithm, based on regularized model selection with categorical predictors and effect modifiers (Oelker, Gertheiss and Tutz (2012)), is implemented in the R package gvcm.cat. It generalizes the Lasso approach to simultaneous factor partitioning and selection of continuous variables in generalized linear models. The algorithm is based on local quadratic approximation and iteratively reweighted least squares.
We propose a backward selection procedure called delete or merge regressors (DMR), which combines deleting continuous variables with merging levels of factors. The method employs a greedy search among linear models with a set of constraints of two types: either a parameter for a continuous variable is set to zero, or parameters corresponding to two levels of a factor are set equal to each other. In each step the choice of constraint is based on the order of squared t-statistics. As a result a nested family of linear models is obtained and the final decision is made by minimization of the Bayesian information criterion (BIC). The method adapts agglomerative clustering, where squared t-statistics define the dissimilarity measure. This procedure generalizes concepts introduced in Zheng and Loh (1995) and Ciampi et al. (2008).
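The greedy idea is easiest to see in the special case with continuous variables only, which is essentially the Zheng-Loh procedure that DMR generalizes. The following sketch is an illustrative Python reimplementation of that special case (the packaged implementation is in R; the function name and the simulated data here are ours):

```python
import numpy as np

def dmr_continuous(y, X):
    """Backward selection for continuous predictors only: rank columns by
    squared t-statistics in the full model, delete the weakest predictors
    first to obtain a nested family, and return the BIC-minimizing model."""
    n, p = X.shape

    def bic(cols):
        # n log(RSS/n) + |M| log(n); the empty model uses RSS = ||y||^2
        if cols:
            beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = float(np.sum((y - X[:, cols] @ beta) ** 2))
        else:
            rss = float(y @ y)
        return n * np.log(rss / n) + len(cols) * np.log(n)

    # squared t-statistics computed once, in the full model
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = float(np.sum((y - X @ beta) ** 2)) / (n - p)
    se2 = sigma2 * np.diag(np.linalg.inv(X.T @ X))
    t2 = beta ** 2 / se2

    kept = list(range(p))
    best_cols, best_bic = kept[:], bic(kept)
    for j in np.argsort(t2):          # weakest constraint accepted first
        kept.remove(j)
        b = bic(kept)
        if b < best_bic:
            best_bic, best_cols = b, kept[:]
    return sorted(best_cols)

# illustrative data: only the first two of five columns carry signal
rng = np.random.default_rng(0)
X_demo = rng.standard_normal((200, 5))
y_demo = 3 * X_demo[:, 0] - 2 * X_demo[:, 1] + 0.1 * rng.standard_normal(200)
selected = dmr_continuous(y_demo, X_demo)
```

Because the predictors are ordered once and the family is nested, only p + 1 models are ever scored, instead of the 2^p subsets an exhaustive search would visit.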
In the article we show that the DMR algorithm is a consistent model selection method under rather weak assumptions when p tends to infinity with n. Furthermore, thanks to the use of a recursive formula for RSS in a nested family of linear models, the time complexity of the DMR algorithm is just O(np^2). This makes the algorithm much faster than the competing Lasso-based methods. In the article we describe a simulation study and discuss the accompanying R package. The simulations show that DMR, in comparison to the adaptive Lasso methods described in the literature, gives better results in terms of accuracy without the troublesome choice of the λ grid.
The remainder of the article proceeds as follows. The class of feasible models considered when performing model selection is defined in Section 2. The DMR procedure is introduced in Section 3, while its asymptotic properties are discussed in Section 4. Simulations and real data examples are given in Section 5 to illustrate the method. All proofs are given in the Appendix.

Feasible models
In this section we first introduce some definitions regarding the form of the data and the models considered. In particular, we define the set of feasible models, which are linear spaces of parameters with linear constraints, and we show how, by a change of variables, the constrained problem can be replaced by an unconstrained one. Later we indicate that properties of OLS (ordinary least squares) estimators transfer to feasible models.

Definitions
Let us consider data generated by a full rank linear model with n observations and p < n parameters:

y = Xβ* + ε,

where:
1. ε is a vector of iid zero-mean Gaussian errors, ε ∼ N(0, σ²I).
2. X = [1, X0, X1, . . ., Xl] is a model matrix organized as follows: X0 is a matrix corresponding to continuous regressors and X1, . . ., Xl are zero-one matrices encoding the corresponding factors, with the first level of each factor set as the reference.
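For concreteness, here is a minimal sketch of how such a model matrix can be assembled with reference-level dummy coding (the first level of each factor is dropped); the helper name and toy data are ours, and the sketch is in Python rather than the paper's R:

```python
import numpy as np

def build_model_matrix(X0, factors):
    """X = [1, X0, X1, ..., Xl]: an intercept, the continuous regressors,
    then one zero-one block per factor with the first level as reference."""
    n = X0.shape[0]
    blocks = [np.ones((n, 1)), X0]
    for f in factors:                    # f: length-n sequence of level labels
        levels = sorted(set(f))          # levels[0] is the reference level
        for lev in levels[1:]:
            blocks.append((np.asarray(f) == lev).astype(float).reshape(-1, 1))
    return np.hstack(blocks)

# one continuous regressor and one factor with levels a, b, c
X_demo = build_model_matrix(np.arange(4.0).reshape(4, 1), [["a", "b", "a", "c"]])
```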

Unconstrained parametrization of feasible models
A feasible model can be defined by a linear space of parameters LM, where A0M is a (p − q) × p matrix encoding the p − q elementary constraints induced by the model. Such a constraint matrix can be expressed in many ways. In particular, every linear space can be spanned by different vectors, and the number of such vectors can be greater than the dimension of the space when they are linearly dependent. In order to unify the form of a constraint matrix, we introduce the notion of regular form, which is described in Appendix A. We assume that A0M is in regular form. Let A1M be a q × p complement of A0M to an invertible matrix AM, and let A^{1M} denote the corresponding p × q block of the inverse of AM. In order to replace the constrained by an unconstrained parametrization, a change of variables in model M is performed. Let βM ∈ LM and ξM = A1M βM. From equation (7) we obtain that LM is a linear space spanned by the columns of A^{1M}. The dimension of the space LM will be called the size of model M and denoted by |M|; note that |M| = q. Example 1 continued. Writing out the matrices AM, Z1M and ξM, one can see that the change from the constrained to the unconstrained problem was done by adding and deleting columns of the model matrix.
The OLS estimator of β* constrained to LM, denoted βM, equals A^{1M} ξM. Note that A0M βM = A0M A^{1M} ξM = 0, and thus indeed βM ∈ LM. We define the inclusion relation between two models M1 and M2 by inclusion of their linear spaces, and the intersection of two models M1 and M2 by the intersection of their linear spaces. Observe that HM Xβ* = Xβ* for M ⊇ T. We define the residual sum of squares for model M as RSSM = ||y − XβM||². The decomposition of RSS in linear models stated below is trivial, hence we omit the proof. In particular, the predictions for the constrained problem can be obtained by projecting the observations on the space spanned by the columns of the model matrix of the equivalent unconstrained problem. Hence, decompositions and asymptotic properties of residual sums of squares for feasible models are inherited from unconstrained linear models.
The Bayesian information criterion for model M is defined as:

BIC_M = n log(RSS_M / n) + log(n) |M|.

The goal of our method is to find the best feasible model according to BIC, taking into account that the number of feasible models grows exponentially with p. Since the number of possible partitions of the k-th factor is the Bell number B(p_k), the number of all feasible models is 2^{p_0} ∏_{k=1}^{l} B(p_k). In order to significantly reduce the amount of computation, we propose a greedy backward search.
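The count of feasible models is easy to reproduce. The sketch below (hypothetical helper names, Python) computes Bell numbers with the standard recurrence B(m+1) = Σ_k C(m, k) B(k):

```python
from math import comb

def bell(n):
    """Bell number B(n): the number of partitions of an n-element set."""
    B = [1]                              # B(0) = 1
    for m in range(n):
        B.append(sum(comb(m, k) * B[k] for k in range(m + 1)))
    return B[n]

def n_feasible_models(p0, factor_levels):
    """2^p0 subsets of continuous variables times the product of the
    Bell numbers of the factor level counts."""
    total = 2 ** p0
    for pk in factor_levels:
        total *= bell(pk)
    return total
```

For instance, two continuous variables and two factors with 3 and 4 levels already give 2² · B(3) · B(4) = 4 · 5 · 15 = 300 feasible models, which motivates the greedy search.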

DMR algorithm
In this section we introduce the DMR algorithm. Because the full notation is cumbersome, in order to make the description more intuitive, we present here the general idea of the algorithm; the details of step 3 are given in Appendix B.
Assuming that X is of full rank, the QR decomposition of the model matrix is X = QR, where Q is an n × p orthogonal matrix and R is a p × p upper triangular matrix. Denote the minimum variance unbiased estimators of β and σ² for the full model F as in equation (12).

Algorithm 1 DMR (Delete or Merge Regressors)
Input: y, X
1. Computation of t-statistics
Compute the QR decomposition of the full model matrix, obtaining the matrix R^{-1}, the vector z and the variance estimator σ² as in equation (12). Calculate squared t-statistics: first for all elementary constraints defined in (2), then for all elementary constraints defined in (3).

2. Agglomerative clustering for factors (using complete linkage clustering)
For each factor perform agglomerative clustering using Dk = [d_ijk]_{ij} as the dissimilarity matrix, for k ∈ N \ {0}. We denote the vectors of cutting heights obtained from the clusterings by h1, h2, . . ., hl.

3. Sorting constraints (hypotheses) according to the squared t-statistics

Combine the vectors of cutting heights into h = [0, h0^T, h1^T, . . ., hl^T]^T, where h0 is the vector of squared t-statistics for constraints concerning continuous variables and 0 corresponds to the full model. Sort the elements of h in increasing order and construct a corresponding (p − 1) × p matrix A0 of consecutive constraints.

4. Computation of RSS using a recursive formula in a nested family of models

Perform the QR decomposition of the matrix R^{-T} A0^T, obtaining the orthogonal matrix W = [w1, . . ., w_{p−1}], and compute RSS_{Mm} recursively, where Mm denotes the model with constraints defined by the first m rows of A0. The formula is derived in Appendix C, see equation (22).

5. Choosing the best model according to BIC
Compute BIC_{Mm} for m = 0, . . ., p − 1. The selected model T is the model minimizing BIC among the models on the nested path.

Output: T
The time complexities of the successive steps of the DMR algorithm are O(np^2) for the QR decomposition in step 1, O(p^2) for the hierarchical clustering in step 2 and O(p^3) for the QR decomposition used in step 4. The dominating operation in the described procedure is the QR decomposition of the full model matrix. Hence, the overall time complexity of the DMR algorithm is O(np^2). Example 1 continued. For the illustrative example we obtain the values [. . ., 28.33, 26.65, 25.36, 34.68, 39.59]^T.
Observe that the selected model coincides with the true model T. The dendrogram and cutting heights for the illustrative example, obtained from the clustering in step 2, are shown in Figure 2. The horizontal dashed line corresponds to the optimal partition chosen by BIC.
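Steps 1 and 4 of Algorithm 1 can be checked numerically. The sketch below (Python with NumPy, not the package's R code; the data and the particular delete-variable constraints are invented for illustration) computes the squared t-statistics from a single QR decomposition and then verifies the recursive RSS formula against direct refits:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 6
X = rng.standard_normal((n, p))
y = X[:, 0] - X[:, 1] + 0.5 * rng.standard_normal(n)

# Step 1: one QR decomposition gives everything needed for the t-statistics:
# beta = R^{-1} z with z = Q^T y, and Var(beta) = sigma^2 R^{-1} R^{-T}
Q, R = np.linalg.qr(X)                     # reduced QR: Q is n x p, R is p x p
z = Q.T @ y
Rinv = np.linalg.inv(R)
beta = Rinv @ z
rss_full = float(y @ y - z @ z)            # equals ||y - Q z||^2
sigma2 = rss_full / (n - p)
t2 = beta ** 2 / (sigma2 * np.sum(Rinv * Rinv, axis=1))

# Step 4: for constraints A_0 beta = 0 (here: delete variables 5, 4, 3 in
# turn), the QR of R^{-T} A_0^T gives orthonormal columns w_1, ..., w_m and
# RSS_{M_m} = RSS_F + sum_{j<=m} (w_j^T z)^2
A0 = np.eye(p)[[5, 4, 3], :]
W, _ = np.linalg.qr(np.linalg.solve(R.T, A0.T))
rss_path = rss_full + np.cumsum((W.T @ z) ** 2)

# sanity check against refitting each constrained model from scratch
for m in range(1, 4):
    keep = [j for j in range(p) if j not in [5, 4, 3][:m]]
    b, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
    r = y - X[:, keep] @ b
    assert np.isclose(float(r @ r), float(rss_path[m - 1]))
```

Each constrained RSS is obtained from one inner product instead of a refit, which is what keeps the whole nested path within the O(np^2) budget of the initial decomposition.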

Asymptotic properties of DMR algorithm
In Algorithm 1 and in all the simulations and examples we assumed complete linkage in the hierarchical clustering and BIC for selection in the nested family of models. The proof of consistency is more general: the linkage criterion has to be a convex combination of the minimum and maximum of the pairwise distances between clusters (see equation 24 in Appendix D), and a generalized information criterion (GIC) is used for the final model selection, where r_n is the penalty for model size. Note that the well known criteria AIC and BIC are special cases of GIC with r_n = 2 and r_n = log(n), respectively.
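Assuming the common form GIC_M = n log(RSS_M/n) + r_n |M| (consistent with the AIC/BIC special cases above; the helper names are ours), selection along the nested path reduces to a few lines:

```python
import math

def gic(rss, n, size, r_n):
    """Generalized information criterion: n log(RSS/n) + r_n * |M|.
    r_n = 2 recovers AIC and r_n = log(n) recovers BIC."""
    return n * math.log(rss / n) + r_n * size

def select_on_path(path, n, r_n):
    """path: (RSS, model size) pairs along the nested family;
    returns the index of the GIC-minimizing model."""
    scores = [gic(rss, n, size, r_n) for rss, size in path]
    return scores.index(min(scores))
```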
We allow the number of predictors p_n to grow monotonically with the number of observations n, under the condition p_n ≺ n.
We distinguish the following subsets of the set of all feasible models M:
1. The uniquely defined true model T, which is fixed and does not depend on the sample size. We assume that this model consists of a finite number of continuous variables and a finite number of factors with finite numbers of levels.
2. A set MV of models with one false constraint imposed.
3. A set MT of models with one true constraint imposed.
We denote ∆ = min over M ∈ MV of ∆M, where ∆M was defined in equation (11). Let us notice that the relevant variance follows from equation (8) and, additionally, is finite for finite p.
Theorem 1. Assume that X is of full rank and p_n ≺ r_n ≺ min(n, ∆). Let T̂ be the model selected by DMR, where the linkage criterion for hierarchical clustering is a convex combination of the minimum and maximum of the pairwise distances between clusters. Then lim n→∞ P(T̂ = T) = 1.
The proof can be found in Appendix D.

Numerical experiments
All experiments were performed using functions implemented in the R package DMR, which is available in the CRAN repository. The main function in the package, also called DMR, implements the DMR algorithm with an optional method of hierarchical clustering (the default is complete linkage) and a value of r_n in GIC (the default is log(n)). The package also contains other functions that are modifications of the DMR algorithm, such as stepDMR, which recalculates the t-statistics after accepting every new elementary constraint, and DMR4glm, which can be used for model selection in generalized linear models.
We compared two groups of algorithms. The first one contains three stepwise procedures: stepBIC, ffs BIC and DMR. The second group consists of two Lasso-based methods: CAS-ANOVA and gvcm. The procedure stepBIC is implemented in the function stepAIC in the R package MASS; it does not perform factor partitioning but either deletes or keeps each categorical predictor. The factor forward stepwise procedure (ffs BIC), implemented in the R package gvcm.cat, is similar to DMR but differs in the search direction (DMR is backward, ffs BIC is forward) and in the criterion for selecting the best step (DMR uses t-statistics calculated only once together with hierarchical clustering, while ffs BIC recalculates the criterion in every step). For DMR the complete linkage method of clustering and BIC were used. The algorithm gvcm is implemented in the R package gvcm.cat, where by default there are no adaptive weights and cross-validation is used for choosing the λ parameter. We used adaptive weights and the BIC criterion for choosing the tuning parameter, since this gave better results. An implementation of CAS-ANOVA can be found at http://www4.stat.ncsu.edu/~bondell/Software/CasANOVA/CasANOVA.R.
Here the default BIC was used for choosing the λ parameter, making all the methods dependent on the same criterion for choosing the tuning parameters. Adaptive weights are also the default in CAS-ANOVA. When using the two Lasso-based algorithms we found the selection of the λ grid difficult. In all the experiments we tried different grids, the default ones and our own, both on linear and logarithmic scales, and we present only the best results.
We describe three simulation experiments. In Section 5.2 the results of an experiment constructed in the same way as in Bondell and Reich (2009) are presented. The model consists of three factors and no continuous variables. As a continuation, simulations based on data containing one factor and eight correlated continuous predictors were carried out; the results can be found in Section 5.3. In Section 5.4 we summarize the results of an experiment regarding generalized linear models. In this experiment only four algorithms were compared, since CAS-ANOVA applies only to the normal distribution.
In Section 5.1 we introduce measures of performance which are generalizations of the popular true positive rate and false discovery rate to categorical predictors. We call them TPR* and FDR*. In comparison to the generalizations introduced in Gertheiss and Tutz (2010) and Bondell and Reich (2009), which we call TPR and FDR, our measures do not diminish the influence of continuous predictors and factors with a small number of levels. Hence, for the evaluation of the model selection methods we used the following criteria: true model (TM) is the percentage of times the procedure chose the entirely correct model; correct factors (CF) is the percentage of times the non-significant factors were eliminated and the true factor was kept; 1−TPR, FDR, 1−TPR* and FDR* are the averaged errors made by the selectors, described in Section 5.1; MSEP is the mean squared error of prediction for new data and MD is the mean dimension of the selected model, both reported with standard deviations.
The last section, Section 5.5, refers to two real data examples, where barley yield and prices of apartments in Munich were modeled.

Measures of performance
When performing simulations, the results are usually compared to the underlying truth. Traditionally, for model selection with only continuous predictors, measures such as the true positive rate (TPR) and the false discovery rate (FDR) are used. In the literature (Gertheiss and Tutz (2010), Bondell and Reich (2009)) their generalization to both continuous and categorical predictors can be found.
Let us consider sets of elementary constraints corresponding to the true and selected models, determined by sets of indexes B and B̂. The true positive rate is the proportion of true differences which were correctly identified among all true differences, that is, the ratio of the number of true elementary constraints found by the selector to the number of all true elementary constraints, TPR = |B ∩ B̂|/|B|. The false discovery rate is the proportion of false differences classified as true among all differences classified as true, that is, FDR = 1 − |B ∩ B̂|/|B̂|. However, measures defined in this way diminish the influence of continuous variables and factors with a small number of levels. As an example, consider a model with 5 continuous predictors and one factor with 5 levels. The number of parameters for the continuous predictors is 5 and the number of possible elementary constraints equals 5. The number of parameters for the categorical variable is also 5, whereas the number of possible elementary constraints is (5 choose 2) = 10. We introduce a different generalization of the traditional performance measures, using dimensions of the linear spaces which define the true and selected models. We consider two models: the true model T and the selected model T̂.
We define the true positive rate coefficient as TPR* = |T ∩ T̂|/|T| and the false discovery rate coefficient as FDR* = 1 − |T ∩ T̂|/|T̂|, where T ∩ T̂ is defined according to equation (10). This generalization is fairer, since the influence of every parameter on the coefficients is equal. In the article attention is focused on the values 1 − TPR* and FDR*, which correspond to the errors made by the selector.
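Because dim(T ∩ T̂) = dim T + dim T̂ − dim(T + T̂), the starred measures can be computed directly from basis matrices of the two parameter spaces. An illustrative Python sketch (the helper name is ours):

```python
import numpy as np

def tpr_fdr_star(B_true, B_sel):
    """TPR* = dim(T ∩ T̂)/dim(T) and FDR* = 1 − dim(T ∩ T̂)/dim(T̂),
    where the columns of B_true and B_sel span the two model spaces."""
    d_true = np.linalg.matrix_rank(B_true)
    d_sel = np.linalg.matrix_rank(B_sel)
    d_sum = np.linalg.matrix_rank(np.hstack([B_true, B_sel]))
    d_int = d_true + d_sel - d_sum       # dimension of the intersection
    return d_int / d_true, 1 - d_int / d_sel

# spaces span{e1, e2, e3} and span{e2, e3, e4} in R^5 share a 2-dim part
tpr_star, fdr_star = tpr_fdr_star(np.eye(5)[:, :3], np.eye(5)[:, 1:4])
```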
Experiment 1

The response y was generated using the true model. A balanced design was used, with c observations for each combination of factor levels, which gives n = 96 · c for c = 1, 2, 4.
The data was generated 1000 times. The best results for λCAS-ANOVA = (0.1, 0.2, . . ., 3)^T and λgvcm = (0.01, 0.02, . . ., 3)^T, together with the outcomes of the other methods, are summarized in Table 1. The results of Experiment 1 indicate that the DMR and ffs BIC algorithms performed almost twice as well as CAS-ANOVA and gvcm in terms of choosing the true model. Our procedure and ffs BIC chose smaller models, with dimension closer to the dimension of the underlying true model, whose number of parameters is three. There were no significant differences between the mean squared errors of prediction for the considered algorithms. The main conclusion, that the DMR and ffs BIC procedures choose models which are smaller and closer to the proper one, is supported by the obtained values of 1−TPR* and FDR*. An exemplary run of the DMR algorithm is shown in Figure 3. The horizontal dotted line indicates the cutting height for the best model chosen by BIC.
In Table 2 the computation times of the algorithms are summarized. All values are divided by the computation time of the lm.fit function, which fits a linear model using the QR decomposition of the model matrix.
The results for CAS-ANOVA and gvcm are given for only one value of λ. By default, the searched λ grids are of length 50 and 5001, respectively. One can see that DMR is significantly faster than ffs BIC, CAS-ANOVA and gvcm.
Experiment 2

The best results for λCAS-ANOVA = (0.1, 0.2, . . ., 3)^T and λgvcm = (0.01, 0.02, . . ., 5)^T, together with the outcomes of the other methods, are summarized in Table 3. Despite the fact that the additional continuous variables were correlated, the obtained results show a considerable advantage of the DMR algorithm over the other methods.

Experiment 3
Simultaneous deletion of continuous variables and merging of factor levels can also be considered in the framework of generalized linear models. The problem has already been discussed in Oelker, Gertheiss and Tutz (2012), where L1 regularization was used. After replacing squared t-statistics with squared Wald statistics, the DMR algorithm can be easily adapted to generalized linear models. Simulation results for the DMR algorithm for logistic regression are presented below. Let us consider a logistic regression model whose linear part consists of three factors defined as in Experiment 1.
The results of the experiment are summarized in Table 4. The best outcomes for gvcm, presented in the table, were obtained for the grid λgvcm = (0.01, 0.02, . . ., 5)^T. Again, DMR and ffs BIC show a considerable advantage over the other model selection methods.
In Table 5 the computation times of the algorithms are summarized. All values are divided by the computation time of the glm.fit function. The results for gvcm are given for only one value of λ, while by default the searched λ grid is of length 5001. DMR is again significantly faster than ffs BIC and gvcm.

Real data examples
Example 1: Barley. The data set barley from the R library lattice has already been discussed in the literature, for example in Bondell and Reich (2009). The response is the barley yield for each of 5 varieties (Svansota, Manchuria, Velvet, Peatland and Trebi) at 6 experimental farms in Minnesota in each of the years 1931 and 1932, giving a total of 60 observations. The characteristics of the models chosen by the different algorithms are presented in Table 6. The results for the full model, which is the least squares estimator with all variables, are given as a benchmark. For the two Lasso-based algorithms we found the selection of the λ grid difficult. Therefore, the results for CAS-ANOVA are given for two different grids: the first one, λ1 = (25, 25.01, 25.02, . . ., 35)^T, chosen so that the selected model was the same as the one described in Bondell and Reich (2009), and the second, λ2 = (0.1, 0.2, 0.3, . . ., 35)^T, a wider superset of the first one. We used the λ2 grid also for gvcm.
The results show that the stepwise methods give smaller models with smaller BIC values than the Lasso-based methods. An additional advantage of DMR and ffs BIC is the lack of a troublesome tuning parameter.
Model selection was performed using five methods: DMR, ffs BIC, CAS-ANOVA, gvcm and stepBIC. Characteristics of the chosen models are shown in Table 7, with the results for the full model added for comparison. The lack of results for ffs BIC in part of Table 7 is due to the fact that the algorithm required too much memory to be allocated (the factor urban district has 25 levels).
We can conclude that the DMR procedure and ffs BIC chose much better models than the other compared methods in terms of BIC. Moreover, the DMR method can be applied to problems with a larger number of parameters.

Discussion
We propose the DMR method, which combines deleting continuous variables and merging levels of factors in linear models. DMR relies on ordering the elementary constraints using squared t-statistics and choosing the best model according to BIC in the nested family of models. A slightly modified version of the DMR algorithm can be applied to generalized linear models.
We proved that DMR is a consistent model selection method. The main advantage of our theorem over the analogous ones for the Lasso-based methods (CAS-ANOVA, gvcm) is that we allow the number of predictors to grow to infinity.
We showed in simulations that DMR and ffs BIC are more accurate than the Lasso-based methods; moreover, DMR is much faster and less memory demanding than ffs BIC. Our results are not exceptional in comparison to others in the literature. In Example 1 in Zou and Li (2008) a simulation setup similar to our Experiment 1, with n = 96, was considered. The adaptive Lasso method (denoted there as one-step LOG) was outperformed by exhaustive BIC, with 66 to 73 percent true model selection accuracy. We repeated the simulations and got similar results, with 76 percent for the Zheng-Loh algorithm (described in Zheng and Loh (1995)), which is DMR with continuous variables only. Thus, in the Zou and Li experiment the advantage of the Zheng-Loh algorithm over the adaptive Lasso is not as large as in our work, but Zou and Li used a better local linear approximation (LLA) of the penalty function in their adaptive Lasso implementation. Recall that both CAS-ANOVA and gvcm employ the local quadratic approximation (LQA) of the penalty function.
The superiority of DMR over the Lasso-based methods in our experiments does not come only from the weakness of the LQA used in their adaptive Lasso implementations. Greedy subset selection methods similar to the Zheng-Loh algorithm have been proposed many times. Recently, in Pokarowski and Mielniczuk (2013), a combination of screening of predictors by the Lasso with the Zheng-Loh greedy selection for high-dimensional linear models was proposed. The authors showed both theoretically and experimentally that such a combination is competitive with the Multi-stage Convex Relaxation described in Zhang (2010), which is least squares with a capped l1 penalty implemented via LLA.

Appendix A: Regular form of constraint matrix
We say that A0M is in regular form if it can be complemented to AM so that the additional rows form a matrix BM consisting of 0, −1 and 1; then, using the Schur complement, the inverse of AM can be computed explicitly. A constraint matrix in regular form can always be obtained by a proper permutation of the model's parameters. Let us denote the clusters in each partition by CMik, where ik is the number of clusters, k ∈ N \ {0}, and the minimal element of each cluster by jMik = min{j ∈ CMik}. Let PM0 denote the set of continuous variables in the model. Sort the model's parameters in the following order, and sort the columns of the model matrix X in the same way as the vector β.
The constraint matrix in regular form for model M, where each row corresponds to one of the 7 elementary constraints, is obtained for the illustrative example, and inverting it yields the matrix A_M^{-1}. Notice that for a regular constraint matrix, Z_M is the full model matrix X with appropriate columns deleted or added to each other.
Appendix B: Detailed description of step 3 of the DMR algorithm

Since step 3 of the DMR algorithm requires complicated notation concerning hierarchical clustering, we present the details in this appendix for the interested reader. In particular, we show here how the vector of cutting heights h and the matrix of constraints A0 are built.
Let us define vectors a(1, j, k) and a(i, j, k), corresponding to the elementary constraints and serving as building blocks for A0. For each step s of the hierarchical clustering algorithm we use the following notation for the partitions of the set {1} ∪ Nk = {1, 2, . . ., pk}. We assume complete linkage clustering, with the cutting heights in steps s = 1, . . ., pk − 1 defined accordingly. Let ã_sk denote the elementary constraint corresponding to the cutting height h_sk. Step 3 of the algorithm can now be rewritten as follows. Combine the vectors of cutting heights h = [0, h0^T, h1^T, . . ., hl^T]^T, where h0 is the vector of cutting heights for constraints concerning continuous variables and 0 corresponds to the model without constraints, and let ã_m:p be the elementary constraint corresponding to the cutting height h_m:p. Then proceed as described in Algorithm 1.
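A bare-bones version of the complete-linkage merges and cutting heights used above (an illustrative Python sketch, not the packaged R implementation; in DMR the dissimilarities d_ijk are the squared t-statistics, and the consistency proof also allows any convex combination of single and complete linkage):

```python
import numpy as np

def complete_linkage_heights(D):
    """Agglomerative clustering on a dissimilarity matrix D with complete
    linkage; returns the merge (cutting) heights in order of merging."""
    clusters = [[i] for i in range(D.shape[0])]
    heights = []
    while len(clusters) > 1:
        best = (float("inf"), 0, 1)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # complete linkage: the largest pairwise dissimilarity
                d = max(D[i, j] for i in clusters[a] for j in clusters[b])
                if d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        heights.append(float(d))
        clusters[a] += clusters[b]       # merge the two closest clusters
        del clusters[b]
    return heights

# three levels: 0 and 1 merge first (height 1), then join 2 at max(5, 4) = 5
heights_demo = complete_linkage_heights(np.array([[0.0, 1.0, 5.0],
                                                  [1.0, 0.0, 4.0],
                                                  [5.0, 4.0, 0.0]]))
```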

D.2. Asymptotics for residual sums of squares
Lemmas concerning dependencies between residual sums of squares have a similar construction to those described in Chen and Chen (2008). Let us introduce some simplifying notation for two sequences of random variables Un and Vn. The residual sum of squares for model M can be decomposed into three parts.

Lemma 2. Assume p ≺ n and p ≺ r_n. Let us notice that HF − HT is a matrix of an orthogonal projection with rank p − |T|, and that p grows monotonically with n.

Lemma 3. Assuming that p ≺ ∆ (∆ is defined in equation (13)), we have the stated bound for all δ > 1 uniformly over MV.

Proof. Note that ∆M ≥ ∆, SM ∼ N(0, 4σ²∆M), WT ∼ σ²χ²_{|T|} and WM ∼ σ²χ²_{p−1}. Using the assumption, SM/∆M, WT/∆M and WM/∆M are oP(1) by Chebyshev's inequality. Since the dimension of the true model T is finite and independent of n, so is the number of models in MV, and the claim follows.

Figure 3: An exemplary run of the DMR algorithm for Experiment 1.

Table 1
Results of the simulation study, Experiment 1.

Table 2
Computation times divided by the computation time of lm.fit; results obtained using the system.time function.

Table 3
Results of the simulation study, Experiment 2.

Table 4
Results of the simulation study for logistic regression, Experiment 3.

Table 5
Computation times divided by the computation time of glm.fit; results obtained using the system.time function.

Table 6
Characteristics of the chosen models for the Barley data set.

The data set miete03 comes from http://www.statistik.lmu.de/service/datenarchiv. The data consists of 2053 households interviewed for the Munich rent standard 2003. The response is the monthly rent per square meter in Euros. 8 categorical and 3 continuous variables give 36 and 4 (including the intercept) parameters, respectively. The data is described in detail in Gertheiss and Tutz (2010).

Table 7
Characteristics of the chosen models for Miete data set.