Electronic Journal of Statistics

Normalized and standard Dantzig estimators: Two approaches

Jan Mielniczuk and Hubert Szymanowski

Full-text: Open access

Abstract

We reconsider the definition of the Dantzig estimator and show that, in contrast to the LASSO, standardizing the experimental matrix in general yields a different estimator than fitting on the original data. We study the properties of the first approach, which results in what is called here the normalized Dantzig estimator, and compare bounds on its estimation and prediction error with analogous results for the standard version. In general, the normalized version yields tighter estimation and prediction bounds than the standard one. In the correctly specified case, tighter bounds are obtained for the normalized Dantzig estimator than for the LASSO. Numerical examples indicate that for imbalanced data the normalized estimator also performs better than the standard version.
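The contrast described in the abstract can be illustrated numerically. The following is a minimal sketch (not the authors' code): it solves the Dantzig selector, min ||β||_1 subject to ||X^T(y − Xβ)||_∞ ≤ λ, as a linear program via `scipy.optimize.linprog`, once on the raw design matrix ("standard" version) and once on a column-standardized matrix with the coefficients mapped back ("normalized" version). The design matrix, the choice of λ, and all variable names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog


def dantzig(X, y, lam):
    """Solve min ||beta||_1  s.t.  ||X^T (y - X beta)||_inf <= lam
    as an LP with beta = u - v, u, v >= 0 (a standard reformulation)."""
    n, p = X.shape
    G = X.T @ X
    g = X.T @ y
    c = np.ones(2 * p)                      # objective: sum(u) + sum(v) = ||beta||_1
    A = np.block([[G, -G], [-G, G]])        # +/- X^T X beta <= lam +/- X^T y
    b = np.concatenate([g + lam, -g + lam])
    res = linprog(c, A_ub=A, b_ub=b)        # default bounds keep u, v >= 0
    assert res.success
    return res.x[:p] - res.x[p:]


rng = np.random.default_rng(0)
n, p = 50, 5
# Columns on deliberately imbalanced scales (illustrative data).
X = rng.standard_normal((n, p)) * np.array([1.0, 10.0, 0.1, 5.0, 1.0])
y = X @ np.array([1.0, 0.0, 2.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(n)
lam = 1.0

# "Standard" version: Dantzig selector on the original data.
beta_std_data = dantzig(X, y, lam)

# "Normalized" version: standardize columns, solve, map coefficients back.
s = np.linalg.norm(X, axis=0)
beta_norm = dantzig(X / s, y, lam) / s
```

On data like this the two estimates generally do not coincide, which is the phenomenon the paper analyzes; the analogous computation for the LASSO would return the same estimator either way.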

Article information

Source
Electron. J. Statist., Volume 9, Number 1 (2015), 1335-1356.

Dates
Received: October 2013
First available in Project Euclid: 22 June 2015

Permanent link to this document
https://projecteuclid.org/euclid.ejs/1434988476

Digital Object Identifier
doi:10.1214/15-EJS1040

Mathematical Reviews number (MathSciNet)
MR3358327

Zentralblatt MATH identifier
1327.62408

Subjects
Primary: 62J05: Linear regression; 62J07: Ridge regression; shrinkage estimators
Secondary: 90C25: Convex programming

Keywords
Linear model; high dimensionality; Dantzig selector; LASSO; normalization; constrained optimization; Karush-Kuhn-Tucker conditions

Citation

Mielniczuk, Jan; Szymanowski, Hubert. Normalized and standard Dantzig estimators: Two approaches. Electron. J. Statist. 9 (2015), no. 1, 1335--1356. doi:10.1214/15-EJS1040. https://projecteuclid.org/euclid.ejs/1434988476

References

  • [1] Bickel, P., Ritov, Y., and Tsybakov, A., Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 37:1705–1732, 2009.
  • [2] Bühlmann, P. and van de Geer, S., Statistics for High-Dimensional Data. Springer, New York, 2011.
  • [3] Candès, E. and Plan, Y., Near-ideal model selection by $\ell_1$ minimization. Annals of Statistics, 37:2145–2177, 2009.
  • [4] Candès, E. and Tao, T., The Dantzig selector: statistical estimation when p is much larger than n. Annals of Statistics, 35:2313–2351, 2007.
  • [5] James, G., Radchenko, P., and Lv, J., DASSO: connections between the Dantzig selector and Lasso. Journal of the Royal Statistical Society Series B, 71:127–142, 2009.
  • [6] Koltchinskii, V., The Dantzig selector and sparsity oracle inequalities. Bernoulli, 15:799–828, 2009.
  • [7] Lounici, K., Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators. Electronic Journal of Statistics, 2:90–102, 2008.
  • [8] Meinshausen, N., Rocha, G., and Yu, B., A tale of three cousins: LASSO, L2boosting and Dantzig (discussion of Candès and Tao’s Dantzig selector paper). Annals of Statistics, 35:2373–2384, 2007.
  • [9] Pokarowski, P. and Mielniczuk, J., Combined $\ell_1$ and $\ell_0$ penalized least squares. Journal of Machine Learning Research, 16(May), 2015.
  • [10] Tibshirani, R., Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society Series B, 58:267–288, 1996.
  • [11] van de Geer, S. and Bühlmann, P., On the conditions used to prove oracle results for the Lasso. Electronic Journal of Statistics, 3:1360–1392, 2009.
  • [12] Zhou, S., Thresholding procedures for high dimensional variable selection and statistical estimation. In NIPS, pages 2304–2312, 2009.