Electronic Journal of Statistics

Smoothing ℓ1-penalized estimators for high-dimensional time-course data

Lukas Meier and Peter Bühlmann


Abstract

When a series of (related) linear models has to be estimated, it is often appropriate to combine the different data sets to construct more efficient estimators. We use ℓ1-penalized estimators such as the Lasso or the Adaptive Lasso, which can simultaneously perform parameter estimation and model selection. We show that for a time-course of high-dimensional linear models the convergence rates of the Lasso and of the Adaptive Lasso can be improved by combining the different time-points in a suitable way. Moreover, the Adaptive Lasso still enjoys oracle properties and consistent variable selection. The finite sample properties of the proposed methods are illustrated on simulated data and on a real problem of motif finding in DNA sequences.
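
The following is a minimal sketch of the general idea described in the abstract, not the authors' estimator: fit an ℓ1-penalized regression separately at each time-point, then borrow strength across time by kernel-smoothing the coefficient paths. The data dimensions, penalty level, and bandwidth are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative setup (assumed, not from the paper):
# T time-points, n samples per time-point, p predictors.
rng = np.random.default_rng(0)
T, n, p = 10, 50, 200
time_grid = np.linspace(0.0, 1.0, T)

# Simulated design matrices X[t] and responses y[t] for each time-point t,
# with a few active coefficients that are constant over time.
X = rng.standard_normal((T, n, p))
beta_true = np.zeros(p)
beta_true[:5] = 2.0
y = np.array([X[t] @ beta_true + rng.standard_normal(n) for t in range(T)])

# Step 1: per-time-point Lasso fits (penalty level chosen arbitrarily here).
beta_hat = np.array([
    Lasso(alpha=0.1, max_iter=10_000).fit(X[t], y[t]).coef_
    for t in range(T)
])

# Step 2: smooth each coefficient path over time with a Gaussian kernel,
# i.e. a kernel-weighted average of the per-time-point estimates.
def kernel_smooth(paths, grid, bandwidth=0.2):
    """Kernel-weighted average of the rows of `paths` at each grid point."""
    smoothed = np.empty_like(paths)
    for i, t0 in enumerate(grid):
        w = np.exp(-0.5 * ((grid - t0) / bandwidth) ** 2)
        w /= w.sum()
        smoothed[i] = w @ paths
    return smoothed

beta_smooth = kernel_smooth(beta_hat, time_grid)
print("nonzeros per time-point (raw fits):", (np.abs(beta_hat) > 1e-8).sum(axis=1))
```

The sketch only illustrates the combination of per-time-point ℓ1-estimates via weighted averaging; the paper's theoretical results concern specific weighting schemes and the resulting convergence rates and oracle properties.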

Article information

Source
Electron. J. Statist., Volume 1 (2007), 597-615.

Dates
First available in Project Euclid: 10 December 2007

Permanent link to this document
https://projecteuclid.org/euclid.ejs/1197320663

Digital Object Identifier
doi:10.1214/07-EJS103

Mathematical Reviews number (MathSciNet)
MR2369027

Zentralblatt MATH identifier
1140.62054

Subjects
Primary: 62J07: Ridge regression; shrinkage estimators
Secondary: 62J99: None of the above, but in this section; 62H12: Estimation

Keywords
Lasso; Local least squares; Multivariate regression; Variable selection; Weighted likelihood

Citation

Meier, Lukas; Bühlmann, Peter. Smoothing ℓ1-penalized estimators for high-dimensional time-course data. Electron. J. Statist. 1 (2007), 597--615. doi:10.1214/07-EJS103. https://projecteuclid.org/euclid.ejs/1197320663
