Annals of Applied Statistics
Ann. Appl. Stat., Volume 9, Number 3 (2015), 1103-1140.
SLOPE—Adaptive variable selection via convex optimization
Małgorzata Bogdan, Ewout van den Berg, Chiara Sabatti, Weijie Su, and Emmanuel J. Candès
Abstract
We introduce a new estimator for the vector of coefficients $\beta$ in the linear model $y=X\beta+z$, where $X$ has dimensions $n\times p$ with $p$ possibly larger than $n$. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to
\[\min_{b\in\mathbb{R}^{p}}\frac{1}{2}\Vert y-Xb\Vert_{\ell_{2}}^{2}+\lambda_{1}\vert b\vert_{(1)}+\lambda_{2}\vert b\vert_{(2)}+\cdots+\lambda_{p}\vert b\vert_{(p)},\] where $\lambda_{1}\ge\lambda_{2}\ge\cdots\ge\lambda_{p}\ge0$ and $\vert b\vert_{(1)}\ge\vert b\vert_{(2)}\ge\cdots\ge\vert b\vert_{(p)}$ are the absolute values of the entries of $b$ sorted in decreasing order. This is a convex program, and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical $\ell_{1}$ procedures such as the Lasso. Here, the regularizer is a sorted $\ell_{1}$ norm, which penalizes the regression coefficients according to their rank: the higher the rank (that is, the stronger the signal), the larger the penalty. This is similar to the Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289–300] procedure (BH), which compares more significant $p$-values with more stringent thresholds. One notable choice of the sequence $\{\lambda_{i}\}$ is given by the BH critical values $\lambda_{\mathrm{BH}}(i)=z(1-i\cdot q/(2p))$, where $q\in(0,1)$ and $z(\alpha)$ is the $\alpha$th quantile of the standard normal distribution. SLOPE aims to provide finite-sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with $\lambda_{\mathrm{BH}}$ provably controls FDR at level $q$. Moreover, it also appears to have appreciable inferential properties under more general designs $X$, while retaining substantial power, as demonstrated in a series of experiments on both simulated and real data.
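To make the penalty concrete, here is a minimal numerical sketch in Python with NumPy and SciPy (not the authors' implementation; the function names bh_lambdas, sorted_l1_norm and slope_objective are illustrative). It builds the BH-inspired sequence $\lambda_{\mathrm{BH}}(i)=z(1-i\cdot q/(2p))$ and evaluates the sorted $\ell_{1}$ penalty by pairing the largest $\vert b_{i}\vert$ with the largest $\lambda_{i}$:

import numpy as np
from scipy.stats import norm

def bh_lambdas(p, q):
    # BH critical values: lambda_i = z(1 - i*q/(2p)) for i = 1..p,
    # where z(alpha) is the alpha-th standard normal quantile.
    # For q in (0, 1) the sequence is positive and decreasing in i.
    i = np.arange(1, p + 1)
    return norm.ppf(1.0 - i * q / (2.0 * p))

def sorted_l1_norm(b, lam):
    # Sorted l1 norm: sum_i lambda_i * |b|_(i), where |b|_(1) >= ... >= |b|_(p)
    # are the absolute entries of b in decreasing order, so the largest
    # coefficient receives the largest penalty weight.
    return float(np.sum(lam * np.sort(np.abs(b))[::-1]))

def slope_objective(b, X, y, lam):
    # SLOPE objective: (1/2) ||y - X b||_2^2 + sorted l1 penalty.
    resid = y - X @ b
    return 0.5 * float(resid @ resid) + sorted_l1_norm(b, lam)

Evaluating the objective is straightforward; minimizing it additionally requires the proximal operator of the sorted $\ell_{1}$ norm, which the paper shows can be computed at a cost comparable to the soft thresholding used for the Lasso.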
Article information
Source
Ann. Appl. Stat., Volume 9, Number 3 (2015), 1103-1140.
Dates
Received: May 2014
Revised: February 2015
First available in Project Euclid: 2 November 2015
Permanent link to this document
https://projecteuclid.org/euclid.aoas/1446488733
Digital Object Identifier
doi:10.1214/15-AOAS842
Mathematical Reviews number (MathSciNet)
MR3418717
Zentralblatt MATH identifier
06525980
Keywords
Sparse regression; variable selection; false discovery rate; Lasso; sorted $\ell_{1}$ penalized estimation (SLOPE)
Citation
Bogdan, Małgorzata; van den Berg, Ewout; Sabatti, Chiara; Su, Weijie; Candès, Emmanuel J. SLOPE—Adaptive variable selection via convex optimization. Ann. Appl. Stat. 9 (2015), no. 3, 1103--1140. doi:10.1214/15-AOAS842. https://projecteuclid.org/euclid.aoas/1446488733
Supplemental materials
- Supplement to “SLOPE—Adaptive variable selection via convex optimization.” The online appendix contains proofs of some technical results discussed in the text. Digital Object Identifier: doi:10.1214/15-AOAS842SUPP

