The Lasso Problem and Uniqueness

The lasso is a popular tool for sparse linear regression, especially for problems in which the number of variables p exceeds the number of observations n. But when p>n, the lasso criterion is not strictly convex, and hence it may not have a unique minimum. An important question is: when is the lasso solution well-defined (unique)? We review results from the literature, which show that if the predictor variables are drawn from a continuous probability distribution, then there is a unique lasso solution with probability one, regardless of the sizes of n and p. We also show that this result extends easily to $\ell_1$ penalized minimization problems over a wide range of loss functions. A second important question is: how can we deal with the case of non-uniqueness in lasso solutions? In light of the aforementioned result, this case really only arises when some of the predictor variables are discrete, or when some post-processing has been performed on continuous predictor measurements. Though we certainly cannot claim to provide a complete answer to such a broad question, we do present progress towards understanding some aspects of non-uniqueness. First, we extend the LARS algorithm for computing the lasso solution path to cover the non-unique case, so that this path algorithm works for any predictor matrix. Next, we derive a simple method for computing the component-wise uncertainty in lasso solutions of any given problem instance, based on linear programming. Finally, we review results from the literature on some of the unifying properties of lasso solutions, and also point out particular forms of solutions that have distinctive properties.


Introduction
We consider ℓ1 penalized linear regression, also known as the lasso problem (Tibshirani 1996, Chen et al. 1998). Given a response vector y ∈ R^n, a matrix X ∈ R^{n×p} of predictor variables, and a tuning parameter λ ≥ 0, the lasso estimate can be defined as

β̂ ∈ argmin_{β ∈ R^p} (1/2)‖y − Xβ‖²₂ + λ‖β‖₁.   (1)

The lasso solution is unique when rank(X) = p, because the criterion is strictly convex. This is not true when rank(X) < p, and in this case, there can be multiple minimizers of the lasso criterion (emphasized by the element notation in (1)). Note that when the number of variables exceeds the number of observations, p > n, we must have rank(X) < p.
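To make problem (1) concrete, here is a minimal numerical sketch (our own illustration, not an algorithm from this paper): a proximal gradient (ISTA) solver for the lasso in plain NumPy, with ad hoc choices of step size and iteration count. The KKT conditions developed in Section 2 certify (approximate) optimality of its output.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (coordinate-wise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=5000):
    """Solve (1/2)||y - X b||_2^2 + lam * ||b||_1 by proximal gradient."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)        # gradient of the smooth part
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 8))
y = rng.standard_normal(20)
lam = 2.0
beta = lasso_ista(X, y, lam)
# KKT certificate: |X_i^T (y - X beta)| <= lam for all i, with equality on the support
g = X.T @ (y - X @ beta)
assert np.max(np.abs(g)) <= lam + 1e-4
```

The final assertion is exactly the subgradient optimality condition discussed below; the tolerance reflects the finite iteration count.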
The lasso is quite a popular tool for estimating the coefficients in a linear model, especially in the high-dimensional setting, p > n. Depending on the value of the tuning parameter λ, solutions of the lasso problem will have many coefficients set exactly to zero, due to the nature of the ℓ1 penalty. We tend to think of the support set of a lasso solution β̂, written A = supp(β̂) ⊆ {1, ..., p} and often referred to as the active set, as describing a particular subset of important variables for the linear model of y on X. Recently, there has been a lot of interesting work legitimizing this claim by proving desirable properties of β̂ or its active set A, in terms of estimation error or model recovery.
Most of this work falls into the setting p > n. But such properties are not the focus of the current paper. Instead, our focus is somewhat simpler, and at somewhat more of a basic level: we investigate issues concerning the uniqueness or non-uniqueness of lasso solutions.
Let us first take a step back, and consider the usual linear regression estimate (given by λ = 0 in (1)), as a motivating example. Students of statistics are taught to distrust the coefficients given by linear regression when p > n. We may ask: why? Arguably, the main reason is that the linear regression solution is not unique when p > n (or more precisely, when rank(X) < p), and further, this non-uniqueness occurs in such a way that we can always find a variable i ∈ {1, ..., p} whose coefficient is positive at one solution and negative at another. (Adding any element of the null space of X to one least squares solution produces another solution.) This makes it generally impossible to interpret the linear regression estimate when p > n.
Meanwhile, the lasso estimate is also not unique when p > n (or when rank(X) < p), but it is commonly used in this case, and in practice little attention is paid to uniqueness. Upon reflection, this seems somewhat surprising, because non-uniqueness of solutions can cause major problems in terms of interpretation (as demonstrated by the linear regression case). Two basic questions are:
• Do lasso estimates suffer from the same sign inconsistencies as do linear regression estimates?
That is, for a fixed λ, can one lasso solution have a positive ith coefficient, and another have a negative ith coefficient?
• Must any two lasso solutions, at the same value of λ, necessarily share the same support, and differ only in their estimates of the nonzero coefficient values? Or can different lasso solutions exhibit different active sets?
Consider the following example, concerning the second question. Here we let n = 5 and p = 10. For a particular response y ∈ R^5 and predictor matrix X ∈ R^{5×10}, and λ = 1, we found two solutions of the lasso problem (1), using two different algorithms. These are β̂^(1) = (−0.893, 0.620, 0.375, 0.497, ..., 0)^T and β̂^(2) = (−0.893, 0.869, 0.624, 0, ..., 0)^T, where we use ellipses to denote all zeros. In other words, the first solution has support set {1, 2, 3, 4}, and the second has support set {1, 2, 3}. This is not at all ideal for the purposes of interpretation, because depending on which algorithm we used to minimize the lasso criterion, we may have considered the 4th variable to be important or not. Moreover, who knows which variables may have zero coefficients at other solutions?
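The flavor of this example is easy to reproduce on a toy problem of our own (not the n = 5, p = 10 instance from the text). With a duplicated column, the lasso admits solutions with different supports that nonetheless share the same fit, the same criterion value, and the same ℓ1 norm:

```python
import numpy as np

X = np.array([[1.0, 1.0],
              [0.0, 0.0]])   # columns 1 and 2 are identical
y = np.array([2.0, 0.0])
lam = 0.5

def criterion(beta):
    return 0.5 * np.sum((y - X @ beta) ** 2) + lam * np.sum(np.abs(beta))

b1 = np.array([1.5, 0.0])    # a lasso solution with support {1}
b2 = np.array([0.75, 0.75])  # another lasso solution, support {1, 2}

assert np.allclose(X @ b1, X @ b2)                      # same (unique) fit
assert np.isclose(criterion(b1), criterion(b2))         # same criterion value
assert np.isclose(np.abs(b1).sum(), np.abs(b2).sum())   # same l1 norm
```

Both vectors satisfy the KKT conditions reviewed in Section 2 (the common subgradient is γ = (1, 1)), yet they tell different "stories" about which variables matter.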
In Section 2, we show that if the entries of the predictor matrix X are drawn from a continuous probability distribution, then we essentially never have to worry about the latter problem-along with the problem of sign inconsistencies, and any other issues relating to non-uniqueness-because the lasso solution is unique with probability one. We emphasize that here uniqueness is ensured with probability one (over the distribution of X) regardless of the sizes of n and p. This result has basically appeared in various forms in the literature, but is perhaps not as well-known as it should be. Section 2 gives a detailed review of why this fact is true.
Therefore, the two questions raised above only need to be addressed in the case that X contains discrete predictors, or contains some kind of post-processed versions of continuously drawn predictor measurements. To put it bluntly (and save any dramatic tension), the answer to the first question is "no". In other words, no two lasso solutions can attach opposite signed coefficients to the same variable. We show this using a very simple argument in Section 4. As for the second question, the example above already shows that the answer is unfortunately "yes". However, the multiplicity of active sets can be dealt with in a principled manner, as we argue in Section 4. Here we show how to compute lower and upper bounds on the coefficients of lasso solutions of any particular problem instance-this reveals exactly which variables are assigned zero coefficients at some lasso solutions, and which variables have nonzero coefficients at all lasso solutions.

Basic facts and the KKT conditions
We begin by recalling a few basic facts about lasso solutions.
Lemma 1. For any y, X, and λ ≥ 0, the lasso problem (1) has the following properties:
(i) There is either a unique lasso solution or an (uncountably) infinite number of solutions.
(ii) Every lasso solution β̂ gives the same fitted value Xβ̂.
(iii) If λ > 0, then every lasso solution β̂ has the same ℓ1 norm ‖β̂‖₁.
Proof. (i) The lasso criterion is convex and has no directions of recession (strictly speaking, when λ = 0 the criterion can have directions of recession, but these are directions in which the criterion is constant). Therefore it attains its minimum over R^p (see, for example, Theorem 27.1 of Rockafellar (1970)); that is, the lasso problem has at least one solution. Suppose now that there are two solutions β̂^(1) and β̂^(2), β̂^(1) ≠ β̂^(2). Because the solution set of a convex minimization problem is convex, we know that αβ̂^(1) + (1 − α)β̂^(2) is also a solution for any 0 < α < 1, which gives uncountably many lasso solutions as α varies over (0, 1).
(ii) Suppose that two solutions β̂^(1) and β̂^(2) had different fits, Xβ̂^(1) ≠ Xβ̂^(2). Since the squared error loss ‖y − u‖²₂ is strictly convex in u, and the ℓ1 penalty is convex, the point αβ̂^(1) + (1 − α)β̂^(2) for 0 < α < 1 would attain a strictly smaller value of the lasso criterion, which is impossible.
(iii) By (ii), any two solutions must have the same fitted value, and hence the same squared error loss. But the solutions also attain the same value of the lasso criterion, and so if λ > 0, then they must have the same ℓ1 norm.
To go beyond the basics, we turn to the Karush-Kuhn-Tucker (KKT) optimality conditions for the lasso problem (1). These conditions can be written as

X^T (y − Xβ̂) = λγ,   (2)

γ_i = sign(β̂_i) if β̂_i ≠ 0, and γ_i ∈ [−1, 1] if β̂_i = 0, for i = 1, ..., p.   (3)

Here γ ∈ R^p is called a subgradient of the function f(x) = ‖x‖₁ evaluated at x = β̂. Therefore β̂ is a solution in (1) if and only if β̂ satisfies (2) and (3) for some γ.
We now use the KKT conditions to write the lasso fit and solutions in a more explicit form. In what follows, we assume that λ > 0 for the sake of simplicity (dealing with the case λ = 0 is not difficult, but some of the definitions and statements need to be modified, which we avoid here in order to preserve readability). First we define the equicorrelation set E by

E = {i ∈ {1, ..., p} : |X_i^T (y − Xβ̂)| = λ}.   (4)

The equicorrelation set E is named as such because when y, X have been standardized, E contains the variables that have equal (and maximal) absolute correlation with the residual. We define the equicorrelation signs s by

s = sign(X_E^T (y − Xβ̂)).   (5)

Recalling (2), we note that the optimal subgradient γ is unique (by the uniqueness of the fit Xβ̂), and we can equivalently define E, s in terms of γ, as E = {i ∈ {1, ..., p} : |γ_i| = 1} and s = γ_E. The uniqueness of Xβ̂ (or the uniqueness of γ) implies the uniqueness of E, s.
By definition of the subgradient γ in (3), we know that β̂_{−E} = 0 for any lasso solution β̂. Hence the E block of (2) can be written as

X_E^T (y − X_E β̂_E) = λs.   (6)

This means that λs ∈ row(X_E), so λs = X_E^T (X_E^T)^+ λs. Using this fact, and rearranging (6), we get

X_E^T (y − (X_E^T)^+ λs − X_E β̂_E) = 0.

Therefore the (unique) lasso fit Xβ̂ = X_E β̂_E is

Xβ̂ = X_E (X_E)^+ (y − (X_E^T)^+ λs),   (7)

and any lasso solution β̂ is of the form

β̂_{−E} = 0 and β̂_E = (X_E)^+ (y − (X_E^T)^+ λs) + b,   (8)

where b ∈ null(X_E). In particular, any b ∈ null(X_E) produces a lasso solution β̂ in (8) provided that β̂ has the correct signs over its nonzero coefficients, that is, sign(β̂_i) = s_i for all β̂_i ≠ 0. We can write these conditions together as

b ∈ null(X_E) and s_i · [(X_E)^+ (y − (X_E^T)^+ λs) + b]_i ≥ 0 for each i ∈ E,   (9)

and hence any b satisfying (9) gives a lasso solution β̂ in (8). In the next section, using a sequence of straightforward arguments, we prove that the lasso solution is unique under somewhat general conditions.
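The quantities above are easy to compute once any solution is available. The sketch below (our own; the example problem is an orthogonal design, not one from the text) forms the subgradient γ from (2), checks the KKT conditions, and reads off E and s:

```python
import numpy as np

def kkt_check(X, y, beta, lam, tol=1e-8):
    """Verify the lasso KKT conditions and return the equicorrelation set/signs."""
    gamma = X.T @ (y - X @ beta) / lam                 # optimal subgradient
    assert np.all(np.abs(gamma) <= 1 + tol)            # gamma_i in [-1, 1]
    on = np.abs(beta) > tol
    assert np.allclose(gamma[on], np.sign(beta[on]))   # gamma_i = sign(beta_i)
    E = np.where(np.abs(gamma) > 1 - tol)[0]           # E = {i : |gamma_i| = 1}
    s = np.sign(gamma[E])
    return E, s

# orthogonal design: the lasso solution is coordinate-wise soft-thresholding
X = np.eye(3)
y = np.array([3.0, -1.0, 0.2])
lam = 1.0
beta = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
E, s = kkt_check(X, y, beta, lam)
```

Note that in this example E = {0, 1} while the support of β̂ is only {0}: the equicorrelation set contains the support, but can be strictly larger.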

Sufficient conditions for uniqueness
From our work in the previous section, we can see that if null(X_E) = {0}, then the lasso solution is unique and is given by (8) with b = 0. (We note that b = 0 necessarily satisfies the sign condition in (9), because a lasso solution is guaranteed to exist by Lemma 1.) Then by rearranging (8), done to emphasize the rank of X_E, we have the following result.
Lemma 2. For any y, X, and λ > 0, if null(X_E) = {0}, or equivalently if rank(X_E) = |E|, then the lasso solution is unique, and is given by

β̂_{−E} = 0 and β̂_E = (X_E^T X_E)^{−1} (X_E^T y − λs),   (10)

where E and s are the equicorrelation set and signs as defined in (4) and (5). Note that this solution has at most min{n, p} nonzero components.
This sufficient condition for uniqueness has appeared many times in the literature. For example, see Osborne et al. (2000b), Fuchs (2005), Wainwright (2009), Candes & Plan (2009). We will show later in Section 5 that the same condition is actually also necessary, for almost every y ∈ R^n.
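For a design where E and s can be written down directly, the closed form in Lemma 2 can be sanity-checked numerically. With orthonormal columns (X^T X = I), the lasso solution is coordinate-wise soft-thresholding of X^T y, and the formula (X_E^T X_E)^{−1}(X_E^T y − λs) reduces to exactly that. The example below is our own construction:

```python
import numpy as np

rng = np.random.default_rng(1)
# orthonormal columns via QR, so X^T X = I and rank(X_E) = |E| automatically
Q, _ = np.linalg.qr(rng.standard_normal((10, 4)))
X = Q
y = X @ np.array([1.0, -2.0, 0.0, 0.5])   # exact sparse signal, no noise
lam = 0.3

z = X.T @ y
# with orthonormal columns, E and s are explicit (no |z_i| sits exactly at lam here)
E = np.where(np.abs(z) > lam)[0]
s = np.sign(z[E])
XE = X[:, E]

# the closed form from Lemma 2
beta_E = np.linalg.solve(XE.T @ XE, XE.T @ y - lam * s)
assert np.allclose(beta_E, z[E] - lam * s)   # reduces to soft-thresholding on E
```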
Note that E depends on the lasso solution at y, X, λ, and hence the condition null(X_E) = {0} is somewhat circular. There are more natural conditions, depending on X alone, that imply null(X_E) = {0}. To see this, suppose that null(X_E) ≠ {0}; then for some i ∈ E, we can write

s_i X_i = Σ_{j ∈ E\{i}} a_j s_j X_j,

for constants a_j ∈ R. By definition of the equicorrelation set, X_j^T r = s_j λ for any j ∈ E, where r = y − Xβ̂ is the lasso residual. Taking the inner product of both sides above with r, we get

λ = Σ_{j ∈ E\{i}} a_j λ,

assuming that λ > 0. Therefore, we have shown that if null(X_E) ≠ {0}, then for some i ∈ E,

s_i X_i = Σ_{j ∈ E\{i}} a_j s_j X_j with Σ_{j ∈ E\{i}} a_j = 1,

which means that s_i X_i lies in the affine span of s_j X_j, j ∈ E \ {i}. Note that we can assume without a loss of generality that E \ {i} has at most n elements, since otherwise we can simply repeat the above arguments replacing E by any one of its subsets with n + 1 elements; hence the affine span of s_j X_j, j ∈ E \ {i} is at most n − 1 dimensional.
We say that the matrix X ∈ R^{n×p} has columns in general position if any affine subspace L ⊆ R^n of dimension k < n contains no more than k + 1 elements of the set {±X_1, ..., ±X_p}, excluding antipodal pairs. Another way of saying this: the affine span of any k + 1 points σ_1 X_{i_1}, ..., σ_{k+1} X_{i_{k+1}}, for arbitrary signs σ_1, ..., σ_{k+1} ∈ {−1, 1}, does not contain any element of {±X_i : i ≠ i_1, ..., i_{k+1}}. From what we have just shown, the predictor matrix X having columns in general position is enough to ensure uniqueness.

Lemma 3. If the columns of X are in general position, then for any y and λ > 0, the lasso solution is unique and is given by (10).

This result has also essentially appeared in the literature, taking various forms when stated for various related problems. For example, Rosset et al. (2004) give a similar result for general convex loss functions. Dossal (2012) gives a related result for the noiseless lasso problem (also called basis pursuit). Donoho (2006) gives results tying together the uniqueness (and equality) of solutions of the noiseless lasso problem and the corresponding ℓ0 minimization problem.
Although the definition of general position may seem somewhat technical, this condition is naturally satisfied when the entries of the predictor matrix X are drawn from a continuous probability distribution. More precisely, if the entries of X follow a joint distribution that is absolutely continuous with respect to Lebesgue measure on R^{np}, then the columns of X are in general position with probability one. To see this, first consider the probability P(X_{k+2} ∈ aff{X_1, ..., X_{k+1}}), where aff{X_1, ..., X_{k+1}} denotes the affine span of X_1, ..., X_{k+1}. Note that, by continuity, P(X_{k+2} ∈ aff{X_1, ..., X_{k+1}} | X_1, ..., X_{k+1}) = 0, because (for fixed X_1, ..., X_{k+1}) the set aff{X_1, ..., X_{k+1}} ⊆ R^n has Lebesgue measure zero. Therefore, integrating over X_1, ..., X_{k+1}, we get that P(X_{k+2} ∈ aff{X_1, ..., X_{k+1}}) = 0. Taking a union over all subsets of k + 2 columns, all combinations of k + 2 signs, and all k < n, we conclude that with probability zero the columns are not in general position. This leads us to our final sufficient condition for uniqueness of the lasso solution.
Lemma 4. If the entries of X ∈ R^{n×p} are drawn from a continuous probability distribution on R^{np}, then for any y and λ > 0, the lasso solution is unique and is given by (10) with probability one.
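The measure-zero argument can be illustrated numerically: a column drawn from a continuous distribution essentially never falls in the affine span of other signed columns, which we can check by computing its distance to every such span. The seed and problem sizes below are arbitrary choices of our own:

```python
import numpy as np
from itertools import permutations, product

def dist_to_affine_span(points, x):
    """Euclidean distance from x to the affine span of the columns of `points`."""
    base = points[:, 0]
    A = points[:, 1:] - base[:, None]        # directions spanning the affine set
    coef, *_ = np.linalg.lstsq(A, x - base, rcond=None)
    return np.linalg.norm(x - base - A @ coef)

rng = np.random.default_rng(2)
n, p = 5, 6
X = rng.standard_normal((n, p))

# check every signed column against the affine span of 3 other signed columns
# (k + 1 = 3 points, so k = 2 < n); general position demands a positive distance
min_dist = min(
    dist_to_affine_span(X[:, [i, j, k]] * np.array(sg), tau * X[:, m])
    for i, j, k, m in permutations(range(p), 4)
    for sg in product((1.0, -1.0), repeat=3)
    for tau in (1.0, -1.0)
)
assert min_dist > 1e-6   # no signed column lies in any such affine span
```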
According to this result, we essentially never have to worry about uniqueness when the predictor variables come from a continuous distribution, regardless of the sizes of n and p. Actually, there is nothing really special about ℓ1 penalized linear regression in particular-we show next that the same uniqueness result holds for ℓ1 penalized minimization with any differentiable, strictly convex loss function.

General convex loss functions
We consider the more general minimization problem

β̂ ∈ argmin_{β ∈ R^p} f(Xβ) + λ‖β‖₁,   (11)

where the loss function f : R^n → R is differentiable and strictly convex. To be clear, we mean that f is strictly convex in its argument, so for example the function f(u) = ‖y − u‖²₂ is strictly convex, even though f(Xβ) = ‖y − Xβ‖²₂ may not be strictly convex in β. The main ideas from Section 2.1 carry over to this more general problem. The arguments given in the proof of Lemma 1 can be applied (relying on the strict convexity of f) to show that the same set of basic results hold for problem (11): (i) there is either a unique solution or uncountably many solutions;¹ (ii) every solution β̂ gives the same fit Xβ̂; (iii) if λ > 0, then every solution β̂ has the same ℓ1 norm. The KKT conditions for (11) can be expressed as

X^T ∇f(Xβ̂) = −λγ,   (12)

γ_i = sign(β̂_i) if β̂_i ≠ 0, and γ_i ∈ [−1, 1] if β̂_i = 0, for i = 1, ..., p,   (13)

where ∇f : R^n → R^n is the gradient of f, and we can define the equicorrelation set and signs in the same way as before, E = {i ∈ {1, ..., p} : |X_i^T ∇f(Xβ̂)| = λ} and s = −sign(X_E^T ∇f(Xβ̂)). The subgradient condition (13) implies that β̂_{−E} = 0 for any solution β̂ in (11). For squared error loss, recall that we then explicitly solved for β̂_E as a function of E and s. This is not possible for a general loss function f; but given E and s, we can rewrite the minimization problem (11) over the coordinates in E as

β̂_E ∈ argmin_{β_E ∈ R^{|E|}} f(X_E β_E) + λ‖β_E‖₁.   (14)

Now, if null(X_E) = {0} (equivalently rank(X_E) = |E|), then the criterion in (14) is strictly convex, as f itself is strictly convex. This implies that there is a unique solution β̂_E in (14), and therefore a unique solution β̂ in (11). Hence, we arrive at the same conclusions as those made in Section 2.2, that there is a unique solution in (11) if the columns of X are in general position, and ultimately, the following result.
Lemma 5. If X ∈ R^{n×p} has entries drawn from a continuous probability distribution on R^{np}, then for any differentiable, strictly convex function f, and for any λ > 0, the minimization problem (11) has a unique solution with probability one. This solution has at most min{n, p} nonzero components.
This general result applies to any differentiable, strictly convex loss function f, which is quite a broad class. For example, it applies to logistic regression loss, f(u) = Σ_{i=1}^n [−y_i u_i + log(1 + exp(u_i))].

¹ To be precise, if λ = 0 then problem (11) may not have a solution for an arbitrary differentiable, strictly convex function f. This is because f may have directions of recession that are not directions in which f is constant, and hence it may not attain its minimal value. For example, the function f(u) = e^{−u} is differentiable and strictly convex on R, but does not attain its minimum. Therefore, for λ = 0, the statements in this section should all be interpreted as conditional on the existence of a solution in the first place. For λ > 0, the ℓ1 penalty gets rid of this issue, as the criterion in (11) has no directions of recession, implying the existence of a solution.
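The logistic case can be sketched with the same proximal gradient recipe as for the squared error loss (our own illustration; the step size and iteration count are ad hoc). The same KKT structure appears at the solution: |X_i^T ∇f(Xβ̂)| ≤ λ for every variable, with equality on the equicorrelation set.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def logistic_lasso(X, y, lam, n_iter=20000):
    """Minimize f(X b) + lam * ||b||_1 with f(u) = sum(-y*u + log(1 + e^u))."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / 4        # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        u = X @ beta
        grad = X.T @ (1.0 / (1.0 + np.exp(-u)) - y)   # X^T grad f(X beta)
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta

rng = np.random.default_rng(3)
X = rng.standard_normal((40, 6))
y = (rng.random(40) < 0.5).astype(float)
lam = 2.0
beta = logistic_lasso(X, y, lam)
g = X.T @ (y - 1.0 / (1.0 + np.exp(-X @ beta)))   # equals -X^T grad f(X beta)
assert np.max(np.abs(g)) <= lam + 1e-3            # KKT bound holds everywhere
```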
We shift our focus in the next section, and without assuming any conditions for uniqueness, we show how to compute a solution path for the lasso problem (over the regularization parameter λ).

The LARS algorithm for the lasso path
The LARS algorithm is a great tool for understanding the behavior of lasso solutions. (To be clear, here and throughout we use the term "LARS algorithm" to refer to the version of the algorithm that computes the lasso solution path, and not the version that performs a special kind of forward variable selection.) The algorithm begins at λ = ∞, where the lasso solution is trivially 0 ∈ R^p. Then, as the parameter λ decreases, it computes a solution path β̂^LARS(λ) that is piecewise linear and continuous as a function of λ. Each knot in this path corresponds to an iteration of the algorithm, in which the path's linear trajectory is altered in order to satisfy the KKT optimality conditions. The LARS algorithm was proposed (and named) by Efron et al. (2004), though essentially the same idea appeared earlier in the works of Osborne et al. (2000a) and Osborne et al. (2000b).

It is worth noting that the LARS algorithm (as proposed in any of these works) assumes that rank(X_E) = |E| throughout the lasso path. This is not necessarily correct when rank(X) < p, and can lead to errors in computing lasso solutions. (However, from what we showed in Section 2, this "naive" assumption is indeed correct with probability one when the predictors are drawn from a continuous distribution, and this is likely the reason why such a small oversight has gone unnoticed since the time of the original publications.) In this section, we extend the LARS algorithm to cover a generic predictor matrix X. Though the lasso solution is not necessarily unique in this general case, and we may have rank(X_E) < |E| at some points along the path, we show that a piecewise linear and continuous path of solutions still exists, and computing this path requires only a simple modification to the previously proposed LARS algorithm. We describe the algorithm and its steps in detail, but delay the proof of its correctness until Appendix A.1. We also present a few properties of this algorithm and the solutions along its path.

Description of the LARS algorithm
We start with an overview of the LARS algorithm to compute the lasso path (extended to cover an arbitrary predictor matrix X), and then we describe its steps in detail at a general iteration k. The algorithm presented here is of course very similar to the original LARS algorithm of Efron et al. (2004). The key difference is the following: if X_E^T X_E is singular, then the KKT conditions over the variables in E no longer have a unique solution, and the current algorithm uses the solution with the minimum ℓ2 norm, as in (15) and (16). This seemingly minor detail is the basis for the algorithm's correctness in the general X case.
Algorithm 1 (The LARS algorithm for the lasso path).
Given y and X.
• Set k = 0, λ_0 = ‖X^T y‖_∞, E = {i : |X_i^T y| = λ_0}, and s = sign(X_E^T y).
• While λ_k > 0:
1. Compute the LARS lasso solution at λ_k by least squares, as in (15) and (16). Continue in a linear direction from the solution for λ ≤ λ_k.
2. Compute the next joining time λ^join_{k+1}, when a variable outside the equicorrelation set achieves the maximal absolute inner product with the residual, as in (17) and (18).
3. Compute the next crossing time λ^cross_{k+1}, when the coefficient path of an equicorrelation variable crosses through zero, as in (19) and (20).
4. Set λ_{k+1} = max{λ^join_{k+1}, λ^cross_{k+1}}. If λ^join_{k+1} > λ^cross_{k+1}, then add the joining variable to E and its sign to s; otherwise, remove the crossing variable from E and its sign from s. Update k = k + 1.

At the start of the kth iteration, the regularization parameter is λ = λ_k. For the path's solution at λ_k, we set the non-equicorrelation coefficients equal to zero, β̂^LARS_{−E}(λ_k) = 0, and we compute the equicorrelation coefficients as

β̂^LARS_E(λ_k) = (X_E)^+ (y − (X_E^T)^+ λ_k s) = c − λ_k d,   (15)

where c = (X_E)^+ y and d = (X_E)^+ (X_E^T)^+ s = (X_E^T X_E)^+ s are defined to help emphasize that this is a linear function of the regularization parameter. This estimate can be viewed as the minimum ℓ2 norm solution of a least squares problem on the variables in E (in which we consider E, s as fixed):

β̂^LARS_E(λ_k) = argmin { ‖β_E‖₂ : β_E minimizes ‖y − (X_E^T)^+ λ_k s − X_E β_E‖²₂ }.   (16)

Now we decrease λ, keeping β̂^LARS_{−E}(λ) = 0, and letting β̂^LARS_E(λ) = c − λd, that is, moving in the linear direction suggested by (15). As λ decreases, we make two important checks. First, we check when (that is, we compute the value of λ at which) a variable outside the equicorrelation set E should join E because it attains the maximal absolute inner product with the residual-we call this the next joining time λ^join_{k+1}. Second, we check when a variable in E will have a coefficient path crossing through zero-we call this the next crossing time λ^cross_{k+1}.

For the first check, for each i ∉ E, we solve the equation

X_i^T (y − X_E (c − λd)) = ±λ.   (17)

A simple calculation shows that the solution is

t_i^join = X_i^T (y − X_E c) / (±1 − X_i^T X_E d),

called the joining time of the ith variable. (Although the notation is ambiguous, the quantity t_i^join is uniquely defined, as only one of +1 or −1 above will yield a value in [0, λ_k].) Hence the next joining time is

λ^join_{k+1} = max_{i ∉ E} t_i^join,   (18)

and the joining coordinate i^join_{k+1} is the argmax in (18), with s^join_{k+1} the sign (+1 or −1) that achieves its joining time in (17).

As for the second check, note that a variable i ∈ E will have a zero coefficient when λ = c_i/d_i. Because we are only considering λ ≤ λ_k, we define the crossing time of the ith variable as

t_i^cross = c_i/d_i if c_i/d_i ∈ [0, λ_k), and t_i^cross = 0 otherwise.   (19)

The next crossing time is therefore

λ^cross_{k+1} = max_{i ∈ E} t_i^cross,   (20)

and the crossing coordinate i^cross_{k+1} is the argmax in (20), with s^cross_{k+1} = s_{i^cross_{k+1}} its current sign. Finally, we decrease λ until the next joining time or crossing time-whichever happens first-by setting λ_{k+1} = max{λ^join_{k+1}, λ^cross_{k+1}}. If λ^join_{k+1} > λ^cross_{k+1}, then we add the joining coordinate i^join_{k+1} to E and its sign s^join_{k+1} to s. Otherwise, we delete the crossing coordinate i^cross_{k+1} from E and its sign s^cross_{k+1} from s.
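The steps above can be condensed into a short NumPy sketch. This is our own simplified rendering of Algorithm 1 (no handling of ties, and the tolerances are ad hoc), using pseudoinverses for c and d as in (15) so that it does not assume rank(X_E) = |E|:

```python
import numpy as np

def lars_lasso_path(X, y, tol=1e-10):
    """Return knots (lam_0 > lam_1 > ... >= 0) and the solutions at the knots.
    The path is linear in between: interpolate to get the solution at any lam."""
    n, p = X.shape
    corr = X.T @ y
    lam = np.max(np.abs(corr))
    E = [int(np.argmax(np.abs(corr)))]           # equicorrelation set
    s = [float(np.sign(corr[E[0]]))]             # equicorrelation signs
    knots, sols = [lam], [np.zeros(p)]           # at lam_0 the solution is 0
    while lam > tol:
        XE = X[:, E]
        c = np.linalg.pinv(XE) @ y               # as in (15): beta_E = c - lam*d
        d = np.linalg.pinv(XE.T @ XE) @ np.array(s)
        a = X.T @ (y - XE @ c)                   # numerators of joining times
        b = X.T @ (XE @ d)
        lam_join, i_join, s_join = 0.0, None, None
        for i in set(range(p)) - set(E):
            for sg in (1.0, -1.0):               # only one sign lands in [0, lam]
                if abs(sg - b[i]) < tol:
                    continue
                t = a[i] / (sg - b[i])
                if tol < t <= lam + tol and t > lam_join:
                    lam_join, i_join, s_join = t, i, sg
        lam_cross, j_cross = 0.0, None           # crossing times
        for idx in range(len(E)):
            if abs(d[idx]) > tol:
                t = c[idx] / d[idx]
                if tol < t < lam - tol and t > lam_cross:
                    lam_cross, j_cross = t, idx
        lam = max(lam_join, lam_cross)           # the next knot
        beta = np.zeros(p)
        beta[E] = c - lam * d
        knots.append(lam)
        sols.append(beta)
        if lam <= tol:
            break
        if lam_join > lam_cross:                 # a variable joins E
            E.append(i_join); s.append(s_join)
        else:                                    # a coefficient crosses zero
            E.pop(j_cross); s.pop(j_cross)
    return np.array(knots), np.array(sols)

rng = np.random.default_rng(4)
X = rng.standard_normal((8, 5))
y = rng.standard_normal(8)
knots, sols = lars_lasso_path(X, y)
```

A natural check is to interpolate between consecutive knots and verify the KKT conditions (2) and (3) at the interpolated points, which is what the proof of correctness establishes for the full algorithm.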
The proof of correctness for this algorithm shows that the computed path β̂^LARS(λ) satisfies the KKT conditions (2) and (3) at each λ, and is hence indeed a lasso solution path. It also shows that the computed path is continuous at each knot λ_k in the path, and hence is globally continuous in λ. The fact that X_E^T X_E can be singular makes the proof somewhat complicated (at least more so than it is for the case rank(X) = p), and hence we delay its presentation until Appendix A.1.

Properties of the LARS algorithm and its solutions
Two basic properties of the LARS lasso path, as mentioned in the previous section, are piecewise linearity and continuity with respect to λ. The algorithm and the solutions along its computed path possess a few other nice properties, most of them discussed in this section, and some others later in Section 5. We begin with a property of the LARS algorithm itself.

Lemma 6. For any y, X, the LARS algorithm for the lasso path performs at most Σ_{k=0}^{p} (p choose k) 2^k = 3^p iterations before terminating.

Proof. The idea behind the proof is quite simple, and was first noticed by Osborne et al. (2000a) for their homotopy algorithm: any given pair of equicorrelation set E and sign vector s that appear in one iteration of the algorithm cannot be revisited in a future iteration, due to the linear nature of the solution path. To elaborate, suppose that E, s were the equicorrelation set and signs at iteration k and also at iteration k′, with k′ > k. Then this would imply that the constraints (21) and (22) hold at both λ = λ_k and λ = λ_{k′}. But β̂^LARS_E(λ) = c − λd is a linear function of λ, and this implies that (21) and (22) also hold at every λ ∈ [λ_{k′}, λ_k], contradicting the fact that k′ and k are distinct iterations. Therefore the total number of iterations performed by the LARS algorithm is bounded by the number of distinct pairs of subsets E ⊆ {1, ..., p} and sign vectors s ∈ {−1, 1}^{|E|}.
Remark. Mairal & Yu (2012) showed recently that the upper bound for the number of steps taken by the original LARS algorithm, which assumes that rank(X_E) = |E| throughout the path, can actually be improved to (3^p + 1)/2. Their proof is based on the following observation: if E, s are the equicorrelation set and signs at one iteration of the algorithm, then E, −s cannot appear as the equicorrelation set and signs in a future iteration. Indeed, this same observation is true for the extended version of LARS presented here, by essentially the same arguments. Hence the upper bound in Lemma 6 can also be improved to (3^p + 1)/2. Interestingly, Mairal & Yu (2012) further show that this upper bound is tight: they construct, for any p, a problem instance (y and X) for which the LARS algorithm takes exactly (3^p + 1)/2 steps.
Next, we show that the end of the LARS lasso solution path (λ = 0) is itself an interesting least squares solution.
Lemma 7. For any y, X, the LARS lasso solution converges to a minimum ℓ1 norm least squares solution as λ → 0^+, that is,

lim_{λ→0^+} β̂^LARS(λ) = β̂^{LS,ℓ1},

where β̂^{LS,ℓ1} ∈ argmin_{β ∈ R^p} ‖y − Xβ‖²₂ and achieves the minimum ℓ1 norm over all such solutions.

Proof. First note that by Lemma 6, the algorithm always takes a finite number of iterations before terminating, so the limit here is always attained by the algorithm (at its last iteration). Therefore we can write β̂^LARS(0) = lim_{λ→0^+} β̂^LARS(λ). Now, by construction, the LARS lasso solution satisfies

X^T (y − Xβ̂^LARS(0)) = 0,

implying that β̂^LARS(0) is a least squares solution. Suppose that there exists another least squares solution β̂^LS with ‖β̂^LS‖₁ < ‖β̂^LARS(0)‖₁. Then by continuity of the LARS lasso solution path, there exists some λ > 0 such that still ‖β̂^LS‖₁ < ‖β̂^LARS(λ)‖₁, and hence

½‖y − Xβ̂^LS‖²₂ + λ‖β̂^LS‖₁ < ½‖y − Xβ̂^LARS(λ)‖²₂ + λ‖β̂^LARS(λ)‖₁.

This contradicts the fact that β̂^LARS(λ) is a lasso solution at λ, and therefore β̂^LARS(0) achieves the minimum ℓ1 norm over all least squares solutions.
We showed in Section 3.1 that the LARS algorithm constructs the lasso solution β̂^LARS_{−E}(λ) = 0 and β̂^LARS_E(λ) = (X_E)^+ (y − (X_E^T)^+ λs), by decreasing λ from ∞, and continually checking whether it needs to include or exclude variables from the equicorrelation set E. Recall our previous description (8) of the set of lasso solutions at any given λ. In (8), different lasso solutions are formed by choosing different vectors b that satisfy the two conditions given in (9): a null space condition, b ∈ null(X_E), and a sign condition, s_i · [(X_E)^+ (y − (X_E^T)^+ λs) + b]_i ≥ 0 for each i ∈ E. We see that the LARS lasso solution corresponds to the choice b = 0. When rank(X_E) = |E|, b = 0 is the only vector in null(X_E), so it satisfies the above sign condition by necessity (as we know that a lasso solution must exist, by Lemma 1). On the other hand, when rank(X_E) < |E|, it is certainly true that 0 ∈ null(X_E), but it is not at all obvious that the sign condition is satisfied by b = 0. The LARS algorithm establishes this fact by constructing an entire lasso solution path with exactly this property (b = 0) over λ ∈ [0, ∞]. At the risk of sounding repetitious, we state this result next in the form of a lemma.
Lemma 8. For any y, X, and λ > 0, a lasso solution is given by

β̂_{−E} = 0 and β̂_E = (X_E)^+ (y − (X_E^T)^+ λs),   (23)

and this is the solution computed by the LARS lasso path algorithm.
For one, this lemma is perhaps interesting from a computational point of view: it says that for any y, X, and λ > 0, a lasso solution (indeed, the LARS lasso solution) can be computed directly from E and s, which themselves can be computed from the unique lasso fit. Further, for any y, X, we can start with a lasso solution at λ > 0 and compute a local solution path using the same LARS steps; see Appendix A.2 for more details. Aside from computational interests, the explicit form of a lasso solution given by Lemma 8 may be helpful for the purposes of mathematical analysis; for example, this form is used by Tibshirani & Taylor (2012) to give a simpler proof of the degrees of freedom of the lasso fit, for a general X, in terms of the equicorrelation set. As another example, it is also used in Section 5 to prove a necessary condition for the uniqueness of the lasso solution (holding almost everywhere in y).
We show in Section 5 that, for almost every y ∈ R^n, the LARS lasso solution is supported on all of E and hence has the largest support of any lasso solution (at the same y, X, λ). As lasso solutions all have the same ℓ1 norm, by Lemma 1, this means that the LARS lasso solution spreads out the common ℓ1 norm over the largest number of coefficients. It may not be surprising, then, that the LARS lasso solution has the smallest ℓ2 norm among lasso solutions, shown next.
Lemma 9. For any y, X, and λ > 0, the LARS lasso solution β̂^LARS has the minimum ℓ2 norm over all lasso solutions.
Proof. From (8), we can see that any lasso solution has squared ℓ2 norm

‖β̂‖²₂ = ‖β̂^LARS‖²₂ + ‖b‖²₂ ≥ ‖β̂^LARS‖²₂,

since b ∈ null(X_E) is orthogonal to β̂^LARS_E ∈ row(X_E), with equality if and only if b = 0.

Mixing together the ℓ1 and ℓ2 norms brings to mind the elastic net (Zou & Hastie 2005), which penalizes both the ℓ1 norm and the squared ℓ2 norm of the coefficient vector. The elastic net utilizes two tuning parameters λ1, λ2 ≥ 0 (this notation should not be confused with the knots in the LARS lasso path), and solves the problem

β̂^EN ∈ argmin_{β ∈ R^p} ½‖y − Xβ‖²₂ + λ1‖β‖₁ + (λ2/2)‖β‖²₂.   (24)

For any λ2 > 0, the elastic net solution β̂^EN = β̂^EN(λ1, λ2) is unique, since the criterion is strictly convex.
Note that if λ2 = 0, then (24) is just the lasso problem. On the other hand, if λ1 = 0, then (24) reduces to ridge regression. It is well-known that the ridge regression solution β̂^ridge(λ2) = β̂^EN(0, λ2) converges to the minimum ℓ2 norm least squares solution as λ2 → 0^+. Our next result is analogous to this fact: it says that for any fixed λ1 > 0, the elastic net solution converges to the minimum ℓ2 norm lasso solution-that is, the LARS lasso solution-as λ2 → 0^+.

Lemma 10. Fix any X and λ1 > 0. For almost every y ∈ R^n, the elastic net solution converges to the LARS lasso solution as λ2 → 0^+, that is,

lim_{λ2→0^+} β̂^EN(λ1, λ2) = β̂^LARS(λ1).

Proof. By Lemma 13, we know that for any y ∉ N, where N ⊆ R^n is a set of measure zero, the LARS lasso solution at λ1 satisfies β̂^LARS(λ1)_i ≠ 0 for all i ∈ E. Hence fix y ∉ N. First note that we can rewrite the LARS lasso solution as β̂^LARS_{−E}(λ1) = 0 and β̂^LARS_E(λ1) = (X_E^T X_E)^+ (X_E^T y − λ1 s). Define the function

f(λ2) = (X_E^T X_E + λ2 I)^+ (X_E^T y − λ1 s).

For fixed E, s, the function f is continuous on [0, ∞) (continuity at 0 can be verified, for example, by looking at the singular value decomposition of (X_E^T X_E + λ2 I)^{−1}). Hence it suffices to show that for small enough λ2 > 0, the elastic net solution at λ1, λ2 is given by β̂^EN_{−E}(λ1, λ2) = 0 and β̂^EN_E(λ1, λ2) = f(λ2). To this end, we show that this proposed solution satisfies the KKT conditions for the elastic net problem,

X^T (y − Xβ̂^EN) − λ2 β̂^EN = λ1 γ,

with γ a subgradient of the ℓ1 norm evaluated at β̂^EN, for small enough λ2. Recall that f(0) = β̂^LARS_E(λ1) are the equicorrelation coefficients of the LARS lasso solution at λ1. As y ∉ N, we have f(0)_i ≠ 0 for each i ∈ E, and further, sign(f(0)_i) = s_i for all i ∈ E. Therefore the continuity of f implies that for small enough λ2, f(λ2)_i ≠ 0 and sign(f(λ2)_i) = s_i for all i ∈ E. Also, we know that ‖X_{−E}^T (y − X_E f(0))‖_∞ < λ1 by definition of the equicorrelation set E, and again, the continuity of f implies that for small enough λ2, ‖X_{−E}^T (y − X_E f(λ2))‖_∞ < λ1. This verifies the KKT conditions for small enough λ2, and completes the proof.
In Section 5, we discuss a few more properties of LARS lasso solutions, in the context of studying the various support sets of lasso solutions. In the next section, we present a simple method for computing lower and upper bounds on the coefficients of lasso solutions, which is useful when the solution is not unique.

Lasso coefficient bounds
Here we again consider a general predictor matrix X (not necessarily having columns in general position), so that the lasso solution is not necessarily unique. We show that it is possible to compute lower and upper bounds on the coefficients of lasso solutions, for any given problem instance, using linear programming. We begin by revisiting the KKT conditions.

Back to the KKT conditions
The KKT conditions for the lasso problem were given in (2) and (3). Recall that the lasso fit Xβ is always unique, by Lemma 1. Note that when λ > 0, we can rewrite (2) as

γ = X^T(y − Xβ) / λ,

implying that the optimal subgradient γ is itself unique. According to its definition (3), the components of γ give the signs of nonzero coefficients of any lasso solution, and therefore the uniqueness of γ immediately implies the following result.
Lemma 11. For any y, X, and λ > 0, any two lasso solutions β(1) and β(2) must satisfy β(1)_i · β(2)_i ≥ 0 for all i = 1, ..., p. In other words, any two lasso solutions must have the same signs over their common support.
In a sense, this result is reassuring: it says that even when the lasso solution is not necessarily unique, lasso coefficients must maintain consistent signs. Note that the same is certainly not true of least squares solutions (corresponding to λ = 0), which causes problems for interpretation, as mentioned in the introduction. Lemma 11 will be helpful when we derive lasso coefficient bounds shortly.
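To make the uniqueness of γ and Lemma 11 concrete, here is a small check (our own illustrative construction: the data are hand-built, with duplicate columns so that the solution is non-unique). Two different lasso solutions yield the same fit, the same ℓ1 norm, and exactly the same optimal subgradient γ = X^T(y − Xβ)/λ:

```python
import numpy as np

x = np.array([1.0, 2.0, 0.0, -1.0, 3.0])
X = np.column_stack([x, x])          # duplicate columns
y = 2 * x
lam = 1.0

# For duplicate columns only the total coefficient is determined, by
# soft-thresholding; any same-signed split of it is a lasso solution
t = max(x @ y - lam, 0.0) / (x @ x)
b1 = np.array([t, 0.0])              # one lasso solution
b2 = np.array([t / 2, t / 2])        # another, with a different active set

gamma1 = X.T @ (y - X @ b1) / lam
gamma2 = X.T @ (y - X @ b2) / lam
print(gamma1, gamma2)                # identical: the subgradient is unique
```

Both solutions also agree in sign over their common support (here both have a nonnegative first coefficient), as Lemma 11 requires.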
We also saw in the introduction that different lasso solutions (at the same y, X, λ) can have different supports, or active sets. The previously derived characterization of lasso solutions, given in (8) and (9), provides an understanding of how this is possible. It helps to rewrite (8) and (9) as

β_{−E} = 0 and β_E = βLARS_E + b,   (27)

where b is subject to

b ∈ null(X_E) and s_i (βLARS_{E,i} + b_i) ≥ 0 for all i ∈ E,   (28)

and βLARS is the fundamental solution traced by the LARS algorithm, as given in (23). Hence for a lasso solution β to have an active set A = supp(β), we can see that we must have A ⊆ E and β_E = βLARS_E + b, where b satisfies (28) and also b_i = −βLARS_{E,i} for all i ∈ E \ A, and b_i ≠ −βLARS_{E,i} for all i ∈ A. As we discussed in the introduction, the fact that there may be different active sets corresponding to different lasso solutions (at the same y, X, λ) is perhaps concerning, because different active sets provide different "stories" regarding which predictor variables are important. One might ask: given a specific variable of interest i ∈ E (recalling that all variables outside of E necessarily have zero coefficients), is it possible for the ith coefficient to be nonzero at one lasso solution but zero at another? The answer to this question depends on the interplay between the constraints in (28), and as we show next, it can be obtained by solving a simple linear program.

The polytope of solutions and lasso coefficient bounds
The key observation here is that the set of lasso solutions defined by (27) and (28) forms a convex polytope. Consider writing the set of equicorrelation coefficients in (27), (28) as

K = { x ∈ R^|E| : Px = βLARS_E, Sx ≥ 0 },   (29)

where P = P_row(X_E), the projection onto the row space of X_E, and S = diag(s). That (29) is equivalent to (27) and (28) follows from the fact that βLARS_E ∈ row(X_E). The set K ⊆ R^|E| is a polyhedron, since it is defined by linear equalities and inequalities; furthermore it is bounded, as all lasso solutions have the same ℓ1 norm by Lemma 1, making it a polytope. The component-wise extrema of K can be easily computed via linear programming. In other words, for i ∈ E, we can solve the following two linear programs:

βlower_i = min x_i subject to x ∈ K,   (30)
βupper_i = max x_i subject to x ∈ K,   (31)

and then we know that the ith component of any lasso solution satisfies β_i ∈ [βlower_i, βupper_i]. These bounds are tight, in the sense that each is achieved by the ith component of some lasso solution (in fact, this solution is just the minimizer of (30), or the maximizer of (31)). By the convexity of K, every value between βlower_i and βupper_i is also achieved by the ith component of some lasso solution. Most importantly, the linear programs (30) and (31) can actually be solved in practice. Aside from the obvious dependence on y, X, and λ, the relevant quantities P, S, and βLARS_E depend only on the equicorrelation set E and signs s, which in turn depend only on the unique lasso fit. Therefore, one can compute any lasso solution (at y, X, λ) in order to define E, s, and subsequently P, S, and βLARS_E, which is all that is needed in order to solve (30) and (31). We summarize this idea below.
Algorithm 2 (lasso coefficient bounds).

1. Compute any solution β of the lasso problem (at y, X, λ), to obtain the unique lasso fit Xβ.
2. Define the equicorrelation set E and signs s, as in (4) and (5), respectively.

3. For each i ∈ E, compute the coefficient bounds βlower_i and βupper_i by solving the linear programs (30) and (31), respectively.

Lemma 11 implies a valuable property of the bounding interval [βlower_i, βupper_i]: it cannot contain zero in its interior. Otherwise, there would be a pair of lasso solutions with opposite signs over the ith component, contradicting the lemma. Also, we know from Lemma 1 that all lasso solutions have the same ℓ1 norm L, and this means that |βlower_i|, |βupper_i| ≤ L. Combining these two properties gives the next lemma.

Lemma 12. Fix any y, X, and λ > 0. Let L be the common ℓ1 norm of lasso solutions at y, X, λ. Then for any i ∈ E, the coefficient bounds βlower_i and βupper_i defined in (30) and (31) satisfy either 0 ≤ βlower_i ≤ βupper_i ≤ L or −L ≤ βlower_i ≤ βupper_i ≤ 0.

Using Algorithm 2, we can place each variable i ∈ E into one of two categories, based on its bounding interval. If the interval [βlower_i, βupper_i] contains zero, then variable i is called dispensable (to the lasso model at y, X, λ), because there is a solution that does not include this variable in its active set; by Lemma 12, this can only happen if βlower_i = 0 or βupper_i = 0. If the interval does not contain zero, then variable i is called indispensable (to the lasso model at y, X, λ), because every solution includes this variable in its active set; by Lemma 12, this can only happen if βlower_i > 0 or βupper_i < 0.
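The linear programs (30) and (31) are easy to set up with an off-the-shelf LP solver. The sketch below is our own code, on an assumed toy instance with duplicate columns, where Step 1 can be done by soft-thresholding (in general, any convex solver would supply the initial solution). It uses the equivalent constraint X_E x = Xβ in place of Px = βLARS_E; the two define the same polytope K, since X_E and P have the same null space:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance with duplicate columns, so the lasso solution is not unique
x = np.array([1.0, 2.0, 0.0, -1.0, 3.0])
X = np.column_stack([x, x])
y = 2 * x
lam = 1.0

# Step 1: any lasso solution (here available in closed form by soft-thresholding)
t = max(x @ y - lam, 0.0) / (x @ x)
beta = np.array([t, 0.0])
fit = X @ beta                                   # the unique lasso fit

# Step 2: equicorrelation set and signs from the unique optimal subgradient
gamma = X.T @ (y - fit) / lam
E = np.where(np.abs(gamma) > 1.0 - 1e-10)[0]
s = np.sign(gamma[E])

# Step 3: solve the LPs (30), (31) over K = {x : X_E x = fit, s_i x_i >= 0}
XE = X[:, E]
bnds = [(0, None) if si > 0 else (None, 0) for si in s]
lower, upper = [], []
for i in range(len(E)):
    c = np.zeros(len(E)); c[i] = 1.0
    lower.append(linprog(c, A_eq=XE, b_eq=fit, bounds=bnds).fun)
    upper.append(-linprog(-c, A_eq=XE, b_eq=fit, bounds=bnds).fun)
print(lower, upper)   # both variables dispensable: lower bound 0, upper bound t
```

On this instance both bounding intervals are [0, t]: either duplicated variable can carry the whole coefficient, so each is dispensable.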
It is helpful to return to the example discussed in the introduction. Recall that in this example we took n = 5 and p = 10, and for a given y, X, and λ = 1, we found two lasso solutions: one supported on variables {1, 2, 3, 4}, and another supported on variables {1, 2, 3}. In the introduction, we purposely did not reveal the structure of the predictor matrix X; given what we showed in Section 2 (that X having columns in general position implies a unique lasso solution), it should not be surprising to find out that here we have X4 = (X2 + X3)/2. A complete description of our construction of X and y is as follows: we first drew the components of the columns X1, X2, X3 independently from a standard normal distribution, and then defined X4 = (X2 + X3)/2. We also drew the components of X5, ..., X10 independently from a standard normal distribution, and then orthogonalized X5, ..., X10 with respect to the linear subspace spanned by X1, ..., X4. Finally, we defined y = −X1 + X2 + X3. The purpose of this construction was to make it easy to detect the relevant variables X1, ..., X4 for the linear model of y on X.
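The construction above is easy to reproduce numerically (a sketch with our own random seed, and with illustrative coefficient vectors rather than computed lasso solutions). It also shows directly why variable 4 can be dispensable: since X4 = (X2 + X3)/2 and the signs agree, transferring weight a from variable 4 to variables 2 and 3 (a/2 each) preserves both the fit and the ℓ1 norm:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 10))
X[:, 3] = (X[:, 1] + X[:, 2]) / 2          # X4 = (X2 + X3)/2 (0-indexed column 3)
Q, _ = np.linalg.qr(X[:, :4])              # orthogonalize X5..X10 against X1..X4
X[:, 4:] -= Q @ (Q.T @ X[:, 4:])
y = -X[:, 0] + X[:, 1] + X[:, 2]

# Transfer weight 0.2 from variable 4 to variables 2 and 3 (0.1 each)
b1 = np.zeros(10); b1[:4] = [-0.3, 0.5, 0.4, 0.2]
b2 = b1.copy(); b2[3] = 0.0; b2[1] += 0.1; b2[2] += 0.1

print(np.allclose(X @ b1, X @ b2))                       # same fit
print(np.isclose(np.abs(b1).sum(), np.abs(b2).sum()))    # same l1 norm
```

The coefficient vectors b1, b2 are hypothetical (chosen only to exhibit the transfer); for actual lasso solutions the same cancellation is what produces the two different active sets reported in the introduction.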
According to the terminology defined above, variable 4 is dispensable to the lasso model when λ = 1, because it has a nonzero coefficient at one solution but a zero coefficient at another. This is perhaps not surprising, as X2, X3, X4 are linearly dependent. How about the other variables? We ran Algorithm 2 to answer this question. The results are displayed in Table 1. For the given y, X, and λ = 1, the equicorrelation set is E = {1, 2, 3, 4}, and the sign vector is s = (−1, 1, 1, 1)^T (these are given by running Steps 1 and 2 of Algorithm 2). Therefore we know that any lasso solution has zero coefficients for variables 5, ..., 10, has a nonpositive first coefficient, and has nonnegative coefficients for variables 2, 3, 4. The third column of Table 1 shows the LARS lasso solution over the equicorrelation variables. The second and fourth columns show the component-wise coefficient bounds βlower_i and βupper_i, respectively, for i ∈ E. We see that variable 3 is dispensable, because it has a lower bound of zero, meaning that there exists a lasso solution that excludes the third variable from its active set (and this solution is actually computed by Algorithm 2, as it is the minimizer of the linear program (30) with i = 3). The same conclusion holds for variable 4. On the other hand, variables 1 and 2 are indispensable, because their bounding intervals do not contain zero.
Like variables 3 and 4, variable 2 is linearly dependent on the other variables in the equicorrelation set, but unlike variables 3 and 4, it is indispensable, and hence assigned a nonzero coefficient in every lasso solution. This is the first of a few interesting points about dispensability and indispensability, which we discuss below.
• Linear dependence does not imply dispensability. In the example, variable 2 is indispensable, as its coefficient has a lower bound of 0.2455 > 0, even though variable 2 is a linear function of variables 3 and 4. Note that in order for the 2nd variable to be dispensable, we would need to be able to use the others (variables 1, 3, and 4) to achieve both the same fit and the same ℓ1 norm of the coefficient vector. The fact that variable 2 can be written as a linear function of variables 3 and 4 implies that we can preserve the fit, but not necessarily the ℓ1 norm, with zero weight on variable 2. Table 1 says that we can make the weight on variable 2 as small as 0.2455 while keeping the fit and the ℓ1 norm unchanged, but that moving it below 0.2455 (and maintaining the same fit) inflates the ℓ1 norm.
• Linear independence implies indispensability (almost everywhere). In the next section we show that, given any X and λ, and almost every y ∈ R^n, the subspace col(X_A) is invariant over all active sets A coming from lasso solutions at y, X, λ. Therefore, almost everywhere in y, if variable i ∈ E is linearly independent of the other equicorrelation variables (meaning that X_i cannot be expressed as a linear function of X_j, j ∈ E \ {i}), then variable i must be indispensable; otherwise the span of the active variables would be different for different active sets.
• Individual dispensability does not imply pairwise dispensability. Returning to the above example, variables 3 and 4 are both dispensable, but this does not necessarily mean that there exists a lasso solution that excludes both 3 and 4 simultaneously from the active set. Note that the computed solution that achieves a value of zero for its 3rd coefficient (the minimizer of (30) for i = 3) has a nonzero 4th coefficient, and the computed solution that achieves zero for its 4th coefficient (the minimizer of (30) for i = 4) has a nonzero 3rd coefficient. While this suggests that variables 3 and 4 cannot simultaneously be zero for the current problem, it does not serve as definitive proof of such a claim. However, we can check this claim by solving (30), with i = 4, subject to the additional constraint that x_3 = 0. This does in fact yield a positive lower bound, proving that variables 3 and 4 cannot both be zero at a solution. Furthermore, moving beyond pairwise interactions, we can actually enumerate all possible active sets of lasso solutions, by recognizing that there is a one-to-one correspondence between active sets and faces of the polytope K; see Appendix A.3.
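The pairwise check described above is the same linear program with one extra equality constraint. A minimal sketch follows (the matrix XE, candidate solution, and signs are hypothetical stand-ins, chosen with the third column equal to the average of the first two so that the behavior mirrors the example; they do not come from an actual lasso computation):

```python
import numpy as np
from scipy.optimize import linprog

XE = np.array([[1.0, 0.0, 0.5],
               [0.0, 1.0, 0.5],
               [0.0, 0.0, 0.0]])            # third column = average of first two
fit = XE @ np.array([1.0, 1.0, 1.0])        # fit of the candidate solution (1,1,1)
bnds = [(0, None)] * 3                      # signs s = (1, 1, 1)

def lower_bound(i, fixed_zero=None):
    """Solve (30) for coordinate i, optionally adding the constraint x_j = 0."""
    A_eq, b_eq = XE, fit
    if fixed_zero is not None:
        row = np.zeros(3); row[fixed_zero] = 1.0
        A_eq = np.vstack([XE, row]); b_eq = np.append(fit, 0.0)
    c = np.zeros(3); c[i] = 1.0
    return linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bnds).fun

print(lower_bound(0), lower_bound(2))       # each 0: individually dispensable
print(lower_bound(2, fixed_zero=0))         # positive: not jointly dispensable
```

Here the null space of XE is spanned by (1, 1, −2), so variables 1 and 3 are each individually dispensable, yet forcing x_1 = 0 pins the feasible point at (0, 0, 3), giving a strictly positive lower bound for x_3.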
Next, we cover some properties of lasso solutions that relate to our work in this section and in the previous two sections, on uniqueness and non-uniqueness.

Related properties
We present more properties of lasso solutions, relating to issues of uniqueness and non-uniqueness. The first three sections examine the active sets generated by lasso solutions of a given problem instance, when X is a general predictor matrix. The results in these three sections are reviewed from the literature. In the last section, we give a necessary condition for the uniqueness of the lasso solution.

The largest active set
For an arbitrary X, recall from Section 4 that the active set A of any lasso solution is necessarily contained in the equicorrelation set E. We show that the LARS lasso solution has support on all of E, making it the lasso solution with the largest support, for almost every y ∈ R^n. This result appeared in Tibshirani & Taylor (2012).
Lemma 13. Fix any X and λ > 0. For almost every y ∈ R^n, the LARS lasso solution βLARS has an active set A equal to the equicorrelation set E, and therefore achieves the largest active set of any lasso solution.
Proof. For a matrix A, let A_[i] denote its ith row. Define the set

N = ∪_{E, s} ∪_{i ∈ E} { y ∈ R^n : ((X_E)^+)_[i] y = ((X_E^T X_E)^+ λ s)_i }.   (32)

The first union above is taken over all subsets E ⊆ {1, ..., p} and sign vectors s ∈ {−1, 1}^|E|, but implicitly we exclude sets E such that (X_E)^+ has a row that is entirely zero. Then N has measure zero, because it is a finite union of affine subspaces of dimension n − 1. Now let y ∉ N. We know that no row of (X_E)^+ can be entirely zero (otherwise, this would mean that X_E has a zero column, implying that λ = 0 by definition of the equicorrelation set, contradicting the assumption in the lemma). Then by construction we have that βLARS_i ≠ 0 for all i ∈ E.
Remark 1. In the case that the lasso solution is unique, this result says that the active set is equal to the equicorrelation set, almost everywhere.
Remark 2. Note that the equicorrelation set E (and hence the active set of a lasso solution, almost everywhere) can have size |E| = p in the worst case, even when p > n. As a trivial example, consider the case when X ∈ R^{n×p} has p duplicate columns, with p > n.

The smallest active set
We have shown that the LARS lasso solution attains the largest possible active set, and so a natural question is: what is the smallest possible active set? The next result is from Osborne et al. (2000b) and Rosset et al. (2004).
Lemma 14. For any y, X, and λ > 0, there exists a lasso solution whose set of active variables is linearly independent. In particular, this means that there exists a solution whose active set A has size |A| ≤ min{n, p}.
Proof. We follow the proof of Rosset et al. (2004) closely. Let β be a lasso solution, let A = supp(β) be its active set, and suppose that rank(X_A) < |A|. Then by the same arguments as those given in Section 2, we can write, for some i ∈ A,

s_i X_i = Σ_{j ∈ A \ {i}} a_j s_j X_j, where Σ_{j ∈ A \ {i}} a_j = 1.   (33)

Now define θ_i = −s_i and θ_j = a_j s_j for j ∈ A \ {i}, with θ_j = 0 for j ∉ A.
Consider solutions of the form β~ = β + δθ, and take δ = min{ρ ≥ 0 : β_j + ρθ_j = 0 for some j ∈ A}. Notice that δ is guaranteed to be finite, as δ ≤ |β_i|. Furthermore, we have Xβ~ = Xβ, because θ ∈ null(X_A), and also, since the signs are maintained up to the first zero crossing,

‖β~‖₁ = Σ_{j ∈ A} s_j (β_j + δθ_j) = ‖β‖₁ + δ(−1 + Σ_{j ∈ A \ {i}} a_j) = ‖β‖₁.

Hence we have shown that β~ achieves the same fit and the same ℓ1 norm as β, so it is indeed also a lasso solution, and it has at least one fewer nonzero coefficient than β. We can now repeat this procedure until we obtain a lasso solution whose active set A satisfies rank(X_A) = |A|.
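The constructive procedure in this proof can be sketched in a few lines (our own implementation, on an assumed duplicate-column design; computing a null-space vector this way would be costly for large problems, as Remark 2 notes). For a lasso solution whose active signs lie in row(X_A), any null-space direction θ satisfies s^T θ = 0, so stepping along θ preserves the fit and the ℓ1 norm until a coefficient hits zero:

```python
import numpy as np
from scipy.linalg import null_space

def reduce_active_set(X, beta, tol=1e-10):
    """One step of the Lemma 14 construction: if the active columns are
    linearly dependent, move along a null-space direction until some
    coefficient hits zero, preserving the fit and the l1 norm."""
    A = np.where(np.abs(beta) > tol)[0]
    ns = null_space(X[:, A])
    if ns.shape[1] == 0:
        return beta                       # active columns already independent
    theta = ns[:, 0]
    s = np.sign(beta[A])
    # s^T theta = 0 (the active signs lie in row(X_A)), so the l1 norm
    # sum_j s_j (beta_j + delta*theta_j) is constant while the signs hold;
    # take the largest delta >= 0 keeping s_j (beta_j + delta*theta_j) >= 0
    neg = s * theta < -tol
    if not neg.any():
        theta = -theta
        neg = s * theta < -tol
    delta = np.min((s * beta[A])[neg] / -(s * theta)[neg])
    out = beta.copy()
    out[A] = beta[A] + delta * theta
    return out

x = np.array([1.0, 2.0, 0.0, -1.0, 3.0])
X = np.column_stack([x, x])               # duplicate (dependent) columns
beta = np.array([0.7, 0.7])               # a solution with both variables active
beta2 = reduce_active_set(X, beta)
print(beta2)                              # one coefficient driven to zero
```

In this toy case the null-space direction is proportional to (1, −1), so the step simply shifts all the weight onto one of the duplicated columns.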
Remark 1. This result shows that, for any problem instance, there exists a lasso solution supported on ≤ min{n, p} variables; some works in the literature have misquoted this result by claiming that every lasso solution is supported on ≤ min{n, p} variables, which is clearly incorrect. When the lasso solution is unique, however, Lemma 14 implies that its active set has size ≤ min{n, p}.
Remark 2. In principle, one could start with any lasso solution, and follow the proof of Lemma 14 to construct a solution whose active set A is such that rank(X_A) = |A|. But from a practical perspective, this could be computationally quite difficult, as computing the constants a_j in (33) requires finding a nonzero vector in null(X_A), a nontrivial task that would need to be repeated each time a variable is eliminated from the active set. To the best of our knowledge, the standard optimization algorithms for the lasso problem (such as coordinate descent, first-order methods, and quadratic programming approaches) do not consistently produce lasso solutions with the property that rank(X_A) = |A| over the active set A. This is in contrast to the solution with the largest active set, which is computed by the LARS algorithm.
Remark 3. The proof of Lemma 14 does not actually depend on the lasso problem in particular, and the arguments can be extended to cover the general ℓ1 penalized minimization problem (11), with f differentiable and strictly convex. (This is in the same spirit as our extension of lasso uniqueness results to this general problem in Section 2.) Hence, to put it explicitly: for any differentiable, strictly convex f, any X, and λ > 0, there exists a solution of (11) whose active set A is such that rank(X_A) = |A|.
The title "smallest" active set is justified, because in the next section we show that the subspace col(X_A) is invariant under all choices of active sets A, for almost every y ∈ R^n. Therefore, for such y, if A is an active set satisfying rank(X_A) = |A|, then one cannot possibly find a solution whose active set has size < |A|, as this would necessarily change the span of the active variables.

Equivalence of active subspaces
With the multiplicity of active sets (corresponding to lasso solutions of a given problem instance), there may be difficulty in identifying and interpreting important variables, as discussed in the introduction and in Section 4. Fortunately, it turns out that for almost every y, the span of the active variables does not depend on the choice of lasso solution, as shown in Tibshirani & Taylor (2012). Therefore, even though the linear models (given by lasso solutions) may report differences in individual variables, they are more or less equivalent in terms of their scope, almost everywhere in y.
Lemma 15. Fix any X and λ > 0. For almost every y ∈ R^n, the linear subspace col(X_A) is exactly the same for any active set A coming from a lasso solution.
Due to the length and technical nature of the proof, we only give a sketch here, and refer the reader to Tibshirani & Taylor (2012) for full details. First, we define a set N ⊆ R^n, somewhat like the set defined in (32) in the proof of Lemma 13, to be a union of affine subspaces of dimension ≤ n − 1; hence N has measure zero. Then, for any y outside this exceptional set N, we consider any lasso solution at y and examine its active set A. Based on the careful construction of N, we can prove the existence of an open set U containing y such that every y′ ∈ U admits a lasso solution with the same active set A. In other words, this is a result on the local stability of lasso active sets. Next, over U, the lasso fit can be expressed in terms of the projection map onto col(X_A). The uniqueness of the lasso fit finally implies that col(X_A) is the same for any choice of active set A coming from a lasso solution at y.

A necessary condition for uniqueness (almost everywhere)
We now give a necessary condition for uniqueness of the lasso solution, which holds for almost every y ∈ R^n (considering X and λ fixed but arbitrary). This is in fact the same as the sufficient condition given in Lemma 2, and hence, for almost every y, we have characterized uniqueness completely.
Lemma 16. Fix any X and λ > 0. For almost every y ∈ R^n, if the lasso solution is unique, then null(X_E) = {0}.
Proof. Let N be as defined in (32). Then for y ∉ N, the LARS lasso solution βLARS has active set equal to E; if the lasso solution is unique, then it must be the LARS lasso solution. Now suppose, for contradiction, that null(X_E) ≠ {0}, and take any b ∈ null(X_E), b ≠ 0. As the LARS lasso solution is supported on all of E, we know that s_i · βLARS_i > 0 for all i ∈ E. Hence, for small enough δ > 0, the vector β~ defined by β~_{−E} = 0 and β~_E = βLARS_E + δb still satisfies s_i · β~_i > 0 for all i ∈ E. This vector achieves the same fit as βLARS, since b ∈ null(X_E), and also the same ℓ1 norm, since ‖β~‖₁ = Σ_{i ∈ E} s_i β~_i = ‖βLARS‖₁ + δ s^T b = ‖βLARS‖₁, where s^T b = 0 because s ∈ row(X_E) and b ∈ null(X_E). Hence β~ is a second lasso solution, contradicting uniqueness. Therefore null(X_E) = {0}.

Discussion
We studied the lasso problem, covering conditions for uniqueness, as well as results aimed at better understanding the behavior of lasso solutions in the non-unique case. Some of the results presented in this paper were already known in the literature, and others were novel. We give a summary here. Section 2 showed that any one of the following three conditions is sufficient for uniqueness of the lasso solution: (i) null(X_E) = {0}, where E is the unique equicorrelation set; (ii) X has columns in general position; (iii) X has entries drawn from a continuous probability distribution (the implication now being uniqueness with probability one). These results can all be found in the literature, in one form or another. They also apply to a more general ℓ1 penalized minimization problem, provided that the loss function is differentiable and strictly convex when considered as a function of Xβ (this covers, for example, ℓ1 penalized logistic regression and ℓ1 penalized Poisson regression). Section 5 showed that for the lasso problem, the condition null(X_E) = {0} is also necessary for uniqueness of the solution, almost everywhere in y. To the best of our knowledge, this is a new result.
Sections 3 and 4 contained novel work on extending the LARS path algorithm to the non-unique case, and on bounding the coefficients of lasso solutions in the non-unique case, respectively. The newly proposed LARS algorithm works for any predictor matrix X, whereas the original LARS algorithm only works when the lasso solution path is unique. Although our extension may superficially appear to be quite minor, its proof of correctness is somewhat more involved. In Section 3 we also discussed some interesting properties of LARS lasso solutions in the non-unique case. Section 4 derived a simple method for computing marginal lower and upper bounds on the coefficients of lasso solutions of any given problem instance. It is also in this section that we showed that no two lasso solutions can exhibit different signs for a common active variable, implying that the bounding intervals cannot contain zero in their interiors. These intervals allowed us to categorize each equicorrelation variable as either "dispensable", meaning that some lasso solution excludes this variable from its active set, or "indispensable", meaning that every lasso solution includes this variable in its active set. We hope that this represents progress towards interpretation in the non-unique case.
Finally, the remainder of Section 5 reviewed existing results from the literature on the active sets of lasso solutions in the non-unique case. The first was the fact that the LARS lasso solution is fully supported on E, and hence attains the largest active set, almost everywhere in y. Next, there always exists a lasso solution whose active set A satisfies rank(X_A) = |A|, and therefore has size |A| ≤ min{n, p}. The last result gave an equivalence between all active sets of lasso solutions of a given problem instance: for almost every y, the subspace col(X_A) is the same for any active set A of a lasso solution.

A Appendix
A.1 Proof of correctness of the LARS algorithm

We prove that for a general X, the LARS algorithm (Algorithm 1) computes a lasso solution path, by induction on k, the iteration counter. The key result is Lemma 17, which shows that the LARS lasso solution is continuous at each knot λ_k in the path, as we change the equicorrelation set and signs from one iteration to the next. We delay the presentation and proof of Lemma 17 until after the proof of correctness, for the sake of clarity.
The base case k = 0 is straightforward, hence assume that the computed path is a solution path through iteration k − 1, that is, βLARS(λ) is a lasso solution for all λ ≥ λ_k. Consider the kth iteration, and let E and s denote the current equicorrelation set and signs. First we note that the LARS lasso solution, as defined in terms of the current E, s, satisfies the KKT conditions at λ_k. This is implied by Lemma 17, and the fact that the KKT conditions were satisfied at λ_k with the old equicorrelation set and signs. To be more explicit, Lemma 17 and the inductive hypothesis together imply that

X_E^T(y − X βLARS(λ_k)) = λ_k s, ‖X_{−E}^T(y − X βLARS(λ_k))‖_∞ ≤ λ_k, and s = sign(βLARS_E(λ_k)),

which verifies the KKT conditions at λ_k. Now note that for any λ ≤ λ_k (recalling the definition of βLARS(λ)), we have

X_E^T(y − X βLARS(λ)) = X_E^T y − X_E^T X_E (X_E)^+ y + λ X_E^T X_E (X_E^T X_E)^+ s = λ s,

where the last equality holds as s ∈ row(X_E). Therefore, as λ decreases, only one of the following two conditions can break: ‖X_{−E}^T(y − X βLARS(λ))‖_∞ ≤ λ, or sign(βLARS_E(λ)) = s. The first breaks at the next joining time λ^join_{k+1}, and the second breaks at the next crossing time λ^cross_{k+1}. Since we only decrease λ to λ_{k+1} = max{λ^join_{k+1}, λ^cross_{k+1}}, we have hence verified the KKT conditions for λ_{k+1} ≤ λ ≤ λ_k, completing the proof. Now we present Lemma 17, which shows that βLARS(λ) is continuous (considered as a function of λ) at every knot λ_k. This means that the constructed solution path is also globally continuous, as it is simply a linear function between knots. We note that Tibshirani & Taylor (2011) proved a parallel lemma (of the same name) for their dual path algorithm for the generalized lasso.
Lemma 17 (The insertion-deletion lemma). At the kth iteration of the LARS algorithm, let E and s denote the equicorrelation set and signs, and let E* and s* denote the same quantities at the beginning of the next iteration. The two possibilities are:

1. (Insertion) If a variable joins the equicorrelation set at λ_{k+1}, that is, E* and s* are formed by adding elements to E and s, then

[ (X_E)^+ y − (X_E^T X_E)^+ λ_{k+1} s ; 0 ] = (X_{E*})^+ y − (X_{E*}^T X_{E*})^+ λ_{k+1} s*,   (34)

where the left-hand side pads the E-based solution with a zero in the position of the joining variable.

2. (Deletion) If a variable leaves the equicorrelation set at λ_{k+1}, that is, E* and s* are formed by deleting elements from E and s, then

(X_E)^+ y − (X_E^T X_E)^+ λ_{k+1} s = [ (X_{E*})^+ y − (X_{E*}^T X_{E*})^+ λ_{k+1} s* ; 0 ],   (35)

where the right-hand side pads the E*-based solution with a zero in the position of the leaving variable.

Proof. We prove each case separately. The deletion case is actually easier, so we start with it first.
Case 2: Deletion. Let (x1, x2)^T denote the left-hand side of (35). By definition, we have x2 = 0, because variable i^cross_{k+1} crosses through zero at λ_{k+1}. Now we consider x1. Assume without loss of generality that i^cross_{k+1} is the last of the equicorrelation variables, so that we can write X_E = [X_{E*}, X_i], with i = i^cross_{k+1}. The point (x1, x2)^T is the minimum ℓ2 norm solution of the linear equation

X_E^T X_E z = X_E^T y − λ_{k+1} s.

Decomposing this into blocks,

[ X_{E*}^T X_{E*}, X_{E*}^T X_i ; X_i^T X_{E*}, X_i^T X_i ] (x1 ; x2) = ( X_{E*}^T y − λ_{k+1} s* ; X_i^T y − λ_{k+1} s_i ).

Using x2 = 0, the first block row reads X_{E*}^T X_{E*} x1 = X_{E*}^T y − λ_{k+1} s*, and solving this for x1 gives x1 = (X_{E*})^+ y − (X_{E*}^T X_{E*})^+ λ_{k+1} s* + b, where b ∈ null(X_{E*}). Recalling that x1 must have minimal ℓ2 norm, we compute

‖x1‖₂² = ‖(X_{E*})^+ y − (X_{E*}^T X_{E*})^+ λ_{k+1} s*‖₂² + ‖b‖₂²,

which is smallest when b = 0. This completes the proof.
Case 1: Insertion. This proof is similar, but a little more complicated. Now we let (x1, x2)^T denote the right-hand side of (34). Assuming without loss of generality that i^join_{k+1} is the largest of the equicorrelation variables, so that X_{E*} = [X_E, X_j] with j = i^join_{k+1}, the point (x1, x2)^T is the minimum ℓ2 norm solution of the linear equation

X_{E*}^T X_{E*} z = X_{E*}^T y − λ_{k+1} s*.

If we now decompose this into blocks, we get

[ X_E^T X_E, X_E^T X_j ; X_j^T X_E, X_j^T X_j ] (x1 ; x2) = ( X_E^T y − λ_{k+1} s ; X_j^T y − λ_{k+1} s^join_{k+1} ).
Solving this system for x1 in terms of x2 gives x1 = (X_E)^+(y − X_j x2) − (X_E^T X_E)^+ λ_{k+1} s + b, where b ∈ null(X_E), and as we argued in the deletion case, we know that b = 0 in order for x1 to have minimal ℓ2 norm. Therefore we only need to show that x2 = 0. To do this, we solve for x2 in the above block system, plug in what we know about x1, and after a bit of calculation we get

X_j^T (I − P) X_j · x2 = −( X_j^T (y − X βLARS(λ_{k+1})) − λ_{k+1} s^join_{k+1} ),

where we have abbreviated P = P_col(X_E) and j = i^join_{k+1}. But the expression inside the parentheses above is exactly X_j^T(y − X βLARS(λ_{k+1})) − λ_{k+1} s^join_{k+1} = 0, by definition of the joining time. Hence we conclude that x2 = 0, as desired, and this completes the proof.

A.2 Local LARS algorithm for the lasso path
We argue that there is nothing special about starting the LARS path algorithm at λ = ∞. Given any solution of the lasso problem at y, X, and λ* > 0, we can define the unique equicorrelation set E and signs s, as in (4) and (5). The LARS lasso solution at λ* can then be explicitly constructed as in (23), and by following the same steps as those outlined in Section 3.1, we can compute the LARS lasso solution path beginning at λ*, for decreasing values of the tuning parameter; that is, over λ ∈ [0, λ*].
In fact, the LARS lasso path can also be computed in the reverse direction, for increasing values of the tuning parameter. Beginning with the LARS lasso solution at λ*, it is not hard to see that in this direction (increasing λ) a variable enters the equicorrelation set at the next crossing time (the minimal crossing time larger than λ*), and a variable leaves the equicorrelation set at the next joining time (the minimal joining time larger than λ*). This is of course the opposite of the behavior of joining and crossing times in the usual direction (decreasing λ). Hence, in this manner, we can compute the LARS lasso path over λ ∈ [λ*, ∞).
This could be useful in studying a large lasso problem: if we knew a tuning parameter value λ* of interest (even approximate interest), then we could compute a lasso solution at λ* using one of the many efficient techniques from convex optimization (such as coordinate descent, or accelerated first-order methods), and subsequently compute a local solution path around λ* to investigate the behavior of nearby lasso solutions. This can be achieved by finding the knots to the left and right of λ* (performing one LARS iteration in the usual direction and one iteration in the reverse direction), and repeating this, until a desired range λ ∈ [λ* − δ_L, λ* + δ_R] is achieved.

A.3 Enumerating all active sets of lasso solutions
We show that the facial structure of the polytope K in (29) describes the collection of active sets of lasso solutions, almost everywhere in y.
Lemma 18. Fix any X and λ > 0. For almost every y ∈ R n , there is a one-to-one correspondence between active sets of lasso solutions and nonempty faces of the polyhedron K defined in (29).
Proof. Nonempty faces of K are sets of the form F = K ∩ H ≠ ∅, where H is a supporting hyperplane to K. If A is an active set of a lasso solution, then there exists an x ∈ K such that x_{E\A} = 0. Hence, recalling the sign condition in (28), the hyperplane H = {x ∈ R^|E| : u^T x = 0}, where u = Σ_{i ∈ E\A} s_i e_i, supports K. Furthermore, we have F = K ∩ H = {x ∈ K : Σ_{i ∈ E\A} s_i x_i = 0} = {x ∈ K : x_{E\A} = 0}. Therefore every active set A corresponds to a nonempty face F of K. Now we show that the converse statement holds, for almost every y. The facets of K are sets of the form F_i = K ∩ {x ∈ R^|E| : x_i = 0} for some i ∈ E. Each nonempty proper face F can be written as an intersection of facets, F = ∩_{i ∈ I} F_i = {x ∈ K : x_I = 0}, and hence F corresponds to the active set A = E \ I. The face F = K itself corresponds to the equicorrelation set E, which is an active set for almost every y ∈ R^n by Lemma 13.
Note that this means that we can enumerate all possible active sets of lasso solutions, at a given y, X, λ, by enumerating the faces of the polytope K. This is a well-studied problem in computational geometry; see, for example, Fukuda et al. (1997) and the references therein. It is worth mentioning that this could be computationally intensive, as the number of faces can grow very large, even for a polytope of moderate dimensions.

Table 1: The results of Algorithm 2 for the small example from the introduction, with n = 5, p = 10. Shown are the lasso coefficient bounds over the equicorrelation set E = {1, 2, 3, 4}.