Statistical Properties of Convex Clustering

In this manuscript, we study the statistical properties of convex clustering. We establish that convex clustering is closely related to single linkage hierarchical clustering and to $k$-means clustering. In addition, we derive the range of the tuning parameter for which convex clustering yields a non-trivial solution. We also provide an unbiased estimate of the degrees of freedom and a finite sample bound on the prediction error for convex clustering. Finally, we compare convex clustering to some traditional clustering methods in simulation studies.


Introduction
Let $X \in \mathbb{R}^{n \times p}$ be a data matrix with $n$ observations and $p$ features. We assume for convenience that the rows of $X$ are unique. The goal of clustering is to partition the $n$ observations into $K$ clusters, $D_1, \ldots, D_K$, based on some similarity measure. Traditional clustering methods such as hierarchical clustering, $k$-means clustering, and spectral clustering take a greedy approach (see, e.g., Hastie, Tibshirani and Friedman, 2009).
In recent years, several authors have proposed formulations for convex clustering (Pelckmans et al., 2005; Hocking et al., 2011; Lindsten, Ohlsson and Ljung, 2011; Chi and Lange, 2014a). Chi and Lange (2014a) proposed efficient algorithms for convex clustering. In addition, Radchenko and Mukherjee (2014) studied the theoretical properties of a problem closely related to convex clustering, and Zhu et al. (2014) studied the conditions needed for convex clustering to recover the correct clusters.
Convex clustering of the rows $X_{1\cdot}, \ldots, X_{n\cdot}$ of a data matrix $X$ involves solving the convex optimization problem
$$\underset{U \in \mathbb{R}^{n \times p}}{\text{minimize}} \; \frac{1}{2}\|X - U\|_F^2 + \lambda Q_q(U), \qquad (1)$$
where $Q_q(U) = \sum_{i<i'} \|U_{i\cdot} - U_{i'\cdot}\|_q$ for $q \in \{1, 2, \infty\}$. The penalty $Q_q(U)$ generalizes the fused lasso penalty proposed in Tibshirani et al. (2005), and encourages the rows of $\hat{U}$, the solution to (1), to take on a small number of unique values. On the basis of $\hat{U}$, we define the estimated clusters as follows.
Definition 1. The $i$th and $i'$th observations are estimated by convex clustering to belong to the same cluster if and only if $\hat{U}_{i\cdot} = \hat{U}_{i'\cdot}$.
The tuning parameter $\lambda$ controls the number of unique rows of $\hat{U}$, i.e., the number of estimated clusters. When $\lambda = 0$, $\hat{U} = X$, and so each observation belongs to its own cluster. As $\lambda$ increases, the number of unique rows of $\hat{U}$ will decrease. For sufficiently large $\lambda$, all rows of $\hat{U}$ will be identical, and so all observations will be estimated to belong to a single cluster. Note that (1) is strictly convex, and therefore the solution $\hat{U}$ is unique.
To simplify our analysis of convex clustering, we rewrite (1). Let $x = \mathrm{vec}(X) \in \mathbb{R}^{np}$ and let $u = \mathrm{vec}(U) \in \mathbb{R}^{np}$, where the $\mathrm{vec}(\cdot)$ operator is such that $x_{(i-1)p+j} = X_{ij}$ and $u_{(i-1)p+j} = U_{ij}$. Construct $D \in \mathbb{R}^{[p \cdot \binom{n}{2}] \times np}$, and define the index set $C(i,i')$ such that the $p \times np$ submatrix $D_{C(i,i')}$ satisfies $D_{C(i,i')} u = U_{i\cdot} - U_{i'\cdot}$. Furthermore, for a vector $b \in \mathbb{R}^{p \cdot \binom{n}{2}}$, we define
$$P_q(b) = \sum_{i<i'} \|b_{C(i,i')}\|_q. \qquad (2)$$
Thus, we have $P_q(Du) = \sum_{i<i'} \|D_{C(i,i')} u\|_q = \sum_{i<i'} \|U_{i\cdot} - U_{i'\cdot}\|_q = Q_q(U)$, and problem (1) can be rewritten as
$$\underset{u \in \mathbb{R}^{np}}{\text{minimize}} \; \frac{1}{2}\|x - u\|_2^2 + \lambda P_q(Du). \qquad (3)$$
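To make the construction of $D$ concrete, the following sketch builds $D$ for a small example and checks that $P_q(Du) = Q_q(U)$, as well as the singular value property stated in Lemma 1(v) below. This is a minimal illustration under our own naming; `build_D` is not from any package.

```python
import numpy as np
from itertools import combinations

def build_D(n, p):
    """Build the [p*C(n,2)] x np matrix D whose p x np submatrix D_{C(i,i')}
    satisfies D_{C(i,i')} vec(U) = U_i. - U_i'. (rows of U stacked)."""
    blocks = []
    for i, ip in combinations(range(n), 2):
        block = np.zeros((p, n * p))
        block[:, i * p:(i + 1) * p] = np.eye(p)      # picks out + U_i.
        block[:, ip * p:(ip + 1) * p] = -np.eye(p)   # picks out - U_i'.
        blocks.append(block)
    return np.vstack(blocks)

n, p = 5, 3
U = np.random.randn(n, p)
D = build_D(n, p)
u = U.reshape(-1)                        # vec(U): u_{(i-1)p+j} = U_ij
diffs = (D @ u).reshape(-1, p)           # one row per pair: U_i. - U_i'.

# P_2(Du) agrees with Q_2(U) computed directly from the rows of U
Q2 = sum(np.linalg.norm(U[i] - U[j]) for i, j in combinations(range(n), 2))
P2 = np.linalg.norm(diffs, axis=1).sum()
assert np.isclose(Q2, P2)

# Lemma 1(v): all non-zero singular values of D equal sqrt(n)
sv = np.linalg.svd(D, compute_uv=False)
assert np.allclose(sv[sv > 1e-8], np.sqrt(n))
```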
When $q = 1$, (3) is an instance of the generalized lasso problem studied in Tibshirani and Taylor (2011). Let $\hat{u}$ be the solution to (3). By Definition 1, the $i$th and $i'$th observations belong to the same cluster if and only if $D_{C(i,i')} \hat{u} = 0$.
In what follows, we work with (3) instead of (1) for convenience. Let $D^\dagger \in \mathbb{R}^{np \times [p \cdot \binom{n}{2}]}$ be the Moore-Penrose pseudo-inverse of $D$. We state some properties of $D$ and $D^\dagger$ that will prove useful in later sections.

Lemma 1. The matrices $D$ and $D^\dagger$ have the following properties.
(i) The rank of $D$ is $p(n-1)$.
(ii) $DD^T D = nD$.
(iii) $(DD^T)^\dagger = \frac{1}{n^2} DD^T$.
(iv) $\frac{1}{n} DD^T$ is a projection matrix onto the column space of $D$.
(v) Define $\Lambda_{\min}(D)$ and $\Lambda_{\max}(D)$ as the minimum non-zero singular value and maximum singular value of the matrix $D$, respectively. Then, $\Lambda_{\min}(D) = \Lambda_{\max}(D) = \sqrt{n}$.
In this manuscript, we study the statistical properties of convex clustering. In Section 2, we study the dual problem of (3) and use it to establish that convex clustering is closely related to single linkage hierarchical clustering. In addition, we establish a connection between k-means clustering and convex clustering. In Section 3, we present some properties of convex clustering. More specifically, we characterize the range of the tuning parameter λ in (3) such that convex clustering yields a non-trivial solution. We also provide a finite sample bound for the prediction error, and an unbiased estimator of the degrees of freedom for convex clustering. In Section 4, we conduct numerical studies to evaluate the empirical performance of convex clustering relative to some existing proposals. We close with a discussion in Section 5.

Convex clustering, single linkage hierarchical clustering, and k-means clustering
In Section 2.1, we study the dual problem of convex clustering (3). Through its dual problem, we establish a connection between convex clustering and single linkage hierarchical clustering in Section 2.2. We then show that convex clustering is closely related to k-means clustering in Section 2.3.

Dual problem of convex clustering
We analyze convex clustering (3) by studying its dual problem. Let $s, q \in \{1, 2, \infty\}$ satisfy $\frac{1}{s} + \frac{1}{q} = 1$. For a vector $b \in \mathbb{R}^{p \cdot \binom{n}{2}}$, let $P_q^*(b)$ denote the dual norm of $P_q(b)$, which takes the form
$$P_q^*(b) = \max_{i<i'} \|b_{C(i,i')}\|_s. \qquad (4)$$
We refer the reader to Chapter 5 in Boyd and Vandenberghe (2004) for an overview of the concept of duality.

Lemma 2. The dual problem of convex clustering (3) is
$$\underset{\nu \in \mathbb{R}^{p \cdot \binom{n}{2}}}{\text{minimize}} \; \frac{1}{2}\|x - D^T \nu\|_2^2 \quad \text{subject to} \quad P_q^*(\nu) \le \lambda, \qquad (5)$$
where $\nu \in \mathbb{R}^{p \cdot \binom{n}{2}}$ is the dual variable. Furthermore, let $\hat{u}$ and $\hat{\nu}$ be the solutions to (3) and (5), respectively. Then,
$$D\hat{u} = Dx - DD^T \hat{\nu}. \qquad (6)$$
While (3) is strictly convex, its dual problem (5) is not strictly convex, since $D$ is not of full rank by Lemma 1(i). Therefore, the solution $\hat{\nu}$ to (5) is not unique. Lemma 1(iv) indicates that $\frac{1}{n} DD^T$ is a projection matrix onto the column space of $D$. Thus, the solution $D\hat{u}$ in (6) can be interpreted as the difference between $Dx$, the vector of pairwise differences between rows of $X$, and the projection of a dual variable onto the column space of $D$.
We now consider a modification to the convex clustering problem (3). Recall from Definition 1 that the $i$th and $i'$th observations are in the same estimated cluster if $D_{C(i,i')} \hat{u} = 0$. This motivates us to estimate $\gamma = Du$ directly by solving
$$\underset{\gamma \in \mathbb{R}^{p \cdot \binom{n}{2}}}{\text{minimize}} \; \frac{1}{2}\|Dx - \gamma\|_2^2 + \lambda P_q(\gamma). \qquad (7)$$
We establish a connection between (3) and (7) by studying the dual problem of (7).
Lemma 3. The dual problem of (7) is
$$\underset{\nu \in \mathbb{R}^{p \cdot \binom{n}{2}}}{\text{minimize}} \; \frac{1}{2}\|Dx - \nu\|_2^2 \quad \text{subject to} \quad P_q^*(\nu) \le \lambda, \qquad (8)$$
where $\nu \in \mathbb{R}^{p \cdot \binom{n}{2}}$ is the dual variable. Furthermore, let $\hat{\gamma}$ and $\hat{\nu}$ be the solutions to (7) and (8), respectively. Then,
$$\hat{\gamma} = Dx - \hat{\nu}. \qquad (9)$$
Comparing (6) and (9), we see that the solutions to convex clustering (3) and the modified problem (7) are closely related. In particular, both $D\hat{u}$ in (6) and $\hat{\gamma}$ in (9) involve taking the difference between $Dx$ and some function of a dual variable that has $P_q^*(\cdot)$ norm less than or equal to $\lambda$. The main difference is that in (6), the dual variable is projected onto the column space of $D$.
Problem (7) is quite simple: it amounts to a thresholding operation on $Dx$ when $q = 1$ or $q = 2$. That is, the solution $\hat{\gamma}$ is obtained by performing soft thresholding on $Dx$ when $q = 1$, or group soft thresholding on $D_{C(i,i')} x$ for all $i < i'$ when $q = 2$. When $q = \infty$, an efficient algorithm was proposed by Duchi and Singer (2009).
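For concreteness, the closed-form solution to (7) takes only a few lines of code. The sketch below is our own illustration of the soft and group soft thresholding operations; it assumes `Dx` is the stacked vector of pairwise row differences, with one length-$p$ block per pair $i < i'$.

```python
import numpy as np

def solve_gamma(Dx, lam, q, p):
    """Closed-form solution to (7): argmin 0.5 * ||Dx - g||^2 + lam * P_q(g)."""
    if q == 1:
        # elementwise soft thresholding
        return np.sign(Dx) * np.maximum(np.abs(Dx) - lam, 0.0)
    elif q == 2:
        # group soft thresholding, one group per pair (i, i')
        G = Dx.reshape(-1, p)
        norms = np.linalg.norm(G, axis=1, keepdims=True)
        scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
        return (scale * G).reshape(-1)
    raise ValueError("q must be 1 or 2 in this sketch")
```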

Convex clustering and single linkage hierarchical clustering
In this section, we establish a connection between convex clustering and single linkage hierarchical clustering. Let $\hat{\gamma}^q$ be the solution to (7) with the $P_q(\cdot)$ penalty, and let $s, q \in \{1, 2, \infty\}$ satisfy $\frac{1}{s} + \frac{1}{q} = 1$. Since (7) is separable in $\gamma_{C(i,i')}$ for all $i < i'$, by Lemma 2.1 in Haris, Witten and Simon (2015), it can be verified that
$$\hat{\gamma}^q_{C(i,i')} = 0 \quad \text{if and only if} \quad \|D_{C(i,i')} x\|_s \le \lambda. \qquad (10)$$
It might be tempting to conclude that a pair of observations $(i, i')$ belong to the same cluster if $\hat{\gamma}^q_{C(i,i')} = 0$. However, by inspection of (10), it could happen that $\hat{\gamma}^q_{C(i,i')} = 0$ and $\hat{\gamma}^q_{C(i',i'')} = 0$, and yet $\hat{\gamma}^q_{C(i,i'')} \ne 0$. To overcome this problem, we define the $n \times n$ adjacency matrix $A^q(\lambda)$ as
$$[A^q(\lambda)]_{ii'} = \begin{cases} 1 & \text{if } \hat{\gamma}^q_{C(i,i')} = 0 \text{ or } i = i', \\ 0 & \text{otherwise}. \end{cases} \qquad (11)$$
Subject to a rearrangement of the rows and columns, $A^q(\lambda)$ is a block-diagonal matrix with some number of blocks, denoted as $R$. On the basis of $A^q(\lambda)$, we define $R$ estimated clusters: the indices of the observations in the $r$th cluster are the same as the indices of the observations in the $r$th block of $A^q(\lambda)$. We now present a lemma on the equivalence between single linkage hierarchical clustering and the clusters identified by (7) using (11). The lemma follows directly from the definition of single linkage clustering (see, for instance, Chapter 3.2 of Jain and Dubes, 1988).

Lemma 4. The clusters identified by (7) using (11) with tuning parameter $\lambda$ are the same as the clusters obtained by performing single linkage hierarchical clustering with pairwise dissimilarities $\|X_{i\cdot} - X_{i'\cdot}\|_s$ and cutting the resulting dendrogram at height $\lambda$.

In other words, Lemma 4 implies that single linkage hierarchical clustering and (7) yield the same estimated clusters. Recalling the connection between (3) and (7) established in Section 2.1, this implies a close connection between convex clustering and single linkage hierarchical clustering.
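Operationally, the blocks of $A^q(\lambda)$ are the connected components of the graph that joins $i$ and $i'$ whenever $\|D_{C(i,i')} x\|_s \le \lambda$, since $D_{C(i,i')} x = X_{i\cdot} - X_{i'\cdot}$. The following sketch is our own illustration; it works directly with $X$ rather than forming $D$.

```python
import numpy as np
from itertools import combinations
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def clusters_from_threshold(X, lam, s=2):
    """Blocks of the adjacency matrix A^q(lambda) in (11): join observations
    i and i' whenever ||X_i. - X_i'.||_s <= lambda, then take connected
    components. Yields the same partition as single linkage cut at height lambda."""
    n = X.shape[0]
    A = np.eye(n)
    for i, j in combinations(range(n), 2):
        if np.linalg.norm(X[i] - X[j], ord=s) <= lam:
            A[i, j] = A[j, i] = 1
    _, labels = connected_components(csr_matrix(A), directed=False)
    return labels
```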

Convex clustering and k-means clustering
We now establish a connection between convex clustering and $k$-means clustering. $k$-means clustering seeks to partition the $n$ observations into $K$ clusters by minimizing the within-cluster sum of squares. That is, the clusters are given by the partition $\hat{D}_1, \ldots, \hat{D}_K$ of $\{1, \ldots, n\}$ that solves the optimization problem
$$\underset{D_1, \ldots, D_K}{\text{minimize}} \; \sum_{k=1}^{K} \sum_{i \in D_k} \|X_{i\cdot} - \bar{X}_{D_k}\|_2^2, \qquad (12)$$
where $\bar{X}_{D_k} = \frac{1}{|D_k|} \sum_{i \in D_k} X_{i\cdot}$ is the mean of the observations in the $k$th cluster. We consider convex clustering (1) with $q = 0$,
$$\underset{U \in \mathbb{R}^{n \times p}}{\text{minimize}} \; \frac{1}{2}\|X - U\|_F^2 + \lambda \sum_{i<i'} 1_{(U_{i\cdot} \ne U_{i'\cdot})}, \qquad (13)$$
where (13) is no longer a convex optimization problem. We now establish a connection between (12) and (13). For a given value of $\lambda$, (13) is equivalent to
$$\underset{K, E_1, \ldots, E_K}{\text{minimize}} \; \left\{ \frac{1}{2} \sum_{k=1}^{K} \sum_{i \in E_k} \|X_{i\cdot} - \bar{X}_{E_k}\|_2^2 + \lambda \sum_{i<i'} \sum_{k=1}^{K} 1_{(i \in E_k, \, i' \notin E_k)} \right\}, \qquad (14)$$
where $1_{(i \in E_k, \, i' \notin E_k)}$ is an indicator function that equals one if $i \in E_k$ and $i' \notin E_k$, and zero otherwise. Thus, we see from (12) and (14) that $k$-means clustering is equivalent to convex clustering with $q = 0$, up to the penalty term $\lambda \sum_{i<i'} \sum_{k=1}^{K} 1_{(i \in E_k, \, i' \notin E_k)}$. To interpret the penalty term, we consider the case of two clusters $E_1$ and $E_2$. The penalty term reduces to $\lambda |E_1| \cdot (n - |E_1|)$, where $|E_1|$ is the cardinality of the set $E_1$. The term $\lambda |E_1| \cdot (n - |E_1|)$ is minimized when $|E_1|$ is either $1$ or $n - 1$, encouraging one cluster to contain only a single observation. Thus, compared to $k$-means clustering, convex clustering with $q = 0$ has the undesirable behavior of producing clusters whose sizes are highly unbalanced.

Properties of convex clustering
We now study the properties of convex clustering (3) with q ∈ {1, 2}. In Section 3.1, we establish the range of the tuning parameter λ in (3) such that convex clustering yields a non-trivial solution with more than one cluster. We provide finite sample bounds for the prediction error of convex clustering in Section 3.2. Finally, we provide unbiased estimates of the degrees of freedom for convex clustering in Section 3.3.

Range of λ that yields non-trivial solution
In this section, we establish the range of the tuning parameter λ such that convex clustering (3) yields a solution with more than one cluster.

Lemma 5. Let
$$\lambda_{\mathrm{upper}} = \min_{\omega \in \mathbb{R}^{p \cdot \binom{n}{2}}} \; \max_{i<i'} \left\| \frac{1}{n} D_{C(i,i')} x + \left[ \left( I - \frac{1}{n} DD^T \right) \omega \right]_{C(i,i')} \right\|_s. \qquad (15)$$
Convex clustering (3) with $q = 1$ or $q = 2$ yields a non-trivial solution with more than one cluster if and only if $\lambda < \lambda_{\mathrm{upper}}$.
By Lemma 5, we see that calculating $\lambda_{\mathrm{upper}}$ boils down to solving a convex optimization problem. This can be done using a standard solver such as CVX in MATLAB. In the absence of such a solver, a loose upper bound on $\lambda_{\mathrm{upper}}$ is given by $\frac{1}{n}\|Dx\|_\infty$ for $q = 1$, or $\max_{i<i'} \frac{1}{n}\|D_{C(i,i')} x\|_2$ for $q = 2$. Therefore, to obtain the entire solution path of convex clustering, we need only consider values of $\lambda$ that satisfy $\lambda \le \lambda_{\mathrm{upper}}$.
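The loose bounds are inexpensive to compute directly from $X$; they correspond to taking $\omega = 0$ in (15). A minimal sketch, with our own helper name:

```python
import numpy as np

def lambda_upper_loose(X, q):
    """Loose upper bounds on lambda_upper from Lemma 5 (setting omega = 0)."""
    n, p = X.shape
    diffs = X[:, None, :] - X[None, :, :]      # diffs[i, j] = X_i. - X_j.
    iu = np.triu_indices(n, k=1)
    pair_diffs = diffs[iu]                     # one length-p row per pair i < i'
    if q == 1:
        return np.abs(pair_diffs).max() / n    # (1/n) * ||Dx||_inf
    if q == 2:
        # max over pairs of (1/n) * ||D_{C(i,i')} x||_2
        return np.linalg.norm(pair_diffs, axis=1).max() / n
    raise ValueError("q must be 1 or 2 in this sketch")
```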

Bounds on prediction error
In this section, we assume the model $x = u + \epsilon$, where $\epsilon \in \mathbb{R}^{np}$ is a vector of independent sub-Gaussian noise terms with mean zero and variance $\sigma^2$, and $u$ is an arbitrary $np$-dimensional mean vector. We refer the reader to pages 24-25 in Boucheron, Lugosi and Massart (2013) for the properties of sub-Gaussian random variables. We now provide finite sample bounds for the prediction error of convex clustering (3). Let $\lambda$ be the tuning parameter in (3) and let $\bar{\lambda} = \frac{\lambda}{np}$.

Lemma 6. Suppose that $x = u + \epsilon$, where $\epsilon \in \mathbb{R}^{np}$ and the elements of $\epsilon$ are independent sub-Gaussian random variables with mean zero and variance $\sigma^2$. Let $\hat{u}$ be the estimate obtained from (3) with $q = 1$. Then, with probability controlled by the positive constants $c_1$ and $c_2$ appearing in Lemma 10, the average prediction error $\frac{1}{np}\|\hat{u} - u\|_2^2$ is bounded by a multiple of $\bar{\lambda}\|Du\|_1$ plus a term that decays to zero as $n, p \to \infty$.
We see from Lemma 6 that the average prediction error is bounded by the oracle quantity $\|Du\|_1$ and a second term that decays to zero as $n, p \to \infty$. Convex clustering with $q = 1$ is prediction consistent if $\bar{\lambda}\|Du\|_1 = o(1)$. We now provide a scenario for which $\bar{\lambda}\|Du\|_1 = o(1)$ holds. Suppose that we are in the high-dimensional setting in which $p > n$ and the true underlying clusters differ only with respect to a fixed number of features (Witten and Tibshirani, 2010). Also, suppose that each element of $Du$, that is, each $U_{ij} - U_{i'j}$, is of order $O(1)$. Therefore, $\|Du\|_1 = O(n^2)$, since by assumption only a fixed number of features have different means across clusters. Assume that $n \log\big(p \cdot \binom{n}{2}\big)/p^2 = o(1)$. Under these assumptions, convex clustering with $q = 1$ is prediction consistent.
Next, we present a finite sample bound on the prediction error for convex clustering with $q = 2$.

Lemma 7. Suppose that $x = u + \epsilon$, where $\epsilon \in \mathbb{R}^{np}$ and the elements of $\epsilon$ are independent sub-Gaussian random variables with mean zero and variance $\sigma^2$. Let $\hat{u}$ be the estimate obtained from (3) with $q = 2$. Then, with probability controlled by the positive constants $c_1$ and $c_2$ appearing in Lemma 10, the average prediction error $\frac{1}{np}\|\hat{u} - u\|_2^2$ is bounded by a multiple of $\bar{\lambda} P_2(Du)$ plus a term that decays to zero as $n, p \to \infty$.

Under the scenario described above, convex clustering with $q = 2$ is likewise prediction consistent.

Degrees of freedom
Convex clustering recasts the clustering problem as a penalized regression problem, for which the notion of degrees of freedom is established (Efron, 1986). Under this framework, we provide an unbiased estimator of the degrees of freedom for clustering. Recall that $\hat{u}$ is the solution to convex clustering (3). Suppose that $\mathrm{Var}(x) = \sigma^2 I$. Then, the degrees of freedom for convex clustering is defined as
$$\mathrm{df} = \frac{1}{\sigma^2} \sum_{j=1}^{np} \mathrm{Cov}(\hat{u}_j, x_j)$$
(Efron, 1986). An unbiased estimator of the degrees of freedom for convex clustering with $q = 1$ follows directly from Theorem 3 in Tibshirani and Taylor (2012).

Lemma 8.
Assume that $x \sim \mathrm{MVN}(u, \sigma^2 I)$, and let $\hat{u}$ be the solution to (3) with $q = 1$. Furthermore, let $\hat{B}_1 = \{j : (D\hat{u})_j \ne 0\}$. We define the matrix $D_{-\hat{B}_1}$ by removing the rows of $D$ that correspond to $\hat{B}_1$. Then,
$$\widehat{\mathrm{df}}_1 = \dim\left(\mathrm{null}(D_{-\hat{B}_1})\right) \qquad (16)$$
is an unbiased estimator of the degrees of freedom of convex clustering with $q = 1$.
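In practice, (16) can be evaluated as the nullity of $D_{-\hat{B}_1}$. The sketch below is our own illustration; it assumes $D$ and the solution $\hat{u}$ are available as arrays.

```python
import numpy as np

def df_hat_q1(D, u_hat, tol=1e-8):
    """Unbiased df estimate (16): the nullity of D after removing the rows
    where (D u_hat)_j != 0 (the set B_hat_1)."""
    active = np.abs(D @ u_hat) > tol    # B_hat_1: nonzero coordinates of D u_hat
    D_minus = D[~active]                # keep rows with (D u_hat)_j = 0
    if D_minus.shape[0] == 0:
        return D.shape[1]               # no remaining constraints: nullity is np
    rank = np.linalg.matrix_rank(D_minus, tol=tol)
    return D_minus.shape[1] - rank      # dim null(D_{-B_hat_1})
```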
The following corollary follows directly from Corollary 1 in Tibshirani and Taylor (2011).

Corollary 1.
Assume that $x \sim \mathrm{MVN}(u, \sigma^2 I)$, and let $\hat{u}$ be the solution to (3) with $q = 1$. The fit $\hat{u}$ has degrees of freedom equal to the expected number of fused groups in $\hat{u}$. There is an interesting interpretation of the degrees of freedom estimator for convex clustering with $q = 1$. Suppose that there are $K$ estimated clusters, and all elements of the estimated means corresponding to the $K$ estimated clusters are unique. Then the degrees of freedom is $Kp$, the product of the number of estimated clusters and the number of features.
Next, we provide an unbiased estimator of the degrees of freedom for convex clustering with q = 2.
Lemma 9. Assume that $x \sim \mathrm{MVN}(u, \sigma^2 I)$, and let $\hat{u}$ be the solution to (3) with $q = 2$. Let $\hat{B}_2 = \{(i,i') : \|D_{C(i,i')} \hat{u}\|_2 \ne 0\}$, define $D_{-\hat{B}_2}$ by removing the rows of $D$ that correspond to elements in $\hat{B}_2$, and let $P = I - D_{-\hat{B}_2}^T (D_{-\hat{B}_2} D_{-\hat{B}_2}^T)^\dagger D_{-\hat{B}_2}$. Then,
$$\widehat{\mathrm{df}}_2 = \mathrm{tr}\left\{ \left[ I + \lambda \sum_{(i,i') \in \hat{B}_2} \frac{P D_{C(i,i')}^T}{\|D_{C(i,i')} \hat{u}\|_2} \left( I_p - \frac{(D_{C(i,i')} \hat{u})(D_{C(i,i')} \hat{u})^T}{\|D_{C(i,i')} \hat{u}\|_2^2} \right) D_{C(i,i')} \right]^{-1} P \right\} \qquad (17)$$
is an unbiased estimator of the degrees of freedom of convex clustering with $q = 2$.
When $\lambda = 0$, $\|D_{C(i,i')} \hat{u}\|_2 \ne 0$ for all $i < i'$, since the rows of $X$ are assumed to be unique. Therefore, $P = I \in \mathbb{R}^{np \times np}$ and the degrees of freedom estimate is equal to $\mathrm{tr}(I) = np$. When $\lambda$ is sufficiently large that $\hat{B}_2$ is an empty set, one can verify that $P = I - D^T (DD^T)^\dagger D$ is a projection matrix of rank $p$, using the fact that $\mathrm{rank}(D) = p(n-1)$ from Lemma 1(i). Therefore, $\widehat{\mathrm{df}}_2 = \mathrm{tr}(P) = p$.
We now assess the accuracy of the proposed unbiased estimators of the degrees of freedom. We simulate Gaussian clusters with $K = 2$ as described in Section 4.1 with $n = p = 20$ and $\sigma = 0.5$. We perform convex clustering with $q = 1$ and $q = 2$ across a fine grid of tuning parameters $\lambda$. For each $\lambda$, we compare the quantities (16) and (17) to
$$\frac{1}{\sigma^2} \sum_{j=1}^{np} \hat{u}_j (x_j - u_j), \qquad (18)$$
which is an unbiased estimator of the true degrees of freedom, $\frac{1}{\sigma^2} \sum_{j=1}^{np} \mathrm{Cov}(\hat{u}_j, x_j)$, averaged over 500 data sets. In addition, we plot the pointwise intervals of the estimated degrees of freedom (mean $\pm$ 2 $\times$ standard deviation). Note that (18) cannot be computed in practice, since it requires knowledge of the unknown quantity $u$. Results are displayed in Figure 1. We see that the estimated degrees of freedom are quite close to the true degrees of freedom.
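The Monte Carlo comparison based on (18) can be organized along the following lines. This is a sketch under our own naming; `solver` stands in for any routine returning the solution to (3) for a given $\lambda$ (for example, a wrapper around cvxclustr) and is hypothetical.

```python
import numpy as np

def mc_degrees_of_freedom(u, sigma, lam, solver, n_rep=500, seed=0):
    """Monte Carlo evaluation of (18): (1/sigma^2) * sum_j u_hat_j (x_j - u_j),
    averaged over simulated data sets. It requires the true mean vector u,
    so it is a diagnostic for simulation studies only."""
    rng = np.random.default_rng(seed)
    vals = np.empty(n_rep)
    for r in range(n_rep):
        x = u + sigma * rng.standard_normal(u.size)
        u_hat = solver(x, lam)                 # solution of (3) at this lambda
        vals[r] = u_hat @ (x - u) / sigma**2
    return vals.mean(), vals.std()
```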

Simulation studies
We compare convex clustering with $q = 1$ and $q = 2$ to the following proposals:
1. Single linkage hierarchical clustering with the dissimilarity matrix defined by the Euclidean distance between two observations.
2. The $k$-means clustering algorithm (Lloyd, 1982).
3. Average linkage hierarchical clustering with the dissimilarity matrix defined by the Euclidean distance between two observations.
We apply convex clustering (3) with $q \in \{1, 2\}$ using the R package cvxclustr (Chi and Lange, 2014b). In order to obtain the entire solution path for convex clustering, we use a fine grid of $\lambda$ values for (3), in a range guided by Lemma 5. We apply the other methods by allowing the number of clusters to vary from 1 to $n$. To evaluate and quantify the performance of the different clustering methods, we use the Rand index (Rand, 1971), computed as in the sketch below. A high value of the Rand index indicates good agreement between the true and estimated clusters.
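The Rand index counts the pairs of observations on which two partitions agree. The following is a minimal sketch of this computation, using our own helper name rather than an existing package:

```python
import numpy as np
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of pairs (i, i') on which two partitions agree
    (both together or both apart); 1 means identical partitions."""
    agree, total = 0, 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += (same_a == same_b)
        total += 1
    return agree / total
```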
We consider two different types of clusters in our simulation studies: Gaussian clusters and non-convex clusters.

Gaussian clusters
We generate Gaussian clusters with $K = 2$ and $K = 3$ by randomly assigning each observation to a cluster with equal probability. For $K = 2$, we create the mean vectors $\mu_1 = \mathbf{1}_p$ and $\mu_2 = -\mathbf{1}_p$. For $K = 3$, we create the mean vectors $\mu_1 = -3 \cdot \mathbf{1}_p$, $\mu_2 = \mathbf{0}_p$, and $\mu_3 = 3 \cdot \mathbf{1}_p$. We then generate the $n \times p$ data matrix $X$ according to $X_{i\cdot} \sim \mathrm{MVN}(\mu_k, \sigma^2 I)$ for $i \in D_k$. We consider $n = p = 30$ and $\sigma \in \{1, 2\}$. The Rand indices for $K = 2$ and $K = 3$, averaged over 200 data sets, are summarized in Figures 2 and 3, respectively.
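A sketch of this data-generating mechanism (our own code, with an explicit seed for reproducibility):

```python
import numpy as np

def gaussian_clusters(n, p, K, sigma, seed=0):
    """Assign each observation to one of K clusters with equal probability,
    then draw X_i. ~ MVN(mu_k, sigma^2 I) with the means of Section 4.1."""
    rng = np.random.default_rng(seed)
    if K == 2:
        mus = np.vstack([np.ones(p), -np.ones(p)])
    elif K == 3:
        mus = np.vstack([-3 * np.ones(p), np.zeros(p), 3 * np.ones(p)])
    else:
        raise ValueError("only K = 2 or K = 3 are used in this study")
    z = rng.integers(K, size=n)                       # cluster assignments
    X = mus[z] + sigma * rng.standard_normal((n, p))  # add Gaussian noise
    return X, z
```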
Recall from Section 2.2 that there is a connection between convex clustering and single linkage clustering. However, we note that the two clustering methods are not equivalent. From Figure 2(a), we see that single linkage hierarchical clustering performs very similarly to convex clustering with $q = 2$ when the signal-to-noise ratio is high. However, from Figure 2(b), we see that single linkage hierarchical clustering outperforms convex clustering with $q = 2$ when the signal-to-noise ratio is low. We also established a connection between convex clustering and $k$-means clustering in Section 2.3. From Figure 2(a), we see that $k$-means clustering and convex clustering with $q = 2$ perform similarly when two clusters are estimated and the signal-to-noise ratio is high. In this case, the first term in (14) can be made extremely small if the clusters are correctly estimated, and so both $k$-means and convex clustering yield the same (correct) cluster estimates. In contrast, when the signal-to-noise ratio is low, the first term in (14) is relatively large regardless of whether or not the clusters are correctly estimated, and so convex clustering focuses on minimizing the penalty term in (14). Therefore, when convex clustering with $q = 2$ estimates two clusters, one cluster is of size one and the other is of size $n - 1$, as discussed in Section 2.3. Figure 2(b) illustrates this phenomenon when both methods estimate two clusters: convex clustering with $q = 2$ has a Rand index of approximately 0.5 while $k$-means clustering has a Rand index of one.
All methods outperform convex clustering with $q = 1$. Moreover, $k$-means clustering and average linkage hierarchical clustering outperform single linkage hierarchical clustering and convex clustering when the signal-to-noise ratio is low. This suggests that the minimum signal needed for convex clustering to identify the correct clusters may be larger than that of average linkage hierarchical clustering and $k$-means clustering. We see similar results for the case $K = 3$ in Figure 3.

Non-convex clusters
We consider two types of non-convex clusters: two circles (Ng, Jordan and Weiss, 2002) and two half-moons (Hocking et al., 2011; Chi and Lange, 2014a). For the two circles, we generate 50 data points from each of two circles centered at $(0, 0)$ with radii 2 and 10, respectively. We then add Gaussian random noise with mean zero and standard deviation 0.1 to each data point. For the two half-moons, we generate 50 data points from each of two half-circles of radius 30, centered at $(0, 0)$ and $(30, 3)$, respectively. We then add Gaussian random noise with mean zero and standard deviation one to each data point. Illustrations of both types of clusters are given in Figure 4. The Rand indices for both types of clusters, averaged over 200 data sets, are summarized in Figure 5.
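The two simulation designs can be sketched as follows. This is our own code; the text specifies only the centers, radii, and noise levels, so which half of each circle is sampled for the half-moons is our assumption.

```python
import numpy as np

def two_circles(n_per=50, noise=0.1, seed=0):
    """50 points on each of two circles centered at the origin
    with radii 2 and 10, plus Gaussian noise."""
    rng = np.random.default_rng(seed)
    X, z = [], []
    for label, r in enumerate([2.0, 10.0]):
        theta = rng.uniform(0, 2 * np.pi, n_per)
        pts = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
        X.append(pts + noise * rng.standard_normal(pts.shape))
        z += [label] * n_per
    return np.vstack(X), np.array(z)

def two_half_moons(n_per=50, noise=1.0, seed=0):
    """Two half-circles of radius 30 centered at (0, 0) and (30, 3), plus
    Gaussian noise; the orientation of each half-circle is our assumption."""
    rng = np.random.default_rng(seed)
    t1 = rng.uniform(0, np.pi, n_per)            # upper half, centered at (0, 0)
    t2 = rng.uniform(np.pi, 2 * np.pi, n_per)    # lower half, centered at (30, 3)
    m1 = np.column_stack([30 * np.cos(t1), 30 * np.sin(t1)])
    m2 = np.column_stack([30 * np.cos(t2) + 30, 30 * np.sin(t2) + 3])
    X = np.vstack([m1, m2]) + noise * rng.standard_normal((2 * n_per, 2))
    return X, np.repeat([0, 1], n_per)
```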
We see from Figure 5 that convex clustering with $q = 2$ and single linkage hierarchical clustering have similar performance, and that they outperform all of the other methods. Single linkage hierarchical clustering is able to identify non-convex clusters since it is an agglomerative algorithm that merges the closest pair of observations not yet belonging to the same cluster into one cluster. In contrast, average linkage hierarchical clustering and $k$-means clustering are known to perform poorly at identifying non-convex clusters (Ng, Jordan and Weiss, 2002; Hocking et al., 2011). Again, convex clustering with $q = 1$ has the worst performance.

Table 1: Simulation study to evaluate the performance of the extended BIC for tuning parameter selection for convex clustering with $q = 2$. Results are reported over 100 simulated data sets. We report the proportion of data sets for which the correct number of clusters was identified, and the average Rand index.

Selection of the tuning parameter λ
Convex clustering (3) involves a tuning parameter $\lambda$, which determines the estimated number of clusters. Some authors have suggested a hold-out validation approach to select tuning parameters for clustering problems (see, for instance, Tan and Witten, 2014; Chi, Allen and Baraniuk, 2014). In this section, we present an alternative approach for selecting $\lambda$ using the unbiased estimators of the degrees of freedom derived in Section 3.3. The Bayesian Information Criterion (BIC) developed in Schwarz (1978) has been used extensively for model selection. However, it is known that the BIC does not perform well unless the number of observations is far larger than the number of parameters (Chen and Chen, 2008, 2012). For convex clustering (3), the number of observations is equal to the number of parameters. Thus, we consider the extended BIC (Chen and Chen, 2008, 2012), defined as
$$\mathrm{eBIC}_{q,\gamma} = np \cdot \log\left(\frac{\mathrm{RSS}_q}{np}\right) + \widehat{\mathrm{df}}_q \cdot \log(np) + 2\gamma \cdot \widehat{\mathrm{df}}_q \cdot \log(np), \qquad (19)$$
where $\mathrm{RSS}_q = \|x - \hat{u}^q\|_2^2$, $\hat{u}^q$ is the convex clustering estimate for a given value of $q$ and $\lambda$, $\gamma \in [0, 1]$, and $\widehat{\mathrm{df}}_q$ is given in Section 3.3. Note that we suppress the dependence of $\hat{u}^q$ and $\widehat{\mathrm{df}}_q$ on $\lambda$ for notational convenience. We see that when $\gamma = 0$, the extended BIC reduces to the classical BIC.
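For a given solution path, (19) is straightforward to evaluate; the sketch below is our own helper, taking the fit $\hat{u}^q$ and the degrees of freedom estimate as inputs. Minimizing it over a grid of $\lambda$ values then selects the tuning parameter.

```python
import numpy as np

def ebic(x, u_hat, df_hat, gamma):
    """Extended BIC (19): np * log(RSS / np) + df * log(np) + 2 * gamma * df * log(np).
    Setting gamma = 0 recovers the classical BIC."""
    m = x.size                                # here m = n * p
    rss = np.sum((x - u_hat) ** 2)
    return m * np.log(rss / m) + df_hat * np.log(m) + 2 * gamma * df_hat * np.log(m)
```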
To evaluate the performance of the extended BIC in selecting the number of clusters, we generate Gaussian clusters with $K = 2$ and $K = 3$ as described in Section 4.1, with $n = p = 20$ and $\sigma = 0.5$. We perform convex clustering with $q = 2$ over a fine grid of $\lambda$, and select the value of $\lambda$ for which the quantity $\mathrm{eBIC}_{q,\gamma}$ is minimized. We consider $\gamma \in \{0, 0.5, 0.75, 1\}$. Table 1 reports the proportion of data sets for which the correct number of clusters was identified, as well as the average Rand index.
From Table 1, we see that the extended BIC is able to select the true number of clusters accurately for K = 2. When K = 3, the classical BIC (γ = 0) fails to select the true number of clusters. In contrast, the extended BIC with γ = 1 has the best performance.

Discussion
Convex clustering recasts the clustering problem as a penalized regression problem. By studying its dual problem, we show that there is a connection between convex clustering and single linkage hierarchical clustering. In addition, we establish a connection between convex clustering and $k$-means clustering. We also establish several statistical properties of convex clustering. Through numerical studies, we illustrate that the performance of convex clustering may not be appealing relative to traditional clustering methods, especially when the signal-to-noise ratio is low.
Many authors have proposed a modification to the convex clustering problem (1),
$$\underset{U \in \mathbb{R}^{n \times p}}{\text{minimize}} \; \frac{1}{2}\|X - U\|_F^2 + \lambda Q_q(W, U), \qquad (20)$$
where $W$ is an $n \times n$ symmetric matrix of positive weights and $Q_q(W, U) = \sum_{i<i'} W_{ii'} \|U_{i\cdot} - U_{i'\cdot}\|_q$ (Pelckmans et al., 2005; Hocking et al., 2011; Lindsten, Ohlsson and Ljung, 2011; Chi and Lange, 2014a). For instance, the weights can be defined as $W_{ii'} = \exp(-\phi \|X_{i\cdot} - X_{i'\cdot}\|_2^2)$ for some constant $\phi > 0$. This yields better empirical performance than (1) (Hocking et al., 2011; Chi and Lange, 2014a). We leave an investigation of the properties of (20) to future work.
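For instance, the Gaussian kernel weights above can be computed as follows (a minimal sketch, with our own helper name):

```python
import numpy as np

def kernel_weights(X, phi):
    """Gaussian kernel weights W_ii' = exp(-phi * ||X_i. - X_i'.||_2^2),
    as used in the weighted problem (20)."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    W = np.exp(-phi * sq_dists)
    np.fill_diagonal(W, 0.0)    # diagonal entries are unused in (20)
    return W
```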

Appendix A: Proof of Lemmas 2-3
Proof of Lemma 2. We rewrite problem (3) as
$$\underset{u, \eta_1}{\text{minimize}} \; \frac{1}{2}\|x - u\|_2^2 + \lambda P_q(\eta_1) \quad \text{subject to} \quad \eta_1 = Du,$$
with the Lagrangian function
$$L(u, \eta_1, \nu) = \frac{1}{2}\|x - u\|_2^2 + \lambda P_q(\eta_1) + \nu^T (Du - \eta_1), \qquad \text{(A-1)}$$
where $\nu \in \mathbb{R}^{p \cdot \binom{n}{2}}$ is the Lagrangian dual variable. In order to derive the dual problem, we need to minimize the Lagrangian function over the primal variables $u$ and $\eta_1$. Recall from (4) that $P_q^*(\cdot)$ is the dual norm of $P_q(\cdot)$. It can be shown that
$$\min_{\eta_1} \; \left\{ \lambda P_q(\eta_1) - \nu^T \eta_1 \right\} = \begin{cases} 0 & \text{if } P_q^*(\nu) \le \lambda, \\ -\infty & \text{otherwise}. \end{cases} \qquad \text{(A-2)}$$
Therefore, the dual problem for (3) is (5). We now establish an explicit relationship between the solution to convex clustering and its dual problem. Differentiating the Lagrangian function (A-1) with respect to $u$ and setting it equal to zero, we obtain
$$\hat{u} = x - D^T \hat{\nu},$$
where $\hat{\nu}$ is the solution to the dual problem, which satisfies $P_q^*(\hat{\nu}) \le \lambda$ by (A-2). Multiplying both sides by $D$, we obtain the relationship (6).
Proof of Lemma 3. We rewrite (7) as
$$\underset{\gamma, \eta_2}{\text{minimize}} \; \frac{1}{2}\|Dx - \gamma\|_2^2 + \lambda P_q(\eta_2) \quad \text{subject to} \quad \eta_2 = \gamma,$$
with the Lagrangian function
$$L(\gamma, \eta_2, \nu) = \frac{1}{2}\|Dx - \gamma\|_2^2 + \lambda P_q(\eta_2) + \nu^T (\gamma - \eta_2), \qquad \text{(A-3)}$$
where $\nu \in \mathbb{R}^{p \cdot \binom{n}{2}}$ is the Lagrangian dual variable. In order to derive the dual problem, we minimize the Lagrangian function over the primal variables $\gamma$ and $\eta_2$. It can be shown that
$$\min_{\eta_2} \; \left\{ \lambda P_q(\eta_2) - \nu^T \eta_2 \right\} = \begin{cases} 0 & \text{if } P_q^*(\nu) \le \lambda, \\ -\infty & \text{otherwise}. \end{cases} \qquad \text{(A-4)}$$
Therefore, the dual problem for (7) is (8). We now establish an explicit relationship between the solution to (7) and its dual problem. Differentiating the Lagrangian function (A-3) with respect to $\gamma$ and setting it equal to zero, we obtain
$$\hat{\gamma} = Dx - \hat{\nu},$$
where $\hat{\nu}$ is the solution to the dual problem, which we know from (A-4) satisfies $P_q^*(\hat{\nu}) \le \lambda$.

Appendix B: Proof of Lemma 5
Proof of Lemma 5. Since $D$ is not of full rank by Lemma 1(i), the solution to (5) in the absence of the constraint is not unique, and takes the form
$$\hat{\nu} = (DD^T)^\dagger Dx + \left[I - (DD^T)^\dagger DD^T\right]\omega = \frac{1}{n^2} DD^T Dx + \left[I - \frac{1}{n^2} DD^T DD^T\right]\omega = \frac{1}{n} Dx + \left(I - \frac{1}{n} DD^T\right)\omega \qquad \text{(B-1)}$$
for $\omega \in \mathbb{R}^{p \cdot \binom{n}{2}}$. The second equality follows from Lemma 1(iii) and the last equality follows from Lemma 1(ii).

Let $\hat{u}$ be the solution to (3). Substituting $\hat{\nu}$ given in (B-1) into (6), we obtain $D\hat{u} = Dx - DD^T \hat{\nu} = 0$. Recall from Definition 1 that all observations are estimated to belong to the same cluster if $D\hat{u} = 0$. For any $\hat{\nu}$ in (B-1), picking $\lambda = P_q^*(\hat{\nu})$ guarantees that the constraint on the dual problem (5) is inactive, and therefore that convex clustering has a trivial solution with $D\hat{u} = 0$.

Since $\hat{\nu}$ is not unique, $P_q^*(\hat{\nu})$ is not unique. In order to obtain the smallest tuning parameter $\lambda$ such that $D\hat{u} = 0$, we take
$$\lambda_{\mathrm{upper}} = \min_{\omega \in \mathbb{R}^{p \cdot \binom{n}{2}}} \; P_q^*\left(\frac{1}{n} Dx + \left(I - \frac{1}{n} DD^T\right)\omega\right).$$
Any tuning parameter $\lambda \ge \lambda_{\mathrm{upper}}$ results in an estimate for which all observations belong to a single cluster. The proof is completed by recalling the definition of the dual norm $P_q^*(\cdot)$ in (4).

Appendix C: Proof of Lemmas 6-7
To prove Lemmas 6 and 7, we need a lemma on the tail bound for quadratic forms of independent sub-Gaussian random variables.
Lemma 10 (Hanson and Wright, 1971). Let $z$ be a vector of independent sub-Gaussian random variables with mean zero and variance $\sigma^2$. Let $M$ be a symmetric matrix. Then, there exist some constants $c_1, c_2 > 0$ such that for any $t > 0$,
$$P\left( \left| z^T M z - E(z^T M z) \right| > t \right) \le 2 \exp\left[ -\min\left( \frac{c_1 t^2}{\sigma^4 \|M\|_F^2}, \; \frac{c_2 t}{\sigma^2 \|M\|_{\mathrm{sp}}} \right) \right],$$
where $\|\cdot\|_F$ and $\|\cdot\|_{\mathrm{sp}}$ are the Frobenius norm and spectral norm, respectively.
Proof of Lemmas 6 and 7. The proof reparametrizes (3) as a problem (C-1), with variables $\alpha$ and $\beta$, involving the matrix $Z = A\Lambda \in \mathbb{R}^{[p \cdot \binom{n}{2}] \times p(n-1)}$. Note that $\mathrm{rank}(Z) = p(n-1)$ and, therefore, there exists a pseudo-inverse $Z^\dagger \in \mathbb{R}^{p(n-1) \times [p \cdot \binom{n}{2}]}$ such that $Z^\dagger Z = I$. Recall from Section 1 that the set $C(i,i')$ contains the row indices of $D$ such that $D_{C(i,i')} u = U_{i\cdot} - U_{i'\cdot}$. Let the submatrices $Z_{C(i,i')}$ and $Z^\dagger_{C(i,i')}$ denote the rows of $Z$ and the columns of $Z^\dagger$, respectively, corresponding to the indices in the set $C(i,i')$; their singular values are controlled via Lemma 1(v). Let $\hat{\alpha}$ and $\hat{\beta}$ denote the solutions to (C-1).
Substituting (C-10) into (C-7) and applying the triangle inequality, we obtain Lemma 7 after rearranging the terms.

Appendix D: Proof of Lemma 9
Proof of Lemma 9. Directly from the dual problem (5), $D^T \hat{\nu}$ is the projection of $x$ onto the convex set $K = \{D^T \nu : P_2^*(\nu) \le \lambda\}$. Using the primal-dual relationship $\hat{u} = x - D^T \hat{\nu}$, we see that $\hat{u}$ is the residual from projecting $x$ onto the convex set $K$. By Lemma 1 of Tibshirani and Taylor (2012), $\hat{u}$ is continuous and almost differentiable with respect to $x$. Therefore, by Stein's formula, the degrees of freedom can be characterized as $E\left[\mathrm{tr}\left(\frac{\partial \hat{u}}{\partial x}\right)\right]$. Recall that $D_{C(i,i')}$ denotes the rows of $D$ corresponding to the indices in the set $C(i,i')$. Let $\hat{B}_2 = \{(i,i') : \|D_{C(i,i')} \hat{u}\|_2 \ne 0\}$. By the optimality condition of (3) with $q = 2$, we obtain
$$\hat{u} = x - \lambda \sum_{(i,i') \in \hat{B}_2} D_{C(i,i')}^T \frac{D_{C(i,i')} \hat{u}}{\|D_{C(i,i')} \hat{u}\|_2} - \lambda \sum_{(i,i') \notin \hat{B}_2} D_{C(i,i')}^T \hat{g}_{C(i,i')}, \qquad \text{(D-1)}$$
where $\hat{g}_{C(i,i')}$ is a subgradient of $\|\cdot\|_2$ at zero, so that $\|\hat{g}_{C(i,i')}\|_2 \le 1$. We define the matrix $D_{-\hat{B}_2}$ by removing the rows of $D$ that correspond to elements in $\hat{B}_2$. Let $P = I - D_{-\hat{B}_2}^T (D_{-\hat{B}_2} D_{-\hat{B}_2}^T)^\dagger D_{-\hat{B}_2}$ be the projection matrix onto the complement of the space spanned by the rows of $D_{-\hat{B}_2}$.

By the definition of $D_{-\hat{B}_2}$, we obtain $D_{-\hat{B}_2} \hat{u} = 0$. Therefore, $P\hat{u} = \hat{u}$. Multiplying $P$ onto both sides of (D-1), we obtain
$$\hat{u} = P\hat{u} = Px - \lambda \sum_{(i,i') \in \hat{B}_2} P D_{C(i,i')}^T \frac{D_{C(i,i')} \hat{u}}{\|D_{C(i,i')} \hat{u}\|_2}, \qquad \text{(D-2)}$$
where the second equality follows from the fact that $P D_{C(i,i')}^T = 0$ for any $(i,i') \notin \hat{B}_2$. Vaiter et al. (2014) showed that there exists a neighborhood around almost every $x$ such that the set $\hat{B}_2$ is locally constant with respect to $x$. Therefore, differentiating (D-2) with respect to $x$ and solving for $\frac{\partial \hat{u}}{\partial x}$ gives
$$\frac{\partial \hat{u}}{\partial x} = \left[ I + \lambda \sum_{(i,i') \in \hat{B}_2} \frac{P D_{C(i,i')}^T}{\|D_{C(i,i')} \hat{u}\|_2} \left( I_p - \frac{(D_{C(i,i')} \hat{u})(D_{C(i,i')} \hat{u})^T}{\|D_{C(i,i')} \hat{u}\|_2^2} \right) D_{C(i,i')} \right]^{-1} P,$$
whose trace is the estimator (17).