High-dimensional covariance estimation by minimizing $\ell_1$-penalized log-determinant divergence

Given i.i.d. observations of a random vector $X \in \mathbb{R}^p$, we study the problem of estimating both its covariance matrix $\Sigma^*$, and its inverse covariance or concentration matrix $\Theta^* = (\Sigma^*)^{-1}$. We estimate $\Theta^*$ by minimizing an $\ell_1$-penalized log-determinant Bregman divergence; in the multivariate Gaussian case, this approach corresponds to $\ell_1$-penalized maximum likelihood, and the structure of $\Theta^*$ is specified by the graph of an associated Gaussian Markov random field. We analyze the performance of this estimator under high-dimensional scaling, in which the number of nodes in the graph $p$, the number of edges $s$, and the maximum node degree $d$ are allowed to grow as a function of the sample size $n$. In addition to the parameters $(p,s,d)$, our analysis identifies other key quantities that control rates: (a) the $\ell_\infty$-operator norm of the true covariance matrix $\Sigma^*$; (b) the $\ell_\infty$ operator norm of the sub-matrix $\Gamma^*_{S S}$, where $S$ indexes the graph edges, and $\Gamma^* = (\Theta^*)^{-1} \otimes (\Theta^*)^{-1}$; (c) a mutual incoherence or irrepresentability measure on the matrix $\Gamma^*$; and (d) the rate of decay $1/f(n,\delta)$ of the probabilities $\mathbb{P}\{|\hat{\Sigma}^n_{ij}- \Sigma^*_{ij}|>\delta \}$, where $\hat{\Sigma}^n$ is the sample covariance based on $n$ samples. Our first result establishes consistency of our estimate $\hat{\Theta}$ in the elementwise maximum-norm. This in turn allows us to derive convergence rates in Frobenius and spectral norms, with improvements upon existing results for graphs with maximum node degrees $d = o(\sqrt{s})$. In our second result, we show that with probability converging to one, the estimate $\hat{\Theta}$ correctly specifies the zero pattern of the concentration matrix $\Theta^*$.


Introduction
The area of high-dimensional statistics deals with estimation in the "large $p$, small $n$" setting, where $p$ and $n$ correspond, respectively, to the dimensionality of the data and the sample size. Such high-dimensional problems arise in a variety of applications, among them remote sensing, computational biology and natural language processing, where the model dimension may be comparable to or substantially larger than the sample size. It is well known that such high-dimensional scaling can lead to dramatic breakdowns in many classical procedures. In the absence of additional model assumptions, it is frequently impossible to obtain consistent procedures when $p \gg n$. Accordingly, an active line of statistical research is based on imposing various restrictions on the model--for instance, sparsity, manifold structure, or graphical model structure--and then studying the scaling behavior of different estimators as a function of sample size $n$, ambient dimension $p$ and additional parameters related to these structural assumptions.
In this paper, we study the following problem: given $n$ i.i.d. observations $\{X^{(k)}\}_{k=1}^n$ of a zero-mean random vector $X \in \mathbb{R}^p$, estimate both its covariance matrix $\Sigma^*$, and its inverse covariance or concentration matrix $\Theta^* := (\Sigma^*)^{-1}$. Perhaps the most natural candidate for estimating $\Sigma^*$ is the empirical sample covariance matrix, but this is known to behave poorly in high-dimensional settings. For instance, when $p/n \to c > 0$, and the samples are drawn i.i.d. from a multivariate Gaussian distribution, neither the eigenvalues nor the eigenvectors of the sample covariance matrix are consistent estimators of the population versions [14,15]. Accordingly, many regularized estimators have been proposed to estimate the covariance or concentration matrix under various model assumptions. One natural model assumption is that reflected in shrinkage estimators, such as in the work of Ledoit and Wolf [16], who proposed to shrink the sample covariance to the identity matrix. An alternative model assumption, relevant in particular for time series data, is that the covariance or concentration matrix is banded, meaning that the entries decay based on their distance from the diagonal. Furrer and Bengtsson [11] proposed to shrink the covariance entries based on this distance from the diagonal. Wu and Pourahmadi [24] and Huang et al. [13] estimate these banded concentration matrices by using thresholding and $\ell_1$-penalties respectively, as applied to a Cholesky factor of the inverse covariance matrix. Bickel and Levina [2] prove the consistency of these banded estimators so long as $(\log p)^2/n \to 0$ and the model covariance matrix is banded as well, but as they note, these estimators depend on the presented order of the variables.
A related class of models is based on positing some kind of sparsity, either in the covariance matrix, or in the inverse covariance. Bickel and Levina [1] study thresholding estimators of covariance matrices, assuming that each row satisfies an $\ell_q$-ball sparsity assumption. In independent work, El Karoui [9] also studied thresholding estimators of the covariance, but based on an alternative notion of sparsity, one which captures the number of closed paths of any length in the associated graph. Other work has studied models in which the inverse covariance or concentration matrix has a sparse structure. As will be clarified in the next section, when the random vector is multivariate Gaussian, the set of non-zero entries in the concentration matrix corresponds to the set of edges in an associated Gaussian Markov random field (GMRF). In this setting, imposing sparsity on the concentration matrix can be interpreted as requiring that the graph underlying the GMRF have relatively few edges. A line of recent papers [8,10,25] has proposed an estimator that minimizes the Gaussian negative log-likelihood regularized by the $\ell_1$ norm of the entries (or the off-diagonal entries) of the concentration matrix. The resulting optimization problem is a log-determinant program, which can be solved in polynomial time with interior point methods [3], or by faster co-ordinate descent algorithms [8,10]. In recent work, Rothman et al. [21] have analyzed some aspects of the high-dimensional behavior of this estimator; assuming that the minimum and maximum eigenvalues of $\Sigma^*$ are bounded, they show that consistent estimates can be achieved in Frobenius and operator norm, in particular at the rate $O\big(\sqrt{\frac{(s+p) \log p}{n}}\big)$. The focus of this paper is the problem of estimating the concentration matrix $\Theta^*$ under sparsity conditions. We do not impose specific distributional assumptions on $X$ itself, but rather analyze the estimator in terms of the tail behavior of the maximum deviation $\max_{i,j} |\hat{\Sigma}^n_{ij} - \Sigma^*_{ij}|$ of the sample and population covariance matrices. To estimate $\Theta^*$, we consider minimization of an $\ell_1$-penalized log-determinant Bregman divergence, which is equivalent to the usual $\ell_1$-penalized maximum likelihood when $X$ is multivariate Gaussian. We analyze the behavior of this estimator under high-dimensional scaling, in which the number of nodes $p$ in the graph, and the maximum node degree $d$, are allowed to grow as a function of the sample size $n$.
In addition to the triple $(n, p, d)$, we also explicitly keep track of certain other measures of model complexity, which could potentially scale as well. The first of these measures is the $\ell_\infty$-operator norm of the covariance matrix $\Sigma^*$, which we denote by $K_{\Sigma^*} := |||\Sigma^*|||_\infty$. The next quantity involves the Hessian of the log-determinant objective function, $\Gamma^* := (\Theta^*)^{-1} \otimes (\Theta^*)^{-1}$. When the distribution of $X$ is multivariate Gaussian, this Hessian has the more explicit representation $\Gamma^*_{(j,k),(\ell,m)} = \operatorname{cov}\{X_j X_k, X_\ell X_m\}$, showing that it measures the covariances of the random variables associated with each edge of the graph. For this reason, the matrix $\Gamma^*$ can be viewed as an edge-based counterpart to the usual node-based covariance matrix $\Sigma^*$. Using $S$ to index the variable pairs $(i,j)$ associated with non-zero entries in the inverse covariance, our analysis involves the quantity $K_{\Gamma^*} = |||(\Gamma^*_{SS})^{-1}|||_\infty$. Finally, we also impose a mutual incoherence or irrepresentability condition on the Hessian $\Gamma^*$; this condition is similar to assumptions imposed on $\Sigma^*$ in previous work [22,26,19,23] on the Lasso. We provide some examples where the Lasso irrepresentability condition holds, but our corresponding condition on $\Gamma^*$ fails; however, we do not currently know whether one condition strictly dominates the other.
Our first result establishes consistency of our estimator $\hat{\Theta}$ in the elementwise maximum-norm, providing a rate that depends on the tail behavior of the entries in the random matrix $\hat{\Sigma}^n - \Sigma^*$. For the special case of sub-Gaussian random vectors with concentration matrices having at most $d$ non-zeros per row, a corollary of our analysis is consistency in spectral norm at rate $|||\hat{\Theta} - \Theta^*|||_2 = O(\sqrt{(d^2 \log p)/n})$, with high probability, thereby strengthening previous results [21]. Under the milder restriction of each element of $X$ having bounded $4m$-th moment, the rate in spectral norm is substantially slower--namely, $|||\hat{\Theta} - \Theta^*|||_2 = O(d\, p^{1/2m}/\sqrt{n})$--highlighting that the familiar logarithmic dependence on the model size $p$ is linked to particular tail behavior of the distribution of $X$. Finally, we show that under the same scalings as above, with probability converging to one, the estimate $\hat{\Theta}$ correctly specifies the zero pattern of the concentration matrix $\Theta^*$.
The remainder of this paper is organized as follows. In Section 2, we set up the problem and give some background. Section 3 is devoted to statements of our main results, as well as discussion of their consequences. Section 4 provides an outline of the proofs, with the more technical details deferred to appendices. In Section 5, we report the results of some simulation studies that illustrate our theoretical predictions.
Notation. For the convenience of the reader, we summarize here notation to be used throughout the paper. Given a vector $u \in \mathbb{R}^d$ and parameter $a \in [1, \infty]$, we use $\|u\|_a$ to denote the usual $\ell_a$ norm. Given a matrix $U \in \mathbb{R}^{p \times p}$ and parameters $a, b \in [1, \infty]$, we use $|||U|||_{a,b}$ to denote the induced matrix-operator norm $\max_{\|y\|_a = 1} \|Uy\|_b$; see Horn and Johnson [12] for background. Three cases of particular importance in this paper are the spectral norm $|||U|||_2$, corresponding to the maximal singular value of $U$; the $\ell_\infty/\ell_\infty$-operator norm, given by $|||U|||_\infty := \max_{i=1,\dots,p} \sum_{j=1}^p |U_{ij}|$; and the $\ell_1/\ell_1$-operator norm, given by $|||U|||_1 = |||U^T|||_\infty$. Finally, we use $\|U\|_\infty$ to denote the elementwise maximum $\max_{i,j} |U_{ij}|$; note that this is not a matrix norm, but rather a norm on the vectorized form of the matrix. For any matrix $U \in \mathbb{R}^{p \times p}$, we use $\operatorname{vec}(U)$, or equivalently $\bar{U} \in \mathbb{R}^{p^2}$, to denote its vectorized form, obtained by stacking up the rows of $U$. We use $\langle\langle U, V \rangle\rangle := \sum_{i,j} U_{ij} V_{ij}$ to denote the trace inner product on the space of symmetric matrices. Note that this inner product induces the Frobenius norm $|||U|||_F := \sqrt{\sum_{i,j} U_{ij}^2}$. Finally, for asymptotics, we use the following standard notation: we write $f = O(g)$ if $f \le c g$ for some constant $c < \infty$, and $f = \Omega(g)$ if $f \ge c' g$ for some constant $c' > 0$; the notation $f \asymp g$ means that both $f = O(g)$ and $f = \Omega(g)$ hold.

Background and problem set-up
Let $X = (X_1, \dots, X_p)$ be a zero-mean $p$-dimensional random vector. The focus of this paper is the problem of estimating the covariance matrix $\Sigma^* := \mathbb{E}[XX^T]$ and concentration matrix $\Theta^* := (\Sigma^*)^{-1}$ of the random vector $X$ given $n$ i.i.d. observations $\{X^{(k)}\}_{k=1}^n$. In this section, we provide background and set up this problem more precisely. We begin with background on Gaussian graphical models, which provide one motivation for the estimation of concentration matrices. We then describe an estimator based on minimizing an $\ell_1$-regularized log-determinant divergence; when the data are drawn from a Gaussian graphical model, this estimator corresponds to $\ell_1$-regularized maximum likelihood. We then discuss the distributional assumptions that we make in this paper.

Gaussian graphical models
One motivation for this paper is the problem of Gaussian graphical model selection. A graphical model or a Markov random field is a family of probability distributions for which the conditional independence and factorization properties are captured by a graph. Let $X = (X_1, X_2, \dots, X_p)$ denote a zero-mean Gaussian random vector; its density can be parameterized by the inverse covariance or concentration matrix $\Theta^* = (\Sigma^*)^{-1} \in S^p_+$, and can be written as

$$f(x_1, \dots, x_p; \Theta^*) = \frac{\sqrt{\det(\Theta^*)}}{(2\pi)^{p/2}} \exp\Big\{ -\frac{1}{2} x^T \Theta^* x \Big\}.$$

We can relate this Gaussian distribution of the random vector $X$ to a graphical model as follows. Suppose we are given an undirected graph $G = (V, E)$ with vertex set $V = \{1, 2, \dots, p\}$ and edge set $E$, so that each variable $X_i$ is associated with a corresponding vertex $i \in V$. The Gaussian Markov random field (GMRF) associated with the graph $G$ over the random vector $X$ is then the family of Gaussian distributions with concentration matrices $\Theta^*$ that respect the edge structure of the graph, in the sense that $\Theta^*_{ij} = 0$ if $(i,j) \notin E$.

[Figure 1: (a) the graph structure; (b) the sparsity pattern of $\Theta^*$. The set $E(\Theta^*)$ corresponds to the off-diagonal non-zeros (white blocks); the diagonal is also non-zero (grey squares), but these entries do not correspond to edges. The black squares correspond to non-edges, or zeros in $\Theta^*$.]

Figure 1 illustrates this correspondence between the graph structure (panel (a)), and the sparsity pattern of the concentration matrix $\Theta^*$ (panel (b)). The problem of estimating the entries of the concentration matrix $\Theta^*$ corresponds to estimating the Gaussian graphical model instance, while the problem of estimating the off-diagonal zero-pattern of the concentration matrix--that is, the set

$$E(\Theta^*) := \{ (i,j) \in V \times V : i \ne j,\ \Theta^*_{ij} \ne 0 \}$$

--corresponds to the problem of Gaussian graphical model selection.
With a slight abuse of notation, we define the sparsity index $s := |E(\Theta^*)|$ as the total number of non-zero elements in off-diagonal positions of $\Theta^*$; equivalently, this corresponds to twice the number of edges in the case of a Gaussian graphical model. We also define the maximum degree or row cardinality

$$d := \max_{i=1,\dots,p} \big| \{ j \in V : \Theta^*_{ij} \ne 0 \} \big|,$$

corresponding to the maximum number of non-zeros in any row of $\Theta^*$; this corresponds to the maximum degree in the graph of the underlying Gaussian graphical model. Note that we have included the diagonal entry $\Theta^*_{ii}$ in the degree count, corresponding to a self-loop at each vertex.
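As a quick illustration of these conventions, the following minimal numpy sketch (our own, with an arbitrary tridiagonal example) computes the pair $(s, d)$ from a concentration matrix:

```python
import numpy as np

def sparsity_and_degree(Theta, tol=1e-12):
    """Sparsity index s and maximum degree d, following the conventions above:
    s counts off-diagonal non-zeros (twice the number of edges), and the
    degree count includes the diagonal entry (a self-loop at each vertex)."""
    p = Theta.shape[0]
    off_diag = (np.abs(Theta) > tol) & ~np.eye(p, dtype=bool)
    s = int(off_diag.sum())
    d = int((np.abs(Theta) > tol).sum(axis=1).max())
    return s, d

Theta = np.array([[2.0, 0.5, 0.0],
                  [0.5, 2.0, 0.5],
                  [0.0, 0.5, 2.0]])
print(sparsity_and_degree(Theta))   # (4, 3): two edges, hub row has degree 3
```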
It is convenient throughout the paper to use graphical terminology, such as degrees and edges, even though the distributional assumptions that we impose, as described in Section 2.3, are milder and hence apply even to distributions that are not Gaussian MRFs.

$\ell_1$-penalized log-determinant divergence
An important set in this paper is the cone $S^p_+$ formed by all symmetric positive semi-definite matrices in $p$ dimensions. We assume that the covariance matrix $\Sigma^*$ and concentration matrix $\Theta^*$ of the random vector $X$ are strictly positive definite, and so lie in the interior of this cone $S^p_+$. The focus of this paper is a particular type of $M$-estimator for the concentration matrix $\Theta^*$, based on minimizing a Bregman divergence between symmetric matrices. A function is of Bregman type if it is strictly convex, continuously differentiable and has bounded level sets [4,7]. Any such function induces a Bregman divergence of the form

$$D_g(A \| B) := g(A) - g(B) - \langle \nabla g(B),\ A - B \rangle.$$

From the strict convexity of $g$, it follows that $D_g(A \| B) \ge 0$ for all $A$ and $B$, with equality if and only if $A = B$.
As a candidate Bregman function, consider the log-determinant barrier function, defined for any matrix $A \in S^p_+$ by

$$g(A) := \begin{cases} -\log \det(A) & \text{if } A \succ 0, \\ +\infty & \text{otherwise.} \end{cases}$$

As is standard in convex analysis, we view this function as taking values in the extended reals $\mathbb{R}^* = \mathbb{R} \cup \{+\infty\}$. With this definition, the function $g$ is strictly convex, and its domain is the set of strictly positive definite matrices. Moreover, it is continuously differentiable over its domain, with $\nabla g(A) = -A^{-1}$; see Boyd and Vandenberghe [3] for further discussion. The Bregman divergence corresponding to this log-determinant Bregman function $g$ is given by

$$D_g(A \| B) = -\log \det A + \log \det B + \langle\langle B^{-1},\ A - B \rangle\rangle,$$

valid for any $A, B \in S^p_+$ that are strictly positive definite. This divergence suggests a natural way to estimate concentration matrices--namely, by minimizing the divergence $D_g(\Theta \| \Theta^*)$--or equivalently, by minimizing the function

$$\min_{\Theta \succ 0} \big\{ \langle\langle \Theta, \Sigma^* \rangle\rangle - \log \det \Theta \big\},$$

where we have discarded terms independent of $\Theta$, and used the fact that the inverse of the concentration matrix is the covariance matrix (i.e., $(\Theta^*)^{-1} = \Sigma^* = \mathbb{E}[XX^T]$). Of course, the convex program (8) cannot be solved without knowledge of the true covariance matrix $\Sigma^*$, but one can take the standard approach of replacing $\Sigma^*$ with an empirical version, with the possible addition of a regularization term.
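Numerically, the log-determinant Bregman divergence is easy to evaluate and check. A minimal numpy sketch (the test matrices here are arbitrary choices of our own):

```python
import numpy as np

def logdet_bregman(A, B):
    """D_g(A||B) = -log det A + log det B + <<B^{-1}, A - B>> for A, B > 0."""
    sign_a, logdet_a = np.linalg.slogdet(A)
    sign_b, logdet_b = np.linalg.slogdet(B)
    assert sign_a > 0 and sign_b > 0, "A and B must be positive definite"
    return -logdet_a + logdet_b + np.trace(np.linalg.inv(B) @ (A - B))

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)          # a random positive definite matrix
B = np.eye(5)
print(logdet_bregman(A, B) >= 0)                 # True: divergence is nonnegative
print(np.isclose(logdet_bregman(A, A), 0.0))     # True: zero iff A == B
```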
In this paper, we analyze a particular instantiation of this strategy. Given $n$ samples, we define the sample covariance matrix

$$\hat{\Sigma}^n := \frac{1}{n} \sum_{k=1}^n X^{(k)} (X^{(k)})^T.$$

To lighten notation, we occasionally drop the superscript $n$, and simply write $\hat{\Sigma}$ for the sample covariance. We also define the off-diagonal $\ell_1$ regularizer

$$\|\Theta\|_{1,\mathrm{off}} := \sum_{i \ne j} |\Theta_{ij}|,$$

where the sum ranges over all $i, j = 1, \dots, p$ with $i \ne j$. Given some regularization constant $\lambda_n > 0$, we consider estimating $\Theta^*$ by solving the following $\ell_1$-regularized log-determinant program:

$$\hat{\Theta} := \arg\min_{\Theta \succ 0} \big\{ \langle\langle \Theta, \hat{\Sigma}^n \rangle\rangle - \log \det(\Theta) + \lambda_n \|\Theta\|_{1,\mathrm{off}} \big\}.$$

As shown in Appendix A, for any $\lambda_n > 0$ and sample covariance matrix $\hat{\Sigma}^n$ with strictly positive diagonal, this convex optimization problem has a unique optimum, so there is no ambiguity in equation (11). When the data are actually drawn from a multivariate Gaussian distribution, then the problem (11) is simply $\ell_1$-regularized maximum likelihood.
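In practice, the program (11) can be solved with standard software. The following is a minimal sketch using scikit-learn's GraphicalLasso, which solves a program of this $\ell_1$-penalized log-determinant form; the chain-graph instance and the choice $\lambda_n \propto \sqrt{\log p / n}$ are our own illustration:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p, n = 20, 500
# Sparse tridiagonal (chain-graph) precision matrix as ground truth
Theta_true = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
Sigma_true = np.linalg.inv(Theta_true)
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)

lam = np.sqrt(np.log(p) / n)          # lambda_n ~ sqrt(log p / n), per the theory
model = GraphicalLasso(alpha=lam).fit(X)
Theta_hat = model.precision_          # estimate of Theta*
# Count recovered (upper-triangular) edges
print(int(np.sum(np.abs(Theta_hat[np.triu_indices(p, k=1)]) > 1e-3)))
```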

Tail conditions
In this section, we describe the tail conditions that underlie our analysis. Since the estimator (11) is based on using the sample covariance $\hat{\Sigma}^n$ as a surrogate for the (unknown) covariance $\Sigma^*$, any type of consistency requires bounds on the difference $\hat{\Sigma}^n - \Sigma^*$. In particular, we define the following tail condition:

Definition 1 (Tail conditions). The random vector $X$ satisfies tail condition $T(f, v_*)$ if there exists a constant $v_* \in [0, \infty)$ and a function $f : \mathbb{N} \times (0, \infty) \to (0, \infty)$ such that for any $(i,j) \in V \times V$:

$$\mathbb{P}\big[ |\hat{\Sigma}^n_{ij} - \Sigma^*_{ij}| \ge \delta \big] \le 1/f(n, \delta) \quad \text{for all } \delta \in (0, 1/v_*].$$

We adopt the convention $1/0 := +\infty$, so that the value $v_* = 0$ indicates that the inequality holds for any $\delta \in (0, \infty)$.
Two important examples of the tail function $f$ are the following: (a) an exponential-type tail function, meaning that $f(n, \delta) = \exp(c\, n\, \delta^a)$, for some scalar $c > 0$ and exponent $a > 0$; and (b) a polynomial-type tail function, meaning that $f(n, \delta) = c\, n^m \delta^{2m}$, for some positive integer $m \in \mathbb{N}$ and scalar $c > 0$.
As might be expected, if $X$ is multivariate Gaussian, then the deviations of the sample covariance matrix have an exponential-type tail function with $a = 2$. A bit more generally, in the following subsections, we provide broader classes of distributions whose sample covariance entries satisfy exponential-type and polynomial-type tail bounds (see Lemmata 1 and 2 respectively). Given a larger number of samples $n$, we expect the tail probability bound $1/f(n, \delta)$ to be smaller, or equivalently, the tail function $f(n, \delta)$ to be larger. Accordingly, we require that $f$ is monotonically increasing in $n$, so that for each fixed $\delta > 0$ and $r > 0$, we can define the inverse function

$$\bar{n}_f(\delta, r) := \arg\max \{ n : f(n, \delta) \le r \}.$$

Similarly, we expect that $f$ is monotonically increasing in $\delta$, so that for each fixed $n$ and $r > 0$, we can define the inverse in the second argument

$$\bar{\delta}_f(n, r) := \arg\max \{ \delta : f(n, \delta) \le r \}.$$

For future reference, we note a simple consequence of the monotonicity of the tail function $f$--namely, that for any $\delta > 0$ and $r > 0$,

$$n > \bar{n}_f(\delta, r) \implies \bar{\delta}_f(n, r) \le \delta.$$

The inverse functions $\bar{n}_f$ and $\bar{\delta}_f$ play an important role in describing the behavior of our estimator. We provide concrete examples in the following two subsections.
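For concreteness, the inverse functions admit closed forms for the two running examples. The following small sketch implements them, using the explicit tail-function forms stated in Lemmata 1 and 2 below (the constants plugged in are arbitrary placeholders):

```python
import numpy as np

# Exponential type: f(n, delta) = (1/4) exp(c * n * delta^2)   (see Lemma 1)
# Polynomial type:  f(n, delta) = c * n^m * delta^(2m)         (see Lemma 2)
def delta_bar_exp(n, r, c):
    """Solve f(n, delta) = r for the exponential-type tail function."""
    return np.sqrt(np.log(4 * r) / (c * n))

def delta_bar_poly(n, r, c, m):
    """Solve f(n, delta) = r for the polynomial-type tail function."""
    return (r / c) ** (1.0 / (2 * m)) / np.sqrt(n)

p, tau, n = 100, 3, 5000
print(delta_bar_exp(n, p ** tau, c=1.0))          # rate ~ sqrt(log p / n)
print(delta_bar_poly(n, p ** tau, c=1.0, m=4))    # rate ~ p^(tau/2m) / sqrt(n)
```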

Sub-Gaussian distributions
In this subsection, we study the case of i.i.d. observations of sub-Gaussian random variables.
Definition 2. A zero-mean random variable $Z$ is sub-Gaussian if there exists a constant $\sigma \in (0, \infty)$ such that

$$\mathbb{E}[\exp(t Z)] \le \exp(\sigma^2 t^2 / 2) \quad \text{for all } t \in \mathbb{R}.$$

By the Chernoff bound, this upper bound (16) on the moment-generating function implies a two-sided tail bound of the form

$$\mathbb{P}[|Z| > z] \le 2 \exp\Big( -\frac{z^2}{2\sigma^2} \Big).$$

Naturally, any zero-mean Gaussian variable with variance $\sigma^2$ satisfies the bounds (16) and (17). In addition to the Gaussian case, the class of sub-Gaussian variates includes any bounded random variable (e.g., Bernoulli, multinomial, uniform), any random variable with strictly log-concave density [6,17], and any finite mixture of sub-Gaussian variables.
The following lemma, proved in Appendix D, shows that the entries of the sample covariance based on i.i.d. samples of a sub-Gaussian random vector satisfy an exponential-type tail bound with exponent $a = 2$. The argument is along the lines of a result due to Bickel and Levina [1], but with more explicit control of the constants in the error exponent:

Lemma 1. Consider a zero-mean random vector $(X_1, \dots, X_p)$ with covariance $\Sigma^*$ such that each $X_i/\sqrt{\Sigma^*_{ii}}$ is sub-Gaussian with parameter $\sigma$. Given $n$ i.i.d. samples, the associated sample covariance $\hat{\Sigma}^n$ satisfies the tail bound

$$\mathbb{P}\big[ |\hat{\Sigma}^n_{ij} - \Sigma^*_{ij}| > \delta \big] \le 4 \exp\Big\{ -\frac{n \delta^2}{128 (1 + 4\sigma^2)^2 \max_i (\Sigma^*_{ii})^2} \Big\}, \quad \text{for all } \delta \in \big(0,\ \max_i (\Sigma^*_{ii})\, 8(1 + 4\sigma^2)\big].$$

Thus, the sample covariance entries satisfy the tail condition $T(f, v_*)$ with $v_* = \big[\max_i (\Sigma^*_{ii})\, 8(1 + 4\sigma^2)\big]^{-1}$, and an exponential-type tail function with $a = 2$--namely,

$$f(n, \delta) = \frac{1}{4} \exp(c_* n \delta^2), \quad \text{with } c_* := \big[128 (1 + 4\sigma^2)^2 \max_i (\Sigma^*_{ii})^2\big]^{-1}.$$

A little calculation shows that the associated inverse functions take the form

$$\bar{\delta}_f(n, r) = \sqrt{\frac{\log(4r)}{c_* n}}, \qquad \bar{n}_f(\delta, r) = \frac{\log(4r)}{c_* \delta^2}.$$
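The exponential-type decay in Lemma 1 is easy to observe empirically. A minimal Monte Carlo sketch (all parameter values here are arbitrary choices of our own):

```python
import numpy as np

# Empirical tail of one sample-covariance entry for Gaussian data; Lemma 1
# says the tail probability decays like exp(-c * n * delta^2).
rng = np.random.default_rng(1)
n, trials, delta = 200, 5000, 0.15
rho = 0.3                                   # true covariance of (X_1, X_2)
cov = np.array([[1.0, rho], [rho, 1.0]])
deviations = np.empty(trials)
for t in range(trials):
    X = rng.multivariate_normal(np.zeros(2), cov, size=n)
    sigma_hat = (X[:, 0] * X[:, 1]).mean()  # sample covariance entry (zero mean)
    deviations[t] = abs(sigma_hat - rho)
print((deviations > delta).mean())          # small empirical tail probability
```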

Tail bounds with moment bounds
In the following lemma, proved in Appendix E, we show that given i.i.d. observations of random variables with bounded moments, the sample covariance entries satisfy a polynomial-type tail bound. See the papers [26,9] for related results on tail bounds for variables with bounded moments.
Lemma 2. Suppose there exists a positive integer $m$ and scalar $K_m \in \mathbb{R}$ such that for $i = 1, \dots, p$,

$$\mathbb{E}\Big[ \Big( \frac{X_i}{\sqrt{\Sigma^*_{ii}}} \Big)^{4m} \Big] \le K_m.$$

For i.i.d. samples $\{X^{(k)}\}_{k=1}^n$, the sample covariance matrix $\hat{\Sigma}^n$ then satisfies the bound

$$\mathbb{P}\big[ |\hat{\Sigma}^n_{ij} - \Sigma^*_{ij}| > \delta \big] \le \frac{1}{c\, n^m \delta^{2m}},$$

where $c$ is a constant depending only on $m$, $K_m$ and $\max_i \Sigma^*_{ii}$. Thus, in this case, the sample covariance satisfies the tail condition $T(f, v_*)$ with $v_* = 0$, so that the bound holds for all $\delta \in (0, \infty)$, and with the polynomial-type tail function $f(n, \delta) = c\, n^m \delta^{2m}$. Finally, a little calculation shows that in this case, the inverse tail functions take the form

$$\bar{\delta}_f(n, r) = \frac{(r/c)^{1/2m}}{\sqrt{n}}, \qquad \bar{n}_f(\delta, r) = \frac{(r/c)^{1/m}}{\delta^2}.$$

Main results and some consequences

In this section, we state our main results and discuss some of their consequences. We begin in Section 3.1 by stating some conditions on the true concentration matrix $\Theta^*$ required in our analysis, including a particular type of incoherence or irrepresentability condition. In Section 3.2, we state our first main result--namely, Theorem 1 on consistency of the estimator $\hat{\Theta}$, and the rate of decay of its error in elementwise $\ell_\infty$ norm. Section 3.3 is devoted to Theorem 2 on the model selection consistency of the estimator. Section 3.4 is devoted to the relation between the log-determinant estimator and the ordinary Lasso (neighborhood-based approach) as methods for graphical model selection; in addition, we illustrate our irrepresentability assumption for some simple graphs. Finally, in Section 3.5, we state and prove some corollaries of Theorem 1, regarding rates in Frobenius and operator norms.

Conditions on covariance and Hessian
Our results involve some quantities involving the Hessian of the log-determinant barrier (6), evaluated at the true concentration matrix $\Theta^*$. Using standard results on matrix derivatives [3], it can be shown that this Hessian takes the form

$$\Gamma^* := \nabla^2 g(\Theta)\big|_{\Theta = \Theta^*} = (\Theta^*)^{-1} \otimes (\Theta^*)^{-1},$$

where $\otimes$ denotes the Kronecker matrix product. By definition, $\Gamma^*$ is a $p^2 \times p^2$ matrix indexed by vertex pairs, so that entry $\Gamma^*_{(j,k),(\ell,m)}$ corresponds to the second partial derivative $\frac{\partial^2 g}{\partial \Theta_{jk}\, \partial \Theta_{\ell m}}$, evaluated at $\Theta = \Theta^*$. When $X$ has a multivariate Gaussian distribution, then $\Gamma^*$ is the Fisher information of the model, and by standard results on cumulant functions in exponential families [5], we have the more specific expression $\Gamma^*_{(j,k),(\ell,m)} = \operatorname{cov}\{X_j X_k, X_\ell X_m\}$. For this reason, $\Gamma^*$ can be viewed as an edge-based counterpart to the usual covariance matrix $\Sigma^*$.
We define the set of non-zero off-diagonal entries in the model concentration matrix $\Theta^*$:

$$E(\Theta^*) := \{ (i,j) \in V \times V : i \ne j,\ \Theta^*_{ij} \ne 0 \},$$

and let $S(\Theta^*) = E(\Theta^*) \cup \{(1,1), \dots, (p,p)\}$ be the augmented set including the diagonal. We let $S^c(\Theta^*)$ denote the complement of $S(\Theta^*)$ in the set $\{1, \dots, p\} \times \{1, \dots, p\}$, corresponding to all pairs $(\ell, m)$ for which $\Theta^*_{\ell m} = 0$. When it is clear from context, we shorten our notation for these sets to $S$ and $S^c$, respectively. Finally, for any two subsets $T$ and $T'$ of $V \times V$, we use $\Gamma^*_{TT'}$ to denote the $|T| \times |T'|$ matrix with rows and columns of $\Gamma^*$ indexed by $T$ and $T'$ respectively.
Our main results involve the $\ell_\infty/\ell_\infty$ norm applied to the covariance matrix $\Sigma^*$, and to the inverse of a sub-block of the Hessian $\Gamma^*$. In particular, we define

$$K_{\Sigma^*} := |||\Sigma^*|||_\infty = \max_{i=1,\dots,p} \sum_{j=1}^p |\Sigma^*_{ij}|,$$

corresponding to the $\ell_\infty$-operator norm of the true covariance matrix $\Sigma^*$, and

$$K_{\Gamma^*} := |||(\Gamma^*_{SS})^{-1}|||_\infty.$$

Our analysis keeps explicit track of these quantities, so that they can scale in a non-trivial manner with the problem dimension $p$.

We assume that the Hessian satisfies the following type of mutual incoherence or irrepresentability condition:

Assumption 1. There exists some $\alpha \in (0, 1]$ such that

$$\max_{e \in S^c} \big\| \Gamma^*_{eS} (\Gamma^*_{SS})^{-1} \big\|_1 \le 1 - \alpha.$$

The underlying intuition is that this assumption imposes control on the influence that the non-edge terms, indexed by $S^c$, can have on the edge-based terms, indexed by $S$. It is worth noting that a similar condition for the Lasso, with the covariance matrix $\Sigma^*$ taking the place of the matrix $\Gamma^*$ above, is necessary and sufficient for support recovery using the ordinary Lasso [19,22,23,26]. See Section 3.4 for an illustration of the form taken by Assumption 1 for specific graphical models.
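These quantities are directly computable for small models. The following numpy sketch (the tridiagonal example is our own choice) evaluates $K_{\Sigma^*}$, $K_{\Gamma^*}$ and the incoherence measure of Assumption 1:

```python
import numpy as np

p = 5
Theta = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))  # chain graph
Sigma = np.linalg.inv(Theta)
Gamma = np.kron(Sigma, Sigma)       # Hessian (Theta^{-1} (x) Theta^{-1})

# S: edge pairs plus the diagonal, with (i, j) -> flat index i*p + j
# (row-major vectorization, matching the vec() convention in the paper)
S = [i * p + j for i in range(p) for j in range(p) if Theta[i, j] != 0]
Sc = [k for k in range(p * p) if k not in S]

Gamma_SS_inv = np.linalg.inv(Gamma[np.ix_(S, S)])
M = Gamma[np.ix_(Sc, S)] @ Gamma_SS_inv

K_Sigma = np.abs(Sigma).sum(axis=1).max()          # |||Sigma*|||_inf
K_Gamma = np.abs(Gamma_SS_inv).sum(axis=1).max()   # |||(Gamma*_SS)^{-1}|||_inf
incoherence = np.abs(M).sum(axis=1).max()          # should be <= 1 - alpha
print(K_Sigma, K_Gamma, incoherence)
```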
A remark on notation: although our analysis allows the quantities $K_{\Sigma^*}$ and $K_{\Gamma^*}$, as well as the model size $p$ and maximum node-degree $d$, to grow with the sample size $n$, we suppress this dependence on $n$ in their notation.

Rates in elementwise $\ell_\infty$-norm
We begin with a result that provides sufficient conditions on the sample size $n$ for bounds in the elementwise $\ell_\infty$-norm. This result is stated in terms of the tail function $f$, and its inverses $\bar{n}_f$ and $\bar{\delta}_f$ (equations (13) and (14)), and so covers a general range of possible tail behaviors. So as to make it more concrete, we follow the general statement with corollaries for the special cases of exponential-type and polynomial-type tail functions, corresponding to sub-Gaussian and moment-bounded variables respectively.
In the theorem statement, the choice of regularization constant λ n is specified in terms of a user-defined parameter τ > 2. Larger choices of τ yield faster rates of convergence in the probability with which the claims hold, but also lead to more stringent requirements on the sample size.
Theorem 1. Consider a distribution satisfying the incoherence assumption (28) with parameter $\alpha \in (0, 1]$, and the tail condition $T(f, v_*)$ of equation (12). Let $\hat{\Theta}$ be the unique optimum of the log-determinant program (11) with regularization parameter $\lambda_n = (8/\alpha)\, \bar{\delta}_f(n, p^\tau)$ for some $\tau > 2$. Then, if the sample size is lower bounded as

$$n > \bar{n}_f\Big( 1 \Big/ \max\big\{ v_*,\ 6(1 + 8/\alpha)\, d\, \max\{ K_{\Sigma^*} K_{\Gamma^*},\ K^3_{\Sigma^*} K^2_{\Gamma^*} \} \big\},\ p^\tau \Big),$$

then with probability greater than $1 - 1/p^{\tau-2} \to 1$, we have:

(a) The estimate $\hat{\Theta}$ satisfies the elementwise $\ell_\infty$-bound

$$\|\hat{\Theta} - \Theta^*\|_\infty \le 2 K_{\Gamma^*} (1 + 8/\alpha)\, \bar{\delta}_f(n, p^\tau).$$

(b) It specifies an edge set $E(\hat{\Theta})$ that is a subset of the true edge set $E(\Theta^*)$, and includes all edges $(i,j)$ with $|\Theta^*_{ij}| > 2 K_{\Gamma^*} (1 + 8/\alpha)\, \bar{\delta}_f(n, p^\tau)$.

Ignoring the dependence on $(K_{\Sigma^*}, K_{\Gamma^*}, \alpha)$, part (a) states that $\|\hat{\Theta} - \Theta^*\|_\infty = O(\bar{\delta}_f(n, p^\tau))$, so that the inverse tail function $\bar{\delta}_f(n, p^\tau)$ (see equation (14)) specifies the rate of convergence in the elementwise $\ell_\infty$-norm. In the following section, we derive the consequences of this $\ell_\infty$-bound for two specific tail functions, namely those of exponential-type with $a = 2$, and polynomial-type tails (see Section 2.3). Turning to the other factors involved in the theorem statement, the quantities $K_{\Sigma^*}$ and $K_{\Gamma^*}$ measure the sizes of the entries in the covariance matrix $\Sigma^*$ and inverse Hessian $(\Gamma^*_{SS})^{-1}$ respectively. Finally, the factor $(1 + \frac{8}{\alpha})$ depends on the irrepresentability Assumption 1, growing in particular as the incoherence parameter $\alpha$ approaches 0.

Exponential-type tails
We now discuss the consequences of Theorem 1 for distributions in which the sample covariance satisfies an exponential-type tail bound with exponent $a = 2$. In particular, recall from Lemma 1 that such a tail bound holds when the variables are sub-Gaussian.

Corollary 1. Under the same conditions as Theorem 1, suppose moreover that the variables $X_i/\sqrt{\Sigma^*_{ii}}$ are sub-Gaussian with parameter $\sigma$, and the samples are drawn independently. Then if the sample size $n$ satisfies the bound

$$n > C_1\, d^2 (1 + 8/\alpha)^2 (\tau \log p + \log 4),$$

where $C_1 := \big( 48\sqrt{2}\, (1 + 4\sigma^2) \max_i (\Sigma^*_{ii}) \max\{ K_{\Sigma^*} K_{\Gamma^*},\ K^3_{\Sigma^*} K^2_{\Gamma^*} \} \big)^2$, then with probability greater than $1 - 1/p^{\tau-2}$, the estimate $\hat{\Theta}$ satisfies the bound

$$\|\hat{\Theta} - \Theta^*\|_\infty \le \big\{ 16\sqrt{2}\, (1 + 4\sigma^2) \max_i (\Sigma^*_{ii}) \big\} (1 + 8/\alpha)\, K_{\Gamma^*} \sqrt{\frac{\tau \log p + \log 4}{n}}.$$

Proof. From Lemma 1, when the rescaled variables $X_i/\sqrt{\Sigma^*_{ii}}$ are sub-Gaussian with parameter $\sigma$, the sample covariance entries satisfy the tail condition $T(f, v_*)$ with the exponential-type tail function $f(n, \delta) = \frac{1}{4}\exp(c_* n \delta^2)$, where $c_* = [128(1 + 4\sigma^2)^2 \max_i (\Sigma^*_{ii})^2]^{-1}$. As a consequence, for this particular model, the inverse functions $\bar{\delta}_f(n, p^\tau)$ and $\bar{n}_f(\delta, p^\tau)$ take the form

$$\bar{\delta}_f(n, p^\tau) = \sqrt{\frac{\tau \log p + \log 4}{c_* n}}, \qquad \bar{n}_f(\delta, p^\tau) = \frac{\tau \log p + \log 4}{c_* \delta^2}.$$

Substituting these forms into the claim of Theorem 1 and doing some simple algebra yields the stated corollary.
When $K_{\Gamma^*}$, $K_{\Sigma^*}$ and $\alpha$ remain constant as a function of $(n, p, d)$, the corollary can be summarized succinctly as follows: a sample size of $n = \Omega(d^2 \log p)$ samples ensures that the elementwise $\ell_\infty$ bound $\|\hat{\Theta} - \Theta^*\|_\infty = O\big(\sqrt{\frac{\log p}{n}}\big)$ holds with high probability. In practice, one frequently considers graphs with maximum node degrees $d$ that either remain bounded, or that grow sub-linearly with the graph size (i.e., $d = o(p)$). In such cases, the sample size allowed by the corollary can be substantially smaller than the graph size, so that for sub-Gaussian random variables, the method can succeed in the $p \gg n$ regime.

Polynomial-type tails
We now state a corollary for the case of a polynomial-type tail function, such as is ensured by random variables with appropriately bounded moments.

Corollary 2. Under the assumptions of Theorem 1, suppose the rescaled variables $X_i/\sqrt{\Sigma^*_{ii}}$ have $4m$-th moments upper bounded by $K_m$, and the sampling is i.i.d. Then if the sample size $n$ satisfies the bound

$$n > C_2\, d^2 (1 + 8/\alpha)^2\, p^{\tau/m},$$

where $C_2$ is a constant depending only on $m$, $K_m$, $\max_i \Sigma^*_{ii}$ and $\max\{ K_{\Sigma^*} K_{\Gamma^*},\ K^3_{\Sigma^*} K^2_{\Gamma^*} \}$, then with probability greater than $1 - 1/p^{\tau-2}$, the estimate $\hat{\Theta}$ satisfies the bound

$$\|\hat{\Theta} - \Theta^*\|_\infty \le C_3\, (1 + 8/\alpha)\, K_{\Gamma^*} \sqrt{\frac{p^{\tau/m}}{n}},$$

with $C_3$ depending only on $m$, $K_m$ and $\max_i \Sigma^*_{ii}$.

Proof. Recall from Lemma 2 that when the rescaled variables $X_i/\sqrt{\Sigma^*_{ii}}$ have bounded $4m$-th moments, then the sample covariance $\hat{\Sigma}$ satisfies the tail condition $T(f, v_*)$ with $v_* = 0$, and with the polynomial-type tail function $f(n, \delta) = c\, n^m \delta^{2m}$. As a consequence, for this particular model, the inverse functions take the form

$$\bar{\delta}_f(n, p^\tau) = \frac{(p^\tau/c)^{1/2m}}{\sqrt{n}}, \qquad \bar{n}_f(\delta, p^\tau) = \frac{(p^\tau/c)^{1/m}}{\delta^2}.$$

The claim then follows by substituting these expressions into Theorem 1 and performing some algebra.
When the quantities $(K_{\Gamma^*}, K_{\Sigma^*}, \alpha)$ remain constant as a function of $(n, p, d)$, Corollary 2 can be summarized succinctly: $n = \Omega(d^2 p^{\tau/m})$ samples are sufficient to achieve a convergence rate in elementwise $\ell_\infty$-norm of the order $\|\hat{\Theta} - \Theta^*\|_\infty = O\big(\sqrt{p^{\tau/m}/n}\big)$, with high probability. Consequently, both the required sample size and the rate of convergence of the estimator are polynomial in the number of variables $p$. It is worth contrasting these rates with the case of sub-Gaussian random variables, where the rates have only logarithmic dependence on the problem size $p$.

Model selection consistency
Part (b) of Theorem 1 asserts that the edge set $E(\hat{\Theta})$ returned by the estimator is contained within the true edge set $E(\Theta^*)$--meaning that it correctly excludes all non-edges--and that it includes all edges that are "large", relative to the $\bar{\delta}_f(n, p^\tau)$ decay of the error. The following result, essentially a minor refinement of Theorem 1, provides sufficient conditions linking the sample size $n$ and the minimum value

$$\theta_{\min} := \min_{(i,j) \in E(\Theta^*)} |\Theta^*_{ij}|$$

for model selection consistency. More precisely, define the event

$$\mathcal{M}(\hat{\Theta}; \Theta^*) := \big\{ E(\hat{\Theta}) = E(\Theta^*)\ \text{and}\ \operatorname{sign}(\hat{\Theta}_{ij}) = \operatorname{sign}(\Theta^*_{ij})\ \text{for all } (i,j) \in E(\Theta^*) \big\}$$

that the estimator $\hat{\Theta}$ has the same edge set as $\Theta^*$, and moreover recovers the correct signs on these edges. With this notation, we have:

Theorem 2. Under the same conditions as Theorem 1, suppose that the sample size satisfies the lower bound

$$n > \bar{n}_f\Big( 1 \Big/ \max\big\{ 4 K_{\Gamma^*} (1 + 8/\alpha)\, \theta_{\min}^{-1},\ v_*,\ 6(1 + 8/\alpha)\, d\, \max\{ K_{\Sigma^*} K_{\Gamma^*},\ K^3_{\Sigma^*} K^2_{\Gamma^*} \} \big\},\ p^\tau \Big).$$

Then the estimator is model selection consistent with high probability as $p \to \infty$:

$$\mathbb{P}\big[ \mathcal{M}(\hat{\Theta}; \Theta^*) \big] \ge 1 - 1/p^{\tau-2} \to 1.$$

In comparison to Theorem 1, the sample size requirement (37) differs only in the additional term involving the minimum value $\theta_{\min}$. This term can be viewed as constraining how quickly the minimum can decay as a function of $(n, p)$, as we illustrate with some concrete tail functions.

Exponential-type tails
Recall the setting of Section 2.3.1, where the random variables $X_i/\sqrt{\Sigma^*_{ii}}$ are sub-Gaussian with parameter $\sigma$. Let us suppose that the parameters $(K_{\Gamma^*}, K_{\Sigma^*}, \alpha)$ are viewed as constants (not scaling with $(p, d)$). Then, using the expression (32) for the inverse function $\bar{n}_f$ in this setting, a corollary of Theorem 2 is that a sample size

$$n > C \big( d^2 + \theta_{\min}^{-2} \big)\, \tau \log p,$$

for a constant $C$ independent of $(n, p, d)$, is sufficient for model selection consistency with probability greater than $1 - 1/p^{\tau-2}$. Alternatively, we can state that $n = \Omega(\tau d^2 \log p)$ samples are sufficient, as long as the minimum value scales as $\theta_{\min} = \Omega\big(\sqrt{\frac{\log p}{n}}\big)$.

Polynomial-type tails
Recall the setting of Section 2.3.2, where the rescaled random variables $X_i/\sqrt{\Sigma^*_{ii}}$ have bounded $4m$-th moments. Using the expression (34) for the inverse function $\bar{n}_f$ in this setting, a corollary of Theorem 2 is that a sample size

$$n > C \big( d^2 + \theta_{\min}^{-2} \big)\, p^{\tau/m},$$

for a constant $C$ independent of $(n, p, d)$, is sufficient for model selection consistency with probability greater than $1 - 1/p^{\tau-2}$. Alternatively, we can state that $n = \Omega(d^2 p^{\tau/m})$ samples are sufficient, as long as the minimum value scales as $\theta_{\min} = \Omega(p^{\tau/(2m)}/\sqrt{n})$.

Comparison to neighbor-based graphical model selection
Suppose that $X$ follows a multivariate Gaussian distribution, so that the structure of the concentration matrix $\Theta^*$ specifies the structure of a Gaussian graphical model. In this case, it is interesting to compare our sufficient conditions for graphical model consistency of the log-determinant approach, as specified in Theorem 2, to those of the neighborhood-based method, first proposed by Meinshausen and Bühlmann [19]. The latter method estimates the full graph structure by performing an $\ell_1$-regularized linear regression (Lasso)--of the form $X_i = \sum_{j \ne i} \theta_{ij} X_j + W$--of each node on its neighbors, and using the support of the estimated regression vector $\hat{\theta}$ to predict the neighborhood set. These neighborhoods are then combined, by either an OR rule or an AND rule, to estimate the full graph. Various aspects of the high-dimensional model selection consistency of the Lasso are now understood [19,23,26]; for instance, it is known that mutual incoherence or irrepresentability conditions are necessary and sufficient for its success [22,26]. In terms of scaling, Wainwright [23] shows that the Lasso succeeds with high probability if and only if the sample size scales as $n \asymp c(\{d + \theta_{\min}^{-2}\} \log p)$, where $c$ is a constant determined by the covariance matrix $\Sigma^*$. By a union bound over the $p$ nodes in the graph, it then follows that the neighbor-based graph selection method in turn succeeds with high probability if $n = \Omega(\{d + \theta_{\min}^{-2}\} \log p)$.
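For reference, here is a minimal sketch of the neighborhood-based method using scikit-learn's Lasso (the OR combination rule and the coefficient threshold are our own illustrative choices):

```python
import numpy as np
from sklearn.linear_model import Lasso

def mb_edges(X, lam, tol=1e-6):
    """Meinshausen-Buhlmann sketch: Lasso-regress each variable on the rest,
    then combine the estimated neighborhoods with an OR rule."""
    n, p = X.shape
    edges = set()
    for i in range(p):
        others = [j for j in range(p) if j != i]
        coef = Lasso(alpha=lam).fit(X[:, others], X[:, i]).coef_
        for j, c in zip(others, coef):
            if abs(c) > tol:
                edges.add((min(i, j), max(i, j)))   # OR rule
    return edges

X = np.random.default_rng(0).standard_normal((500, 10))
print(len(mb_edges(X, lam=0.2)))   # independent data: few or no edges
```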
For comparison, consider the application of Theorem 2 to the case where the variables are sub-Gaussian (which includes the Gaussian case). For this setting, we have seen that the scaling required by Theorem 2 is $n = \Omega(\{d^2 + \theta_{\min}^{-2}\} \log p)$, so that the dependence of the log-determinant approach on $\theta_{\min}$ is identical, but it depends quadratically on the maximum degree $d$. We suspect that the quadratic dependence $d^2$ might be an artifact of our analysis, but have not yet been able to reduce it to $d$. Otherwise, the primary difference between the two methods is in the nature of the irrepresentability assumptions that are imposed: our method requires Assumption 1 on the Hessian $\Gamma^*$, whereas the neighborhood-based method imposes this same type of condition on a set of $p$ covariance matrices, each of size $(p-1) \times (p-1)$, one for each node of the graph. Below we show two cases where the Lasso irrepresentability condition holds, while the log-determinant requirement fails. However, in general, we do not know whether the log-determinant irrepresentability condition strictly dominates its analog for the Lasso.

Rates in Frobenius and spectral norm
We now derive some corollaries of Theorem 1 concerning estimation of $\Theta^*$ in Frobenius norm, as well as in spectral norm. Recall that $s = |E(\Theta^*)|$ denotes the total number of off-diagonal non-zeros in $\Theta^*$.
Corollary 3. Under the same assumptions as Theorem 1, with probability at least $1 - 1/p^{\tau-2}$, the estimator $\hat{\Theta}$ satisfies

$$|||\hat{\Theta} - \Theta^*|||_F \le 2 K_{\Gamma^*} (1 + 8/\alpha)\, \sqrt{s + p}\ \bar{\delta}_f(n, p^\tau), \quad (41a)$$

and

$$|||\hat{\Theta} - \Theta^*|||_2 \le 2 K_{\Gamma^*} (1 + 8/\alpha)\, \min\{\sqrt{s + p},\ d\}\ \bar{\delta}_f(n, p^\tau). \quad (41b)$$

Proof. With the shorthand notation $\nu := 2 K_{\Gamma^*} (1 + 8/\alpha)\, \bar{\delta}_f(n, p^\tau)$, Theorem 1 guarantees that, with probability at least $1 - 1/p^{\tau-2}$, $\|\hat{\Theta} - \Theta^*\|_\infty \le \nu$. Since the edge set of $\hat{\Theta}$ is a subset of that of $\Theta^*$, and $\Theta^*$ has at most $p + s$ non-zeros (including the diagonal), we conclude that

$$|||\hat{\Theta} - \Theta^*|||_F \le \sqrt{s + p}\ \nu,$$

from which the bound (41a) follows. On the other hand, for a symmetric matrix, we have

$$|||\hat{\Theta} - \Theta^*|||_2 \le |||\hat{\Theta} - \Theta^*|||_\infty \le d\, \|\hat{\Theta} - \Theta^*\|_\infty \le d\, \nu,$$

using the definition of the $\ell_\infty$-operator norm, and the fact that $\hat{\Theta}$ and $\Theta^*$ have at most $d$ non-zeros per row. Since the Frobenius norm upper bounds the spectral norm, the bound (41b) follows.
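The norm inequalities used in this proof are easy to confirm numerically. A small sketch (the random symmetric test matrix is our own choosing):

```python
import numpy as np

# For symmetric Delta: |||Delta|||_2 <= |||Delta|||_inf <= d * ||Delta||_inf
# (with d the maximum number of non-zeros per row), and |||Delta|||_2 <= |||Delta|||_F.
rng = np.random.default_rng(2)
p = 30
Delta = np.zeros((p, p))
for i in range(p):
    cols = rng.choice(p, size=3, replace=False)
    Delta[i, cols] = rng.standard_normal(3)
Delta = (Delta + Delta.T) / 2                      # symmetrize

spectral = np.linalg.norm(Delta, 2)                # largest singular value
op_inf = np.abs(Delta).sum(axis=1).max()           # l_inf-operator norm
frobenius = np.linalg.norm(Delta, "fro")
elementwise = np.abs(Delta).max()
row_nnz = (Delta != 0).sum(axis=1).max()           # plays the role of d

print(spectral <= op_inf and spectral <= frobenius)    # True
print(op_inf <= row_nnz * elementwise)                 # True
```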

Exponential-type tails
For the exponential tail function case, where the rescaled random variables $X_i/\sqrt{\Sigma^*_{ii}}$ are sub-Gaussian with parameter $\sigma$, we can use the expression (32) for the inverse function $\bar{\delta}_f$ to derive rates in Frobenius and spectral norms. When the quantities $K_{\Gamma^*}, K_{\Sigma^*}, \alpha$ remain constant, these bounds can be summarized succinctly: a sample size $n = \Omega(d^2 \log p)$ is sufficient to guarantee the bounds

$$|||\hat{\Theta} - \Theta^*|||_F = O\Big( \sqrt{\frac{(s + p) \log p}{n}} \Big), \qquad |||\hat{\Theta} - \Theta^*|||_2 = O\Big( \sqrt{\frac{\min\{s + p,\ d^2\} \log p}{n}} \Big),$$

with probability at least $1 - 1/p^{\tau-2}$.

Polynomial-type tails
Similarly, let us again consider the polynomial tail case, in which the rescaled variates $X_i/\sqrt{\Sigma^*_{ii}}$ have bounded $4m$-th moments and the samples are drawn i.i.d. Using the expression (34) for the inverse function $\bar{\delta}_f$, we can derive rates in the Frobenius and spectral norms. When the quantities $K_{\Gamma^*}, K_{\Sigma^*}, \alpha$ are viewed as constant, we are guaranteed that a sample size $n = \Omega(d^2 p^{\tau/m})$ is sufficient to guarantee the bounds

$$|||\hat{\Theta} - \Theta^*|||_F = O\Big( \sqrt{\frac{(s + p)\, p^{\tau/m}}{n}} \Big), \qquad |||\hat{\Theta} - \Theta^*|||_2 = O\Big( \sqrt{\frac{\min\{s + p,\ d^2\}\, p^{\tau/m}}{n}} \Big),$$

with probability at least $1 - 1/p^{\tau-2}$.

Rates for the covariance matrix estimate
Finally, we describe some bounds on the estimation of the covariance matrix $\Sigma^*$. By Lemma 3, the estimated concentration matrix $\hat{\Theta}$ is positive definite, and hence can be inverted to obtain an estimate of the covariance matrix, which we denote as $\hat{\Sigma} := (\hat{\Theta})^{-1}$.
Corollary 4. Under the same assumptions as Theorem 1, with probability at least $1 - 1/p^{\tau-2}$, the following bounds hold:

(a) The elementwise $\ell_\infty$ norm of the deviation $(\hat{\Sigma} - \Sigma^*)$ satisfies the bound

$$\|\hat{\Sigma} - \Sigma^*\|_\infty \le C_4\, K^2_{\Sigma^*} K_{\Gamma^*} (1 + 8/\alpha)\, \bar{\delta}_f(n, p^\tau),$$

where $C_4$ is a universal constant.

(b) The $\ell_2$ operator-norm of the deviation $(\hat{\Sigma} - \Sigma^*)$ satisfies the bound

$$|||\hat{\Sigma} - \Sigma^*|||_2 \le C_4\, K^2_{\Sigma^*} K_{\Gamma^*} (1 + 8/\alpha)\, d\, \bar{\delta}_f(n, p^\tau).$$

The proof involves certain lemmata and derivations that are parts of the proofs of Theorems 1 and 2, so we defer it to Section 4.5.

Proofs of main results
In this section, we work through the proofs of Theorems 1 and 2. We break down the proofs into a sequence of lemmas, with some of the more technical aspects deferred to appendices.
Our proofs are based on a technique that we call a primal-dual witness method, used previously in analysis of the Lasso [23]. It involves following a specific sequence of steps to construct a pair $(\tilde{\Theta}, \tilde{Z})$ of symmetric matrices that together satisfy the optimality conditions associated with the convex program (11) with high probability. Thus, when the constructive procedure succeeds, $\tilde{\Theta}$ is equal to the unique solution $\hat{\Theta}$ of the convex program (11), and $\tilde{Z}$ is an optimal solution to its dual. In this way, the estimator $\hat{\Theta}$ inherits from $\tilde{\Theta}$ various optimality properties in terms of its distance to the truth $\Theta^*$, and its recovery of the signed sparsity pattern. To be clear, our procedure for constructing $\tilde{\Theta}$ is not a practical algorithm for solving the log-determinant problem (11), but rather is used as a proof technique for certifying the behavior of the $M$-estimator (11).

Primal-dual witness approach
As outlined above, at the core of the primal-dual witness method are the standard convex optimality conditions that characterize the optimum $\hat{\Theta}$ of the convex program (11). For future reference, we note that the sub-differential of the norm $\|\cdot\|_{1,\mathrm{off}}$ evaluated at some $\Theta$ consists of the set of all symmetric matrices $Z \in \mathbb{R}^{p \times p}$ such that

$$Z_{ij} = \begin{cases} 0 & \text{if } i = j, \\ \operatorname{sign}(\Theta_{ij}) & \text{if } i \ne j \text{ and } \Theta_{ij} \ne 0, \\ \in [-1, +1] & \text{if } i \ne j \text{ and } \Theta_{ij} = 0. \end{cases}$$

The following result is proved in Appendix A:

Lemma 3. For any $\lambda_n > 0$ and sample covariance $\hat{\Sigma}$ with strictly positive diagonal, the $\ell_1$-regularized log-determinant problem (11) has a unique solution $\hat{\Theta} \succ 0$ characterized by

$$\hat{\Sigma} - \hat{\Theta}^{-1} + \lambda_n \hat{Z} = 0,$$

where $\hat{Z}$ is an element of the subdifferential $\partial \|\hat{\Theta}\|_{1,\mathrm{off}}$.
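This stationarity condition can be checked numerically on solver output. A rough sketch, using scikit-learn's GraphicalLasso as the solver (the data and tolerance are our own choices, and the check is approximate up to solver accuracy):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Recover the implied dual variable Z from the stationarity condition
# Sigma_hat - Theta_hat^{-1} + lambda * Z = 0, and check |Z_ij| <= 1 off-diagonal.
rng = np.random.default_rng(4)
p, n, lam = 8, 2000, 0.1
X = rng.standard_normal((n, p))
S = (X.T @ X) / n
Theta_hat = GraphicalLasso(alpha=lam, assume_centered=True,
                           tol=1e-10).fit(X).precision_
Z = (np.linalg.inv(Theta_hat) - S) / lam
np.fill_diagonal(Z, 0.0)                  # diagonal is unpenalized
print(np.abs(Z).max() <= 1 + 1e-2)        # dual feasibility, up to tolerance
```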
Based on this lemma, we construct the primal-dual witness solution $(\tilde{\Theta}, \tilde{Z})$ as follows:

(a) We determine the matrix $\tilde{\Theta}$ by solving the restricted log-determinant problem

$$\tilde{\Theta} := \arg\min_{\Theta \succ 0,\ \Theta_{S^c} = 0} \big\{ \langle\langle \Theta, \hat{\Sigma} \rangle\rangle - \log \det(\Theta) + \lambda_n \|\Theta\|_{1,\mathrm{off}} \big\}.$$

Note that by construction, we have $\tilde{\Theta} \succ 0$, and moreover $\tilde{\Theta}_{S^c} = 0$.
(b) We choose $\tilde{Z}_S$ as a member of the sub-differential of the regularizer $\|\cdot\|_{1,\mathrm{off}}$, evaluated at $\tilde{\Theta}$.
(c) We set $\tilde{Z}_{S^c}$ as

$$\tilde{Z}_{S^c} = \frac{1}{\lambda_n} \big\{ -\hat{\Sigma}_{S^c} + [\tilde{\Theta}^{-1}]_{S^c} \big\},$$

which ensures that the constructed matrices $(\tilde{\Theta}, \tilde{Z})$ satisfy the optimality condition (48).

(d) We verify the strict dual feasibility condition

$$|\tilde{Z}_{ij}| < 1 \quad \text{for all } (i,j) \in S^c.$$

To clarify the nature of the construction, steps (a) through (c) suffice to obtain a pair $(\tilde{\Theta}, \tilde{Z})$ that satisfy the optimality condition (48), but do not guarantee that $\tilde{Z}$ is an element of the sub-differential $\partial \|\tilde{\Theta}\|_{1,\mathrm{off}}$. By construction, step (b) ensures that the entries of $\tilde{Z}$ in $S$ satisfy the sub-differential conditions, since $\tilde{Z}_S$ is a member of the sub-differential of $\partial \|\tilde{\Theta}_S\|_{1,\mathrm{off}}$. The purpose of step (d), then, is to verify that the remaining elements of $\tilde{Z}$ satisfy the necessary conditions to belong to the sub-differential.
If the primal-dual witness construction succeeds, then it acts as a witness to the fact that the solution $\tilde{\Theta}$ to the restricted problem (49) is equivalent to the solution $\hat{\Theta}$ of the original (unrestricted) problem (11). We exploit this fact in our proofs of Theorems 1 and 2: we first show that the primal-dual witness technique succeeds with high probability, from which we can conclude that the support of the optimal solution $\hat{\Theta}$ is contained within the support of the true $\Theta^*$. In addition, we exploit the characterization of $\tilde{\Theta}$ provided by the primal-dual witness construction to establish the elementwise $\ell_\infty$ bounds claimed in Theorem 1. Theorem 2 requires checking, in addition, that certain sign consistency conditions hold, for which we require lower bounds on the minimum value $\theta_{\min}$.
In the analysis to follow, some additional notation is useful. We let $W$ denote the "effective noise" in the sample covariance matrix $\hat{\Sigma}$, namely

$$W := \hat{\Sigma}^n - \Sigma^*.$$

Second, we use $\Delta = \tilde{\Theta} - \Theta^*$ to measure the discrepancy between the primal witness matrix $\tilde{\Theta}$ and the truth $\Theta^*$. Finally, recall the log-determinant barrier $g$ from equation (6). We let $R(\Delta)$ denote the difference of the inverse $\tilde{\Theta}^{-1}$ from its first-order Taylor expansion around $\Theta^*$. Using known results on the first and second derivatives of the log-determinant function (see p. 641 in Boyd and Vandenberghe [3]), this remainder takes the form

$$R(\Delta) = \tilde{\Theta}^{-1} - (\Theta^*)^{-1} + (\Theta^*)^{-1} \Delta (\Theta^*)^{-1}.$$
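The fact that this remainder is second order in $\Delta$ (the content of Lemma 5 below) can be observed numerically. A minimal sketch with an arbitrary positive definite matrix of our own choosing:

```python
import numpy as np

# R(Delta) = (Theta* + Delta)^{-1} - Theta*^{-1} + Theta*^{-1} Delta Theta*^{-1}
# should shrink like ||Delta||^2.
rng = np.random.default_rng(3)
p = 6
M = rng.standard_normal((p, p))
Theta = M @ M.T + p * np.eye(p)          # a positive definite "truth"
D = rng.standard_normal((p, p))
D = (D + D.T) / 2                        # symmetric perturbation direction

Ti = np.linalg.inv(Theta)
for eps in [1e-1, 1e-2, 1e-3]:
    Delta = eps * D
    R = np.linalg.inv(Theta + Delta) - Ti + Ti @ Delta @ Ti
    print(eps, np.abs(R).max())          # max entry decays like eps**2
```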

Auxiliary results
We begin with some auxiliary lemmata, required in the proofs of our main theorems. In Section 4.2.1, we provide sufficient conditions on the quantities $W$ and $R(\Delta)$ for the strict dual feasibility condition to hold. In Section 4.2.2, we control the remainder term $R(\Delta)$ in terms of $\Delta$, while in Section 4.2.3, we control $\Delta$ itself, providing elementwise $\ell_\infty$ bounds on $\Delta$. In Section 4.2.4, we show that under appropriate conditions on the minimum value $\theta_{\min}$, the bounds in the earlier lemmata guarantee that the sign consistency condition holds. All of the analysis in these sections is deterministic in nature. In Section 4.2.5, we turn to the probabilistic component of the analysis, providing control of the noise $W$ in the sample covariance matrix. Finally, the proofs of Theorems 1 and 2 follow by using this probabilistic control of $W$ and the stated conditions on the sample size to show that the deterministic conditions hold with high probability.

Sufficient conditions for strict dual feasibility
We begin by stating and proving a lemma that provides sufficient (deterministic) conditions for strict dual feasibility to hold, so that $\|\tilde{Z}_{S^c}\|_\infty < 1$.
Lemma 4 (Strict dual feasibility). Suppose that

$$\max\big\{ \|W\|_\infty,\ \|R(\Delta)\|_\infty \big\} \le \frac{\alpha \lambda_n}{8}.$$

Then the matrix $\tilde{Z}_{S^c}$ constructed in step (c) satisfies $\|\tilde{Z}_{S^c}\|_\infty < 1$, and therefore $\tilde{\Theta} = \hat{\Theta}$.
Proof. Using the definitions (51) and (52), we can re-write the stationary condition (48) in an alternative but equivalent form:

$$(\Theta^*)^{-1} \Delta (\Theta^*)^{-1} + W - R(\Delta) + \lambda_n \tilde{Z} = 0.$$

This is a linear-matrix equality, which can be re-written as an ordinary linear equation by "vectorizing" the matrices. We use the notation $\operatorname{vec}(A)$, or equivalently $\bar{A}$, for the $p^2$-vector version of the matrix $A \in \mathbb{R}^{p \times p}$, obtained by stacking up the rows into a single column vector. In vectorized form, we have

$$\Gamma^* \bar{\Delta} + \bar{W} - \bar{R}(\Delta) + \lambda_n \bar{\tilde{Z}} = 0,$$

since $\operatorname{vec}\big( (\Theta^*)^{-1} \Delta (\Theta^*)^{-1} \big) = \big( (\Theta^*)^{-1} \otimes (\Theta^*)^{-1} \big) \bar{\Delta} = \Gamma^* \bar{\Delta}$. In terms of the disjoint decomposition into $S$ and $S^c$, this equation can be re-written as two blocks of linear equations as follows:

$$\Gamma^*_{SS} \bar{\Delta}_S + \bar{W}_S - \bar{R}_S + \lambda_n \bar{\tilde{Z}}_S = 0,$$
$$\Gamma^*_{S^c S} \bar{\Delta}_S + \bar{W}_{S^c} - \bar{R}_{S^c} + \lambda_n \bar{\tilde{Z}}_{S^c} = 0.$$

Here we have used the fact that $\bar{\Delta}_{S^c} = 0$ by construction. Since $\Gamma^*_{SS}$ is invertible, we can solve for $\bar{\Delta}_S$ from the first equation:

$$\bar{\Delta}_S = (\Gamma^*_{SS})^{-1} \big[ -\bar{W}_S + \bar{R}_S - \lambda_n \bar{\tilde{Z}}_S \big].$$

Substituting this expression into the second equation, we can solve for $\bar{\tilde{Z}}_{S^c}$:

$$\bar{\tilde{Z}}_{S^c} = -\frac{1}{\lambda_n} \Gamma^*_{S^c S} (\Gamma^*_{SS})^{-1} \big( \bar{W}_S - \bar{R}_S \big) + \Gamma^*_{S^c S} (\Gamma^*_{SS})^{-1} \bar{\tilde{Z}}_S - \frac{1}{\lambda_n} \big( \bar{W}_{S^c} - \bar{R}_{S^c} \big).$$

Taking the $\ell_\infty$ norm of both sides, and recalling Assumption 1--namely, that $|||\Gamma^*_{S^c S} (\Gamma^*_{SS})^{-1}|||_\infty \le 1 - \alpha$--yields

$$\|\bar{\tilde{Z}}_{S^c}\|_\infty \le \frac{2 - \alpha}{\lambda_n} \big( \|W\|_\infty + \|R(\Delta)\|_\infty \big) + (1 - \alpha),$$

where we have used the fact that $\|\bar{\tilde{Z}}_S\|_\infty \le 1$, since $\tilde{Z}$ belongs to the sub-differential of the norm $\|\cdot\|_{1,\mathrm{off}}$ by construction. Finally, applying assumption (53) from the lemma statement, we have

$$\|\bar{\tilde{Z}}_{S^c}\|_\infty \le \frac{2 - \alpha}{\lambda_n} \cdot \frac{\alpha \lambda_n}{4} + (1 - \alpha) \le \frac{\alpha}{2} + (1 - \alpha) = 1 - \frac{\alpha}{2} < 1,$$

as claimed.

Control of remainder term
Our next step is to relate the behavior of the remainder term (52) to the deviation $\Delta = \tilde{\Theta} - \Theta^*$.

Lemma 5 (Control of remainder). Suppose that the elementwise bound $\|\Delta\|_\infty \le \frac{1}{3 K_{\Sigma^*} d}$ holds. Then the remainder satisfies

$$\|R(\Delta)\|_\infty \le \frac{3}{2}\, d\, \|\Delta\|^2_\infty\, K^3_{\Sigma^*}.$$

We provide the proof of this lemma in Appendix B. It is straightforward, based on standard matrix expansion techniques.

Lemma 6 (Control of $\Delta$). Suppose that

$$r := 2 K_{\Gamma^*} \big( \|W\|_\infty + \lambda_n \big) \le \min\Big\{ \frac{1}{3 K_{\Sigma^*} d},\ \frac{1}{3 K^3_{\Sigma^*} K_{\Gamma^*} d} \Big\}.$$

Then we have the elementwise $\ell_\infty$ bound

$$\|\Delta\|_\infty = \|\tilde{\Theta} - \Theta^*\|_\infty \le r.$$

We prove the lemma in Appendix C; at a high level, the main steps involved are the following. We begin by noting that $\tilde{\Theta}_{S^c} = \Theta^*_{S^c} = 0$, so that $\|\Delta\|_\infty = \|\Delta_S\|_\infty$. Next, we characterize $\tilde{\Theta}_S$ in terms of the zero-gradient condition associated with the restricted problem (49). We then define a continuous map $F : \Delta_S \mapsto F(\Delta_S)$ such that its fixed points are equivalent to zeros of this gradient expression in terms of $\Delta_S = \tilde{\Theta}_S - \Theta^*_S$. We then show that the function $F$ maps the $\ell_\infty$-ball $\mathbb{B}(r) := \{\Delta_S : \|\Delta_S\|_\infty \le r\}$ onto itself. Finally, with these results in place, we can apply Brouwer's fixed point theorem (e.g., p. 161; Ortega and Rheinboldt [20]) to conclude that $F$ does indeed have a fixed point inside $\mathbb{B}(r)$.

Sufficient conditions for sign consistency
We now show how a lower bound on the minimum value $\theta_{\min}$, when combined with Lemma 6, allows us to guarantee sign consistency of the primal witness matrix $\tilde{\Theta}_S$.
Lemma 7 (Sign consistency). Suppose the minimum absolute value $\theta_{\min}$ of non-zero entries in the true concentration matrix $\Theta^*$ is lower bounded as

$$\theta_{\min} \ge 4 K_{\Gamma^*} \big( \|W\|_\infty + \lambda_n \big);$$

then $\operatorname{sign}(\tilde{\Theta}_S) = \operatorname{sign}(\Theta^*_S)$ holds.
This claim follows from the bound (62) combined with the bound (60), which together imply that for all $(i,j) \in S$, the estimate $\tilde{\Theta}_{ij}$ cannot differ enough from $\Theta^*_{ij}$ to change sign.

Control of noise term
The final ingredient required for the proofs of Theorems 1 and 2 is control of the sampling noise $W = \hat{\Sigma} - \Sigma^*$. This control is specified in terms of the decay function $f$ from equation (12).
Lemma 8 (Control of sampling noise). For any $\tau > 2$ and sample size $n$ such that $\bar{\delta}_f(n, p^\tau) \le 1/v_*$, we have

$$\mathbb{P}\Big[ \|W\|_\infty \ge \bar{\delta}_f(n, p^\tau) \Big] \le \frac{1}{p^{\tau-2}}.$$

Proof. Using the definition (12) of the decay function $f$, and applying the union bound over all $p^2$ entries of the noise matrix, we obtain that for all $\delta \le 1/v_*$,

$$\mathbb{P}\big[ \max_{i,j} |W_{ij}| \ge \delta \big] \le p^2 / f(n, \delta).$$

Setting $\delta = \bar{\delta}_f(n, p^\tau)$ yields

$$\mathbb{P}\big[ \|W\|_\infty \ge \bar{\delta}_f(n, p^\tau) \big] \le p^2 \big/ f\big(n, \bar{\delta}_f(n, p^\tau)\big) = \frac{1}{p^{\tau-2}},$$

as claimed. Here the last equality follows since $f(n, \bar{\delta}_f(n, p^\tau)) = p^\tau$, using the definition (14) of the inverse function $\bar{\delta}_f$.

Proof of Theorem 1
We now have the necessary ingredients to prove Theorem 1. We first show that, with high probability, the witness matrix $\tilde{\Theta}$ is equal to the solution $\hat{\Theta}$ of the original log-determinant problem (11), in particular by showing that the primal-dual witness construction (described in Section 4.1) succeeds with high probability. Let $\mathcal{A}$ denote the event that $\|W\|_\infty \le \bar{\delta}_f(n, p^\tau)$. Using the monotonicity of the inverse tail function (15), the lower bound (29) on the sample size $n$ implies that $\bar{\delta}_f(n, p^\tau) \le 1/v_*$. Consequently, Lemma 8 implies that $\mathbb{P}(\mathcal{A}) \ge 1 - \frac{1}{p^{\tau-2}}$. Accordingly, we condition on the event $\mathcal{A}$ in the analysis to follow.
We proceed by verifying that assumption (53) of Lemma 4 holds. Recalling the choice of regularization penalty $\lambda_n = (8/\alpha)\, \bar{\delta}_f(n, p^\tau)$, we have $\|W\|_\infty \le (\alpha/8)\lambda_n$ on the event $\mathcal{A}$. In order to establish condition (53), it remains to establish the bound $\|R(\Delta)\|_\infty \le \frac{\alpha \lambda_n}{8}$. We do so in two steps, by using Lemmas 6 and 5 consecutively. First, we show that the precondition (59) required for Lemma 6 to hold is satisfied under the specified conditions on $n$ and $\lambda_n$. From Lemma 8 and our choice of regularization constant,

$$2 K_{\Gamma^*} \big( \|W\|_\infty + \lambda_n \big) \le 2 K_{\Gamma^*} (1 + 8/\alpha)\, \bar{\delta}_f(n, p^\tau).$$

From the lower bound (29) and the monotonicity (15) of the tail inverse functions, we have

$$2 K_{\Gamma^*} (1 + 8/\alpha)\, \bar{\delta}_f(n, p^\tau) \le \min\Big\{ \frac{1}{3 K_{\Sigma^*} d},\ \frac{1}{3 K^3_{\Sigma^*} K_{\Gamma^*} d} \Big\},$$

showing that the assumptions of Lemma 6 are satisfied. Applying this lemma, we conclude that

$$\|\Delta\|_\infty = \|\tilde{\Theta} - \Theta^*\|_\infty \le 2 K_{\Gamma^*} (1 + 8/\alpha)\, \bar{\delta}_f(n, p^\tau).$$

Turning next to Lemma 5, we see that its assumption $\|\Delta\|_\infty \le \frac{1}{3 K_{\Sigma^*} d}$ holds, by the two previous displays. Consequently, we have

$$\|R(\Delta)\|_\infty \le \frac{3}{2}\, d\, \|\Delta\|^2_\infty\, K^3_{\Sigma^*} \le \frac{\alpha \lambda_n}{8},$$

as required, where the final inequality follows from our condition (29) on the sample size, and the monotonicity property (15). Overall, we have shown that the assumption (53) of Lemma 4 holds, allowing us to conclude that $\hat{\Theta} = \tilde{\Theta}$. The estimator $\hat{\Theta}$ then satisfies the $\ell_\infty$-bound (65) of $\tilde{\Theta}$, as claimed in Theorem 1(a), and moreover, we have $\hat{\Theta}_{S^c} = \tilde{\Theta}_{S^c} = 0$, as claimed in Theorem 1(b). Since the above was conditioned on the event $\mathcal{A}$, these statements hold with probability at least $1 - 1/p^{\tau-2}$.

Proof of Theorem 2
We now turn to the proof of Theorem 2. A little calculation shows that the assumed lower bound (37) on the sample size $n$ and the monotonicity property (15) together guarantee that

$$2 K_{\Gamma^*} (1 + 8/\alpha)\, \bar{\delta}_f(n, p^\tau) \le \frac{\theta_{\min}}{2}.$$

Proceeding as in the proof of Theorem 1, with probability at least $1 - 1/p^{\tau-2}$, we have the equality $\hat{\Theta} = \tilde{\Theta}$, and also that $\|\hat{\Theta} - \Theta^*\|_\infty \le \theta_{\min}/2$. Consequently, Lemma 7 can be applied, guaranteeing that $\operatorname{sign}(\Theta^*_{ij}) = \operatorname{sign}(\hat{\Theta}_{ij})$ for all $(i,j) \in E$. Overall, we conclude that with probability at least $1 - 1/p^{\tau-2}$, the sign consistency condition $\operatorname{sign}(\Theta^*_{ij}) = \operatorname{sign}(\hat{\Theta}_{ij})$ holds for all $(i,j) \in E$, as claimed.

Proof of Corollary 4
With the shorthand $\hat{\Delta} = \hat{\Theta} - \Theta^*$, we have

$$\hat{\Sigma} - \Sigma^* = (\Theta^* + \hat{\Delta})^{-1} - (\Theta^*)^{-1}.$$

From the definition (52) of the residual $R(\cdot)$, this difference can be written as

$$\hat{\Sigma} - \Sigma^* = -\Sigma^* \hat{\Delta} \Sigma^* + L(\hat{\Delta}),$$

where $L(\hat{\Delta}) := R(\hat{\Delta})$ denotes the remainder of the first-order expansion. Proceeding as in the proof of Theorem 1, we condition on the event $\mathcal{A} = \{\|W\|_\infty \le \bar{\delta}_f(n, p^\tau)\}$, which holds with probability $\mathbb{P}(\mathcal{A}) \ge 1 - \frac{1}{p^{\tau-2}}$. As in the proof of that theorem, we are guaranteed that the assumptions of Lemma 5 are satisfied. We begin by proving the bound (45). From equation (66), we have

$$\|\hat{\Sigma} - \Sigma^*\|_\infty \le \|\Sigma^* \hat{\Delta} \Sigma^*\|_\infty + \|L(\hat{\Delta})\|_\infty \le K^2_{\Sigma^*} \|\hat{\Delta}\|_\infty + \|L(\hat{\Delta})\|_\infty,$$

where we used the inequality $\|\Delta u\|_\infty \le \|\Delta\|_\infty \|u\|_1$. The quantity $L(\hat{\Delta})$ in turn can be bounded, using Lemma 5, as

$$\|L(\hat{\Delta})\|_\infty \le \frac{3}{2}\, d\, \|\hat{\Delta}\|^2_\infty\, K^3_{\Sigma^*} \le \frac{1}{2} K^2_{\Sigma^*} \|\hat{\Delta}\|_\infty,$$

where we have used the fact that $\frac{3}{2}\, d\, K_{\Sigma^*} \|\hat{\Delta}\|_\infty \le \frac{1}{2}$ under the sample size condition of the theorem. Combining the pieces, we obtain

$$\|\hat{\Sigma} - \Sigma^*\|_\infty \le \frac{3}{2} K^2_{\Sigma^*} \|\hat{\Delta}\|_\infty.$$

The claim then follows from the elementwise $\ell_\infty$-norm bound (30) from Theorem 1. Next, we establish the bound (46) in spectral norm. Taking the $\ell_\infty$ operator norm of both sides of equation (66) yields

$$|||\hat{\Sigma} - \Sigma^*|||_\infty \le |||\Sigma^* \hat{\Delta} \Sigma^*|||_\infty + |||L(\hat{\Delta})|||_\infty \le K^2_{\Sigma^*} |||\hat{\Delta}|||_\infty + |||L(\hat{\Delta})|||_\infty.$$

Using the expansion (67), and the sub-multiplicativity of the $\ell_\infty$ operator norm, we obtain

$$|||L(\hat{\Delta})|||_\infty \le K^3_{\Sigma^*} |||\hat{\Delta}|||^2_\infty\, |||J|||_\infty \le \frac{3}{2} K^3_{\Sigma^*} |||\hat{\Delta}|||^2_\infty,$$

where the last inequality uses the bound $|||J|||_\infty \le 3/2$. (Proceeding as in the proof of Lemma 5, this bound holds conditioned on $\mathcal{A}$, and for the sample size specified in the theorem statement.) Combining the pieces yields

$$|||\hat{\Sigma} - \Sigma^*|||_\infty \le K^2_{\Sigma^*} |||\hat{\Delta}|||_\infty \Big( 1 + \frac{3}{2} K_{\Sigma^*} |||\hat{\Delta}|||_\infty \Big).$$

Conditioned on the event $\mathcal{A}$, we have the bound (42) on the $\ell_\infty$-operator norm, $|||\hat{\Delta}|||_\infty \le d\, \|\hat{\Delta}\|_\infty$. Substituting this bound, as well as the elementwise $\ell_\infty$-norm bound (30) from Theorem 1, into the bound (69), and noting that the spectral norm of a symmetric matrix is upper bounded by its $\ell_\infty$-operator norm, yields the stated claim.

Experiments
In this section, we illustrate our results with various experimental simulations, reporting results in terms of the probability of correct model selection (Theorem 2) or the $\ell_\infty$-error (Theorem 1). For these illustrations, we study the case of Gaussian graphical models, and report results for three different classes of graphs, namely chains, grids, and star-shaped graphs. We also consider various scalings of the quantities which affect the performance of the estimator: in addition to the triple $(n, p, d)$, we also report some results concerning the role of the parameters $K_{\Sigma^*}$, $K_{\Gamma^*}$ and $\theta_{\min}$ that we have identified in the main theorems. For all results reported here, we solved the resulting $\ell_1$-penalized log-determinant program (11) using the glasso program of Friedman et al. [10], which builds on the block co-ordinate descent algorithm of d'Aspremont et al. [8]. Figure 3 illustrates the three types of graphs used in our simulations: chain graphs (panel (a)), four-nearest neighbor lattices or grids (panel (b)), and star-shaped graphs (panel (c)). For the chain and grid graphs, the maximal node degree $d$ is fixed by definition, to $d = 2$ for chains, and $d = 4$ for the grids. Consequently, these graphs can capture the dependence of the required sample size $n$ only as a function of the graph size $p$, and the parameters $(K_{\Sigma^*}, K_{\Gamma^*}, \theta_{\min})$. The star graph allows us to vary both $d$ and $p$, since the degree of the central hub can be varied between 1 and $p - 1$.
For each graph type, we varied the size of the graph p in different ranges, from p = 64 upwards to p = 375.
For the chain and star graphs, we define a covariance matrix $\Sigma^*$ with entries $\Sigma^*_{ii} = 1$ for all $i = 1, \dots, p$, and $\Sigma^*_{ij} = \rho$ for all $(i,j) \in E$, for specific values of $\rho$ specified below. Note that these covariance matrices are sufficient to specify the full model. For the four-nearest neighbor grid graph, we set the entries of the concentration matrix $\Theta^*_{ij} = \omega$ for $(i,j) \in E$, with the value $\omega$ specified below. In all cases, we set the regularization parameter $\lambda_n$ proportional to $\sqrt{\log(p)/n}$, as suggested by Theorems 1 and 2, which is reasonable since the main purpose of these simulations is to illustrate our theoretical results. However, for general data sets, the relevant theoretical parameters cannot be computed (since the true model is unknown), so that a data-driven approach such as cross-validation might be required for selecting the regularization parameter $\lambda_n$.
Given a Gaussian graphical model instance, and the number of samples n, we drew N = 100 batches of n independent samples from the associated multivariate Gaussian distribution.We estimated the probability of correct model selection as the fraction of the N = 100 trials in which the estimator recovers the signed-edge set exactly.
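A rough re-creation of this protocol (using scikit-learn's GraphicalLasso in place of the R glasso package, a tridiagonal precision matrix as the chain-graph instance, and far fewer trials) might look as follows:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def signed_edges(Theta, tol=1e-3):
    """Set of (i, j, sign) triples for the upper-triangular non-zeros."""
    p = Theta.shape[0]
    return {(i, j, int(np.sign(Theta[i, j])))
            for i in range(p) for j in range(i + 1, p)
            if abs(Theta[i, j]) > tol}

def success_prob(p, n, omega=0.4, trials=20, seed=0):
    """Fraction of trials with exact signed edge-set recovery."""
    rng = np.random.default_rng(seed)
    Theta = np.eye(p) + omega * (np.eye(p, k=1) + np.eye(p, k=-1))  # chain
    Sigma = np.linalg.inv(Theta)
    truth = signed_edges(Theta)
    hits = 0
    for _ in range(trials):
        X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
        lam = np.sqrt(np.log(p) / n)       # lambda_n ~ sqrt(log p / n)
        Theta_hat = GraphicalLasso(alpha=lam).fit(X).precision_
        hits += (signed_edges(Theta_hat) == truth)
    return hits / trials

for n in [100, 400, 1600]:
    print(n, success_prob(p=32, n=n))      # success rises with n
```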
Note that any multivariate Gaussian random vector is sub-Gaussian; in particular, the rescaled variates $X_i/\sqrt{\Sigma^*_{ii}}$ are sub-Gaussian with parameter $\sigma = 1$, so that the elementwise $\ell_\infty$-bound from Corollary 1 applies. Suppose we collect the relevant parameters, such as $\theta_{\min}$ and the covariance- and Hessian-related terms $K_{\Sigma^*}$, $K_{\Gamma^*}$ and $\alpha$, into a single "model-complexity" term $K$, given by a polynomial in $(1 + 8/\alpha)$, $K_{\Sigma^*}$, $K_{\Gamma^*}$ and $\theta_{\min}^{-1}$ determined by Theorem 2. Then, as a corollary of Theorem 2, a sample size of order

$$n = \Omega\big( K\, d^2\, \tau \log p \big)$$

is sufficient for model selection consistency with probability greater than $1 - 1/p^{\tau-2}$. In the subsections to follow, we investigate how the empirical sample size $n$ required for model selection consistency scales in terms of graph size $p$, maximum degree $d$, as well as the "model-complexity" term $K$ defined above.

Dependence on graph size
Panel (a) of Figure 4 plots the probability of correct signed edge-set recovery against the sample size $n$ for a chain-structured graph of three different sizes. For these chain graphs, regardless of the number of nodes $p$, the maximum node degree is constant, $d = 2$, while the edge covariances are set as $\Sigma^*_{ij} = 0.2$ for all $(i,j) \in E$, so that the quantities $(K_{\Sigma^*}, K_{\Gamma^*}, \alpha)$ remain constant. Each of the curves in panel (a) corresponds to a different graph size $p$. For each curve, the probability of success starts at zero (for small sample sizes $n$), but then transitions to one as the sample size is increased. As would be expected, it is more difficult to perform model selection for larger graph sizes, so that (for instance) the curve for $p = 375$ is shifted to the right relative to the curve for $p = 64$. Panel (b) of Figure 4 replots the same data, with the horizontal axis rescaled by $(1/\log p)$. This scaling was chosen because, for sub-Gaussian tails, our theory predicts that the sample size should scale logarithmically with $p$ (see equation (71)). Consistent with this prediction, when plotted against the rescaled sample size $n/\log p$, the curves in panel (b) all stack up. Consequently, the ratio $(n/\log p)$ acts as an effective sample size in controlling the success of model selection, consistent with the predictions of Theorem 2 for sub-Gaussian variables.
Figure 5 shows the same types of plots for a star-shaped graph with fixed maximum node degree $d = 40$, and Figure 6 shows the analogous plots for a grid graph with fixed degree $d = 4$. As in the chain case, these plots show the same type of stacking effect in terms of the scaled sample size $n/\log p$, when the degree $d$ and other parameters $(\alpha, K_{\Gamma^*}, K_{\Sigma^*})$ are held fixed.

Dependence on the maximum node degree
Panel (a) of Figure 7 plots the probability of correct signed edge-set recovery against the sample size $n$ for star-shaped graphs; each curve corresponds to a different choice of maximum node degree $d$, allowing us to investigate the dependence of the sample size on this parameter. So as to control these comparisons, the models are chosen such that quantities other than the maximum node-degree $d$ are fixed: in particular, we fix the number of nodes $p = 200$, and the edge covariance entries are set as $\Sigma^*_{ij} = 2.5/d$ for $(i,j) \in E$, so that the quantities $(K_{\Sigma^*}, K_{\Gamma^*}, \alpha)$ remain constant. The minimum value $\theta_{\min}$ in turn scales as $1/d$. Observe how the plots in panel (a) shift to the right as the maximum node degree $d$ is increased, showing that star-shaped graphs with higher degrees are more difficult. In panel (b) of Figure 7, we plot the same data versus the rescaled sample size $n/d$. Recall that if all the curves were to stack up under this rescaling, then it would mean that the required sample size $n$ scales linearly with $d$. These plots are closer to aligning than the unrescaled plots, but the agreement is not perfect. In particular, observe that the curve for the largest degree $d$ (right-most in panel (a)) remains a bit to the right in panel (b), which suggests that a somewhat more aggressive rescaling--perhaps $n/d^\gamma$ for some $\gamma \in (1, 2)$--is appropriate.
Note that for $\theta_{\min}$ scaling as $1/d$, the sufficient condition from Theorem 2, as summarized in equation (71), is $n = \Omega(d^2 \log p)$, which appears to be overly conservative based on these data. Thus, it might be possible to tighten our theory under certain regimes.

Dependence on covariance and Hessian terms
Next, we study the dependence of the sample size required for model selection consistency on the model-complexity term $K$ defined in (70), which collects the quantities $K_{\Sigma^*}$, $K_{\Gamma^*}$ and $\alpha$ defined by the covariance matrix and Hessian, as well as the minimum value $\theta_{\min}$. Figure 8 plots the probability of correct signed edge-set recovery against the sample size $n$ for graphs with different values of $K$, but with a fixed number of nodes $p = 120$ and maximum node degree $d = 2$. We varied the factor $K$ by varying the value $\rho$ of the edge covariances $\Sigma^*_{ij} = \rho$ for $(i,j) \in E$. Notice how the curves, each of which corresponds to a different model-complexity factor, shift rightwards as $K$ is increased, so that models with larger values of $K$ require a greater number of samples $n$ to achieve the same probability of correct model selection. These rightward shifts are in qualitative agreement with the prediction of Theorem 1, but we suspect that our analysis is not sharp enough to make accurate quantitative predictions regarding this scaling.
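Continuing the illustrative code from above, one can trace how these complexity terms grow with the edge covariance $\rho$ for a small chain model; chain_model and complexity_terms are the hypothetical helpers sketched earlier, not quantities defined in the paper:

```python
# Illustrative: track (K_Sigma, K_Gamma, alpha, theta_min) as rho grows,
# reusing chain_model and complexity_terms from the earlier sketches.
for rho in (0.1, 0.2, 0.3, 0.4):
    sigma, theta = chain_model(16, rho=rho)
    theta = np.where(np.abs(theta) > 1e-8, theta, 0.0)  # clean numerical noise
    k_sigma, k_gamma, alpha, theta_min = complexity_terms(theta)
    print(f"rho={rho:.1f}: K_Sigma={k_sigma:.2f}, K_Gamma={k_gamma:.2f}, "
          f"alpha={alpha:.2f}, theta_min={theta_min:.3f}")
```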

Convergence rates in elementwise ℓ ∞ -norm
Finally, we report some simulation results on the convergence rate in elementwise $\ell_\infty$-norm. According to Corollary 1, in the case of sub-Gaussian tails, the elementwise $\ell_\infty$-norm should decay at rate $\mathcal{O}(\sqrt{\log p / n})$ once the sample size $n$ is sufficiently large. Figure 9 shows the behavior of the elementwise $\ell_\infty$-norm for star-shaped graphs of varying sizes $p$. The results reported here correspond to the maximum degree $d = \lceil 0.1p \rceil$; we also performed analogous experiments for $d = O(\log p)$ and $d = O(1)$, and observed qualitatively similar behavior. The edge correlations were set to $\Sigma^*_{ij} = 2.5/d$ for all $(i,j) \in E$, so that the quantities $(K_{\Sigma^*}, K_{\Gamma^*}, \alpha)$ remain constant. With these settings, each curve in Figure 9 corresponds to a different problem size, and plots the elementwise $\ell_\infty$-error versus the rescaled sample size $n/\log p$, so that we expect to see curves of the form $f(t) = 1/\sqrt{t}$. The curves show that when the rescaled sample size $n/\log p$ is larger than some threshold (roughly 40 in the plots shown), the elementwise $\ell_\infty$-norm decays at the rate $\sqrt{\log p / n}$, which is consistent with Corollary 1.
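A minimal numerical check of this rate, again reusing the illustrative chain-graph helpers above rather than the paper's exact star-graph setup, is to track the rescaled error $\|\hat{\Theta} - \Theta^*\|_\infty \cdot \sqrt{n/\log p}$, which should settle to a roughly constant value once $n$ is sufficiently large:

```python
# Illustrative rate check: the rescaled error in the last column should be
# roughly constant once n is large, if the O(sqrt(log p / n)) rate holds.
import numpy as np
from sklearn.covariance import GraphicalLasso

p = 64
rng = np.random.default_rng(1)
sigma, theta = chain_model(p)  # helper from the earlier sketch
for n in (500, 1000, 2000, 4000, 8000):
    x = rng.multivariate_normal(np.zeros(p), sigma, size=n)
    lam = 0.5 * np.sqrt(np.log(p) / n)  # illustrative choice, as before
    err = np.abs(GraphicalLasso(alpha=lam, max_iter=200).fit(x).precision_ - theta).max()
    print(f"n={n:5d}: error={err:.4f}, rescaled={err * np.sqrt(n / np.log(p)):.3f}")
```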

Discussion
The focus of this paper is the analysis of the high-dimensional scaling of the $\ell_1$-regularized log-determinant problem (11) as an estimator of the concentration matrix of a random vector. Our main contributions were to derive sufficient conditions for its model selection consistency, as well as convergence rates in elementwise $\ell_\infty$-norm and in Frobenius and spectral norms. Our results allow for a range of tail behavior, from the exponential-type decay provided by Gaussian random vectors (and sub-Gaussian variables more generally), to the polynomial-type decay guaranteed by moment conditions. In the Gaussian case, our results have natural interpretations in terms of Gaussian Markov random fields. Our main results relate the i.i.d. sample size $n$ to the various parameters of the problem required to achieve consistency. In addition to the dependence on the matrix size $p$, the number of edges $s$ and the graph degree $d$, our analysis also illustrates the role of other quantities, related to the structure of the covariance matrix $\Sigma^*$ and the Hessian of the objective function, that influence consistency rates. Our main assumption is an irrepresentability or mutual incoherence condition, similar to that required for model selection consistency of the Lasso, but involving the Hessian of the log-determinant objective function (11), evaluated at the true model. When the distribution of $X$ is multivariate Gaussian, this Hessian is the Fisher information matrix of the model, and can thus be viewed as an edge-based counterpart to the usual node-based covariance matrix. We report some examples where the irrepresentability condition for the Lasso holds and the log-determinant condition fails, but we do not know in general whether one requirement dominates the other. In addition to these theoretical results, we provided a number of simulation studies showing how the sample size required for consistency scales with problem size, node degrees, and the other complexity parameters identified in our analysis.
There are various interesting questions and possible extensions of this paper. First, in the current paper we have only derived sufficient conditions for model selection consistency. As in past work on the Lasso [23], it would also be interesting to derive a converse result: namely, to prove that if the sample size $n$ is smaller than some function of $(p, d, s)$ and other complexity parameters, then, regardless of the choice of regularization constant, the log-determinant method fails to recover the correct graph structure. Second, while this paper studies the problem of estimating a fixed graph or concentration matrix, a natural extension would allow the graph to vary over time, a problem setting that includes the case where the observations are dependent. For instance, Zhou et al. [27] study the estimation of the covariance matrix of a Gaussian distribution in a time-varying setting, and it would be interesting to extend the results of this paper to this more general setting.
If $\lambda_n > 0$, then by Lagrangian duality the problem can be written in an equivalent constrained form:
$$\min_{\Theta \succ 0,\; \|\Theta\|_{1,\text{off}} \le C(\lambda_n)} \Big\{ \langle\!\langle \Theta, \hat{\Sigma}^n \rangle\!\rangle - \log\det\Theta \Big\},$$
for some $C(\lambda_n) < +\infty$. Since the off-diagonal elements remain bounded within the $\ell_1$-ball, the only possible issue is the behavior of the objective function for sequences with possibly unbounded diagonal entries. Since any $\Theta$ in the constraint set is positive definite, its diagonal entries are positive, and hence bounded below by zero. Further, by Hadamard's inequality for positive definite matrices [12], we have $\log\det\Theta \le \sum_{i=1}^p \log\Theta_{ii}$, so that
$$\langle\!\langle \Theta, \hat{\Sigma}^n \rangle\!\rangle - \log\det\Theta \;\ge\; \sum_{i=1}^p \big( \hat{\Sigma}^n_{ii}\,\Theta_{ii} - \log\Theta_{ii} \big) \;-\; \|\hat{\Sigma}^n\|_\infty\, C(\lambda_n).$$
As long as $\hat{\Sigma}^n_{ii} > 0$ for each $i = 1, \ldots, p$, this function is coercive, meaning that it diverges to infinity along any sequence $(\Theta^t_{11}, \ldots, \Theta^t_{pp})$ with $\max_i \Theta^t_{ii} \to +\infty$. Consequently, the minimum is attained. Returning to the penalized form (11), by standard optimality conditions for convex programs, a matrix $\Theta \succ 0$ is optimal if and only if $0$ belongs to the sub-differential of the objective, or equivalently if and only if there exists a matrix $Z$ in the sub-differential of the off-diagonal norm $\|\cdot\|_{1,\text{off}}$, evaluated at $\Theta$, such that
$$\hat{\Sigma}^n - \Theta^{-1} + \lambda_n Z \;=\; 0.$$
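This stationarity condition is easy to check numerically. The sketch below is ours; it assumes, as the scikit-learn documentation states, that GraphicalLasso penalizes only the off-diagonal entries of the precision matrix. It verifies that the gradient $\hat{\Sigma}^n - \Theta^{-1}$ vanishes on the diagonal and is dominated by $\lambda_n$ off the diagonal, up to solver tolerance:

```python
# Numerical check of the zero sub-gradient condition: on the diagonal the
# gradient Sigma_hat - inv(Theta_hat) should be ~0, while off the diagonal
# the residual divided by lambda should lie (approximately) in [-1, 1].
import numpy as np
from sklearn.covariance import GraphicalLasso, empirical_covariance

rng = np.random.default_rng(2)
x = rng.standard_normal((400, 10))
lam = 0.1

model = GraphicalLasso(alpha=lam, max_iter=500).fit(x)
s_hat = empirical_covariance(x)                 # sample covariance (MLE form)
grad = s_hat - np.linalg.inv(model.precision_)  # gradient of the smooth part

off = ~np.eye(10, dtype=bool)
print("max |grad| on diagonal:", np.abs(grad[~off]).max())        # ~ 0
print("max |grad/lam| off-diagonal:", np.abs(grad[off] / lam).max())  # <~ 1
```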

B Proof of Lemma 5
We write the remainder in the form
$$R(\Delta) \;=\; (\Theta^* + \Delta)^{-1} - \Theta^{*-1} + \Theta^{*-1}\Delta\,\Theta^{*-1}.$$
By sub-multiplicativity of the $|||\cdot|||_\infty$ operator norm, we have
$$|||\Theta^{*-1}\Delta|||_\infty \;\le\; |||\Theta^{*-1}|||_\infty\, |||\Delta|||_\infty \;\le\; K_{\Sigma^*}\, d\, \|\Delta\|_\infty \;<\; 1/3,$$
where we have used the definition of $K_{\Sigma^*}$, the fact that $\Delta$ has at most $d$ non-zeros per row/column, and our assumption $\|\Delta\|_\infty < 1/(3K_{\Sigma^*}d)$. Consequently, we have the convergent matrix expansion
$$(\Theta^* + \Delta)^{-1} \;=\; \sum_{k=0}^{\infty} (-1)^k \big(\Theta^{*-1}\Delta\big)^k\, \Theta^{*-1},$$
so that the remainder takes the form
$$R(\Delta) \;=\; \Theta^{*-1}\Delta\,\Theta^{*-1}\Delta\, J\, \Theta^{*-1}, \qquad J \,:=\, \sum_{k=0}^{\infty} (-1)^k \big(\Theta^{*-1}\Delta\big)^k. \tag{57}$$
We now prove the bound (58) on the remainder as follows. Let $e_i$ denote the unit vector with 1 in position $i$ and zeros elsewhere. From equation (57), we have
$$\big|e_i^T R(\Delta)\, e_j\big| \;\le\; \big\|e_i^T \Theta^{*-1}\Delta\big\|_\infty\, \big\|\Theta^{*-1}\Delta\, J\, \Theta^{*-1} e_j\big\|_1,$$
which follows from the fact that for any vectors $a, b \in \mathbb{R}^p$, $|a^T b| \le \|a\|_\infty \|b\|_1$. This in turn can be bounded using the operator norms of the factors, together with $|||J|||_\infty \le \sum_{k=0}^{\infty} |||\Theta^{*-1}\Delta|||_\infty^k \le 3/2$, which yields the claimed bound, where the final step follows since $|||\Delta|||_\infty \le d\,\|\Delta\|_\infty$, because $\Delta$ has at most $d$ non-zeros per row/column.
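As a quick sanity check (ours, not part of the paper), the exact algebraic identity behind equation (57) can be verified numerically, using the closed form $J = (I + \Theta^{*-1}\Delta)^{-1}$ of the Neumann series:

```python
# Numerically verify R(Delta) = Theta^{-1} Delta Theta^{-1} Delta J Theta^{-1},
# with J = sum_k (-1)^k (Theta^{-1} Delta)^k, for a random well-conditioned pair.
import numpy as np

rng = np.random.default_rng(3)
p = 6
a = rng.standard_normal((p, p))
theta = a @ a.T + p * np.eye(p)          # positive definite Theta*
delta = 0.01 * rng.standard_normal((p, p))
delta = (delta + delta.T) / 2            # small symmetric perturbation

theta_inv = np.linalg.inv(theta)
remainder = np.linalg.inv(theta + delta) - theta_inv + theta_inv @ delta @ theta_inv
j = np.linalg.inv(np.eye(p) + theta_inv @ delta)  # closed form of the series
rhs = theta_inv @ delta @ theta_inv @ delta @ j @ theta_inv
print(np.abs(remainder - rhs).max())     # ~ 1e-16, i.e. floating-point error
```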

C Proof of Lemma 6
By following the same argument as in Appendix A, we conclude that the restricted problem (49) has a unique optimum $\tilde{\Theta}$. Let $\tilde{Z}$ be any member of the sub-differential of $\|\cdot\|_{1,\text{off}}$, evaluated at $\tilde{\Theta}$. By Lagrangian theory, the witness $\tilde{\Theta}$ must be an optimum of the associated Lagrangian problem. In fact, since this Lagrangian is strictly convex, $\tilde{\Theta}$ is the only optimum of this problem. Since the log-determinant barrier diverges as $\Theta$ approaches the boundary of the positive semi-definite cone, we must have $\tilde{\Theta} \succ 0$. If we take partial derivatives of the Lagrangian with respect to the unconstrained elements $\Theta_S$, these partial derivatives must vanish at the optimum, yielding the zero-gradient condition
$$G(\Theta_S) \;:=\; -\big[\Theta^{-1}\big]_S + \hat{\Sigma}^n_S + \lambda_n \tilde{Z}_S \;=\; 0. \tag{75}$$
To be clear, $\Theta$ is the $p \times p$ matrix with entries in $S$ equal to $\Theta_S$ and entries in $S^c$ equal to zero. Since this zero-gradient condition is necessary and sufficient for an optimum of the Lagrangian problem, it has a unique solution (namely, $\tilde{\Theta}_S$).
Our goal is to bound the deviation of this solution from $\Theta^*_S$, or equivalently to bound the deviation $\Delta = \tilde{\Theta} - \Theta^*$. Our strategy is to show the existence of a solution $\Delta$ to the zero-gradient condition (75) that is contained inside the ball $B(r)$ defined in equation (61). By uniqueness of the optimal solution, we can thus conclude that $\tilde{\Theta} - \Theta^*$ belongs to this ball. In terms of the vector $\Delta_S = \Theta_S - \Theta^*_S$, let us define the map $F: \mathbb{R}^{|S|} \to \mathbb{R}^{|S|}$ by
$$F(\Delta_S) \;:=\; -\big(\Gamma^*_{SS}\big)^{-1}\, \bar{G}\big(\Theta^*_S + \Delta_S\big) + \Delta_S,$$
where $\bar{G}$ denotes the vectorized form of $G$. Note that by construction, $F(\Delta_S) = \Delta_S$ holds if and only if $G(\Theta^*_S + \Delta_S) = G(\tilde{\Theta}_S) = 0$.
We now claim that $F(B(r)) \subseteq B(r)$. Since $F$ is continuous and $B(r)$ is convex and compact, this inclusion implies, by Brouwer's fixed point theorem [20], that there exists some fixed point $\Delta_S \in B(r)$. By uniqueness of the zero-gradient condition (and hence of fixed points of $F$), we can thereby conclude that $\|\tilde{\Theta}_S - \Theta^*_S\|_\infty \le r$. Let $\Delta \in \mathbb{R}^{p \times p}$ denote the zero-padded matrix, equal to $\Delta_S$ on $S$ and zero on $S^c$. By definition, we have
$$G(\Theta^*_S + \Delta_S) \;=\; -\big[(\Theta^* + \Delta)^{-1}\big]_S + \Sigma^*_S + W_S + \lambda_n \tilde{Z}_S, \tag{77}$$
where we have used the definition $W = \hat{\Sigma}^n - \Sigma^*$. For any $\Delta_S \in B(r)$, we have $|||\Delta|||_\infty \le d\,\|\Delta\|_\infty$, where $\|\Delta\|_\infty$ denotes the elementwise $\ell_\infty$-norm (as opposed to the $\ell_\infty$-operator norm $|||\Delta|||_\infty$); the inequality follows since $\Delta$ has at most $d$ non-zero entries per row/column. By the definition (61) of the radius $r$ and the assumed upper bound (59), we have $\|\Delta\|_\infty \le r \le \frac{1}{3K_{\Sigma^*}d}$, so that the results of Lemma 5 apply. By using the definition (52) of the remainder, taking the vectorized form of the expansion (57), and restricting to entries in $S$, we obtain the expansion
$$\overline{\big[(\Theta^* + \Delta)^{-1}\big]}_S \;=\; \bar{\Sigma}^*_S - \Gamma^*_{SS}\,\bar{\Delta}_S + \overline{R(\Delta)}_S. \tag{79}$$
Using this expansion (79) combined with the expression (77) for $G$, we have
$$F(\bar{\Delta}_S) \;=\; \underbrace{\big(\Gamma^*_{SS}\big)^{-1}\,\overline{R(\Delta)}_S}_{T_1} \;+\; \underbrace{\Big(\!-\big(\Gamma^*_{SS}\big)^{-1}\big(\bar{W}_S + \lambda_n \bar{\tilde{Z}}_S\big)\Big)}_{T_2}.$$
The second term is easy to deal with: using the definition $K_{\Gamma^*} = |||(\Gamma^*_{SS})^{-1}|||_\infty$, we have $\|T_2\|_\infty \le K_{\Gamma^*}\big(\|W\|_\infty + \lambda_n\big) = r/2$. It now remains to show that $\|T_1\|_\infty \le r/2$. We have $\|T_1\|_\infty \le K_{\Gamma^*}\,\|R(\Delta)\|_\infty$, where we used the expanded form (57) of the remainder. Applying the bound (58) from Lemma 5, we obtain
$$\|T_1\|_\infty \;\le\; \tfrac{3}{2}\, d\, K_{\Gamma^*} K^3_{\Sigma^*}\, \|\Delta\|^2_\infty \;\le\; \tfrac{3}{2}\, d\, K_{\Gamma^*} K^3_{\Sigma^*}\, r^2.$$
Since $r \le \frac{1}{3K^3_{\Sigma^*} K_{\Gamma^*} d}$ by assumption (59), we conclude that $\|T_1\|_\infty \le r/2$, thereby establishing the claim.
We can exploit this lemma to apply tail bounds for sums of i.i.d. sub-exponential variates (Thm. 5.1, [6]); doing so yields a tail bound valid for all $t$ below a threshold $\gamma$. We obtain such a bound $B$ as follows. Using the inequality $(a + b)^m \le 2^m(a^m + b^m)$, valid for any real numbers $a$ and $b$, and recalling the definition of $U$, we obtain a bound in which we have used the inequality $(x + y)^{1/m} \le 2^{1/m}(x^{1/m} + y^{1/m})$, valid for any integer $m \in \mathbb{N}$ and real numbers $x, y > 0$. Since the bound is a decreasing function of $m$, the stated inequality follows. The quantity $T_1$ is equal to the number of ways to put $2m$ balls into $n$ bins such that every non-empty bin contains at least two balls; note that this implies that at most $m$ bins can be non-empty. Consequently, the term $T_1$ is bounded above by the product of the number of ways to choose $m$ of the $n$ bins and the number of ways to put $2m$ balls into $m$ bins, viz.
$$T_1 \;\le\; \binom{n}{m}\, m^{2m}.$$
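The counting bound on $T_1$ can be verified by brute force for small $(n, m)$; the following check is ours, not the paper's:

```python
# Brute-force check of the balls-in-bins bound T1 <= C(n, m) * m^(2m) for
# small n and m: count assignments of 2m labeled balls to n bins in which
# every non-empty bin receives at least two balls.
from itertools import product
from math import comb

def t1_exact(n, m):
    count = 0
    for assign in product(range(n), repeat=2 * m):
        occupancy = [assign.count(b) for b in range(n)]
        if all(c == 0 or c >= 2 for c in occupancy):
            count += 1
    return count

for n, m in [(4, 2), (5, 2), (6, 3)]:
    print(n, m, t1_exact(n, m), comb(n, m) * m ** (2 * m))
```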
Turning now to the second term $T_2$, note the following inequality: for any numbers $(v_1, \ldots, v_\ell) \in \mathbb{R}^\ell_+$ and non-negative integers $(a_1, \ldots, a_\ell)$, we have $\prod_{i=1}^{\ell} v_i^{a_i} \le \big(\max_i v_i\big)^{\sum_i a_i}$. Using the inequality $(a + b)^{2m} \le 2^{2m}(a^{2m} + b^{2m})$, valid for all real numbers $a$ and $b$, followed by an application of the Cauchy-Schwarz inequality and the assumed moment bound of order $4m$ on the rescaled variates, we obtain a bound on $T_2$. Substituting back into equation (84) yields the stated bound. Noting that $\Sigma^*_{ii}$, $\Sigma^*_{jj}$, and $\Sigma^*_{ij}$ are all bounded above by $\max_i \Sigma^*_{ii}$, we obtain the claim.

Figure 1.
Figure 1. (a) Simple undirected graph. A Gauss-Markov random field has a Gaussian variable $X_i$ associated with each vertex $i \in V$. This graph has $p = 5$ vertices, maximum degree $d = 3$ and $s = 6$ edges. (b) Zero pattern of the inverse covariance $\Theta^*$ associated with the GMRF in (a). The set $E(\Theta^*)$ corresponds to the off-diagonal non-zeros (white blocks); the diagonal is also non-zero (grey squares), but these entries do not correspond to edges. The black squares correspond to non-edges, or zeros in $\Theta^*$.

Figure 4.
Figure 4. Simulations for chain graphs with a varying number of nodes $p$ and edge covariances $\Sigma^*_{ij} = 0.10$. Plots of the probability of correct signed edge-set recovery versus the ordinary sample size $n$ in panel (a), and versus the rescaled sample size $n/\log p$ in panel (b). Each point corresponds to the average over 100 trials.

Figure 5.
Figure 5. Simulations for a star graph with a varying number of nodes $p$, fixed maximal degree $d = 40$, and edge covariances $\Sigma^*_{ij} = 1/16$ for all edges. Plots of the probability of correct signed edge-set recovery versus the sample size $n$ in panel (a), and versus the rescaled sample size $n/\log p$ in panel (b). Each point corresponds to the average over $N = 100$ trials.

Figure 6.
Figure 6. Simulations for a two-dimensional lattice with 4-nearest-neighbor interaction, edge-strength interactions $\Theta^*_{ij} = 0.1$, and a varying number of nodes $p$. Plots of the probability of correct signed edge-set recovery versus the sample size $n$ in panel (a), and versus the rescaled sample size $n/\log p$ in panel (b). Each point corresponds to the average over $N = 100$ trials.

Figure 7.
Figure 7. Simulations for star graphs with a fixed number of nodes $p = 200$, varying maximal (hub) degree $d$, and edge covariances $\Sigma^*_{ij} = 2.5/d$. Plots of the probability of correct signed edge-set recovery versus the sample size $n$ in panel (a), and versus the rescaled sample size $n/d$ in panel (b).

Figure 8.
Figure 8. Simulations for graphs with a fixed number of nodes $p = 120$ and maximum node degree $d = 2$, with the model-complexity term $K$ varied through the value $\rho$ of the edge covariances $\Sigma^*_{ij} = \rho$. Plots of the probability of correct signed edge-set recovery versus the sample size $n$.

Figure 9.
Figure 9. Simulations for a star graph with a varying number of nodes $p$, maximum node degree $d = \lceil 0.1p \rceil$, and edge covariances $\Sigma^*_{ij} = 2.5/d$. Plot of the elementwise $\ell_\infty$-norm of the concentration-matrix estimation error $\|\hat{\Theta} - \Theta^*\|_\infty$ versus the rescaled sample size $n/\log p$.