Hanson-Wright inequality and sub-gaussian concentration

In this expository note, we give a modern proof of the Hanson-Wright inequality for quadratic forms in sub-gaussian random variables. We deduce a useful concentration inequality for sub-gaussian random vectors. Two examples are given to illustrate these results: concentration of distances between random vectors and subspaces, and a bound on the norms of products of random and deterministic matrices.


Hanson-Wright inequality
The Hanson-Wright inequality is a general concentration result for quadratic forms in sub-gaussian random variables. A version of this theorem was first proved in [9, 19], however with one weak point mentioned in Remark 1.2. In this article we give a modern proof of the Hanson-Wright inequality, which automatically fixes the original weak point. We then deduce a useful concentration inequality for sub-gaussian random vectors, and illustrate it with two applications.
Our arguments use standard tools of high-dimensional probability. The reader unfamiliar with them may benefit from consulting the tutorial [18]. Still, we will recall the basic notions where possible. A random variable $\xi$ is called sub-gaussian if its distribution is dominated by that of a normal random variable. This can be expressed by requiring that $\mathbb{E} \exp(\xi^2/K^2) \le 2$ for some $K > 0$; the infimum of such $K$ is traditionally called the sub-gaussian or $\psi_2$ norm of $\xi$. This turns the set of sub-gaussian random variables into the Orlicz space with the Orlicz function $\psi_2(t) = \exp(t^2) - 1$. A number of other equivalent definitions are used in the literature. In particular, $\xi$ is sub-gaussian if and only if $\mathbb{E} |\xi|^p = O(p)^{p/2}$ as $p \to \infty$, so we can redefine the sub-gaussian norm of $\xi$ as
$$\|\xi\|_{\psi_2} = \sup_{p \ge 1} p^{-1/2} \left( \mathbb{E} |\xi|^p \right)^{1/p}.$$
One can show that $\|\xi\|_{\psi_2}$ defined this way is within an absolute constant factor from the infimum of $K > 0$ mentioned above, see [18, Section 5.2.3]. One can similarly define sub-exponential random variables, i.e. by requiring that
$$\|\xi\|_{\psi_1} = \sup_{p \ge 1} p^{-1} \left( \mathbb{E} |\xi|^p \right)^{1/p} < \infty.$$
For an $m \times n$ matrix $A = (a_{ij})$, recall that the operator norm of $A$ is $\|A\| = \max_{x \ne 0} \|Ax\|_2 / \|x\|_2$, and the Hilbert-Schmidt (or Frobenius) norm of $A$ is $\|A\|_{HS} = (\sum_{i,j} |a_{ij}|^2)^{1/2}$. Throughout the paper, $C, C_1, c, c_1, \ldots$ denote positive absolute constants.
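As a quick numerical aside (ours, not part of the note), the moment characterization above can be checked by simulation. The following Python sketch estimates $\sup_{p \ge 1} p^{-1/2} (\mathbb{E}|\xi|^p)^{1/p}$ from a sample; the helper name `psi2_estimate` and the truncation at `p_max` are our illustrative choices, and high empirical moments are noisy, so this is qualitative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def psi2_estimate(xi, p_max=10):
    # Empirical version of sup_{p >= 1} p^{-1/2} (E|xi|^p)^{1/p},
    # truncated at p_max since high moments are noisy in simulation.
    return max(np.mean(np.abs(xi) ** p) ** (1.0 / p) / np.sqrt(p)
               for p in range(1, p_max + 1))

print(psi2_estimate(rng.standard_normal(10**6)))      # N(0,1): stays O(1)
print(psi2_estimate(rng.choice([-1.0, 1.0], 10**6)))  # Rademacher: stays O(1)
print(psi2_estimate(rng.exponential(size=10**6)))     # sub-exponential only: grows with p_max
```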
Theorem 1.1 (Hanson-Wright inequality). Let $X = (X_1, \ldots, X_n) \in \mathbb{R}^n$ be a random vector with independent components $X_i$ which satisfy $\mathbb{E} X_i = 0$ and $\|X_i\|_{\psi_2} \le K$. Let $A$ be an $n \times n$ matrix. Then, for every $t \ge 0$,
$$\mathbb{P}\big\{ |X^T A X - \mathbb{E} X^T A X| > t \big\} \le 2 \exp\Big[ -c \min\Big( \frac{t^2}{K^4 \|A\|_{HS}^2}, \frac{t}{K^2 \|A\|} \Big) \Big].$$

Remark 1.2 (Related results). One of the aims of this note is to give a simple and self-contained proof of the Hanson-Wright inequality using only the standard toolkit of large deviation theory. Several partial results and alternative proofs are scattered in the literature.
Improving upon an earlier result of Hanson and Wright [9], Wright [19] established a slightly weaker version of Theorem 1.1. Instead of the operator norm $\|A\| = \|(a_{ij})\|$, both papers had $\|(|a_{ij}|)\|$ on the right-hand side. The latter norm can be much larger than the norm of $A$, and it is often harder to compute. This weak point went unnoticed in several later applications of the Hanson-Wright inequality; however, it was clear to experts that it could be fixed.
A proof for the case where $X_1, \ldots, X_n$ are independent symmetric Bernoulli random variables appears in the lecture notes of Nelson [14]. The moment inequality which essentially implies the result of [14] can also be found in [6]. A different approach to the Hanson-Wright inequality, due to Rauhut and Tropp, can be found in [8, Proposition 8.13]. It is presented for diagonal-free matrices (however, this assumption can be removed by treating the diagonal separately, as is done below), and for independent symmetric Bernoulli random variables (but the proof can be extended to sub-gaussian random variables).
An upper bound for $\mathbb{P}\{X^T A X - \mathbb{E} X^T A X > t\}$, which is equivalent to what appears in the Hanson-Wright inequality, can be found in [10]. However, the assumptions in [10] are somewhat different. On the one hand, it is assumed that the matrix $A$ is positive-semidefinite, while in our result $A$ can be arbitrary. On the other hand, a weaker assumption is placed on the random vector $X = (X_1, \ldots, X_n)$. Instead of assuming that the coordinates of $X$ are independent sub-gaussian random variables, it is assumed in [10] that the marginals of $X$ are uniformly sub-gaussian, i.e., that $\sup_{y \in S^{n-1}} \|\langle X, y \rangle\|_{\psi_2} \le K$.
The paper [3] contains an alternative short proof of the Hanson-Wright inequality, due to Latała, for diagonal-free matrices. As in the proof below, Latała's argument uses decoupling of the order-2 chaos. However, unlike the current paper, which uses a simple decoupling argument of Bourgain [2], his proof uses a more general and more difficult decoupling theorem for U-statistics due to de la Peña and Montgomery-Smith [5]. For an extensive discussion of modern decoupling methods, see [4].
Large deviation inequalities for polynomials of higher degree, which extend Hanson-Wright type inequalities, have been obtained by Latała [11] and recently by Adamczak and Wolff [1].
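Before turning to the proof, here is a minimal Monte Carlo sketch (ours; all variable names are illustrative) comparing the empirical tail of the quadratic form with the shape of the bound in Theorem 1.1, assuming Rademacher coordinates. The absolute constant $c$ is not computed, so only the qualitative shape of the mixed exponent is compared.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 200, 20_000
A = rng.standard_normal((n, n)) / np.sqrt(n)       # a fixed n x n matrix
X = rng.choice([-1.0, 1.0], size=(trials, n))      # Rademacher coordinates: K = O(1)

quad = ((X @ A) * X).sum(axis=1)                   # X^T A X for each trial
dev = quad - np.trace(A)                           # E X^T A X = tr(A) here
hs = np.linalg.norm(A, 'fro')                      # ||A||_HS
op = np.linalg.norm(A, 2)                          # ||A|| (operator norm)
for t in (0.5 * hs, hs, 2 * hs):
    exponent = min(t**2 / hs**2, t / op)           # mixed exponent from Theorem 1.1
    print(f"t = {t:5.1f}: empirical tail {np.mean(np.abs(dev) > t):.4f}, "
          f"bound shape exp(-c * {exponent:.2f})")
```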
Proof of Theorem 1.1. By replacing $X$ with $X/K$ we can assume without loss of generality that $K = 1$. Let us first estimate
$$p := \mathbb{P}\big\{ X^T A X - \mathbb{E} X^T A X > t \big\}.$$
By independence and zero mean of the $X_i$, we can represent
$$X^T A X - \mathbb{E} X^T A X = \sum_i a_{ii} (X_i^2 - \mathbb{E} X_i^2) + \sum_{i \ne j} a_{ij} X_i X_j.$$
The problem reduces to estimating the diagonal and off-diagonal sums:
$$p \le \mathbb{P}\Big\{ \sum_i a_{ii} (X_i^2 - \mathbb{E} X_i^2) > t/2 \Big\} + \mathbb{P}\Big\{ \sum_{i \ne j} a_{ij} X_i X_j > t/2 \Big\} =: p_1 + p_2.$$

Step 1: diagonal sum. Note that $X_i^2 - \mathbb{E} X_i^2$ are independent mean-zero sub-exponential random variables, and
$$\|X_i^2 - \mathbb{E} X_i^2\|_{\psi_1} \le 2 \|X_i^2\|_{\psi_1} \le 4 \|X_i\|_{\psi_2}^2 \le 4.$$
These standard bounds can be found in [18, Remark 5.18 and Lemma 5.14]. Then we can use a Bernstein-type inequality (see [18, Proposition 5.16]) and obtain
$$p_1 \le \exp\Big[ -c \min\Big( \frac{t^2}{\sum_i a_{ii}^2}, \frac{t}{\max_i |a_{ii}|} \Big) \Big] \le \exp\Big[ -c \min\Big( \frac{t^2}{\|A\|_{HS}^2}, \frac{t}{\|A\|} \Big) \Big]. \qquad (1.1)$$

Step 2: decoupling. It remains to bound the off-diagonal sum
$$S := \sum_{i \ne j} a_{ij} X_i X_j.$$
The argument will be based on estimating the moment generating function of $S$ by decoupling and reduction to normal random variables. Let $\lambda > 0$ be a parameter whose value we will determine later. By the exponential form of Chebyshev's inequality, we have
$$p_2 = \mathbb{P}\{S > t/2\} \le e^{-\lambda t/2}\, \mathbb{E} \exp(\lambda S). \qquad (1.2)$$
Consider independent Bernoulli random variables $\delta_i \in \{0,1\}$ with $\mathbb{E} \delta_i = 1/2$. Since $\mathbb{E}\, \delta_i (1-\delta_j)$ equals $1/4$ for $i \ne j$ and $0$ for $i = j$, we have
$$S = 4\, \mathbb{E}_\delta S_\delta, \quad \text{where} \quad S_\delta := \sum_{i \ne j} \delta_i (1-\delta_j)\, a_{ij} X_i X_j.$$
Here $\mathbb{E}_\delta$ denotes the expectation with respect to $\delta = (\delta_1, \ldots, \delta_n)$. Jensen's inequality yields
$$\mathbb{E}_X \exp(\lambda S) \le \mathbb{E}_{X,\delta} \exp(4 \lambda S_\delta), \qquad (1.3)$$
where $\mathbb{E}_{X,\delta}$ denotes expectation with respect to both $X$ and $\delta$. Consider the set of indices $\Lambda_\delta = \{i \in [n] : \delta_i = 1\}$ and express
$$S_\delta = \sum_{i \in \Lambda_\delta,\ j \in \Lambda_\delta^c} a_{ij} X_i X_j = \sum_{j \in \Lambda_\delta^c} X_j \Big( \sum_{i \in \Lambda_\delta} a_{ij} X_i \Big).$$
Now we condition on $\delta$ and $(X_i)_{i \in \Lambda_\delta}$. Then $S_\delta$ is a linear combination of mean-zero sub-gaussian random variables $X_j$, $j \in \Lambda_\delta^c$, with fixed coefficients. It follows that the conditional distribution of $S_\delta$ is sub-gaussian, and its sub-gaussian norm is bounded by the $\ell_2$-norm of the coefficient vector (see e.g. [18, Lemma 5.9]). Specifically,
$$\|S_\delta\|_{\psi_2} \le C \sigma_\delta, \quad \text{where} \quad \sigma_\delta^2 := \sum_{j \in \Lambda_\delta^c} \Big( \sum_{i \in \Lambda_\delta} a_{ij} X_i \Big)^2.$$
Next, we use a standard estimate of the moment generating function of centered sub-gaussian random variables, see [18, Lemma 5.5]. It yields
$$\mathbb{E}_{(X_j)_{j \in \Lambda_\delta^c}} \exp(4 \lambda S_\delta) \le \exp(C' \lambda^2 \sigma_\delta^2).$$
Taking expectations of both sides with respect to $(X_i)_{i \in \Lambda_\delta}$, we obtain
$$\mathbb{E}_X \exp(4 \lambda S_\delta) \le \mathbb{E}_X \exp(C' \lambda^2 \sigma_\delta^2) =: E_\delta. \qquad (1.4)$$
Recall that this estimate holds for every fixed $\delta$. It remains to estimate $E_\delta$.
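The decoupling identity $S = 4\, \mathbb{E}_\delta S_\delta$ is exact and can be verified by exhaustive enumeration over $\delta \in \{0,1\}^n$ for a small $n$. The following sketch (ours, purely illustrative) does this check numerically:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n))
np.fill_diagonal(A, 0.0)                 # keep only the off-diagonal part
x = rng.standard_normal(n)

S = x @ A @ x                            # sum over i != j of a_ij x_i x_j
# Average S_delta over all 2^n choices of delta, where
# S_delta = sum over i in Lambda, j not in Lambda of a_ij x_i x_j.
avg = 0.0
for bits in itertools.product([0, 1], repeat=n):
    d = np.array(bits, dtype=float)
    avg += (d * x) @ A @ ((1 - d) * x)
avg /= 2 ** n
print(S, 4 * avg)                        # the two numbers agree: S = 4 E_delta S_delta
```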
Step 3: reduction to normal random variables. Consider $g = (g_1, \ldots, g_n)$, where the $g_i$ are independent $N(0,1)$ random variables, independent of $X$ and $\delta$. The rotation invariance of the normal distribution implies that, for each fixed $\delta$ and $X$, the random variable
$$Z := \sum_{j \in \Lambda_\delta^c} g_j \Big( \sum_{i \in \Lambda_\delta} a_{ij} X_i \Big) \quad \text{is distributed as } N(0, \sigma_\delta^2).$$
By the formula for the moment generating function of the normal distribution, we have $\mathbb{E}_g \exp(sZ) = \exp(s^2 \sigma_\delta^2 / 2)$ for every $s \in \mathbb{R}$. Comparing this with the formula defining $E_\delta$ in (1.4), we find that the two expressions are somewhat similar. Choosing $s^2 = 2C'\lambda^2$, we can match the two expressions as follows:
$$E_\delta = \mathbb{E}_X \exp(C' \lambda^2 \sigma_\delta^2) = \mathbb{E}_{X,g} \exp(C_1 \lambda Z), \quad \text{where } C_1 = \sqrt{2C'}.$$
Rearranging the terms, we can write $Z = \sum_{i \in \Lambda_\delta} X_i \big( \sum_{j \in \Lambda_\delta^c} a_{ij} g_j \big)$. Then we can bound the moment generating function of $Z$ in the same way we bounded the moment generating function of $S_\delta$ in Step 2, only now relying on the sub-gaussian properties of $X_i$, $i \in \Lambda_\delta$. We obtain
$$E_\delta \le \mathbb{E}_g \exp\Big( C_2 \lambda^2 \sum_{i \in \Lambda_\delta} \Big( \sum_{j \in \Lambda_\delta^c} a_{ij} g_j \Big)^2 \Big).$$
To express this more compactly, let $P_\delta$ denote the coordinate projection (restriction) of $\mathbb{R}^n$ onto $\mathbb{R}^{\Lambda_\delta}$, and define the matrix $A_\delta = P_\delta A (I - P_\delta)$. Then what we have obtained is
$$E_\delta \le \mathbb{E}_g \exp\big( C_2 \lambda^2 \|A_\delta g\|_2^2 \big).$$
Recall that this bound holds for each fixed $\delta$. We have removed the original random variables $X_i$ from the problem, so it now becomes a problem about normal random variables $g_i$.
Step 4: calculation for normal random variables. By the rotation invariance of the distribution of $g$, the random variable $\|A_\delta g\|_2^2$ is distributed identically with $\sum_i s_i^2 g_i^2$, where the $s_i$ denote the singular values of $A_\delta$. Hence by independence,
$$E_\delta \le \mathbb{E} \exp\Big( C_2 \lambda^2 \sum_i s_i^2 g_i^2 \Big) = \prod_i \mathbb{E} \exp\big( C_2 \lambda^2 s_i^2 g_i^2 \big).$$
Note that each $g_i^2$ has the chi-squared distribution with one degree of freedom, whose moment generating function is $\mathbb{E} \exp(t g_i^2) = (1 - 2t)^{-1/2}$ for $t < 1/2$. Using the numeric inequality $(1-z)^{-1/2} \le e^z$, which is valid for all $0 \le z \le 1/2$, we can simplify this as follows:
$$E_\delta \le \prod_i \exp\big( 2 C_2 \lambda^2 s_i^2 \big) = \exp\Big( 2 C_2 \lambda^2 \sum_i s_i^2 \Big) = \exp\big( 2 C_2 \lambda^2 \|A_\delta\|_{HS}^2 \big), \quad \text{provided } \lambda \le c_0 / \|A\|,$$
where the restriction on $\lambda$ guarantees $2 C_2 \lambda^2 s_i^2 \le 1/2$ for all $i$, since $s_i \le \|A_\delta\| \le \|A\|$. Since $\|A_\delta\|_{HS} \le \|A\|_{HS}$, we have proved the following:
$$E_\delta \le \exp\big( C_3 \lambda^2 \|A\|_{HS}^2 \big) \quad \text{for all } \lambda \le c_0 / \|A\|.$$
This is a uniform bound for all $\delta$. Now we take expectation with respect to $\delta$.
Recalling (1.3) and (1.4), we obtain the following estimate on the moment generating function of $S$:
$$\mathbb{E} \exp(\lambda S) \le \mathbb{E}_\delta\, E_\delta \le \exp\big( C_3 \lambda^2 \|A\|_{HS}^2 \big) \quad \text{for all } \lambda \le c_0 / \|A\|.$$

Step 5: conclusion. Putting this estimate into the exponential Chebyshev inequality (1.2), we obtain
$$p_2 \le \exp\big( -\lambda t/2 + C_3 \lambda^2 \|A\|_{HS}^2 \big), \qquad 0 \le \lambda \le c_0 / \|A\|.$$
Optimizing over $\lambda$, we conclude that
$$p_2 \le \exp\Big[ -c \min\Big( \frac{t^2}{\|A\|_{HS}^2}, \frac{t}{\|A\|} \Big) \Big].$$
Now we combine this with the estimate (1.1) for $p_1$ and obtain
$$p \le p_1 + p_2 \le 2 \exp\Big[ -c \min\Big( \frac{t^2}{\|A\|_{HS}^2}, \frac{t}{\|A\|} \Big) \Big] =: 2\, p(A, t).$$
Repeating the argument for $-A$ instead of $A$, we get $\mathbb{P}\{X^T A X - \mathbb{E} X^T A X < -t\} \le 2\, p(A, t)$. Combining the two events, we obtain $\mathbb{P}\{|X^T A X - \mathbb{E} X^T A X| > t\} \le 4\, p(A, t)$. Finally, one can reduce the factor 4 to 2 by adjusting the constant $c$ in $p(A, t)$. The proof is complete. □
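For the reader's convenience, here is the optimization over $\lambda$ in Step 5 spelled out (a routine computation, kept in terms of the generic constants $C_3$, $c_0$ from above). The unconstrained minimizer of $f(\lambda) = -\lambda t/2 + C_3 \lambda^2 \|A\|_{HS}^2$ is $\lambda^* = t / (4 C_3 \|A\|_{HS}^2)$, giving $f(\lambda^*) = -t^2 / (16 C_3 \|A\|_{HS}^2)$. If $\lambda^*$ exceeds the allowed range, i.e. $t > 4 C_3 c_0 \|A\|_{HS}^2 / \|A\|$, we take $\lambda = c_0/\|A\|$ instead and use that very inequality to absorb the quadratic term:
$$f(c_0/\|A\|) = -\frac{c_0 t}{2\|A\|} + C_3 \frac{c_0^2 \|A\|_{HS}^2}{\|A\|^2} \le -\frac{c_0 t}{2\|A\|} + \frac{c_0 t}{4\|A\|} = -\frac{c_0 t}{4\|A\|}.$$
Either way, $p_2 \le \exp\big[ -c \min\big( t^2/\|A\|_{HS}^2,\ t/\|A\| \big) \big]$ with $c = \min\big( 1/(16 C_3),\ c_0/4 \big)$.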

Sub-gaussian concentration
The Hanson-Wright inequality has a useful consequence: a concentration inequality for random vectors with independent sub-gaussian coordinates.
Theorem 2.1 (Sub-gaussian concentration). Let $A$ be a fixed $m \times n$ matrix. Consider a random vector $X = (X_1, \ldots, X_n)$ where the $X_i$ are independent random variables satisfying $\mathbb{E} X_i = 0$, $\mathbb{E} X_i^2 = 1$ and $\|X_i\|_{\psi_2} \le K$. Then for any $t \ge 0$, we have
$$\mathbb{P}\big\{ \big| \|AX\|_2 - \|A\|_{HS} \big| > t \big\} \le 2 \exp\Big( -\frac{c t^2}{K^4 \|A\|^2} \Big).$$

Remark 2.2. The conclusion of Theorem 2.1 can be alternatively formulated as follows: the random variable $Z = \|AX\|_2 - \|A\|_{HS}$ is sub-gaussian, and $\|Z\|_{\psi_2} \le C K^2 \|A\|$.

Remark 2.3. A few special cases of Theorem 2.1 can be easily deduced from classical concentration inequalities. For Gaussian random variables $X_i$, this result is a standard consequence of Gaussian concentration, see e.g. [13]. For bounded random variables $X_i$, it can be deduced in a similar way from Talagrand's concentration inequality for convex Lipschitz functions [15], see [16, Theorem 2.1.13]. For more general random variables, one can find versions of Theorem 2.1 with varying degrees of generality scattered in the literature (e.g. the appendix of [7]). However, we were unable to find Theorem 2.1 itself in the existing literature.
Proof. Let us apply the Hanson-Wright inequality, Theorem 1.1, to the matrix $Q = A^T A$. Since $\mathbb{E} X_i = 0$ and $\mathbb{E} X_i^2 = 1$, we have $\mathbb{E} X^T Q X = \mathbb{E} \|AX\|_2^2 = \|A\|_{HS}^2$. Also, note that since all $X_i$ have unit variance, we have $K \ge 2^{-1/2}$, so $K^2 \le 2 K^4$. Using $\|Q\| = \|A\|^2$ and $\|Q\|_{HS} \le \|A\| \|A\|_{HS}$, we obtain for any $u \ge 0$ that
$$\mathbb{P}\big\{ \big| \|AX\|_2^2 - \|A\|_{HS}^2 \big| > u \big\} \le 2 \exp\Big[ -\frac{c}{K^4} \min\Big( \frac{u^2}{\|A\|^2 \|A\|_{HS}^2}, \frac{u}{\|A\|^2} \Big) \Big].$$
Let $\varepsilon \ge 0$ be arbitrary, and let us use this estimate for $u = \varepsilon \|A\|_{HS}^2$. Since this choice gives $\min\big( u^2/(\|A\|^2 \|A\|_{HS}^2),\ u/\|A\|^2 \big) = \min(\varepsilon, \varepsilon^2)\, \|A\|_{HS}^2 / \|A\|^2$, we obtain
$$\mathbb{P}\big\{ \big| \|AX\|_2^2 - \|A\|_{HS}^2 \big| > \varepsilon \|A\|_{HS}^2 \big\} \le 2 \exp\Big( -\frac{c}{K^4} \min(\varepsilon, \varepsilon^2)\, \frac{\|A\|_{HS}^2}{\|A\|^2} \Big). \qquad (2.1)$$
Now let $\delta \ge 0$ be arbitrary; we shall use this inequality for $\varepsilon = \max(\delta, \delta^2)$. Observe that the (likely) event $\big| \|AX\|_2^2 - \|A\|_{HS}^2 \big| \le \varepsilon \|A\|_{HS}^2$ implies the event $\big| \|AX\|_2 - \|A\|_{HS} \big| \le \delta \|A\|_{HS}$. This can be seen by dividing both sides of the inequalities by $\|A\|_{HS}^2$ and $\|A\|_{HS}$ respectively, and using the numeric bound $\max(|z-1|, |z-1|^2) \le |z^2 - 1|$, which is valid for all $z \ge 0$. Using this observation along with the identity $\min(\varepsilon, \varepsilon^2) = \delta^2$, we deduce from (2.1) that
$$\mathbb{P}\big\{ \big| \|AX\|_2 - \|A\|_{HS} \big| > \delta \|A\|_{HS} \big\} \le 2 \exp\Big( -\frac{c}{K^4}\, \delta^2\, \frac{\|A\|_{HS}^2}{\|A\|^2} \Big).$$
Setting $\delta = t / \|A\|_{HS}$, we obtain the desired inequality. □
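A minimal simulation (ours, with illustrative parameter choices) makes the point of Theorem 2.1 visible: $\|AX\|_2$ sits within a window of width $O(\|A\|)$ around $\|A\|_{HS}$, even when $\|A\|_{HS}$ itself is much larger.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, trials = 50, 500, 20_000
A = rng.standard_normal((m, n)) / np.sqrt(n)     # a fixed m x n matrix
hs = np.linalg.norm(A, 'fro')                    # ||A||_HS
op = np.linalg.norm(A, 2)                        # ||A||

X = rng.choice([-1.0, 1.0], size=(trials, n))    # mean 0, variance 1, K = O(1)
dev = np.linalg.norm(X @ A.T, axis=1) - hs       # ||AX||_2 - ||A||_HS per trial
print(f"||A||_HS = {hs:.2f}, ||A|| = {op:.2f}")
print(f"std of ||AX||_2 - ||A||_HS: {dev.std():.3f}  (order ||A||, not ||A||_HS)")
for t in (op, 2 * op, 3 * op):
    print(f"P(|dev| > {t:.2f}) = {np.mean(np.abs(dev) > t):.4f}")
```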
2.1. Small ball probabilities. Using a standard symmetrization argument, we can deduce from Theorem 2.1 some bounds on small ball probabilities. The following result is due to Latała et al. [12, Theorem 2.5].
Corollary 2.4 (Small ball probabilities). Let $A$ be a fixed $m \times n$ matrix. Consider a random vector $X = (X_1, \ldots, X_n)$ where the $X_i$ are independent random variables satisfying $\mathbb{E} X_i = 0$, $\mathbb{E} X_i^2 = 1$ and $\|X_i\|_{\psi_2} \le K$. Then for every $y \in \mathbb{R}^m$ we have
$$\mathbb{P}\Big\{ \|AX - y\|_2 < \tfrac{1}{2} \|A\|_{HS} \Big\} \le 2 \exp\Big( -\frac{c \|A\|_{HS}^2}{K^4 \|A\|^2} \Big).$$

Remark 2.5. Informally, Corollary 2.4 states that the small ball probability decays exponentially in the stable rank $r(A) = \|A\|_{HS}^2 / \|A\|^2$.

Proof. Let $X'$ denote an independent copy of the random vector $X$. Denote $p = \mathbb{P}\big\{ \|AX - y\|_2 < \frac{1}{2} \|A\|_{HS} \big\}$. Using independence and the triangle inequality, we have
$$p^2 = \mathbb{P}\Big\{ \|AX - y\|_2 < \tfrac{1}{2} \|A\|_{HS} \text{ and } \|AX' - y\|_2 < \tfrac{1}{2} \|A\|_{HS} \Big\} \le \mathbb{P}\big\{ \|A(X - X')\|_2 < \|A\|_{HS} \big\}. \qquad (2.2)$$
The components of the random vector $X - X'$ have mean zero, variances bounded below by 2 and sub-gaussian norms bounded above by $2K$. Thus we can apply Theorem 2.1 for $\frac{1}{\sqrt{2}}(X - X')$ and conclude that
$$\mathbb{P}\Big\{ \big| \|A \tfrac{1}{\sqrt{2}}(X - X')\|_2 - \|A\|_{HS} \big| > t \Big\} \le 2 \exp\Big( -\frac{c t^2}{K^4 \|A\|^2} \Big).$$
Using this with $t = (1 - 1/\sqrt{2}) \|A\|_{HS}$, we obtain the desired bound for (2.2). □
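The exponential decay in the stable rank is easy to see numerically. The following quick check (ours, illustrative parameters) uses a matrix whose stable rank is a few dozen, for which the empirical small ball probability is already indistinguishable from zero at this sample size:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, trials = 100, 300, 100_000
A = rng.standard_normal((m, n)) / np.sqrt(n)
hs, op = np.linalg.norm(A, 'fro'), np.linalg.norm(A, 2)
y = rng.standard_normal(m)

X = rng.choice([-1.0, 1.0], size=(trials, n))
ball = np.linalg.norm(X @ A.T - y, axis=1) < 0.5 * hs   # the small ball event
print(f"stable rank r(A) = {hs**2 / op**2:.1f}")
print(f"empirical small ball probability = {ball.mean():.5f}")
```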
The following consequence of Corollary 2.4 is even more informative. It states that $\|AX - y\|_2 \gtrsim \|A\|_{HS} + \|y\|_2$ with high probability.
Corollary 2.6 (Small ball probabilities, improved). Let $A$ be a fixed $m \times n$ matrix. Consider a random vector $X = (X_1, \ldots, X_n)$ where the $X_i$ are independent random variables satisfying $\mathbb{E} X_i = 0$, $\mathbb{E} X_i^2 = 1$ and $\|X_i\|_{\psi_2} \le K$. Then for every $y \in \mathbb{R}^m$ we have
$$\mathbb{P}\Big\{ \|AX - y\|_2 < \tfrac{1}{6} \big( \|A\|_{HS} + \|y\|_2 \big) \Big\} \le 4 \exp\Big( -\frac{c \|A\|_{HS}^2}{K^4 \|A\|^2} \Big).$$

Proof. Denote $h := \|A\|_{HS}$. Combining the conclusions of Theorem 2.1 and Corollary 2.4, we obtain that with probability at least $1 - 4 \exp(-c h^2 / K^4 \|A\|^2)$, the following two estimates hold simultaneously:
$$\|AX\|_2 \le \tfrac{3}{2} h \quad \text{and} \quad \|AX - y\|_2 \ge \tfrac{1}{2} h. \qquad (2.3)$$
Suppose this event occurs. Then by the triangle inequality, $\|AX - y\|_2 \ge \|y\|_2 - \|AX\|_2 \ge \|y\|_2 - \frac{3}{2} h$. Combining this with the second inequality in (2.3), we obtain that
$$\|AX - y\|_2 \ge \max\Big( \tfrac{1}{2} h,\ \|y\|_2 - \tfrac{3}{2} h \Big) \ge \tfrac{1}{6} \big( h + \|y\|_2 \big),$$
where the last bound is verified by considering the cases $\|y\|_2 \le 2h$ and $\|y\|_2 > 2h$ separately. The proof is complete. □

Two applications
Concentration results like Theorem 2.1 have many useful consequences. We include two applications in this article; the reader will certainly find more.
The first application is concentration of the distance from a random vector to a fixed subspace. For random vectors with bounded components, a similar result can be found in [16, Corollary 2.1.19], where it was deduced from Talagrand's concentration inequality.
Corollary 3.1 (Distance between a random vector and a subspace). Let $E$ be a subspace of $\mathbb{R}^n$ of dimension $d$. Consider a random vector $X = (X_1, \ldots, X_n)$ where the $X_i$ are independent random variables satisfying $\mathbb{E} X_i = 0$, $\mathbb{E} X_i^2 = 1$ and $\|X_i\|_{\psi_2} \le K$. Then for any $t \ge 0$, we have
$$\mathbb{P}\Big\{ \big| d(X, E) - \sqrt{n - d} \big| > t \Big\} \le 2 \exp\Big( -\frac{c t^2}{K^4} \Big).$$

Proof. The conclusion follows from Theorem 2.1 applied with $A = P_{E^\perp}$, the orthogonal projection onto $E^\perp$. Indeed, $d(X, E) = \|P_{E^\perp} X\|_2$, $\|P_{E^\perp}\|_{HS} = \sqrt{\dim(E^\perp)} = \sqrt{n - d}$, and $\|P_{E^\perp}\| = 1$. □
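As a quick illustration (ours, not part of the note), the following sketch takes $E$ to be a coordinate subspace, in which case the projection $P_{E^\perp}$ simply drops the first $d$ coordinates; the concentration window of $d(X, E)$ around $\sqrt{n-d}$ is $O(1)$, independent of $n$ and $d$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, trials = 400, 100, 10_000
# E = span{e_1, ..., e_d}, so P_{E^perp} X drops the first d coordinates.
X = rng.standard_normal((trials, n))       # Gaussian coordinates (sub-gaussian, K = O(1))
dist = np.linalg.norm(X[:, d:], axis=1)    # d(X, E) = ||P_{E^perp} X||_2
print(f"sqrt(n - d) = {np.sqrt(n - d):.2f}")
print(f"mean d(X,E) = {dist.mean():.2f}, std = {dist.std():.3f}")  # std is O(1)
```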