Dimension-free Bounds for Sums of Independent Matrices and Simple Tensors via the Variational Principle

We consider the deviation inequalities for the sums of independent $d$ by $d$ random matrices, as well as rank one random tensors. Our focus is on the non-isotropic case and the bounds that do not depend explicitly on the dimension $d$, but rather on the effective rank. In an elementary and unified manner, we show the following results: 1) A deviation bound for the sums of independent positive-semi-definite matrices. This result complements the dimension-free bound of Koltchinskii and Lounici [Bernoulli, 2017] on the sample covariance matrix in the sub-Gaussian case. 2) A new bound for truncated covariance matrices that is used to prove a dimension-free version of the bound of Adamczak, Litvak, Pajor and Tomczak-Jaegermann [Journal Of Amer. Math. Soc., 2010] on the sample covariance matrix in the log-concave case. 3) Dimension-free bounds for the operator norm of the sums of random tensors of rank one formed either by sub-Gaussian or by log-concave random vectors. This complements the result of Gu\'{e}don and Rudelson [Adv. in Math., 2007]. 4) A non-isotropic version of the result of Alesker [Geom. Asp. of Funct. Anal., 1995] on the deviation of the norm of sub-exponential random vectors. 5) A dimension-free lower tail bound for sums of positive semi-definite matrices with heavy-tailed entries, sharpening the bound of Oliveira [Prob. Th. and Rel. Fields, 2016]. Our approach is based on the duality formula between entropy and moment generating functions. In contrast to the known proofs of dimension-free bounds, we avoid Talagrand's majorizing measure theorem, as well as generic chaining bounds for empirical processes. Some of our tools were pioneered by O. Catoni and co-authors in the context of robust statistical estimation.


Introduction and main results
We study non-asymptotic bounds for sums of independent random matrices, as well as the closely related question of estimating the largest and smallest singular values of random matrices with independent rows. Assume that we are given a random $n$ by $d$ matrix $A$ such that all of its rows $A_1^T, \ldots, A_n^T$ are isotropic independent sub-Gaussian vectors (see the formal definitions below). We are interested in providing upper and lower bounds on its singular values $s_1(A), s_2(A), \ldots, s_d(A)$.
The question of upper bounding the largest singular value and lower bounding the smallest singular value is known to be essentially equivalent (see [50, Chapter 4]) to providing an upper bound on the operator norm of the difference between the sample covariance matrix formed by the rows $A_i$ and the identity matrix. That is, one is interested in providing a high probability, non-asymptotic bound on
$$\left\|\frac{1}{n}\sum_{i=1}^{n} A_iA_i^T - I_d\right\|. \qquad (1)$$
Here and in what follows, $\|\cdot\|$ stands for the operator norm of a matrix and the Euclidean norm of a vector, respectively. The latter question is also central in mathematical statistics, where one is interested in estimating the underlying covariance structure using the sample covariance matrix. One of the usual assumptions made when analyzing (1) is that the rows $A_i^T$ are isotropic and zero mean; that is, $\mathbb{E}A_i = 0$ and $\mathbb{E}A_iA_i^T = I_d$, where in what follows $I_d$ stands for the $d$ by $d$ identity matrix. The non-isotropic case can usually be reduced to the isotropic one by a linear transformation. However, the problem is that in this case the bound on (1) will depend on the dimension $d$, whereas in many cases one expects that a dimension-free deviation bound is possible. The search for dimension-free bounds for sums of independent random matrices is motivated mainly by applications in statistics and data science, where it is usually assumed that the data lives on a low-dimensional manifold. Before providing our first result, we recall that for a random variable $Y$ and $\alpha \in [1,2]$, its $\psi_\alpha$ norm is defined as follows:
$$\|Y\|_{\psi_\alpha} = \inf\left\{t > 0 : \mathbb{E}\exp\left(|Y|^\alpha/t^\alpha\right) \le 2\right\}.$$
Using the standard convention, we say that $\|\cdot\|_{\psi_2}$ is the sub-Gaussian norm and $\|\cdot\|_{\psi_1}$ is the sub-exponential norm. We say that $X$ is a sub-Gaussian random vector in $\mathbb{R}^d$ if $\sup_{u \in S^{d-1}} \|\langle u, X\rangle\|_{\psi_2}$ is finite. A zero mean random vector $X$ is isotropic if $\mathbb{E}XX^T = I_d$. Here and in what follows, $S^{d-1}$ denotes the corresponding unit sphere and $\langle\cdot, \cdot\rangle$ is the standard inner product in $\mathbb{R}^d$. One of the central quantities appearing in this paper is the effective rank.
Definition 1. For a nonzero positive semi-definite matrix $\Sigma$, the effective rank is defined as $r(\Sigma) = \operatorname{Tr}(\Sigma)/\|\Sigma\|$.
The effective rank is always smaller than the matrix rank of $\Sigma$ and, in particular, smaller than its dimension. We also have $r(I_d) = d$. Our first result is a general upper bound for sums of independent positive semi-definite $d$ by $d$ matrices satisfying the sub-exponential norm equivalence assumption. This generalizes the question of upper bounding (1), since we assume neither that the matrix is of rank one nor that the covariance matrix is the identity.
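To make the quantity concrete, the effective rank $r(\Sigma) = \operatorname{Tr}(\Sigma)/\|\Sigma\|$ is straightforward to compute numerically. The following Python sketch (illustrative only, not part of the paper) confirms that $r(I_d) = d$ and that a fast-decaying spectrum yields a small effective rank regardless of the ambient dimension:

```python
import numpy as np

def effective_rank(sigma):
    """r(Sigma) = Tr(Sigma) / ||Sigma||, with ||.|| the operator norm (top eigenvalue)."""
    eigs = np.linalg.eigvalsh(sigma)  # sigma is assumed symmetric positive semi-definite
    return eigs.sum() / eigs.max()

print(effective_rank(np.eye(5)))                      # r(I_d) = d, here 5.0
decaying = np.diag(1.0 / (1 + np.arange(5)) ** 2)     # eigenvalues 1, 1/4, 1/9, ...
print(effective_rank(decaying))                       # well below the matrix rank 5
```

Here the second matrix has full rank 5, but its effective rank is below 2 because the top eigenvalue dominates the trace.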
Theorem 1 (A general rank version of Theorem 9 in [25]). Assume that $M_1, \ldots, M_n$ are independent copies of a $d$ by $d$ positive semi-definite symmetric random matrix $M$ with mean $\mathbb{E}M = \Sigma$. Let $M$ satisfy, for some $\kappa \ge 1$,
$$\|x^T M x\|_{\psi_1} \le \kappa^2\, x^T\Sigma x \quad \text{for all } x \in \mathbb{R}^d. \qquad (2)$$
Then, for any $t > 0$, with probability at least $1 - \exp(-t)$, the deviation bound (3) holds whenever $n \ge 4r(\Sigma) + t$.
Remark 1. In the theorem above, we presented explicit constants. We place them to emphasize that, in contrast with existing dimension-free bounds, such constants are easy to obtain with the approach we follow. At the same time, little effort was made to optimize their values.
Remark 2. In Section 3.2 we show that the same dimension-free bound holds under a weaker assumption (allowing heavy-tailed distributions), namely $\mathbb{E}(x^T M x)^2 \le \kappa^2 (x^T\Sigma x)^2$, but only for the lower tails of (3). This complements several known dimension-dependent lower tail bounds.
The norm equivalence assumption (2) is quite standard in the literature. As a matter of fact, Theorem 1 recovers one of the central results in high-dimensional statistics, as the following example shows.
Example 1 (The sample covariance matrix in the sub-Gaussian case [25]). The most natural application of Theorem 1 is when $M = XX^T$ and $X$ is a zero mean sub-Gaussian random vector with covariance matrix $\Sigma$. That is, there is $\kappa \ge 1$ such that for any $y \in \mathbb{R}^d$, it holds that
$$\|\langle y, X\rangle\|_{\psi_2} \le \kappa\sqrt{y^T\Sigma y}. \qquad (4)$$
Using this line, for any $x \in \mathbb{R}^d$ we have $\|x^T XX^T x\|_{\psi_1} = \|\langle x, X\rangle\|_{\psi_2}^2 \le \kappa^2\, x^T\Sigma x$, which is sufficient for Theorem 1. This gives that, with probability at least $1 - \exp(-t)$, provided that $n \ge 4r(\Sigma) + t$, the bound (5) holds, where we used that for a symmetric $d$ by $d$ matrix $A$ it holds that $\|A\| = \sup_{y \in S^{d-1}} |y^T A y|$.
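The dimension-free character of this bound is easy to observe empirically: when the spectrum of $\Sigma$ decays quickly, the relative error of the sample covariance matrix is governed by $r(\Sigma)$ rather than by $d$. A small Monte Carlo sketch (Gaussian data; all numerical choices are ours and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 200, 2000
eigs = 1.0 / (1 + np.arange(d)) ** 2      # fast spectral decay: r(Sigma) stays bounded
Sigma = np.diag(eigs)
r_eff = eigs.sum() / eigs.max()           # effective rank, about pi^2/6, independent of d

X = rng.standard_normal((n, d)) * np.sqrt(eigs)   # rows distributed as N(0, Sigma)
Sigma_hat = X.T @ X / n                   # sample covariance (the mean is known to be zero)
err = np.linalg.norm(Sigma_hat - Sigma, 2) / np.linalg.norm(Sigma, 2)

# The relative error tracks sqrt(r(Sigma)/n) and sits far below sqrt(d/n)
print(err, np.sqrt(r_eff / n), np.sqrt(d / n))
```

In this run $d/n$ is not small, yet the relative operator-norm error is small, consistent with a bound driven by the effective rank.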
Recently, much attention has been paid to dimension-free bounds for the sample covariance matrix formed by sub-Gaussian random vectors. Although the dimension-dependent version of (5) (that is, with $r(\Sigma)$ replaced by $d$) follows from a simple discretization argument, the known approaches to obtaining dimension-free bounds are quite technical and deserve a separate discussion:
• The bound of Theorem 1 is a general rank version of the result of Koltchinskii and Lounici [25, Theorem 9]. Their proof is based on deep probabilistic results: the generic chaining tail bounds for quadratic processes and Talagrand's majorizing measures theorem. In particular, this makes it difficult to provide any explicit constants using their approach. Some generalizations of the result on sample covariance matrices to positive semi-definite matrices are also considered in [53].
• Using the matrix deviation inequality of Liaw, Mehrabian, Plan, and Vershynin [30], Vershynin [50] gives an alternative proof of the bound of Koltchinskii and Lounici, but with the term $\kappa^4$ instead of $\kappa^2$ in (5). The dependence on $\kappa$ in [30] has recently been improved in [21]. However, this improved (and optimal for some problems) result only leads to a $\kappa^2\log\kappa$ term in (5).
• Van Handel [48] gives an in-expectation version of (5) in the special case where $X$ is a Gaussian random vector. Despite not using Talagrand's majorizing measures theorem, this analysis is based on a Gaussian comparison theorem and does not cover the sub-Gaussian case. Note that in the Gaussian case, the in-expectation bound for the sample covariance matrix can be converted into an optimal high probability bound using one of the special concentration inequalities provided in [25, 1, 23].
Our approach, based on the variational inequality and described in detail in Section 2, bypasses several technical steps appearing in the literature. Speaking informally, we use a smoothed version of the $\varepsilon$-net argument that allows us to properly capture the complexity of elliptic indexing sets without resorting to generic chaining. This extension will be key to our multilinear results, where the above-mentioned tools are hard to apply.
Note that even though Example 1 is sharp in the rank one case (see the lower bound in [25]), the result of Theorem 1, due to its generality, can be suboptimal in other cases. For example, let $A$ be a diagonal random matrix whose diagonal elements are all equal to a single copy of the absolute value of a standard Gaussian random variable. In this case, Theorem 1 scales as $\sqrt{\frac{d+t}{n}}$, whereas the correct order is $\sqrt{\frac{t}{n}}$. We also remark that, at least in the rank one case, the bound of Theorem 1 is out of the scope of the so-called matrix concentration inequalities, since they provide additional logarithmic factors and suboptimal tails (see some related bounds in [47, 50, 23, 31]). We additionally refer to the recent work [7], where the in-expectation analog of (5) is derived modulo some additional lower-order additive terms.
Motivated by the recent interest in random tensors [51, 18, 43, 9, 15], we show how our arguments can be extended to provide a multilinear extension of Theorem 1. That is, we consider sums of independent random tensors of order higher than one and want to prove a bound similar to (5). Let us introduce this setup. Consider the simple (rank one) random symmetric tensor $X^{\otimes s}$, where $X$ is a zero mean sub-Gaussian vector in $\mathbb{R}^d$ and $X_1^{\otimes s}, \ldots, X_n^{\otimes s}$ are its independent copies. We are interested in studying
$$\left\|\frac{1}{n}\sum_{i=1}^{n} X_i^{\otimes s} - \mathbb{E}X^{\otimes s}\right\| = \sup_{v \in S^{d-1}}\left|\frac{1}{n}\sum_{i=1}^{n}\langle X_i, v\rangle^s - \mathbb{E}\langle X, v\rangle^s\right|, \qquad (6)$$
where $\|\cdot\|$ stands for the operator norm of the symmetric $s$-linear form. Here we used that for symmetric forms the expression is maximized by a single vector $v \in S^{d-1}$ (see e.g., [39, Section 2.3]).
The question of upper bounding (6) (with $\langle X_i, v\rangle^s$ usually replaced by the absolute value $|\langle X_i, v\rangle|^s$ in the right-hand side, and with non-integer values of $s$ allowed) is well studied [16, 19, 34, 3, 49, 35]. The results are usually of the following form: assuming that $X_1, \ldots, X_n$ are i.i.d. copies of an isotropic vector satisfying certain regularity assumptions, one is interested in finding the smallest sample size $n$ such that, with high probability,
$$(1-\varepsilon)\,\mathbb{E}|\langle X, v\rangle|^s \le \frac{1}{n}\sum_{i=1}^{n}|\langle X_i, v\rangle|^s \le (1+\varepsilon)\,\mathbb{E}|\langle X, v\rangle|^s \quad \text{for all } v \in S^{d-1}. \qquad (7)$$
The general form of the assumption (see [19, 49] and, in particular, [3, Theorem 4.2]) required to achieve this precision for some regular families of distributions is that $n$ is at least $C_s d^{s/2}$, where $C_s$ depends only on $s$. Although the condition $n \ge C_s d^{s/2}$ is known to be optimal when $\varepsilon$ is a constant [49], the dependence on $\varepsilon$ is either suboptimal or not explicit in the existing results. In fact, a recent result of Mendelson [35] suggests that using a specific robust estimation procedure, it is possible to approximate the moments of the marginals for any $s \ge 1$ with $\varepsilon$ scaling as $\sqrt{\frac{d}{n}\log\frac{n}{d}}$. At the same time, the inequality (7) becomes vacuous in this regime whenever $s > 2$. Before we proceed, recall the following definition.
Definition 2. The measure $\nu$ in $\mathbb{R}^d$ is log-concave if for any measurable subsets $A, B \subseteq \mathbb{R}^d$ and any $t \in [0,1]$, $\nu(tA + (1-t)B) \ge \nu(A)^t\nu(B)^{1-t}$.
Our next result shows that, provided the sample size $n$ is large enough, one can approximate the $s$-th integer moment of the marginals by their empirical counterparts with $\varepsilon$ scaling as $\|\Sigma\|^{s/2}\sqrt{\frac{r(\Sigma)}{n}}$. This is the best possible approximation rate when $s = 2$ and $X$ is a multivariate Gaussian random vector. Moreover, because of (7), this approximation rate was not previously achieved even in the isotropic case.
Theorem 2. Let $s \ge 2$ be an integer. Assume that $X_1, \ldots, X_n$ are independent copies of a zero mean vector $X$ that is either sub-Gaussian (4) or log-concave. There exist $c_s > 0$ depending only on $s$ and an absolute constant $c > 0$ such that the following holds. Assume that the sample size $n$ is large enough. Then, with probability at least $1 - cn\exp(-\sqrt{r(\Sigma)})$, the approximation bound holds with a factor $C = C(s, \kappa)$ in the sub-Gaussian case and $C = C(s)$ in the log-concave case. Moreover, in the sub-Gaussian case, if $n \ge c_s(r(\Sigma))^{s+1}$, then the same bound holds with probability at least $1 - cn\exp(-r(\Sigma))$.
To simplify our proofs, we focus only on the tensor case; in particular, we consider integer values of $s$. In Theorem 2 we require that either $cn\exp(-r(\Sigma)) < 1$ or $cn\exp(-\sqrt{r(\Sigma)}) < 1$. These assumptions can be dropped by slightly inflating our upper bound (see also [3, Remark 4.3]). It is likely that in the sub-Gaussian case one can extend our arguments, namely the decoupling-chaining argument discussed below, so that the assertion holds with probability $1 - cn\exp(-r(\Sigma))$ whenever $n \ge c_s(r(\Sigma))^{s-1}$. Indeed, we know by (5) that this is the case at least when $s = 2$. We preferred a shorter proof over a more accurate estimate of the tail.
In the log-concave case with $s = 2$, Theorem 2 complements the renowned result of Adamczak, Litvak, Pajor and Tomczak-Jaegermann [3]. The main advantage of our result is the explicit dependence on the effective rank, similar to the sample covariance bound of Koltchinskii and Lounici in the Gaussian case [25]. Our next result sharpens the tail estimate in this specific case and coincides with the best known bound in the isotropic case.
Theorem 3. Assume that $X_1, \ldots, X_n$ are independent copies of a zero mean, log-concave random vector $X$ with covariance $\Sigma$. There are absolute constants $c_1, c_2, c_3 > 0$ such that the following holds. We have, with probability at least $1 - c_1\exp(-c_2(r(\Sigma)n)^{1/4})$, a deviation bound of order $\|\Sigma\|\sqrt{\frac{r(\Sigma)}{n}}$, whenever $n \ge c_3 r(\Sigma)$.
In both proofs we combine the variational inequality approach with the decoupling-chaining argument developed in [3, 45]. We remark that the adaptation of the latter argument to the non-isotropic case is quite straightforward. Our analysis improves upon the analysis of the spread part [45, Section 9.4], which in the isotropic case combines the $\varepsilon$-net argument and the Bernstein inequality. Finally, observe that since the Gaussian distribution is log-concave, the rate of convergence $\|\Sigma\|\sqrt{\frac{r(\Sigma)}{n}}$ is the best possible due to the lower bound in [25]. There is a sub-exponential version of Theorem 3, which follows from our proof with minimal changes. For the reader's convenience we also present an explicit tail bound.
Proposition 1. Assume that $X_1, \ldots, X_n$ are independent copies of a zero mean random vector with covariance $\Sigma$ such that for some $\kappa \ge 1$ and all $y \in \mathbb{R}^d$, it holds that $\|\langle X, y\rangle\|_{\psi_1} \le \kappa\sqrt{y^T\Sigma y}$, and $\max_{1\le i\le n}\|X_i\|$ is bounded almost surely.
There are absolute constants $c_1, c_2, c_3 > 0$ such that the following holds. For any $t \ge 0$ we have, with probability at least $1 - c_1\exp(-t)$, the corresponding deviation bound whenever $n \ge c_3(r(\Sigma) + t)$.
In Section 3 we provide two additional results: a bound on the deviation of the norm of a sub-exponential random vector and a lower tail version of Theorem 1. Finally, as a part of the proof of Theorem 2, we provide a simple proof of the bound by Hsu, Kakade and Zhang [20] on the deviation of the norm of a sub-Gaussian random vector.
2 An approach based on the variational equality
Our approach is based on the following duality relation (see [8, Corollary 4.14]): for a probability space $(\Theta, \mu)$ and any measurable function $f$ such that $\mathbb{E}_\mu\exp(f) < \infty$,
$$\log \mathbb{E}_\mu \exp(f) = \sup_{\rho}\left(\mathbb{E}_\rho f - \mathrm{KL}(\rho, \mu)\right), \qquad (9)$$
where the supremum is taken with respect to all measures $\rho$ absolutely continuous with respect to $\mu$, and $\mathrm{KL}(\rho, \mu) = \int\log\frac{d\rho}{d\mu}\,d\rho$ denotes the Kullback-Leibler divergence between $\rho$ and $\mu$. The equality (9) is used in the proof of the additivity of entropy [29, Proposition 5.6] and in the transportation method for proving concentration inequalities [8, Chapter 8]. A useful corollary of the variational equality is the following lemma (see e.g., [12, Proposition 2.1] and discussions therein).
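The duality (9) is elementary to verify on a finite probability space, where the supremum is attained by the Gibbs measure $d\rho^* \propto e^{f}\,d\mu$. The following numerical sketch (the discrete setting and all numerical choices are our simplification, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 6
mu = rng.random(k); mu /= mu.sum()        # base measure mu on k atoms
f = rng.standard_normal(k)                # a function on the finite space

lhs = np.log(np.sum(mu * np.exp(f)))      # log E_mu exp(f)

def dual_value(rho):
    """E_rho f - KL(rho, mu) for a probability vector rho absolutely continuous w.r.t. mu."""
    return np.sum(rho * f) - np.sum(rho * np.log(rho / mu))

gibbs = mu * np.exp(f)
gibbs /= gibbs.sum()                      # the optimizer: dGibbs/dmu proportional to exp(f)

rho = rng.random(k); rho /= rho.sum()     # an arbitrary competitor measure
print(lhs, dual_value(gibbs), dual_value(rho))
```

The value at the Gibbs measure matches the left-hand side exactly, while any other posterior gives a strictly smaller value, which is the content of (9).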
Lemma 1. Assume that $X_1, \ldots, X_n$ are i.i.d. random variables defined on some measurable space $\mathcal{X}$. Assume also that $\Theta$ (called the parameter space) is a set such that $\mathbb{E}_X\exp(f(X,\theta)) < \infty$ for all $\theta \in \Theta$. Let $\mu$ be a distribution (called the prior) on $\Theta$ and let $\rho$ be any distribution (called the posterior) on $\Theta$ such that $\rho \ll \mu$. Then, with probability at least $1 - \exp(-t)$, simultaneously for all such $\rho$, we have
$$\mathbb{E}_\rho\,\frac{1}{n}\sum_{i=1}^{n} f(X_i, \theta) \le \mathbb{E}_\rho\log\mathbb{E}_X\exp(f(X,\theta)) + \frac{\mathrm{KL}(\rho, \mu) + t}{n},$$
where $\theta$ is distributed according to $\rho$.
Proof. We sketch the proof for the sake of completeness. Let $\mathbb{E}$ denote the expectation with respect to the i.i.d. sample $X_1, \ldots, X_n$. Using successively (9), Fubini's theorem and the independence of $X_1, \ldots, X_n$, one shows that a suitable exponential moment is equal to one. By Markov's inequality, for any random variable $Y$ the identity $\mathbb{E}\exp(Y) = 1$ implies that $Y < t$ with probability at least $1 - \exp(-t)$. The claim follows.
Remark 3. In Lemma 1 we assumed that $\mathbb{E}_X\exp(f(X,\theta)) < \infty$ for all $\theta \in \Theta$. However, this does not imply that $f(X,\theta)$ is integrable with respect to $\rho$. If it is not the case, one can conventionally take the cases where $\mathbb{E}_\rho f(X,\theta)$ is infinite into account, so that the inequality of Lemma 1 still holds. For more details see [11, Appendix A].
Our analysis is inspired by the application of (9) and Lemma 1 in the works of Catoni and co-authors [6, 5, 10, 11, 12] on robust mean and covariance estimation, as well as by the work of Oliveira [40] on the lower tails of sample covariance matrices under minimal assumptions. This approach is usually called the PAC-Bayesian method in the literature. In robust mean estimation, one makes minimal distributional assumptions (for example, by considering heavy-tailed distributions), aiming to estimate the mean of a random variable/vector/matrix using estimators that necessarily differ from the sample mean (see [10, 37, 33, 12, 36, 14, 41, 22, 35] and the recent survey [32]). Our aim is somewhat different: we work with sums of independent random matrices and multilinear forms. It is important to note that, except for the recent works of Catoni and Giulini [17, 12], statistical guarantees based on (9) are dimension-dependent. A detailed technical comparison with these papers is deferred to Section 4.1.

Motivating examples: matrices with isotropic sub-Gaussian rows and the Gaussian complexity of ellipsoids
To motivate (and illustrate) the application of the variational equality (9) in the context of high-dimensional probability, we first show how Lemma 1 can be used to recover the standard bound on the largest and smallest singular values of the $n$ by $d$ random matrix $A$ having independent, mean zero, isotropic ($\mathbb{E}A_iA_i^T = I_d$) sub-Gaussian rows. In this case, (4) can be rewritten as $\sup_{u \in S^{d-1}}\|\langle u, A_i\rangle\|_{\psi_2} \le \kappa$. In view of [50, Lemma 4.1.5], it is enough to show the following statement.
Proposition 2 ([50, Theorem 4.6.1]). Let $A$ be an $n$ by $d$ random matrix whose rows $A_i^T$ are independent, mean zero, sub-Gaussian isotropic random vectors. We have, for any $t \ge 0$, with probability at least $1 - \exp(-t)$, a deviation bound for (1) of order $\kappa^2\left(\sqrt{\frac{d+t}{n}} + \frac{d+t}{n}\right)$.
The standard way of proving Proposition 2 uses an $\varepsilon$-net argument combined with the Bernstein inequality in terms of the $\|\cdot\|_{\psi_1}$ norm and the union bound. We demonstrate that if the prior $\mu$ and the posterior $\rho$ are correctly chosen, then Lemma 1 recovers the same bound without directly exploiting a discretization argument. When $n \ge d + t$, the quadratic term is naturally dominated by $\sqrt{\frac{d+t}{n}}$. In some sense, we only capture the sub-Gaussian regime in the deviation bound. This regime is arguably the most interesting when considering statistical estimation problems.
Our analysis requires the following standard result. Since we need a version with an explicit constant, we reproduce these lines for the sake of completeness.
Lemma 2. Let $Y$ be a zero mean random variable. Then for any $\lambda$ such that $|\lambda| \le 1/(2\|Y\|_{\psi_1})$, it holds that $\mathbb{E}\exp(\lambda Y) \le \exp(4\lambda^2\|Y\|_{\psi_1}^2)$.
Proof. First, by Markov's inequality, for any $t \ge 0$ it holds that $\Pr(|Y| \ge t) \le 2\exp(-t/\|Y\|_{\psi_1})$. In the following lines, we assume without loss of generality that $\|Y\|_{\psi_1} = 1$. We have for $p \ge 1$,
$$\mathbb{E}|Y|^p \le 2\,p!\,. \qquad (10)$$
Finally, when $|\lambda| \le 1/2$, by Taylor's expansion and since $\mathbb{E}Y = 0$, we have
$$\mathbb{E}\exp(\lambda Y) \le 1 + \sum_{p \ge 2}\frac{|\lambda|^p\,\mathbb{E}|Y|^p}{p!} \le 1 + 2\lambda^2\sum_{p \ge 2}|\lambda|^{p-2} \le 1 + 4\lambda^2 \le \exp(4\lambda^2).$$
The claim follows.
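The conclusion of Lemma 2 can be sanity-checked on a concrete sub-exponential variable. For $Y = E - 1$ with $E$ standard exponential, the moment generating function is $\mathbb{E}\exp(\lambda Y) = e^{-\lambda}/(1-\lambda)$ for $\lambda < 1$, and a quadratic (sub-Gaussian-type) bound holds on $|\lambda| \le 1/2$; the constant 2 below is a convenient choice for this particular example, not the constant of the lemma:

```python
import numpy as np

# Y = E - 1 with E ~ Exp(1): zero mean, ||Y||_{psi_1} of order one.
# Exact MGF: E exp(lam * Y) = exp(-lam) / (1 - lam), valid for lam < 1.
lams = np.linspace(-0.5, 0.5, 201)
mgf = np.exp(-lams) / (1 - lams)
bound = np.exp(2 * lams ** 2)             # quadratic-in-lambda bound on the restricted range
print(np.max(mgf / bound))                # the ratio never exceeds 1 on |lam| <= 1/2
```

Outside the restricted range the quadratic bound fails (the MGF blows up as $\lambda \to 1$), which is exactly why the lemma requires $|\lambda| \le 1/(2\|Y\|_{\psi_1})$.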
Proof (of Proposition 2). Fix $\varepsilon > 0$. Our aim is to choose $\mu$ and $\rho$. Let $B_d$ be the unit ball in $\mathbb{R}^d$. Choose $\mu$ to be a product of two uniform measures, each defined on $(1+\varepsilon)B_d$, and for $u, v \in S^{d-1}$ let $\rho_{u,v}$ be a product of two uniform measures on the balls $u + \varepsilon B_d$ and $v + \varepsilon B_d$. Observe that both balls belong to $(1+\varepsilon)B_d$. By the additivity of the KL divergence for product measures and the formula for the volume of the $d$-dimensional ball, we have
$$\mathrm{KL}(\rho_{u,v}, \mu) = 2\log\frac{\mathrm{vol}((1+\varepsilon)B_d)}{\mathrm{vol}(\varepsilon B_d)} = 2d\log\frac{1+\varepsilon}{\varepsilon},$$
where $\mathrm{vol}(S)$ denotes the volume of the set $S$. Fix $\lambda \in \mathbb{R}$ and consider the random variable $\lambda\theta^TA_iA_i^T\nu$, where $(\theta, \nu)$ is distributed according to $\rho_{u,v}$. We want to plug this random variable into Lemma 1. Observe that, conditionally on $(\theta, \nu)$, we have $\|\theta^TA_iA_i^T\nu\|_{\psi_1} \le (1+\varepsilon)^2\kappa^2$, where the last inequality follows from the fact that $\theta, \nu \in (1+\varepsilon)B_d$ almost surely. Conditionally on $(\theta, \nu)$, combining the triangle and Jensen's inequalities, we center the variable. Further, since $\mathbb{E}A_iA_i^T = I_d$, we have by Lemma 2, conditionally on $(\theta, \nu)$, the bound (11). We choose $\varepsilon = \frac{1}{\sqrt{e}-1}$ to guarantee that $2d\log((1+\varepsilon)/\varepsilon) = d$. Then, taking $\lambda = \frac{1}{4(1+\varepsilon)^2\kappa^2}\sqrt{\frac{d+t}{n}}$, we require $n \ge d+t$. Simplifying (11) for this choice of parameters, we prove the claim.
Another motivating fact is that (9) correctly reflects the Gaussian complexity of ellipsoids. It is well known that for ellipsoids the Dudley integral argument does not give an optimal bound, while the generic chaining does (see [45, Chapter 2.5]). Although one can instead directly use the Cauchy-Schwarz inequality, it is easy to show that the variational equality (9) captures the same bound.
Example 2 (The Gaussian complexity of ellipsoids via the variational equality). Let $Z$ be a standard normal random vector and let $\Sigma$ be a positive semi-definite $d$ by $d$ matrix. It holds that
$$\mathbb{E}\sup_{v \in \Sigma^{1/2}S^{d-1}}\langle Z, v\rangle \le \sqrt{\operatorname{Tr}(\Sigma)}.$$
Proof. Set $\Theta = \mathbb{R}^d$ and let $\beta > 0$. Let the prior distribution $\mu$ be a multivariate Gaussian distribution with mean zero and covariance $\beta^{-1}\Sigma$. For $v \in \Sigma^{1/2}S^{d-1}$, let the distribution $\rho_v$ be a multivariate Gaussian distribution with mean $v$ and covariance $\beta^{-1}\Sigma$. By the standard formula, we have $\mathrm{KL}(\rho_v, \mu) = \frac{\beta}{2}v^T\Sigma^{-1}v = \frac{\beta}{2}$. Let $\theta$ be distributed according to $\rho_v$. By Jensen's inequality, for any $\lambda > 0$, we bound the supremum through the smoothed functional. By the line of the proof of Lemma 1, we control the corresponding exponential moment. Since $Z$ is a standard normal random vector, the moment generating function is explicit. Combining the previous inequalities and simplifying, we obtain the bound. The claim follows.
Observe that the proof explicitly uses a bound on the expected squared norm of a multivariate normal vector (though not for the norm of $Z$ itself), which is closer to a more "algebraic" approach based on the Cauchy-Schwarz inequality, whereas the generic chaining is a "geometric" approach; we refer to [45, Chapter 2.5] for a detailed discussion of the Gaussian complexity of ellipsoids.
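The bound of Example 2 is easy to confirm by simulation: the supremum over the ellipsoid $\Sigma^{1/2}S^{d-1}$ equals $\|\Sigma^{1/2}Z\|$, whose mean is at most $\sqrt{\operatorname{Tr}(\Sigma)}$ by Jensen's inequality. A Monte Carlo sketch (the choice of $\Sigma$ and all numerical parameters are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_mc = 50, 20000
eigs = 1.0 / (1 + np.arange(d))           # eigenvalues of Sigma (diagonal for simplicity)
Z = rng.standard_normal((n_mc, d))        # standard normal samples

# sup over v in Sigma^{1/2} S^{d-1} of <Z, v> equals ||Sigma^{1/2} Z||
sup_vals = np.linalg.norm(Z * np.sqrt(eigs), axis=1)
print(sup_vals.mean(), np.sqrt(eigs.sum()))   # Monte Carlo mean vs sqrt(Tr Sigma)
```

The empirical mean sits just below $\sqrt{\operatorname{Tr}(\Sigma)}$, reflecting that the Cauchy-Schwarz step is tight up to lower-order fluctuations of the norm.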

Proof of Theorem 1
In view of Proposition 2 and Example 2, a natural idea is to use uniform distributions for $\rho$ and $\mu$ on ellipsoids induced by the structure of the matrix $\Sigma$. It appears that working with ellipsoids directly is quite tedious. To avoid these technical problems, we work with the non-isotropic truncated Gaussian distribution. Throughout the proof, we assume without loss of generality that $\Sigma$ is invertible. If it is not the case, the distribution of $M$ lives almost surely in a lower-dimensional subspace; we can project onto this subspace and continue the proof without changes. Fix $\Theta = \mathbb{R}^d \times \mathbb{R}^d$ and choose the prior distribution $\mu$ on $\Theta$ as the product of two multivariate Gaussian distributions in $\mathbb{R}^d$, both with mean zero and covariance matrix $\beta^{-1}\Sigma$. For $u, v \in \Sigma^{1/2}S^{d-1}$, let the posterior distribution $\rho_{u,v}$ be defined as follows. For $r > 0$, consider the density function $f_u$ in $\mathbb{R}^d$ given by
$$f_u(x) = \frac{1}{p}\,g(x-u)\,\mathbb{1}\{\|x-u\| \le r\}, \qquad (12)$$
where $g$ denotes the density of a multivariate Gaussian distribution with mean zero and covariance $\beta^{-1}\Sigma$, and $p > 0$ is a normalization constant. That is, the distribution defined by $f_u$ is a multivariate normal distribution restricted to the ball $\{x \in \mathbb{R}^d : \|x - u\| \le r\}$. Our distribution $\rho_{u,v}$ on $\Theta$ is now defined as a product of two distributions given by $f_u$ and $f_v$ respectively. Observe that since $f_u$ is symmetric around $u$ (that is, for any $y \in \mathbb{R}^d$ we have $f_u(u+y) = f_u(u-y)$), the means of the marginals are $u$ and $v$, where $\rho_u$ and $\rho_v$ denote the marginals of $\rho_{u,v}$.
Let us now compute the Kullback-Leibler divergence between $\rho_{u,v}$ and $\mu$. Let $g$ denote the density function of a multivariate Gaussian distribution with mean zero and covariance $\beta^{-1}\Sigma$. By the additivity of the Kullback-Leibler divergence for product measures, the divergence splits into two terms, which are analyzed similarly. For $\theta$ distributed according to $\rho_u$, a direct computation gives $\mathrm{KL}(\rho_u, \mu_1) \le \log(1/p) + \beta/2$, where in the last line we used $u \in \Sigma^{1/2}S^{d-1}$. Let $Z$ be a random vector having a multivariate Gaussian distribution with mean zero and covariance $\beta^{-1}\Sigma$. By (12) and using the translation $u \to 0$, we have $p = \Pr(\|Z\| \le r)$. By Markov's inequality we have $\Pr(\|Z\| > r) \le \mathbb{E}\|Z\|^2/r^2 = \operatorname{Tr}(\Sigma)/(\beta r^2)$, so choosing $r^2 = 2\operatorname{Tr}(\Sigma)/\beta$ we get $p \ge 1/2$. Therefore, we have $\log(1/p) \le \log 2$. Finally, for this choice of $r$, $\mathrm{KL}(\rho_{u,v}, \mu) \le 2\log 2 + \beta$. For $\lambda \in \mathbb{R}$ we want to plug the function $\lambda\theta^T\Sigma^{-1/2}M\Sigma^{-1/2}\nu$ into Lemma 1, where $(\theta, \nu)$ is distributed according to $\rho_{u,v}$. By (13) we identify the mean of this function over the posterior. It is only left to compute $\mathbb{E}_{\rho_{u,v}}\log(\mathbb{E}\exp(\lambda\theta^T\Sigma^{-1/2}M\Sigma^{-1/2}\nu))$. Conditionally on $(\theta, \nu)$, we bound the $\|\cdot\|_{\psi_1}$ norm as in the proof of Proposition 2, where both the $\|\cdot\|_{\psi_1}$ norm and the expectation are considered with respect to the distribution of $M$, and the second line uses the Cauchy-Schwarz inequality. Taking again the expectation with respect to $M$ only, we apply Lemma 2. Observe that by our choice of $r$, the norms of $\theta$ and $\nu$ are bounded almost surely. Let us choose $\beta = 2r(\Sigma)$. Thus, by (14), (15) and Lemma 1, we have, for any fixed $\lambda$ with $|\lambda|$ small enough, the desired inequality, where we used $2\log 2 + 2r(\Sigma) \le 4r(\Sigma)$. We choose $\lambda$ accordingly and finish the proof.

Proofs of Theorem 2 and Theorem 3
We first present some auxiliary results, and then prove Theorem 3 and Theorem 2. The technique of the proof combines the analysis of Theorem 1 with a careful truncation argument. We also use the decoupling-chaining argument to control the large components of the sums. In this last part, we mainly adapt previously known techniques.
We need the following result, which is similar to the deviation inequality appearing in [20]. As above, we provide a simple proof based on the variational equality (9).
Lemma 3. Assume that $X$ is a zero mean $\kappa$-sub-Gaussian random vector (4). Then, with probability at least $1 - \exp(-t)$, the bound (16) holds.
Remark 5. We will frequently use a relaxation of the bound (16).
Proof. Observe that $\|X\| = \sup_{v \in S^{d-1}}\langle X, v\rangle$. Thus, we upper bound $\langle X, v\rangle$ uniformly over the sphere. Set $\Theta = \mathbb{R}^d$. Let the prior distribution $\mu$ be a multivariate Gaussian distribution with mean zero and covariance $\beta^{-1}I_d$. For $v \in S^{d-1}$, let $\rho_v$ be a multivariate Gaussian distribution with mean $v$ and covariance $\beta^{-1}I_d$. By the standard formula, we have $\mathrm{KL}(\rho_v, \mu) = \beta\|v\|^2/2 = \beta/2$. Our function is $\lambda\langle X, \theta\rangle$, where $\theta$ is distributed according to $\rho_v$. To apply Lemma 1 (with $n = 1$) we only need to compute $\mathbb{E}_{\rho_v}\log(\mathbb{E}_X\exp(\lambda\langle X, \theta\rangle))$. Conditionally on $\theta$, by the sub-Gaussian assumption, we have
$$\log\mathbb{E}_X\exp(\lambda\langle X, \theta\rangle) \le 9\kappa^2\lambda^2\,\theta^T\Sigma\theta, \qquad (17)$$
where to get the explicit constant, one should keep track of the constant factors in the implications of [50, Proposition 2.5.2]. We have $\mathbb{E}_{\rho_v}\theta^T\Sigma\theta = v^T\Sigma v + \operatorname{Tr}(\Sigma)/\beta$. Therefore, for any $\lambda > 0$, simultaneously for all $v \in S^{d-1}$, we have the corresponding inequality with probability at least $1 - \exp(-t)$. Optimizing over $\lambda$ and $\beta$, we prove the claim.
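The "standard formula" invoked in the proof, $\mathrm{KL}(\mathcal{N}(v, \beta^{-1}I_d),\, \mathcal{N}(0, \beta^{-1}I_d)) = \beta\|v\|^2/2$, can be verified numerically by averaging the log-density ratio over samples from the posterior (illustrative sketch; all numerical choices are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
d, beta = 4, 2.5
v = rng.standard_normal(d)                # the posterior mean; ||v|| is arbitrary here

# Sample theta ~ rho_v = N(v, I/beta) and average log(d rho_v / d mu)(theta);
# the shared covariance makes the normalizing constants cancel in the ratio.
theta = v + rng.standard_normal((200_000, d)) / np.sqrt(beta)
log_ratio = 0.5 * beta * ((theta ** 2).sum(1) - ((theta - v) ** 2).sum(1))
print(log_ratio.mean(), beta * (v ** 2).sum() / 2)   # the two values agree closely
```

For $v \in S^{d-1}$ this gives exactly the $\beta/2$ used in the proof; the formula is what makes the Gaussian prior/posterior pair so convenient for Lemma 1.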
Remark 6. The leading constant in Lemma 3 can be made optimal if we assume a bound on the moment generating function as in the Gaussian case. That is, if instead of (17) we have, conditionally on $\theta$, $\log\mathbb{E}_X\exp(\lambda\langle X, \theta\rangle) \le \lambda^2\theta^T\Sigma\theta/2$, then, optimizing with respect to $\lambda$ and $\beta$, one can show that, with probability at least $1 - \exp(-t)$, it holds that $\|X\| \le \sqrt{\operatorname{Tr}(\Sigma)} + \sqrt{2t\|\Sigma\|}$. This is what one can achieve if $X$ is a zero mean Gaussian vector, in which case one can use the Gaussian concentration inequality (see [8, Example 5.7]). This observation appears (implicitly) in the works of Catoni and co-authors.
Define the truncation function $\psi$: a symmetric function satisfying, for all $x \in \mathbb{R}$, the properties used below. The following bound will be important in our analysis.
Lemma 4. Let $\psi$ be as above and let $Z$ be a square integrable random variable. Then $\mathbb{E}\psi(Z)$ satisfies the first bound; moreover, for any $a > 0$, the second bound holds.
Proof. In the proof we use the fact that the function in question is convex. Observe also that if $0 \le t \le a$, then (18) holds. The first inequality follows from the displayed chain of estimates; the second follows by (18).
This allows us to prove the following uniform bound.
Lemma 5. Assume that $X_1, \ldots, X_n$ are independent copies of a zero mean random vector $X$ in $\mathbb{R}^d$ with covariance $\Sigma$. Let $s$ be an integer, and assume that $X$ satisfies, for some $\eta \ge 1$ and any $y \in \mathbb{R}^d$, $\left(\mathbb{E}|\langle y, X\rangle|^{2s}\right)^{\frac{1}{2s}} \le \eta\sqrt{y^T\Sigma y}$.
Fix a truncation level $\lambda > 0$. Then there is $C_s > 0$ that depends only on $s$ such that for any $t > 0$, with probability at least $1 - 2\exp(-t)$, the stated uniform bound holds.
Proof. Fix $\beta > 0$ and let $\Theta = (\mathbb{R}^d)^s$. Choose the prior distribution $\mu$ on $\Theta$ as the product of $s$ multivariate Gaussian distributions in $\mathbb{R}^d$ with mean zero and covariance $\beta^{-1}I_d$. For $v \in S^{d-1}$, let the posterior distribution $\rho_{v,\ldots,v}$ be defined as the product of $s$ multivariate Gaussian distributions in $\mathbb{R}^d$, each with mean $v$ and covariance $\beta^{-1}I_d$. In what follows, we use the simplifying notation $\rho = \rho_{v,\ldots,v}$, and the marginals of $\rho$ are denoted by $\rho_k$ for $k = 1, \ldots, s$. Using the additivity of the Kullback-Leibler divergence for product measures, we get $\mathrm{KL}(\rho, \mu) = s\beta/2$. Fix $\lambda > 0$ and let $\beta = r(\Sigma)$. Our plan is to bound $\psi(\lambda\langle v, X\rangle^s)$ by an expression involving a constant $c_s > 0$ depending only on $s$ plus some terms that do not depend on $v$; then we apply Lemma 1. This idea is related to the influence function approach used by Catoni [10, 11] in the context of robust estimation. Using (19) and the first part of Lemma 4, we obtain the corresponding decomposition. We start with the last summand, which we bound conditionally on $X$. We then need to upper bound the remaining term. Applying Hölder's inequality and using that $\theta_1, \ldots, \theta_s$ are independent, we obtain a bound which, for our choice $\beta = r(\Sigma)$, is of the required order. Thus, Lemma 1 implies that, with probability at least $1 - \exp(-t)$, the bound holds for all $v \in S^{d-1}$. By the above computations, we have, on the same event, control of all terms but the last sum. It remains to control the last sum. Denote $Y = \min\left(1, \frac{2^{s-1}\lambda^2\|X\|^{2s}}{6\beta^s}\right)$ and let $Y_1, \ldots, Y_n$ be independent copies of $Y$. Observe that $Y \in [0,1]$, which implies $\mathrm{Var}(Y) \le \mathbb{E}Y$. By the standard Bernstein inequality [8, Corollary 2.11], with probability at least $1 - \exp(-t)$, the empirical mean of the $Y_i$ concentrates around $\mathbb{E}Y$. Finally, denoting the standard basis in $\mathbb{R}^d$ by $e_1, \ldots, e_d$ and using the triangle inequality, we obtain $\mathbb{E}\|X\|^{2s} \le \eta^{2s}(\operatorname{Tr}\Sigma)^s$, which implies $\mathbb{E}Y \le 2^{s-1}\eta^{2s}\lambda^2\|\Sigma\|^s/6$. Combining these estimates and taking our choice of $\lambda$ into account, the one-sided bound follows.
To finish the proof, we need a two-sided bound. Since the function $\psi$ is symmetric, it is enough to consider $\rho_{v,\ldots,v,-v}$ instead of $\rho_{v,\ldots,v}$ as the posterior distribution, for which the same analysis holds. The claim follows.

Bounds on the norms of random vectors
First, consider the sub-Gaussian case. By [50, Proposition 2.5.2], the sub-Gaussian vector $X$ satisfies, for all $q \ge 1$ and any $y \in \mathbb{R}^d$, $(\mathbb{E}|\langle y, X\rangle|^q)^{1/q} \le 3\sqrt{q}\,\|\langle y, X\rangle\|_{\psi_2} \le 3\kappa\sqrt{q\, y^T\Sigma y}$.
Choosing $q = 2s$, we see that when applying Lemma 5 we may take $\eta = 3\kappa\sqrt{2s}$. The norm of a sub-Gaussian vector can easily be analyzed using Lemma 3. Indeed, by the union bound, with probability at least $1 - n\exp(-t)$, the corresponding bound on $\max_{1 \le i \le n}\|X_i\|$ holds whenever $t \ge r(\Sigma)$.
Second, we consider the log-concave case. We use Borell's characterization of log-concave distributions (see e.g., [3, Lemma 2.3]): if $X$ is a zero mean random vector in $\mathbb{R}^d$ with a log-concave distribution, then for any $y \in S^{d-1}$,
$$\|\langle y, X\rangle\|_{\psi_1} \le \kappa\sqrt{y^T\Sigma y}, \qquad (21)$$
where $\kappa$ is a universal constant. We use the symbol $\kappa$ in the proof but absorb it into the generic constant in our final statements. By (10) and (21), we have for any $y \in S^{d-1}$ and $q \ge 1$,
$$\mathbb{E}|\langle y, X\rangle|^q \le 2\,q!\,\|\langle y, X\rangle\|_{\psi_1}^q \le 2q^q\,\|\langle y, X\rangle\|_{\psi_1}^q \le 2(\kappa q)^q(y^T\Sigma y)^{q/2}. \qquad (22)$$
Choosing $q = 2s$, we see that in Lemma 5 we may take $\eta = 4\kappa s$. The second component is the inequality of Paouris [42], written in the following form [2, Theorem 2]: if $X$ is a random vector in $\mathbb{R}^d$ with a log-concave distribution, then for any $q \ge 1$, the $q$-th moment of $\|X\|$ is controlled as in (23), where $C > 0$ is a universal constant. For a zero mean random vector $X$ we have $\mathbb{E}\|X\| \le \sqrt{\operatorname{Tr}(\Sigma)}$. By (22) we have $\mathbb{E}|\langle y, X\rangle|^q \le 2(\kappa q)^q\|\Sigma\|^{q/2}$. Thus, by Markov's inequality together with (23), with probability at least $1 - \exp(-t)$, it holds that $\|X\| \le eC\sqrt{\operatorname{Tr}(\Sigma)} + 2\kappa t\sqrt{\|\Sigma\|}$. By the union bound, the same bound holds for $\max_{1 \le i \le n}\|X_i\|$ with probability at least $1 - n\exp(-t)$.

Proof of Theorem 3
We start with the upper tail. Let $\lambda > 0$ be a fixed truncation level. We can write the following decomposition, similar to the one used in [3], splitting each summand into a truncated part and a large part. The analysis of the second term will follow from Lemma 5. Indeed, for $\eta = 8\kappa$ (the value allowed by (22) when $s = 2$), and for the choice of $\lambda$ fixed throughout the proof, we have, with probability at least $1 - 2\exp(-r(\Sigma))$, a uniform bound over the sphere on the truncated part, where $c > 0$ is an absolute constant. Since $\kappa$ is an absolute constant in the log-concave case, we will sometimes absorb it into other absolute constants.
The analysis of large summands can be done via a well-known decoupling-chaining argument.This argument leads to the following result.
Lemma 6 (Proposition 9.4.2 in [45]). Assume that $Y_1, \ldots, Y_n$ are independent copies of a random vector $Y$ in $\mathbb{R}^d$ such that for all $x \in S^{d-1}$ it holds that $\|\langle x, Y\rangle\|_{\psi_1} \le 1$.
There are absolute constants $c_1, c_2 > 0$ such that the following holds. For any $t \ge 0$, with probability at least $1 - c_1\exp(-t)$, we have, uniformly over $k = 1, \ldots, n$, Our first observation is that the above bound does not depend on $d$. Moreover, the distribution of $Y$ need not be isotropic. Thus, we can adapt this result to our case. Recall that by (21), for a zero mean, log-concave vector $X$, we have, for any $x \in S^{d-1}$, $\|\langle x, X\rangle\|_{\psi_1} \le \kappa\sqrt{\|\Sigma\|}$. Throughout the proof we choose $t = (r(\Sigma)n)^{1/4}$. From now on we follow the arguments in [3, 45], with several modifications needed to take the effective rank into account. Observe that by (24), the bounds (26) and (27) hold with the corresponding probability.
Case 1. On the event where (26) and (27) hold, we use that the map $m \mapsto \sqrt{m}\,\log\frac{e^2 n}{m}$ is increasing for $1 \le m \le n$ and that $n \ge r(\Sigma)$. Thus, on the same event, (26) and (27) imply the desired estimate, where $c_3 > 0$ is an absolute constant.
Case 2. Otherwise, we assume $m > r(\Sigma)\log^2\frac{e^2 n}{r(\Sigma)}$. By (26), (27) and the union bound, the inequality (28) holds with the corresponding probability. Assume that the sample size $n$ is large enough, so that $\lambda^{-1} \ge 4c_2\kappa\|\Sigma\|\log^2\frac{en}{m}$. Solving (28), we get, in particular, that (29) holds, where $c_4, c_5 > 0$ are absolute constants. Combining (26) and (29) and dividing both sides by $n$, we arrive, for some $c_6 > 0$, at the desired estimate on the corresponding event, provided that (30) holds. Since we assumed that $m > r(\Sigma)\log^2\frac{e^2 n}{r(\Sigma)}$, one can easily show that if $n \ge c_7\, r(\Sigma)$ for some $c_7 > 0$, then the inequality (30) holds. Indeed, $x \mapsto \sqrt{x}$ grows faster than $x \mapsto \log^2(x)$. This proves the upper tail bound.
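Two elementary facts used in this case analysis can be verified numerically; a sketch, assuming the increasing map in Case 1 is $m \mapsto \sqrt{m}\,\log(e^2 n/m)$ (our reading of the argument):

```python
import math

n = 10**6

def f(m):
    # The map m -> sqrt(m) * log(e^2 * n / m), claimed increasing on [1, n].
    return math.sqrt(m) * math.log(math.e**2 * n / m)

vals = [f(m) for m in range(1, n + 1, 997)]
increasing = all(a < b for a, b in zip(vals, vals[1:]))
# sqrt(x) eventually dominates log^2(x): the fact behind n >= c_7 r(Sigma) implying (30).
dominates = all(math.sqrt(x) > math.log(x) ** 2 for x in range(10**4, 10**6, 9973))
print(increasing, dominates)
```

The first check confirms monotonicity on a grid of $[1, n]$; the second confirms the growth comparison on $[10^4, 10^6]$, consistent with the claim that the condition kicks in once $n$ exceeds a constant multiple of $r(\Sigma)$.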
Although one can prove the lower tail bound completely analogously, we discuss an alternative argument. For the same value of $\lambda$, using the definition of the truncation function and the fact that $\langle v, X\rangle^2 \ge 0$, we have, with probability at least $1 - 2\exp(-r(\Sigma))$, the corresponding bound on the supremum. Applying the union bound, we finish the proof. We discuss a related argument in Section 3.2.

Proof of Proposition 1
There are only a few changes compared to the proof of Theorem 3. First, an analog of the norm bound (27) is implied by our assumption. When applying Lemma 5 to the second term in (25), we choose the parameters accordingly. Our second modification is that we consider the cases $m \le (r(\Sigma)+t)\log^2\frac{e^2 n}{r(\Sigma)+t}$ and $m > (r(\Sigma)+t)\log^2\frac{e^2 n}{r(\Sigma)+t}$. In the first case, $\sqrt{m}\,\log\frac{e^2 n}{m} \le \sqrt{r(\Sigma)+t}\,\log^2\frac{e^2 n}{r(\Sigma)+t}$, and with probability at least $1 - c_1\exp(-t)$ the analog of the Case 1 estimate holds. Otherwise, if $m > (r(\Sigma)+t)\log^2\frac{e^2 n}{r(\Sigma)+t}$, following the same lines, we show the analog of (29), where $c_3 > 0$ is an absolute constant. In this case, the condition (30) is rewritten with $r(\Sigma)+t$ in place of $r(\Sigma)$, which implies that it is sufficient to take $n \ge c_4(r(\Sigma)+t)$. Combining these bounds as in the proof of Theorem 3, we prove the claim.

Proof of Theorem 2
The log-concave case. The proof in the log-concave case is quite similar to the proof of Theorem 3. Let us start with the upper tail. Let $\lambda_1 > 0$ be a fixed truncation level. As above, we write the following decomposition: The analysis of the second term follows from Lemma 5. For $\eta = 4s\kappa$, defining
$$\lambda_1 = \frac{2r(\Sigma)}{n(4s\kappa)^{2s}\|\Sigma\|^{s}},$$
we have, with probability at least $1 - 2\exp(-r(\Sigma))$, the corresponding bound on the supremum, where $C_s$ depends only on $s$. We repeat the arguments of the proof of Theorem 3 for the remaining term with just two modifications: we use instead that, with probability at least $1 - n\exp(-r(\Sigma))$, the norm bound holds, and we choose $t = r(\Sigma)$ when applying Lemma 6 in the form (26). The same lines imply that if $n(4s\kappa)^{2s}\|\Sigma\|^{s} \ge 2r(\Sigma)$, then, with probability at least $1 - (n + c_1)\exp(-r(\Sigma))$, the desired bound holds, where $c_8$ is an absolute constant. This inequality contributes the term involving $c_8$, $\|\Sigma\|^{s}$ and $r(\Sigma)$ to the final bound. For this term to be consistent with our final bound we need an additional restriction on $n$. It is easy to verify that both the last inequality and (31) are satisfied when (8) is satisfied.
Indeed, in (31) we are essentially comparing $\big(\tfrac{n}{r(\Sigma)}\big)^{1/s}$ with $\log^2\tfrac{en}{r(\Sigma)}$, which implies the need for an $s$-dependent factor $c_s$ in (8).
The analysis of the lower tail can be done analogously. Indeed, we may repeat the analysis for the following decomposition: The second part of the claim follows.
The sub-Gaussian case. We mainly follow the previous proof, with several modifications. First, when applying Lemma 5 we can use $\eta = 3\kappa\sqrt{2s}$, as discussed above. Second, we use (see [50, Lemma 2.7.7]) that for any $y \in S^{d-1}$,
$$\sqrt{\log 2}\,\|\langle y, X\rangle\|_{\psi_1} \le \|\langle y, X\rangle\|_{\psi_2} \le \kappa\sqrt{\|\Sigma\|},$$
and repeat the same lines with (27) replaced by (20) to achieve our bound.
Our final observation is that if $n$ is large, then, with high probability, the truncation is not active, where $\lambda$ is chosen as in Lemma 5. On the corresponding event we have the desired bound, where $C_s$ depends only on $s$. By (20), with probability at least $1 - n\exp(-r(\Sigma))$, the norm bound holds. Solving this with respect to $n$, we see that it is enough to take $n \ge c_9(r(\Sigma))^{s+1}$, where $c_9 > 0$ is an absolute constant. Applying the union bound, we complete the proof of the statement.
Remark 7. To improve the tail estimate in the Gaussian case for smaller values of $n$, one needs to prove a $\psi_2$-version of Lemma 6. In our analysis we simply used that $\sqrt{\log 2}\,\|\langle y, X\rangle\|_{\psi_1} \le \|\langle y, X\rangle\|_{\psi_2}$.

Additional results
We provide two results extending the ideas used in Theorem 1 and Theorem 2.

Deviations of the norm of sub-exponential random vectors
First, assume that $X$ is a zero mean random vector with covariance $\Sigma$ satisfying
$$\|\langle X, y\rangle\|_{\psi_1} \le \kappa\sqrt{y^T\Sigma y} \quad \text{for all } y \in \mathbb{R}^d. \quad (32)$$
For example, the aforementioned result of Borell shows that the assumption (32) is implied by log-concavity. We prove the following result, which improves the bound of Alesker [4] on the concentration of the Euclidean norm of a point sampled uniformly at random from a convex body.

A lower tail version of Theorem 1
It is known that the least singular value of a random matrix can be controlled with high probability guarantees (in fact, it has sub-Gaussian tails) under significantly milder assumptions. This can be seen as a high-dimensional extension of the following fact: for a non-negative random variable $Z$ with independent copies $Z_1, \ldots, Z_n$ and any $t \ge 0$,
$$\Pr\left(\frac{1}{n}\sum_{i=1}^{n} Z_i \le \mathbb{E}Z - t\right) \le \exp\left(-\frac{n t^2}{2\,\mathbb{E}Z^2}\right).$$
That is, the one-sided bound allows for sub-Gaussian concentration even if the random variable $Z$ has infinite moments after the first two. There is a list of dimension-dependent lower tail bounds for the least singular value under minimal assumptions [44,40,26,46,52,11,38]. Our next result improves the dimension-dependent bound of Oliveira [40] and complements the result of Theorem 1. Although one can modify the original proof in [40] to get a dimension-free bound, we present a short and self-contained proof.
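A one-sided bound of this type follows from an elementary Chernoff argument; the following sketch uses a standard formulation (our phrasing of the fact, with $Z_1, \ldots, Z_n$ independent copies of $Z \ge 0$ and $\mathbb{E}Z^2 < \infty$):

```latex
\begin{align*}
% Since e^{-x} \le 1 - x + x^2/2 for x \ge 0, for any \lambda > 0:
\mathbb{E}\, e^{-\lambda Z}
  &\le 1 - \lambda\,\mathbb{E}Z + \tfrac{\lambda^2}{2}\,\mathbb{E}Z^2
   \le \exp\!\left(-\lambda\,\mathbb{E}Z + \tfrac{\lambda^2}{2}\,\mathbb{E}Z^2\right), \\
% Chernoff bound for the lower tail of the empirical mean:
\mathbb{P}\!\left(\tfrac{1}{n}\sum_{i=1}^{n} Z_i \le \mathbb{E}Z - t\right)
  &\le e^{\lambda n (\mathbb{E}Z - t)}\left(\mathbb{E}\, e^{-\lambda Z}\right)^{n}
   \le \exp\!\left(n\left(-\lambda t + \tfrac{\lambda^2}{2}\,\mathbb{E}Z^2\right)\right), \\
% and optimizing at \lambda = t / \mathbb{E}Z^2 gives sub-Gaussian decay:
\mathbb{P}\!\left(\tfrac{1}{n}\sum_{i=1}^{n} Z_i \le \mathbb{E}Z - t\right)
  &\le \exp\!\left(-\frac{n t^2}{2\,\mathbb{E}Z^2}\right).
\end{align*}
```

Only the first two moments of $Z$ enter the bound, which is why no higher moment assumptions are needed for the lower tail.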
Proposition 4. Assume that $M_1, \ldots, M_n$ are independent copies of a positive semi-definite symmetric random matrix $M$ with mean $\mathbb{E}M = \Sigma$. Let $M$ satisfy the moment equivalence assumption for some $\kappa \ge 1$ and all $x \in \mathbb{R}^d$. Then, for any $t > \log 2$, with probability at least $1 - 2\exp(-t)$, the corresponding lower tail bound holds.
Proof. Set $\Theta = \mathbb{R}^d$. Let the prior distribution $\mu$ be a multivariate Gaussian distribution with mean zero and covariance $\beta^{-1}I_d$. For $v \in S^{d-1}$, let $\rho_v$ be a multivariate Gaussian distribution with mean $v$ and covariance $\beta^{-1}I_d$. We have $\mathrm{KL}(\rho_v, \mu) = \beta/2$. Let $\theta$ be distributed according to $\rho_v$. Given a positive semi-definite matrix $C$, we have
$$\mathbb{E}_{\theta \sim \rho_v}\, \theta^T C\, \theta = v^T C v + \beta^{-1}\,\mathrm{Tr}(C). \quad (34)$$
Let $Z$ be a random vector having a multivariate Gaussian distribution with zero mean and covariance $\beta^{-1}\Sigma$. Define $\varphi = \Sigma^{1/2}v$. Using $(a+b)^4 \le 8a^4 + 8b^4$ and the formulas for the moments of the Gaussian distribution, we obtain (35). Consider the negative part of the truncation function. That is, let $\psi : (-\infty, 0] \to [-1, 0]$ be defined so that for $x \le 0$ it holds that $x \le \psi(x) \le \log(1 + x + x^2/2)$. Fix $\lambda > 0$ and consider the function $\psi(-\lambda\theta^T M\theta)$. Observe that $-\lambda\theta^T M\theta \le 0$ almost surely, so that the function is well defined. We want to plug it into Lemma 1. Using $\psi(x) \le \log(1 + x + x^2/2)$ for $x \le 0$ and $\log(1+y) \le y$ for $y > -1$, we bound the moment generating function, where we use the moment equivalence assumption together with (34) and (35). By Lemma 1 we have, with probability at least $1 - \exp(-t)$, simultaneously for all $v \in S^{d-1}$, the inequality (36). Observe that $\psi$ is convex. Thus, by Jensen's inequality and (34), we may pass to the expectation with respect to $\theta$. Since $v^T M_i v \ge 0$, by the definition of $\psi$ and using $\psi(x) \ge x$, we may further lower bound the left-hand side. We plug this into (36), divide both sides by $\lambda > 0$, and obtain the desired inequality on the corresponding event. It is left to analyze the sum involving $\mathrm{Tr}(M_i)$. Observe that the second moment of $\mathrm{Tr}(M)$ is controlled, where we used, denoting the standard basis in $\mathbb{R}^d$ by $e_1, \ldots, e_d$, the moment equivalence assumption applied to each $e_j$. Applying the Bernstein inequality, we have, with probability at least $1 - \exp(-t)$, the required control of this sum. By the union bound, with probability at least $1 - 2\exp(-t)$, both bounds hold for any $v \in S^{d-1}$. Choose $\beta = 10^3\, r(\Sigma)$ and $\lambda$ accordingly.
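The Gaussian smoothing step at the start of the proof relies on the identity $\mathbb{E}_{\theta \sim \rho_v}\,\theta^T C\,\theta = v^T C v + \beta^{-1}\mathrm{Tr}(C)$ for $\rho_v = N(v, \beta^{-1}I_d)$. A minimal numerical sanity check; the dimension, $\beta$, and the test matrix are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
d, beta, N = 5, 10.0, 200000
G = rng.normal(size=(d, d))
C = G @ G.T                                   # a PSD test matrix
v = rng.normal(size=d)
v /= np.linalg.norm(v)                        # v on the unit sphere
# theta ~ rho_v = N(v, beta^{-1} I_d); note KL(rho_v, mu) = beta ||v||^2 / 2 = beta / 2.
theta = v + rng.normal(size=(N, d)) / np.sqrt(beta)
mc = np.einsum("ni,ij,nj->n", theta, C, theta).mean()   # Monte Carlo E[theta^T C theta]
exact = v @ C @ v + np.trace(C) / beta                  # closed form from the identity
print(mc, exact)
```

The Monte Carlo average matches the closed form up to sampling error; taking $v = 0$ recovers the prior case.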

A comparison with previous results by Catoni and co-authors
The application of Lemma 1 is not new in the context of estimation of quadratic forms. The earliest such application traces back to the works of Audibert and Catoni on robust linear least squares regression [6,5]. More recently, it was done in the context of covariance matrix estimation [11,17,12] and lower tails for the sample covariance matrix [40,38]. With a few exceptions discussed below, this technique has not been previously applied to also control the upper tails of the sample covariance matrix. Applications to general multilinear forms were not previously analyzed.
The closest (in terms of proof techniques) to our results are the bounds appearing in [11] and in the follow-up work [17]. In particular, the work of Catoni [11] provides a dimension-dependent analysis of the sample covariance matrix under two assumptions: the equivalence of $L_4$ and $L_2$ marginal norms, as well as an exponential moment assumption on the distribution of $X$ (see their Proposition 2.2 and Proposition 2.3). However, their results fall short of providing the rate of Proposition 2 (our simplest bound), as in their case the $\sqrt{d/n}$ convergence rate requires at least $n \ge d^5$ due to the additive term denoted there by $\gamma_+$. The sub-optimality comes from the step of the analysis at which $\max_{i=1,\ldots,n}\|X_i\|$ is used to control the moment generating function of the quadratic form. The problem is explicitly pointed out by Catoni [11, pages 15 and 16], where the author asks if it is possible to show that $\gamma_+$ scales as $\sqrt{d/n}$ at least in some cases. The same limitations are inherited in the dimension-free extension of this analysis appearing in [17, Proposition 5.1 and Proposition 5.2]: apart from the additional energy parameter $\sigma$ and logarithmic factors, their bound similarly requires the sample size $n$ to be greater than some integer power of $r(\Sigma)$. To resolve these problems and achieve the optimal guarantees, we introduce a truncated non-isotropic posterior distribution (12). This idea also plays a central role in the proof of Proposition 3, for which no non-isotropic analog is known.
Our second key idea is the analysis of the truncated powers of linear forms in Lemma 5. When $s = 2$, the bound of Lemma 5 shares some similarities with the uniform bound of Proposition 4.2 in [13]. However, we work with an explicit truncation applied to powers of linear forms, which allows us to combine the bound of Lemma 5 with the classical decoupling-chaining argument analyzed in [3]. This plays a central role in the proofs of Theorem 2 and Theorem 3.

Possible further extensions
In some applications, one is interested in the following generalization of our problem: in the setup of Theorem 1, let $T$ be a bounded subset of $\mathbb{R}^d$, and assume that we want to provide a high probability upper bound on the corresponding supremum over $T$. Consider the case where $T$ can be approximated by an ellipsoid; that is, assume that there is a symmetric positive definite matrix $\Gamma$ such that $\frac{1}{2}\Gamma^{1/2}S^{d-1} \subseteq T \subseteq 2\Gamma^{1/2}S^{d-1}$. In this case, by considering the random matrix $\Gamma^{1/2}M\Gamma^{1/2}$, we can verify that the norm equivalence assumption (2) holds and the bound of Theorem 1 is applicable.
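The ellipsoid reduction rests on the identity $\sup_{v \in \Gamma^{1/2}S^{d-1}}|v^T A v| = \|\Gamma^{1/2}A\,\Gamma^{1/2}\|$ for symmetric $A$, which is why considering $\Gamma^{1/2}M\Gamma^{1/2}$ suffices. A minimal numerical check with random illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 6
A = rng.normal(size=(d, d))
A = (A + A.T) / 2                        # symmetric, plays the role of M - Sigma
G = rng.normal(size=(d, d))
Gamma = G @ G.T + np.eye(d)              # positive definite Gamma
w, U = np.linalg.eigh(Gamma)
S = (U * np.sqrt(w)) @ U.T               # symmetric square root Gamma^{1/2}
op = np.linalg.norm(S @ A @ S, 2)        # ||Gamma^{1/2} A Gamma^{1/2}||
vals, vecs = np.linalg.eigh(S @ A @ S)
u = vecs[:, np.argmax(np.abs(vals))]     # direction attaining the operator norm
v = S @ u                                # the corresponding point of Gamma^{1/2} S^{d-1}
print(abs(v @ A @ v), op)
```

The supremum over the ellipsoid is attained at the image of the top eigenvector and equals the operator norm exactly, up to floating-point error.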
Theorem 2 allows various extensions that can be achieved using the same approach. For example, with a slight modification of the proof, one can write a similar high probability upper bound in the case where, for any given index $i$, the random vectors $X_i^k$ are either sub-Gaussian or log-concave, but not necessarily the same for $k = 1, \ldots, s$. Our results can also be written in the same form as the bounds in [19,3,49], allowing isometric approximations at any scale $\varepsilon$. To do so, we need to take a smaller value of $\lambda$ in the proof. The restriction on $n$ will be weakened accordingly.
Another extension is a multilinear version of the lower tail bound of Proposition 4. When $s$ is even, we have $\langle X, v\rangle^s \ge 0$, so that we can prove the following: with probability at least $1 - \exp(-t)$, the corresponding lower tail bound holds. In our proofs, we used that the indexing set is essentially a product of either unit spheres or ellipsoids in $\mathbb{R}^d$ and that the functions we are considering are linear with respect to the elements of the indexing set. Recently, Koltchinskii [24] studied the asymptotic properties of smooth functions of the sample covariance operators. It is likely that if the linearity is replaced by a certain smoothness assumption, our techniques are still applicable.

Remark 4. The condition $n \gtrsim d + t$ does not appear in [50, Theorem 4.6.1]. As a result, that bound contains an additional additive term scaling as $\frac{d+t}{n}$. In our regime, when $n \gtrsim d + t$, this term is dominated by $\sqrt{\frac{d+t}{n}}$.

$$\sup_{v \in S^{d-1}}\left(\mathbb{E}\langle X, v\rangle^s - \frac{1}{n}\sum_{i=1}^{n}\langle X_i, v\rangle^s\right) \le c_s\,\eta^s\,\|\Sigma\|^{s/2}\sqrt{\frac{r(\Sigma) + t}{n}},$$
under the only assumption $(\mathbb{E}|\langle y, X\rangle|^{2s})^{1/2s} \le \eta\sqrt{y^T\Sigma y}$ for all $y \in \mathbb{R}^d$. The proof follows from Lemma 5 by the lower tail argument of Theorem 3. This complements the dimension-dependent lower tail bound of Mendelson [35, Corollary 1.8], valid for general $L_s$ norms with $s > 2$.