A PROOF OF A NON-COMMUTATIVE CENTRAL LIMIT THEOREM BY THE LINDEBERG METHOD

A Central Limit Theorem for non-commutative random variables is proved using the Lindeberg method. The theorem is a generalization of the Central Limit Theorem for free random variables proved by Voiculescu. The Central Limit Theorem in this paper relies on an assumption which is weaker than freeness.


Introduction
One of the most important results in free probability theory is the Central Limit Theorem (CLT) for free random variables ([11]). It was proved almost simultaneously with the invention of free probability theory. Later, the conditions of the theorem were relaxed ([10]). Moreover, a far-reaching generalization was achieved in [1], which studied domains of attraction of probability laws with respect to free additive convolutions. See also [2].

Freeness is a very strong condition imposed on operators, and it is of interest to find out whether the Central Limit Theorem continues to hold if this condition is somewhat relaxed. This problem calls for a different proof of the non-commutative CLT, one which does not depend on R-transforms or on the vanishing of mixed free cumulants, because both of these techniques are closely tied to the concept of freeness. In this paper we give a proof of the free CLT that avoids using either R-transforms or free cumulants. This allows us to develop a generalization of the free CLT to random variables that are not necessarily free but that satisfy a weaker assumption. An example shows that this assumption is strictly weaker than the assumption of freeness.

The proof that we use is a modification of the Lindeberg proof of the classical CLT ([6]). The main difference is that we use polynomials instead of arbitrary functions from $C_c^3(\mathbb{R})$, and that more ingenuity is required to estimate the residual terms in the Taylor expansion formula. The closest result to the result in this paper is Theorem 2.1 in [12], where the Central Limit Theorem is proved under conditions on the summands that are weaker than the requirement of freeness. The conditions that we use are somewhat different from those in Voiculescu's paper. In addition, we give an explicit example of variables that are not free but that satisfy the conditions of the theorem.

The rest of the paper is organized as follows. Section 2 provides background material and formulates the main result. Section 3 shows by an example that a condition in the main result is strictly weaker than the condition of freeness. Section 4 contains the proof of the main result. Section 5 concludes.

Background and Main Theorem
Before proceeding further, let us establish the background. A non-commutative random space (A, E) is a pair of an operator algebra A and a linear functional E on A. It is assumed that A is closed under taking adjoints and contains a unit, and that E is 1) positive, i.e., $E(X^*X) \ge 0$ for every $X \in A$; 2) finite, i.e., $E(I) = 1$, where I denotes the unit operator; and 3) tracial, i.e., $E(X_1 X_2) = E(X_2 X_1)$ for every $X_1, X_2 \in A$. This linear functional is called the expectation. Elements of A are called random variables. Let X be a self-adjoint random variable (i.e., a self-adjoint operator from the algebra A). We can write X as an integral over a resolution of identity:
$$X = \int_{-\infty}^{\infty} \lambda \, dP_X(\lambda),$$
where $P_X(\lambda)$ is an increasing family of commuting projectors. Then we can define the spectral probability measure of the interval (a, b] as follows:
$$\mu_X((a, b]) = E\left(P_X(b) - P_X(a)\right).$$
We can extend this measure to all measurable subsets in the usual way. We will call $\mu_X$ the spectral probability measure of the random variable X, or simply its spectral measure. We can calculate the expectation of any summable function of a self-adjoint variable X by using its spectral measure:
$$E f(X) = \int_{-\infty}^{\infty} f(\lambda) \, d\mu_X(\lambda).$$
In particular, the moments of the probability measure $\mu_X$ equal the expectation values of the powers of X:
$$E(X^k) = \int_{-\infty}^{\infty} \lambda^k \, d\mu_X(\lambda).$$
Let us now recall the definition of freeness. Consider sub-algebras $A_1, \dots, A_n$. Let $a_i$ denote elements of these sub-algebras and let $k(i)$ be a function that maps the index of an element to the index of the corresponding algebra: $a_i \in A_{k(i)}$.

Definition 1. The algebras $A_1, \dots, A_n$ (and their elements) are free if $E(a_1 \cdots a_m) = 0$ whenever the following two conditions hold: (a) $E(a_i) = 0$ for every i, and (b) $k(i) \ne k(i+1)$ for every $i < m$.

The variables $X_1, \dots, X_n$ are called free if the algebras $A_i$ generated by $\{I, X_i, X_i^*\}$, respectively, are free.
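For intuition, the spectral calculus above can be checked in a finite-dimensional sketch. The setting below is an assumption of the illustration, not part of the paper: A is the algebra of 3×3 matrices and E is the normalized trace, which is positive, finite, and tracial; the spectral measure of a self-adjoint X is then its empirical eigenvalue distribution.

```python
import numpy as np

# Finite-dimensional illustration: A = 3x3 matrices, E = normalized trace.
# The spectral measure of a self-adjoint X is the empirical eigenvalue
# distribution, and E(X^k) equals the k-th moment of that measure.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
X = (A + A.T) / 2                      # self-adjoint

E = lambda M: np.trace(M).real / 3     # normalized trace as expectation
eigs = np.linalg.eigvalsh(X)           # support points of the spectral measure

for k in range(1, 5):
    moment_op = E(np.linalg.matrix_power(X, k))   # E(X^k)
    moment_mu = float(np.mean(eigs ** k))         # integral of lambda^k d mu_X
    assert abs(moment_op - moment_mu) < 1e-10
print("E(X^k) matches the k-th moment of the spectral measure for k = 1..4")
```

In this toy model the resolution of identity is simply the family of spectral projectors of the matrix, which makes the identity $E(X^k) = \int \lambda^k d\mu_X(\lambda)$ an exact finite sum.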
An important property of freeness is that it allows us to compute the moments of products of free random variables.

Proposition 2. Suppose $X_1, \dots, X_n$ are free. Then the expectation of any product of the variables $X_1, \dots, X_n$ can be computed explicitly as a polynomial in the moments of the individual variables.
This property is easy to prove by induction. However, we will not need the full power of this property. Below we formulate the conditions that we need to impose on the random variables in order to prove the CLT. These conditions are consequences of freeness but are likely to be weaker. We will say that a sequence of zero-mean random variables $X_1, \dots, X_n, \dots$ satisfies Condition A if:

1. For every k, $E(X_k X_{i_1} \cdots X_{i_r}) = 0$ provided that $i_s \ne k$ for $s = 1, \dots, r$.

2. For every k, $E(X_k^2 X_{i_1} \cdots X_{i_r}) = E(X_k^2)\, E(X_{i_1} \cdots X_{i_r})$ provided that $i_s < k$ for $s = 1, \dots, r$.

3. For every k, $E(X_k X_{i_1} \cdots X_{i_p} X_k X_{i_{p+1}} \cdots X_{i_r}) = E(X_k^2)\, E(X_{i_1} \cdots X_{i_p})\, E(X_{i_{p+1}} \cdots X_{i_r})$ provided that $i_s < k$ for $s = 1, \dots, r$.
Intuitively, if we know how to calculate every moment of the sequence X 1 , ..., X k−1 , then using Condition A we can also calculate the expectation of any product of random variables X 1 , ..., X k that involves no more than two occurrences of variable X k . Part 1 of Condition A is stronger than is needed for this calculation, since it involves variables with indices higher than k. However, we will need this additional strength in the proof of Lemma 13 below, which is essential for the proof of the main result.
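As a sanity check on part 1 of Condition A, note that classical independent zero-mean variables satisfy it: the expectation of the product factors, and the factor $E(X_k)$ vanishes. (The remaining parts of Condition A are where commutative independence and freeness genuinely differ.) The following sketch is a purely classical, commutative illustration, not a construction from the paper; it verifies part 1 exactly for Rademacher variables by enumeration.

```python
import itertools

# Exact check of Condition A, part 1, for classical independent Rademacher
# variables X_1,...,X_4 (values +-1 with probability 1/2, so E(X_i) = 0).
# E(X_k X_{i_1}...X_{i_r}) is computed by enumerating all sign vectors.
n = 4

def moment(indices):
    """E of the product X_{indices[0]} * X_{indices[1]} * ... (1-based)."""
    total = 0
    for signs in itertools.product([-1, 1], repeat=n):
        p = 1
        for i in indices:
            p *= signs[i - 1]
        total += p
    return total / 2 ** n

# k = 2: any product X_2 X_{i_1}...X_{i_r} with all i_s != 2 has mean zero.
k = 2
others = [i for i in range(1, n + 1) if i != k]
for r in range(1, 4):
    for tail in itertools.product(others, repeat=r):
        assert moment([k] + list(tail)) == 0.0
print("Condition A(1) holds for independent Rademacher variables")
```

The variable $X_k$ appears exactly once in every checked product, so summing over its two signs cancels every term exactly, which is the commutative shadow of part 1.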
Proposition 3. If the zero-mean random variables $X_1, \dots, X_n$ are free, then they satisfy Condition A.

This proposition can be checked by direct calculation using Proposition 2. We will also need the following fact.
Proposition 4. Let X 1 , ..., X l be zero-mean variables that satisfy Condition A(1), and let Y l+1 , ..., Y n be zero-mean variables which are free from each other and from the algebra generated by variables X 1 , ..., X l . Then the sequence X 1 , ..., X l , Y l+1 , ..., Y n satisfies Condition A(1).
Proof: We need to check that the expectation of a product vanishes whenever some variable of the combined sequence occurs in it exactly once and all other factors have different indices. Consider first $E(X_k A_{i_1} \cdots A_{i_r})$, where each $A_{i_t}$ is either one of the $Y_j$ or one of the $X_i$, but none of them equals $X_k$. Since the $Y_j$ are free from each other and from the algebra generated by the $X_i$, we can expand this expectation as a linear combination of products of moments of the $Y_j$ and expectations of the form $E(X_k X_{i_{t(1)}} \cdots X_{i_{t(q)}})$, where none of the $X_{i_{t(\alpha)}}$ equals $X_k$. Then, using the assumption that the $X_i$ satisfy Condition A(1), we conclude that every such expectation vanishes. The case when the distinguished variable is one of the $Y_j$ is similar: the centered variable $Y_j$ occurs exactly once, and freeness forces the expectation to vanish. In sum, the sequence $X_1, \dots, X_l, Y_{l+1}, \dots, Y_n$ satisfies Condition A(1). QED.

While the freeness of random variables $X_i$ is the same concept as the freeness of the algebras that they generate, Condition A deals only with the variables $X_i$, and not with the algebras that they generate. For example, it is conceivable that a sequence $\{X_i\}$ satisfies Condition A while the algebras generated by the $X_i$ do not satisfy its natural analogue. In particular, this implies that Condition A requires checking a much smaller set of moment conditions than freeness. Below we will present an example of random variables which are not free but which satisfy Condition A.

Recall that the standard semicircle law $\mu_{SC}$ is the probability distribution on $\mathbb{R}$ with the density $\frac{1}{2\pi}\sqrt{4 - x^2}$ for $|x| \le 2$, and 0 otherwise. We are going to prove the following Theorem.
Theorem 5. Suppose that (i) $\{\xi_i\}$ is a sequence of self-adjoint random variables that satisfies Condition A; (ii) every $\xi_i$ has absolute moments of all orders, which are uniformly bounded, i.e., for every k there exists a constant $\mu_k$ such that $E|\xi_i|^k \le \mu_k$ for all i; and (iii) $E\xi_i^2 = \sigma_i^2$ and $s_N^2 := \sum_{i=1}^N \sigma_i^2$ satisfies $s_N \sim \sqrt{N}$. Then the spectral distribution of $S_N = (\xi_1 + \cdots + \xi_N)/s_N$ converges weakly to the standard semicircle law $\mu_{SC}$.

The contribution of this theorem is twofold. First, it shows that the semicircle central limit holds for a certain class of non-free variables. Second, it gives a proof of the free CLT which is different from the usual proof through R-transforms. However, it is not stronger than the version of the free CLT which is formulated in Section 2.5 in [10].
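The moments of the limiting law are the Catalan numbers: the 2k-th moment of the standard semicircle distribution equals $C_k = \frac{1}{k+1}\binom{2k}{k}$, and the odd moments vanish by symmetry. The following quadrature sketch (an illustration only, not part of the proof) confirms this numerically.

```python
import math
import numpy as np

# The standard semicircle law has density sqrt(4 - x^2)/(2*pi) on [-2, 2].
# Its 2k-th moment is the k-th Catalan number; odd moments vanish.
x = np.linspace(-2.0, 2.0, 200001)
dx = x[1] - x[0]
density = np.sqrt(np.clip(4.0 - x ** 2, 0.0, None)) / (2.0 * np.pi)

def integral(y):
    """Trapezoidal rule on the fixed grid."""
    return float(np.sum(y[:-1] + y[1:]) * dx / 2)

for k in range(5):
    even_moment = integral(x ** (2 * k) * density)
    catalan = math.comb(2 * k, k) / (k + 1)
    assert abs(even_moment - catalan) < 1e-4       # moments 1, 1, 2, 5, 14
    assert abs(integral(x ** (2 * k + 1) * density)) < 1e-9
print("moments of the semicircle law match the Catalan numbers for 2k = 0..8")
```

In particular the second moment is 1, so the law in the theorem indeed has zero mean and unit variance.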

Example
Let us present an example showing that Condition A is strictly weaker than the freeness condition. Let F be the free group with a countable number of generators $f_k$. Consider the set of relations
$$(f_k f_1)^3 = e,$$
where $k \ge 2$; let R denote the set of words $(f_k f_1)^3$, $k \ge 2$, and define G = F/R, that is, G is the group with generators $f_k$ and relations generated by the relations in R.
Here are some consequences of these relations: for example, $(f_1 f_k)^3 = e$ and $(f_1^{-1} f_k^{-1})^3 = e$ for every $k \ge 2$. We are interested in the structure of the group G. For this purpose we will study the structure of $\overline{R}$, the normal subgroup of F generated by the elements of R and their conjugates. We will represent elements of F by words, that is, by sequences of generators and their inverses. We will say that a word is reduced if it does not contain a subword of the form $f_k f_k^{-1}$ or $f_k^{-1} f_k$. We will call the number of letters in a reduced word w its length and denote it by $|w|$. A set of relations R is symmetrized if for every word $r \in R$, the set R also contains its inverse $r^{-1}$ and all cyclically reduced conjugates of both r and $r^{-1}$. For our particular example, a symmetrized set of relations is given by the following list:
$$(f_k f_1)^3, \quad (f_1 f_k)^3, \quad (f_1^{-1} f_k^{-1})^3, \quad (f_k^{-1} f_1^{-1})^3,$$
where k ranges over all integers $\ge 2$. A word b is called a piece (relative to a symmetrized set R) if there exist two distinct elements of R, $r_1$ and $r_2$, such that $r_1 = bc_1$ and $r_2 = bc_2$. In our case, each $f_k$ and each $f_k^{-1}$ is a piece, and there are no other pieces. Now we introduce the condition of small cancellation for a symmetrized set R:

Condition 6 (C′(λ)). If $r \in R$ and $r = bc$, where b is a piece, then $|b| < \lambda |r|$.
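The notion of a reduced word can be made concrete by a small routine that performs free reduction. The encoding and the helper below are hypothetical conveniences for this illustration, not part of the paper.

```python
# A word in the free group is a sequence of pairs (k, e) standing for f_k^e,
# e = +-1.  A word is reduced if it contains no adjacent pair f_k^e f_k^{-e};
# free reduction deletes such pairs, which a stack does in one left-to-right pass.
def reduce_word(word):
    out = []
    for gen, exp in word:
        if out and out[-1] == (gen, -exp):
            out.pop()            # cancel f_k^e f_k^{-e}
        else:
            out.append((gen, exp))
    return out

# f1 f2 f2^-1 f1 f1^-1 f1^-1 reduces step by step to the empty word e.
w = [(1, 1), (2, 1), (2, -1), (1, 1), (1, -1), (1, -1)]
assert reduce_word(w) == []
assert len(reduce_word([(1, 1), (2, 1)])) == 2     # |f1 f2| = 2
print("free reduction works")
```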
Essentially, the condition says that if two relations are multiplied together, then a possible cancellation must be relatively small. Note that if R satisfies C ′ (λ) then it satisfies C ′ (µ) for all µ ≥ λ. In our example R satisfies C ′ (1/5) . Another important condition is the triangle condition.
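The claim that R satisfies C′(1/5) can be checked mechanically. The sketch below assumes, for concreteness, that the relations are the words $(f_k f_1)^3$ (this reading of the relation set is an assumption of the illustration); it symmetrizes a few relators and computes the longest common prefix of two distinct elements of the symmetrized set, i.e., the longest piece.

```python
# Pieces of a symmetrized relation set, assuming relators r_k = (f_k f_1)^3.
# Letters are pairs (generator index, exponent); relators have length 6.
def inverse(word):
    return [(g, -e) for g, e in reversed(word)]

def cyclic_shifts(word):
    return [word[i:] + word[:i] for i in range(len(word))]

def symmetrize(relators):
    out = []
    for r in relators:
        for w in cyclic_shifts(r) + cyclic_shifts(inverse(r)):
            if w not in out:
                out.append(w)
    return out

relators = [[(k, 1), (1, 1)] * 3 for k in (2, 3, 4)]   # (f_k f_1)^3
R = symmetrize(relators)

def common_prefix(u, v):
    n = 0
    while n < min(len(u), len(v)) and u[n] == v[n]:
        n += 1
    return n

max_piece = max(common_prefix(u, v) for u in R for v in R if u != v)
assert max_piece == 1              # pieces are the single letters f_k^{+-1}
assert max_piece < len(R[0]) / 5   # hence C'(1/5) holds for these relators
print("max piece length:", max_piece)
```

Each relator has period two, so it contributes only two distinct cyclic shifts, and distinct elements of the symmetrized set never agree beyond their first letter.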
Condition 7 (T). Let $r_1$, $r_2$, and $r_3$ be three arbitrary elements of R such that $r_2 \ne r_1^{-1}$, $r_3 \ne r_2^{-1}$, and $r_1 \ne r_3^{-1}$. Then at least one of the products $r_1 r_2$, $r_2 r_3$, or $r_3 r_1$ is reduced without cancellation.
In our example, Condition (T) is satisfied. If s is a word in F, then $s > \lambda R$ means that there exists a word $r \in R$ such that $r = st$ and $|s| > \lambda |r|$. An important result from small cancellation theory that we will use later is the following theorem:

Theorem 8 (Greendlinger's Lemma). Let R satisfy C′(1/4) and T. Let w be a non-trivial, cyclically reduced word with $w \in \overline{R}$. Then either (1) $w \in R$, or some cyclically reduced conjugate $w^*$ of w contains one of the following: (2) two disjoint subwords, each $> \frac{3}{4} R$, or (4) four disjoint subwords, each $> \frac{1}{2} R$.

This theorem is Theorem 4.6 on p. 251 in [7]. Since in our example R satisfies both C′(1/4) and T, the conclusion of the theorem must hold in our case. For example, (2) means that we can find two disjoint subwords of w, $s_1$ and $s_2$, and two elements of R, $r_1$ and $r_2$, such that $r_i = s_i t_i$ and $|s_i| > (3/4)|r_i| = 9/2$. In particular, we can conclude that in this case $|w| \ge 10$. Similarly, in case (4), $|w| \ge 16$. One immediate application is that G does not collapse into the trivial group. Indeed, a single generator $f_i$ is a non-trivial cyclically reduced word of length 1, so it can satisfy none of the alternatives (1), (2), or (4); hence $f_i \ne e$ in G.

Let $L^2(G)$ be the space of functions on G that are square-summable with respect to the counting measure. G acts on $L^2(G)$ by left translations:
$$(L_g u)(h) = u(g^{-1} h).$$
Let A be the group algebra of G. The action of G on $L^2(G)$ can be extended to an action of A on $L^2(G)$. Define the expectation on this group algebra by the following rule:
$$E(h) = \langle L_h \delta_e, \delta_e \rangle,$$
where $\langle \cdot, \cdot \rangle$ denotes the scalar product in $L^2(G)$ and $\delta_e$ is the indicator function of the group identity. Alternatively, the expectation can be written as follows:
$$E(h) = a_e,$$
where $h = \sum_{g \in G} a_g g$ is the representation of a group algebra element h as a linear combination of elements $g \in G$. The expectation is clearly positive and finite by definition. It is also tracial because $g_1 g_2 = e$ if and only if $g_2 g_1 = e$. If $L_h = \sum_{g \in G} a_g L_g$ is the linear operator corresponding to the element of the group algebra $h = \sum_{g \in G} a_g g$, then its adjoint is $(L_h)^* = \sum_{g \in G} \overline{a_g} L_{g^{-1}}$, which corresponds to the element $h^* = \sum_{g \in G} \overline{a_g}\, g^{-1}$.
Define $X_i = L_{f_i} + L_{f_i^{-1}}$. These variables are self-adjoint and $E(X_i) = 0$. Also, we can compute $E(X_i^2) = 2$. Indeed, it is enough to note that $f_i^2 \ne e$ and $f_i^{-2} \ne e$, and this holds because insertion or deletion of an element of R changes the degree of each $f_i$ by a multiple of 3. Therefore, every word that equals e in G must have the degree of every $f_i$ equal to 0 modulo 3.
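The computation $E(X_i) = 0$, $E(X_i^2) = 2$ can be illustrated in the group algebra of the free group, where $f_i^2 \ne e$ holds automatically (in G the same fact follows from the mod-3 degree argument above). The dictionary-based representation below is a sketch for this illustration, not the paper's construction.

```python
from collections import defaultdict

# An element h = sum a_g g of the group algebra of the free group is a dict
# {reduced word: coefficient}; the expectation E(h) = a_e is the coefficient
# of the empty word.
def reduce_word(word):
    out = []
    for gen, exp in word:
        if out and out[-1] == (gen, -exp):
            out.pop()
        else:
            out.append((gen, exp))
    return tuple(out)

def mul(h1, h2):
    prod = defaultdict(float)
    for w1, a1 in h1.items():
        for w2, a2 in h2.items():
            prod[reduce_word(list(w1) + list(w2))] += a1 * a2
    return dict(prod)

E = lambda h: h.get((), 0.0)      # expectation = coefficient of e

# X_1 = f_1 + f_1^{-1} is self-adjoint with E(X_1) = 0 and E(X_1^2) = 2:
# in X_1^2 the words f_1^2 and f_1^{-2} miss e, and e appears twice.
X1 = {((1, 1),): 1.0, ((1, -1),): 1.0}
assert E(X1) == 0.0
assert E(mul(X1, X1)) == 2.0
print("E(X_1) = 0, E(X_1^2) = 2")
```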
Proposition 9. The sequence of variables {X i } is not free but satisfies Condition A.
Proof: The variables $X_k$ are not free. Consider $E(X_2 X_1 X_2 X_1 X_2 X_1)$. In the expansion of $X_2 X_1 X_2 X_1 X_2 X_1$, the terms $(f_2 f_1)^3$ and $(f_2^{-1} f_1^{-1})^3$ equal e, and all other terms are different from e. Indeed, the only terms that are not of this form but still have the degree of every $f_i$ equal to zero modulo 3 are $(f_2 f_1^{-1})^3$ and $(f_2^{-1} f_1)^3$, but they do not equal e, by an application of Greendlinger's lemma. Therefore, $E(X_2 X_1 X_2 X_1 X_2 X_1) = 2$. Since the variables are centered, freeness would force this expectation to vanish; this contradicts the freeness of the variables $X_2$ and $X_1$.
Let us now check that Condition A holds. For part 1, note that if $i_s \ne k$ for all s, then every term in the expansion of $X_k X_{i_1} \cdots X_{i_r}$ has degree ±1 in $f_k$; this degree is not 0 modulo 3, so no term equals e and the expectation vanishes. Consider now a product containing $X_k$ twice, and assume that neither $f_{i_1} \cdots f_{i_p}$ nor $f_{i_{p+1}} \cdots f_{i_q}$ can be reduced to e (otherwise we are in the situation covered by property A2). Then the claim is that $E(f_k^{\varepsilon_1} f_{i_1} \cdots f_{i_p} f_k^{\varepsilon_2} f_{i_{p+1}} \cdots f_{i_q}) = 0$. This is clear when $\varepsilon_1$ and $\varepsilon_2$ have the same sign, since in this case the degree of $f_k$ does not equal 0 modulo 3. A more difficult case is when $\varepsilon_1 = 1$ and $\varepsilon_2 = -1$. (The case with the opposite signs is similar.) However, in this case we can conclude that $f_k f_{i_1} \cdots f_{i_p} f_k^{-1} f_{i_{p+1}} \cdots f_{i_q} \ne e$ by an application of Greendlinger's lemma. Indeed, the only subwords of this word that are also subwords of an element of R have length 1 or 2. But such subwords fail to satisfy the requirement of either (2) or (4) in Greendlinger's lemma. Therefore, the word is not equal to e, and property A(3) is also satisfied. Thus Condition A is satisfied by the random variables $X_1, \dots, X_k, \dots$ in the algebra A, although these variables are not free. QED.

Proof of the Main Result

Let us first outline the strategy. To estimate $Ef(S_N) - Ef(\widetilde{S}_N)$, where $\widetilde{S}_N$ denotes the corresponding normalized sum of free semicircle variables, we substitute the elements in $S_N$ with free semicircle variables, one by one, and estimate the corresponding change in the expected value of $f(S_N)$. After that, we show that the total change, as all elements in the sum are substituted with semicircle random variables, is asymptotically small as $N \to \infty$. Finally, the tightness of the selected family of functions allows us to conclude that the distribution of $S_N$ must converge to the semicircle law as $N \to \infty$. The usual choice of functions f in the classical case are functions from $C_c^3(\mathbb{R})$, that is, functions with a continuous third derivative and compact support. In the non-commutative setting this family of functions is not appropriate, because the usual Taylor series formula is difficult to apply. Intuitively, it is difficult to develop f(X + h) in a power series of h if the variables X and h do not commute. Since the Taylor formula is crucial for estimating the change in $Ef(S_N)$, we will still use it, but we will restrict the family of functions to polynomials. To show that the family of polynomials is sufficiently rich for our purposes, we use the following Proposition:

Proposition 10. Suppose there is a unique distribution function F with the moments $m^{(r)}$, $r \ge 1$. Suppose that $\{F_N\}$ is a sequence of distribution functions, each of which has all its moments finite:
$$m_N^{(r)} = \int_{-\infty}^{\infty} x^r \, dF_N(x) < \infty.$$
Finally, suppose that for every $r \ge 1$:
$$m_N^{(r)} \to m^{(r)} \quad \text{as } N \to \infty.$$
Then $F_N \to F$ vaguely.

See Theorem 4.5.5 on p. 99 in [3] for a proof. Note that Chung uses the words "vague convergence" to denote the kind of convergence which is more often called the weak convergence of probability measures. Since the semicircle distribution has bounded support, it is determined by its moments (see the Corollary to Theorem II.12.7 in [8]); therefore the assumption of Proposition 10 is satisfied, and we only need to show that the moments of $S_N$ converge to the corresponding moments of the semicircle distribution.

Proof of Theorem 5: Define $\eta_i$ as a sequence of random variables that are freely independent among themselves and also freely independent from all $\xi_i$. Suppose also that the $\eta_i$ have semicircle distributions with $E\eta_i = 0$ and $E\eta_i^2 = \sigma_i^2$. We are going to accept the fact that the sum of free semicircle random variables is semicircle, and therefore the spectral distribution of $(\eta_1 + \cdots + \eta_N)/s_N$ converges in distribution to the semicircle law $\mu_{SC}$ with zero expectation and unit variance. Let us define $X_i = \xi_i/s_N$ and $Y_i = \eta_i/s_N$. We will proceed by proving that the moments of $X_1 + \cdots + X_N$ converge to the moments of $Y_1 + \cdots + Y_N$ and applying Proposition 10. Let
$$\Delta f = Ef(X_1 + \cdots + X_N) - Ef(Y_1 + \cdots + Y_N),$$
where $f(x) = x^m$. We want to show that this difference approaches zero as N grows. By assumption, $EY_i = EX_i = 0$ and $EY_i^2 = EX_i^2$. The first step is to write the difference $\Delta f$ as follows:
$$\Delta f = \sum_{k=1}^{N} \left[ Ef(Z_k + X_k) - Ef(Z_k + Y_k) \right],$$
where
$$Z_k = X_1 + \cdots + X_{k-1} + Y_{k+1} + \cdots + Y_N. \qquad (2)$$
We intend to estimate every difference in this sum; that is, we are interested in $Ef(Z_k + X_k) - Ef(Z_k + Y_k)$. We are going to apply the Taylor expansion formula, but first we define directional derivatives. Let $f'_{X_k}(Z_k)$ be the derivative of f at $Z_k$ in the direction $X_k$, defined as follows:
$$f'_{X_k}(Z_k) = \frac{d}{d\tau} f(Z_k + \tau X_k) \Big|_{\tau = 0}. \qquad (3)$$
The higher order directional derivatives can be defined recursively.
For example,
$$f''_{X_k}(Z_k) = \frac{d^2}{d\tau^2} f(Z_k + \tau X_k) \Big|_{\tau = 0}.$$
For polynomials, this definition is equivalent to the following one: for $f(x) = x^m$,
$$f'_X(Z) = \sum_{i=0}^{m-1} Z^i X Z^{m-1-i}, \qquad (4)$$
the sum of all words obtained from $Z^m$ by replacing one letter Z with X, and analogously for the higher derivatives.

Example 11. Operator directional derivatives of $f(x) = x^4$. Let us compute $f'_X(Z)$ and $f''_X(Z)$ for $f(x) = x^4$. Using the definitions, we get
$$f'_X(Z) = XZ^3 + ZXZ^2 + Z^2XZ + Z^3X$$
and
$$f''_X(Z) = 2\left( X^2Z^2 + XZXZ + XZ^2X + ZX^2Z + ZXZX + Z^2X^2 \right), \qquad (5)$$
and the expression for $f''_X(Z)$ does not depend on whether definition (3) or (4) was applied.
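The equivalence of the two definitions can be tested numerically. For $f(x) = x^4$, the directional derivative $f'_X(Z)$ is the sum of the four words with one Z replaced by X; the block-triangular identity used below is a standard device (not from the paper) that evaluates this derivative exactly for polynomials.

```python
import numpy as np

# Verify f'_X(Z) = X Z^3 + Z X Z^2 + Z^2 X Z + Z^3 X for f(x) = x^4 using
# f([[Z, X], [0, Z]]) = [[f(Z), f'_X(Z)], [0, f(Z)]], exact for polynomials.
rng = np.random.default_rng(1)
n = 4
Z = rng.standard_normal((n, n)); Z = (Z + Z.T) / 2   # self-adjoint
X = rng.standard_normal((n, n)); X = (X + X.T) / 2

mp = np.linalg.matrix_power
deriv = X @ mp(Z, 3) + Z @ X @ mp(Z, 2) + mp(Z, 2) @ X @ Z + mp(Z, 3) @ X

B = np.block([[Z, X], [np.zeros((n, n)), Z]])
top_right = mp(B, 4)[:n, n:]           # the (1,2) block of B^4
assert np.allclose(top_right, deriv)
print("f'_X(Z) for f = x^4 matches the one-letter insertion formula")
```

The same identity with $B^2$ recovers $XZ + ZX$, the directional derivative of $x^2$, which makes the pattern of definition (4) easy to see.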
The derivatives of f at $Z_k + \tau X_k$ in the direction $X_k$ are defined similarly; for example:
$$f'_{X_k}(Z_k + \tau X_k) = \frac{d}{ds} f(Z_k + \tau X_k + s X_k) \Big|_{s = 0}.$$
Next, let us write the Taylor formula for $f(Z_k + X_k)$:
$$Ef(Z_k + X_k) = Ef(Z_k) + Ef'_{X_k}(Z_k) + \frac{1}{2} Ef''_{X_k}(Z_k) + \frac{1}{2} \int_0^1 (1 - \tau)^2\, Ef'''_{X_k}(Z_k + \tau X_k)\, d\tau. \qquad (6)$$
Formula (6) can be obtained by integration by parts from the expression
$$f(Z_k + X_k) = f(Z_k) + \int_0^1 f'_{X_k}(Z_k + \tau X_k)\, d\tau.$$
For polynomials it is easy to write the explicit expressions for $f^{(r)}_{X_k}(Z_k + \tau X_k)$, although they can be quite cumbersome for polynomials of high degree. Very schematically, for the function $f(x) = x^m$, we can write
$$f'_{X_k}(Z_k) = \sum_{a+b=m-1} Z_k^a X_k Z_k^b \qquad (7)$$
and
$$f''_{X_k}(Z_k) = 2 \sum_{a+b+c=m-2} Z_k^a X_k Z_k^b X_k Z_k^c. \qquad (8)$$
Similar formulas hold for $f'_{Y_k}(Z_k)$ and $f''_{Y_k}(Z_k)$, with the change that $Y_k$ should be used instead of $X_k$. Using the assumption that the sequence $\{X_k\}$ satisfies Condition A and that the variables $Y_k$ are free, we can compute the expectations of these derivatives explicitly. Indeed, consider, for example, (8). We can use expression (2) for $Z_k$ and the free independence of the $Y_i$ to expand the expectation of (8) as
$$Ef''_{X_k}(Z_k) = \sum_\alpha c_\alpha\, E\!\left( \mathbf{X}_{\alpha,0}\, X_k\, \mathbf{X}_{\alpha,1}\, X_k\, \mathbf{X}_{\alpha,2} \right), \qquad (9)$$
where each $\mathbf{X}_{\alpha,j}$ is a monomial in the variables $X_1, \dots, X_{k-1}$ (i.e., $\mathbf{X}_{\alpha,j} = X_{i_1} \cdots X_{i_p}$ with $i_t \in \{1, \dots, k-1\}$). In other words, using the free independence of the $Y_i$ and the $X_i$, we expand the expectation of the polynomial $f''_{X_k}(Z_k)$ as a sum over polynomials in the joint moments of the variables $X_j$ and $Y_i$, where $j = 1, \dots, k$ and $i = k+1, \dots, N$. By freeness, we can reduce the resulting expression so that the moments in the reduced expression are either joint moments of the variables $X_j$ or joint moments of the variables $Y_i$, but never involve both $X_j$ and $Y_i$. Moreover, we can explicitly calculate the moments of the $Y_i$ (i.e., expectations of products of the $Y_i$) because they are mutually free. The resulting expansion is (9).

Let us try to make this process clearer by an example. Suppose that $f(x) = x^4$, N = 4, k = 2, and $Z_k = Z_2 = X_1 + Y_3 + Y_4$. We aim to compute $Ef''_{X_2}(Z_2)$. Using formula (5) and the traciality of E, we write:
$$Ef''_{X_2}(Z_2) = 8\, E(X_2^2 Z_2^2) + 4\, E(X_2 Z_2 X_2 Z_2).$$
Then, using the freeness of $Y_3$ and $Y_4$ and the facts that $E(Y_i) = 0$ and $E(Y_i^2) = \sigma_i^2$, we continue as follows:
$$Ef''_{X_2}(Z_2) = 8\, E(X_2^2 X_1^2) + 4\, E(X_2 X_1 X_2 X_1) + 8\left( \sigma_3^2 + \sigma_4^2 \right) E(X_2^2),$$
which is the expression we wanted to obtain.
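For polynomials, the Taylor expansion in directional derivatives terminates and is exact: $(1/r!) f^{(r)}_X(Z)$ is the sum of all words of length m with r letters X and $m - r$ letters Z. The following numerical sketch (matrices standing in for abstract random variables; an illustration, not part of the proof) confirms this for $f(x) = x^4$.

```python
import itertools
import numpy as np

# For f(x) = x^m, f(Z + X) = sum over r of (1/r!) f^(r)_X(Z), where
# (1/r!) f^(r)_X(Z) is the sum of all words with r X's among m factors.
rng = np.random.default_rng(2)
n, m = 4, 4
Z = rng.standard_normal((n, n)); Z = (Z + Z.T) / 2
X = rng.standard_normal((n, n)); X = (X + X.T) / 2

def taylor_term(r):
    """(1/r!) f^(r)_X(Z): sum over all placements of r X's among m factors."""
    term = np.zeros((n, n))
    for positions in itertools.combinations(range(m), r):
        word = np.eye(n)
        for i in range(m):
            word = word @ (X if i in positions else Z)
        term += word
    return term

total = sum(taylor_term(r) for r in range(m + 1))
assert np.allclose(total, np.linalg.matrix_power(Z + X, m))
print("the directional-derivative Taylor expansion is exact for x^4")
```

This is just the non-commutative multinomial expansion of $(Z + X)^m$ grouped by the number of X letters, which is why formula (6) has no error term beyond the third-derivative remainder once the first three groups are kept explicitly.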
It is important to note that the coefficients $c_\alpha$ do not depend on the variables $X_j$, but only on the $Y_j$, j > k, and on the locations which the $Y_j$ take in the expansion of $f''_{X_k}(Z_k)$. Therefore, we can substitute $Y_k$ for $X_k$ and develop a similar formula for $Ef''_{Y_k}(Z_k)$:
$$Ef''_{Y_k}(Z_k) = \sum_\alpha c_\alpha\, E\!\left( \mathbf{X}_{\alpha,0}\, Y_k\, \mathbf{X}_{\alpha,1}\, Y_k\, \mathbf{X}_{\alpha,2} \right). \qquad (10)$$
In the example above, we will have
$$Ef''_{Y_2}(Z_2) = 8\, E(Y_2^2 X_1^2) + 4\, E(Y_2 X_1 Y_2 X_1) + 8\left( \sigma_3^2 + \sigma_4^2 \right) E(Y_2^2).$$
Formula (10) is exactly the same as formula (9), except that all $X_k$ are substituted with $Y_k$. Finally, using Condition A, we obtain that for every k:
$$Ef'_{X_k}(Z_k) = Ef'_{Y_k}(Z_k) \quad \text{and} \quad Ef''_{X_k}(Z_k) = Ef''_{Y_k}(Z_k).$$
Next, note that if f is a polynomial, then $f'''_{X_k}(Z_k + \tau X_k)$ is the sum of a finite number of terms which are products of $Z_k + \tau X_k$ and $X_k$. The number of terms in this expansion is bounded by $C_1$, which depends only on the degree m of the polynomial f. A typical term in the expansion looks like
$$(Z_k + \tau X_k)^{m_1} X_k (Z_k + \tau X_k)^{m_2} X_k (Z_k + \tau X_k)^{m_3} X_k (Z_k + \tau X_k)^{m_4}.$$
In addition, if we expand the powers of $Z_k + \tau X_k$, we will get another expansion whose number of terms is bounded by $C_2$, where $C_2$ depends only on m. A typical element of this new expansion is
$$Z_k^{m_1} X_k^{n_1} \cdots Z_k^{m_r} X_k^{n_r}.$$
Every term in this expansion has a total degree of $X_k$ not less than 3 and, correspondingly, a total degree of $Z_k$ not more than m − 3. Our task is to show that, as $N \to \infty$, the expectations of these terms approach 0. We will use the following lemma to estimate each of the summands in the expansion of $f'''_{X_k}(Z_k + \tau X_k)$.
Lemma 12. Let X and Y be self-adjoint elements of A. Then
$$\left| E\left( Y^{m_1} X^{n_1} \cdots Y^{m_r} X^{n_r} \right) \right| \le \prod_{i=1}^{r} \left( E|Y|^{2r m_i} \right)^{1/(2r)} \left( E|X|^{2r n_i} \right)^{1/(2r)}.$$
This follows from the Hölder inequality for traces of non-commutative operators (see [4]), applied to the 2r factors of the product with all exponents equal to 2r. QED.

We apply Lemma 12 to estimate each of the summands in the expansion of $f'''_{X_k}(Z_k + \tau X_k)$. Consider a summand $E(Z_k^{m_1} X_k^{n_1} \cdots Z_k^{m_r} X_k^{n_r})$. Then, by Lemma 12, we have
$$\left| E\left( Z_k^{m_1} X_k^{n_1} \cdots Z_k^{m_r} X_k^{n_r} \right) \right| \le \prod_{i=1}^{r} \left( E|Z_k|^{2r m_i} \right)^{1/(2r)} \left( E|X_k|^{2r n_i} \right)^{1/(2r)}. \qquad (11)$$
The next step is to estimate the absolute moments of the variable $Z_k$.
Lemma 13. Let
$$Z = \frac{v_1 + \cdots + v_N}{\sqrt{N}},$$
where the $v_i$ are self-adjoint, satisfy Condition A(1), and $E|v_i|^k \le \mu_k$ for every i. Then, for every integer $r \ge 0$, $E|Z|^r$ is bounded by a constant that depends only on r and the sequence $\{\mu_k\}$, but not on N.

Proof: We will first treat the case of even r. In this case, $E(|Z|^r) = E(Z^r)$. Consider the expansion of $(v_1 + \cdots + v_N)^r$. Let us refer to the indices 1, ..., N as colors of the corresponding v. If a term in the expansion includes more than r/2 distinct colors, then one of the colors must be used by this term only once. Therefore, by the first part of Condition A, the expectation of such a term is 0. Let us estimate the number of terms in the expansion that include no more than r/2 distinct colors. Consider a fixed combination of ≤ r/2 colors. The number of terms that use colors only from this combination is ≤ $(r/2)^r$. Indeed, consider the product with r product terms: we can choose an element from the first product term in at most r/2 possible ways, an element from the second product term in at most r/2 possible ways, and so on. Therefore, the number of all possible choices is at most $(r/2)^r$. On the other hand, the number of possible different combinations of $k \le r/2$ colors is
$$\sum_{k \le r/2} \binom{N}{k} \le N^{r/2}.$$
Therefore, the total number of terms that use no more than r/2 colors is bounded from above by $(r/2)^r N^{r/2}$.

Now let us estimate the expectation of an individual term in the expansion. In other words, we want to estimate $E(v_{i_1}^{k_1} \cdots v_{i_s}^{k_s})$, where $k_t \ge 1$, $k_1 + \cdots + k_s = r$, and $i_t \ne i_{t+1}$. First, note that
$$|E(X)| \le E(|X|).$$
Indeed, for any operator X we can write the polar decomposition $X = U|X|$, where U is a partial isometry and $P = U^*U$ is a projection. Using the Cauchy-Schwarz inequality,
$$|E(U|X|)| = \left| E\left( U|X|^{1/2} \cdot |X|^{1/2} \right) \right| \le \sqrt{E\left( U|X|U^* \right)} \sqrt{E(|X|)} = \sqrt{E(|X|P)} \sqrt{E(|X|)},$$
where the last equality uses traciality. Note that from the positivity of the expectation functional it follows that $E(|X|P) \le E(|X|)$. Therefore, we can conclude that $|E(X)| \le E(|X|)$. Next, we use the Hölder inequality for traces of non-commutative operators (see [4]):
$$E\left| v_{i_1}^{k_1} \cdots v_{i_s}^{k_s} \right| \le \prod_{t=1}^{s} \left( E|v_{i_t}|^{k_t p_t} \right)^{1/p_t}, \quad \text{where } \sum_t 1/p_t = 1.$$
Without loss of generality we can assume that the bounds $\mu_k$ are increasing in k. Taking $p_t = r/k_t$ and using the facts that $s \le r$ and $k_t \le r$, we obtain the bound:
$$E\left| v_{i_1}^{k_1} \cdots v_{i_s}^{k_s} \right| \le \prod_{t=1}^{s} \mu_r^{k_t/r} = \mu_r.$$
Therefore,
$$E(Z^r) \le \frac{(r/2)^r N^{r/2} \mu_r}{N^{r/2}} = (r/2)^r \mu_r. \qquad (12)$$
Now consider the case of odd r.
In this case, we use the Lyapunov inequality to write:
$$E(|Z|^r) \le \left( E(|Z|^{r+1}) \right)^{r/(r+1)} \le \left( \left( \frac{r+1}{2} \right)^{r+1} \mu_{r+1} \right)^{r/(r+1)}. \qquad (13)$$
The important point is that the bounds in (12) and (13) do not depend on N. QED.

By definition, $Z_k = (\xi_1 + \cdots + \xi_{k-1} + \eta_{k+1} + \cdots + \eta_N)/s_N$; by assumption, the moments of the $\xi_i$ and $\eta_i$ are uniformly bounded and $s_N \sim \sqrt{N}$. Moreover, $\xi_1, \dots, \xi_{k-1}$ satisfy Condition A by assumption, and $\eta_{k+1}, \dots, \eta_N$ are free from each other and from $\xi_1, \dots, \xi_{k-1}$. Therefore, by Proposition 4, the sequence $\xi_1, \dots, \xi_{k-1}, \eta_{k+1}, \dots, \eta_N$ satisfies Condition A(1). Consequently, we can apply Lemma 13 to $Z_k$ and conclude that $E|Z_k|^r$ is bounded by a constant that depends only on r but does not depend on N. Using this fact and the bound $E|X_k|^{2r n_i} \le \mu_{2r n_i} s_N^{-2r n_i}$, we can continue the estimate in (11) and write:
$$\left| E\left( Z_k^{m_1} X_k^{n_1} \cdots Z_k^{m_r} X_k^{n_r} \right) \right| \le C_3\, s_N^{-(n_1 + \cdots + n_r)} \le C_4 N^{-3/2},$$
where the constant $C_4$ depends only on m; here we used the fact that $n_1 + \cdots + n_r \ge 3$.
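The combinatorial steps in the proof of Lemma 13 can be verified by brute force for small N and r (an illustration only): every term with more than r/2 distinct colors uses some color exactly once, and the number of remaining terms is within the stated bound.

```python
import itertools

# Brute-force check of the counting argument in Lemma 13 for N = 5, r = 4:
# terms in the expansion of (v_1 + ... + v_N)^r are index tuples in {1..N}^r.
N, r = 5, 4
few_colors = 0
for term in itertools.product(range(1, N + 1), repeat=r):
    colors = set(term)
    if len(colors) > r // 2:
        # some color occurs exactly once, so E(term) = 0 under Condition A(1)
        assert any(term.count(c) == 1 for c in colors)
    else:
        few_colors += 1

# the surviving terms are bounded by (r/2)^r * N^(r/2)
assert few_colors <= (r / 2) ** r * N ** (r / 2)
print(f"{few_colors} low-color terms, bound {(r / 2) ** r * N ** (r / 2):.0f}")
```

For these parameters there are 145 surviving terms against the bound 400; the point of the lemma is that the bound grows like $N^{r/2}$, which exactly cancels the normalization $N^{-r/2}$ of $Z^r$.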
In sum, we obtain the following Lemma:

Lemma 14. $\left| Ef'''_{X_k}(Z_k + \tau X_k) \right| \le C_5 N^{-3/2}$, where $C_5$ depends only on the degree of the polynomial f and the sequence of constants $\mu_k$.
A similar result holds for $Ef'''_{Y_k}(Z_k + \tau Y_k)$, and, combining the Taylor formula (6) with the equality of the expected first and second directional derivatives, we can conclude that
$$\left| Ef(Z_k + X_k) - Ef(Z_k + Y_k) \right| \le C_6 N^{-3/2}.$$
After we add these inequalities over all k = 1, ..., N, we get
$$|\Delta f| \le C_6 N^{-1/2}.$$
Clearly, this estimate approaches 0 as N grows. Applying Proposition 10, we conclude that the spectral distribution of $X_1 + \cdots + X_N$ converges weakly to the semicircle law, the limit of the spectral distributions of $Y_1 + \cdots + Y_N$. This finishes the proof of the main theorem.
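The telescoping-substitution mechanism can be seen in a purely classical analogue (not the non-commutative setting of the paper): replace Rademacher summands by standard Gaussians one at a time and track $E(S_N^4)$. Because first, second, and (by symmetry) third moments match, each swap changes the fourth moment by order $N^{-2}$, and the N swaps accumulate to a total change of $2/N$.

```python
import numpy as np

# Classical Lindeberg swap for f(x) = x^4 and S = (x_1 + ... + x_N)/sqrt(N),
# with zero-mean unit-variance summands.  Using independence,
# E(S^4) = (3*N*(N-1) + sum_i E(x_i^4)) / N^2, where the fourth moment is 1
# for a Rademacher variable and 3 for a standard Gaussian.
def fourth_moment(num_rademacher, num_gauss, N):
    fourth = 1 * num_rademacher + 3 * num_gauss
    return (3 * N * (N - 1) + fourth) / N ** 2

N = 100
values = [fourth_moment(N - k, k, N) for k in range(N + 1)]
steps = np.diff(values)

assert abs(values[0] - (3 - 2 / N)) < 1e-12   # all Rademacher: 3 - 2/N
assert values[-1] == 3.0                      # all Gaussian: exactly 3
assert np.allclose(steps, 2 / N ** 2)         # each swap contributes 2/N^2
print("total change 2/N accumulated from N swaps of size 2/N^2 each")
```

The proof above follows the same pattern: N per-swap bounds of size $O(N^{-3/2})$ sum to $O(N^{-1/2})$, which vanishes in the limit.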

Concluding Remarks
The key points of this proof are as follows: 1) We can substitute each random variable $X_i$ in the sum $S_N$ with a free random variable $Y_i$ in such a way that the expectations of the first and second derivatives of any polynomial with $S_N$ in the argument remain unchanged. The possibility of this substitution depends on Condition A being satisfied by the $X_i$. 2) We can estimate the change in the third derivative, as we substitute $Y_i$ for $X_i$, by using the first part of Condition A and several matrix inequalities valid for arbitrary collections of operators. Here Condition A is used only in the proof that the k-th moment of $(\xi_1 + \cdots + \xi_N)/N^{1/2}$ is bounded as $N \to \infty$.
It is interesting to speculate whether the ideas in this proof can be generalized to the case of the multivariate CLT.