The mean spectral measures of random Jacobi matrices related to Gaussian beta ensembles

An explicit formula for the mean spectral measure of a random Jacobi matrix is derived. The matrix may be regarded as the limit of Gaussian beta ensemble (G$\beta$E) matrices as the matrix size $N$ tends to infinity with the constraint that $N \beta $ is a constant.

Consider the random Jacobi matrix
$$J_\alpha = \begin{pmatrix} a_1 & b_1 & & \\ b_1 & a_2 & b_2 & \\ & b_2 & a_3 & \ddots \\ & & \ddots & \ddots \end{pmatrix},$$
where the diagonal $\{a_i\}$ is an i.i.d. (independent and identically distributed) sequence of standard Gaussian $N(0,1)$ random variables and the off diagonal $\{b_j\}$ is also an i.i.d. sequence of $\tilde\chi_{2\alpha}$-distributed random variables. Here $\tilde\chi_{2\alpha} = \chi_{2\alpha}/\sqrt{2}$, with $\chi_{2\alpha}$ denoting the chi distribution with $2\alpha$ degrees of freedom. As explained later, $J_\alpha$ is regarded as the limit of Gaussian beta ensembles (G$\beta$E for short) as the matrix size $N$ tends to infinity while the parameter $\beta$ also varies, with the constraint $N\beta = 2\alpha$.
Let us explain some terminology and introduce the main results of the paper. A (semi-infinite) Jacobi matrix is a symmetric tridiagonal matrix of the form
$$J = \begin{pmatrix} a_1 & b_1 & & \\ b_1 & a_2 & b_2 & \\ & b_2 & a_3 & \ddots \\ & & \ddots & \ddots \end{pmatrix}, \quad a_i \in \mathbb{R},\ b_j > 0.$$
For a Jacobi matrix $J$, there is a probability measure $\mu$ on $\mathbb{R}$ such that $\int_{\mathbb{R}} x^k \, d\mu = \langle J^k e_1, e_1 \rangle = J^k(1,1)$, $k = 0, 1, \dots$, where $e_1 = (1, 0, \dots)^T \in \ell^2$. Here $\langle u, v \rangle$ denotes the inner product of $u$ and $v$ in $\ell^2$, while $\langle \mu, f \rangle := \int f \, d\mu$ will be used to denote the integral of a function $f$ with respect to a measure $\mu$. The measure $\mu$ is unique if and only if $J$, as a symmetric operator defined on $D_0 = \{x = (x_1, x_2, \dots) : x_k = 0 \text{ for } k \text{ sufficiently large}\}$, is essentially self-adjoint, that is, $J$ has a unique self-adjoint extension in $\ell^2$. When the measure $\mu$ is unique, it is called the spectral measure of $J$, or more precisely, the spectral measure of $(J, e_1)$. It is known that the condition
$$\sum_{n=1}^{\infty} \frac{1}{b_n} = \infty$$
implies the essential self-adjointness of $J$ [6, Corollary 3.8.9].
For the random Jacobi matrix $J_\alpha$, the above condition holds almost surely because its off diagonal elements are positive i.i.d. random variables. Thus the spectral measure $\mu_\alpha$ is uniquely determined by the relations $\langle \mu_\alpha, x^k \rangle = J_\alpha^k(1,1)$, $k = 0, 1, \dots$. The mean spectral measure $\bar\mu_\alpha$ is then defined to be the probability measure satisfying
$$\langle \bar\mu_\alpha, f \rangle = \mathbb{E}[\langle \mu_\alpha, f \rangle]$$
for all bounded continuous functions $f$ on $\mathbb{R}$. It then follows that
$$\langle \bar\mu_\alpha, x^k \rangle = \mathbb{E}[\langle \mu_\alpha, x^k \rangle] = \mathbb{E}[J_\alpha^k(1,1)], \quad k = 0, 1, \dots,$$
provided that the right hand side of the above equation is finite for all $k$.
The purpose of this paper is to identify the mean spectral measure $\bar\mu_\alpha$. Our main results are as follows.
Theorem 1. (i) The mean spectral measure $\bar\mu_\alpha$ coincides with the spectral measure of the non-random Jacobi matrix $A_\alpha$ with zero diagonal and off diagonal entries $\{\sqrt{\alpha + j}\}_{j=1}^{\infty}$.
(ii) The measure $\bar\mu_\alpha$ has an explicit density function, derived in the final section of the paper.

Let us sketch the main ideas of the proof of the above theorem. To show the first statement, the key idea is to regard the Jacobi matrix $J_\alpha$ as the limit of G$\beta$E as the matrix size $N$ tends to infinity with $N\beta = 2\alpha$. More specifically, let $T_N(\beta)$ be a finite random Jacobi matrix whose components are (up to the symmetry constraints) independent, with Gaussian diagonal and chi-distributed off diagonal entries as described in the next section. It is well known in random matrix theory that the eigenvalues of $T_N(\beta)$ are distributed as G$\beta$E. Moreover, by letting $N \to \infty$ with $\beta = 2\alpha/N$, the matrices $T_N(\beta)$ converge, in a suitable sense, to $J_\alpha$. That crucial observation, together with a result on moments of G$\beta$E ([2, Theorem 2.8]), makes it possible to show that $\bar\mu_\alpha$ coincides with the spectral measure of $A_\alpha$. The next step is to establish the following self-convolutive recurrence for even moments of $\bar\mu_\alpha$,
$$u_n(\alpha) = (2n - 1)\, u_{n-1}(\alpha) + \alpha \sum_{j=0}^{n-1} u_j(\alpha)\, u_{n-1-j}(\alpha), \quad u_0(\alpha) = 1,$$
where $u_n(\alpha)$ is the $2n$th moment of $\bar\mu_\alpha$. Note that its odd moments all vanish because the spectral measure of $A_\alpha$ is symmetric. Finally, the explicit formula for $\bar\mu_\alpha$ is derived by using the method in [4].
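The two descriptions of the even moments can be cross-checked numerically. The sketch below is our own check, under two assumptions read off from Section 3.2: a rise from level $k$ in a Dyck path carries weight $\alpha + k + 1$, and the recurrence takes the candidate self-convolutive form $u_n = (2n-1)u_{n-1} + \alpha \sum_{j=0}^{n-1} u_j u_{n-1-j}$. The function names are ours.

```python
def u_moment(n, alpha):
    """2n-th moment as a total weight of Dyck paths of length 2n:
    a rise from level k to k+1 carries weight (alpha + k + 1),
    fall steps carry weight 1."""
    def walk(steps, level):
        if steps == 0:
            return 1 if level == 0 else 0
        total = (alpha + level + 1) * walk(steps - 1, level + 1)
        if level > 0:
            total += walk(steps - 1, level - 1)
        return total
    return walk(2 * n, 0)

def u_recurrence(n_max, alpha):
    """Candidate self-convolutive recurrence
    u_n = (2n - 1) u_{n-1} + alpha * sum_{j=0}^{n-1} u_j u_{n-1-j}."""
    u = [1]
    for n in range(1, n_max + 1):
        u.append((2 * n - 1) * u[n - 1]
                 + alpha * sum(u[j] * u[n - 1 - j] for j in range(n)))
    return u
```

For $\alpha = 0$ both give the Gaussian moments $1, 1, 3, 15, \dots$, and for $\alpha = 1$ they reproduce $1, 2, 10, 74, 706$, matching (up to an index shift) the sequence A000698 mentioned in Remark 5.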
The paper is organized as follows. In the next section, we mention some known results on G$\beta$E needed in this paper. In Section 3, we introduce the matrix model and, step by step, prove the main theorem.

A result on Gaussian β-ensembles
The Jacobi matrix model for G$\beta$E, a finite random Jacobi matrix, was discovered by Dumitriu and Edelman [1]. First of all, let us mention some preliminary facts about finite Jacobi matrices. Assume that $J$ is a finite Jacobi matrix of order $N$ (with the requirement that the off diagonal elements are positive). Then the matrix $J$ has exactly $N$ distinct eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_N$. Let $v_1, v_2, \dots, v_N$ be the corresponding eigenvectors, chosen to form an orthonormal basis of $\mathbb{R}^N$. Then the spectral measure $\mu$, which is well defined by $\langle \mu, x^k \rangle = J^k(1,1)$, $k = 0, 1, \dots$, can be expressed as
$$\mu = \sum_{i=1}^{N} q_i \delta_{\lambda_i}, \qquad q_i = v_i(1)^2,$$
where $\delta_\lambda$ denotes the Dirac measure. It is known that a finite Jacobi matrix of order $N$ is in one-to-one correspondence with a probability measure supported on $N$ points; in other words, a set of Jacobi matrix parameters $\{a_i\}_{i=1}^{N}$, $\{b_j\}_{j=1}^{N-1}$ is in one-to-one correspondence with the spectral data $\{\lambda_i\}_{i=1}^{N}$, $\{q_j\}_{j=1}^{N}$. The Jacobi matrix model for G$\beta$E is defined as follows. Let $\{a_i\}_{i=1}^{N}$ be an i.i.d. sequence of standard Gaussian $N(0,1)$ random variables and let $\{b_j\}_{j=1}^{N-1}$ be a sequence of independent random variables having $\tilde\chi$ distributions with parameters $(N-1)\beta, (N-2)\beta, \dots, \beta$, respectively, which is independent of $\{a_i\}_{i=1}^{N}$. Here $\tilde\chi_k$, for $k > 0$, denotes the distribution with the probability density function
$$\frac{2}{\Gamma(k/2)}\, x^{k-1} e^{-x^2}, \quad x > 0,$$
or the square root of the gamma distribution with parameter $(k/2, 1)$. We form a random Jacobi matrix $T_N(\beta)$ from $\{a_i\}_{i=1}^{N}$ and $\{b_j\}_{j=1}^{N-1}$ in the tridiagonal way above. Then the eigenvalues $\{\lambda_i\}_{i=1}^{N}$ and the weights $\{q_j\}_{j=1}^{N}$ are independent, with the distribution of the former given by
$$p_N(\lambda_1, \dots, \lambda_N) = c_{N,\beta} \prod_{i<j} |\lambda_i - \lambda_j|^{\beta} \exp\Big( -\frac{1}{2} \sum_{i=1}^{N} \lambda_i^2 \Big),$$
and the distribution of the latter given by the Dirichlet density
$$\frac{\Gamma(N\beta/2)}{\Gamma(\beta/2)^N} \prod_{j=1}^{N} q_j^{\beta/2 - 1}.$$
It is also known that $q = (q_1, \dots, q_N)$ is distributed as a vector $(\tilde\chi_\beta, \dots, \tilde\chi_\beta)$ with i.i.d. components, normalized to unit length. The trace of $T_N(\beta)^n$ and the entry $T_N(\beta)^n(1,1)$ can be expressed in terms of the spectral data as
$$\operatorname{tr}(T_N(\beta)^n) = \sum_{i=1}^{N} \lambda_i^n, \qquad T_N(\beta)^n(1,1) = \sum_{i=1}^{N} q_i \lambda_i^n.$$
Consequently, since the eigenvalues and the weights are independent and $\mathbb{E}[q_i] = 1/N$,
$$\mathbb{E}[T_N(\beta)^n(1,1)] = \frac{1}{N} \mathbb{E}[\operatorname{tr}(T_N(\beta)^n)].$$
In the rest of this section, for convenience, we use the quantity
$$m_p(N, \beta) = \frac{1}{N} \mathbb{E}[\operatorname{tr}(T_N(\beta)^{2p})] = \mathbb{E}[T_N(\beta)^{2p}(1,1)],$$
which is a polynomial of degree $p$ in $N$, and thus $m_p(N, \beta)$ is defined for all $N \in \mathbb{R}$. The result on the trace of $T_N(\beta)^n$ can then be rewritten for $m_p(N, \beta)$ as follows.
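As a quick sanity check on the tridiagonal model just described, the following Monte Carlo sketch (our own, with hypothetical helper names) samples $T_N(\beta)$ with diagonal $N(0,1)$ and off diagonal $\chi_{(N-j)\beta}/\sqrt{2}$, and compares the empirical mean of $\operatorname{tr}(T_N(\beta)^2)$ with the value $N + \beta N(N-1)/2$ that follows from $\mathbb{E}[a_i^2] = 1$ and $\mathbb{E}[b_j^2] = (N-j)\beta/2$.

```python
import numpy as np

def sample_T(N, beta, rng):
    """One draw of the tridiagonal model: diagonal ~ N(0,1) i.i.d.,
    off diagonal b_j ~ chi with parameter (N-j)*beta, divided by sqrt(2)."""
    a = rng.standard_normal(N)
    dfs = beta * np.arange(N - 1, 0, -1)   # (N-1)beta, (N-2)beta, ..., beta
    b = np.sqrt(rng.chisquare(dfs) / 2.0)
    return np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

rng = np.random.default_rng(0)
N, beta = 10, 2.0
vals = [np.trace(np.linalg.matrix_power(sample_T(N, beta, rng), 2))
        for _ in range(2000)]
mean_tr2 = float(np.mean(vals))
# Prediction: E[tr T^2] = sum E[a_i^2] + 2 sum E[b_j^2]
#           = N + beta * N * (N-1) / 2  (= 100 for N = 10, beta = 2).
```

With 2000 samples the estimator's standard deviation is well below 1, so agreement to within a few units is expected.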
Theorem 2 (cf. [2, Theorem 2.8] and [7, Theorem 2]) gives an explicit expression for $m_p(N, \beta)$. Observe that $\beta^{-p} m_p(N, \beta)$ is the expectation of the $2p$th moment of the spectral measure of the rescaled Jacobi matrix $T_N(\beta)/\sqrt{\beta}$. The convergences
$$\frac{a_i}{\sqrt{\beta}} \to 0, \qquad \frac{b_j}{\sqrt{\beta}} \to \sqrt{\frac{N-j}{2}} \quad (\beta \to \infty)$$
also hold almost surely. Therefore, as $\beta \to \infty$,
$$\frac{1}{\sqrt{\beta}}\, T_N(\beta) \to H_N,$$
where $H_N$ is the non-random Jacobi matrix of order $N$ with zero diagonal and off diagonal entries $\sqrt{(N-j)/2}$, $j = 1, \dots, N-1$. Here the convergence of matrices means the convergence (in $L^q$) of their elements. Let $h_p(N) = H_N^{2p}(1,1)$ for $N > p$. Then $h_p(N)$ is a polynomial of degree $p$ in $N$, so that $h_p(N)$ is defined for all $N \in \mathbb{R}$. The above convergence of matrices implies that, for fixed $p$ and fixed $N$,
$$\beta^{-p} m_p(N, \beta) \to h_p(N) \quad (\beta \to \infty). \tag{1}$$
Let $A_\alpha$ be the Jacobi matrix with zero diagonal and off diagonal entries $\{\sqrt{\alpha + j}\}_{j=1}^{\infty}$, and let $u_p(\alpha) = A_\alpha^{2p}(1,1)$. Then $u_p(\alpha)$ is also a polynomial of degree $p$ in $\alpha$. In addition, it is easy to see that
$$u_p(\alpha) = (-2)^p\, h_p(-\alpha). \tag{2}$$
As a direct consequence of Theorem 2 and relations (1) and (2), we get the following result (Proposition 3): $m_p(N, \beta) \to u_p(\alpha)$ as $N \to \infty$ with $N\beta = 2\alpha$, for every $p = 0, 1, \dots$.
3 Random Jacobi matrices related to Gaussian β ensembles

3.1 A matrix model and proof of Theorem 1(i)

Consider the following random Jacobi matrix
$$J_\alpha = \begin{pmatrix} a_1 & b_1 & & \\ b_1 & a_2 & b_2 & \\ & b_2 & a_3 & \ddots \\ & & \ddots & \ddots \end{pmatrix},$$
where all components are independent random variables. More precisely, the diagonal $\{a_i\}_{i=1}^{\infty}$ is an i.i.d. sequence of standard Gaussian $N(0,1)$ random variables and the off diagonal $\{b_j\}_{j=1}^{\infty}$ is another i.i.d. sequence of $\tilde\chi_{2\alpha}$ random variables. Then the spectral measure $\mu_\alpha$ of $J_\alpha$ exists and is unique almost surely because
$$\sum_{n=1}^{\infty} \frac{1}{b_n} = \infty \quad \text{almost surely}.$$
The mean spectral measure $\bar\mu_\alpha$ is defined to be the probability measure satisfying $\langle \bar\mu_\alpha, f \rangle = \mathbb{E}[\langle \mu_\alpha, f \rangle]$ for all bounded continuous functions $f$ on $\mathbb{R}$. Theorem 1(i) states that the measure $\bar\mu_\alpha$ coincides with the spectral measure of $(A_\alpha, e_1)$.
Proof of Theorem 1(i). Note that the spectral measure of $A_\alpha$ is the unique probability measure $\mu$ satisfying $\langle \mu, x^k \rangle = A_\alpha^k(1,1) < \infty$ for all $k = 0, 1, \dots$. Therefore, our task is now to show that, for all $k = 0, 1, \dots$,
$$\langle \bar\mu_\alpha, x^k \rangle = A_\alpha^k(1,1). \tag{3}$$
We consider the case of even $k$ first. For any fixed $j$, all moments of the $\tilde\chi_{(N-j)\beta}$ distribution converge to those of the $\tilde\chi_{2\alpha}$ distribution as $N \to \infty$ with $\beta = 2\alpha/N$. Thus for fixed $p$, as $N \to \infty$ with $\beta = 2\alpha/N$,
$$\mathbb{E}[T_N(\beta)^{2p}(1,1)] \to \mathbb{E}[J_\alpha^{2p}(1,1)].$$
Consequently, for even $k = 2p$,
$$\langle \bar\mu_\alpha, x^{2p} \rangle = \mathbb{E}[J_\alpha^{2p}(1,1)] = u_p(\alpha) = A_\alpha^{2p}(1,1),$$
by taking into account Proposition 3.
For odd $k$, both sides of equation (3) are zero. Indeed, $A_\alpha^k(1,1) = 0$ when $k$ is odd because the diagonal of $A_\alpha$ is zero. Also all odd moments of $\bar\mu_\alpha$ vanish, $\langle \bar\mu_\alpha, x^{2p+1} \rangle = \mathbb{E}[\langle \mu_\alpha, x^{2p+1} \rangle] = 0$, because the expectations of odd moments of the diagonal elements of $J_\alpha$ are zero.
The proof is completed.
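The moment identity just proved can be spot-checked by direct simulation. The sketch below is our own check; it assumes the readings $A_\alpha^2(1,1) = \alpha + 1$ and $A_\alpha^4(1,1) = (\alpha+1)(2\alpha+3)$, and uses the fact that only the top-left $3 \times 3$ corner of $J_\alpha$ enters $J_\alpha^4(1,1)$.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, n = 1.0, 500_000

# Corner entries of J_alpha: a_i ~ N(0,1), b_j ~ chi_{2 alpha} / sqrt(2).
a1 = rng.standard_normal(n)
a2 = rng.standard_normal(n)
b1 = np.sqrt(rng.chisquare(2 * alpha, n) / 2.0)
b2 = np.sqrt(rng.chisquare(2 * alpha, n) / 2.0)

# J^2(1,1) = a1^2 + b1^2, and expanding J^4(1,1) over the 3x3 corner:
# J^4(1,1) = (a1^2+b1^2)^2 + b1^2 (a1+a2)^2 + b1^2 b2^2.
m2 = float(np.mean(a1**2 + b1**2))
m4 = float(np.mean((a1**2 + b1**2)**2 + b1**2 * (a1 + a2)**2 + b1**2 * b2**2))
# For alpha = 1 the predicted values are alpha + 1 = 2
# and (alpha+1)(2 alpha+3) = 10.
```

With half a million samples both estimates agree with the predictions to within a few percent.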

Moments of the spectral measure of A α
Recall that $u_n(\alpha) = A_\alpha^{2n}(1,1)$, $n = 0, 1, \dots$.

Proposition 4. (i) $u_n(\alpha)$ is a polynomial of degree $n$ in $\alpha$ and satisfies the relations
$$u_0(\alpha) = 1, \qquad u_n(\alpha) = (\alpha + 1) \sum_{i=1}^{n} u_{i-1}(\alpha+1)\, u_{n-i}(\alpha), \quad n \ge 1. \tag{4}$$
(ii) $\{u_n(\alpha)\}_{n=0}^{\infty}$ also satisfies the relations
$$u_n(\alpha) = (2n - 1)\, u_{n-1}(\alpha) + \alpha \sum_{j=0}^{n-1} u_j(\alpha)\, u_{n-1-j}(\alpha), \quad n \ge 1. \tag{5}$$

Remark 5. The sequences $\{u_n(\alpha)\}_{n \ge 0}$, for $\alpha = 1$ and $\alpha = 2$, are the sequences A000698 and A167872 in the On-Line Encyclopedia of Integer Sequences [5], respectively. Relations (4) and (5), as well as many interesting properties of those sequences, can be found in the above reference. In the proof below, we give another interpretation of $u_n(\alpha)$ as the total weight of Dyck paths of length $2n$.
Proof. In this proof, for convenience, let the indices of the matrix $A_\alpha$ start from 0. Since the diagonal of $A_\alpha$ is zero, it follows that
$$u_n(\alpha) = A_\alpha^{2n}(0,0) = \sum_{(i_0, \dots, i_{2n}) \in D_{2n}} A_\alpha(i_0, i_1)\, A_\alpha(i_1, i_2) \cdots A_\alpha(i_{2n-1}, i_{2n}),$$
where $D_{2n}$ denotes the set of indices $\{i_0, i_1, \dots, i_{2n}\}$ satisfying
$$i_0 = i_{2n} = 0, \qquad |i_k - i_{k-1}| = 1, \qquad i_k \ge 0.$$
Each element in $D_{2n}$ corresponds to a path of length $2n$, consisting of rise steps (rises) and fall steps (falls), which starts at $(0,0)$, ends at $(2n, 0)$, and stays above the $x$-axis; such a path is called a Dyck path. We also use $D_{2n}$ to denote the set of all Dyck paths of length $2n$. A Dyck path $p$ is assigned a weight $w(p)$ as follows: each rise step from level $k$ to $k+1$ carries the weight $(\alpha + k + 1)$, and $w(p)$ is the product of all those weights. Then
$$u_n(\alpha) = \sum_{p \in D_{2n}} w(p).$$
Let $D_{2n}^*$ be the set of all Dyck paths of length $2n$ which do not meet the $x$-axis except at the starting and the ending points, and let
$$v_n(\alpha) = \sum_{p \in D_{2n}^*} w(p).$$
A path in $D_{2n}^*$ begins with a rise of weight $(\alpha + 1)$ and, between its first and last steps, is a Dyck path of length $2(n-1)$ shifted up one level, whose rise weights are those of the parameter $\alpha + 1$; hence $v_n(\alpha) = (\alpha+1)\, u_{n-1}(\alpha+1)$. Moreover, let $2i$ be the first time that a Dyck path $p$ meets the $x$-axis. Then either $i = n$ or the Dyck path $p$ is the concatenation of a Dyck path in $D_{2i}^*$, $(1 \le i < n)$, and another Dyck path of length $2(n-i)$. Thus,
$$u_n(\alpha) = \sum_{i=1}^{n} v_i(\alpha)\, u_{n-i}(\alpha) = (\alpha + 1) \sum_{i=1}^{n} u_{i-1}(\alpha+1)\, u_{n-i}(\alpha).$$
The proof of (i) is complete. We will prove the second statement after the next lemma.
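The first-return decomposition can be checked by brute-force enumeration. The sketch below (our own, with a hypothetical helper name) sums weighted Dyck paths directly, with `strict=True` restricting to paths in $D^*$, and verifies both $u_n = \sum_{i=1}^n v_i u_{n-i}$ and $v_n = (\alpha+1) u_{n-1}(\alpha+1)$ for small $n$.

```python
def weighted_paths(length, alpha, strict=False):
    """Total weight of Dyck paths of the given length, where a rise from
    level k carries weight (alpha + k + 1) and falls carry weight 1.
    With strict=True, only paths that return to the x-axis at the final
    step are counted (the set D* of the proof)."""
    def walk(steps, level):
        if steps == 0:
            return 1 if level == 0 else 0
        total = (alpha + level + 1) * walk(steps - 1, level + 1)
        # A fall to level 0 is allowed for a strict path only on the last step.
        if level > 1 or (level == 1 and (not strict or steps == 1)):
            total += walk(steps - 1, level - 1)
        return total
    return walk(length, 0)
```

For instance, with $\alpha = 1$ the irreducible weights are $v_1 = 2$, $v_2 = 6$, $v_3 = 42$, and $u_3 = v_1 u_2 + v_2 u_1 + v_3 u_0 = 20 + 12 + 42 = 74$.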
Lemma 6. Let $\alpha \ge 0$ be fixed. Let $\{a_n\}$ be a sequence defined recursively by
$$a_0 = 1, \qquad a_n = (2n - 1)\, a_{n-1} + \alpha \sum_{j=0}^{n-1} a_j a_{n-1-j}, \quad n \ge 1. \tag{6}$$
Let $\{b_n\}$ be a sequence defined by the following relations:
$$b_0 = 1, \qquad a_n = (\alpha + 1) \sum_{i=1}^{n} b_{i-1} a_{n-i}, \quad n \ge 1. \tag{7}$$
Then $\{b_n\}$ satisfies the analogous recursive relation with $\alpha$ replaced by $\alpha + 1$,
$$b_n = (2n - 1)\, b_{n-1} + (\alpha + 1) \sum_{j=0}^{n-1} b_j b_{n-1-j}, \quad n \ge 1. \tag{8}$$

Proof. Consider the field of formal Laurent series over $\mathbb{R}$, denoted by $\mathbb{R}((X))$. The addition is defined as usual, and the multiplication is the well defined Cauchy product. Write $f(X) = \sum_{n \ge 0} a_n X^n$ and $g(X) = \sum_{n \ge 0} b_n X^n$. It is straightforward to show that the recursive relation (6) is equivalent to the equation
$$f = 1 + X f + 2 X^2 f' + \alpha X f^2.$$
In addition, the relation (7) leads to
$$g = \frac{f - 1}{(\alpha + 1) X f}.$$
Finally, we can check that $g(X)$ satisfies
$$g = 1 + X g + 2 X^2 g' + (\alpha + 1) X g^2,$$
which is equivalent to the recursive relation (8). The proof is complete.
Proof of Proposition 4(ii). When $\alpha = 0$, it is well known that $u_n(0)$ is the $2n$th moment of the standard Gaussian distribution, given by $u_n(0) = (2n-1)!! = 1 \cdot 3 \cdots (2n-1)$. Consequently, the conditions in Lemma 6 are satisfied for $a_n = u_n(0)$, $b_n = u_n(1)$ and $\alpha = 0$. It follows that the recursive relation (5) holds for $\alpha = 1$. Continuing in this way, it follows that the recursive relation (5) holds for any $\alpha \in \mathbb{N}$. We conclude that it holds for all $\alpha$ because $u_n(\alpha)$ is a polynomial of degree $n$ in $\alpha$. The proof is complete.
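The induction step can be checked numerically under the self-convolutive reading of relations (6)–(8) used here (an assumption about the elided displays). The sketch below, with hypothetical helper names, generates $a_n$ from the recurrence at parameter $\alpha$, solves relation (7) for $b_n$, and confirms that $\{b_n\}$ is the recurrence sequence at parameter $\alpha + 1$; exact rational arithmetic avoids rounding issues.

```python
from fractions import Fraction

def self_conv(alpha, n_max):
    """Sequence from the self-convolutive recurrence
    a_n = (2n - 1) a_{n-1} + alpha * sum_j a_j a_{n-1-j}, a_0 = 1."""
    a = [Fraction(1)]
    for n in range(1, n_max + 1):
        a.append((2 * n - 1) * a[n - 1]
                 + alpha * sum(a[j] * a[n - 1 - j] for j in range(n)))
    return a

def transfer(alpha, n_max):
    """Solve a_n = (alpha + 1) * sum_{i=1}^n b_{i-1} a_{n-i}
    for b_0, b_1, ..., b_{n_max}."""
    a = self_conv(alpha, n_max + 1)
    b = []
    for n in range(1, n_max + 2):
        b.append(a[n] / (alpha + 1)
                 - sum(b[i - 1] * a[n - i] for i in range(1, n)))
    return b
```

At $\alpha = 0$ this reproduces the step from the Gaussian moments $(2n-1)!!$ to the sequence $1, 2, 10, 74, \dots$, as in the proof above.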

Explicit formula for the spectral measure of A α , proof of Theorem 1(ii)
In this section, by using the method of Martin and Kearney [4], we derive the explicit formula for the mean spectral measure $\bar\mu_\alpha$ from the relation (5). Recall that $u_n(\alpha) = \langle \bar\mu_\alpha, x^{2n} \rangle$ and that $\bar\mu_\alpha$ is a symmetric probability measure.
We are now in a position to simplify the explicit formula for $\bar\mu_\alpha$. Let $V_R(y)$ and $V_I(y)$ denote, respectively, the real and imaginary parts arising in that formula. Here, in the above expressions, we have used a standard identity for the Gamma function $\Gamma$. Then $\bar\mu_\alpha(y)$ can be written in terms of $V_R(y)$ and $V_I(y)$. Next, we will show that $V_R(y)$ and $V_I(y)$ are the Fourier cosine transform and the Fourier sine transform of a function $f_\alpha(t)$, respectively. Let us now give the definitions of these transforms. The Fourier transform of a function $f : \mathbb{R} \to \mathbb{C}$ is defined to be
$$\hat f(y) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(t)\, e^{iyt} \, dt,$$
and the Fourier cosine transform and the Fourier sine transform are defined to be
$$F_c(f)(y) = \sqrt{\frac{2}{\pi}} \int_0^{\infty} f(t) \cos(yt) \, dt, \qquad F_s(f)(y) = \sqrt{\frac{2}{\pi}} \int_0^{\infty} f(t) \sin(yt) \, dt \quad (y > 0),$$
respectively. For a function $f$ vanishing on $(-\infty, 0)$, those transforms are related as follows:
$$F_c(f)(y) + i F_s(f)(y) = \sqrt{\frac{2}{\pi}} \int_0^{\infty} f(t)\, e^{iyt} \, dt = 2 \hat f(y).$$
For $\alpha > 0$, the relevant cosine integral is evaluated by Formula 3.952(8) in [3]. Then, by some simple calculations, we arrive at the relation
$$V_R(y) = F_c(f_\alpha(t))(y), \quad y \ge 0.$$
Similarly, $V_I(y) = F_s(f_\alpha(t))(y)$, $y \ge 0$, by using Formula 3.952(7) in [3].
By the definitions, $V_R(y)$ is an even function and $V_I(y)$ is an odd function. Thus the expression for the density extends to all $y \in \mathbb{R}$.

Remark 7. When $\alpha$ is a positive integer, we can give even more explicit expressions for $V_R(y)$ and $V_I(y)$.

$$V_R(y) + i V_I(y) = \sqrt{\frac{2}{\pi}} \int_0^{\infty} f_\alpha(t)\, (\cos(yt) + i \sin(yt)) \, dt = \sqrt{\frac{2}{\pi}} \int_0^{\infty} f_\alpha(t)\, e^{iyt} \, dt =: \hat f_\alpha(y).$$
Consequently, $V_R(y)^2 + V_I(y)^2 = |\hat f_\alpha(y)|^2$, which completes the proof of Theorem 1(ii). We plot the graph of the density $\bar\mu_\alpha(y)$ for several values of $\alpha$ in the figure below, produced with Mathematica. It follows from the Jacobi matrix form that the spectral measure of $\frac{1}{\sqrt{\alpha}} A_\alpha$ converges weakly to the semicircle law as $\alpha$ tends to infinity. Note that the semicircle law, the probability measure supported on $[-2, 2]$ with the density
$$\frac{1}{2\pi} \sqrt{4 - x^2} \quad (-2 \le x \le 2),$$
is the spectral measure of the free Jacobi matrix with zero diagonal and all off diagonal entries equal to $1$.
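The semicircle limit can be sanity-checked on moments: the $2n$th moment of the spectral measure of $\alpha^{-1/2} A_\alpha$ is $u_n(\alpha)/\alpha^n$, which should approach the Catalan number $C_n = \frac{1}{n+1}\binom{2n}{n}$, the $2n$th moment of the semicircle law on $[-2,2]$. The sketch below (our own) computes $u_n(\alpha)$ from the self-convolutive recurrence in the form used throughout Section 3.2 (an assumption about the elided display) and compares at a large value of $\alpha$.

```python
from math import comb

def moments(alpha, n_max):
    """u_n(alpha) via the recurrence
    u_n = (2n - 1) u_{n-1} + alpha * sum_j u_j u_{n-1-j}, u_0 = 1."""
    u = [1]
    for n in range(1, n_max + 1):
        u.append((2 * n - 1) * u[n - 1]
                 + alpha * sum(u[j] * u[n - 1 - j] for j in range(n)))
    return u

alpha = 10**6
u = moments(alpha, 4)
catalan = [comb(2 * n, n) // (n + 1) for n in range(5)]   # 1, 1, 2, 5, 14
ratios = [u[n] / alpha**n for n in range(5)]
# ratios[n] -> catalan[n] as alpha -> infinity, with error O(1/alpha).
```

Since $u_n(\alpha)$ has degree $n$ in $\alpha$ with leading coefficient $C_n$ (one factor of $\alpha$ per rise step), the ratios agree with the Catalan numbers to within about $10^{-4}$ at $\alpha = 10^6$.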