On the Second-Order Correlation Function of the Characteristic Polynomial of a Real Symmetric Wigner Matrix

We consider the asymptotic behaviour of the second-order correlation function of the characteristic polynomial of a real symmetric random matrix. Our main result is that the existing result for a random matrix from the Gaussian Orthogonal Ensemble essentially continues to hold for a general real symmetric Wigner matrix.


Introduction
In recent years, the characteristic polynomials of random matrices have attracted considerable interest. This interest was sparked, at least in part, by the discovery by Keating and Snaith [KS] that the moments of a random matrix from the Circular Unitary Ensemble (CUE) seem to be related to the moments of the Riemann zeta-function along the critical line. Following this observation, several authors have investigated the moments and correlation functions of the characteristic polynomial also for other random matrix ensembles (see e.g. Brézin and Hikami [BH1, BH2], Mehta and Normand [MN], Strahov and Fyodorov [SF], Baik, Deift and Strahov [BDS], Borodin and Strahov [BS], Götze and Kösters [GK]).
One important observation is that the correlation functions of the characteristic polynomials of Hermitian random matrices are related to the "sine kernel" sin x/x. More precisely, this holds both for the unitary-invariant ensembles (Strahov and Fyodorov [SF]) and, at least as far as the second-order correlation function is concerned, for the Hermitian Wigner ensembles (Götze and Kösters [GK]). Thus, the emergence of the sine kernel may be regarded as "universal" for Hermitian random matrices.
In contrast to that, the correlation functions of the characteristic polynomials of real symmetric random matrices lead to different results. This was first observed by Brézin and Hikami [BH2], who investigated the Gaussian Orthogonal Ensemble (GOE) (see e.g. Forrester [Fo] or Mehta [Me] for definitions) and came to the conclusion that the second-order correlation function of the characteristic polynomial is related to the function sin x/x^3 − cos x/x^2 in this case. (See below for a more precise statement of this result.) Moreover, Borodin and Strahov [BS] obtained similar results for arbitrary products and ratios of characteristic polynomials of the GOE.
The main purpose of this paper is to generalize the above-mentioned result by Brézin and Hikami [BH2] about the second-order correlation function of the characteristic polynomial of the GOE to arbitrary real symmetric Wigner matrices. Throughout this paper, we consider the following situation: Let Q be a probability distribution on the real line such that

∫ x Q(dx) = 0,  ∫ x^2 Q(dx) = 1,  b := ∫ x^4 Q(dx) < ∞,  (1.1)

and let (X_ii/√2)_{i ∈ N} and (X_ij)_{i<j, i,j ∈ N} be independent families of independent real random variables with distribution Q on some probability space (Ω, F, P). Also, let X_ji := X_ij for i < j, i, j ∈ N. Then, for any N ∈ N, the real symmetric Wigner matrix of size N × N is given by X_N = (X_ij)_{1≤i,j≤N}, and the second-order correlation function of the characteristic polynomial is given by

f(N; µ, ν) := E( det(X_N − µ I_N) · det(X_N − ν I_N) ),  (1.2)

where µ, ν are real numbers and I_N denotes the identity matrix of size N × N.
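To make the setup concrete, the following numerical sketch (ours, not part of the paper) samples real symmetric Wigner matrices in the scaling above (off-diagonal variance 1, diagonal variance 2), with Q chosen as the Rademacher distribution (so the fourth moment is b = 1), and estimates f(N; µ, ν) by Monte Carlo. The function names `sample_wigner` and `f_mc` are our own.

```python
import numpy as np

def sample_wigner(n, trials, rng):
    """Batch of real symmetric Wigner matrices in the paper's scaling:
    off-diagonal entries Rademacher (mean 0, variance 1, fourth moment b = 1),
    diagonal entries sqrt(2) * Rademacher (so E X_ii^2 = 2)."""
    a = rng.choice([-1.0, 1.0], size=(trials, n, n))
    x = np.triu(a, 1)                      # keep the strict upper triangle
    x = x + np.swapaxes(x, 1, 2)           # symmetrize
    d = np.sqrt(2.0) * rng.choice([-1.0, 1.0], size=(trials, n))
    x[:, np.arange(n), np.arange(n)] = d   # fill the diagonal
    return x

def f_mc(n, mu, nu, trials=200_000, seed=0):
    """Monte Carlo estimate of f(N; mu, nu) = E[det(X - mu I) det(X - nu I)]."""
    rng = np.random.default_rng(seed)
    x = sample_wigner(n, trials, rng)
    eye = np.eye(n)
    return float(np.mean(np.linalg.det(x - mu * eye) * np.linalg.det(x - nu * eye)))

# Exact values for small N, obtained by expanding the determinants and
# taking moments: f(1; mu, nu) = 2 + mu*nu and f(2; 0, 0) = 4 + b.
print(round(f_mc(1, 0.5, -1.0), 2))   # close to 2 + 0.5*(-1) = 1.5
print(round(f_mc(2, 0.0, 0.0), 2))    # close to 4 + 1 = 5
```

For N = 1 one has f(1; µ, ν) = E(X_11 − µ)(X_11 − ν) = 2 + µν, and for N = 2, µ = ν = 0, a short computation gives E(det X_2)^2 = 4 + b; the estimates should match these to within a few hundredths.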
In the special case where Q is given by the standard Gaussian distribution, the distribution of the random matrix X_N is the Gaussian Orthogonal Ensemble (GOE). (Note, however, that our scaling is slightly different from that mostly used in the literature (see e.g. Forrester [Fo] or Mehta [Me]), since the variance of the off-diagonal matrix entries is fixed to 1, and not to 1/2.) The result by Brézin and Hikami [BH2] then corresponds to the limit relation (1.3), where ξ ∈ (−2, +2), µ, ν ∈ R, and ̺(ξ) := (1/2π) √(4 − ξ^2). Our main result is the following generalization of (1.3):

Theorem 1.1. Let Q be a probability distribution on the real line satisfying (1.1), let f be defined as in (1.2), let ξ ∈ (−2, +2), and let µ, ν ∈ R. Then the limit relation (1.4) holds, where ̺(ξ) := (1/2π) √(4 − ξ^2) denotes the density of the semi-circle law, and T(x) is proportional to sin x/x^3 − cos x/x^2 for x ≠ 0, with T(0) := 1/6 by continuous extension.
In particular, we find that the correlation function of the characteristic polynomial asymptotically factorizes into a universal factor involving the function T (x), another universal factor involving the density ̺(ξ) of the semi-circle law, and a non-universal factor which depends on the underlying distribution Q only via its fourth moment b, or its fourth cumulant b − 3.
It is interesting to compare the above result with the corresponding result for Hermitian Wigner matrices (Theorem 1.1 in Götze and Kösters [GK]), which states that, under similar assumptions as in (1.1) and with similar notation as in (1.2), the limit relation (1.6) holds. (Here, f̃ and b̃ denote the analogues of f and b, respectively.) Obviously, the structure of (1.6) and (1.4) is the same. The most notable difference is given by the fact that the "sine kernel" sin x/x in (1.6) is replaced with the function T(x) in (1.4). It is noteworthy that both functions are closely related to Bessel functions, as already observed by Brézin and Hikami [BH2]. Indeed, it is well-known (see e.g. p. 78 in Erdélyi [Er]) that

sin x/x = √(π/2) · x^(−1/2) · J_{1/2}(x)  and  sin x/x^3 − cos x/x^2 = √(π/2) · x^(−3/2) · J_{3/2}(x),  (1.7)

where J_p(x) denotes the Bessel function of order p. Thus, if one wishes, one may rewrite both (1.6) and (1.4) in the common form (1.8). Furthermore, in the special case that ξ = µ = ν = 0, Theorem 1.1 reduces to a result about determinants of random matrices, due to Zhurbenko [Zh].
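The Bessel-function identities for sin x/x and sin x/x^3 − cos x/x^2 can be checked numerically. The following sketch (ours, not from the paper) computes J_p by its power series and compares both sides:

```python
import math

def bessel_j(p, x, terms=40):
    """Bessel function J_p(x) via its power series; adequate for moderate x."""
    return sum((-1) ** m / (math.factorial(m) * math.gamma(m + p + 1))
               * (x / 2) ** (2 * m + p) for m in range(terms))

for x in (0.3, 1.0, 2.5, 7.0):
    sine_kernel = math.sin(x) / x
    t_kernel = math.sin(x) / x**3 - math.cos(x) / x**2
    # sin x/x = sqrt(pi/2) x^(-1/2) J_{1/2}(x),
    # sin x/x^3 - cos x/x^2 = sqrt(pi/2) x^(-3/2) J_{3/2}(x)
    print(abs(sine_kernel - math.sqrt(math.pi / 2) * x**-0.5 * bessel_j(0.5, x)) < 1e-12,
          abs(t_kernel - math.sqrt(math.pi / 2) * x**-1.5 * bessel_j(1.5, x)) < 1e-12)
    # each line prints: True True
```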
To prove Theorem 1.1, we show that the approach for Hermitian Wigner matrices adopted by Götze and Kösters [GK] can easily be adapted to real symmetric Wigner matrices. This stands in contrast to the "orthogonal polynomial approach" typically used in the analysis of the invariant ensembles, for which the transition from the unitary-invariant ensembles (such as the GUE) to the orthogonal-invariant ensembles (such as the GOE) is usually more complicated.
Acknowledgement. The author thanks Friedrich Götze for the suggestion to study the problem.

Generating Functions
In this section, we determine the exponential generating function of the correlation function of the characteristic polynomial of a real symmetric Wigner matrix. Our results generalize those by Zhurbenko [Zh], who considered the special case of determinants.
We make the following conventions: The determinant of the "empty" (i.e., 0 × 0) matrix is taken to be 1. If A is an n × n matrix and z is a real or complex number, we set A − z := A − z I_n, where I_n denotes the n × n identity matrix. Also, if A is an n × n matrix and i_1, ..., i_m and j_1, ..., j_m are families of pairwise different indices from the set {1, ..., n}, we write A[i_1,...,i_m : j_1,...,j_m] for the (n − m) × (n − m) matrix obtained from A by removing the rows indexed by i_1, ..., i_m and the columns indexed by j_1, ..., j_m. Thus, for any n × n matrix A = (a_ij)_{1≤i,j≤n} (n ≥ 1), we have

det A = a_nn · det A[n:n] − Σ_{i,j=1}^{n−1} (−1)^{i+j} a_in a_nj · det A[i,n : j,n],  (2.1)

as follows by expanding the determinant about the last row and the last column.
(For n = 1, note that the big sum vanishes.) Recall that we write X_N for the real symmetric random matrix (X_ij)_{1≤i,j≤N}, where the X_ij are the random variables from the introduction. We will analyze the function f(N) ≡ f(N; µ, ν) from (1.2). To this purpose, we will also need the auxiliary functions f_{A11}(N), f_{B11}(N) and f_{C11}(N), which are defined similarly to f(N), but with certain rows and columns removed from the matrices X_N − µ and X_N − ν. Note that the functions f_{B11} and f_{C11} actually coincide, but we will not need this. Since µ and ν can be regarded as constants for the purposes of this section, we will only write f(N) instead of f(N; µ, ν), etc.
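The expansion (2.1) about the last row and the last column can be verified numerically. The following sketch (ours) checks the identity det A = a_nn det A[n:n] − Σ_{i,j<n} (−1)^{i+j} a_in a_nj det A[i,n:j,n] on random matrices, using 0-based indices (which leaves the sign (−1)^{i+j} unchanged):

```python
import itertools
import numpy as np

def minor(a, rows, cols):
    """Delete the given (0-based) rows and columns: the matrix A[rows:cols]."""
    keep_r = [i for i in range(a.shape[0]) if i not in rows]
    keep_c = [j for j in range(a.shape[1]) if j not in cols]
    return a[np.ix_(keep_r, keep_c)]

def expand_last_row_col(a):
    """Right-hand side of (2.1):
    a_nn det A[n:n] - sum_{i,j<n} (-1)^(i+j) a_in a_nj det A[i,n:j,n].
    (numpy returns 1.0 for the determinant of the empty 0x0 matrix,
    matching the convention above.)"""
    n = a.shape[0]
    s = a[n - 1, n - 1] * np.linalg.det(minor(a, [n - 1], [n - 1]))
    s -= sum((-1) ** (i + j) * a[i, n - 1] * a[n - 1, j]
             * np.linalg.det(minor(a, [i, n - 1], [j, n - 1]))
             for i, j in itertools.product(range(n - 1), repeat=2))
    return s

rng = np.random.default_rng(1)
for n in (1, 2, 3, 5):
    a = rng.standard_normal((n, n))
    print(abs(expand_last_row_col(a) - np.linalg.det(a)) < 1e-10)   # True
```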
We have the following recursive equations:

Lemma 2.1. The functions f(N), f_{A11}(N), f_{B11}(N) and f_{C11}(N) satisfy the recursive equations (2.2)–(2.7).

Proof. We give the proof for the recursive equation (2.2) for f(N) only, the proofs for the remaining recursive equations being very similar.
The result for f(0) is clear. For N ≥ 1, we expand the determinants of the matrices (X_N − µ) and (X_N − ν) as in (2.1) and use the independence of the random variables X_ij = X_ji (i ≤ j). Since these random variables are independent with E(X_ij) = 0 (i ≤ j), several of the expectations vanish, and the sum reduces considerably. From this, (2.2) follows by noting that E X_{N,N}^2 = 2, E X_{i,N}^2 = 1, E X_{i,N}^4 = b, and by exploiting obvious symmetries.
It turns out that the above recursions can be combined into a single recursion involving only the values f(N). Using the abbreviations c(N) and s(N), we have the following result:

Lemma 2.2. The values c(N) satisfy the recursive equation (2.9), where all terms c(·) and s(·) with a negative argument are taken to be zero.
Proof. It follows from Lemma 2.1 that f_{A11}(N − 1) + f_{B11}(N − 1) + f_{C11}(N − 1) can be expressed in terms of the values f(·) for all N ≥ 3. Thus, we can substitute for f_{A11}(N − 1) + f_{B11}(N − 1) + f_{C11}(N − 1) on the right-hand side of (2.2), which yields the asserted recursion for all N ≥ 3. (For N = 3, note that the second term in the large bracket vanishes.) Since the corresponding identity holds for all N ≥ 3, as follows from (2.6) and (2.7) by a straightforward induction, the assertion for N ≥ 3 is proved.
The assertion for N < 3 follows from Lemma 2.1 by direct calculation.

Using Lemma 2.2, we can determine the exponential generating function of the sequence (f(N))_{N≥0}:

Lemma 2.3. The exponential generating function of (f(N))_{N≥0} admits an explicit closed form involving the quantity b* := (1/2)(b − 3).
Proof. Multiplying (2.9) by x^{N−1}, summing over N, and recalling our convention concerning negative arguments, we obtain a relation for the generating function. This leads to a first-order differential equation, whose general solution contains a multiplicative constant F_0, which must be chosen as exp((µ^2 + ν^2)/2) in order to satisfy (2.8). Inserting this above completes the proof.
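The step from the recursion (2.9) to a differential equation for the generating function follows a standard pattern, which the following toy example illustrates (the recursion below is a stand-in chosen for illustration, not the actual (2.9)): the recursion c(N) = c(N−1) + (N−1) c(N−2) translates into F′(x) = (1 + x) F(x) for the exponential generating function F(x) = Σ c(N) x^N/N!, whose solution with F(0) = 1 is F(x) = exp(x + x^2/2).

```python
from fractions import Fraction
from math import factorial

# Taylor coefficients a_n of the ODE solution F:
# F' = (1 + x) F  gives  (n + 1) a_{n+1} = a_n + a_{n-1}.
a = [Fraction(1), Fraction(1)]          # F(0) = 1, F'(0) = 1
for n in range(1, 12):
    a.append((a[n] + a[n - 1]) / (n + 1))

# Values produced directly by the recursion c(N) = c(N-1) + (N-1) c(N-2).
c = [1, 1]
for n in range(2, 13):
    c.append(c[-1] + (n - 1) * c[-2])

# The two must be linked by c(N) = N! * a_N, i.e. the recursion and the
# differential equation encode the same sequence.
print(all(c[n] == factorial(n) * a[n] for n in range(13)))   # True
```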

The Proof of Theorem 1.1
This section is devoted to proving Theorem 1.1. In doing so, we will closely follow the proof of Theorem 1.1 in Götze and Kösters [GK]. Throughout this section, T(x) will denote the function defined in Theorem 1.1.
We will first establish the following slightly more general result:

Proposition 3.1. Let Q be a probability distribution on the real line satisfying (1.1), let f be defined as in (1.2), let (ξ_N)_{N∈N} be a sequence of real numbers such that lim_{N→∞} ξ_N/√N = ξ for some ξ ∈ (−2, +2), and let η ∈ C. Then the limit relation (3.1) holds.

It is easy to see that Proposition 3.1 implies Theorem 1.1:

Proof of Theorem 1.1. Choosing ξ_N and η appropriately in Proposition 3.1, we obtain a relation from which Theorem 1.1 follows by a simple rearrangement.
Proof of Proposition 3.1. From the exponential generating function obtained in Lemma 2.3, we have an integral representation for f(N), where γ ≡ γ_N denotes the counterclockwise circle of radius R ≡ R_N = 1 − 1/N around the origin. (We will assume that N ≥ 2 throughout the proof.) Setting µ = ξ_N + η/√N and ν = ξ_N − η/√N and doing a simple calculation, we can rewrite this representation and decompose the resulting integral into four parts I_1, I_2, I_3, I_4. We will show that the integral I_1 is the asymptotically dominant term. Putting it all together, we then obtain the corresponding limit relation for each l = 0, 1, 2, 3, ...; in particular, the series in (3.4) converges termwise.
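The device used here, recovering f(N)/N! from the generating function by a Cauchy integral over a circle of radius 1 − 1/N, can be illustrated on a toy generating function (the function exp(z)/(1 − z) below is a hypothetical stand-in, not the F from Lemma 2.3):

```python
import numpy as np
from math import factorial

def coeff_via_contour(F, n, radius, samples=4096):
    """n-th Taylor coefficient of F via the Cauchy integral of
    F(z)/z^(n+1) over the circle |z| = radius, discretized by the
    trapezoidal rule (which is exponentially accurate here)."""
    theta = 2 * np.pi * np.arange(samples) / samples
    z = radius * np.exp(1j * theta)
    return np.mean(F(z) / z**n).real

# Toy check: the n-th coefficient of exp(z)/(1 - z) is sum_{k<=n} 1/k!,
# and we integrate over the circle of radius 1 - 1/n as in the proof.
for n in (5, 20):
    exact = sum(1.0 / factorial(k) for k in range(n + 1))
    approx = coeff_via_contour(lambda z: np.exp(z) / (1 - z), n, 1 - 1 / n)
    print(abs(approx - exact) < 1e-8)   # True
```

The radius is chosen just inside the singularity at z = 1, exactly as the contour γ_N stays inside the relevant singularity of the generating function in the proof above.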
We will show that the series in (3.6) also converges as a whole. To this purpose, let ε_2 > 0 denote a positive constant such that cos t ≤ 1 − ε_2 t^2 for −π ≤ t ≤ +π. Then, for any α > 0 and any −π ≤ t_1 < t_2 ≤ +π, we have the estimate (3.9), where K denotes some absolute constant. Let us agree that this constant K may change from occurrence to occurrence in the subsequent calculations. Then it follows from (3.3) and (3.9) that a suitable bound holds for each l = 0, 1, 2, 3, .... It therefore follows from (3.8) that the limit of the complete series exists as well. Hence, to complete the proof of Proposition 3.1, it remains to show that lim_{N→∞} I_j / N^{3/2} = 0 for j = 2, 3, 4.
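Such a constant ε_2 indeed exists: the inequality cos t ≤ 1 − ε_2 t^2 on [−π, π] holds for any ε_2 ≤ 2/π^2 ≈ 0.2026, the extreme case being t = ±π. A quick numerical confirmation (ours, with the convenient choice ε_2 = 0.2):

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 200001)
eps2 = 0.2                                  # any value <= 2/pi^2 works
print(bool(np.all(np.cos(t) <= 1.0 - eps2 * t**2)))          # True

# The optimal constant is the minimum of (1 - cos t)/t^2 on [-pi, pi],
# attained at t = +-pi with value 2/pi^2.
s = t[np.abs(t) > 1e-6]                     # stay away from t = 0
ratio = (1.0 - np.cos(s)) / s**2
print(bool(np.isclose(ratio.min(), 2.0 / np.pi**2)))         # True
```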
For the integral I_2, we use some elementary estimates as well as (3.9) to obtain a bound of order o(N^{3/2}).
For the integral I_3, we proceed similarly. For the integral I_4, we can finally use (3.3), (3.11) and (3.9) to obtain a bound of order o(N^{3/2}) as well.
This concludes the proof of Proposition 3.1.