Universality of sine-kernel for Wigner matrices with a small Gaussian perturbation

We consider $N\times N$ Hermitian random matrices with independent identically distributed entries (Wigner matrices). We assume that the distribution of the entries has a Gaussian component with variance $N^{-3/4+\beta}$ for some $\beta>0$. We prove that the local eigenvalue statistics follow the universal Dyson sine kernel.


Introduction
Certain spectral statistics of broad classes of $N \times N$ random matrix ensembles are believed to follow a universal behavior in the limit $N \to \infty$. Wigner observed [30] that the density of eigenvalues of large symmetric or hermitian matrices $H$ with independent entries (up to the symmetry requirement) converges, as $N \to \infty$, to a universal density, the Wigner semicircle law. Dyson observed that the local correlation statistics of neighboring eigenvalues inside the bulk of the spectrum follow another universal pattern, the Dyson sine-kernel, in the $N \to \infty$ limit [10]. Moreover, any $k$-point correlation function can be obtained as a determinant of the two-point correlation functions. The precise form of the universal two-point function in the bulk seems to depend only on the symmetry class of the matrix ensemble (a different universal behavior emerges near the spectral edge [28]).
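The determinantal structure just mentioned can be made concrete in a few lines. With the sine kernel $K(x,y) = \sin(\pi(x-y))/(\pi(x-y))$, the $k$-point correlation function in the bulk (for eigenvalues rescaled by the local density) is $\det\big(K(x_i,x_j)\big)_{i,j \le k}$. The following minimal numerical sketch (an illustration added here, not part of the original argument) evaluates this determinant for $k = 2$ and exhibits the level repulsion $R_2 \to 0$ as the gap closes.

```python
import numpy as np

def sine_kernel(x, y):
    # np.sinc(t) = sin(pi t) / (pi t), with the value 1 at t = 0
    return np.sinc(x - y)

def correlation(points):
    """k-point correlation function as a determinant of sine-kernel values."""
    pts = np.asarray(points, dtype=float)
    return np.linalg.det(sine_kernel(pts[:, None], pts[None, :]))

# For k = 2 the determinant reduces to 1 - (sin(pi a) / (pi a))^2 for a gap a.
a = 0.7
print(correlation([0.0, a]))      # equals 1 - np.sinc(a)**2
print(correlation([0.0, 1e-6]))   # nearly 0: neighboring eigenvalues repel
```

For $k = 2$ the determinant is $1 - K(x,y)^2$, which vanishes quadratically as $y \to x$; this is the repulsion encoded in the sine kernel.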
Dyson proved this fact for the Gaussian Unitary Ensemble (GUE), where the matrix elements are independent, identically distributed complex Gaussian random variables (subject to the hermitian constraint). A characteristic feature of GUE is that the distribution is invariant under unitary conjugation, $H \to U^* H U$ for any unitary matrix $U$. Dyson found an explicit formula for the joint density function of the $N$ eigenvalues. The formula contains a characteristic Vandermonde determinant and therefore it coincides with the Gibbs measure of a particle system interacting via a logarithmic potential, analogous to the two dimensional Coulomb gas. Dyson also observed that the computation of the two-point function can be reduced to asymptotics of Hermite polynomials.
His approach has since been substantially generalized to include a large class of random matrix ensembles, but always with unitary (orthogonal, symplectic, etc.) invariance. For example, a general class of invariant ensembles is given by the measure $Z^{-1}\exp(-\mathrm{Tr}\, V(H))\,dH$ on the space of hermitian matrices, where $dH$ stands for the Lebesgue measure on all independent matrix entries, $Z$ is the normalization and $V$ is a real function with certain smoothness and growth properties. The GUE ensemble corresponds to $V(x) = x^2$.
The joint density function is explicit in all these cases and the evaluation of the two-point function can be reduced to certain asymptotic properties of orthogonal polynomials with respect to the weight function $\exp(-V(x))$ on the real line. The sine kernel can thus be proved for a wide range of potentials $V$. Since the literature in this direction is enormous, we can only refer the reader to the book by Deift [9] for the Riemann-Hilbert approach, the paper by Levin and Lubinsky [23] and references therein for approaches based on classical analysis of orthogonal polynomials, or the paper by Pastur and Shcherbina [26] for a probabilistic/statistical physics approach. The book by Anderson et al. [1] and the book by Mehta [25] also contain extensive lists of references.
Since the computation of the explicit formula for the joint density relies on the unitary invariance, there has been very little progress in understanding non-unitary invariant ensembles. The most prominent example is the Wigner ensemble, i.e., hermitian random matrices with i.i.d. entries. Wigner matrices are not unitarily invariant unless the single entry distribution is Gaussian, i.e. in the GUE case. The disparity between our understanding of the Wigner ensembles and the unitary invariant ensembles is startling. Up until the very recent work of [14], there was no proof that the density follows the semicircle law in small spectral windows unless the number of eigenvalues in the window is at least $\sqrt N$. This is entirely due to a serious lack of analytic tools for studying eigenvalues once the mapping between eigenvalues and the Coulomb gas ceases to apply. At present, there are only two rigorous approaches to eigenvalue distributions: the moment method and the Green function method. The moment method is restricted to studying the spectrum near the edges [28]; the precision of the Green function method still seems to be very far from yielding information on the level spacing [6].
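As a quick numerical illustration of the semicircle law itself (a sanity check added here, not part of the original argument), one can sample a GUE matrix in the normalization used throughout this paper, in which the matrix elements have variance $1/N$ and the spectrum concentrates on $[-2,2]$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# GUE with entries of variance 1/N: spectrum concentrates on [-2, 2].
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / (2 * np.sqrt(N))
eigs = np.linalg.eigvalsh(H)

# The semicircle density (1/2pi) sqrt(4 - x^2) gives mass ~0.609 to [-1, 1].
frac = np.mean(np.abs(eigs) <= 1.0)
print(frac, np.max(np.abs(eigs)))
```

The fraction of eigenvalues in $[-1,1]$ matches $\int_{-1}^{1}\frac{1}{2\pi}\sqrt{4-x^2}\,dx = \frac13 + \frac{\sqrt 3}{2\pi} \approx 0.609$ already for moderate $N$.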
Beyond the unitary ensembles, Johansson [21] proved the sine-kernel for a broader category of ensembles, namely for matrices of the form $H + sV$, where $H$ is a Wigner matrix, $V$ is an independent GUE matrix and $s$ is a positive constant of order one. (Strictly speaking, in the original work [21], the range of the parameter $s$ depends on the energy $E$. This restriction was later removed by Ben Arous and Péché [3], who also extended this approach to Wishart ensembles.) Alternatively formulated, if the matrix elements are normalized to have variance one, then the distribution of the matrix elements of the ensemble $H + sV$ is given by $\nu * G_s$, where $\nu$ is the distribution of the Wigner matrix elements and $G_s$ is the centered Gaussian law with variance $s^2$. Johansson's work is based on the analysis of the explicit formula for the joint eigenvalue distribution of the matrix $H + sV$ (see also [7]).
Dyson introduced a dynamical way of generating random matrix ensembles. He considered a matrix-valued process $H + sV$ where $V$ is a matrix-valued Brownian motion. The distribution of the eigenvalues then evolves according to a process called Dyson's Brownian motion. For the convenience of the analysis, we replace the Brownian motions by an Ornstein-Uhlenbeck process so that the distribution of GUE is the invariant measure of this modified process, which we still call Dyson's Brownian motion. Dyson's Brownian motion can thus be viewed as a reversible interacting particle system with a long range (logarithmic) interaction. This process is well adapted for studying the evolution of the empirical measures of the eigenvalues, see [18]. The sine kernel, on the other hand, is a very detailed property which typically cannot be obtained from considerations of interacting particle systems. The Hamiltonian for GUE, however, is strictly convex and thus Dyson's Brownian motion satisfies the logarithmic Sobolev inequality (LSI). It was noted in the derivation of the Navier-Stokes equations [12,27] that the combination of the Guo-Papanicolaou-Varadhan [20] approach and the LSI provides very detailed estimates on the dynamics.
The key observation of the present paper is that this method can also be used to estimate the approach to local equilibrium so precisely that, after combining it with existing techniques from orthogonal polynomials, the Dyson sine kernel emerges. In pursuing this approach, we face two major obstacles: (1) a good estimate of the initial entropy; (2) a good understanding of the structure of local equilibria. It turns out that the initial entropy can be estimated using the explicit formula for the transition kernel of Dyson's Brownian motion (see [7] and [21]), provided strong inputs on the local semicircle law [14] and on the level repulsion [15] are available.
The structure of local equilibria, however, is much harder to analyze. Typically, the local equilibrium measures are finite volume Gibbs measures with short range interaction and the boundary effects can be easily dealt with in the high temperature phase. In the GUE case, the logarithmic potential does not even decay at large distance and the equilibrium measure can depend critically on the boundary conditions. The theory of orthogonal polynomials provides explicit formulae for the correlation functions of this highly correlated Gibbs measure. These formulae can be effectively analyzed if the external potential (or logarithm of the weight function in the terminology of the orthogonal polynomials) is very well understood. Fortunately, we have proved the local semicircle law up to scales of order 1/N and the level repulsion, which can be used to control the boundary effects. By invoking the theorem of Levin and Lubinsky [23] and the method of Pastur and Shcherbina [26] we are led to the sine kernel.
It is easy to see that adding a Gaussian component of variance much smaller than $N^{-1}$ to the original Wigner matrix would not move the eigenvalues sufficiently to change the local statistics. Our requirement that the Gaussian component have variance at least $N^{-3/4}$ comes from technical estimates needed to control the initial global entropy and it does not have any intrinsic meaning. The case of variance of order $N^{-1}$, however, is an intrinsic barrier which is difficult to cross. Nevertheless, we believe that our method may offer a possible strategy to prove the universality of the sine kernel for general Wigner matrices.
After this manuscript had been completed, we found a different approach to prove the Dyson sine kernel [16], partly based on a contour integral representation for the two-point correlation function [7,21]. Shortly after our manuscripts were completed, we learned that our main result was also obtained by Tao and Vu in [29] with a different method under no regularity conditions on the initial distribution ν provided the third moment of ν vanishes.
Although the results in this paper are weaker than those in [16] and [29], we believe that the method presented here has certain independent interest. Unlike [16] and [29], this approach does not use the contour integral representation of the two point correlation function. Hence, it may potentially have a broader applicability to other matrix ensembles for which such representation is not available.
Acknowledgements. We would like to thank the referees for suggesting several improvements of the presentation.

Main theorem and conditions
Fix $N \in \mathbb N$ and consider the ensemble of $N \times N$ Hermitian matrices $H = (h_{\ell k})$ with the normalization $h_{\ell k} = N^{-1/2} z_{\ell k}$, $z_{\ell k} = x_{\ell k} + i y_{\ell k}$, (2.1) where $x_{\ell k}$, $y_{\ell k}$ for $\ell < k$ are independent, identically distributed random variables with distribution $\nu = \nu^{(N)}$ that has zero expectation and variance $1/2$. The diagonal elements are real, i.e. $y_{\ell\ell} = 0$, and the $x_{\ell\ell}$ are also i.i.d., independent from the off-diagonal ones, with distribution $\widetilde\nu = \widetilde\nu^{(N)}$ that has zero expectation and variance one. The superscript indicating the $N$-dependence of $\nu$, $\widetilde\nu$ will be omitted.
This requirement is equivalent to considering random matrices of the form
$$ H = \sqrt{1-s^2}\,\widehat H + sV, \qquad s^2 = N^{-3/4+\beta}, \qquad (2.3)$$
where $\widehat H$ is a Wigner matrix with single entry distributions $\nu_0$ and $\widetilde\nu_0$, and $V$ is a GUE matrix whose elements are centered Gaussian random variables with variance $1/N$. Furthermore, we assume that $\nu$ is absolutely continuous with positive density function $h(x) > 0$, i.e. we can write it as $d\nu(x) = h(x)\,dx = \exp(-g(x))\,dx$ with some real function $g$. We assume the following conditions:
• The measure $d\nu$ satisfies the logarithmic Sobolev inequality, i.e. there exists a constant $S$ such that
$$ \int u \log u \, d\nu \le S \int |\nabla \sqrt u\,|^2 \, d\nu \qquad (2.4)$$
holds for any density function $u > 0$ with $\int u \, d\nu = 1$.
• There exists a $\delta_0 > 0$ such that, for the distribution of the diagonal elements,
$$ \int e^{\delta_0 x^2}\, d\widetilde\nu(x) < \infty. \qquad (2.6)$$
Although the conditions are stated directly for the measures $\nu$ and $\widetilde\nu$, it is easy to see that it is sufficient to assume that $\nu_0$ satisfies (2.4) and (2.5) and $\widetilde\nu_0$ satisfies (2.6). We remark that (2.4) implies that (2.6) holds for $\nu$ instead of $\widetilde\nu$ as well (see [22]).

The eigenvalues of $H$ are denoted by $\lambda_1, \lambda_2, \ldots, \lambda_N$. The law of the matrix ensemble induces a probability measure on the set of eigenvalues whose density function will be denoted by $p(\lambda_1, \lambda_2, \ldots, \lambda_N)$. The eigenvalues are considered unordered for the moment and thus $p$ is a symmetric function. For any $k = 1, 2, \ldots, N$, let
$$ p^{(k)}(\lambda_1, \ldots, \lambda_k) = \int_{\mathbb R^{N-k}} p(\lambda_1, \ldots, \lambda_N)\, d\lambda_{k+1} \cdots d\lambda_N $$
be the $k$-point correlation function of the eigenvalues. The one-point correlation function (density) is denoted by $\varrho(\lambda) := p^{(1)}(\lambda)$. With our normalization convention, the density $\varrho(\lambda)$ is supported in $[-2, 2]$ and in the $N \to \infty$ limit it converges to the Wigner semicircle law given by the density
$$ \varrho_{sc}(x) = \frac{1}{2\pi}\sqrt{4 - x^2}\; \mathbf 1(|x| \le 2). \qquad (2.7)$$
The main result of this paper is the following theorem:

Theorem 2.1 Fix arbitrary positive constants $\beta > 0$ and $\kappa > 0$. Consider the Wigner matrix ensemble with a Gaussian convolution of variance $s^2 = N^{-3/4+\beta}$ given by (2.3) and assume (2.4)-(2.6). Let $p^{(2)}$ be the two-point correlation function of the eigenvalues of this ensemble. Consider the observable (2.8) with $g$, $h$ smooth and compactly supported functions such that $h \ge 0$ and $\int h = 1$. Then the limit (2.9) holds.

The factor $g$ in the observable (2.8) tests the eigenvalue differences. The factor $h$, which disappears on the right hand side of (2.9), is only a normalization factor. Thus the special form of the observable (2.8) directly exhibits the fact that the local statistics are translation invariant.
Conventions. All integrations with unspecified domains are on R. We will use the letters C and c to denote general constants whose precise values are irrelevant and they may change from line to line. These constants may depend on the constants in (2.4)-(2.6).

Outline of the proof
Our approach has three main ingredients. In the first step, we use the entropy method from hydrodynamical limits to establish a local equilibrium of the eigenvalues in a window of size $N^{-1+\varepsilon}$ (with some small $\varepsilon > 0$), i.e. a window that typically contains $n = N^{\varepsilon}$ eigenvalues. This local equilibrium is subject to an external potential generated by all the other eigenvalues. In the second step we prove that the density of this equilibrium measure is locally constant by using methods from orthogonal polynomials. Finally, in the third step, we employ a recent result [23] to deduce the sine-kernel. We now describe each step in more detail.
We generate the Wigner matrix with a small Gaussian component by running a matrix-valued Ornstein-Uhlenbeck process (3.1) for a short time of order t ∼ N −ζ , ζ > 0. This generates a stochastic process for the eigenvalues which can be described as Ornstein-Uhlenbeck processes for the individual eigenvalues with a strong interaction (3.10).
This process is the celebrated Dyson's Brownian motion (DBM) [11] and its equilibrium measure is the GUE distribution of eigenvalues. The transition kernel can be computed explicitly (5.12) and it contains the determinantal structure of the joint probability density of the GUE eigenvalues that is responsible for the sine-kernel. This kernel was analyzed by Johansson [21] assuming that the time $t$ is of order one, which is the same order as the relaxation time to equilibrium for Dyson's Brownian motion. The sine-kernel, however, is a local statistic, and local equilibrium can be reached within a much shorter time scale. To implement this idea, we first control the global entropy on time scale $N^{-1}$ by $N^{1+\alpha}$, with $\alpha > 1/4$ (Section 5.2).
More precisely, recall that the entropy of $f\mu$ with respect to a probability measure $\mu$ is given by
$$ S(f) = S_\mu(f) = \int f \log f \, d\mu. $$
In our application, the measure $\mu$ is the Gibbs measure for the equilibrium distribution of the (ordered) eigenvalues of the GUE, given by the Hamiltonian
$$ \mathcal H(\lambda) = N\Big[\sum_{j=1}^N \frac{\lambda_j^2}{2} - \frac{2}{N}\sum_{i<j}\log|\lambda_j - \lambda_i|\Big]. \qquad (2.10)$$
If $f_t$ denotes the joint probability density of the eigenvalues at the time $t$ with respect to $\mu$, then the evolution of $f_t$ is given by the equation
$$ \partial_t f_t = L f_t, \qquad (2.11)$$
where the generator $L$ is defined via the Dirichlet form
$$ D(f) = -\int f\, Lf\, d\mu = \sum_{j=1}^N \frac{1}{2N}\int (\partial_j f)^2\, d\mu. $$
The evolution of the entropy is given by the equation $\partial_t S(f_t) = -4 D(\sqrt{f_t})$. The key initial entropy estimate is the inequality
$$ S(f_t) \le C N^{1+\alpha} \quad \text{for } t = N^{-1}, \qquad (2.12)$$
valid for any $\alpha > \frac14$ and for sufficiently large $N$. The proof of this estimate uses the explicit formula for the transition kernel of (2.11) and several inputs from our previous papers [13,14,15] on the local semicircle law and on the level repulsion for general Wigner matrices. We need to strengthen some of these inputs; the new results will be presented in Section 4 with proofs deferred to Appendix A, Appendix B and Appendix C.
It is natural to think of each eigenvalue as a particle and we will use the language of interacting particle systems. We remark that the entropy per particle is typically of order one in interacting particle systems. In our setting, however, due to the factor $N$ in front of the Hamiltonian (2.10), the typical size of the entropy per particle is of order $N$. Thus for a system bearing little relation to the equilibrium measure $\mu$, we expect the total entropy to be $O(N^2)$, so the bound (2.12) already contains nontrivial information. However, we believe that one should be able to improve this bound to any $\alpha > 0$; the condition $\alpha > 1/4$ in (2.12) is present only for technical reasons. This is the main reason why our final result holds only for a Gaussian convolution with variance larger than $N^{-3/4}$. The additional $N^{\alpha}$ factor originates from Lemma 5.3, where we approximate the Vandermonde determinant appearing in the transition kernel by estimating the fluctuations around the local semicircle law. We will explain the origin of the condition $\alpha > 1/4$ at the beginning of Appendix D, where the proof of Lemma 5.3 is given.
From the initial entropy estimate it follows that the time integration of the Dirichlet form is bounded by the initial entropy. For the DBM, due to the convexity of the Hamiltonian of the equilibrium measure $\mu$, the Dirichlet form is actually decreasing. Thus for $t = \tau N^{-1}$ we have
$$ D(\sqrt{f_t}) \le C\,\frac{S(f_{1/N})}{t} \le C N^{2+\alpha}\tau^{-1}. $$
The last estimate says that the Dirichlet form per particle is bounded by $N^{1+\alpha}\tau^{-1}$. So if we take an interval of $n$ particles (with coordinates given by $\mathbf x = (x_1, \ldots, x_n)$), then on average the total Dirichlet form of these particles is bounded by $nN^{1+\alpha}\tau^{-1}$. We will choose $n = N^{\varepsilon}$ with some very small $\varepsilon > 0$. As always in the hydrodynamical limit approach, we consider the probability law of these $n$ particles given that all the other particles (denoted by $\mathbf y$) are fixed. Denote by $\mu_{\mathbf y}(d\mathbf x)$ the equilibrium measure of $\mathbf x$ given that the coordinates of the other $N - n$ particles $\mathbf y$ are fixed. Let $f_{\mathbf y, t}$ be the conditional density of $f_t$ w.r.t. $\mu_{\mathbf y}(d\mathbf x)$ with $\mathbf y$ given. The Hamiltonian of the measure $\mu_{\mathbf y}(d\mathbf x)$ contains, besides the interaction among the $x_i$'s, an external potential generated by the fixed particles $\mathbf y$. If $\mathbf y$ are regularly distributed, this Hamiltonian satisfies a convexity bound, which implies the logarithmic Sobolev inequality for $\mu_{\mathbf y}$. Combining the LSI with the local Dirichlet form estimate, we obtain that the entropy of $f_{\mathbf y,t}\mu_{\mathbf y}$ with respect to $\mu_{\mathbf y}$ is small, where in the last estimate some additional $n$-factors were needed to convert the local Dirichlet form estimate per particle on average into an estimate that holds for a typical particle. Thus the entropy bound holds provided we choose $t = N^{-1}\tau = N^{\beta-1}$ with $\beta \ge 10\varepsilon + \alpha$ (Section 6). The last inequality asserts that the two measures $f_{\mathbf y}\mu_{\mathbf y}$ and $\mu_{\mathbf y}$ are almost the same and thus we only need to establish the sine kernel for the measure $\mu_{\mathbf y}$. At this point, we remark that this argument is valid only if $\mathbf y$ is regularly distributed in a certain sense, which we will call good configurations (Definition 4.1). Precise estimates on the local semicircle law can be used to show that most external configurations are good.
Although the rigorous treatment of the good configurations and the estimates on the bad configurations occupy a large part of this paper, they are of a technical nature and we defer the proofs of several steps to the appendices.
In Sections 8, 9 and 10, we refine the precision on the local density and prove that the density is essentially constant pointwise. Direct probabilistic arguments establishing the local semicircle law in [15] rely on the law of large numbers and they give information on the density only on scales much larger than $N^{-1}$, i.e. on scales that contain many eigenvalues. The local equilibrium is reached in a window of size $n/N$ and within this window we can conclude that the local semicircle law holds on scales of size $n^{\gamma}/N$ with an arbitrarily small $\gamma > 0$. However, this still does not control the density pointwise. To get this information, we need to use orthogonal polynomials.
The density in local equilibrium can be expressed in terms of a sum of squares of orthogonal polynomials $p_1(x), p_2(x), \ldots$ with respect to the weight function $\exp(-nU_{\mathbf y}(x))$ generated by the external configuration $\mathbf y$ (see Section 8 for precise definitions). To get a pointwise bound from the appropriate bound on average, we only need to control the derivative of the density, which, in particular, can be expressed in terms of derivatives of the orthogonal polynomials $p_k$. Using integration by parts and orthogonality properties of the $p_k$, it is possible to control the $L^2$ norm of $p_k'$ in terms of the $L^2$ norm of $p_k(x)U_{\mathbf y}'(x)$. Although the derivative of the potential is singular, $\|p_k U_{\mathbf y}'\|_2$ can be estimated by a Schwarz inequality at the expense of treating higher $L^p$ norms of $p_k$ (Lemma 8.1). In this context, we exploit the fact that we are dealing with polynomials by using the Nikolskii inequality, which estimates higher $L^p$ norms in terms of lower ones at the expense of a constant depending on the degree. To avoid a very large constant in the Nikolskii inequality, in Section 7 we first cut off the external potential and thus reduce the degree of the weight function.
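To illustrate the mechanism of recovering a density from a sum of squares of orthonormal polynomials, here is a small numerical sketch (our illustration, using the explicit Gaussian weight $e^{-Nx^2/2}$ of the GUE rather than the paper's local weight $e^{-nU_{\mathbf y}(x)}$): the Christoffel-Darboux diagonal $\frac1N K_N(x,x)$, built from orthonormal Hermite functions, reproduces the semicircle density.

```python
import math

def gue_density(N, x):
    """(1/N) K_N(x, x) for the weight exp(-N x^2 / 2), computed via the
    three-term recurrence for orthonormal Hermite functions phi_k."""
    t = x * math.sqrt(N / 2.0)
    phi_prev, phi = 0.0, math.pi ** (-0.25) * math.exp(-t * t / 2.0)  # phi_0
    total = phi * phi
    for k in range(N - 1):
        # phi_{k+1} = sqrt(2/(k+1)) t phi_k - sqrt(k/(k+1)) phi_{k-1}
        phi_prev, phi = phi, math.sqrt(2.0 / (k + 1)) * t * phi - math.sqrt(k / (k + 1.0)) * phi_prev
        total += phi * phi
    return math.sqrt(N / 2.0) / N * total

print(gue_density(200, 0.0), 1 / math.pi)  # ~0.317 vs rho_sc(0) = 0.3183
```

Already for $N = 200$ the sum of squares is within $O(1/N)$ of $\varrho_{sc}$, pointwise in the bulk; in the paper the same structure has to be controlled for the $\mathbf y$-dependent weight, which is the source of the technical work.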
We remark that our approach of using orthogonal polynomials to control the density pointwise was motivated by the work of Pastur and Shcherbina [26], where they proved the sine-kernel for unitary invariant matrix ensembles with a three times differentiable potential function on the real line. In our case, however, the potential is determined by the external points and it is logarithmically divergent near the edges of the window.
Finally, in Section 11, we complete the proof of the sine-kernel by applying the main theorem of [23]. This result establishes the sine-kernel for orthogonal polynomials with respect to an $n$-dependent sequence of weight functions under general conditions. The most serious condition to verify is that the density is essentially constant pointwise; this is the main result achieved in Step 2 above. We also need to identify the support of the equilibrium measure, which will be done in Appendix F.
We remark that, alternatively, it is possible to complete the third step along the lines of the argument of [26] without using [23]. Using explicit formulae from orthogonal polynomials and the pointwise control on the density and on its derivative, it is possible to prove that the local two-point correlation function $p^{(2)}_n(x, y)$ is translation invariant as $n \to \infty$. After having established the translation invariance of $p^{(2)}$, it is easy to derive an equation for its Fourier transform and obtain the sine-kernel as the unique solution of this equation. We will not pursue this alternative direction in this paper.

Ornstein-Uhlenbeck process
We can generate our matrix $H$ (2.3) from a stochastic process whose initial condition is the original Wigner matrix. Consider the following matrix valued stochastic differential equation
$$ dH_t = \frac{1}{\sqrt N}\, d\beta_t - \frac12 H_t\, dt, \qquad (3.1)$$
where $\beta_t$ is a hermitian matrix-valued stochastic process whose diagonal matrix elements are standard real Brownian motions and whose off-diagonal matrix elements are standard complex Brownian motions. For completeness we describe this matrix valued Ornstein-Uhlenbeck process more precisely. The rescaled matrix elements $z_{ij} = N^{1/2} h_{ij}$ evolve according to the complex Ornstein-Uhlenbeck process
$$ dz_{ij} = d\beta_{ij} - \frac12 z_{ij}\, dt. \qquad (3.2)$$
For $i \ne j$, $\beta = \beta_{ij}$ is a complex Brownian motion with variance one. The real and imaginary parts of $z = x + iy$ satisfy
$$ dx = \frac{1}{\sqrt 2}\, d\beta^x - \frac12 x\, dt, \qquad dy = \frac{1}{\sqrt 2}\, d\beta^y - \frac12 y\, dt, $$
where $\beta^x$, $\beta^y$ are independent standard real Brownian motions. For the diagonal elements $i = j$ in (3.2), $\beta_{ii}$ is a standard real Brownian motion with variance 1.
To ensure $z_{ij} = \bar z_{ji}$, for $i < j$ we choose the $\beta_{ij}$ to be independent complex Brownian motions with $\mathbb E\,|\beta_{ij}|^2 = 1$, we set $\beta_{ji} := \bar\beta_{ij}$, and we let $\beta_{ii}$ be real Brownian motions with $\mathbb E\,\beta_{ii}^2 = 1$. We note that $d\,\mathbb E\,\mathrm{Tr}\, H_t^2 = 0$, thus $\mathbb E\,\mathrm{Tr}\, H_t^2$ remains constant for all time.
If the initial condition of (3.1) is distributed according to the law of the Wigner matrix $\widehat H$, then the solution of (3.1) is clearly
$$ H_t = e^{-t/2}\, \widehat H + (1 - e^{-t})^{1/2}\, V, $$
where $V$ is a standard GUE matrix (with matrix elements having variance $1/N$) that is independent of $\widehat H$. With the choice of $t$ satisfying $(1 - e^{-t}) = s^2 = N^{-3/4+\beta}$, i.e. $t = -\log(1 - N^{-3/4+\beta}) \approx N^{-3/4+\beta}$, we see that $H$ given in (2.3) has the same law as $H_t$.
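The interpolation above preserves the variance of the matrix elements. The following quick numerical check (our sketch, with a standard normal variable playing the role of one rescaled entry of variance one) also verifies that $t = -\log(1 - s^2) \approx s^2$ for the relevant choice of $s$:

```python
import numpy as np

rng = np.random.default_rng(1)

N, beta = 10**6, 0.05
s2 = N ** (-0.75 + beta)       # variance of the Gaussian component, s^2 = N^(-3/4+beta)
t = -np.log1p(-s2)             # chosen so that 1 - e^{-t} = s^2; t ~ s^2 for large N

# One rescaled entry of H_t = e^{-t/2} H_0 + (1 - e^{-t})^{1/2} V keeps variance 1.
z0 = rng.normal(size=400_000)  # stand-in for an initial entry with variance 1
v = rng.normal(size=400_000)   # independent Gaussian component
zt = np.exp(-t / 2) * z0 + np.sqrt(1 - np.exp(-t)) * v
print(t / s2, zt.var())        # both close to 1
```

The sample variance of the evolved entry stays at one, reflecting the remark above that $\mathbb E\,\mathrm{Tr}\,H_t^2$ is conserved.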

Joint probability distribution of the eigenvalues
We will now analyze the eigenvalue distribution of $H_t$. Let $\lambda(t) = (\lambda_1(t), \lambda_2(t), \ldots, \lambda_N(t)) \in \mathbb R^N$ denote the eigenvalues of $H_t$. As $t \to \infty$, the Ornstein-Uhlenbeck process (3.1) converges to the standard GUE. The joint distribution of the GUE eigenvalues is given by a measure $\widetilde\mu$ which has a density with respect to Lebesgue measure given by
$$ \widetilde u(\lambda) = \frac{1}{Z}\, \prod_{i<j}(\lambda_i - \lambda_j)^2\; e^{-N \sum_{j} \lambda_j^2/2}, $$
where $Z$ is the normalization. This is the joint probability distribution of the eigenvalues of the standard GUE ensemble normalized in such a way that the matrix elements have variance $1/N$ (see, e.g. [25]). With this normalization convention, the bulk of the one point function (density) is supported in $[-2, 2]$ and in the $N \to \infty$ limit it converges to the Wigner semicircle law (2.7). For any finite time $t < \infty$ we will represent the joint probability density of the eigenvalues of $H_t$ as $f_t(\lambda)\, \widetilde u(\lambda)$, with $\lim_{t\to\infty} f_t(\lambda) = 1$. In particular, we write the joint distribution of the eigenvalues of the initial Wigner matrix as $f_0(\lambda)\, \widetilde\mu(d\lambda) = f_0(\lambda)\, \widetilde u(\lambda)\, d\lambda$.

The generator of Dyson's Brownian motion
The Ornstein-Uhlenbeck process (3.1) induces a stochastic process for the eigenvalues.
Let $L$ be the generator
$$ L = \sum_{j=1}^N \frac{1}{2N}\, \partial_j^2 + \sum_{j=1}^N \Big( -\frac{\lambda_j}{2} + \frac{1}{N} \sum_{k \ne j} \frac{1}{\lambda_j - \lambda_k} \Big)\, \partial_j \qquad (3.7)$$
acting on $L^2(\widetilde\mu)$ and let
$$ D(f) = -\int f\, Lf\, d\widetilde\mu = \sum_{j=1}^N \frac{1}{2N} \int (\partial_j f)^2\, d\widetilde\mu $$
be the corresponding Dirichlet form, where $\partial_j = \partial_{\lambda_j}$. Clearly $\widetilde\mu$ is an invariant measure for the dynamics generated by $L$.
Let the distribution of the eigenvalues of the Wigner ensemble be given by $f_0(\lambda)\widetilde\mu(d\lambda)$. We will evolve this distribution by the dynamics given by $L$:
$$ \partial_t f_t = L f_t. \qquad (3.9)$$
The corresponding stochastic differential equation for the eigenvalues $\lambda(t)$ is now given by (see, e.g. Section 12.1 of [19])
$$ d\lambda_i = \frac{dB_i}{\sqrt N} + \Big( -\frac{\lambda_i}{2} + \frac{1}{N} \sum_{j \ne i} \frac{1}{\lambda_i - \lambda_j} \Big)\, dt, \qquad 1 \le i \le N, \qquad (3.10)$$
where $\{B_i : 1 \le i \le N\}$ is a collection of independent Brownian motions, with initial condition $\lambda(0)$ distributed according to the probability measure $f_0(\lambda)\widetilde\mu(d\lambda)$. We remark that $\widetilde u(\lambda)$ and $f_t(\lambda)$ are symmetric functions of the variables $\lambda_j$ and $\widetilde u$ vanishes whenever two points coincide. By the level repulsion we also know that $f_0(\lambda)\widetilde u(\lambda)$ vanishes whenever $\lambda_j = \lambda_k$ for some $j \ne k$. We can label the eigenvalues according to their ordering, $\lambda_1 < \lambda_2 < \ldots < \lambda_N$, i.e. one can consider the configuration space
$$ \Xi^{(N)} = \{\lambda : \lambda_1 < \lambda_2 < \ldots < \lambda_N\} $$
instead of the whole $\mathbb R^N$. With an initial point in $\Xi^{(N)}$, the equation (3.10) has a unique solution and the trajectories do not cross each other, i.e. the ordering of the eigenvalues is preserved under the time evolution and thus the dynamics generated by $L$ can be restricted to $\Xi^{(N)}$; see, e.g. Section 12.1 of [19]. The main reason is that, near a coalescence point $\lambda_i = \lambda_j$, $i > j$, the generator in the difference variable $b = \lambda_i - \lambda_j$ is, after a time rescaling, the Bessel generator $\frac12 \partial_b^2 + \frac1b \partial_b$. The constant 1 in front of the drift term is critical for this Bessel process not to reach the boundary point $b = 0$. Note that the symmetric density function $\widetilde u(\lambda)$ defined on $\mathbb R^N$ can be restricted to $\Xi^{(N)}$ as
$$ u(\lambda) = N!\, \widetilde u(\lambda)\, \mathbf 1(\lambda \in \Xi^{(N)}). \qquad (3.12)$$
The density function of the ordered eigenvalues is thus $f_t(\lambda) u(\lambda)$ on $\Xi^{(N)}$. Throughout this paper, with the exception of Section 5.2, we work on the space $\Xi^{(N)}$, i.e., with the equilibrium measure $\mu(d\lambda) = u(\lambda)\, d\lambda$ with density $u(\lambda)$, and the density function $f_t(\lambda)$ will be considered restricted to $\Xi^{(N)}$.
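As a simple illustration of the SDE (3.10) and of the non-crossing property (a sketch of ours, using a plain Euler discretization that is adequate only while the particles stay well separated):

```python
import numpy as np

rng = np.random.default_rng(2)

def dbm_step(lam, dt, rng):
    """One Euler step of Dyson's Brownian motion:
    d lambda_i = dB_i/sqrt(N) + (-lambda_i/2 + (1/N) sum_{j!=i} 1/(lambda_i-lambda_j)) dt."""
    N = lam.size
    diff = lam[:, None] - lam[None, :]
    np.fill_diagonal(diff, np.inf)      # drop the j = i term (1/inf = 0)
    drift = -lam / 2 + np.sum(1.0 / diff, axis=1) / N
    return lam + drift * dt + rng.normal(size=N) * np.sqrt(dt / N)

lam = np.linspace(-2.0, 2.0, 10)        # ordered initial configuration
for _ in range(200):
    lam = dbm_step(lam, 1e-4, rng)
print(np.all(np.diff(lam) > 0))
```

With well separated particles and a small step size, the logarithmic repulsion keeps the ordering intact, mirroring the restriction of the dynamics to $\Xi^{(N)}$.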

Good global configurations
Several estimates in this paper rely on the fact that the number of eigenvalues $\mathcal N_I$ in intervals $I$ with length much larger than $1/N$ is given by the semicircle law [15]. In this section we define the set of good global configurations, i.e. the event that the semicircle law holds on all subintervals, in addition to a few other typical properties. Let
$$ \varrho(x) = \frac1N \sum_{j=1}^N \delta(x - \lambda_j) $$
be the empirical density of the eigenvalues. For an interval $I = [a, b]$ we introduce the notation
$$ \mathcal N_I = \#\{ j : \lambda_j \in I \} $$
for the number of eigenvalues in $I$. For the interval $[E - \eta/2, E + \eta/2]$ of length $\eta$ and centered at $E$ we will also use the notation $\mathcal N_\eta(E) = \mathcal N_{[E - \eta/2, E + \eta/2]}$. Let
$$ \omega_\eta(x) = \frac1N \sum_{j=1}^N \theta_\eta(x - \lambda_j), \qquad \theta_\eta(x) = \frac1\pi\, \frac{\eta}{x^2 + \eta^2}, $$
be the empirical density smoothed out on scale $\eta$. Furthermore, let
$$ m(z) = \frac1N \sum_{j=1}^N \frac{1}{\lambda_j - z}, \qquad z = x + iy, $$
be the Stieltjes transform of the empirical eigenvalue distribution and
$$ m_{sc}(z) = \frac{-z + \sqrt{z^2 - 4}}{2} $$
be the Stieltjes transform of the semicircle law. The square root here is defined as the analytic extension (away from the branch cut $[-2, 2]$) of the positive square root on large positive numbers.
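For concreteness, the two Stieltjes transforms just introduced can be evaluated as follows (a small numerical sketch of ours; the branch is implemented so that $\sqrt{z^2-4} \sim z$ for large $z$, as required):

```python
import numpy as np

def m_sc(z):
    """Stieltjes transform of the semicircle law: (-z + sqrt(z^2 - 4)) / 2,
    with sqrt(z - 2) * sqrt(z + 2) realizing the branch cut on [-2, 2]."""
    return (-z + np.sqrt(z - 2) * np.sqrt(z + 2)) / 2

def m_emp(eigs, z):
    """Stieltjes transform (1/N) sum_j 1/(lambda_j - z) of the empirical measure."""
    return np.mean(1.0 / (np.asarray(eigs) - z))

# pi^{-1} Im m_sc(E + i eta) recovers the semicircle density as eta -> 0:
E, eta = 0.5, 1e-4
print(m_sc(E + 1j * eta).imag / np.pi, np.sqrt(4 - E**2) / (2 * np.pi))
```

This is the identity $\omega_y(x) = \pi^{-1}\,\mathrm{Im}\, m(x+iy)$ in action: taking the imaginary part of the Stieltjes transform at distance $y$ from the real axis smooths the (empirical or limiting) density on scale $y$.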
Clearly $\omega_y(x) = \pi^{-1}\,\mathrm{Im}\, m(x + iy)$ for $y > 0$. We will need an improved version of Theorem 4.1 from [15] that is also applicable near the spectral edges. The proof of the following theorem is given in Appendix A.
(i) For any $q \ge 1$ we have an estimate with a constant $C_q$ independent of $x$ and $y$.
(ii) Assume that $|x| \le K$ for some $K > 0$. Then there exists $c > 0$ such that the stated bound holds for all $\delta > 0$ small enough and all $N$ large enough (independently of $\delta$). Consequently, we have a moment estimate with some $q$-dependent constant $C_q$, and moreover a further bound holds for all $N$ large enough (independently of $x$, $y$).
(iii) Assuming $|x| \le K$ and that $N|y|\,\big|2 - |x|\big| \ge (\log N)^2$, we also have a stronger bound. As a corollary to Theorem 4.1, the semicircle law for the density of states holds locally on very short scales. The next proposition can be proved, starting from Theorem 4.1, exactly as Eq. (4.3) was shown in [13].
Proposition 4.1 Assuming (2.4)-(2.6), for any sufficiently small $\delta$ and for any $\eta^*$ in the appropriate range, the local semicircle law holds. We also need an estimate directly on the number of eigenvalues in a certain interval, but this will be needed only away from the spectral edge. The following two results estimate the deviation of the normalized empirical counting function $\frac1N \mathcal N[-\infty, E] = \frac1N \#\{\lambda_j \le E\}$ and of its expectation from the distribution function of the semicircle law, defined as
$$ N_{sc}(E) = \int_{-\infty}^{E} \varrho_{sc}(x)\, dx. \qquad (4.12)$$
Proposition 4.2 Assume that the Wigner matrix ensemble satisfies conditions (2.4)-(2.6). Let $\kappa > 0$ be fixed. For any $0 < \delta < 1$ and $|E| \le 2 - \kappa$, we have the bound (4.13) with $\kappa$-dependent constants. Moreover, there exists a constant $C > 0$ such that (4.14) holds. The proof of this proposition will be given in Appendix B.
Next we define the good global configurations; the idea is that good global configurations are configurations for which the semicircle law holds up to scales of order $(\log N)^4/N$ (and some further technical conditions are also satisfied). By Proposition 4.1 and Proposition 4.2, we will see that the set of these configurations has, asymptotically, full measure. As a consequence, we will be able to neglect all configurations that are not good.
with some small constants $0 < \varepsilon, \gamma \le \frac{1}{10}$ and $m = 0, 1, 2, \ldots, \log N$. Here $[x]$ denotes the integer part of $x \in \mathbb R$. The intersection $\Omega$ of these events, within this range of $m$'s, will be called the set of good global configurations.

Proof. The probability of $\Omega^{(m)}$ was estimated in (4.17). The probability of the second event in (4.18) can be estimated by (4.13) from Proposition 4.2 and from $N_{sc}(0) = 1/2$. The third event is treated by the large deviation estimate on $\mathcal N_I$ for any interval $I$ with length $|I| \ge (\log N)^2/N$ (see Theorem 4.6 from [15]; note that there is a small error in the statement of this theorem, since the conditions $y \ge (\log N)/N$ and $|I| \ge (\log N)/N$ should actually be replaced by the stronger assumptions $y \ge (\log N)^2/N$ and $|I| \ge (\log N)^2/N$, which are used in its proof). The fourth event is a large deviation of the largest eigenvalue, see, e.g. Lemma 7.4 in [13]. □

For good configurations, the locations of the eigenvalues are close to their equilibrium locations given by the semicircle law. The following lemma contains the precise statement; it will be proven in Appendix C.

Lemma 4.3 Let $\lambda_1 < \lambda_2 < \ldots < \lambda_N$ denote the eigenvalues in increasing order and let $\kappa > 0$. Then on the set $\Omega$ and if $N \ge N_0(\kappa)$, it holds that
$$ |\lambda_a - N_{sc}^{-1}(aN^{-1})| \le C \kappa^{-1/2}\, n^{-\gamma/6} \qquad (4.21)$$
for any $N\kappa^{3/2} \le a \le N(1 - \kappa^{3/2})$ (recall the definition of $N_{sc}$ from (4.12)).

Bound on the level repulsion and potential for good configurations
Lemma 4.4 On the set $\Omega$ and with the choice of $n$ given in (4.15), we have the estimate (4.23) and the bound (4.24) with respect to any Wigner ensemble satisfying the conditions (2.4) and (2.5).

Proof. First we partition the interval $[-2 + \kappa, 2 - \kappa]$ into the subintervals $I_r$ that have already been used in the proof of Lemma 4.3. On the set $\Omega$ we have the bound (4.26) on the number of eigenvalues in each interval $I_r$. We estimate (4.23) by a dyadic decomposition according to the size of $N|\lambda_j - \lambda_\ell| \sim 2^k$, where the star in the first summation indicates a restriction to $N\kappa^{3/2} \le j < \ell \le (1 - \kappa^{3/2})N$. By (4.26), for any fixed $r$, the summation over $\ell$ with $\lambda_\ell \in I_r$ contains at most $Cn^{\gamma}$ elements. The summation over $j$ contains at most $Cn^{\gamma}$ elements if $k < 0$, since in this case $\lambda_\ell \in I_r$ and $|\lambda_j - \lambda_\ell| \le 2^k N^{-1}$. If $k \ge 0$, then the $j$-summation has at most $C(2^k + n^{\gamma})$ elements, since in this case $\lambda_j \in \{I_s : |s - r| \le C \cdot 2^k n^{-\gamma} + 1\}$. Thus we can continue the above estimate as in (4.28). The second sum is bounded by $Cn^{3\gamma}$. In the first sum, we use the level repulsion estimate by decomposing $I_{r-1} \cup I_r \cup I_{r+1} = \cup_m J_m$ into intervals of length $2^{k+2}N^{-1}$ that overlap at least by $2^{k+1}N^{-1}$. Using the level repulsion estimate given in Theorem 3.4 of [15] (here the condition (2.5) is used), this completes the proof of (4.23).
For the proof of (4.24), we note that after using (4.23) it is sufficient to bound the contribution of the event N|λ_j − λ_ℓ| ≥ 1. Inserting the partition (4.25) and recalling the choice of n completes the proof of Lemma 4.4. □

5 Global entropy

5.1 Evolution of the entropy
Recall the definition of the entropy of fµ with respect to µ and let f_t solve (3.9). The evolution of the entropy is given by an explicit equation; since S(f_t) > 0, this yields a bound on the time-integrated Dirichlet form. For dynamics with energy H satisfying a convexity condition with some constant Λ, the following Bakry-Emery inequality [2] holds (notice the additional N factor, due to the N^{−1} in front of the second order term in the generator L, see (3.7)). This implies the logarithmic Sobolev inequality for any probability density g with respect to µ. Moreover, the Dirichlet form is a decreasing function of time, and we thus have a bound for any t > s. In our setting, the convexity condition (5.5) holds as a matrix inequality away from the singularities (see the remark below on how to treat the singular set).
This tells us that S(f_t) in (3.9) decays exponentially once t ≫ 1. But for any fixed time t ∼ 1, the entropy is still of the same order as the initial one. Note that t ∼ 1 is the case considered in Johansson's work [21].
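The exponential entropy decay predicted by the Bakry-Emery / logarithmic Sobolev mechanism can be checked in a toy one-dimensional model where everything is explicit. In the sketch below we take the Ornstein-Uhlenbeck dynamics with H(x) = x²/2 (so Hess H = 1 = Λ) and Gaussian initial data; the model and the initial parameters m0, s0 are our illustrative choices, not objects from the paper.

```python
import numpy as np

# Toy Bakry-Emery setting: H(x) = x^2/2, Hess H = 1 = Lambda, invariant
# measure mu = N(0,1); the Ornstein-Uhlenbeck flow started from N(m0, s0^2)
# stays Gaussian, so the relative entropy S(f_t mu | mu) is explicit.
def relative_entropy_gauss(m, s2):
    # S( N(m, s2) | N(0, 1) ) = (m^2 + s2 - 1 - log s2) / 2
    return 0.5 * (m**2 + s2 - 1.0 - np.log(s2))

def entropy_at(t, m0=1.0, s0sq=2.0):
    mt = m0 * np.exp(-t)                        # mean relaxes like e^{-t}
    s2t = 1.0 + (s0sq - 1.0) * np.exp(-2 * t)   # variance relaxes like e^{-2t}
    return relative_entropy_gauss(mt, s2t)

S0 = entropy_at(0.0)
for t in [0.5, 1.0, 2.0, 4.0]:
    # log-Sobolev prediction: S(f_t) <= e^{-2 Lambda t} S(f_0)
    print(t, entropy_at(t), np.exp(-2 * t) * S0)
```

In this toy model the log-Sobolev bound S(f_t) ≤ e^{−2Λt} S(f_0) can be verified term by term; after a time of order one the entropy is visibly smaller, mirroring the discussion above.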
The proof of (5.5) and the application of the Bakry-Emery condition in (5.6) require further justification. Typically, the Bakry-Emery condition is applied to Hamiltonians H defined on spaces without boundary. Although the Hamiltonian H in (3.5) is defined on R^N, it is convex only away from the coalescence points λ_i = λ_j for some i ≠ j; the Hessian of the logarithmic terms has a Dirac delta singularity with the wrong (negative) sign whenever two particles overlap. In accordance with the convention that we work on the space Ξ^(N) throughout the paper, we consider H restricted to Ξ^(N), where it is convex, i.e. (5.5) holds, but we have to verify that the Bakry-Emery result still applies. We review the proof of Bakry and Emery and show that the contribution of the boundary term is zero.

Recall the invariant measure exp(−H)dλ and the dynamics
assuming that the boundary term in the integration by parts vanishes. To see (5.9), consider a segment λ_i = λ_{i+1} of the boundary ∂Ξ. From the explicit representation (5.11), (5.12) in the next section, we will see that f_t ≥ 0 is a meromorphic function in each variable in the domain Ξ for any t > 0. By (5.11) it can be represented in the form (λ_{i+1} − λ_i)^β F, where F is analytic and 0 < F < ∞ near λ_i = λ_{i+1}. Since f_t ≥ 0, the exponent β is non-negative and even. Therefore f_t √f_t e^{−H} vanishes at the boundary due to the factor (λ_{i+1} − λ_i)^2 in e^{−H}, i.e. the integral (5.9) indeed vanishes.

5.2 Bound on the entropy
with C depending on α.
Proof. In the proof we consider the probability density u(λ) and the equilibrium measure µ extended to R^N (see (3.12)), i.e. the eigenvalues are not ordered. Clearly S(f_s µ | µ) = S(f_s µ̃ | µ̃), and we estimate the relative entropy of the extended measures.
Given the density f_0(λ)µ(dλ) of the eigenvalues of the Wigner matrix as an initial distribution, the eigenvalue density f_s(λ) for the matrix evolved under Dyson's Brownian motion is given by (5.12), where c = c(s) = e^{−s/2} for brevity. The derivation of (5.12) closely follows Johansson's presentation of the Harish-Chandra/Itzykson-Zuber formula (see Proposition 1.1 of [21]), with the difference that in our case the matrix elements evolve by the Ornstein-Uhlenbeck process (3.1) instead of Brownian motion.
In particular, formula (5.12) implies that f_s is an analytic function for any s > 0, since it can be written in terms of an explicit analytic function h_s(λ). Since the determinant is analytic in λ, we see that f_s(λ) is meromorphic in each variable, and the only possible poles of f_s(λ) come from the Vandermonde factors (λ_i − λ_j)^{−1}. Since f_s is a non-negative function, it cannot have a singularity of order −1; thus each such singular factor is cancelled by a factor (λ_i − λ_j) from the integral. Alternatively, using the Laplace expansion of the determinant, one can explicitly see that each 2 by 2 subdeterminant from the i-th and j-th columns carries a factor ±(λ_i − λ_j). Then, by the Jensen inequality, from (5.11) and from the fact that f_0(ν)u(ν) is a probability density, we obtain the estimate (5.13). Expanding this last expression we find an exact cancellation of the term (N/2) log(2π). For the determinant term, we use that each entry is at most one. The last term in (5.13) can be estimated using Stirling's formula and Riemann integration; the (1/2)N² log N terms then cancel. For the N² terms we need the following approximation.

Lemma 5.2 With respect to any Wigner ensemble whose single-site distribution satisfies (2.4)-(2.6) and for any α > 1/4 we have the stated approximation, where the constant in the error term depends on α and on the constants in (2.4)-(2.6).
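The Stirling-formula and Riemann-integration step mentioned above can be sanity-checked numerically. The script below is an illustration with sizes chosen by us: it verifies Stirling's formula and the leading (1/2)N² log N behavior of Σ_j log j!, which is the kind of term whose cancellation is used in (5.13).

```python
import math

# Stirling: log N! = N log N - N + (1/2) log(2 pi N) + O(1/N)
for N in (10, 100, 1000):
    exact = math.lgamma(N + 1)
    stirling = N * math.log(N) - N + 0.5 * math.log(2 * math.pi * N)
    print(N, exact - stirling)          # decreases like 1/(12 N)

# Riemann-integration heuristic: sum_{j<=N} log j! has leading term
# (1/2) N^2 log N, since sum_j j log j ~ int_0^N x log x dx.
N = 2000
s = sum(math.lgamma(j + 1) for j in range(1, N + 1))
leading = 0.5 * N * N * math.log(N)
print(s / leading)                      # tends to 1, slowly (~ 1 - 3/(2 log N))
```

The slow convergence of the ratio reflects the N² correction term, which is exactly the size of term handled by Lemma 5.2.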
Note that (2.6), (2.5) hold for both the initial Wigner ensemble with density f 0 and for the evolved one with density f t . These conditions ensure that Theorem 3.5 of [15] is applicable.
Proof of Lemma 5.2. The quadratic term can be computed explicitly using (3.4): The second (determinant) term will be approximated in the following lemma whose proof is postponed to Appendix D.

Lemma 5.3
With respect to any Wigner ensemble whose single-site distribution satisfies (2.4)-(2.6) and for any α > 1/4 we have the stated approximation. Finally, an explicit calculation then shows the remaining identity, and this proves Lemma 5.2. □

Hence, continuing the estimate (5.13), we have the bound, where we used Lemma 5.2 both for the initial Wigner measure and for the evolved one, and finally we used that E Tr H² is preserved, see (3.4). This completes the proof of (5.10). □

6 Local equilibrium

6.1 External and internal points
by using (5.10). Recall that the eigenvalues are ordered, λ_1 < λ_2 < . . . < λ_N. Let L ≤ N − n (n was defined in (4.15)) and define the internal and external points, again in increasing order (Ξ was defined in (3.11)). We let J_L denote the index set of the y's. We will refer to the y's as external points and to the x_j's as internal points. Note that the indices are chosen such that for any j we have y_k < x_j for k < 0 and y_k > x_j for k > 0. In particular, for any fixed L, we can split any y ∈ Ξ^(N−n) as y = (y_−, y_+), where y_− := (y_{−L}, y_{−L+1}, . . . , y_{−1}) and y_+ := (y_1, y_2, . . . , y_{N−L−n}). The set Ξ^(N−n) with a splitting mark after the L-th coordinate will be denoted by Ξ_L^(N−n). For a fixed L we will often consider the expectation of functions O(y) on Ξ^(N−n) with respect to µ or fµ; this will always mean the marginal probability. Let f_y^L, defined in (6.3), be the conditional density of x given y with respect to the conditional equilibrium measure. Here f_y^L also depends on the time t, but we will omit this dependence in the notation. Note that for any fixed y ∈ Ξ^(N−n), any value x_j lies in the interval I_y := [y_{−1}, y_1], i.e. the functions u_y(x) and f_y(x) are supported on the set I_y^n. Now we localize the good set Ω introduced in Definition 4.1. For any fixed L and y we define the localized good set Ω_1 = Ω_1(L), see (6.7). Here P_f(Ω_1) is a short-hand notation for the marginal expectation, i.e.
but we will neglect this distinction. Note that y ∈ Ω_1 also implies, for large N, that there exists an x ∈ I_y^n such that (y_−, x, y_+) ∈ Ω. This ensures that those properties of λ ∈ Ω that are determined only by the y's will be inherited by the y's. E.g., y ∈ Ω_1 will guarantee that the local density of the y's is close to the semicircle law on each interval away from I_y. More precisely, note that for any interval I = [E − η*_m/2, E + η*_m/2] of length η*_m = 2^m n^γ N^{−1} and center E, |E| ≤ 2 − κ, that is disjoint from I_y, we have the density bound by (4.16). Moreover, for any interval I with |I| ≥ n^γ N^{−1} we have the upper bound by (4.18). Using (4.21) and (4.22) from Lemma 4.3 on the set Ω (see (4.18)), for any y ∈ Ω_1(L) we have in particular

|y_{−1}|, |y_1| ≤ 2 − κ/2 and |I_y| ≤ Cn/N   (6.14)

with C = C(κ). Next we define the set Ω_2(L) with some large constant K. On the set Ω we have |I_y| ≤ Kn/N (see (6.14)), thus Π^c

6.2 Localization of the Dirichlet form
For any L ≤ N − n and any y ∈ Ξ_L^(N−n), we define the localized Dirichlet form associated with y. Hence from (6.1) we have the inequality, where the expectation E_{f_t} is defined similarly to (6.4), with f replaced by f_t. In the first inequality in (6.17), we used the fact that, by (6.5) and (6.6), when we sum over all L ∈ {Nκ^{3/2}, . . . , N(1 − κ^{3/2})} as on the l.h.s. of (6.17), every local Dirichlet form is counted at most n times, so we obtain the total Dirichlet form with a multiplicity of at most n. We define the set G_1 of indices with small localized Dirichlet form; then the above inequality guarantees a lower bound on the cardinality of G_1. For L ∈ G_1, we define

6.3 Local entropy bound
Suppose that L ∈ G_1 and fix it. For any y ∈ Ξ_L^(N−n), the Hessian of the local Hamiltonian satisfies a lower bound, as a matrix inequality, for any x ∈ I_y^n. On the set y ∈ Ω_2(L) we have a lower bound on the infimum over x ∈ I_y of the sum over k ∈ J_L. We can apply the logarithmic Sobolev inequality (5.3) to the local measure µ_y, taking into account Remark 5.1. Thus, for µ = µ_y and f = f_y, we also have the local entropy bound. We will choose t = N^{−1}τ with τ = N^β such that (6.27) holds, i.e. β ≥ 10ε + α.

6.4 Good external configurations
Definition 6.1 The set of good L-indices is defined by the following conditions. Notice that for any fixed L these quantities can be written explicitly; similar formulae hold when λ_L is replaced with λ_{L+n+1} and y_{−1} with y_1.

6.5 Bounds in equilibrium
In this section we translate the bounds in the second and third lines of (6.31) into similar bounds with respect to equilibrium, using that the control on the local Dirichlet form also controls the local entropy for the good indices; this yields (6.33). Moreover, we also have (6.34).

Proof. Let O : R^n → R be any observable and Ω_y be any event. Then for any fixed y ∈ Ξ^(N−n) we have the corresponding bound by the entropy inequality (6.25). If L ∈ G and y ∈ Ω_2(L), then we have entropy control by (6.26). For a given y ∈ Y_L, we set the observable with ∥O∥_∞ ≤ Cn^{Ap+1} ≤ cn^{2A+1}. Then, for τ ≥ n^{4A+8} N^α, we obtain the main estimate from (6.31) and (6.35). On the complement set Ω_y^c we just use the crude supremum bound together with the bound on P_{f_y}(Ω_y^c) in the definition of Ω_1 (6.7). Combining the last two estimates proves (6.33). The proof of (6.34) is analogous; here we use that the corresponding observable has an L^∞ bound. This completes the proof of Lemma 6.1. □

7 Cutoff Estimates
In this section, we cut off the interaction with the far away particles. We fix a good index L ∈ G and a good external point configuration y ∈ Y_L. Consider the measure µ_y = e^{−H_y}/Z_y. The measure µ_y is supported on the interval I_y = (y_{−1}, y_1). For any fixed y, we decompose the potential, where B is a large positive number with Bε < 1/2, and we define the cutoff measure. Lemma 7.1 below, whose conclusion is (7.6), implies that one can cut off all y_k's in the potential with |k| ≥ n^B.
Proof. By (6.15) and y ∈ Y_L we have the required bound. In Lemma 7.2 we will give an upper bound on ∥V_2′∥_∞, and then, for B ≥ 20, we obtain (7.6). □

Lemma 7.2 For B ≥ 20 and for any L ∈ G_1, y ∈ Y_L we have the bound (7.7).

Proof. Recall that y ∈ Y_L ⊂ Ω_1 implies that the density of the y's is close to the semicircle law in the sense of (6.9). Let d be defined as in (7.8). Since y ∈ Ω_1, we know that |y_{−1}|, |y_1| ≤ 2 − κ/2 (see (6.14)), thus ̺_sc(y_{−1}) ≥ c > 0. Taking the imaginary part of (4.3) for |z| ≤ 2 and renaming the variables, we obtain an identity for the semicircle density; therefore, to prove (7.7) it is sufficient to show that the difference between the sum over the y_k's and the corresponding integral ∫ ̺_sc(y)/(x − y) dy is at most Cn^{γ/12−B/8} (7.9). We will consider only k ≥ n^B and compare the sum with the integral on the regime y ≥ ȳ + d; the sum for k ≤ −n^B is similar. Define dyadic intervals I_m. Since y ∈ Y_L ⊂ Ω_1, i.e. max |y_k| ≤ K, there will be no y_k above the last interval I_{log N}. We subdivide each I_m into n^{B/2} equal disjoint subintervals of length 2^m d n^{−B/2}. For y ∈ Y_L ⊂ Ω_1, the estimate (4.22) holds for y_1 and y_{n^B}, i.e.
if B ≥ 20, which yields the claimed bound by using the definition of d from (7.8), the fact that ̺_sc(y_{±1}) is separated away from zero, and that |I_y| ≤ CnN^{−1} from (6.14). Therefore we can estimate the sum as in (7.12). To see the last estimate, notice that in the first summand we have ȳ + d ≤ y_j ≤ y_{n^B} ≤ ȳ + d + Cn^{4B/5}N^{−1} by (7.11), i.e. all these y_j's lie in an interval of length Cn^{4B/5}N^{−1}, so their number is bounded by Cn^{4B/5} by (6.10). Thus the first term on the right hand side of (7.12) is bounded by Cn^{4B/5}N^{−1}d^{−1} ≤ Cn^{1−B/5}; the estimate of the second term is similar.
Using that the maximum over y ∈ I_{m,ℓ} of the integrand is comparable to its value at a reference point, we obtain (7.13). In the second line we used that N(I_{m,ℓ}) ≤ KN|I_{m,ℓ}| by (6.10), since y ∈ Ω_1 and I_{m,ℓ} ∩ I_y = ∅. Since y ∈ Ω_1, we can apply (6.9) for I = I_{m,ℓ}, and we obtain the refined estimate, where we used that |I_{m,ℓ}| = 2^m d n^{−B/2} ≤ C·2^m n^{B/2} N^{−1} (see (7.8)) and that |x − y*_{m,ℓ}| ≥ 2^{m−1} d ≥ c·2^m n^B N^{−1}.
Finally, the second term on the left hand side of (7.14) is a Riemann sum of the integral in (7.9), with a controllable error (7.15). Combining (7.12), (7.13), (7.14) and (7.15), we have proved (7.9), which completes the proof of Lemma 7.2. □
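The mechanism of Lemma 7.2, namely replacing a sum over well-spaced points y_k by an integral against the semicircle density away from a small excluded window, can be illustrated with semicircle quantiles. Everything below (the values of N, x and d) is an illustrative choice of ours, not the paper's parameters.

```python
import numpy as np

# Points y_k at the quantiles of the semicircle law (density ~ N rho_sc);
# compare (1/N) sum_k 1/(y_k - x) with int rho_sc(y)/(y - x) dy, both with
# a window |y - x| < d removed, mimicking the sum-integral comparison.
N = 50000
grid = np.linspace(-2, 2, 400001)
rho = np.sqrt(np.maximum(4 - grid**2, 0.0)) / (2 * np.pi)
dx = grid[1] - grid[0]
cdf = np.cumsum(rho) * dx
cdf /= cdf[-1]
yk = np.interp((np.arange(N) + 0.5) / N, cdf, grid)   # semicircle quantiles

x, d = 0.5, 0.05
pts = np.abs(yk - x) >= d                 # exclude nearby points ...
s = np.sum(1.0 / (yk[pts] - x)) / N
reg = np.abs(grid - x) >= d               # ... and the matching window
integral = np.sum(rho[reg] / (grid[reg] - x)) * dx
print(s, integral)   # both close to the principal value -x/2 = -0.25
```

The agreement degrades as d shrinks relative to the point spacing 1/(Nρ), which is why the dyadic decomposition above tracks the distance to the excluded region scale by scale.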

8 Derivative Estimate of Orthogonal Polynomials
In the next few sections, we will prove the boundedness and the short-distance regularity of the density. Our proof follows the approach of [26] (cf. Lemmas 3.3 and 3.4 in [26]), but the estimates are carried out differently due to the singularity of the potential. For the rest of this paper, it is convenient to rescale the local equilibrium measure to the interval [−1, 1], as we now explain.
Suppose L ∈ G and y ∈ Y_L. We change variables by introducing the rescaling transformation T and its inverse, so that T(I_y) = [−1, 1]. Let µ̃_y be the image of the measure µ_y under this rescaling. The ℓ-point correlation functions of µ_y and µ̃_y are related by the corresponding change of variables. Let p_j(λ), j = 0, 1, . . ., denote the real orthonormal polynomials on [−1, 1] corresponding to the weight function e^{−nU_y(λ)}, i.e. deg p_j = j and ∫_{−1}^{1} p_j(λ)p_k(λ) e^{−nU_y(λ)} dλ = δ_{jk}, and define

ψ_j(λ) := p_j(λ) e^{−nU_y(λ)/2}   (8.4)

to be orthonormal functions with respect to the Lebesgue measure on [−1, 1]. Everything depends on y, but y is fixed in this section and we will omit this dependence from the notation. We define the n-th reproducing kernel K_n; the density is given by ̺_n(x) = K_n(x, x)/n, and the general ℓ-point correlation function is given by a determinant of this kernel, following the standard identities for orthogonal polynomials. For the rest of the paper we drop the tilde, and all variables will denote the rescaled ones, i.e. all x variables will lie in the interval [−1, 1]. All integrals in this section are understood on [−1, 1]. The basic ingredients of the approach of [26] can be described as follows. Suppose that the two properties (8.9) and (8.10) hold for the normalized function ψ = ψ_j, j = n − 1, n, with some positive exponents σ, δ, ε̃, where σ < 1. We will take δ = 1/4, the same choice as in [26]. Let ψ̄ be the average of ψ on the interval |x − x_0| ≤ ℓ, with some x_0, |x_0| ≤ 1 − κ, and ℓ ≤ κ/2. Using (8.10) to estimate |ψ̄| ≤ Cℓ^{−1/2} n^{(σ−δ)/2} (under the assumption that ℓ < n^{−δ}) and using (8.9), we obtain

|ψ(x_0)| ≤ Cℓ^{−1/2} n^{(σ−δ)/2} + Cn^{1+ε/2} ℓ^{1/2}.
Once (8.12) is established with some small power ε″, it will follow that |̺′_n(x)| = o(n), and this proves the regularity of the density over distances of order 1/n. Together with the fact that the density is well approximated by the semicircle law on scales bigger than 1/n, this will show that the density is pointwise close to the semicircle law. In [26] the regularity of the density on larger scales followed from the smoothness of the potential (Theorem 2.2 of [26]). In our case it follows from (6.34), which is a consequence of the fact [15] that the semicircle law is accurate on scales slightly larger than 1/N, corresponding to scales bigger than 1/n after rescaling.
In proving (8.9), (8.10) and (8.12), one basic assumption in [26] is that the potential belongs to C^{2+ν} for some ν > 0. The potential of our probability measure (8.2), parametrized by the boundary conditions y, is singular near the boundary points {±1}. In order to control these singularities, besides using some special properties of orthogonal polynomials, we rely on [15] via (6.33) to provide essential estimates such as level repulsion. It turns out that, following this idea, we can only establish (8.9) and (8.10) for ψ_j, j ≤ n − 1. The case j = n has to be treated completely differently. We now start to prove (8.9) for ψ_j, j = n − 1, n − 2.
Lemma 8.1 Suppose that L ∈ G, y ∈ Y_L and, after the rescaling that sets y_{−1} = −1, y_1 = 1, let the y-configuration satisfy (8.13) (note that the boundary terms k = ±1 are not included in the summations). Furthermore, assume that the density ̺_n satisfies (8.14) for some A ≥ 60B. Then for the orthonormal functions ψ_j from (8.4) we have the derivative bound (8.15). Notice that the assumptions (8.13) and (8.14) follow from (6.31) and (6.33).
In this section and in the subsequent Sections 9 and 10 we work with orthogonal polynomials on [−1, 1] with respect to the potential U_y(x) (see (8.2)). For brevity, we set V(x) = U_y(x) in these three sections, and we make the convention that the summation over the index k labelling the elements of the external configuration y always runs over the integers 1 ≤ |k| < n^B, unless otherwise indicated.
Proof. For simplicity, let p(x) = p_j(x) and ψ(x) = ψ_j(x). Note that e^{−nV(x)} vanishes at the boundary x = ±1, so the boundary term vanishes in the integration by parts. Since p(x) is an orthogonal polynomial, it is orthogonal to all polynomials of lower degree, thus the first integral vanishes. By the Schwarz inequality, the second integral is bounded, and so is the last integral. From (8.13) and the normalization of ψ we have the corresponding estimate. To control the term I_1, we separate the integration regimes |x ± 1| ≤ n^{−A} and −1 + n^{−A} ≤ x ≤ 1 − n^{−A} for some big constant A. In the inside regime, we can use |ψ(x)|² = |ψ_j(x)|² ≤ n̺_n(x), since j ≤ n − 1, and from (8.14) we obtain the required bound. To estimate the singular part of the integral in I_1 near the boundary points, we can focus on one endpoint, the estimate at the other endpoint being similar. Let g be defined accordingly and notice that g(x) is a polynomial of degree deg g ≤ 2n^{2B} + n. We use the Nikolskii inequality (see, e.g., Theorem A.4.4 of [5]) with some universal constant C. Here ∥g∥_p is defined as (∫_{−1}^{1} |g(x)|^p dx)^{1/p} for any 0 < p < ∞.
Notice that the Nikolskii inequality holds between L^p spaces even with exponents p < 1. By the Hölder inequality and (8.22), we have ∥g∥_4 ≤ Cn^{15B}, and a further application of the Hölder inequality, together with (8.18), completes the proof. □

9 Bound on smeared-out orthogonal polynomials

Lemma 9.1 Let κ, δ_0 > 0 be arbitrary positive numbers. Let L ∈ G, y ∈ Y_L, suppose that the y-configuration satisfies (8.13), (8.14), and assume that the density satisfies ̺_n(x) ≥ δ_0 > 0 for all |x| ≤ 1 − κ. Let ψ = ψ_{n−1} or ψ_{n−2} be an orthonormal function. Then we have the smeared-out bound (9.1), with a constant C depending on κ and δ_0.
Proof. For any z = u + iη ∈ C with η > 0, let m_n(z) denote the Stieltjes transform of the density ̺_n, and let p̃ denote the truncated correlation function, where p_n^{(2)} was defined in (8.3) and is computed from (8.8). We will again drop the tilde in this proof.
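The Stieltjes transform used in this proof can be illustrated on the semicircle law itself, where it is explicit: m_sc(z) = (−z + √(z² − 4))/2 solves m² + zm + 1 = 0 with Im m > 0 in the upper half plane. The following sketch is our illustrative check, not part of the argument; it compares a direct numerical integral with the closed form and verifies the self-consistent equation.

```python
import numpy as np

# m_sc(z) = int rho_sc(x)/(x - z) dx solves m^2 + z m + 1 = 0, with the
# root chosen so that Im m > 0 for Im z > 0 (equivalently m ~ -1/z at infinity).
def m_sc(z):
    r = np.sqrt(z * z - 4.0 + 0j)
    m = (-z + r) / 2
    return m if m.imag > 0 else (-z - r) / 2

x = np.linspace(-2, 2, 200001)
dx = x[1] - x[0]
rho = np.sqrt(np.maximum(4 - x**2, 0.0)) / (2 * np.pi)

z = 0.3 + 0.5j
direct = np.sum(rho / (x - z)) * dx     # direct numerical integration
m = m_sc(z)
print(direct, m)                        # the two agree
print(abs(m * m + z * m + 1))           # ~ 0: the self-consistent equation
```

The imaginary part of the Stieltjes transform recovers (π times) the density as η → 0, which is the mechanism used repeatedly in this section.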
We have an identity, which follows from expressing ̺_n as an integral of the equilibrium measure over n − 1 variables and then integrating by parts (see also (2.81) of [26]). Hence, by using (8.6), we can bound the last integral, where we have used the Christoffel-Darboux formula; this proves (9.6). We define a new measure µ_y^− on [−1, 1]^{n−1}, where we already omitted the tildes and recall that V(x) = U_y(x). Note that this measure differs from (8.1), written in n − 1 variables, in that we kept the prefactor n in front of the potential. Define ̺_n^− and note that it can be expressed through the ψ_j's defined in (8.4); this latter formula follows from the recursive relation of the correlation functions for GUE-like ensembles. Let m_n^−(z) be the Stieltjes transform of ̺_n^−; then we have the analogue of (9.6). Subtracting this from (9.6), we obtain (9.7). Assume that u = Re z satisfies |u − x_0| ≤ n^{−1/4}. By adding n(m_n(z) − m_n^−(z))V′(u) to both sides of (9.7), we obtain (9.8). We divide the integral into the regimes |x − x_0| ≤ ν/2 and |x − x_0| ≥ ν/2. In the first integration regime, since |x| ≤ 1 − ν/2 and |u| ≤ 1 − ν/2, we have |y_k − u| ≥ ν/2 for any k. Thus, by (8.13), the prefactor in (9.8) is bounded, uniformly in |x| ≤ 1 − ν/2, by a constant C depending on ν; recall that y_{−1} = −1, y_1 = 1 in the rescaled variables.
In the second integration regime we use |x − u| ≥ |x − x_0| − |x_0 − u| ≥ ν/4, and we use (8.15) and the Hölder inequality to estimate the first term in the last line, and (8.13) for the second term.
We have thus proved that using that Im m − n (z) > 0. Since ̺ n (x) ≥ δ 0 > 0 by assumption, Im m n (z) is bounded from below. Thus, choosing η = n −1/4 , we obtain with C depending on ν and δ 0 . Taking imaginary part, we have for any u with |u − x 0 | ≤ η = n −1/4 . Integrating over |u − x 0 | ≤ η and using with some positive constant c, we have proved (9.1) for ψ = ψ n−1 . The case ψ = ψ n−2 can be done in a similar way. This completes the proof of Lemma 9.1. 2 Corollary 9.2 Suppose that the y-configuration satisfies (8.13), (8.14) and the density satisfies ̺ n (x) ≥ δ 0 for all |x| ≤ 1 − κ for some δ 0 , κ > 0. Let ψ = ψ j with j = n − 2, n − 1, n be an orthogonal function. Then sup Proof. For the case j = n − 2, n − 1, the estimate (9.10), even with a better exponent, follows from the argument leading to (8.11) from the two assumptions (8.9) and (8.10) with δ = 1/4, ε = 6γ and σ = 3γ: The estimate (8.9) was proven in Lemma 8.1, the estimate (8.10) follows from Lemma 9.1. The proof of (9.10) for ψ = ψ n requires a different argument. Let a j be the leading coefficient of the (normalized) j-th orthogonal polynomial, i.e. p j (x) = a j x j + . . .. Observe that p ′ n (x) = na n x n−1 + . . . = n(a n /a n−1 )p n−1 (x) + . . ., where dots mean a polinomial of degree less than n − 1. Thus The first integral on the right hand side vanishes. By the Schwarz inequality, we have where the second integral was estimated in (8.15).
Recall the standard three-term recursion relation for orthogonal polynomials,

x p_{n−1} = a p_n + b p_{n−1} + c p_{n−2},   (9.13)

with some real numbers a, b, c depending on n. By comparing the leading coefficients, we have a_{n−1} = a a_n, and by orthonormality we can compute the coefficients. In particular, 1/|a| = a_n/a_{n−1} ≤ Cn^{3γ} from (9.12). Hence, from (9.13) and using the bound (9.11), we obtain (9.10) for ψ = ψ_n as well. □
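The three-term recursion (9.13) and the leading-coefficient identity a_{n−1} = a a_n can be checked numerically for a concrete orthonormal family. Below we use the orthonormal Legendre polynomials (weight 1 on [−1, 1], our toy stand-in for the weight e^{−nV}), for which a = n/√(4n² − 1) is known in closed form.

```python
import numpy as np
from numpy.polynomial import legendre

# Orthonormal Legendre polynomials p_j = sqrt((2j+1)/2) P_j on [-1,1];
# check x p_{n-1} = a p_n + b p_{n-1} + c p_{n-2} and a = a_{n-1}/a_n.
nodes, weights = legendre.leggauss(100)   # quadrature exact up to degree 199

def p(j, x):
    c = np.zeros(j + 1)
    c[j] = 1.0
    return np.sqrt((2 * j + 1) / 2) * legendre.legval(x, c)

def inner(f, g):
    return np.sum(weights * f * g)

n = 7
xp = nodes * p(n - 1, nodes)
a = inner(xp, p(n, nodes))
b = inner(xp, p(n - 1, nodes))            # = 0 by symmetry of the weight
c = inner(xp, p(n - 2, nodes))
resid = xp - (a * p(n, nodes) + b * p(n - 1, nodes) + c * p(n - 2, nodes))
print(a, b, c, np.max(np.abs(resid)))
# ratio of leading coefficients: a = a_{n-1}/a_n = n/sqrt(4 n^2 - 1)
print(n / np.sqrt(4 * n**2 - 1))
```

The residual vanishes because x p_{n−1}, having degree n, is orthogonal to all p_j with j < n − 2; this is exactly why only three terms appear in (9.13).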

10 Regularity of Density
Lemma 10.1 Let L ∈ G, y ∈ Y_L. Suppose that the external y-configuration satisfies (8.13) and (8.14), and assume that γ < 1/150. Then for any κ > 0 we have (10.1) and (10.2), where the constant C depends on κ.
Proof. The derivative of the density can be computed explicitly (see, e.g., (3.63) of [26]) as in (10.3). In our case, from the Christoffel-Darboux formula, we obtain (10.4). Since |x| ≤ 1 − κ, we can estimate the contribution to (10.3) from the first term in (10.4), where we have used (8.13) to bound the factor in front of the integral. The contribution from the second term in (10.4) is bounded as follows. The first term on the right hand side is bounded by Cn^{3γ} using (8.13). In the second term, we split the integration into two regimes: |z| ≤ 1 − n^{−A} and 1 − n^{−A} ≤ |z| ≤ 1, with some A ≥ 60B.
In the first regime, we use the bound (9.10) to obtain CAn^{−1/8+11γ} log n ≤ C if γ < 1/88. In the second regime we use the bound (8.23). This proves (10.1).
For the proof of (10.2) we use the derivative estimate and the fact that the density is close to the semicircle law on scale n^{−1+γ}, as given in (6.34). For any x, y ∈ [−1 + 2κ, 1 − 2κ] we compare the densities; taking the average on the interval I = [x − (1/2)n^{−1+γ}, x + (1/2)n^{−1+γ}], using (6.34) with I* := T^{−1}(I), and combining these inequalities, we arrive at (10.2). □

The equilibrium measure is defined via the variational problem (11.1), where M_1 is the space of probability measures on [−1, 1]. For general properties of the equilibrium measure see, e.g., Chapter 2 of [24] (and references therein), which specifically discusses the case of a compact interval I and a continuous potential tending to infinity at the endpoints. We point out, however, that we follow the convention of [9] and [26] in what we call the external potential; the potential in [24] and [23], denoted by q(x) and Q(x), respectively, differs by a factor of two from our convention: q(x) = Q(x) = (1/2)V(x). The equilibrium measure ν with support S(ν) satisfies the Euler-Lagrange equations, involving the logarithmic potential ∫ log |s − t|^{−1} ν(ds), and S(ν) ⊂ (−1, 1) (Theorem 2.1 of [24]). Moreover, since V is convex in (−1, 1) with lim_{|x|→1} V(x) = ∞, the support S(ν) is an interval, S(ν) = [a, b], whose endpoints satisfy −1 < a < b < 1 and are uniquely determined by a pair of equations, according to Theorem 2.4 of [24] (after adjusting a factor of 2). In our case, the potential V, and thus the equilibrium measure ν, depend on n and on the external configuration y in a non-trivial way. The main result of the recent work of Levin and Lubinsky [23] establishes universal sine-kernel behavior for the correlation functions of orthogonal polynomials with respect to a general n-dependent potential. This result fits our situation exactly, once the conditions of [23] are verified.
We recall the main result of [23] in a special form we will need.
Theorem 11.1 For each n ≥ 1, consider a positive Borel measure µ_n on the real line whose (2n+1)-st moment is finite. Let I = [−1, 1] and assume that each µ_n is absolutely continuous on I and can be written as µ_n(dx) = W_n^{2n}(x) dx, where the non-negative functions W_n are continuous on I. We define the potential Q_n = − log W_n : I → (−∞, +∞], and let ν_n be the solution of the variational problem (11.1) with V = V_n = 2Q_n. Let J be a compact subinterval of (−1, 1) and assume that conditions (a)-(d) hold; in particular, condition (d) requires that the following limit holds uniformly for E ∈ J and a in any fixed compact subset of R. Then for the n-th reproducing kernel of the measure µ_n on I (defined in (8.5)) we have the sine-kernel asymptotics, uniformly for E ∈ J and for a, b in compact subsets of R.
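Sine-kernel asymptotics of this type can be illustrated in the classical Hermite (GUE) case, where the reproducing kernel is computable from the orthonormal Hermite functions. In the sketch below, the degree n = 100 and the test points are our choices, and the weight is the classical fixed Hermite weight, not the n-dependent weight of the text; we rescale the kernel by the local density at E = 0 and compare with sin(π(a − b))/(π(a − b)).

```python
import numpy as np
from numpy.polynomial import hermite
from math import factorial, pi, sqrt

# Classical Hermite (GUE) weight: psi_j are the orthonormal Hermite
# functions, K_n(x,y) = sum_{j<n} psi_j(x) psi_j(y); rescaling by the
# local density rho = K_n(0,0) at E = 0 should produce the sine kernel.
def psi(j, x):
    c = np.zeros(j + 1)
    c[j] = 1.0
    norm = sqrt(2.0**j * factorial(j) * sqrt(pi))
    return hermite.hermval(x, c) * np.exp(-x * x / 2) / norm

n = 100
def K(x, y):
    return sum(psi(j, x) * psi(j, y) for j in range(n))

rho = K(0.0, 0.0)                     # local density; ~ sqrt(2n)/pi in the bulk
errs = []
for a, b in [(0.0, 0.5), (0.0, 1.0), (0.3, 1.7)]:
    scaled = K(a / rho, b / rho) / rho
    sine = np.sin(pi * (a - b)) / (pi * (a - b))
    errs.append(abs(scaled - sine))
    print(a, b, scaled, sine)         # close for large n
```

The discrepancy shrinks as n grows; near a − b = ±1 the rescaled kernel is nearly zero, reflecting the unit mean eigenvalue spacing in the rescaled variables.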
First we verify the conditions of this theorem in our case. We consider the sequence of measures µ_n on R that vanish outside of I = [−1, 1] and are given by µ_n(dx) = e^{−nU_y(x)} dx on I, where y ∈ Y_L is a sequence of good external configurations after rescaling, for some L ∈ G. Recall that the concept of good external configurations depends on N, i.e. G = G_N, and recall the relation (4.15) between n and N. We set J = [−1 + σ, 1 − σ] for some σ > 0. The measure µ_n is clearly absolutely continuous (actually it has a polynomial density), and since it is compactly supported, all moments are finite. Conditions (a) and (b) will be verified separately in Appendix F. Conditions (c) and (d) follow directly from (10.2) in Lemma 10.1. Now we start the proof of the main Theorem 2.1. Throughout this proof, E is the expectation for the Wigner ensemble with a small Gaussian component, i.e. E = E_{f_t} in the earlier notation. All constants in this proof may depend on κ. We will use the results obtained in Sections 4-10. In these sections, various small exponents α, β, γ, ε and various large exponents A, B need to be specified. The exponent β is given in the theorem and can be an arbitrary positive constant. The other exponents are determined in terms of β, subject to the following requirements: β ≥ 10ε + α (6.27), β ≥ (4A + 8)ε + α (Lemma 6.1), Bε < 1/2 (Section 7), B ≥ 20 (Lemma 7.1) and A ≥ 60B (Lemma 8.1). Finally, γ ≤ 1/10 can be an arbitrary positive number, independent of β. Obviously, these conditions can be simultaneously satisfied for any β > 0 if α, γ, ε are chosen sufficiently small and A, B sufficiently large. All constants in the proof depend on this choice.
Let O(a, b) be a bounded function and δ < κ/2. In (2.9) we have to compute the limit of the quantity T(N, δ) defined in (11.5). We first show the uniform bound (11.6), with a constant depending on κ. To see this, let R be a large number so that g(x) = h(x) = 0 for |x| ≥ R; then we can estimate the expression, where we have used that inf{̺_sc(E) : |E − E_0| ≤ δ} ≥ c > 0 and the moment bound (11.8) for any interval I of length |I| ≥ 1/N. The bound (11.8) follows from Eq. (3.11) in [15] after cutting the interval I into subintervals of size 1/(2N). The estimate (11.6) and similar ideas allow us to perform many cutoffs and approximations. For example, we can replace ̺_sc(E) in g and h by ̺ := ̺_sc(E_0) in the definition of T(N, δ), see (11.5), at the expense of an error that vanishes in the limit δ → 0. We give the proof in the case where we perform the change for, say, g. We will not repeat this type of simple argument in the rest of this proof.
After this replacement, we can perform the dE integration using that ∫h = 1 (11.9), where the last error comes from the contribution of eigenvalues within distance CR/N of E_0 ± δ. With the notation introduced above and using (11.4), we thus need to prove the corresponding limit. Recall the definition of N_sc(E) from (4.12) and its inverse function, and write

1(|λ_j − E_0| ≤ δ) = χ_{N,E_0,δ}(j) + U_j,   (11.11)

where U_j is the error term, defined as the difference of 1(|λ_j − E_0| ≤ δ) and χ_{N,E_0,δ}(j). We thus have the decomposition (11.12), whose last term is bounded by two expectations (11.13). The first expectation is bounded as follows: splitting the interval [E_0 − δ − C/N, E_0 + δ + C/N] into overlapping subintervals I_ℓ of length 4C/N, with an overlap of at least 2C/N, we get that this expectation is bounded, where we used the moment bound (11.8) with k = 3 and the fact that the number of subintervals is CNδ.
Since N_sc is monotonic, the second expectation in (11.13) is bounded as follows. On the set Ω^c we estimate the difference of the two characteristic functions by 2, and we get from (4.19) that the contribution is subexponentially small in n. On the set Ω we can use (4.21), and we see that the difference of the two characteristic functions can be nonzero only for a restricted range of indices, i.e. the number of j's for which this can happen is bounded by CNn^{−γ/6}. Recalling (4.15), we conclude that the second term in (11.12) vanishes in the N → ∞ limit. This shows that we can replace 1(|λ_j − E_0| ≤ δ) by χ_{N,E_0,δ}(j) in the definition of T* with negligible error, and we can do the same for k in place of j. Therefore, we need to prove the reduced statement, and without loss of generality we can assume that g ≥ 0.
We define X_L and let Q_L := E X_L.
We claim the estimate (11.14). To see this, we consider the expectation separately on Ω and Ω^c. Since the double sum contains at most N² terms and P(Ω^c) is subexponentially small (4.19), it is sufficient to check (11.14) with the expectations restricted to the set Ω. On the set Ω we have (11.15), where C depends on ∥g∥_∞. This follows from the fact that, by the support of g, only those index pairs (j, k) give a nonzero contribution for which |λ_j − λ_k| ≤ C/N, and thus |j − k| ≤ Cn^γ by (4.22). Therefore the sum Σ_L X_L contains each pair (j, k) at least [n − Cn^γ] times and at most [n + Cn^γ] times. Taking the expectation of (11.15) on Ω, we obtain (11.14).
Since Q_L is bounded, by (11.8), we only have to estimate Q_L for a typical L. In addition to L ∈ {M_−, M_− + 1, . . . , M_+}, we can thus assume that L ∈ G, since the relative proportion of good indices approaches one within any index set whose cardinality is proportional to N and which stays away from the boundary (see (6.29)). More precisely, we fix two sequences L_−(N) and L_+(N) with the required properties; then it follows from (11.16) that the replacement causes an error ε_{N,δ} with lim_{δ→0} lim_{N→∞} ε_{N,δ} = 0. We thus have to show that Q_{L_±(N)} converges to the sine kernel. We will actually prove that Q_L converges to the sine kernel for any sequence L = L(N) ∈ G = G_N. The dependence on N will be omitted from the notation. For L ∈ G, we can compute the expectation according to the convention that E = E_{f_t}. Recall the definition of the sets Ω_1 = Ω_1(L), Ω_2 = Ω_2(L) and Ω_3(L) from (6.7), (6.15) and (6.20). Setting Ω̃ := Ω_1 ∩ Ω_2 ∩ Ω_3, we see that the probability of its complement is P(Ω̃^c) ≤ Cn^{−2} (see (6.8), (6.16) and (6.21)). Since X_L ≤ Cn, we only have to consider external configurations with y ∈ Ω̃. Thus we obtain (11.17). The second term in the square bracket will be an error term: since y ∈ Ω̃ and L ∈ G, we have ∫|f_y − 1| dµ_y ≤ n^{−2} from (6.26) and (6.27), and we thus obtain the claimed bound. For the main term, by using (7.6) and assuming that B is large enough, we can also replace the measure µ_y by its cutoff version µ_y^{(1)}. Since µ_y^{(1)} is an equilibrium measure, its correlation functions can be obtained as determinants of the appropriate kernels K, see (8.8). In particular, (11.19) holds for the marginals of the measure µ_y^{(1)}. The lower bound on p^{(2)} follows from the fact that K is the kernel of a positive operator, i.e. |K(u, v)|² ≤ K(u, u)K(v, v).
Let 0 < κ ≤ 1/10. We now show that, up to an error of order κ, the dα integration in (11.18) can be restricted from I_y = [y_{−1}, y_1] onto a set I*_y, i.e. onto an interval in the middle of I_y with length (1 − 4κ)|I_y|. Similarly, the dβ integration will be restricted to an interval in the middle of I_y with length (1 − 2κ)|I_y|. We show how to restrict the dα integration; the other one is analogous. The difference between the full dα integral and the restricted one is given by (11.20). To carry out this estimate, we go back from the equilibrium measure µ_y^{(1)} to f_y and we also remove the constraint Ω. As above, all these changes result in negligible errors. Moreover, we can reinsert Ω at the expense of a negligible error, since P(Ω^c) is subexponentially small. Thus (11.20) can be estimated by (11.21) up to negligible errors. On the set Ω we know the relevant bound from (4.22), assuming that γ ≤ 1/20. Thus the first term in the square bracket of (11.20) can be estimated by taking into account (11.8) as before. A similar estimate holds for the second term in (11.21). Thus, restricting the dα-integration to I*_y results in an error of order O(κ). Performing the same restriction for the dβ integral, we can from now on assume that both integrations in (11.18) are restricted to I*_y, i.e. they are separated away from the boundary. In particular, from (10.2) and after rescaling, we know that ̺_y(α) and ̺_y(β) are essentially constant and equal to |I_y|^{−1}(1 + O(n^{−γ/12})). Moreover, on the set Ω, we know from (6.13) that |I_y|^{−1} = (N̺/n)(1 + O(n^{γ−1/4})), i.e.

(11.24)
Since g is smooth and has compact support, we have (11.25) from (11.23). Therefore, when we insert (11.25) into (11.24) and use (11.19), the error term involving ξ is bounded by (11.26), using the definition of ξ; a similar bound holds for the β-integral.
Thus we can replace the variable of g in (11.24) by −b with negligible errors. Now we apply Theorem 11.1. Clearly, as n → ∞, (y**_± − α) n ̺_y(α) → ±∞ for all α ∈ I*_y, i.e. the integration limits can be extended to infinity, noting that g is compactly supported. Finally, from (11.23) we have the analogous replacement. Combining all these estimates with Theorem 11.1, we obtain the claimed asymptotics, where the last error term comes from Theorem 11.1 and goes to zero as N → ∞. Taking the N → ∞, δ → 0 and κ → 0 limits in this order, we complete the proof of Theorem 2.1.
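For orientation, we record the limiting kernel in its standard normalization (the scaling by the local density in (11.24) is suppressed here); the two-point function is the 2 × 2 determinant built from the Dyson sine kernel:

```latex
% Dyson sine kernel and the limiting rescaled two-point function
K(x,y) = \frac{\sin \pi(x-y)}{\pi(x-y)}, \qquad
p^{(2)}(x,y) \;\longrightarrow\;
\det\begin{pmatrix} K(x,x) & K(x,y)\\ K(y,x) & K(y,y)\end{pmatrix}
= 1 - \Bigl(\frac{\sin \pi(x-y)}{\pi(x-y)}\Bigr)^{2}.
```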
To prove the results about the closeness of m(z) or E m(z) to m_sc(z), we first recall the key identity expressing the trace of the resolvent in terms of resolvents of minors (see, e.g., (4.5) of [13]). Here B^{(k)} is the (k,k)-minor of H (the (N − 1) × (N − 1) matrix obtained by removing the k-th row and the k-th column from H), λ_α and u_α are the eigenvalues and eigenvectors of B^{(k)}, and a^{(k)} = (h_{k1}, . . . , h_{k,k−1}, h_{k,k+1}, . . . , h_{kN}). Throughout the proof we let x and y denote the real and imaginary parts of z = x + iy. Moreover, we restrict our attention to y > 0; the case y < 0 can be handled similarly.
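The identity from (4.5) of [13] invoked above is the Schur complement (decoupling) formula; a sketch in the notation just introduced:

```latex
% Diagonal resolvent entry expressed via the (k,k)-minor
\Bigl(\frac{1}{H-z}\Bigr)_{kk}
= \frac{1}{\,h_{kk} - z - \langle a^{(k)}, (B^{(k)}-z)^{-1} a^{(k)}\rangle\,},
\qquad
m(z) = \frac{1}{N}\sum_{k=1}^{N} \Bigl(\frac{1}{H-z}\Bigr)_{kk},
```

and, expanding the quadratic form in the eigenbasis of the minor,

```latex
\langle a^{(k)}, (B^{(k)}-z)^{-1} a^{(k)}\rangle
= \sum_{\alpha=1}^{N-1} \frac{|\langle u_\alpha, a^{(k)}\rangle|^{2}}{\lambda_\alpha - z}.
```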
Step 1. Lower bound on |m(z) + z|. There exist constants C, c > 0 such that (A.3) holds for all x ∈ R, y ≥ (log N)^4/N, and for all N large enough (depending only on the choice of C, c).
To show (A.3), we use a continuity argument. We claim that there exist positive constants C_1, C_2, C_3, c > 0 such that the following four conditions are satisfied. The first condition can be checked explicitly from (4.3). The second condition follows from the upper bound in Theorem 4.6 of [15]. The third condition can be satisfied because of Lemma 4.2 in [15], combined with the bound P(max_k |h_kk| ≥ c/48) ≤ e^{−CN} and with an observation that holds with probability one (see, for example, (2.7) in [14]). Finally, the last condition can be verified by Theorem 4.1 of [15]. Note that the last three conditions only need to hold for all N > N_0(c, C_1, C_2, C_3) large enough. Fix C = min(C_1, C_2, C_3). To prove (A.3) for z′ ∈ B_z, notice that |m′(z)| ≤ N^2 for all z ∈ C with Im z ≥ (log N)^4/N, with probability one. Therefore, using (A.3) for z, we find the corresponding bound for all z′ ∈ B_z. Expanding (A.1), we obtain the required estimate, where we used (A.4) and (A.6). This implies (A.3) for z′ ∈ B_z and completes the proof of Step 1.
Step 2. Convergence to the semicircle in probability. Suppose that |x| ≤ K and (log N)^4/N ≤ y ≤ 1. Then there exist constants c, C, δ_0, depending only on K, such that (A.7) holds for all δ < δ_0 and all N ≥ 2.
To show (A.7), we first observe that, by increasing the constant C, we can assume N to be sufficiently large. Then we expand (A.1) as follows.
We define the complex random variable X^{(k)}. Since the corresponding bound holds for all y ≥ (log N)^4/N and δ > 0, we find the claim for δ small enough, y ≥ (log N)^4/N, and N large enough (independently of δ).
Step 3. It remains to show (A.7) for 2 ≤ |x| ≤ K. To this end, for (log N)^4/N ≤ y ≤ 1 and 2 ≤ |x| ≤ K, we consider the event Ω*. Since m(z) is the Stieltjes transform of an empirical measure with finite support, it is analytic away from a compact subset of the real axis. Similarly, on the set Ω*, Y(z) is bounded and analytic away from a compact subset of the real axis. The square root in the relevant formula is therefore uniquely defined as the branch analytic on C\(−∞, 0], characterized by the property that its real part is non-negative. Hence, on Ω*, we can use the explicit formula (2.7) for m_sc(z); from (A.11), we then obtain that (A.7) holds for all δ small enough, 2 ≤ |x| ≤ K, (log N)^4/N ≤ y ≤ 1, and N large enough.
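For reference, the explicit formula (2.7) alluded to above is, with the branch of the square root characterized in the proof (real part non-negative on C\(−∞, 0]):

```latex
% Stieltjes transform of the semicircle law
m_{sc}(z) = \frac{-z + \sqrt{z^{2}-4}}{2},
\qquad \operatorname{Im} m_{sc}(z) > 0 \ \ \text{for}\ \operatorname{Im} z > 0,
```

equivalently, the root of the quadratic equation m_sc(z)^2 + z m_sc(z) + 1 = 0 that vanishes as |z| → ∞.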
Step 4. Convergence to the semicircle in expectation. Assume that |x| ≤ K, (log N)^4/N ≤ y ≤ 1 and Ny|2 − |x|| ≥ (log N)^4. Then (A.14) holds with a universal constant C. Note that this bound gains an additional factor (Nη)^{−1/2} in precision compared with Step 2 and Step 3, but the negative power of |2 − |x|| has increased.
To prove (A.14), we note, with c_0 := inf_z |m_sc(z) + z| > 0, that the analogous lower bound holds for N large enough (here we used (A.13)). Expanding the denominator on the r.h.s. of (A.1) around E m(z) + z, we find
Taking the expectation, we find
With a Schwarz inequality we get, for arbitrary q ≥ 1, the corresponding bound. Moreover, with c fixed as in (A.3), we have, using (A.9), a bound which yields (A.20) if N is large enough. Here we used the fact that Im(m(z) + z − δ_1(z)) ≥ Im z = y. From (A.9) we also have a further bound. From the definition of δ_1(z) in (A.2), from E X^{(k)} = 0 and from (A.5), we get the estimate on δ_1. Combining this bound with (A.19), we find, from (A.18), the claimed expansion. Recall that m_sc(z) solves the equation m_sc(z) + (m_sc(z) + z)^{−1} = 0. This equation is stable in the sense that the inverse of the function m → m + (m + z)^{−1} near zero is Lipschitz continuous with a constant of order |2 − |x||^{−1/2}. Thus we obtain (A.14), and this completes Step 4.
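The stability constant for the self-consistent equation can be read off from the derivative of the map in question; the following sketch uses only the defining relation of m_sc:

```latex
% Stability of F(m) = m + (m+z)^{-1}, using (m_{sc}+z)^{-1} = -m_{sc}:
F'(m_{sc}(z)) = 1 - \frac{1}{(m_{sc}(z)+z)^{2}} = 1 - m_{sc}(z)^{2},
\qquad
|1 - m_{sc}(x+iy)^{2}| \asymp \sqrt{\,|2-|x|| + y\,}\ \ \text{near the edge},
```

so inverting F near zero costs a factor of order |2 − |x||^{−1/2}, which is the origin of the increased negative power of |2 − |x|| noted in Step 4.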
To prove (A.24), we use a bound which is valid for all q ≥ 1 and follows from Theorem 3.1 in [13]. Expanding again the denominator on the r.h.s. of (A.1) around E m(z) + z, we get an expansion; taking the expectation, we find

B Proof of Proposition 4.2
We start with the proof of (4.14). From the moment method, we know that if λ_min(H) and λ_max(H) denote the smallest and the largest eigenvalues of the hermitian Wigner matrix H, and if K is large enough, then the corresponding tail bound holds (for example, one can use the moment bound E Tr H^{2k} with k of order N^{2/3} from [28]; the symmetry condition on the distribution can be removed by symmetrization). This implies that N(E) ≤ N K^{−cN^{2/3}} for E < −K and 1 − N(E) ≤ N K^{−cN^{2/3}} for all E > K. Therefore (B.1) holds for K > 0 large enough, and the last term is negligible. The main estimate is contained in the following lemma, whose proof is given at the end of this section.
Lemma B.1 Let m* and N* be the Stieltjes transform and the distribution function of ̺*, respectively. Denote moreover by m*_±(z) the Stieltjes transforms of ̺*_±. We assume that m*, m*_+, m*_− satisfy the following bounds for |x| ≤ K + 1, with some constants L_1, L_2, L_3. Then the stated estimate holds, which, together with (B.1), completes the proof of (4.14).
For the proof of (4.13), we fix |E| ≤ K and we choose N −3/4 ≤ η ≤ 1 to be optimized later.
(B.6) The second term on the r.h.s. is estimated by Cη, using (4.5). For the first term we use Theorem 4.6 of [15], with some positive constant c. Now we consider the fluctuation of the smoothed distribution function. We partition [−K − 2, K + 2] into intervals I_r of length η. For M ≥ M_0 with a sufficiently large M_0, we set W as follows; then from Theorem 4.6 of [15] we know the corresponding bound. Analogously to the calculation (D.16), the size of the variance of W is determined by the size of |∇W|. On the event Ω_k we have the pointwise bound (note that the derivative in ∇W is with respect to the original random variables z_ij = √N h_ij). From the concentration inequality (Theorem 2.1 of [4]) we obtain the tail estimate. Choosing T = cN^{1/2} and η = N^{−3/4}, the claim follows. Repeating the same argument with W replaced by −W, we conclude the two-sided estimate. Combining this with (B.6) and (B.7), we obtain (4.13), which completes the proof of Proposition 4.2.
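The proof of Lemma B.1 below relies on the Helffer–Sjöstrand functional calculus; for reference, with χ the cutoff introduced there and the almost-analytic extension f̃(x + iy) := (f(x) + iy f′(x)) χ(y), the standard representation reads:

```latex
% Helffer--Sjostrand representation of f(\lambda)
f(\lambda) = \frac{1}{2\pi} \int_{\mathbb{R}^{2}}
\frac{\, i y f''(x)\chi(y) + i\bigl(f(x) + i y f'(x)\bigr)\chi'(y)\,}{\lambda - x - i y}
\, dx \, dy .
```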
Proof of Lemma B.1. For simplicity, in the proof we omit the star from the notation. First notice that (B.2) implies, after taking the imaginary part, the corresponding bound on Im m. To express f(λ) in terms of the Stieltjes transform, we use the Helffer–Sjöstrand functional calculus; see, e.g., [8]. Let χ(y) be a smooth cutoff function with support in [−1, 1], with χ(y) = 1 for |y| ≤ 1/2 and with bounded derivatives. Then, since f is real, (B.13) holds. Using (B.3) and the support properties of χ′ and f, the second contribution is bounded by the stated error. For the first term in (B.13), we split the integration; in the second term we may drop the imaginary part, since f and χ are real. To bound the first term we note that, for every fixed x, the functions

|y| |Im m_±(x + iy)| = ∫ ̺_±(ds) y^2/((s − x)^2 + y^2)

are monotonically increasing in |y|. This implies that, for all |y| ≤ η, the integrand is dominated by its value at |y| = η. As for the second term on the r.h.s. of (B.15), we integrate by parts, first in x, then in y. It is sufficient to consider the regime η ≤ y ≤ 1; the case of negative y is treated identically. As for the second term on the r.h.s. of (B.21), we divide the integral into several pieces.

C Proof of (4.21)

We partition the interval [−2 + κ, 2 − κ] into a disjoint union of intervals of length n^γ N^{−1} with centers w_r = r n^γ N^{−1}, where r ∈ Z, |r| ≤ r_1 := N n^{−γ}(2 − κ). Then, for any r,

|N(I_r)/n^γ − ̺_sc(w_r)| ≤ n^{−γ/6} (C.2)

by (4.16). To prove (4.21), we first locate the middle eigenvalue. Let r_0 be the index defined by the condition (C.3); for definiteness, we can assume that r_0 ≥ 0. Using the second event in (4.18) we obtain (C.4). On the other hand, with the notation r̃_1 := min{(r_0 − 1)_+, N n^{−γ}}, we have (C.5) by (C.2), where we used that w_r ≤ 1 for any r ≤ r̃_1 ≤ N n^{−γ}, and thus ̺_sc(w_r) ≥ ̺_sc(1) ≥ c. From (C.4) and (C.5) we conclude that r_0 ≤ CN n^{−7γ/6}, i.e. w_{r_0} ≤ Cn^{−γ/6}. Thus we proved (C.6). Starting the proof of (4.21), we can assume that a ≥ N/2 by symmetry. Suppose first that λ_a ∈ [−2 + κ, 2 − κ], i.e.
λ_a ∈ I_r for some |r| ≤ r_1; note that a ≥ N/2 implies r ≥ r_0. Then, using (C.2), (C.6) and the fact that γ is small, we have the upper bound; the lower bound is similar. Combining these estimates with (C.7), we get |N_sc^{−1}(aN^{−1}) − w_r| ≤ Cκ^{−1/2} n^{−γ/6}, using (11.10) and κ^{3/2} ≤ aN^{−1} ≤ 1 − κ^{3/2}. Since λ_a ∈ I_r, i.e. |λ_a − w_r| ≤ n^γ N^{−1}, we obtain (4.21). Finally, we consider the case λ_a > 2 − κ. The lower bound in (C.7) and the estimate (C.9) hold with r = r_1, so we get a bound which contradicts the assumption a ≤ N(1 − κ^{3/2}) for large N.

D Proof of Lemma 5.3
We start with the outline of the proof and indicate the origin of the restriction α > 1/4. We will first regularize the logarithmic interaction on a scale η at the expense of an error of O(η) for each pair of eigenvalues, modulo logarithmic corrections (Lemma D.1). By a Schwarz inequality (D.18), the fluctuation of the regularized two-body interaction is split into the product of the fluctuation of the regularized potential A_x (D.14) and the fluctuation of the local semicircle law regularized on scale η. The latter is of order O(N^{−1/2} η^{−1/2}) by the improved fluctuation bound on the local semicircle law (4.7). The former is of order O(N^{−1} η^{−1/2}), using that the logarithmic Sobolev inequality (2.4) on the single-site distribution can be turned into a spectral gap estimate for A_x. Finally, we optimize the regularization error O(η) and the fluctuation error O(N^{−3/2} η^{−1}) per particle pair, which gives a total error of order N^2 · N^{−3/4} = N^{1+1/4}.
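The final optimization in the outline is elementary arithmetic; per pair of eigenvalues the two error contributions balance at:

```latex
\min_{\eta>0}\bigl( C\eta + C N^{-3/2}\eta^{-1} \bigr) \asymp N^{-3/4},
\qquad \text{attained at } \eta = N^{-3/4},
```

so summing over the O(N^2) pairs gives a total error of order N^2 · N^{−3/4} = N^{5/4} = N^{1+1/4}, matching the restriction α > 1/4.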
The proof of the following regularization lemma is postponed until the end of the section; the lemma holds with respect to any Wigner ensemble whose single-site distribution satisfies (2.6) and (2.5).
Then Lemma 5.3 follows directly from the following statement, which holds with a universal constant C > 0 for all N large enough.
Proof of Lemma D.2. Recall that ω(dx) denotes the empirical measure of the eigenvalues (4.1). We have the following identity, taking into account the contribution of the diagonal terms.
Step 1. Recall the definition of ω_η(x) from (4.2); then (D.3) holds. To prove (D.3), we first observe a bound in which we also used that P{supp ω ⊂ [−K, K]} ≥ 1 − e^{−CN} for some large constant K. Next we observe a further estimate; inserting this bound back into (D.4), we find (D.6). Here we used the bound (D.7), which holds uniformly in x ∈ R if η ≥ (log N)^4/N. To prove (D.7), consider the event Θ_0 for some K_0 > 0. Moreover, define the intervals I_k = [−(k + 1)η, −kη] ∪ [kη, (k + 1)η] for all nonnegative integers k ≤ K_0/η, and consider the event Θ_1. For sufficiently large K_0 and K, we have (D.10) by Lemma 7.4 of [13] and by (4.20), after adjusting c. Then (D.7) follows, because Nη ≥ (log N)^4 by assumption. This completes the proof of Step 1.
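Assuming, as the identity ω_η(x) = π^{−1} Im m(x + iη) used in Step 2 indicates, that (4.2) defines ω_η as the convolution of the empirical measure with the Poisson kernel θ_η, one has explicitly:

```latex
\theta_\eta(x) := \frac{1}{\pi}\,\frac{\eta}{x^{2}+\eta^{2}},
\qquad
\omega_\eta(x) = (\omega * \theta_\eta)(x)
= \frac{1}{N}\sum_{\alpha=1}^{N} \frac{1}{\pi}\,
\frac{\eta}{(\lambda_\alpha - x)^{2}+\eta^{2}}
= \frac{1}{\pi}\,\operatorname{Im} m(x+i\eta).
```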
Step 2. To estimate the fluctuations of A_x, we use that the logarithmic Sobolev inequality (2.4) implies a spectral gap. Let u_α denote the orthonormal eigenvectors belonging to the eigenvalues λ_α of H. Taking into account the scaling (2.1), we obtain, from (D.15), (D.16) and (4.5), the variance bound. On the other hand, from (4.7) and ω_η(x) = π^{−1} Im m(x + iη), we have the moment bound for all q ≥ 1 and for |x| ≤ K with some large constant K.
In order to insert this estimate into (D.13), we need to extract the necessary decay for large x from ω_η(x) − ̺_η(x). For |x| ≥ 2K_0 sufficiently large and for any q ≥ 1, we can estimate as follows. Inserting the last three estimates into (D.13) with q = 2, we find the desired bound. This completes the proof of Step 2.
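The implication from the logarithmic Sobolev inequality to a spectral gap, used at the start of Step 2, is the standard linearization argument (the constant convention in (2.4) may differ by a factor of 2 from the one below): apply the LSI to f = 1 + εg with ∫ g dµ = 0 and expand to second order in ε:

```latex
\operatorname{Ent}_{\mu}\bigl((1+\varepsilon g)^{2}\bigr)
= 2\varepsilon^{2}\operatorname{Var}_{\mu}(g) + o(\varepsilon^{2}),
\qquad
\int |\nabla(1+\varepsilon g)|^{2}\,d\mu = \varepsilon^{2}\int |\nabla g|^{2}\,d\mu,
```

so the inequality Ent_µ(f^2) ≤ 2 C_s ∫ |∇f|^2 dµ forces, in the ε → 0 limit, the Poincaré inequality Var_µ(g) ≤ C_s ∫ |∇g|^2 dµ.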
Step 3. We claim that (D.19) holds. To prove (D.19), we write (D.20). To control the first term on the r.h.s. of (D.20), we recall that N(x) and N_sc(x) denote the expected number of eigenvalues up to x normalized by N (the integrated density of states) and the distribution function of the semicircle law, respectively. Note that N(x) − N_sc(x) vanishes at x = ±∞.
Introducing N_η(x) := ∫_{−∞}^x ̺_η and integrating by parts, we find

(D.21)
From the upper bound (4.4) on |E m(x + iη)| and from ∫ dx |N_sc(x) − (N_sc * θ_η)(x)| ≤ Cη, we find the claim by (4.14). The second term on the r.h.s. of (D.20) can be bounded similarly. This completes the proof of Step 3. Combining the estimates in Steps 1–3 and choosing η = N^{−3/4}, we finish the proof of Lemma D.2.
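The integration by parts behind (D.21) follows the usual pattern of comparing measures through their distribution functions: since N(x) − N_sc(x) vanishes at ±∞, for any smooth compactly supported test function f,

```latex
\int f(x)\, d\bigl(N - N_{sc}\bigr)(x)
= -\int f'(x)\,\bigl( N(x) - N_{sc}(x) \bigr)\, dx .
```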
Proof of Lemma D.1. We split the summation into three parts. We have (D.23) by (D.7). For the Y_2 term, we remark that (D.24) holds for arbitrary 0 ≤ δ ≤ η. To bound the r.h.s., we consider the events Θ_0, Θ_1 from (D.8), (D.9) with sufficiently large K and K_0, so that (D.10) holds. Then the bound follows for every 0 ≤ δ ≤ η. Finally, for the Y_3 term we use the level repulsion estimate (E.5) from Theorem E.3, which gives a bound for any interval I = [E − ε/N, E + ε/N] with E ∈ R and 0 < ε ≤ 1. Let the intervals I_r be overlapping intervals covering R; we can then write the sum accordingly. We split the interval J_r into overlapping subintervals of length 2^{−m+1} N^{−10} by defining suitable subsets; then (D.27) holds. For large |r| ≥ KN^{10}, we can also use the bound P{λ_j ∈ I_r} ≤ C exp(−cN(N^{−10} r)^2), which follows from the trivial large deviation estimate for the largest eigenvalue (Lemma 7.4 of [13]). Inserting these last two estimates into (D.26), we obtain the claim for every 0 ≤ δ ≤ η.

E Level repulsion near the spectral edge
We need to establish a Wegner-type inequality, and bounds on the level repulsion in the same spirit as in Theorem 3.4 and Theorem 3.5 of [15], for energy intervals close to the spectral edges. Since we only need these bounds for very small values of ε ≃ N −α , we are not aiming at the most general result here. The statements we present can be proven by simply replacing, in the proof of Theorems 3.4 and Theorem 3.5 of [15], the convergence to the semicircle law stated in Theorem 3.1 of [15] with Theorem 4.1. Recall that Theorem 3.1 of [15] is valid up to the smallest possible scale η > K/N but only away from the spectral edges, while Theorem 4.1 holds all the way to the spectral edges, but only up to the logarithmic scale η > (log N ) 4 /N . A better N -dependence of the bounds in the following theorem (but a worse κ-dependence) can be achieved by following the dependence on κ of the constants in Theorem 3.1 of [15].
All statements assume the conditions (2.4)-(2.6). We introduce the notation that [x] + denotes the positive part of a real number x.
Theorem E.1 (Gap distribution) Let H be an N × N hermitian Wigner matrix and let |E| < 2. Denote by λ_α the largest eigenvalue below E and assume that α ≤ N − 1. Then there are positive constants C, D, c, d such that the stated bound holds for any N ≥ 1 and any D(log N)^4/(2 − |E|) ≤ K ≤ κN^d.
Proof. The proof of this theorem can be obtained by following the proof of Theorem 3.3 in [15], making use of Theorem 4.1 instead of Theorem 3.1 of [15] (in order to track the |2 − |E|| dependence of the probability). More precisely, we observe that the event λ_{α+1} − E ≥ K/N implies that there is a gap of size K/N about the energy E′ = E + K/(2N). Choosing M = D^{1/2} κ^{−1/2} for a sufficiently large constant D > 0, and η = K/(NM^2) ≥ (log N)^4/N, we find, similarly to (7.3)–(7.4) in [15], that, apart from a set Ω^c of measure P(Ω^c) ≤ Ce^{−c√K}, the required bound holds; this implies, for sufficiently large D, the gap estimate, where c_0 = π̺(E′) ≥ c√κ. The theorem then follows because, by Theorem 4.1, the event (E.2) has the required small probability.

Theorem E.2 (Wegner-type estimate) For every E ∈ R and ε ≤ 1, the corresponding Wegner-type bound holds. Moreover,

y E|m(x + iy)|^2 ≤ C(log N)^4 (κ + N^{−1})^{−9} (E.4)

for all x ∈ R, y > 0.

Theorem E.3 (Level repulsion) The estimate (E.5) holds for all E ∈ R, all 0 < ε < 1, and all N large enough.
Proof. The proof of Theorem E.2 and Theorem E.3 follows exactly the proof of Theorem 3.4 and, respectively, Theorem 3.5 in [15], after replacing Theorem 3.3 of [15] by Theorem E.1 above (in order to follow the dependence on the distance from the edges).
Note that the results of the last three theorems are only useful in the regime of very small ε = N |I| ≪ (log N ) −4 .

F Properties of the equilibrium measure
Here we check the conditions (a) and (b) in Theorem 11.1. The main ingredient is the following lemma.

Lemma F.1 Let L ∈ G and y ∈ Y_L. After rescaling, for any fixed σ > 0 with J′ = [−1 + σ/2, 1 − σ/2], the first and second derivatives of the potential are uniformly bounded on J′, i.e. (F.1) holds, where the constant is independent of y. Furthermore, the endpoints a, b of the support of the equilibrium measure ν = ν_y satisfy

|a + 1|, |b − 1| ≤ Cn^{−γ/3} log n. (F.2)

Condition (b) of Theorem 11.1 is now given by (F.1). To see condition (a) of Theorem 11.1, let [a_n, b_n] denote the support of the equilibrium measure ν_n; then a_n → −1 and b_n → 1 as n → ∞, so g_n is positive on J = [−1 + σ, 1 − σ] for any fixed σ > 0 and any sufficiently large n.
For the uniform boundedness of g_n(x) on J, we use the explicit formula (F.3) (see, e.g., Theorem 2.5 of [24]), where P.V. denotes the principal value. For sufficiently large n and for any x ∈ J, the singularity of (s − x)^{−1} is uniformly separated from a_n and b_n, i.e. from the singularities of the square roots. Moreover, V′_n(x) is a smooth function inside (−1, 1) with sup_n sup_{x∈J′} (|V′_n(x)| + |V′′_n(x)|) ≤ C,
according to (F.1). Thus the uniform boundedness of g_n on J follows immediately from (F.3), together with standard estimates on the principal value.
Proof of Lemma F.1. Recall the definition E_L = N_sc^{−1}(LN^{−1}) from (6.11). For y ∈ Y_L we know from the first bound in (6.13) that dist(I_y, E_L) ≤ Cn^{−γ/6}, and from (4.22) that y_k = y_{−1} + (k + O(k^{4/5}))/(N̺_0), with ̺_0 := ̺_sc(E_L), assuming γ ≤ 1/20 and Cn ≤ |k| ≤ n^B ≤ N^{1/2}. After rescaling, this corresponds to (F.4), and we drop the tilde for the rest of this proof. This bound on the location of the y_k's will be used to estimate the derivatives of U_y. For ℓ = 1, 2 and x ∈ J′ we have
To estimate the location of the endpoints, we substitute V(x) = U_y(x) into the equations (11.2). We will need the following explicit integration formulae for a < b (see, e.g., Formula 2.266 in [17]), valid if a < b < y. Using the bound (F.4) on the location of the y_k's, we replace the limit −n^B < k by −Y ≤ y_k and the limit k < n^B by y_k ≤ Y in the summations in (F.10) and (F.11), where Y := n^{B−1} ̺_0^{−1}. We have, for example, for the first sum in (F.10), the bound (F.12), and the estimate for the other three sums in (F.10), (F.11) is identical.
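For reference, the explicit integration formula of the type of 2.266 in [17] used here presumably reads, for a < b < y,

```latex
\int_{a}^{b} \frac{ds}{(y-s)\sqrt{(b-s)(s-a)}}
= \frac{\pi}{\sqrt{(y-a)(y-b)}},
```

with the analogous formula for y < a, carrying an overall minus sign and √((a−y)(b−y)) in the denominator.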
With a similar argument, we can remove the y_k's that are too close to [−1, 1]. Let X = n^{γ−1}; then the stated two-sided bound holds, where, for the lower bound, we used that y_{−1} = −1, while for the upper bound we used that the number of y_k's in [−1 − X, −1] is at most Cn^γ (see the third set in the definition of (4.18)).
(F.19) Now we consider W_2 and assume again that u ≥ 0. With the same change of variables as above, we have (F.20). The integrals on the r.h.s. of (F.20) can be explicitly computed; thus we obtain the stated bound, using that v^2 ≤ 1 and ̺_0 ≤ π^{−1} (see (2.7)). Combining this estimate with the upper bound in (F.16) and using a < b, we conclude that either a + 1 or 1 − b is smaller than Cn^{γ−1}; but then, by (F.19), both of them are smaller than Cn^{−γ/3} log n. This completes the proof of Lemma F.1.