The completely delocalized region of the Erdős-Rényi graph

We analyse the eigenvectors of the adjacency matrix of the Erdős-Rényi graph on N vertices with edge probability d/N. We determine the full region of delocalization by determining the critical values of d/log N down to which delocalization persists: for d/log N > 1/(log 4 − 1) all eigenvectors are completely delocalized, and for d/log N > 1 all eigenvectors with eigenvalues away from the spectral edges are completely delocalized. Below these critical values, it is known [1, 3] that localized eigenvectors exist in the corresponding spectral regions.


Introduction
Let A be the adjacency matrix of the Erdős-Rényi graph G(N, d/N), defined as the random graph on N vertices where each edge of the complete graph is kept with probability d/N independently of the others. The subject of this note is the delocalization of the eigenvectors of A in the limit of large N. A commonly used measure of delocalization of a vector u ∈ C^N, which we also adopt here, is the quotient

q(u) := ‖u‖_∞² / ‖u‖_2²,

where ‖u‖_p denotes the ℓ^p-norm of u. Informally, q(u) ≍ 1 corresponds to a localized vector and q(u) = N^{−1+o(1)} to a completely delocalized vector. We refer to [1] and the references therein for a more detailed discussion. In [1] it was proved that, with

b_* := 1/(log 4 − 1) ≈ 2.59,    (1.1)

if d ≤ (b_* − κ) log N then a semilocalized phase exists near the edge of the spectrum, characterized by q(u) ≥ N^{−γ} for some γ < 1. Moreover, in [3] it was proved that if d ≤ (b_* − κ) log N then the extreme¹ eigenvectors of A are localized.
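For orientation, two extreme examples (the vectors e_1 and e below are introduced here only for illustration): with e_1 a standard basis vector of C^N and e := N^{−1/2}(1, . . . , 1)* the flat unit vector,

q(e_1) = ‖e_1‖_∞² / ‖e_1‖_2² = 1,    q(e) = (N^{−1/2})² / 1 = N^{−1},

and numerically b_* = 1/(log 4 − 1) ≈ 1/0.3863 ≈ 2.59.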
In [1] it was proved that on the scale d ≍ log N complete delocalization persists in the spectral region [−2 + κ, −κ] ∪ [κ, 2 − κ] for the matrix A/√d. This spectral region excludes precisely the two regions exhibiting localized or semilocalized eigenvectors described in the previous paragraph: the neighbourhoods (−κ, κ) and ±(2 − κ, ∞) of the origin and of the spectral edges, respectively. In fact, in [1] it was proved that complete delocalization in this spectral region persists down to scales d ≫ √(log N), below which it fails throughout the spectrum.
Hence, the question of when delocalization occurs in the neighbourhoods (−κ, κ) and ±(2 − κ, ∞) was left open. We settle it here. Writing d = b log N for some constant b > 0, we show that complete delocalization for A/√d holds throughout the spectrum provided that b > b_* and in [−2 + κ, 2 − κ] provided that b > 1. As explained above, this result is optimal since for any b < b_* there are localized states near the spectral edges, and for any b < 1 there are localized states in (−κ, κ). Hence, combined with [1], our result gives a complete description of the delocalized spectral region of A/√d. In addition, it shows that the extreme eigenvectors of A/√d undergo a sharp transition from completely delocalized to localized as b crosses b_*. We refer to Figure 1 below for a phase diagram summarizing our results.
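Delocalization in the sense above can also be probed numerically. The following sketch (illustrative only; it uses numpy, and the values of N, b and the seed are arbitrary choices not taken from the paper) samples G(N, d/N) with d = b log N, diagonalizes A/√d and reports the largest value of q(u) = ‖u‖_∞²/‖u‖_2² over all eigenvectors.

    import numpy as np

    def max_q_erdos_renyi(N=2000, b=3.0, seed=0):
        # Sample the adjacency matrix of G(N, d/N) with d = b * log N.
        rng = np.random.default_rng(seed)
        d = b * np.log(N)
        upper = np.triu((rng.random((N, N)) < d / N).astype(float), k=1)
        A = upper + upper.T
        # Eigenvectors of A and of A / sqrt(d) coincide.
        _, vecs = np.linalg.eigh(A / np.sqrt(d))
        # q(u) = ||u||_inf^2 / ||u||_2^2 for each (column) eigenvector.
        q = np.max(np.abs(vecs), axis=0) ** 2 / np.sum(vecs ** 2, axis=0)
        return q.max()

    # Complete delocalization corresponds to max_u q(u) = N^{-1+o(1)}.
    print(max_q_erdos_renyi())

The transition described below concerns the N → ∞ behaviour; at moderate N the logarithmic corrections are substantial.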
We now state our main delocalization result.

Theorem 1.1. For any constant κ > 0 the following holds with probability 1 − o(1).

(i) If d ≥ (b_* + κ) log N then all eigenvectors u of A satisfy q(u) ≤ N^{−1+κ}.
(ii) If d ≥ (1 + κ) log N then all eigenvectors u of A whose eigenvalues are bounded in absolute value by (2 − κ)√d satisfy q(u) ≤ N^{−1+κ}.

¹ Here, and throughout this introduction, we leave aside the largest eigenvalue of A, which for d ≍ log N is the Perron-Frobenius eigenvalue and constitutes an outlier separated from the rest of the spectrum.

Figure 1. Here, b_* is defined in (1.1) and C is the large constant from [12]. The spectrum is confined to the coloured region [2, 13]. This diagram summarises the results of [1, 3, 12] and the present paper. In the red region the eigenvectors are completely delocalized; the light red region was established in [1, 12] and the dark red region is established in the present paper. In the light blue region the eigenvectors are semilocalized [1], and near the spectral edge (dark blue line) they are localized [3]. The grey regions have width o(1) and have not been fully analysed yet.

Delocalization in the sense of q(u) ≤ N^{−1+κ} has been a central topic in random matrix theory ever since the seminal work [11]. The proof of Theorem 1.1 is based on an extension of the delocalization argument from [1]. There, it was shown that the spectral measure of the Green function of A/√d at some vertex x is well approximated in a certain spectral region by the spectral measure at the root of an infinite regular tree, whose root has the same degree as x and all of whose children have degree d. In this paper, we establish this approximation in the full region where these spectral measures are regular. The main observation underlying our proof is that this spectral measure develops a singularity near the origin if and only if the normalized degree of x (see (2.4) below) is small, and it also develops a singularity in the interval ±[2, ∞) if and only if the normalized degree of x is at least 2. We combine this observation with elementary tail bounds on the maximal and minimal degrees of G(N, d/N).
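This observation can be made concrete by a short computation; it assumes, as an illustration not stated in this excerpt, that the spectral measure in question has Stieltjes transform m_α(z) = (−z − α m_sc(z))^{−1}, where α is the normalized degree of x and m_sc is the Stieltjes transform of the semicircle law. With m_sc(z) = (−z + √(z² − 4))/2 (branch with Im m_sc > 0) one has m_sc(0) = i, so

m_α(0) = (−α m_sc(0))^{−1} = i/α,

which blows up as α → 0: the spectral measure develops a singularity at the origin precisely when α is small. On the other hand, for real E ≥ 2 one has m_sc(E) ∈ [−1, 0), and the denominator E + α m_sc(E) vanishes for some E ≥ 2 if and only if

α = −E/m_sc(E) = E(E + √(E² − 4))/2 for some E ≥ 2,

i.e. if and only if α ≥ 2, since the right-hand side increases from 2 (at E = 2) to ∞. This produces a pole, and hence an atom of the spectral measure, in ±[2, ∞) exactly when α ≥ 2.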
The rest of this paper is devoted to the proof of Theorem 1.1.
Notation. Every quantity that is not explicitly constant depends on N. In statements of conditions we use κ to denote a positive constant.

Conditional delocalization for sparse matrices
In this section, we state a version of Theorem 1.1 in the more general setup of sparse matrices in Proposition 2.1 below. In Subsection 2.1, we then conclude Theorem 1.1 from Proposition 2.1 by analysing the degree distribution of Erdős-Rényi graphs.
We now introduce these sparse matrices, which generalize (appropriately scaled) adjacency matrices of Erdős-Rényi graphs. We consider matrices M of the form

M = H + f ee*.    (2.1)

Here, 0 ≤ f ≤ N^{κ/6}, e := N^{−1/2}(1, 1, . . . , 1)* and H = (H_ij) ∈ C^{N×N} is a Hermitian random matrix satisfying the following assumptions for some d in the range (2.2). We assume throughout the remainder of this paper that (2.2) holds.
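For orientation, here is one way in which the (appropriately scaled) adjacency matrix of G(N, d/N) fits this form; the centring below is an illustrative choice and the precise assumptions on H, which are not reproduced in this excerpt, are not verified here. Writing J := N ee* for the all-ones matrix and setting H := (A − (d/N) J)/√d, we get

A/√d = H + (d/N) J/√d = H + √d ee*,

i.e. M = A/√d is of the form (2.1) with f = √d; the off-diagonal entries of H are centred, and f = √d ≤ N^{κ/6} when d is of order log N.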
The next proposition is our main result on the eigenvector delocalization of matrices M as defined in (2.1). For its formulation, we need the following notion of high-probability events. We say that a (possibly N-dependent) event Ξ occurs with very high probability if for each constant ν > 0 there is a constant C > 0 such that P(Ξ) ≥ 1 − CN^{−ν} for all sufficiently large N. Moreover, we say that an event Ξ occurs with very high probability on an event Ω if for each constant ν > 0 there is a constant C > 0 such that P(Ξ ∩ Ω) ≥ P(Ω) − CN^{−ν} for all sufficiently large N. The eigenvector delocalization of M turns out to depend on the behaviour of the ℓ²-norms of the columns of H, i.e. of the quantities Σ_y |H_xy|². The proof of Proposition 2.1 will be given directly after the statement of Lemma 3.2 below.
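As a simple illustration of this notion (a standard observation recorded here for convenience, not quoted from the text), very high probability is stable under intersections of polynomially many events: if K ≤ N^C for a constant C and Ξ_1, . . . , Ξ_K each hold with very high probability, with constants uniform in k, then ∩_{k≤K} Ξ_k also holds with very high probability, since for any ν > 0, choosing the uniform constant C' associated with the exponent ν + C gives

P((∩_k Ξ_k)^c) ≤ Σ_k P(Ξ_k^c) ≤ N^C · C' N^{−ν−C} = C' N^{−ν}.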

Delocalization for Erdős-Rényi graphs – proof of Theorem 1.1
Let now A be the adjacency matrix of the Erdős-Rényi graph G(N, d/N). For any vertex x ∈ [N], we define its normalized degree

α_x := (1/d) Σ_{y ∈ [N]} A_xy.    (2.4)

The next lemma provides tail bounds for the extreme normalized degrees.
We defer the proof of Lemma 2.2 to the end of this section and first combine it with Proposition 2.1 to prove Theorem 1.1; the assumptions of Proposition 2.1 are satisfied by A with very high probability (see [1, Remark 4.3] for details).
Proof of Theorem 1.1. For d ≥ C log N, Theorem 1.1 has been proved in [12]. Hence, we restrict to the regime (2.2) in the remainder of this proof.
We start with the proof of (ii). In the third step of the estimate we used Stirling's approximation.
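The constants 1 and b_* can be read off from a back-of-the-envelope tail computation for the degrees, of the kind the Stirling estimate above makes rigorous; the computation below is heuristic and only for orientation. Write D_x := Σ_y A_xy for the degree of x, so that α_x = D_x/d. Then D_x is Binomial(N − 1, d/N), hence approximately Poisson(d), so for fixed a > 1,

P(D_x ≥ a d) = exp(−d (a log a − a + 1)(1 + o(1))),    P(D_x = 0) = exp(−d(1 + o(1))).

With d = b log N and a = 2 this gives

P(α_x ≥ 2) ≈ N^{−b(2 log 2 − 1)} = N^{−b(log 4 − 1)},    P(D_x = 0) ≈ N^{−b}.

By a union bound over the N vertices, vertices with α_x ≥ 2 stop occurring precisely when b(log 4 − 1) > 1, i.e. b > b_*, and isolated (more generally, very-low-degree) vertices stop occurring precisely when b > 1, matching the two critical values in Theorem 1.1 and the singularity criteria described in the introduction.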

Conditional local law for sparse matrices and proof of Proposition 2.1
We now introduce the notation required for the main result of this section, Theorem 3.1 below. We start with the indicator functions

ψ_l := 1(β_x ≥ κ for all x),    ψ_u := 1(β_x ≤ 2 − κ for all x),

which impose lower and upper bounds on all β_x, respectively. Moreover, we define the associated spectral domains S_l and S_u. For z ∈ C with Im z > 0 and α ≥ 0, we introduce m(z) and m_α(z) through (3.1). Theorem 3.1 states that the diagonal Green function entries of M are well approximated by m_{β_x}(z) for all z ∈ S_# with very high probability.

Lemma 3.2. Let κ > 0 be a constant. If α ≥ κ and z ∈ S_l, or α ≤ 2 − κ and z ∈ S_u, then

|m_α(z)| ≲ 1.    (3.2)

Proof. If α ≥ κ then the boundedness of m_α(z) for z ∈ S_l follows from (3.1) and (A.2). We now assume α ≤ 2 − κ and z ∈ S_u. If α ≤ 2 − κ and |Re z| ≥ 2 − κ/2 then |z + αm| ≥ κ/2 by (A.1) and, hence, the boundedness follows from (3.1). If α ≥ κ/2 and κ ≤ |Re z| ≤ 2 − κ/2 then the boundedness has been established before. Finally, if α ≤ κ/2 then |z + αm| ≥ κ/2 due to (A.1) and |z| ≥ κ for z ∈ S_u.
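To make the last case explicit, here is the short estimate it rests on; it assumes, as suggested by the proof above but not stated in this excerpt, that m_α(z) = (−z − α m(z))^{−1} as in (3.1) and that |m(z)| ≤ 1 on the upper half-plane (as for the semicircle transform). If α ≤ κ/2 and z ∈ S_u, so that |z| ≥ κ, then

|z + α m(z)| ≥ |z| − α |m(z)| ≥ κ − κ/2 = κ/2,

and hence |m_α(z)| = |z + α m(z)|^{−1} ≤ 2/κ ≲ 1.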

Proof of local law -Theorem 3.1
For the proof of Theorem 3.1, we call a vertex x ∈ [N] typical if it satisfies a certain z-dependent condition. We denote the set of typical vertices by T. Note that T depends on the spectral parameter z.
Furthermore, we introduce the control parameter Λ and the indicator function φ_t with t > 0.

Proof of Lemma 3.3. This follows directly from some results of [1]. We recall the definition θ := 1(max_{x,y} |G_xy| ≤ Γ) from [1, eq. (4.19)]. Note that ψ_# φ_t ≤ θ on S_# by (3.2) if Γ is chosen large enough. Moreover, 1(Im z = 1) ≤ θ, as |G_xy| ≤ (Im z)^{−1} trivially and Γ ≥ 1.

Averaging over x ∈ T in Lemma 3.3 shows that s(z) satisfies an approximate version of the self-consistent equation (3.4) for m(z) defined in (3.1). The self-consistent equation (3.4) has another solution, denoted by m̃(z), whose imaginary part is negative.

(iii) If |m(z) − m̃(z)| ≥ 2(log N)^{−1/7} then, with very high probability,

Proof of Lemma 3.4. We first note that throughout the following arguments it suffices to consider the diagonal terms, G_xx − m_{β_x}, in the definition of Λ, since θ max_{x≠y} |G_xy| ≤ (log N)^{−1/2} with very high probability by [1, eq. (4.38b)]. We refer to the proof of Lemma 3.3 for the definition of θ and the proof that θ = 1 in all cases considered in Lemma 3.4. Let t > 0 be a constant. By averaging over x ∈ T in Lemma 3.3, we conclude that the approximate self-consistent equation for s(z) holds for z ∈ S_# with very high probability. Therefore, standard stability estimates for (3.4) (e.g. [12, Lemma 4.4]) yield (3.10) for z ∈ S_# with very high probability. If Im z = 1 then we conclude from (3.10) that |m − s| ≤ (log N)^{−1/6}, since Im s > 0, Im m > 0, Im m̃ < 0 and |m − m̃| ≳ 1. Together with Lemma 3.3, for x ∈ T, this implies (3.12) with very high probability if Im z = 1, where ε_x = O((1 + β_x)(log N)^{−1/6}). Since m_{β_x} is the Stieltjes transform of a probability measure on R, we have |m_{β_x}| ≤ (Im z)^{−1} and Im m_{β_x} ≤ 1 if Im z = 1. Hence, (3.12) implies |G_xx − m_{β_x}| ≲ (log N)^{−1/6} with very high probability. This proves (i).
For the proof of (ii), we use that |m_{β_x + ε_x}| ≲ 1 due to (3.1) and |m_{β_x}| ≲ 1 on S_u by (3.2). Applying these estimates to the right-hand side of (3.12) completes the proof of (ii).
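The roles of m, m̃ and s above can be illustrated on a model case; the computation below assumes that (3.4) is of the semicircle type w² + zw + 1 = 0, which is not stated in this excerpt and serves only as an illustration. The two solutions are

w_±(z) = (−z ± √(z² − 4))/2,

with the branch of the square root chosen so that Im w_+(z) > 0 for Im z > 0; since w_+ w_− = 1, the other solution satisfies Im w_−(z) < 0. For Im z = 1 one checks that |w_+(z) − w_−(z)| = |√(z² − 4)| ≥ 2. In this model case m corresponds to w_+ and m̃ to w_−, and an approximate solution s with Im s > 0 must then be close to w_+ rather than w_−, which is the mechanism behind the conclusion |m − s| ≤ (log N)^{−1/6} at Im z = 1.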
We now establish Theorem 3.1 by showing that φ_7 = 1 = φ_8 for Im z = 1 and bootstrapping this information to small values of Im z using Lemma 3.4.
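The bootstrap from Im z = 1 to small Im z is a continuity argument; the following is a generic sketch, under the illustrative assumptions that Λ is Lipschitz in z with constant (Im z)^{−2}, that Im z ≥ N^{−1} on the domain considered, and that Lemma 3.4 forbids values of Λ in the interval ((log N)^{−1/7}, 2(log N)^{−1/7}); none of these assumptions is quoted from the text. Fix Re z and a grid 1 = η_0 > η_1 > . . . > η_K = Im z with η_k − η_{k+1} ≤ N^{−3}. At η_0 one has Λ ≤ (log N)^{−1/7}. Decreasing η_k to η_{k+1} changes Λ by at most N^{−3} (Im z)^{−2} ≤ N^{−1}, so Λ cannot jump across the forbidden interval, and it therefore remains ≤ (log N)^{−1/7} at every η_k, in particular at the target value of Im z.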
For the second bound, we note that m_{β_x} = m + O((log N)^{−1/3}) for x ∈ T by (3.3) and (3.5), while |G_xx − m_{β_x}| + |m_{β_x}| + |m| ≲ 1 for x ∉ T by (A.1). Therefore, the second bound in Theorem 3.1 follows from averaging the first bound in Theorem 3.1 over x ∈ [N], distinguishing the cases x ∈ T and x ∉ T and using (3.8).
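Spelled out, the averaging step has the following shape; this is only a sketch, in which the precise form of the second bound is not reproduced and the role of (3.8) as a bound on the proportion |T^c|/N of atypical vertices is an assumption made for illustration:

|(1/N) Σ_{x∈[N]} (G_xx − m)| ≤ (1/N) Σ_{x∈T} (|G_xx − m_{β_x}| + |m_{β_x} − m|) + (1/N) Σ_{x∉T} (|G_xx − m_{β_x}| + |m_{β_x}| + |m|)
    ≲ max_{x∈T} |G_xx − m_{β_x}| + (log N)^{−1/3} + |T^c|/N,

where the first two terms are controlled by the first bound in Theorem 3.1 together with (3.3) and (3.5), and the last term by (3.8).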
In the case # = l, Theorem 3.1 can be proved by adjusting some arguments from [1] as we explain in the next remark.