HOW MANY REAL ZEROS DOES A RANDOM DIRICHLET SERIES HAVE?

Abstract. Let F(σ) = ∑_{n=1}^∞ X_n n^{−σ} be a random Dirichlet series, where (X_n)_{n∈N} are independent standard Gaussian random variables. We compute in quantitative form the expected number of zeros of F(σ) in the interval [T, ∞), say EN(T, ∞), as T → 1/2+. We also estimate higher moments, and from these we derive exponential tails for the probability that the number of zeros in the interval [T, 1], say N(T, 1), is large. We also consider almost sure lower and upper bounds for N(T, ∞). Finally, we prove results for another class of random Dirichlet series.


1. Introduction.
Around 1938, in a series of papers [13,14,15,16], Littlewood and Offord proved estimates for the average number of real roots of a random polynomial p(z) = X_0 + X_1 z + ... + X_n z^n, where (X_j)_{j=0}^n are random variables. In 1943, inspired by the first of these papers, Kac [11] presented a formula for the expected number of these real roots in the Gaussian case. From this formula he deduced that if n is the degree of the random polynomial, and if (X_j)_{j=0}^n are independent standard Gaussian variables, then the expected number of real roots is (2/π + o(1)) log n, as n → ∞. An analogous statement for random variables with other distributions is also true, but proving it has turned out to be a great challenge over the last century, for instance when (X_j)_{j=0}^n are Rademacher random variables (see the 1956 paper [8] by Erdős and Offord for the Rademacher case, and the papers [9,10] by Ibragimov and Maslova for other distributions).
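Kac's asymptotic can be illustrated numerically. The sketch below is our addition, not part of the original text; the degree n = 100 and the grid parameters are arbitrary choices. It evaluates the expected number of real roots via the Edelman-Kostlan density attached to the covariance kernel r(x, y) = E p(x)p(y) = ∑_{j=0}^n (xy)^j, and compares it with (2/π) log n:

```python
import numpy as np

def kac_density(t, n, h=1e-4):
    """Edelman-Kostlan density of real roots of p(t) = sum_j X_j t^j:
    (1/pi) * sqrt( d^2/dxdy log r(x,y) ) at x = y = t, where
    r(x, y) = E p(x) p(y) = sum_j (x*y)^j for i.i.d. standard Gaussian X_j."""
    j = np.arange(n + 1)

    def logr(x, y):
        return np.log(np.sum((x * y) ** j))

    # central finite difference for the mixed second partial of log r
    mixed = (logr(t + h, t + h) - logr(t + h, t - h)
             - logr(t - h, t + h) + logr(t - h, t - h)) / (4 * h * h)
    return np.sqrt(max(mixed, 0.0)) / np.pi

def expected_real_roots(n, grid=4001):
    # t -> 1/t swaps the coefficient vector with its reversal, which has the
    # same law, so the expected number of roots with |t| > 1 equals the one
    # with |t| < 1: it suffices to integrate over (-1, 1) and double.
    ts = np.linspace(-0.9995, 0.9995, grid)
    vals = np.array([kac_density(t, n) for t in ts])
    return 2 * float(np.sum((vals[1:] + vals[:-1]) * np.diff(ts)) / 2)

est = expected_real_roots(100)
kac = (2 / np.pi) * np.log(100)   # leading term of Kac's asymptotic
```

For n = 100 the numerical value exceeds the leading term (2/π) log 100 by a bounded amount, as the asymptotic predicts.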
In the past 50 years, this beautiful theory has evolved in depth and in many directions; we refer to the papers [6,18] by Do, Nguyen, Vu and by Nguyen, Vu for a short survey and a nice state of the art on this topic.
In Analytic Number Theory, the location of the zeros of certain analytic functions is of utmost importance. For instance, the location of the zeros of the analytic continuation of the Riemann zeta function has deep connections with the distribution of prime numbers. Here and throughout this paper, s denotes a complex number s = σ + it.
The Riemann ζ function is a particular case of a Dirichlet series, and here we are interested in the case where we replace the constant 1 by random variables, i.e., F(s) = ∑_{n=1}^∞ X_n n^{−s}, where (X_n)_{n∈N} are i.i.d. Gaussian random variables with mean 0 and variance 1.
This random Dirichlet series F(s) is, almost surely, convergent if and only if s lies in the complex half-plane Re(s) = σ > 1/2, due to the Kolmogorov Three-Series Theorem and to classical results for general Dirichlet series. These series have been studied recently by the authors [3], where a Law of the Iterated Logarithm (LIL) was proved describing the almost sure fluctuations of F(σ) as σ → 1/2+ (in the Rademacher case), and by Buraczewski et al. [4], who considered a more general class containing this particular random series and proved a LIL and other convergence theorems.
A key difference between the zeros of this random Dirichlet series F(s) and those of the Riemann zeta function is that ζ(s) has no real zeros¹ in the half-plane Re(s) > 0, while in the random case there are infinitely many real zeros accumulating to the right of 1/2, almost surely; see [2].
Throughout this paper we specialize to real zeros, but in several places we will look at F(s) for a given complex number s. Our target in this paper is to prove results for F(σ) for real σ > 1/2. For 1/2 < T < U, N(T, U) denotes the number of real zeros of F(σ) in the interval [T, U], where each zero is counted without multiplicity, and U can be either a real number or ∞. Since F(σ) is an analytic function, N(T, U) < ∞ for all T > 1/2 and U < ∞, almost surely.
¹ Indeed, ζ(s) = (1 − 2^{1−s})^{−1} ∑_{n=1}^∞ (−1)^{n+1} n^{−s}, and this alternating series is well defined for all Re(s) = σ > 0. The fact that ζ has no real zeros there follows from the fact that the sequence (1/n^σ)_{n∈N} is decreasing and the series is alternating.
As far as we are aware, little attention has been given to zeros of random Dirichlet series in the literature. We found a nice geometric point of view on the expected number of zeros of a general random series of functions by Edelman and Kostlan, see [7]. For the case of our random Dirichlet series, the following formula appears in [7]:

(1) EN(T, U) = (1/π) ∫_T^U √( (d/ds)(ζ′(s)/ζ(s)) |_{s=2σ} ) dσ.

We recall that our random Dirichlet series is almost surely convergent at every complex number to the right of s = 1/2, and divergent at this point and at every complex number to the left of it. By the formula above, for any 1/2 < T < U < ∞ the function inside the integral on the right-hand side of (1) is continuous, and hence EN(T, U) < ∞. However, the point s = 1/2 is almost surely a singularity of the analytic function given by the random Dirichlet series F(s), cf. [2]. The first aim of this paper is to make formula (1) quantitative as T approaches this singularity at s = 1/2.
Theorem 1.1. There exist δ > 0 and constants c_0, (c_n)_{n≥2} such that, for all T ∈ (1/2, 1/2 + δ):

(2) EN(T, ∞) = (1/(2π)) log(1/(T − 1/2)) + c_0 + ∑_{n=2}^∞ c_n (T − 1/2)^n.

Remark 1.1. Considering the Laurent expansion of ζ around its simple pole at s = 1:

(3) ζ(s) = 1/(s − 1) + ∑_{n=0}^∞ ((−1)^n/n!) γ_n (s − 1)^n,

where γ_n is called the n-th Stieltjes constant, it is possible to show that the coefficients c_n in Theorem 1.1, for n ≥ 2, are given by 1/π times a polynomial p_n with rational coefficients in the variables (γ_n)_{n≥0}. In fact, these coefficients can be explicitly computed by formal expansion of power series. For instance, c_2 = (2γ_1 + γ_0²)/(2π).
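The logarithmic main term above can be sanity-checked directly from formula (1). The script below is our illustration (the test points T are arbitrary): it integrates the zero density between two points close to 1/2 and compares the difference with the predicted (1/(2π))-logarithmic growth, in which the constant c_0 and the higher-order terms cancel up to a negligible error:

```python
import mpmath as mp

def zero_density(sigma):
    """Integrand of (1): (1/pi) * sqrt( (zeta'/zeta)'(2*sigma) ), using
    (zeta'/zeta)' = zeta''/zeta - (zeta'/zeta)^2, positive for real s > 1."""
    s = 2 * mp.mpf(sigma)
    z0 = mp.zeta(s)
    z1 = mp.zeta(s, derivative=1)
    z2 = mp.zeta(s, derivative=2)
    return mp.sqrt(z2 / z0 - (z1 / z0) ** 2) / mp.pi

T1, T2 = mp.mpf('0.501'), mp.mpf('0.500001')
diff = mp.quad(zero_density, [T2, T1])   # = EN(T2, infty) - EN(T1, infty)
# leading term of (2): (1/2pi) * log( (T1 - 1/2) / (T2 - 1/2) )
pred = mp.log((T1 - mp.mpf('0.5')) / (T2 - mp.mpf('0.5'))) / (2 * mp.pi)
```

Near 1/2 the density behaves like 1/(2π(σ − 1/2)) plus a term that vanishes as σ → 1/2+, so `diff` and `pred` agree to several digits.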
1.1. Moment bounds. Another interesting investigation arises when we consider higher moments EN(T, 1)^k for a real number k ≥ 1. We were able to prove the following estimate.

Theorem 1.2 (Moment estimates).
There exists a constant C > 0 such that for all k ≥ 1 and all 1/2 < T < 1,

EN(T, 1)^k ≤ (Ck log(1/(T − 1/2)))^k.

As an application of this result, for C > 0 as in Theorem 1.2 and any fixed λ > 2C, by choosing k = λ/(2C) in Theorem 1.2 and making direct use of Chebyshev's inequality, we obtain the following corollary.
Corollary 1.1 (Exponential tails). There exist constants c, C > 0 such that for any 1/2 < T < 1 and any λ > C,

P( N(T, 1) ≥ λ log(1/(T − 1/2)) ) ≤ e^{−cλ}.

1.2. Almost sure bounds. We observe that the zeros of a random polynomial can be very distinct as the degree of the polynomial varies. Here we observe that, in our random Dirichlet series case, as T varies, N(T, ∞) is non-decreasing as T → 1/2+.
Therefore it becomes natural to consider almost sure limits, and this is the content of our next result.
Theorem 1.3 (Almost sure bounds). We have the following almost sure limits:

We believe that the upper bound above is close to optimal, and in the final section we discuss how our methods could perhaps be modified in order to obtain a lower bound of the form log(1/(T − 1/2)).
1.3. More general random Dirichlet series. We also compute the expected number of real zeros of random Dirichlet series of the form

F(σ) = ∑_{p∈P} X_p p^{−σ},

where p runs in increasing order over a set of positive real numbers P := {p_1 < p_2 < ...} with p_1 ≥ 1 and p_n → ∞, and the (X_p) are independent standard Gaussian random variables. We assume some regularity in the counting function π(x) := |{p ≤ x : p ∈ P}|:

(4) π(x) = x (log x)^α (1 + O(1/log x)), as x → ∞,

where α is a real number. As an example, the positive integers satisfy the quantitative statement above with α = 0, and the prime numbers with α = −1, due to the Prime Number Theorem.
We denote by N_α(T, U) the number of zeros in the interval [T, U] of the random series F(σ) associated to a set P satisfying (4). Regardless of the value of α, we have that F(s) converges for all Re(s) > 1/2 and diverges for all Re(s) < 1/2, almost surely.
By letting ζ_α(s) := ∑_{p∈P} p^{−s}, we see from [7] that (1) generalizes to

(5) EN_α(T, U) = (1/π) ∫_T^U √( (d/ds)(ζ_α′(s)/ζ_α(s)) |_{s=2σ} ) dσ.

It is important to observe that the assumption (4) is not enough to deduce good analytic properties of ζ_α(s) around its singularity at s = 1. Even so, a qualitative result, weaker in comparison with Theorem 1.1, can be obtained.
Theorem 1.4.As T → 1/2 + , we have that where c > 0 is a number that depends on the set P.

2. Notation
We use the standard notation: (1) f ≪ g or f = O(g); (2) f = o(g). The case (1) is used whenever there exists a constant C > 0 such that |f(x)| ≤ C|g(x)| for all x in a set of numbers. When not specified, this set is the real interval [L, ∞) for some L > 0, but there are also instances where this set can accumulate to the right or to the left of a given real number, or at a complex number. Sometimes we also employ the notation ≪_ε or O_ε to indicate that the implied constant may depend on ε.
In case (2), we mean that lim_x f(x)/g(x) = 0. When not specified, this limit is as x → ∞, but it can also be as x approaches any complex number in a specific direction.

3. Proof of the main results
3.1. The expected number of zeros. The essence of the proof of Theorem 1.1 is the complex-analytic theory of the Riemann zeta function, as we show below.
Proof of Theorem 1.1. We begin by recalling some well-known facts about the Riemann zeta function. Classically defined for Re(s) > 1 as ζ(s) = ∑_{n=1}^∞ n^{−s}, ζ actually has an analytic continuation to the whole complex plane except at s = 1, where it has a simple pole with residue 1.
In what follows, we will prove eq. (2) without specifying the values of c_n. Afterwards, we will indicate how to compute the coefficients c_n, n ≥ 2, as stated in Remark 1.1.
We begin by observing that

where the power series above is convergent in the open ball centered at s = 1 with radius 3, since the zero of ζ(s) closest to s = 1 is at s = −2. Thus, we reach

The first index starting at n = 2 above is justified by the fact that

and, since the power series converges absolutely, the integral of the sum is the sum of the integrals:

where Λ(n) is the classical von Mangoldt function². Therefore

By the general theory of Dirichlet series,

where the interchange between integration and summation is justified by the fact that the Dirichlet series converges absolutely for σ in the range [100, L], for any large L > 100. Therefore, the limit lim_{L→∞} EN(100, L) exists and is a real number. This completes the proof. □

² The von Mangoldt function is defined as follows: if n is a power of a prime, say n = p^m, then Λ(n) = log p; if n is not a prime power, then Λ(n) = 0.

Now we are going to justify that the coefficients satisfy c_n = p_n/π, where p_n is a polynomial with rational coefficients in the variables (γ_n)_{n≥0}.
Before doing that, we recall the following result from Complex Analysis, which can easily be obtained from Theorem 3.4, pg. 66, of the book of Lang [12]: Then, in an open ball centered at z = a, f(g(z)) can be represented by a convergent power series. Working carefully through the lemma above, we see that the final power series of the composition is obtained by formally expanding each inner power series at power n, and the resulting series is the sum over these expansions.
With that in hand, observe firstly that

A proof of this can be found in the book of Montgomery and Vaughan [17], pg. 168. Now we begin the proof of Theorem 1.2 with the following lemma concerning bounds of Dirichlet series at different points.
Proof. We have that

Now we write the sum above as a Riemann-Stieltjes integral:

Using integration by parts, we reach:

□
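The Dirichlet series identity −ζ′(s)/ζ(s) = ∑_{n≥1} Λ(n) n^{−s}, used in the proof of Theorem 1.1, can be sanity-checked numerically. The sketch below is our addition; the test point s = 3 and the truncation level are arbitrary choices:

```python
import math
import mpmath as mp

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^m for a prime p and some m >= 1, else 0."""
    if n < 2:
        return 0.0
    p = next(d for d in range(2, n + 1) if n % d == 0)  # smallest prime factor
    while n % p == 0:
        n //= p
    return math.log(p) if n == 1 else 0.0

s = 3
lhs = -mp.zeta(s, derivative=1) / mp.zeta(s)
rhs = sum(von_mangoldt(n) * n ** (-s) for n in range(2, 5001))
# truncation error is at most sum_{n > 5000} (log n)/n^3, far below 1e-6
```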
Now we recall a classical inequality for sums of independent random variables, Lévy's maximal inequality: Let X_1, ..., X_n be independent random variables.
Then, for all t ≥ 0,

A proof of this can be found in the nice book of Peña and Giné [5], pg. 4. In what follows we will need an infinite version of the inequality above:

To obtain this, we observe that the event inside the probability on the left-hand side of (7) increases, as n → ∞, to the event on the left-hand side of (8). Therefore, by the continuity of probabilities, to obtain (8) we only need to take the limit n → ∞ in (7).
Another inequality we will need is the following: Let x_1, ..., x_R be positive real numbers. Then, for all k ≥ 1,

(9) ∑_{i=1}^R x_i ≤ R^{1−1/k} ( ∑_{i=1}^R x_i^k )^{1/k}.

The proof of this inequality can be given by the following argument: let X be a random variable with uniform distribution over {x_1, ..., x_R}. Then the above inequality is just the moment bound EX ≤ (EX^k)^{1/k}.
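The inequality above is easy to stress-test numerically; the sketch below is our addition, with arbitrary random inputs and exponents:

```python
import random

def holds(xs, k):
    """Check sum x_i <= R^(1 - 1/k) * (sum x_i^k)^(1/k) for positive x_i, k >= 1."""
    R = len(xs)
    lhs = sum(xs)
    rhs = R ** (1 - 1 / k) * sum(x ** k for x in xs) ** (1 / k)
    return lhs <= rhs * (1 + 1e-9)   # tiny slack for floating-point rounding

random.seed(7)
trials = [[random.uniform(0.01, 10.0) for _ in range(random.randint(1, 30))]
          for _ in range(200)]
all_ok = all(holds(xs, k) for xs in trials for k in (1.0, 1.5, 2.0, 5.0, 10.0))
```

The check mirrors the uniform-distribution proof: dividing both sides by R turns the inequality into EX ≤ (EX^k)^{1/k}.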
We continue with the following:

where

Observe that F(σ_1) has variance ∼ 1/δ, and F(σ_0) has variance ∼ 2/(5δ). Therefore

where we used the inequality (9) in the last step above.

Now we will estimate E|log(

can be done analogously. We split

and this completes the proof of the lemma. □

We are ready for the:

Proof of Theorem 1.2. We let T = 1/2 + δ for some 0 < δ < 1/2. Let C_0^{(0)} be a circle with center σ_0 = 1/2 + 5δ/4 and radius δ/4. Let C_1^{(0)} be a circle with the same center σ_0 but with radius δ/2. Call the rightmost point of C

Set T_0 = T. The number of zeros N(T, 1) can be split as

where R is defined as the smallest positive integer n so that

By inequality (9), we have that

Say that the center of C

Thus, combining (9) with Lemma 3.3, we obtain constants C, D > 0 such that

and this finishes the proof. □

So, if Dc > 1, the probabilities above are summable and hence the Borel-Cantelli Lemma is applicable; that is, almost surely, for all n sufficiently large,

Observe that N(T) is non-decreasing as T → 1/2+. Now, for T_n ≤ T ≤ T_{n−1}, we have that

To pass the above estimate to N(T, ∞), we just observe that any Dirichlet series has a half-plane, inside its region of absolute convergence, in which there are no zeros.
A proof of this can be found in the book of Apostol [1], pg. 227. In our case, our random Dirichlet series converges absolutely, almost surely, in the half-plane Re(s) > 1. To complete the argument, we note that since a Dirichlet series is an analytic function, it cannot have an infinite number of real zeros between s = 1 and this half-plane where it does not vanish, unless it vanishes identically, which is not the case. □

Now we continue with the proof of Theorem 1.3. The lower bound. The proof of the lower bound will be divided into steps. Our idea is to consider the following quantities: let σ_n = 1/2 + 1/2^n and define

S^+(R) := ∑_{n=1}^R 1_{[F(σ_n)>0]},  S^−(R) := ∑_{n=1}^R 1_{[F(σ_n)<0]}.

Thus S^+(R) counts the number of times that F is positive along the sequence σ_n, and S^−(R) the number of times that F is negative along the same sequence.
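For intuition, the quantities S^±(R) can be simulated with a truncated series. The sketch below is our addition: the truncation level N is an arbitrary choice, and at the smallest σ_n the truncation is far from converged, so it only illustrates the bookkeeping of signs, not the limit behaviour:

```python
import numpy as np

rng = np.random.default_rng(11)
N = 200_000                         # truncation level of the series (arbitrary)
R = 12                              # test points sigma_n = 1/2 + 2^{-n}
X = rng.standard_normal(N)          # one sample of the Gaussian coefficients
n = np.arange(1, N + 1)

sigmas = 0.5 + 2.0 ** -np.arange(1, R + 1)
vals = np.array([np.sum(X / n ** s) for s in sigmas])  # truncated F(sigma_n)

S_plus = int(np.sum(vals > 0))      # times the truncated F is positive
S_minus = int(np.sum(vals < 0))
sign_changes = int(np.sum(np.sign(vals[1:]) != np.sign(vals[:-1])))
```

Each sign change between consecutive σ_n forces a real zero in between, which is exactly how S^± enter the lower bound.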
In what follows, we will prove a quantitative Law of Large Numbers for S^+(R) and for S^−(R). We first need the following result.
Lemma 3.4.Let F (σ) be our random Dirichlet series and σ n = 2 −1 + 2 −n .Then there exists a constant C > 0 such that Proof.We have that On the other hand, □ Lemma 3.5.Let F (σ) be our random Dirichlet series and σ n = 2 −1 + 2 −n .There exists a constant C > 0 such that for all k and l Proof.We begin by observing that F (σ k ) and F (σ l ) have joint Gaussian distribution.
This can be verified by checking that any linear combination of them has a Gaussian distribution; see the book of Shiryaev [19], pg. 301. In our case, for any real numbers a and b:

and since (X_n)_{n∈N} are i.i.d. standard Gaussians, we reach that the distribution of

Let X = F(σ_k)/√(var F(σ_k)) and Y = F(σ_l)/√(var F(σ_l)). Thus, the probability in the right-hand side above is P(X > 0, Y > 0). Observe that X and Y are standard Gaussians with correlation ρ, say. Let Z be another standard Gaussian random variable independent of X. Thus (X, Y) has the same distribution as (X, ρX + √(1 − ρ²) Z). With this we reach

Now we see that f(ρ) is the probability that the pair (X, Z) lies in a sector with angle θ, say. Since the distribution of (X, Z) is invariant under rotations, we have that f(ρ) = θ/(2π).
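The rotation-invariance step gives the classical orthant formula P(X > 0, Y > 0) = 1/4 + arcsin(ρ)/(2π), which can be checked by simulation. The snippet below is our illustration; the correlation ρ = 0.6 is an arbitrary test value:

```python
import numpy as np

rng = np.random.default_rng(2024)
rho = 0.6                                # arbitrary test correlation
n = 1_000_000
x = rng.standard_normal(n)
z = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho ** 2) * z  # (x, y): standard pair with corr rho
emp = np.mean((x > 0) & (y > 0))
# sector angle theta = pi/2 + arcsin(rho), so f(rho) = theta / (2*pi)
exact = 0.25 + np.arcsin(rho) / (2 * np.pi)
```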
So our target correlation is

Using the fact that tan^{−1}(x) = ∫_0^x dt/(1 + t²) ≤ x, and Lemma 3.4, we complete the proof of the lemma in the first case. The second case can be done analogously. □

Lemma 3.6. Let S^+(R) and S^−(R) be as above. Then, for all ε > 0,

Proof. The proof of this lemma is a direct application of a result in Probability theory for weak dependence, which says the following: Let Y_n be a sequence of square-integrable random variables such that

If for some increasing sequence of positive real numbers b_n we have that

A proof of this can be found in the book of Stout [20], pg. 28.
In our case, we apply the result above to Y_n = 1_{[F(σ_n)>0]}, and

We have that (11) is satisfied by Lemma 3.5. Now we take b_n = n^{1/2+ε}. Since 0 ≤ Y_n ≤ 1, (12) is also satisfied. Thus we get convergence of the series

Now we recall a particular case of Kronecker's Lemma (see the book of Shiryaev
By Lemma 3.6,

and the same holds for S^−.
In order to maximize the count of the number of zeros, we need V as small as possible. But the equality above says that, to guarantee a zero in the interval [σ_{R+V}, σ_R], we need V a bit larger than O_ε((R + V)^{1/2+ε}), where the implicit constant in this O_ε term might depend on ε and could be random. So, we seek a sequence

Indeed, such a property is satisfied by choosing R_n = [n^{2+8ε}] (here [x] stands for the integer part of x), since, by the mean value theorem for differentiable functions,

for some n ≤ θ_n ≤ n + 1, and then

So we showed that, almost surely, for all n sufficiently large, there is a zero of F(σ) in the interval [σ_{R_{n+1}}, σ_{R_n}]. The number of these subintervals has size proportional to R^{1/2−ε′} for some new small ε′ > 0. Indeed, if n is the largest positive integer k

Thus, for any ε > 0, almost surely, for all R sufficiently large, there are at least

Proof of Theorem 1.4. Just as in Theorem 1.1, we have that

Thus, we need to estimate, as σ → 1/2+, quantities of the form

where β = 0, 1, 2, and the last integral above is in the Riemann-Stieltjes sense.
We will present the details only for the case −2 < α < −1. The other cases can be treated similarly.
Apart from the fact that this function blows up as σ → 1/2+, we have that the exponent

where Q = {q_1 = 1 < q_2 < q_3 < ...} and q_n = p_n/p_1 for all n. Moreover,

∑_{q∈Q, q>1} ∫_{100}^∞ (log q) q^{−σ} dσ ≪ ∑_{q>1} q^{−100} < ∞.

This shows that EN(100, ∞) is a real number, and this ends the proof. □

4. Concluding remarks

4.1. Almost sure lower bound. We believe that our almost sure lower bound is far from being optimal, and we include it here for completeness. We describe an approach that could perhaps replace the exponent 1/2 by 1. If, instead of considering S^+ and S^−, we work directly with

then the number of zeros can be lower bounded directly by a rescaled quantity involving S(R). The problem that we could not solve is to compute the correlations of {1_{[F(σ_n)>0]} 1_{[F(σ_{n+1})<0]}}_{n≥1}, since this involves quadruple integrals, which are not as nice as those in Lemma 3.5. We hope to investigate this on another occasion.
where A(s) is an analytic function in an open ball centered at s = 1 with radius 1. Moreover, A(s) = O(|s − 1|²) as s → 1, and hence there exists a δ > 0 such that |A(s)| does not exceed 1/2 for all s in an open ball B of center 1 and radius δ. Thus, the function 1 + A(s) is analytic in this open ball B and has a power series representation.

The function (d/ds)(ζ′(s)/ζ(s)) is continuous and always positive in the real interval [1 + δ, 100], and hence EN(1/2 + δ/2, 100) is a real number. Let L > 100. Since √(a + b) ≤ √a + √b for all a, b ≥ 0, and 0 ≤ Λ(n) ≤ log n, we have that

Lemma 3.1. Let f be analytic in an open ball centered at w = g(a), where g is an analytic function in an open ball centered at a. Suppose that

where w = w(s) is an analytic function in some open ball centered at s = 1, and w = O(|s − 1|) as s → 1. Using the Taylor expansion 1/(1 − w) = ∑_{n=0}^∞ w^n, we obtain, in some open ball centered at s = 1 and by Lemma 3.1, that 1/ζ(s) is described around s = 1 by a Taylor series whose coefficients are polynomials with rational coefficients in the variables (γ_n)_{n≥0}. The same is true for ζ′(s)/ζ(s), since the product of two convergent power series in a ball is again a convergent power series in a, perhaps, smaller ball. Moreover, (d/ds)(ζ′(s)/ζ(s)) is described by a Laurent series whose coefficients are polynomials with rational coefficients in the variables (γ_n)_{n≥0}, and the same is true for A(s) defined above, and consequently for √(1 + A(s)), since the power series of √(1 + z) in the variable z has rational coefficients. The last step was to integrate (1/(2σ − 1)) √(1 + A(2σ)), and this keeps the target property. This justifies Remark 1.1.

3.2. Moment bounds. The proof is based on the following inequality involving the number of zeros of an analytic function and its maximal value on circles: Let F(s) be analytic in a domain containing the disc |s| ≤ R, let M be the maximal value of |F| on this disc, and assume that F(0) ≠ 0. Then, for r < R, the number of zeros of F in the disc |s| ≤ r does not exceed

(6) log(M/|F(0)|) / log(R/r).
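Inequality (6) can be illustrated on a toy example with known zeros. The sketch below is our addition: the polynomial and the radii are arbitrary, and the maximum M is approximated by sampling the boundary circle (where it is attained, by the maximum principle):

```python
import cmath
import math

# an analytic test function with known zeros (a polynomial)
zeros = [0.3, -0.2 + 0.1j, 0.05 - 0.25j, 1.7]

def F(s):
    prod = 1.0 + 0j
    for a in zeros:
        prod *= (s - a)
    return prod

R, r = 2.0, 0.5
# approximate M = max |F| on |s| = R by sampling 720 boundary points
M = max(abs(F(R * cmath.exp(2j * math.pi * k / 720))) for k in range(720))
count_r = sum(1 for a in zeros if abs(a) <= r)      # zeros in |s| <= r
bound = math.log(M / abs(F(0))) / math.log(R / r)   # right-hand side of (6)
```

Here three of the four zeros lie in |s| ≤ 1/2, and the bound comfortably dominates that count.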

[19], pg. 390): for any sequence of real numbers (a_n)_n and σ > 0, if the series ∑_{n=1}^∞ a_n n^{−σ} converges, then the partial sums satisfy ∑_{n≤x} a_n = o(x^σ). By applying this result to a_n = 1_{[F(σ_n)>0]} − 1/2 or a_n = 1_{[F(σ_n)<0]} − 1/2, we obtain the target result. □

We are ready for the Proof of the Lower bound. We see that if both S^+(R + V) − S^+(R) ≥ 1 and S^−(R + V) − S^−(R) ≥ 1, then F(σ) has at least one sign change in the interval [σ_{R+V}, σ_R], and consequently a zero in this interval.

4.2. General random Dirichlet series. It is interesting to observe from formula (5) that a constant λ > 0 such that π(x) = (λ + o(1)) x (log x)^α has no effect on the asymptotics of EN_α(T, U). Another interesting remark comes from the fact that we could deal with slightly more general random Dirichlet series if we allow extra weights {a_p ≥ 0 : p ∈ P}:

F(σ) = ∑_{p∈P} a_p X_p p^{−σ}.

All we need to do is make regularity assumptions on the partial sums π*(x) := ∑_{p≤x} a_p². In this case, formula (5) remains valid if we replace ζ_α by ζ*(s) := ∑_{p∈P} a_p² p^{−s}. The results of Theorem 1.4 remain unchanged if we work with assumptions on π*(x) instead of π(x), since all that matters is the behaviour of ζ*(s) around its singularity.