Limit theorems for additive functionals of random walks in random scenery

We study the asymptotic behaviour of additive functionals of random walks in random scenery. We establish bounds for the moments of the local time of the Kesten and Spitzer process. These bounds, combined with a previous moment convergence result (and an ergodicity result), imply the convergence in distribution of additive observables (with a normalization in $n^{1/4}$). When the sum of the observable is null, the previous limit vanishes and we prove the convergence in the sense of moments (with a normalization in $n^{1/8}$).

1. Introduction

1.1. Description of the model and of some earlier results. We consider two independent sequences $(X_k)_{k\ge 1}$ (the increments of the random walk) and $(\xi_y)_{y\in\mathbb{Z}}$ (the random scenery) of independent identically distributed $\mathbb{Z}$-valued random variables. We assume in this paper that $X_1$ is centered and admits finite moments of all orders, and that its support generates the group $\mathbb{Z}$. We define the random walk $(S_n)_{n\ge 0}$ by $S_0 := 0$ and $S_n := \sum_{i=1}^n X_i$ for all $n \ge 1$.
We assume that $\xi_0$ is centered, that its support generates the group $\mathbb{Z}$, and that it admits a finite second moment $\sigma_\xi^2 := \mathbb{E}[\xi_0^2] > 0$. The random walk in random scenery (RWRS) is the process defined by
$$Z_n := \sum_{k=0}^{n-1} \xi_{S_k} = \sum_{y\in\mathbb{Z}} \xi_y N_n(y),$$
where we set $N_n(y) := \#\{k = 0, \dots, n-1 : S_k = y\}$ for the local time of $S$ at position $y$ before time $n$. This process, first studied by Borodin [7] and by Kesten and Spitzer [32], describes the evolution of the total amount won until time $n$ by a particle moving according to the random walk $S$, starting with a null amount at time 0 and winning the amount $\xi_\ell$ each time the particle hits the position $\ell \in \mathbb{Z}$. This process is a natural example of a (strongly) stationary process with long time dependence. Due to the first works by Borodin [7] and by Kesten and Spitzer [32], we know that $(n^{-3/4} Z_{\lfloor nt\rfloor})_t$ converges in distribution, as $n$ goes to infinity, to the so-called Kesten and Spitzer process $(\sigma_\xi \Delta_t,\ t\ge 0)$, where $\Delta$ is defined by
$$\Delta_t := \int_{\mathbb{R}} L_t(x)\, d\beta_x,$$
with $(\beta_x)_{x\in\mathbb{R}}$ a Brownian motion and $(L_t(x),\ t\ge 0,\ x\in\mathbb{R})$ a version, jointly continuous in $t$ and $x$, of the local time process of a standard Brownian motion $(B_t)_{t\ge 0}$, where $(B_t)_t$ and $(\beta_s)_s$ are independent. The asymptotic behaviour of $(N_n(0))_n$ has been studied by Castell, Guillotin-Plantard, Schapira and the author in [14, Corollary 6], in which it has been proved that the moments of $(n^{-1/4} N_n(0))_{n\ge 1}$ converge to those of the local time $L_1(0)$ at position 0 and until time 1 of the process $\Delta$. The proof of this result was based on a multitime local limit theorem [14, Theorem 5] extending a local limit theorem contained in [13] and on the finiteness of the moments of $L_1(0)$ (which was a delicate question). We complete this previous work by establishing in Section 2 the following bounds for the moments of $L_1(0)$. Even if it uses some ideas that already existed in [14], the proof of Theorem 1 (given in Section 2) differs in many respects.
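The two expressions of $Z_n$ above (a sum along the trajectory, or a sum of scenery values weighted by local times) can be checked on a simulation. The following minimal sketch is illustrative only: the lazy walk and the Rademacher scenery are our own choices of admissible distributions, not the paper's.

```python
import numpy as np

def rwrs(n, rng):
    """Simulate Z_n = sum_{k<n} xi_{S_k} for a lazy simple walk and a
    Rademacher scenery (both centered, Z-valued, supports generating Z)."""
    steps = rng.choice([-1, 0, 1], size=n - 1, p=[0.25, 0.5, 0.25])
    S = np.concatenate(([0], np.cumsum(steps)))        # S_0, ..., S_{n-1}
    sites, counts = np.unique(S, return_counts=True)   # local times N_n(y)
    xi = dict(zip(sites, rng.choice([-1, 1], size=len(sites))))  # scenery
    Z_direct = sum(xi[s] for s in S)                   # sum_{k<n} xi_{S_k}
    Z_local = sum(xi[y] * c for y, c in zip(sites, counts))  # sum_y xi_y N_n(y)
    N0 = int(counts[sites == 0][0])                    # N_n(0); S_0 = 0
    return int(Z_direct), int(Z_local), N0

rng = np.random.default_rng(0)
Zd, Zl, N0 = rwrs(10_000, rng)
assert Zd == Zl   # the two expressions of Z_n coincide
assert N0 >= 1    # the walk starts at the origin
```

The quantities `Zd` and `N0` are the ones whose $n^{3/4}$ and $n^{1/4}$ scalings are discussed above.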
It indeed requires much more precise estimates, which changes the approach to the control of the moments. The proof of Theorem 1 relies on several auxiliary results; let us quickly summarize its strategy. We will prove (see (9), coming from [14], and (10)) an estimate involving $W_k := \mathrm{Vect}(L^{(1)}, \dots, L^{(k)})$ and $L^{(k+1)} := (L_{t_{k+1}} - L_{t_k})/(t_{k+1} - t_k)^{3/4}$ (normalized so that $|L^{(m)}|_{L^2(\mathbb{R})}$ has the same distribution as $|L_1|_{L^2(\mathbb{R})}$). We will prove, in Lemma 7, an asymptotic estimate (with constants $c, C > 0$ and a factor $m!$) as $m \to +\infty$ and, in Lemma 6, a bound involving $d(\cdot,\cdot)$, the distance associated with the $L^2$-norm on $L^2(\mathbb{R})$, and $\mathcal{V}_k$, the set of linear subspaces of $L^2(\mathbb{R})$ of dimension at most $k$. Theorem 1 will then follow from the next estimate, of independent interest, on the local time $L_1$ of the Brownian motion $B$ up to time 1.
Now we use the following classical argument for positive random variables. The upper bound provided by Theorem 1 allows us to prove that Carleman's criterion is satisfied for $\mathcal{E} L_1(0)$, where $\mathcal{E}$ is a centered Rademacher random variable independent of $L_1(0)$ and of $Z$; indeed, this holds for every $\eta_0 \in (0, \frac{3}{8})$. This enables us to deduce from [14, Corollary 6] that $n^{-\frac{1}{8}}\, \mathcal{E}\, N_n(0)$ converges in distribution to $\mathcal{E}\, \sigma_\xi^{-1} L_1(0)$, and so that (5) holds, where $\xrightarrow{\mathcal{L}}$ means convergence in distribution. This convergence in distribution is extended to more general observables. The proof of the moment convergence in Theorem 3 is a straightforward adaptation of [14] and is given in Appendix B. Due to Theorem 1 and to the above argument leading to (5), the convergence in distribution in Theorem 3 is a consequence of the moment convergence. Another strategy to prove the convergence in distribution in Theorem 3 consists in seeing this result as a direct consequence of (5) combined with Proposition 13, stating the ergodicity of the dynamical system $(\tilde\Omega, \tilde T, \tilde\mu)$ corresponding to
$$\tilde T^k\big((X_{m+1})_{m\in\mathbb{Z}}, (\xi_m)_{m\in\mathbb{Z}}, Z_0\big) = \big((X_{k+m+1})_{m\in\mathbb{Z}}, (\xi_{m+S_k})_{m\in\mathbb{Z}}, Z_k\big).$$
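For the reader's convenience, the standard form of Carleman's criterion invoked at the beginning of this paragraph can be recorded as follows (a classical statement, written here in our notation):

```latex
% Carleman's criterion: if a real random variable Y satisfies
\sum_{m \ge 1} \big( \mathbb{E}[Y^{2m}] \big)^{-\frac{1}{2m}} = +\infty ,
% then the law of Y is determined by its moments; consequently, if all the
% moments of a sequence (Y_n)_n converge to those of such a Y, then
Y_n \xrightarrow[n \to +\infty]{\mathcal{L}} Y .
% Here this is applied to Y = \mathcal{E} L_1(0): the odd moments vanish by
% symmetry of the Rademacher factor \mathcal{E}, and the even moments are
% controlled by the upper bound of Theorem 1.
```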
This dynamical system preserves the infinite measure $\tilde\mu := \mathbb{P}_{X_1}^{\otimes\mathbb{Z}} \otimes \mathbb{P}_{\xi_0}^{\otimes\mathbb{Z}} \otimes \lambda_{\mathbb{Z}}$, where $\lambda_{\mathbb{Z}}$ is the counting measure on $\mathbb{Z}$. Actually, thanks to (5) and to the recurrence and ergodicity of $(\tilde\Omega, \tilde T, \tilde\mu)$, we prove the following stronger version of the convergence in distribution of Theorem 3, where $\xrightarrow{\mathcal{L}}$ means convergence in distribution with respect to any probability measure absolutely continuous with respect to $\tilde\mu$.
Theorem 3 can be seen as a weak law of large numbers, with a non-constant limit. When $\sum_{a\in\mathbb{Z}} f(a) = 0$, the limit given by Theorem 3 vanishes, but then the next result provides a limit theorem for $\sum_{k=0}^{n-1} f(Z_k)$ with another normalization. This second result corresponds to a central limit theorem for additive functionals of RWRS. Let us indicate that, contrary to the moment convergence in Theorem 3, the next result is not an easy adaptation of [14], even if its proof (given in Section 4) uses the same initial idea (computation of moments using the local limit theorem) and, at the beginning, some estimates established in [13, 14]. Indeed, important technical difficulties arise from the cancellations coming from the fact that $\sum_{a\in\mathbb{Z}} f(a) = 0$.
In particular, for any $a \in \mathbb{Z}$, $(n^{-\frac{1}{8}}(N_n(a) - N_n(0)))_n$ converges in the sense of moments. Let us point out the similarity between these results and the classical Law of Large Numbers and Central Limit Theorem for sums of square integrable independent and identically distributed random variables. Indeed, Theorems 3 and 5 establish convergence results of the respective following forms as $n \to +\infty$, with $a_n \to +\infty$, $I$ an integral (with respect to the counting measure on $\mathbb{Z}$) and $Y^0_k$ a reference random variable with integral 1 (e.g. $Y^0_k = \mathbf{1}_0(Z_k)$; note that we cannot take $Y^0_k = 1$, since the constant 1 is not integrable with respect to the counting measure on $\mathbb{Z}$).
The summation order in the expression (6) of $\sigma_f^2$ is important. Indeed, recall that $\mathbb{P}(Z_k = 0)$ has order $k^{-\frac{3}{4}}$ and so is not summable. The sum over $k \in \mathbb{Z}$ appearing in (6) is a priori not absolutely convergent if $d = 1$: consider for example the case where $\xi_0$ is a centered Rademacher random variable (i.e. $\mathbb{P}(\xi_0 = 1) = \mathbb{P}(\xi_0 = -1) = \frac{1}{2}$) and where $f = \mathbf{1}_0 - \mathbf{1}_1$; then an explicit computation can be made for any $k \ge 0$. But $\sigma_f^2$ corresponds to the following sum of an absolutely convergent series (in $k$). Finally, let us point out that $\sigma_f^2$ defined in (6) corresponds to the Green–Kubo formula, well known to appear in central limit theorems for probability preserving dynamical systems (see Remark 14 at the end of Section 3).
Let us indicate that results similar to Theorem 5 exist for one-dimensional random walks, that is, when the RWRS $(Z_n)_{n\ge1}$ is replaced by the RW $(S_n)_{n\ge1}$, with other normalizations and with an exponential random variable instead of $L_1(0)$. Such results have been obtained by Dobrušin [21], by Kesten [31] and by Csáki and Földes [17, 18]. The idea used therein was to construct a coupling using the facts that the times between successive returns of $(S_n)_{n\ge1}$ to 0 are i.i.d., as are the partial sums of the $f(S_k)$ between these return times to 0, and that these random variables have regularly varying tail distributions. This idea has been adapted to dynamical contexts by Thomine [47, 48]. Still in dynamical contexts, another approach based on moments has been developed in [41, 42] in parallel to the coupling method. This second method, based on the local limit theorem, is well tailored to treat non-Markovian situations such as RWRS. Indeed, recall that the RWRS $(Z_n)_{n\ge1}$ is (strongly) stationary but far from being Markovian (for example, it has been proved in [14] that $Z_{n+m} - Z_n$ is more likely to be 0 if we know that $Z_n = 0$) and even more intricate conditionally on the scenery (it has been proved in [25] that the RWRS does not converge knowing the scenery). Luckily, local limit theorem type estimates enable us to prove moment convergence. But unfortunately Theorem 1 is not enough to conclude the convergence in distribution via Carleman's criterion.
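The return-time coupling idea recalled above can be visualized on a simulation: the successive zeros of $S$ cut the trajectory into excursions, the partial sums of $f(S_k)$ over distinct completed excursions are i.i.d. by the Markov property, and these block sums reconstruct $\sum_{k<n} f(S_k)$ exactly. A minimal illustrative sketch (the simple walk and the observable are our own arbitrary choices, not the constructions of [17, 18, 21, 31]):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
# simple random walk S_0 = 0, ..., S_{n-1}
S = np.concatenate(([0], np.cumsum(rng.choice([-1, 1], size=n - 1))))

def f(x):
    # a summable observable on Z, here f = 1_{0} - 1_{1}
    return 1.0 * (x == 0) - 1.0 * (x == 1)

zeros = np.flatnonzero(S == 0)       # successive visit times to 0 (includes k = 0)
# sums of f(S_k) over each completed excursion between consecutive zeros
blocks = [f(S[a:b]).sum() for a, b in zip(zeros[:-1], zeros[1:])]
tail = f(S[zeros[-1]:]).sum()        # last, possibly incomplete, piece
assert np.isclose(sum(blocks) + tail, f(S).sum())   # exact decomposition
```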
The paper is organized as follows. In Section 2, we prove Theorem 1 (bounds on the moments of the local time of the Kesten–Spitzer process) and Theorem 2 (estimate on the distance in $L^2(\mathbb{R})$ between the local time of a Brownian motion and a $k$-dimensional vector space). In Section 3, we establish the recurrence and ergodicity of the infinite measure preserving dynamical system $(\tilde\Omega, \tilde T, \tilde\mu)$ and obtain the convergence in distribution of Theorem 3 (Law of Large Numbers) as a byproduct of this combined with (5). Section 3 is completed by Appendix B, which contains the proof of the moment convergence of Theorem 3. In Section 4 (completed with Appendix A), we prove Theorem 5 (Central Limit Theorem).

Upper bound for moments: Proof of Theorem 1
This section is devoted to the study of the behaviour of $\mathbb{E}[(L_1(0))^m]$ as $m \to +\infty$. It has been proved in [14] that this quantity is finite, but the estimate established therein was not enough to apply the Carleman criterion. The proof of Theorem 1 requires a much more delicate study, even if it uses some estimates from [14]. We start by establishing bounds for $\mathbb{E}[(L_1(0))^m]$. Proof. Recall that it has been proved in [14, Theorem 3] that $\mathbb{E}[(L_1(0))^m]$ can be expressed in terms of $D_{t_1,\dots,t_m} := \big(\int_{\mathbb{R}} L_{t_i}(x) L_{t_j}(x)\,dx\big)_{i,j=1,\dots,m}$, where $(L_t(x))_{t\ge0,\,x\in\mathbb{R}}$ is the local time of the Brownian motion $B$. Since $\det D_{t_1,\dots,t_m}$ is a Gram determinant, we have the iterative relation
$$\det D_{t_1,\dots,t_{m+1}} = \det D_{t_1,\dots,t_m}\,\big(d(L_{t_{m+1}}, \mathrm{Vect}(L_{t_1},\dots,L_{t_m}))\big)^2,$$
where $d(f,g) = \|f - g\|_{L^2(\mathbb{R})}$ and where $\mathrm{Vect}(L_{t_1},\dots,L_{t_m})$ is the linear subspace of $L^2(\mathbb{R})$ generated by $L_{t_1},\dots,L_{t_m}$. The corresponding bound follows. But, for any $m \ge 1$, any $0 < t_1 < \dots < t_{m+1} < 1$ and any $k = 0, \dots, m-1$, the stated estimate holds, where $\mathcal{V}_k$ is the set of linear subspaces of dimension at most $k$ of $L^2(\mathbb{R})$, and where we used the independence of $(L_{t_{k+1}} - L_{t_k})(B_{t_k} + \cdot)$ with respect to $(B_s)_{s\le t_k}$ and the fact that $(L_{t_1}(B_{t_k}+\cdot), \dots, L_{t_k}(B_{t_k}+\cdot))$ is measurable with respect to $(B_s)_{s\le t_k}$. Thus, by induction and using the fact that the increments of $B$ are (strongly) stationary, the stated bound follows from (10) and (12), with the convention $t_0 = 0$. Recall that $(L_u(x))_{x\in\mathbb{R}}$ has the same distribution as $(\sqrt{u}\,L_1(x/\sqrt{u}))_{x\in\mathbb{R}}$, and so $(d(L_u, \mathrm{Vect}(g_1,\dots,g_k)))^2$ has the same distribution as the corresponding rescaled quantity: setting $a'_i := a_i/\sqrt{u}$ and making the change of variable $y = x/\sqrt{u}$, with $h_i(x) = g_i(\sqrt{u}\,x)$, (13) becomes the announced formula, which ends the proof of the lemma.
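The Gram-determinant recursion used in this proof is a general linear-algebra identity: for vectors $v_1, \dots, v_{m+1}$ of any inner-product space, $\det \mathrm{Gram}(v_1,\dots,v_{m+1}) = \det \mathrm{Gram}(v_1,\dots,v_m) \cdot d(v_{m+1}, \mathrm{Vect}(v_1,\dots,v_m))^2$. It can be checked numerically on stand-in finite-dimensional vectors playing the role of the $L_{t_i}$ (an illustration only, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(2)
V = rng.standard_normal((4, 10))        # 4 vectors of R^10 standing in for L_{t_i}
gram = lambda A: A @ A.T                # Gram matrix of the rows of A

# squared distance from the last vector to the span of the first three
Q, _ = np.linalg.qr(V[:3].T)            # orthonormal basis of Vect(v_1, v_2, v_3)
v = V[3]
dist2 = np.sum((v - Q @ (Q.T @ v)) ** 2)

lhs = np.linalg.det(gram(V))            # det Gram(v_1, ..., v_4)
rhs = np.linalg.det(gram(V[:3])) * dist2
assert np.isclose(lhs, rhs)             # the Gram recursion holds
```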
We first study the behaviour, as m → +∞, of the integral appearing in Lemma 6.
Observe that $\mathbb{E}\big[|L_1|_{L^2(\mathbb{R})}^{-1}\big] > 0$. Thus, the proof of Theorem 1 will be deduced from the two previous lemmas combined with Theorem 2, which can be rewritten as follows. Proof of the lower bound of Theorem 2. We prove the lower bound of (15). Let $\eta_0 \in (0, \frac{1}{2})$. Let $C_1$ be the Hölder constant of order $\frac{1}{2} - \eta_0$ of $L_1$. Let $V_k$ be the linear subspace of $L^2(\mathbb{R})$ generated by the set introduced below, and consider $L^k \in V_k$ defined accordingly. Let $K_0 > 0$. We will use the following fact: if $\sup_{[0,1]} |B| \ge \frac{k-1}{2k}$ and $C_1 \le K_0$, then the desired lower bound holds. The rest of this section is devoted to the proof of the upper bound of Theorem 2 (i.e. the upper bound of (15)), which is much more delicate to establish. To this end, we will prove a sequence of estimates. Let us first introduce the quantities used in this proof. We fix $\eta_0 > 0$ and $a, b, \eta, \gamma \in (0, \frac{1}{10})$ such that $0 < \frac{b}{8} < \frac{a}{2}$, small enough so that the conditions below hold. Let $\theta > 0$ be such that $(1 - 2\eta)\theta > 1$ and such that the further conditions below hold. The existence of such a $\theta$ is ensured by (17) and (18). Fix then $K$ such that $\frac{1}{4a-b} < K$, and $v_0 = \lceil 16/b \rceil$. We will also consider the following quantities, which depend on $k \ge 1$. We set $M := \lceil \theta k \rceil$ and $M' := M^d$. For $x > M'$, we also set the quantities below. Let $V$ be a linear space generated by $g_1, \dots, g_k \in L^2(\mathbb{R})$. Lemma 8. Uniformly on $x > M'$: Proof. Since $C_1$ admits moments of every order, the stated bound follows. Then, for every $\ell \in \mathbb{Z}$, the following estimate holds true. Lemma 9. The following estimate holds true uniformly on $x > M$: where $\sup_W$ denotes the supremum over the set of linear subspaces $W$ of $\mathbb{R}^M$ of dimension at most $k$, and where $Y'$ is a squared Bessel process of dimension 0 starting from $x$; so, due to the strong Markov property, this holds, and this, combined with (25), where the process involved has the same distribution as $Y'$, gives the conclusion. The lemma follows from (26) and (27).
Lemma 10. For every $K > (4a-b)^{-1}$, the following estimate holds true uniformly on $x > M$: Proof. Using the Burkholder–Davis–Gundy inequality, combined with the fact that $Y'$ is dominated by the square of a Brownian motion starting from $\sqrt{x}$, we observe that the stated bound holds with $C'_K = 2^{8K} C_K$, and the conclusion follows. Proof. Observe that the relevant set is the ball (for the supremum norm) of the stated radius, and is contained in the union of at most $(cR_x)^k$ Euclidean balls of radius $2\varepsilon$. The conclusion follows from this combined with (30). Due to [44, after Corollary 1.4, page 441], the distribution of $Y'((n+1)/x_0)$ knowing $Y'(n/x_0) = y$ is the sum of a Dirac mass at 0 and of a measure with density involving $I_1$, the modified Bessel function of index 1, which satisfies the classical asymptotics (see [35, (5.10.22) or (5.11.10)]). We will use the expressions of $x_0$, $x_1$ and $\varepsilon$ given in (21) and (31).
Thus, by using the Markov property and recalling that $M = O(k)$, the previous estimate combined with (33) and (32) ensures the stated bound, which ends the proof of the lemma.
Proof of the upper bound of Theorem 2. Formula (15) follows from (22) and Lemmas 9, 10 and 11. We will use the fact stated below. Thanks to this, the error terms in Lemmas 9 and 10 directly give a term in $O(M') = O(k^d)$.
Let us detail the term coming from Lemma 11. We first observe that the exponent of $(x/M')$ is strictly smaller than $-1$ for $k$ large enough. Indeed, this exponent can be computed explicitly and bounded by using the fact that $M = \lceil \theta k \rceil \ge \theta k$. The fact that this quantity is strictly smaller than $-1$ for any $k$ large enough comes from our conditions (17) and (19). It follows from this, combined with (35) and Lemma 11, that the corresponding contribution is controlled, where we used the fact that $M' = M^d$ and that $k \le \lceil \theta k \rceil/\theta = M/\theta$. Finally, we notice that (16) and (20) ensure the desired conclusion.

Law of large numbers: Proof of Theorem 3
We complete the sequence (X n ) n≥1 into a bi-infinite sequence (X n ) n∈Z of i.i.d. random variables. Theorem 3 could be proved by an adaptation of the proof of [14, Corollary 6] (combined with Theorem 1). We use here another approach enabling the study of more general additive functionals. Recall that (ξ m+S k ) m∈Z is the scenery seen from the particle at time k.
In particular, this combined with (5) ensures the stated convergence. Our approach to prove Proposition 12 uses an ergodic point of view. Let us consider the probability preserving dynamical system $(\Omega, T, \mu)$ given below. This system is known to be ergodic (see [49, 30]). We set $\Phi(x, y) := y_0$. With these notations, $Z_n$ corresponds to the Birkhoff sum $\sum_{k=0}^{n-1} \Phi \circ T^k$. Consider the $\mathbb{Z}$-extension $(\tilde\Omega, \tilde T, \tilde\mu)$ over $(\Omega, T, \mu)$ with step function $\Phi$, where $\lambda_{\mathbb{Z}} = \sum_{\ell\in\mathbb{Z}} \delta_\ell$ is the counting measure on $\mathbb{Z}$ and with $\tilde T(x, y, \ell) = (T(x, y), \ell + y_0)$.
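The skew-product dynamics $\tilde T(x, y, \ell) = (T(x, y), \ell + y_0)$ can be made concrete in a few lines: instead of shifting infinite sequences, one tracks the current scenery offset $p$ (which equals $S_k$ after $k$ iterations) and the fibre coordinate $\ell$ (which equals $Z_k$). The following small implementation on finite data is our own illustrative sketch; the names and the truncation to finite sequences are assumptions of the example, not the paper's.

```python
from itertools import accumulate

def z_extension_orbit(X, xi, n):
    """Iterate the skew product: the base map shifts the increments and
    translates the scenery by the current step; the fibre coordinate
    accumulates the scenery value currently seen at the origin."""
    p, ell, orbit = 0, 0, []
    for k in range(n):
        orbit.append(ell)   # ell = Z_k, the fibre coordinate
        ell += xi[p]        # Phi evaluated at the current state: xi_{S_k}
        p += X[k]           # base dynamics: translate the scenery by X_{k+1}
    return orbit

X = [1, -1, 1, 1, -1, 0, 1, -1]                      # finite increment data
xi = {-2: 1, -1: -1, 0: 2, 1: -1, 2: 1, 3: -2}       # finite scenery window
orbit = z_extension_orbit(X, xi, len(X))

# check against the direct definition Z_k = sum_{j<k} xi_{S_j}
S = [0] + list(accumulate(X))
assert orbit == [sum(xi[S[j]] for j in range(k)) for k in range(len(X))]
```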
Proof. Since $(\Omega, T, \mu)$ is ergodic and since $\Phi$ is integrable and $\mu$-centered, we know (by [46, Corollary 3.9] combined with the Birkhoff ergodic theorem) that $\mathbb{P}(Z_n = 0 \text{ i.o.}) = 1$, thus that $(\tilde\Omega, \tilde T, \tilde\mu)$ is recurrent (i.e. conservative). Now let us prove that this system is also ergodic. Let $g : \tilde\Omega \to (0, +\infty)$ be a positive $\tilde\mu$-integrable function such that $g(x, y, \ell) = g_0(\ell)$ does not depend on $(x, y) \in \Omega$ and with unit integral ($g$ is a probability density function with respect to $\tilde\mu$). By recurrence of $(\tilde\Omega, \tilde T, \tilde\mu)$, we know that the corresponding ratios are well defined. Since $(\tilde\Omega, \tilde T, \tilde\mu)$ is recurrent, the Hopf–Hurewicz theorem (see e.g. [1, p. 56]) ensures that, $\tilde\mu$-almost everywhere, the ratio converges, where $\mathcal{I}$ is the $\sigma$-algebra of $\tilde T$-invariant events. Thus the ergodicity of $(\tilde\Omega, \tilde T, \tilde\mu)$ will follow from the fact that $H_{(f,g)}$ is $\tilde\mu$-almost everywhere constant for every $f$ as above ($g$ can be fixed). Observe that, for $k > K$, the stated property holds; of course $g \circ \tilde T^k$ satisfies the same property. Thus, due to (36) and (37), it follows that the corresponding identity holds, where we write $\sigma$ for the usual shift on $\mathbb{Z}^{\mathbb{Z}}$ given by $\sigma((y_k)_{k\in\mathbb{Z}}) = (y_{k+1})_{k\in\mathbb{Z}}$. It follows that, for every $\ell \in \mathbb{Z}$, $H_{(f,g)}(\ell) = H_{(f,g)}(\ell + y_0)$. Since the support of $y_0$ generates the group $\mathbb{Z}$, we conclude that $H_{(f,g)}$ is $\tilde\mu$-almost everywhere equal to a constant. Note that the system in infinite measure $(\tilde\Omega, \tilde T, \tilde\mu)$ describes the evolution in time $m$ of $((X_{m+k+1})_{k\in\mathbb{Z}}, (\xi_{S_m+k})_k, Z_m)$. In comparison, the system corresponding to $((X_{m+k+1})_k, S_m)$ is also recurrent ergodic, but the analogous system corresponding to the scenery alone behaves differently. Proof of Proposition 12. Since $(\tilde\Omega, \tilde T, \tilde\mu)$ is recurrent ergodic, the Hopf ergodic theorem ensures that, for any $\tilde f \in L^1(\tilde\mu)$, the corresponding ratio converges $\tilde\mu$-almost everywhere; hence it converges almost surely to $I(\tilde f)$, and we have proved the first part of the proposition. The second part comes from the first part combined with (5) and the Slutsky theorem.
Proof of Theorem 4. The result follows from Proposition 12. We end this section with an interpretation of $\sigma_f^2$ in terms of the famous Green–Kubo formula.
Remark 14. Assume the assumptions of Theorem 5 and consider the function $f : \Omega \to \mathbb{Z}$ given below. Then $\sigma_f^2$ can be rewritten accordingly.

4. Proof of the central limit theorem: proof of Theorem 5

We start by stating key intermediate results. We recall that $d$ and $\alpha$ have been introduced at the beginning of Section 1.2.
A classical computation (detailed in Appendix A) ensures the following.
with $\theta = (\theta_j)_{j=1,\dots,m}$ and $\theta' = (\theta'_{j,s})_{j=1,\dots,m;\,s=1,\dots,s_j}$. For any event $E$ and any $I \subset [-\frac{\pi}{d}, \frac{\pi}{d}]^m \times [-\pi, \pi]^{M-m}$, we also set the corresponding quantity $B_{k,\ell,I,E}$. Let $\gamma < \min(L\theta, \frac{\eta\theta}{2M})$. Let $\theta' \in (0, \frac{\theta\eta}{2})$ be such that $\theta' \le \frac{\theta}{2} - 2ML\theta$. We consider the set $\Omega_k$ with components $\Omega_k^{(j)}$. The following lemma follows from [14] (see Appendix A for details). It will be useful to notice that, with $\mathcal{F} := \{y \in \mathbb{Z} : \forall (j,s),\ N'_{j,s}(y) = 0\}$, the corresponding identities hold on $\Omega_k$. Using a straightforward adaptation of the proof of [13, Proposition 10], we prove (see Appendix A) that the first estimate below holds uniformly on $k, \ell$ as in Proposition 15, where $I^{(1)}_k$ is the set of $(\theta, \theta')$ such that, for all $j = 1, \dots, m$, $|\theta_j| < n_j^{-\frac{1}{2}+\eta}$ and there exists $j' = 1, \dots, M$ satisfying the stated condition. It remains to estimate the integral over $I^{(2)}_k$, uniformly on $k, \ell$ as in Proposition 15; the next lemmas provide the required bounds uniformly on $k, \ell$ as above, as well as the asymptotics as $k_j/n \to t_j$ and $n \to +\infty$. We can now complete the proof of Proposition 15. The first two points of Proposition 15 come from the upper bounds provided by Lemmas 16, 17, 18, 19 and 20, with $E_k := \mathbb{E}\big[(\det D_k)^{-\frac{1}{2}} \mathbf{1}_{\Omega_k}\big]$. It remains to prove the last point of Proposition 15. We assume that $s_j = 1$ for all $j$ and that $k_j/n \to t_j$ as $n \to +\infty$. Recall that $d_0 = \min\{n \ge 1 : n\xi_0 \in d\mathbb{Z}\} = \min\{n \ge 1 : n\alpha \in d\mathbb{Z}\}$. Observe that, for every $a_j \in \mathbb{Z}$, there is a unique $k'_j \in \{0, \dots, d_0 - 1\}$ such that $a_j \in (k_j + k'_j)\alpha + d\mathbb{Z}$. Thus the sum can be rewritten accordingly. Finally, due to the last point of Lemma 20 and to the next lemma, this quantity is equivalent to the stated limit as $k_j/n \to t_j$ and $n \to +\infty$.

Corollary 22. [A rewriting of Theorem 5] Under the assumptions of Theorem 5,
Proof. Since $f$ is bounded, it is enough to prove the result for $n = n'd$. We start by writing the moment expansion, where $c_m$ is the number of orderings $(r_1, \dots)$, with $F_{n,\theta,m,s_1,\dots,s_m}$ the set of $M$-tuples $(k, \ell)$ of nonnegative integers, with $k = (k_j)_{j=1,\dots,m}$ and $\ell = (\ell_{j,s})_{j=1,\dots,m;\,s=1,\dots,s_j}$, such that, for all $j = 1, \dots, m$, $k_j \ge k_{j-1} + n^\theta$ (with the convention $k_0 = 0$) and, for all $j = 1, \dots, m$ and all $s = 1, \dots, s_j$, $0 \le \ell_{j,s} \le n^{L\theta}$; with this representation, we first study separately the following sums. We say that $(k, \ell)$ and $(k', \ell')$ belong to a same block if $k_r = k'_r$ for all $r \notin \mathcal{J}$, $\lfloor k_j/d \rfloor = \lfloor k'_j/d \rfloor$ for all $j \in \mathcal{J}$, and $\ell = \ell'$. A block is an equivalence class for this equivalence relation. We write $F'_{n,\theta,m,s_1,\dots,s_m}$ for the set of $(k, \ell)$ whose block is contained in $F_{n,\theta,m,s_1,\dots,s_m}$. We will see that the contribution of the sum over $F_{n,\theta,m,s_1,\dots,s_m} \setminus F'_{n,\theta,m,s_1,\dots,s_m}$ is negligible in (58). Indeed, observe that if $(k, \ell) \in F_{n,\theta,m,s_1,\dots,s_m} \setminus F'_{n,\theta,m,s_1,\dots,s_m}$, then at least one of the following conditions holds true. Let us fix $\mathcal{J}'' \subset \mathcal{J}$. Due to the first point of Lemma 20, the contribution to (58) of blocks having a type (a) or (b) problem at indices $\mathcal{J}''$ is controlled. The study of this quantity corresponds to (59) up to replacing $m$ by $m - \#\mathcal{J}''$ and deleting the indices in $\mathcal{J}''$; it will thus be in $o(n^{-\frac{M}{8}})$, as proved below. Now, using the $d$-block structure of $F'_{n,\theta,m,s_1,\dots,s_m}$, it follows from (38) that the sum can be rewritten; in particular, this contribution is negligible. This ends the proof of the first point of Corollary 22 (since, when $M$ is odd, we cannot have $M = 2m - \#\mathcal{J}$ and $\mathcal{J} = \emptyset$) and ensures the stated identity for $M$ even. Assume from now on that $\theta = \theta_0$ and that $M$ is even, $\mathcal{J} = \emptyset$ and $M = 2m$, which means that $s_j = 1$ for every $j = 1, \dots, m$, and let us estimate the following quantity. Note that, when $(k, \ell) \in F_{n,\theta,M/2,1,\dots,1}$, then $c_{(k,\ell)} = \frac{(2m)!}{2^{\#\{j : \ell_j = 0\}}}$.
Using this and applying Proposition 15 combined with the dominated convergence theorem, we obtain the stated convergence. The last part of Theorem 5 corresponds to the particular case $f = \delta_0 - \delta_a$; in this case, $\sigma_f^2$ takes an explicit form.

Appendix A. Proofs of technical lemmas for Theorem 5

Recall the context. Let $M \ge 1$, $\theta \in (0, 1)$, $\eta \in (0, \frac{1}{100})$, $L = \frac{\kappa\eta}{10M}$. Recall that $n_j = k_j - k_{j-1}$ (with the convention $k_0 = 0$). Assume $n^\theta < n_j < n$ and let $\ell_{j,1}, \dots, \ell_{j,s_j} \in \{0, \dots, \lfloor n^{L\theta} \rfloor\}$ with $\sum_{j=1}^m (1 + s_j) = M$.
Proof of Lemma 16. We start by writing the integral. But, due to the definition of $d$, for any $u, v \in \mathbb{Z}$, $\varphi_\xi(u + \frac{2\pi v}{d}) = (\varphi_\xi(\frac{2\pi}{d}))^v \varphi_\xi(u)$, and so the corresponding factorization holds for any $u \in \mathbb{R}^M$ and $v \in \mathbb{Z}^M$. Moreover, for any $a \in \mathbb{Z}$, the sum vanishes unless $v\alpha - a \in d\mathbb{Z}$, and then this sum is equal to $d$. This ends the proof of Lemma 16.
Proof of Lemma 18. Recall that F = y ∈ Z : ∀(j, s), N ′ j,s (y) = 0 . Due to (50), Lemma 18 follows from the following estimate uniformly on k, ℓ as in Proposition 15. To this end, we follow and slightly adapt the proof of [13, Proposition 10] as explained below. Observe that, up to conditioning with respect to (S k+1 − S k ) k ∈{k j−1 ,...,k j −1} , this will be a consequence of ∀j = 1, ..., m, ∀u ∈ R, uniformly on k j , ℓ j,s as above. Recall that #(Z \ F) ≤ m j=1 s j s=1 ℓ j,s ≤ M n Lθ . As in [13,after Lemma 16], we observe that, for n large enough, and that where, for all k ∈ Z, θ .
In particular $\mathbb{R} \setminus I = \bigcup_{k\in\mathbb{Z}} J_k$, where the $J_k$, $k \in \mathbb{Z}$, are defined below. Let $N_\pm$ be two positive integers such that the condition below holds; define $(C^\pm_k)_{k=1,\dots,T} \in \mathbb{Z}^T$ with $T = N_+ + N_-$, $C^+_k = N_+$ for $k \le N_-$ and $C^+_k = -N_-$ otherwise, and, symmetrically, $C^-_k = -N_-$ for $k \le N_+$ and $C^-_k = N_+$ otherwise. It has been proved in [13] (see Lemma 15 therein combined with the estimate $\mathbb{P}(D_n) = 1 - o(e^{-cn})$ in Section 2.8 therein) that, for $n$ large enough, the stated bound holds, with $E_j$ counting the sites $y \in \mathbb{Z}$ at which $C_j(y)$ exceeds the stated threshold. We also consider the first times (which are multiples of $T$) at which a peak of the form $C^\pm$ is based on the site $Y_i$, and we define $N^0_j(Y_i + N_+N_-)$ as the number of visits of $(S_{k_{j-1}+k} - S_{k_{j-1}})_{k\ge0}$ before time $n_j$ to $Y_i + N_+N_-$ which do not occur at these times. We proved in [13, Lemma 16] that, for any $H \ge 0$, the stated bound holds, where $b_j$ is a random variable with binomial distribution. Thus, conditionally to the above, these events are independent of each other and all happen with probability at least $1/3$. We conclude that the desired bound holds, where $B_j$ has a binomial distribution. By (63) and (64), there exists a constant $c'' > 0$ such that, for any $n$ large enough, the announced estimate holds. Proof of Lemma 19. We have to estimate $B_{k,\ell,I^{(2)}_k,\Omega_k}$ uniformly on $k, \ell$ as in Proposition 15, where $I^{(2)}_k = V_k \times [-\pi, \pi]^{M-m}$ and where $V_k$ is the set of $\theta \in \mathbb{R}^m$ such that, for all $j = 1, \dots, m$, $|\theta_j| < n_j^{-\frac{1}{2}+\eta}$, and such that there exists some $j_0 = 1, \dots, m$ satisfying condition (68). We define the events $H_k = \Omega_k \cap \{\forall y \in \mathbb{Z},\ |\sum_{j=1}^m \theta_j N'_j(y)| \le \varepsilon_0/2\}$ and $H'_k$ accordingly. Due to [14, Lemma 21 and last formula of p. 2446], the stated estimate holds uniformly on $k$ as above and uniformly on $\theta \in V_k$. Thus the corresponding bound follows, where we used the fact that $\int_{V_k} d\theta \le \prod_{j=1}^m n_j^{-\frac{1}{2}+\eta}$. Moreover, for $n$ large enough, it follows from the definition of $H'_k$, from (51) and (68), that the stated bound holds. Finally, it remains to estimate $B_{k,\ell,I^{(2)}_k,\Omega_k \cap H_k}$.
To this end we write the integral after the successive changes of variable $\theta''_j$. Note that $V''_k$ is the set of $(\theta''_1, \dots, \theta''_m)$ satisfying the corresponding constraints. Let us prove that, in the above formula, we can approximate the determinant of $D'_k$ by the one of $D_k$, whose entries are normalized by $(n_i n_j)^{-\frac{3}{4}}$. To this end, writing $\Sigma_m$ for the set of permutations of the set $\{1, \dots, m\}$ and $\kappa(\sigma)$ for the signature of $\sigma \in \Sigma_m$, we observe that the stated expansion holds on $\Omega_k$, where we used the Cauchy–Schwarz inequality together with the notations and estimates given after Lemma 17. Using (45) and (51), it follows that, on $\Omega_k$, the stated bound holds, together with the definition of $\Omega_k$. Therefore, on $\Omega_k$, $\det D'_k \ge \frac{1}{2} \det D_k$. Thus the conclusion follows, due to (71). Since all the eigenvalues of $D'_k$ are nonnegative ($D'_k$ being symmetric and nonnegative), it follows that all the eigenvalues of $D'_k$ are smaller than the trace of $D'_k$, and the stated control holds since $\eta\theta$, $\frac{\theta'}{2}$ and $\frac{3\gamma(m-1)}{2}$ are all strictly smaller than $\frac{\theta}{16}$. Hence the stated bound holds for any $p > 0$. This, combined with (69), (70) and (72), ends the proof of the lemma. It will be worthwhile to note that the previous estimate also holds true when $\lambda'_k$ is replaced by the smallest eigenvalue $\lambda_k$ of $D_k$.
Before proving Lemma 20, we state a useful coupling lemma allowing us to replace $\det D_k$ by a copy independent of $(N'_{j,s})_{j,s}$. Up to enlarging the probability space if necessary, we consider $X' = (X'_k)_{k\ge1}$, an independent copy of the increments $X = (X_k)_{k\ge1}$ of the random walk $S$. We then define the random walk $S''$ as follows, with $\ell_j := \max_{s=1,\dots,s_j} \ell_{j,s}$. We define $\Omega''_k$, $N''_j$ and $D''_k$ as we defined $\Omega_k$, $N'_j$ and $D_k$ (up to replacing $S$ by $S''$).

Lemma 24. Under the assumptions of Lemma 20,
with $k' \in \mathbb{Z}^m$ such that $k'_j = 0$ for all $j \notin J'$, with $S'_j = \{S_{k_j}, \dots, S_{k_j+d-1}\}$, and, uniformly on $k$, $\ell$ and on the remaining parameters,

Proof. We start by writing
where we set the quantities below. If we had $\sum_{a \in u + d\mathbb{Z}} f(a) = 0$ for all $u \in \mathbb{Z}$, the proof of Lemma 24 would be complete by noticing that the corresponding term is in $O(|\theta_j| + |\theta_{j+1}|)$, since $\sum_{a\in\mathbb{Z}} |a f(a)| < \infty$. Since we only assume here that $\sum_{a\in\mathbb{Z}} f(a) = 0$, we need a more delicate approach. We rewrite $F$ as follows, with $N'_{r,k'}(y) = \#\{u = k_{r-1} + k'_{r-1}, \dots, k_r + k'_r - 1 : S_u = y\}$. Note that $N'_{r,k'}(y) = N'_r(y)$ except maybe if $r \in J'$ and $y \in S'_r$, or if $r-1 \in J'$ and $y \in S'_{r-1}$. We order the elements of $J'$ as described below. Since $\sum_a f(a) = 0$, $F_1$ satisfies the following, with the convention $\Delta^0_{j'} = \mathrm{Id}$. The first part will easily be dominated by a product of factors $O(|\theta_{j'}| + |\theta_{j'+1}|)$ over $\{j' : \epsilon_{j'} = 0\}$. Let us study the second part of the formula, exploiting the fact that $\sum_{a\in\mathbb{Z}} f(a) = 0$. The difficulty here is that $k'$ appears both in $\prod_{j : \epsilon_j = 1} H_{j,k'_j}(0)$ and in $\Delta \dots \psi(k')$. The value of $(\epsilon_1, \dots, \epsilon_{J'})$ being fixed, we consider the set $J''$ of the $j' \in J'$ satisfying the stated condition, and where we set $k'_j$ for the vector of $\mathbb{Z}^m$ with $j$-th coordinate equal to $k'_j$, all the other coordinates being null. Let $J''_0$ be the set of $j \in J''$ such that $S'_j \cap \bigcup_{j'' \in J'' \setminus \{j\}} S'_{j''} = \emptyset$. Then the stated identity holds, the other coordinates being null, the notation $\Delta_{J'' \setminus J''_0}$ standing for the composition of all the operators $\Delta_j$ for $j \in J'' \setminus J''_0$. We conclude by using (77) and by the stated observation. The following lemma will be useful to estimate the term $F$ appearing in Lemma 24. It is not needed when $\sum_{a\in\mathbb{Z}} f(b + ad) = 0$ for all $b \in \mathbb{Z}$.
Lemma 25. For any $J' \subset \mathcal{J}$, Proof. It is enough to study the probability below, for any $m(j) \in J' \setminus \{j\}$ and $r_j, s_j \in \{0, \dots, d-1\}$. This probability is dominated by the stated expression, for all $p, v > 0$. We partition the set $J'$ by the equivalence relation generated by the relation $j \sim m(j)$. We write $R(j)$ for the class of $j$ and $\mathcal{R}$ for the set of these equivalence classes. Observe that the number of equivalence classes is at most $\lfloor \#J'/2 \rfloor$. We order the set $J'$ as $j'_1 < \dots < j'_{\#J'}$. We wish to estimate the sum below, where the sum is over the admissible families. Due to the local limit theorem and the independence of the increments of $S$, the above probability is controlled. Now let us control the cardinality of the admissible $(A_r, r \in \mathcal{R})$. To this end, consider the set of the smallest representatives of the classes in $\mathcal{R}$. Then the above quantity is smaller than the stated bound. Proof of Lemma 20. All the estimates below are uniform in $k$. For the first estimate, we have to estimate the integral below over the region where $|\theta_j| < n_j^{-\frac{1}{2}-\eta}$ for all $j$, where we set the quantities below. But, on $\Omega_k$, if $|\theta_j| \le n_j^{-\frac{1}{2}-\eta}$ for all $j = 1, \dots, m$, the stated bound holds as soon as $n$ is large enough (uniformly on $n_j \in [n^\theta, n]$). Thus $|E_{k,\ell}(\theta, \theta')|$ is dominated by the stated expression for $n$ large enough. Now, on $\Omega_k$, according to (51), the stated bound holds. It follows that the desired estimate holds, where we used the fact that $\xi$ admits a moment of order $2 + \kappa$, and that there exists $C_0 > 0$ such that the stated inequality holds, since $\varphi_\xi$ and $u \mapsto e^{-\frac{u^2}{2}}$ are Lipschitz continuous. Recall that it has been proved in [14, Lemma 21] that the stated estimate holds uniformly on $k$.
Assume now that $s_j = 1$ for all $j = 1, \dots, m$ (in particular $\mathcal{J} = \emptyset$). Then the integral can be rewritten, where $U_k$ is the set of $\theta'' = (\theta''_1, \dots, \theta''_m)$ satisfying the corresponding constraints. Moreover, it follows that, uniformly in $k$ and on $\Omega_k$, for all $p > 0$, the stated bound holds, as seen at the end of the proof of Lemma 19 (applied with $D_k$); the corresponding term is thus controlled, where $\lambda_k$ is the smallest eigenvalue of $D_k$. For the last term, we use (73) (applied for $D_k$), which ensures the stated bound on $\Omega_k$, where we used [14, Lemma 21], which ensures that $\mathbb{E}\big[(\det D_k)^{-\frac{3}{2}} \mathbf{1}_{\Omega_k}\big] = O(1)$ uniformly in $k$. This combined with (92) implies the desired estimate, since $L < \min\big(\frac{3m}{4M}, \frac{\kappa\eta}{4}\big)$ and since $L(M+1)\theta < \frac{3\theta}{2} - 3(m-1)\gamma$. The last step of the proof of the lemma consists in studying the following quantity. Due to Lemma 23, this converges to the corresponding expression in terms of $D_{t_1,\dots,t_m}$ as $k_j/n \to t_j$ and $n \to +\infty$. This ends the proof of the lemma.
Appendix B. Moment convergence in Theorem 3

Let $f : \mathbb{Z} \to \mathbb{R}$ be such that $\sum_{a\in\mathbb{Z}} |f(a)| < \infty$. In this appendix we prove that all the moments of $n^{-\frac{1}{4}} \sum_{k=0}^{n-1} f(Z_k)$ converge to those of $\sum_{a\in\mathbb{Z}} f(a)\, \sigma_\xi^{-1} L_1(0)$, as $n \to +\infty$.
Due to Theorem 1, it is enough to prove the convergence of every moment. The key result is the following proposition, as $n \to +\infty$ and $n_i/n \to T_i$, where $D_{t_1,\dots,t_k} = \big(\int_{\mathbb{R}} L_{t_i}(x) L_{t_j}(x)\,dx\big)_{i,j=1,\dots,k}$, with $L$ the local time of the Brownian motion $B$, the limit of $(S_{\lfloor nt\rfloor}/\sqrt{n})_t$ as $n$ goes to infinity.
Moreover, for every $k \ge 1$ and every $\vartheta \in (0, 1)$, there exists $C = C(k, \vartheta) > 0$ such that $\mathbb{P}[Z_{n_1} = a_1, \dots, Z_{n_1+\dots+n_k} = a_k] \le C$ times the stated quantity. The arguments of [14] are unchanged. The only difference in the proof concerns [14, Proposition 17] and more specifically [14, Lemma 23], for which there is a multiplication by $e^{-i \sum_{j=1}^k (a_j - a_{j-1})\theta_j}$ in the integral. The only difference in the proof of [14, Lemma 23] is that the quantity $I_{n_1,\dots,n_k}$ considered therein ($n_i$ corresponding to $\lfloor nT_i \rfloor - \lfloor nT_{i-1} \rfloor$) is slightly modified by the multiplication in the integral by a quantity converging in probability to 1 (with the notations of the proof of [14, Lemma 23]). Indeed, considering the real part of the integral, this quantity is $\cos\big(\sum_{j=0}^k (a_j - a_{j-1})(A^{-\frac{1}{2}}_{n_1,\dots,n_k} r)_j\big)$ (with the notations of [14, Lemma 23]), which is equal to 1 up to an error in $O\big(\min(1, \mu^{-1}_{n_1,\dots,n_k} |r|^2)\big)$, where $\mu_{n_1,\dots,n_k}$ is the smallest eigenvalue of $A_{n_1,\dots,n_k}$, which is proved to converge to 0 in [14, Lemma 23]; so the asymptotic behaviour of $I_{n_1,\dots,n_k}$ is the same as when $a_j \equiv 0$.
Proof of the convergence of moments in Theorem 3. Take $\vartheta < \frac{1}{4}$. Note that the last point of the lemma ensures that $\mathbb{P}[Z_{n_1} = a_1, \dots, Z_{n_1+\dots+n_k} = a_k] \le C$ times the stated quantity. Let $\alpha_0$ be such that $\alpha\alpha_0 \in 1 + d\mathbb{Z}$. Then $a_i \in q_i\alpha + d\mathbb{Z}$ is equivalent to $q_i \in a_i\alpha_0 + d\mathbb{Z}$. Thus