Moderate deviations and laws of the iterated logarithm for the renormalized self-intersection local times of planar random walks

Let $B_n$ be the number of self-intersections of a symmetric random walk with finite second moments on the planar integer lattice. We obtain moderate deviation estimates for $B_n - E\,B_n$ and $E\,B_n - B_n$, given in terms of the best constant of a certain Gagliardo-Nirenberg inequality. We also prove the corresponding laws of the iterated logarithm.


Introduction
Let $\{S_n\}$ be a symmetric random walk on $\mathbb{Z}^2$ with covariance matrix $\Gamma$. Let
$$B_n = \sum_{1\le j<k\le n} \delta(S_j, S_k),$$
where $\delta(x,y)$ is the usual Kronecker delta. We refer to $B_n$ as the self-intersection local time up to time $n$. We call $\gamma_n = B_n - E\,B_n$ the renormalized self-intersection local time of the random walk up to time $n$.
In [5] it was shown that $\gamma_n$, appropriately scaled, converges to the renormalized self-intersection local time of planar Brownian motion. Renormalized self-intersection local time for Brownian motion was originally studied by Varadhan [18] for its role in quantum field theory. Renormalized self-intersection local time turns out to be the right tool for the solution of certain "classical" problems such as the asymptotic expansion of the area of the Wiener sausage in the plane and the range of random walks; see [4], [14], [13].
One of the applications of self-intersection local time is to polymer growth. If $S_n$ is a planar random walk and $P$ is its law, one can construct self-repelling and self-attracting random walks by defining $dQ_n/dP = c_n e^{\zeta B_n/n}$, where $\zeta$ is a parameter and $c_n$ is chosen to make $Q_n$ a probability measure. When $\zeta < 0$, more weight is given to those paths with a small number of self-intersections, hence $Q_n$ is a model for a self-repelling random walk. When $\zeta > 0$, more weight is given to paths with a large number of self-intersections, leading to a self-attracting random walk. Since $E\,B_n$ is deterministic, by modifying $c_n$ we can write $dQ_n/dP = c_n e^{\zeta(B_n - E B_n)/n}$. It is known that for small positive $\zeta$ the self-attracting random walk grows with $n$, while for large $\zeta$ it "collapses," and its diameter remains bounded in mean square. It has been an open problem to determine the critical value of $\zeta$ at which the phase transition takes place. The work [2] suggested that the critical value $\zeta_c$ could be expressed in terms of the best constant of a certain Gagliardo-Nirenberg inequality, but that work was for planar Brownian motion, not for random walks. In the current paper we obtain moderate deviation estimates for $\gamma_n$, and these are in terms of the best constant of the Gagliardo-Nirenberg inequality; see Theorem 1.1. However, the critical constant $\zeta_c$ is different (see Remark 4.3), and it is still an open problem to determine it. See [6] and [7] for details and further information on these models.
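To be concrete about the renormalization in the polymer measure: since $E\,B_n$ is deterministic, the two normalizations differ only in the constant, namely
$$\frac{dQ_n}{dP} = c_n\, e^{\zeta B_n/n} = \tilde c_n\, e^{\zeta (B_n - E B_n)/n}, \qquad \tilde c_n = c_n\, e^{\zeta E B_n/n},$$
where $\tilde c_n$ is our notation for the modified normalizing constant.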
In the present paper we study moderate deviations of $\gamma_n$. Before stating our main theorem we recall one of the Gagliardo-Nirenberg inequalities:
$$\|f\|_4 \le C\, \|\nabla f\|_2^{1/2}\, \|f\|_2^{1/2},$$
which is valid for $f \in C^1$ with compact support, and can then be extended to more general $f$'s. We define $\kappa(2,2)$ to be the infimum of those values of $C$ for which the above inequality holds. In particular, $0 < \kappa(2,2) < \infty$. For further details, see [8].
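Equivalently (assuming the form of the inequality displayed above, which is our reconstruction), the best constant admits the variational description
$$\kappa(2,2) = \sup\left\{ \frac{\|f\|_4}{\|\nabla f\|_2^{1/2}\,\|f\|_2^{1/2}} : f \in C^1 \text{ with compact support},\ f \not\equiv 0 \right\},$$
since the infimum of admissible constants $C$ is attained at the supremum of the ratio.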
In this paper we will always assume that the smallest group which supports $\{S_n\}$ is $\mathbb{Z}^2$. For simplicity we assume further that our random walk is strongly aperiodic. We call Theorem 1.1 a moderate deviations theorem rather than a large deviations result because of the second restriction in (1.3). Our techniques do not apply when this restriction is not present, and in fact it is not hard to show that the value on the right hand side of (1.4) should be different when $b_n \approx n$; see Remark 4.3.
Moderate deviations for $-\gamma_n$ are more subtle. In the next theorem we obtain the correct rate, but not the precise constant.
Theorem 1.2 Suppose $E|S_1|^{2+\delta} < \infty$ for some $\delta > 0$. There exist $C_1, C_2 > 0$ such that for any $\theta > 0$ and any sequence $b_n \to \infty$ with $b_n = o(n^{1/\theta})$, ...

Here are the corresponding laws of the iterated logarithm for $\gamma_n$.
In this paper we deal exclusively with the case where the dimension $d$ is 2. We note that in dimension 1 no renormalization is needed, which makes the results much simpler; see [15], [9]. When $d \ge 3$, the renormalized intersection local time is in the domain of attraction of a centered normal random variable. Consequently the tails of the weak limit are expected to be of Gaussian type, and in particular the tails are symmetric; see [13].
Theorems 1.1-1.3 are the analogues of the theorems proven in [2] for the renormalized self-intersection local time of planar Brownian motion. Although the proofs for the random walk case have some elements in common with those for Brownian motion, the random walk case is considerably more difficult. The major difficulty is the fact that we do not have Gaussian random variables. Consequently, the argument for the lower bound of Theorem 1.1 needs to be very different from the one given in [2, Lemma 3.4]. This requires several new tools, such as Theorem 4.1, which we expect will have applications beyond the specific needs of this paper.

Integrability
Let $\{S'_n\}$ be an independent copy of the random walk $\{S_n\}$. Let
$$I_{m,n} = \sum_{j=1}^{m}\sum_{k=1}^{n} \delta(S_j, S'_k)$$
and set $I_n = I_{n,n}$. Thus ... In particular ... We also have ...

Proof. Using symmetry and independence, ... By [17, p. 75], ... It follows from the proof of [8, Lemma 5.2] that for any integer $k \ge 1$ ... Furthermore, by [13, (5.k)] we have that $I_n/n$ converges in distribution to a random variable with finite moments. Hence for any integer $k \ge 1$ ... Using [8, Theorem 5.1] with $p = 2$ and $a = m$, and then (2.4), (2.9) and (2.10), we obtain ... where $C > 0$ can be chosen independently of $m$ and $n$. Hence ... The conclusion then follows using the power series for $e^x$.
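As a small illustration of the first step (a sketch, using the definition of $I_{m,n}$ as reconstructed above): symmetry and the independence of the two walks give
$$E\,I_{m,n} = \sum_{j=1}^{m}\sum_{k=1}^{n} P(S_j = S'_k) = \sum_{j=1}^{m}\sum_{k=1}^{n} P(S_{j+k} = 0),$$
since $S_j - S'_k$ has the same law as $S_{j+k}$ when $\{S'_n\}$ is an independent symmetric copy; the local bound $P(S_i = 0) \le c/i$ of [17, p. 75] then controls the double sum.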
For any random variable $X$ we define $\bar X = X - E\,X$. We write
$$(m,n]^2_< = \{(j,k) \in (m,n]^2 : j < k\}. \quad (2.15)$$
For any $A \subset \{(j,k) \in (\mathbb{Z}^+)^2 : j < k\}$, write
$$B(A) = \sum_{(j,k)\in A} \delta(S_j, S_k). \quad (2.16)$$
In our proofs we will use several decompositions of $B_n$. If $J_1, \ldots, J_\ell$ are consecutive disjoint blocks of integers whose union is $\{1, \ldots, n\}$, we have
$$B_n = \sum_{i=1}^{\ell} B\big((J_i)^2_<\big) + \sum_{1\le i<i'\le \ell} B(J_i \times J_{i'}),$$
and also, for any $m < n$,
$$B_n = B_m + B\big((0,m]\times(m,n]\big) + B\big((m,n]^2_<\big).$$

Proof. We first prove that there is $c > 0$ such that ... We have ... (2.23) Using Hölder's inequality with $1/p = 1 - 2^{-n/2}$, $1/q = 2^{-n/2}$ we have ... Repeating this procedure, ... We now prove our lemma for general $n$. Given an integer $n \ge 2$, we have the following unique representation: ... (2.29) By Hölder's inequality, with $M$ as in (2.18), ... where the inequality follows from ... Using (2.32) and Lemma 2.1, we can take $c > 0$ so that ... In particular, this shows that ... By [12, Proposition 6.7], ... Since the last term is summable, it will contribute $O(n)$ to (2.40). Also, ... and our Lemma follows from the well known fact that
$$\sum_{j=1}^{n} \frac{1}{j} = \log n + \gamma + O(n^{-1}),$$
where $\gamma$ is Euler's constant.
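The repeated Hölder step quoted above has the following generic form (a sketch; $X$ and $Y$ stand for the two pieces of the relevant decomposition of $B_n$): for conjugate exponents $1/p + 1/q = 1$,
$$E\,e^{X+Y} \le \big(E\,e^{pX}\big)^{1/p}\,\big(E\,e^{qY}\big)^{1/q},$$
and iterating this with the nearly degenerate choice $1/p = 1 - 2^{-n/2}$, $1/q = 2^{-n/2}$ trades a small loss in the exponent of one factor for control of the other.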
If we only assume finite second moments, instead of (2.41) we use (2.7) and proceed as above.
Lemma 2.5 For any $\theta > 0$ ...

Proof. By Lemma 2.3 this is true for some $\theta_o > 0$. For any $\theta > \theta_o$, take an integer $m \ge 1$ such that $\theta m^{-1} < \theta_o$. We can write any $n$ as $n = rm + i$ with $1 \le i < m$. Then ... We claim that ... To see this, write ... Using (2.5) for $E\,B((0,mr]\times(mr,n]) = E\,I_{mr,i}$ and (2.38) for $E\,B((mr,n]^2_<)$ then completes the proof of (2.47).
Note that the summands in (2.46) are independent. Therefore, for some constant $C > 0$ depending only on $\theta$ and $m$, ... Then, by Chebyshev's inequality, for any fixed $h > 0$, ...
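Schematically, the last two steps combine as follows (a sketch, writing $X_i$ for the independent summands of (2.46)): for any $\theta > 0$ and $h > 0$,
$$P\Big(\sum_i X_i \ge h\Big) \le e^{-\theta h}\, E\, e^{\theta \sum_i X_i} = e^{-\theta h} \prod_i E\, e^{\theta X_i},$$
so the exponential moment bounds for the individual blocks translate directly into the tail estimate.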

Proof of Theorem 1.1

By the Gärtner-Ellis theorem ([11, Theorem 2.3.6]), we need only prove ... (3.1) Indeed, by the Gärtner-Ellis theorem the above implies that ... Using (2.45) we will then have Theorem 1.1. It thus remains to prove (3.1).
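For orientation, the version of the Gärtner-Ellis theorem being used has the following one-dimensional shape (a schematic statement only; see [11, Theorem 2.3.6] for the precise hypotheses): if $Z_n$ is a family of random variables, $b_n \to \infty$, and
$$\Lambda(\theta) = \lim_{n\to\infty} \frac{1}{b_n} \log E\, e^{\theta\, b_n Z_n}$$
exists, is finite near the origin, and is suitably smooth, then for suitable $\lambda > 0$,
$$\lim_{n\to\infty} \frac{1}{b_n} \log P(Z_n \ge \lambda) = -\sup_{\theta > 0}\big(\theta\lambda - \Lambda(\theta)\big).$$
This is why the whole proof reduces to the exponential moment asymptotics in (3.1).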
Let $f$ be a symmetric probability density function in the Schwartz space $\mathcal{S}(\mathbb{R}^2)$ of $C^\infty$ rapidly decreasing functions. Let $\epsilon > 0$ be a small number and write ... As in the proof of [10, Theorem 1], (3.1) will follow from (3.5) and the next Theorem.
With the notation of (2.16) we have ... since $\sup_x p_j(x) = p_j(0)$ for a symmetric random walk. Then as in the proof of (2.4) we have that ... Hence ... where the last line follows from (3.11). Write ... (3.14) Therefore, by (3.12), ... The proof of Theorem 3.1 is completed in the next two lemmas.
Lemma 3.2 For any $\theta > 0$, $\limsup$ ... where ... Applying Jensen's inequality on the right hand side of (3.5), ... Combining the last two displays with (3.5) we have that $\limsup$ ... where the third line follows from the substitution $g(x) = |\det(A)|\, f(Ax)$ with a $2\times2$ matrix $A$ satisfying ... and the last line from [8, Lemma A.2]; here $I_2$ is the $2\times2$ identity matrix.
By the inequality $ab \le a^2 + b^2$ we have that ... and taking $c^{-2} \uparrow H^{-2}$ we see that for any $\theta > 0$, $\limsup$ ... We have ... Replacing $\theta$ by $\theta/\sqrt{l}$, $n$ by $n/l$, and $b_n$ by $b$... Thus (3.17) follows by the same argument we used to prove (3.16).

Lemma 3.3 For any $\theta > 0$ and any $1 \le j < k \le l$, ...

Proof. Define $a_j, b_j$ so that $D_j = (a_j, b_j]$ $(1 \le j \le l)$. We now fix $1 \le j < k \le l$ and estimate ... Without loss of generality we may assume that $v =$ ... Note that $I_n = I_n(0)$. By (3.9) we have that ... Similarly, we have ... Then for any $r \ge 1$, using Fourier inversion, ... Thus from (3.40) we find that ... Then if $r \ge 1$, using the fact that $f(\lambda)$ is supported in $(-\pi,\pi)^2$, we obtain (3.39).
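The Fourier inversion referred to here rests on the elementary identity for points of $\mathbb{Z}^2$ (our restatement): for $x, y \in \mathbb{Z}^2$,
$$\delta(x,y) = \frac{1}{(2\pi)^2} \int_{[-\pi,\pi]^2} e^{i\,u\cdot(x-y)}\, du,$$
so each Kronecker delta $\delta(S_j, S_k)$ can be written as an integral over the torus, turning intersection counts into integrals of products of characteristic functions.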

Intersections of Random Walks
Let $S^1(n), S^2(n)$ be independent copies of the symmetric random walk $S(n)$ in $\mathbb{Z}^2$ with a finite second moment.
Let $f$ be a positive symmetric function in the Schwartz space $\mathcal{S}(\mathbb{R}^2)$ with $\int f\,dx = 1$ and $f$ supported in $(-\pi,\pi)^2$. Given $\epsilon > 0$, and with the notation of the last section, let us define the link ...

Proof of Theorem 4.1. We have ... where from now on we work modulo $\pm\pi$. Then by scaling we have ... As in (4.3)-(4.4), using Lemma 3.4, the fact that $\epsilon(b_n^{-1}n)^{1/2} \ge 1$ for $\epsilon > 0$ fixed and large enough $n$, and abbreviating ... Using our assumption that $h$ is supported in $[-\pi,\pi]^2$, and that $\epsilon^{-1} \le (b_n^{-1}n)^{1/2}$ for $\epsilon > 0$ fixed and large enough $n$, we have that ... To prove (4.2) it suffices to show that for each $\lambda > 0$ we have ... for some $C < \infty$ and all $\epsilon > 0$ sufficiently small. We begin by expanding ... By (4.4), (4.6) and the symmetry of $S^1$ we have ... By the Cauchy-Schwarz inequality ... For any permutation $\pi$ of $\{1,\ldots,m\}$ let ... where the first sum is over all permutations $\pi$ of $\{1,\ldots,m\}$. Set
$$\phi(u) = E\, e^{iu\cdot S(1)}. \quad (4.16)$$
It follows from our assumptions that $\phi(u) \in C^2$,
$$\frac{\partial}{\partial u_i}\phi(0) = 0 \quad\text{and}\quad \frac{\partial^2}{\partial u_i\,\partial u_j}\phi(0) = -E\,S^{(i)}(1)S^{(j)}(1),$$
where $S(1) = (S^{(1)}(1), S^{(2)}(1))$, so that for some $\delta > 0$ ... is independent of the permutation $\pi$. Hence, writing ... For each $A \subseteq \{2,3,\ldots,m\}$ we use $D_m(A)$ to denote the subset of ... For any $u \in \mathbb{R}^d$ let $\bar u$ denote the representative of $u$ mod $(b_n^{-1}n)^{1/2}\,2\pi\mathbb{Z}^2$ of smallest absolute value. We note that ... and ... Using the periodicity of $\phi$ we see that (4.19) implies that for all $u$ ... we bound the integral in (4.25) by ... and when we expand the right hand side as a sum of monomials we can be sure that no factor $|\bar u_k|^{1/2}$ appears more than twice. Thus we see that we can bound (4.29) by ... where the max runs over the set of functions $h(j)$ taking values 0, 1 or 2 and such that $\sum_j h(j) = m$. Changing variables, we thus need to bound ... where, see (4.23), ... We now bound (4.32) by bounding successively the integration with respect to $u_1, \ldots, u_m$. Consider first the $du_1$ integral, fixing $u_2, \ldots, u_m$. By (4.33) the $du_1$ integral is over the rectangle $u_2 + C_n$, hence the factors involving $u_1$ can be bounded using (4.34). Proceeding inductively, using (4.33) when $n_j - n_{j-1} > 0$ and (4.35) when $n_j = n_{j-1}$, leads to the following bound of (4.32), and hence of (4.29) on $D_m(A)$: ...
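The way (4.16) and the $C^2$ property are used can be summarized as follows (a sketch under the standing assumptions; $c$ and $\delta$ are unnamed constants of our choosing): since the walk is symmetric with covariance matrix $\Gamma$,
$$\phi(u) = 1 - \tfrac{1}{2}\, u\cdot\Gamma u + o(|u|^2) \qquad (u \to 0),$$
and since $\Gamma$ is nondegenerate there are $c, \delta > 0$ with
$$|\phi(u)| \le e^{-c|u|^2}, \qquad |u| \le \delta.$$
Combined with the periodicity of $\phi$, this is the kind of Gaussian decay exploited on each elementary rectangle.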
Here $A^c$ means the complement of $A$ in $\{1,\ldots,m\}$, so that $A^c$ always contains 1.
Note that ... Using this together with (4.25), but with $m$ replaced by $2m$, and the fact that $\big((2m)!\big)^{1/2}/m! \le 2^m$, we see that (4.7) is bounded by ... We have $\sum_{A\subseteq\{1,2,3,\ldots,2m\}} 1 = 2^{2m}$, and the number of ways to choose the $\{h(j)\}$ is bounded by the number of ways of dividing $2m$ objects into 3 groups, which is $3^{2m}$. Then noting that $\sum_{j\in A^c}(1/2 - h(j)/8)$ is an integer multiple of $1/8$ which is always less than $m$, we can bound the last line by ... for $\epsilon > 0$ sufficiently small. (4.7) then follows from the fact that for any $a > 0$ ...

Remark 4.3 Without the restriction that $b_n = o(n)$, Theorem 1.1 is not true. To see this, let $N$ be an arbitrarily large integer, let $\varepsilon = 2/N^2$, and let $X_i$ be an i.i.d. sequence of random vectors in $\mathbb{Z}^2$ that take the values $(N,0)$, $(-N,0)$, $(0,N)$, and $(0,-N)$ with probability $\varepsilon/4$ each, and $P(X_1 = (0,0)) = 1 - \varepsilon$. The covariance matrix of the $X_i$ will be the identity. Let $b_n = (1-\varepsilon)n$. Then the event that $S_i = S_0$ for all $i \le n$ will have probability at least $(1-\varepsilon)^n$, and on this event ... which would contradict (1.4).
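To spell out the contradiction (a sketch; we use that $E\,B_n = O(n\log n)$, which follows from the asymptotics of Section 2): on the event $\{S_i = S_0 \text{ for all } i \le n\}$ every pair $1 \le j < k \le n$ is a self-intersection, so $B_n = \binom{n}{2}$, which dwarfs $E\,B_n$. Hence, with $b_n = (1-\varepsilon)n$,
$$\frac{1}{b_n}\log P\Big(B_n - E\,B_n \ge \tfrac14 n^2\Big) \ge \frac{n\log(1-\varepsilon)}{(1-\varepsilon)n} = \frac{\log(1-\varepsilon)}{1-\varepsilon} \longrightarrow 0 \qquad (N \to \infty),$$
whereas the right hand side of (1.4) depends only on $\lambda$, $\Gamma$ and $\kappa(2,2)$, and $\Gamma$ is the identity for every $N$.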
The same example shows that the critical constant in the polymer model is different from the one in [2]. Then ... This shows that the critical constant is no more than $2\log\frac{1}{1-\varepsilon}$.
Theorem 1.2: Upper bound for E B_n − B_n

In this section we prove the upper bound for (5.1). Let $t > 0$ and write ... With $K > 1$, the error term can be taken to be independent of $t$ and $\{b_n\}$. Thus, by (2.39), there is a constant $\log a > 0$ independent of $t$ and $\{b_n\}$ such that ... It is here that we use the condition that $E|S_1|^{2+\delta} < \infty$ for some $\delta > 0$, needed for (2.39). By first using Chebyshev's inequality, then using (5.2), (5.4) and the independence of the $B((n_{i-1}, n_i]^2_<)$, for any $\phi > 0$, ... where $\gamma_t$ is the renormalized self-intersection local time of planar Brownian motion $\{W_s\}$ up to time $t$. By Lemma 2.5 and the dominated convergence theorem, ... where we used the scaling ... By [2, p. 3233], the limit ... Taking the minimizer $\phi = a^{-1}e^{-(1+C)}$ we have $\limsup$ ... This proves the upper bound for (5.1).
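The scaling alluded to is the Brownian one (our restatement; see [2] for the Brownian-motion background): for $c > 0$ the renormalized self-intersection local time of planar Brownian motion satisfies
$$\gamma_{ct} \stackrel{d}{=} c\,\gamma_t,$$
since $W_{ct} \stackrel{d}{=} \sqrt{c}\,W_t$ and the two-dimensional delta function scales as $\delta(\sqrt{c}\,x) = c^{-1}\delta(x)$, so the double time integral contributes a factor $c^2$ and the delta a factor $c^{-1}$.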

Theorem 1.2: Lower bound for E B_n − B_n
In this section we complete the proof of Theorem 1.2 by proving the lower bound for (5.1).
Let $B(x,r)$ be the ball of radius $r$ centered at $x$. Let $\mathcal{F}_k = \sigma\{X_i : i \le k\}$. Let us assume for simplicity that the covariance matrix of the random walk is the identity; routine modifications are all that are needed for the general case. We write $\Theta$ for $(2\pi)^{-1}\det(\Gamma)^{-1/2} = (2\pi)^{-1}$. We write $D(x,r)$ for the disc of radius $r$ in $\mathbb{Z}^2$ centered at $x$.
Let $K = [b_n]$ and $L = n/K$. Let us divide $\{1, 2, \ldots, n\}$ into $K$ disjoint contiguous blocks, each of length strictly between $L/2$ and $3L/2$. Denote the blocks $J_1, \ldots, J_K$. Let ... Define the following sets: ... where $\kappa_1, \kappa_2, \kappa_3$ are constants that will be chosen later and do not depend on ... We want to show ... Once we have (6.4), then (6.5) holds, and by induction ... On the set $E$, we see that $S(B_i) \cap S(B_j) = \emptyset$ if $|i-j| > 1$. So we can write ... We then obtain ... (6.12), which would complete the proof of the lower bound for (5.1), hence of Theorem 1.2.
So we need to prove (6.4). By scaling and the support theorem for Brownian motion (see [1, Theorem I.6.6]), if $W_t$ is a planar Brownian motion and $|x| \le \sqrt{L}/16$, then ... (6.13) where $c_5$ does not depend on $L$. Using Donsker's invariance principle for random walks with finite second moments together with the Markov property, ... if we choose $\kappa_1$ large enough. Again using the Markov property, ... Now let us look at $F_{i,4}$. By [17, p. 75], $P(S_j = y) \le c_7/j$ with $c_7$ independent of $y \in \mathbb{Z}^2$, so that ... By [1, Theorem I.6.11], we have ... $\le c_{14}$ (6.20). When $j \le 2^{-2k}L$, the fact that the random walk has finite second moments implies that the probability that $|S_j|$ exceeds $2^{-k+1}\sqrt{L}$ is bounded by $c_{23}\,j/(2^{-2k+2}L)$. When $j > 2^{-2k}L$, we use [17, p. 75] and obtain ... So if we take $\kappa_3$ large enough, we obtain (6.26).
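The bound for $j \le 2^{-2k}L$ is just Chebyshev's inequality applied to the second moment (a sketch; recall we have reduced to the case where the covariance matrix is the identity, so $E|S_j|^2 = 2j$):
$$P\big(|S_j| \ge 2^{-k+1}\sqrt{L}\big) \le \frac{E|S_j|^2}{\big(2^{-k+1}\sqrt{L}\big)^2} = \frac{2j}{2^{-2k+2}L}.$$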
This completes the proof of (6.4), hence of Theorem 1.2.

Laws of the iterated logarithm

First, let $S_j, S'_j$ be two independent copies of our random walk. Let ... and note that ... Using the independence of $S$ and $S'$, ... By Cauchy-Schwarz, this is less than ... We can rewrite ... Therefore, ... By Lemma 2.2 this can be bounded independently of $k$ and $n$ if $a$ is taken small, and our result follows.
We are now ready to prove the upper bound for the LIL for $B_n - E\,B_n$. Write $\Xi$ for $\sqrt{\det\Gamma}\,\kappa(2,2)^{-4}$. Recall that for any integrable random variable $Z$ we let $\bar Z$ denote $Z - E\,Z$. Let $\varepsilon > 0$ and let $q > 1$ be chosen later. Our first goal is to get an upper bound on ... Let $m_0 = 2^N$, where $N$ will be chosen later to depend only on $\varepsilon$. Let $A_0$ be the set of integers of the form $n - km_0$ that are contained in $\{n/4, \ldots, n\}$. For each $i$ let $A_i$ be the set of integers of the form $n - km_0 2^{-i}$ that are contained in $\{n/4, \ldots, n\}$. Given an integer $k$, let $k_j$ be the largest element of $A_j$ that is less than or equal to $k$. For any $k \in \{n/2, \ldots, n\}$, we can write ... If $\max_{n/2\le k\le n} \bar B_k \ge (1+\varepsilon)\,\Xi^{-1} n\log\log n$, then either (a) $\bar B_{k_0} \ge (1 + \frac{\varepsilon}{2})\,\Xi^{-1} n\log\log n$ for some $k_0 \in A_0$; or else (b) for some $i \ge 1$ and some pair of consecutive elements $k_i, k'_i \in A_i$, we have ... We bound the first term on the right by Theorem 1.1, and get the bound ... if $j$ and $k$ are consecutive elements of $A_i$. Note that $B([1,j] \times [j+1,k])$ is equal in law to $I_{j-1,k-j}$. Using Lemma 7.1, we bound the second term on the right hand side of (7.14) by ... The number of pairs of consecutive elements of $A_i$ is less than $2^{i+1}(n/m_0)$. So if we add (7.15) and (7.16) and multiply by the number of pairs, the probability of (b) occurring for a fixed $i$ is bounded by ... We now choose $m_0$ to be the largest power of 2 so that $c_6(n/m_0)^{1/2} > 2$; recall $n$ is big.
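The two terms referred to around (7.14) appear to arise from the block decomposition of Section 2 (a sketch in the notation of (2.16), with $j < k$ consecutive elements of $A_i$):
$$\bar B_k - \bar B_j = \overline{B\big((j,k]^2_<\big)} + \overline{B\big([1,j]\times(j,k]\big)},$$
the first term being a self-intersection count of a fresh block (handled by Theorem 1.1) and the second an intersection count of two independent stretches, equal in law to a centered $I_{j-1,k-j}$ (handled by Lemma 7.1).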
Let us use this value of $m_0$ and combine (7.12) and (7.18). Let $n_\ell = q^\ell$ and $C_\ell = \{\max \ldots$ ... By our estimates, $P(C_\ell)$ is summable, so for $\ell$ large, by Borel-Cantelli, we have $\max$ ...

Let $\Delta = 2\pi\sqrt{\det\Gamma}$. Let us write $J_n = E\,B_n - B_n$. First we do the upper bound. Let $m_0$, $A_i$, and $k_j$ be as in the previous subsection. We write, for $n/2 \le k \le n$, ... If $\max_{n/2\le k\le n} J_k \ge (1+\varepsilon)\Delta^{-1} n\log\log\log n$, then either (a) $J_{k_0} \ge (1+\frac{\varepsilon}{2})\Delta^{-1} n\log\log\log n$ for some $k_0 \in A_0$, or else (b) for some $i \ge 1$ and $k_i, k'_i$ consecutive elements of $A_i$ we have ... To estimate the probability in (b), suppose $j$ and $k$ are consecutive elements of $A_i$. There are at most $2^{i+1}(n/m_0)$ such pairs. We have ... as in the previous subsection. Provided $n$ is large enough, $c_2\sqrt{j}\sqrt{k-j} = c_2\sqrt{j}\sqrt{2^{-i}m_0}$ will be less than $\frac{\varepsilon}{80 i^2}\, n\log\log\log n$ for all $i$. So in order for $J_k - J_j$ to be larger than $\varepsilon$ ... We choose $m_0$ to be the largest possible power of 2 such that $c_4(n/m_0) > 2$.
Combining (7.28) and (7.30), we see that if we set $q > 1$ close to 1, $n_\ell = [q^\ell]$, and $E_\ell = \{\ldots \ge \cdots\, n_\ell\log\log\log n_\ell\}$ (7.31), then $\sum_\ell P(E_\ell)$ is finite. So by Borel-Cantelli, the event $E_\ell$ happens for a last time, almost surely. Exactly as in the previous subsection, taking $q$ close enough to 1 and using the fact that $\varepsilon$ is arbitrary leads to the upper bound.
The proof of the lower bound is fairly similar to the previous subsection. ... almost surely. This is true for every $\varepsilon$, so the lim sup is 0. Combining this with (7.34) and substituting into (7.33) completes the proof.

... and let us call any rectangle of the form $2\pi k + C_n$, where $k \in \mathbb{Z}^2$, an elementary rectangle. Note that any rectangle of the form $v + C_n$, where $v \in \mathbb{R}^2$, can be covered by 4 elementary rectangles. Hence for any $v \in \mathbb{R}^2$ and $1 \le s \le n$, $\int_{v+C_n} e^{\cdots}$ ...

Remark 4.2 It follows from the proof that in fact for $\rho > 0$ sufficiently small, for any $\lambda > 0$, ... (6.8)

Using (2.38) again,
$$E\,B_n - B_n \ge \Theta n\log n - c_3 n - \Theta n\log(n/b_n) = \Theta n\log b_n - c_3 n \quad (6.9)$$
on the event $E$. We conclude that
$$P\big(E\,B_n - B_n \ge \Theta n\log b_n - c_3 n\big) \ge e^{-c_2 b_n}. \quad (6.10)$$
We apply (6.10) with $b_n$ replaced by $b'_n = c_4 b_n$, where $\Theta\log c_4 = c_3$. Then
$$\Theta n\log b'_n - c_3 n = \Theta n\log b_n + \Theta n\log c_4 - c_3 n = \Theta n\log b_n. \quad (6.11)$$

... $(S_j)$ for $m \le [2L]+1$, and let $C_m = C_{[2L]+1}$ for $m > L$. By the Markov property and independence, ...

Proof of the LIL for B_n − E B_n

For each $k_0$, using Theorem 1.1 and the fact that $k_0 \ge n/4$, the probability in (a) is bounded by
$$\exp\big(-(1+\tfrac{\varepsilon}{4})\log\log k_0\big) \le c_1(\log n)^{-(1+\varepsilon/4)}. \quad (7.11)$$
There are at most $n/m_0$ elements of $A_0$, so the probability in (a) is bounded by
$$c_1\,\frac{n}{m_0}\,(\log n)^{-(1+\varepsilon/4)}. \quad (7.12)$$
Now let us examine the probability in (b). Fix $i$ for the moment. Any two consecutive elements of $A_i$ are $2^{-i}m_0$ apart. Recalling the notation (2.16) we can write ...

If we now sum over $i \ge 1$, we bound the probability in (b) by
$$c_5\,\frac{n}{m_0}\,\exp\big(-c_6(n/m_0)^{1/2}\log\log n\big). \quad (7.18)$$

LIL for E B_n − B_n

... $n\log\log\log n$. (7.27)

There are at most $n/m_0$ elements of $A_0$. Using Theorem 1.2, the probability of (a) is bounded by
$$c_1\,\frac{n}{m_0}\,e^{-(1+\varepsilon/4)\log\log n}. \quad (7.28)$$