Rate of growth of a transient cookie random walk

We consider a one-dimensional transient cookie random walk. It is known from a previous paper [3] that a cookie random walk (X_n) has positive or zero speed according to whether some positive parameter α is larger than 1 or at most 1. In this article, we give the exact rate of growth of (X_n) in the zero speed regime, namely: for 0 < α < 1, X_n / n^{(α+1)/2} converges in law to a Mittag-Leffler distribution, whereas for α = 1, X_n (log n)/n converges in probability to some positive constant.


Introduction
Let us pick a strictly positive integer M. An M-cookie random walk (also called multi-excited random walk) is a walk on Z which has a bias to the right upon its first M visits to a given site and evolves like a symmetric random walk afterwards. This model was introduced by Zerner [20] as a generalization, in the one-dimensional setting, of the model of the excited random walk studied by Benjamini and Wilson [4]. In this paper, we consider the case where the initial cookie environment is spatially homogeneous. Formally, let (Ω, P) be some probability space and choose a vector p = (p_1, ..., p_M) such that p_i ∈ [1/2, 1) for all i = 1, ..., M. We say that p_i represents the strength of the i-th cookie at a given site. Then, an (M, p)-cookie random walk (X_n, n ∈ N) is a nearest-neighbour random walk, starting from 0, with transition probabilities: P{X_{n+1} = X_n + 1 | X_0, ..., X_n} = p_j if j = #{0 ≤ i ≤ n, X_i = X_n} ≤ M, and 1/2 otherwise.
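To make the transition rule concrete, here is a minimal simulation sketch in Python; the function name, its parameters and the chosen cookie strengths are ours, purely for illustration:

```python
import random

def cookie_walk(M, p, n_steps, seed=0):
    """Simulate an (M, p)-cookie random walk started at 0.

    On the j-th visit to a site (j <= M) the walk steps right with
    probability p[j-1]; after the M-th visit it is symmetric.
    `p` is a list of M values in [1/2, 1).
    """
    rng = random.Random(seed)
    visits = {}            # visits[x] = number of visits to site x so far
    x = 0
    path = [0]
    for _ in range(n_steps):
        j = visits.get(x, 0) + 1      # the current step is the j-th visit to x
        visits[x] = j
        p_right = p[j - 1] if j <= M else 0.5
        x += 1 if rng.random() < p_right else -1
        path.append(x)
    return path

# Three strong cookies per site: the walk should drift to the right.
path = cookie_walk(M=3, p=[0.9, 0.9, 0.9], n_steps=10_000, seed=1)
```

Counting the current visit in `j` matches the condition j = #{0 ≤ i ≤ n, X_i = X_n} above, which includes time n itself.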
In particular, the future position X_{n+1} of the walk after time n depends on the whole trajectory X_0, X_1, ..., X_n. Therefore, X is not, except in degenerate cases, a Markov process. The cookie random walk is a rich stochastic model. Depending on the cookie environment (M, p), the process can be either transient or recurrent. Precisely, Zerner [20] (who considered an even more general setting) proved, in our case, that if we define
Thus, a 1-cookie random walk is always recurrent but, for two or more cookies, the walk can be either transient or recurrent. Zerner also proved that the limiting velocity of the walk is well defined. That is, there exists a deterministic constant v = v(M, p) ≥ 0 such that lim_{n→∞} X_n / n = v almost surely.
However, we may have v = 0. Indeed, when there are at most two cookies per site, Zerner proved that v is always zero. On the other hand, Mountford et al. [11] showed that it is possible to have v > 0 if the number of cookies is large enough. In a previous paper [3], the authors showed that, in fact, the strict positivity of the speed depends on the position of α with respect to 1:
• if α ≤ 1, then v = 0,
• if α > 1, then v > 0.
In particular, a positive speed may be obtained with just three cookies per site. The aim of this paper is to find the exact rate of growth of a transient cookie random walk in the zero speed regime. In this perspective, numerical simulations of Antal and Redner [2] indicate that, for a transient 2-cookie random walk, the expectation of X_n is of order n^ν, for some constant ν ∈ (1/2, 1) depending on the strength of the cookies. We shall prove that, more generally, ν = (α+1)/2. These results also hold with sup_{i≤n} X_i and inf_{i≥n} X_i in place of X_n.
In fact, we shall prove this theorem by proving that the hitting times of the walk T_n = inf{k ≥ 0, X_k = n} satisfy

Theorem 1.1 bears many likenesses to the famous result of Kesten et al. [9] concerning the rate of transience of a one-dimensional random walk in random environment. Indeed, following the method initiated in [3], we can reduce the study of the walk to that of an auxiliary Markov process Z. In our setting, Z is a branching process with migration. By comparison, Kesten et al. obtained the rates of transience of the random walk in random environment via the study of an associated branching process in random environment. However, the process Z considered here and the process introduced in [9] have quite dissimilar behaviours, and the methods used for their study are fairly different.
The remainder of this paper is organized as follows. In the next section, we recall the construction of the associated process Z described in [3] as well as some important results concerning this process. In Section 3, we study the tail distribution of the return time to zero of the process Z. Section 4 is devoted to estimating the tail distribution of the total progeny of the branching process over an excursion away from 0. The proof of this result is based on technical estimates whose proofs are given in Section 5. Once all these results are obtained, the proof of the main theorem is quite straightforward and is finally given in the last section.

The process Z
In the rest of this paper, X will denote an (M, p)-cookie random walk. We will also always assume that we are in the transient regime and that the speed of the walk is zero, that is, 0 < α ≤ 1. Recall the definition of the hitting times of the walk: T_n = inf{k ≥ 0, X_k = n}. We now introduce a Markov process Z closely connected with these hitting times. Indeed, we can summarize Proposition 2.2 and equation (2.3) of [3] as follows: Proposition 2.1. There exists a Markov process (Z_n, n ∈ N) starting from 0 and a sequence of random variables (K_n, n ≥ 0) converging in law towards a finite random variable K such that, for each n Therefore, a careful study of Z will enable us to obtain precise estimates on the distribution of the hitting times. Let us now recall the construction of the process Z described in [3].
For each i = 1, 2, ..., let B_i be a Bernoulli random variable with distribution We define the random variables A_0, A_1, ..., A_{M−1} by Therefore, A_j represents the number of "failures" before having j + 1 "successes" along the sequence of coin tosses (B_i). It is to be noted that the random variables A_j admit some exponential moments: According to Lemma 3.3 of [3], we also have Let (ξ_i, i ∈ N*) be a sequence of i.i.d. geometric random variables with parameter 1/2 (i.e. with mean 1), independent of the A_j. The process Z mentioned above is a Markov process with transition probabilities given by As usual, we will use the notation P_x to describe the law of the process starting from x ∈ N and E_x the associated expectation, with the conventions P = P_0 and E = E_0. Let us notice that Z may be interpreted as a branching process with random migration, that is, a branching process which allows both immigration and emigration components:
• If Z_n = x ≥ M, then M − 1 particles emigrate from the system, the remaining particles reproduce according to a geometric law with parameter 1/2, and there is also an immigration of A_{M−1} new particles.
• If Z_n = i ∈ {0, ..., M − 1}, then Z_{n+1} has the same law as A_i, i.e. all i particles emigrate from the system and A_i new particles immigrate.
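These two transition rules can be sketched in code as follows. This is our own illustrative simulation, not code from the paper: `sample_A` draws A_j directly from its coin-tossing definition, the offspring law is geometric with mean 1, and we assume the branching rule applies for Z_n ≥ M (the two rules agree at Z_n = M − 1 either way):

```python
import random

def geom_half(rng):
    """Geometric variable on {0, 1, 2, ...} with parameter 1/2 (mean 1)."""
    k = 0
    while rng.random() < 0.5:
        k += 1
    return k

def sample_A(j, p, rng):
    """Number of failures before the (j+1)-th success along tosses B_1, B_2, ...,
    where toss i succeeds with probability p[i-1] for i <= M and 1/2 afterwards."""
    M = len(p)
    successes, failures, i = 0, 0, 0
    while successes < j + 1:
        i += 1
        q = p[i - 1] if i <= M else 0.5
        if rng.random() < q:
            successes += 1
        else:
            failures += 1
    return failures

def step_Z(z, p, rng):
    """One transition of the branching process Z with migration."""
    M = len(p)
    if z <= M - 1:
        # all z particles emigrate, A_z new particles immigrate
        return sample_A(z, p, rng)
    # M - 1 particles emigrate, the survivors reproduce geometrically,
    # and A_{M-1} new particles immigrate
    offspring = sum(geom_half(rng) for _ in range(z - (M - 1)))
    return offspring + sample_A(M - 1, p, rng)

rng = random.Random(2)
p = [0.9, 0.9, 0.9]
traj = [0]
for _ in range(5_000):
    traj.append(step_Z(traj[-1], p, rng))
```

Since the emigration component dominates, a trajectory started at 0 keeps returning to 0, in line with the positive recurrence of Z discussed below.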
We conclude this section by collecting some important results concerning this branching process. We start with a monotonicity result.
Lemma 2.2 (Stochastic monotonicity w.r.t. the environment and the starting point). Let p̂ = (p̂_1, ..., p̂_M) denote another cookie environment and let Ẑ denote the associated branching process. Assume further that p̂_i ≤ p_i for all i. Let also 0 ≤ x ≤ x̂. Then, the process Z starting from x (i.e. under P_x) is stochastically dominated by the process Ẑ starting from x̂ (i.e. under P̂_x̂).
Proof. We first prove the monotonicity of Z with respect to its starting point. To this end, we simply notice, from the definition of the random variables Since all the quantities in the definition of Z are positive, it is now immediate that, given x ≤ y, the random variable Z 1 under P x is stochastically dominated by Z 1 under P y . The stochastic domination for the processes follows by induction.
Let Â_i denote the random variables associated with the cookie environment p̂. It is clear from the definition (2.1) that A_i ≤ Â_i. We deduce that Z_1 under P_x is stochastically dominated by Ẑ_1 under P̂_x and therefore also by Ẑ_1 under P̂_x̂ for any x̂ ≥ x. As before, we conclude the proof by induction.
Let us recall that we made the assumption that p i < 1 for all i. This implies Therefore, Z is an irreducible and aperiodic Markov chain. Moreover, for any k ≥ M − 1, Since we assume that α > 0, a simple martingale argument now shows that Z is recurrent. In fact, more is known: according to section 2 of [3], the process Z is positive recurrent and therefore converges in law, independently of its starting point, towards a non-degenerate random variable Z ∞ whose law is the unique invariant probability for Z.
The study of Z_∞ was undertaken in [3]. In particular, Proposition 3.6 of [3] gives the asymptotic behaviour of the generating function G(s) := E[s^{Z_∞}] as s increases to 1: where C = C(p) > 0 is a constant, the notation f ∼ g meaning f = g(1 + o(1)).
We may use this estimate, via a Tauberian theorem, to obtain the asymptotics of the tail distribution of Z ∞ as stated in Corollary 3.8 of [3]. However, there is a mistake in the statement of this corollary because (2.7) does not ensure, when α = 1, the regular variation of P{Z ∞ > x}. The correct statement given below follows directly from (2.7) using Corollary 8.1.7 of [5].
Proposition 2.3 (Rectification of Corollary 3.8 of [3]). There exists c = c(p) > 0 such that,

The result given above when α = 1 is weaker than that for the case α < 1. Still, in view of Lemma 2.2, it is straightforward that Z_∞ is also stochastically monotone in p. Therefore, the estimate of Proposition 2.3 when α < 1 gives an upper bound for the decay of the tail distribution of Z_∞ in the case α = 1. Indeed, given an environment p with α(p) = 1, for any β < 1, we can construct an environment p̂ with α(p̂) = β such that p̂_i ≤ p_i for all i. Therefore, when α = 1, we deduce

Remark 2.4. In fact, when α = 1, the stronger statement P{Z_∞ > x} ∼ c/x holds. According to the remark following Corollary 8.1.7 of [5], it suffices to show that

The process Z is a positive recurrent Markov chain, so that E[σ] < ∞. Moreover, using the well known expression of the invariant measure (c.f. Theorem 1.7.5 of [12]), we have, for any non-negative function f,

In particular, we get the following corollary which will be useful:

Proof. In view of (2.9), we just need to show that This result, when α < 1, is a direct consequence of Proposition 2.3. In the case α = 1, it follows
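For reference, the "well known expression of the invariant measure" invoked in this section amounts to the standard occupation-time identity for positive recurrent chains: if π denotes the invariant probability of Z and σ the first return time to 0, then for any non-negative function f,

```latex
\mathbb{E}\Bigl[\sum_{n=0}^{\sigma-1} f(Z_n)\Bigr] \;=\; \mathbb{E}[\sigma]\,\mathbb{E}[f(Z_\infty)],
```

where the left-hand side is computed under P_0 and Z_∞ is distributed according to π.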

The return time to zero
We have already stated that Z is a positive recurrent Markov chain, thus the return time σ to zero has finite expectation. We now strengthen this result by giving the asymptotics of the tail distribution of σ in the case α < 1. The aim of this section is to show: Proposition 3.1. Assume that α ∈ (0, 1). Then, for any starting point Notice that we do not allow the cookie environment to be such that α = 1 nor the starting point x to be 0. In fact, these assumptions could be dropped, but they would unnecessarily complicate the proof of the proposition, which is technical enough already. Nevertheless, Proposition 3.1 still yields the following corollary, valid for all α ∈ (0, 1] with initial starting point 0: Proof. Lemma 2.2 implies that σ, the first return time to 0 for Z, is also monotonic with respect to the cookie environment and the initial starting point. In particular, when α < 1, we get and therefore E[σ^β] < ∞ for all 0 ≤ β < α + 1. The case α = 1 is deduced from the case α < 1, as for (2.8), by approximation, using the monotonicity property with respect to the environment.
The method used in the proof of the proposition is classical and based on the study of probability generating functions. Proposition 3.1 was first proved by Vatutin [13], who considered a branching process with exactly one emigrant at each generation. This result was later generalized to branching processes with more than one emigrant by Vinokurov [15] and also by Kaverin [8]. However, in our setting, we deal with a branching process with migration, that is, where both immigration and emigration are allowed. More recently, Yanev and Yanev proved similar results for such a class of processes, under the assumption that either there is at most one emigrant per generation [18] or immigration dominates emigration [17] (in our setting, this would correspond to α < 0).
For the process Z, the emigration component dominates the immigration component and this leads to some additional technical difficulties. Although there is a vast literature on the subject (see the authoritative survey of Vatutin and Zubkov [14] for additional references), we did not find a proof of Proposition 3.1 in our setting. We shall therefore provide here a complete argument but we invite the reader to look in the references mentioned above for additional details.
Recall the random variables A_i and ξ_i defined in Section 2. We introduce, for s ∈ [0, 1], Let F_j(s) := F ∘ ... ∘ F(s) stand for the j-fold composition of F (with the convention F_0 = Id). We also define by induction We use the abbreviated notations F_j := F_j(0) and γ_n := γ_n(0). We start with a simple lemma.
Proof. Assertion (a) is straightforward. According to (2.2), the functions H k are analytic on (0, 2) and (b) follows from a Taylor expansion near 1. Similarly, (c) follows from a Taylor expansion near 1 of the function δ combined with (2.3). Finally, γ n can be expressed in the form which yields (d).
Let Z̃ stand for the process Z absorbed at 0: We also define, for x ≥ 1 and s ∈ [0, 1], and for 1 ≤ k ≤ M − 2, Proof. The value g_{x,k}(1) represents the expected number of visits to site k before hitting 0 for the process Z starting from x. Thus, an easy application of the Markov property yields This proves (a). We now introduce the return times σ_k := inf{n ≥ 1, Z_n = k}. In view of the Markov property, we have Since Z is a positive recurrent Markov process, Using the Markov property, as above, but considering now the partial sums, we get, for any N ≥ 1, Since and we conclude the proof by letting N tend to +∞.
Lemma 3.5. The function J_x defined by (3.1) may be expressed in the form Proof. From the definition (2.4) of the branching process Z, we get, for n ≥ 0, Since E[s^ξ] = F(s) and G_{n,x}(0) = P_x{Z̃_n = 0}, using the notation introduced at the beginning of the section, the last equality may be rewritten Iterating this equation, then setting s = 0 and using the relation G_{0,x}(F_{n+1}) = (F_{n+1})^x, we deduce that, for any n ≥ 0, Notice also that P_x{Z̃_n ≠ 0} = 1 − G_{n,x}(0). In view of (3.2) and making use of the relation Therefore, summing over n, for s < 1, We conclude the proof by noticing that Proof of Proposition 3.1. Recall that the parameter α is such that 0 < α < 1. Fix x ≥ 1 and 1 ≤ k ≤ M − 2. In view of (d) of Lemma 3.3, we have Using the same arguments, we also deduce that so that we may write The first term on the r.h.s. of (3.4) converges towards −g′_k(1)H′_k(1)/α as s tends to 1 (this quantity is finite thanks to Lemma 3.4). Making use of the relation γ_{n+1} = δ(F_n)γ_n, we can also rewrite B_k in the form With the help of Lemma 3.3, it is easily checked that Since α < 1, we conclude that We can deal with J_x in exactly the same way. We now find J_x(1) = x/α and, setting we also find that, as s → 1−, Putting together (3.6) and (3.8) and using Lemma 3.5, we obtain Combining (3.9) and (3.11), we get This shows in particular that C_x ≥ 0. Furthermore, Karamata's Tauberian theorem for power series (c.f. Making use of two successive monotone density theorems (c.f. for instance Theorem 1.7.2 of [5]), we conclude that It remains to prove that C_x ≠ 0. To this end, we first notice that, for x, y ≥ 0, we have Thus, C_y ≥ P_y{Z_1 = x} C_x, so it suffices to show that C_x is not zero for some x. In view of (a) of Lemma 3.4, the quantity is bounded in x. Looking at the expression of C_x given in (3.10), it just remains to prove that B_x(1) can be arbitrarily large.
In view of (3.7), we can write But for each fixed n, the function decreases to 0 as x tends to infinity, so the monotone convergence theorem yields Thus, B_x(1) tends to infinity as x goes to infinity, which completes the proof of the proposition.
Remark 3.6. The study of the tail distribution of the return time is the key to obtaining conditional limit theorems for the branching process, see for instance [8; 13; 15; 18]. Indeed, following Vatutin's scheme [13] and using Proposition 3.1, it can now be proved that Z n /n conditioned on not hitting 0 before time n converges in law towards an exponential distribution. Precisely, when α < 1, for each x = 1, 2, . . . and r ∈ R + , It is to be noted that this result is exactly the same as that obtained for a classical critical Galton-Watson process ( i.e. when there is no migration). Although, in our setting, the return time to zero has a finite expectation, which is not the case for the critical Galton-Watson process, the behaviours of both processes, conditionally on their non-extinction, are still quite similar.

Total progeny over an excursion
The aim of this section is to study the distribution of the total progeny of the branching process Z over an excursion away from 0. We will constantly use the notation In particular, ν ranges through (1/2, 1]. The main result of this section is the key to the proof of Theorem 1.1 and states as follows. Let us first give an informal explanation for this polynomial decay with exponent ν. In view of Remark 3.6, we can expect the shape of a large excursion away from zero of the process Z to be quite similar to that of a Galton-Watson process. Indeed, if H denotes the height of an excursion of Z (and σ denotes the length of the excursion), numerical simulations show that, just as in the case of a classical branching process without migration, H ≈ σ and the total progeny ∑_{k=0}^{σ−1} Z_k is of the same order as Hσ. Since the decay of the tail distribution of σ is polynomial with exponent α + 1, the tail distribution of ∑_{k=0}^{σ−1} Z_k should then decrease with exponent (α+1)/2. In a way, this proposition tells us that the shape of an excursion is very "squared".
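In symbols, the heuristic above reads (informally, with ≈ denoting "same order of magnitude"):

```latex
\sum_{k=0}^{\sigma-1} Z_k \;\approx\; H\sigma \;\approx\; \sigma^2,
\qquad\text{so}\qquad
\mathbb{P}\Bigl\{\sum_{k=0}^{\sigma-1} Z_k > x\Bigr\}
\;\approx\; \mathbb{P}\{\sigma > \sqrt{x}\}
\;\approx\; \frac{c}{x^{(\alpha+1)/2}} \;=\; \frac{c}{x^{\nu}}.
```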
Although there is a vast literature on the subject of branching processes, it seems that not much attention has been given to the total progeny of the process. Moreover, the classical machinery of generating functions and analytic methods, routinely used in the study of branching processes, seems, in our setting, inadequate for the study of the total progeny.
The proof of Proposition 4.1 uses a somewhat different approach and is mainly based on a martingale argument. The idea of the proof is fairly simple but, unfortunately, since we are dealing with a discrete time model, a lot of additional technical difficulties appear and the complete argument is quite lengthy. For the sake of clarity, we shall first provide the skeleton of the proof of the proposition, while postponing the proof of the technical estimates to section 5.2.
Let us also note that, although we shall only study the particular branching process associated with the cookie random walk, the method presented here could be used to deal with a more general class of branching processes with migration.
We start with an easy lemma stating that P{∑_{k=0}^{σ−1} Z_k > x} cannot decrease much faster than Proof. When α = ν = 1, the result is a direct consequence of Corollary 2.5 of Section 2. We now assume α < 1. Hölder's inequality gives Taking the expectation and applying Hölder's inequality again, we obtain, for ε > 0 small enough This result is valid for any ε′ small enough and completes the proof of the lemma.
Proof of Proposition 4.1. In view of the Tauberian theorem stated in Corollary 8.1.7 of [5], it suffices to show that where C > 0 and C ′ ∈ R. Let us stress that, according to the remark following the corollary, we do need, in the case α = 1, the second order expansion of the Laplace transform in order to apply the Tauberian theorem.
The main idea is to construct a martingale in the following way. Let K_ν denote the modified Bessel function of the second kind with parameter ν. For λ > 0, we define We shall give some important properties of φ_λ in section 5.1. For the time being, we simply recall that φ_λ is an analytic, positive, decreasing function on (0, ∞) such that φ_λ and φ′_λ are continuous at 0 with Our main interest in φ_λ is that it satisfies the following differential equation, for x > 0: Now let (F_n, n ≥ 0) denote the natural filtration of the branching process Z, i.e. F_n := σ(Z_k, 0 ≤ k ≤ n), and define, for n ≥ 0 and λ > 0, it is clear that the process is an F-martingale. Furthermore, this martingale has bounded increments since Therefore, the use of the optional sampling theorem is legitimate for any stopping time with finite mean. In particular, applying the optional sampling theorem with the first return time to 0, we get which may be rewritten, using φ_λ(0) = 2^{ν−1}Γ(ν), in the form: The proof of Proposition 4.1 now relies on a careful study of the expectation of ∑_{k=0}^{σ−1} µ(k). To this end, we shall decompose µ into several terms using a Taylor expansion of φ_λ. We first need the following lemma: Proof. Assertion (a) is just a rewriting of equation (2.6). Recall the notations introduced in Section 2. Recall in particular that This proves (b). When p is an even integer, we have and assertion (c) can be proved by developing (Z_{n+1} − Z_n)^p in the same manner as for (b). Finally, when p is an odd integer, Hölder's inequality gives Continuation of the proof of Proposition 4.1. For n ∈ [1, σ − 2], the random variables Z_n and Z_{n+1} are both non-zero and, since φ_λ is infinitely differentiable on (0, ∞), a Taylor expansion yields where r_n is given by Taylor's integral remainder formula; we get When n = σ − 1, equation (4.9) is a priori incorrect because then Z_{n+1} = 0.
However, according to (4.3) and (4.4), the functions φ_λ(t), φ′_λ(t) and tφ″_λ(t) have finite limits as t tends to 0+, thus (4.9) still holds when n = σ − 1. Therefore, for any n ∈ [1, σ − 1], In view of (a) and (b) of Lemma 4.3 and recalling the differential equation (4.4) satisfied by φ_λ, the r.h.s. of the previous equality may be rewritten On the other hand, in view of (4.5) and (4.6), we have (4.10) Thus, for each n ∈ [1, σ − 1], we may decompose µ(n) in the form where In particular, we can rewrite (4.7) in the form Note that we have to treat µ(0) separately since (4.11) does not hold for n = 0. We now state the main estimates: Lemma 4.4. There exist ε > 0 and eight finite constants (C_i, C′_i, i = 0, 2, 3, 4) such that, as λ tends to 0+, Notice that the remainder term θ_n in the Taylor expansion of φ_λ(Z_n) is not really an error term since, according to (e) of the lemma, its contribution is not negligible in the case α < 1. We postpone the long and technical proof of these estimates until Section 5.2 and complete the proof of Proposition 4.1. In view of (4.12), using the previous lemma, we deduce that there exist two constants C, C′ such that It simply remains to check that the constant C is not zero. Indeed, suppose that C = 0. We first assume α = 1. Then, from (4.13), which implies E[∑_{k=0}^{σ−1} Z_k] < ∞ and contradicts Corollary 2.5. Similarly, when α ∈ (0, 1) and C = 0, we get from (4.13), This implies, for any 0 < ε′ < ε, that which contradicts Lemma 4.2. Therefore, C cannot be zero and the proposition is proved.
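For reference, the differential equation (4.4) used above is not reproduced in this text. If, as the boundary value φ_λ(0) = 2^{ν−1}Γ(ν) and the Bessel identities of section 5.1 suggest, φ_λ(x) = (√(2λ) x)^ν K_ν(√(2λ) x), then a direct computation from the modified Bessel equation z²K″_ν(z) + zK′_ν(z) − (z² + ν²)K_ν(z) = 0, together with 2ν − 1 = α, gives

```latex
x\,\varphi_\lambda''(x) \;=\; \alpha\,\varphi_\lambda'(x) \;+\; 2\lambda\,x\,\varphi_\lambda(x),
\qquad x > 0.
```

This is a reconstruction under the stated assumption on φ_λ, not a quotation of the original equation (4.4).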

Some properties of modified Bessel functions
We now collect some properties of modified Bessel functions. All the results cited here are gathered from [1] (sections 9.6 and 9.7), [10] (section 5.7), [16] (section 7) and [6] (section 2). For η ∈ R, the modified Bessel function of the first kind I_η is defined by and the modified Bessel function of the second kind K_η is given by the formula We are particularly interested in Thus, the function φ_λ defined in (4.2) may be expressed in the form (b) If η = 0, then F_0(x) = − log x + log 2 − γ + o(1) as x → 0+, where γ denotes Euler's constant.
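The limits quoted here follow from the standard small-argument behaviour of K_η:

```latex
K_\eta(x) \;\sim\; \frac{\Gamma(\eta)}{2}\Bigl(\frac{x}{2}\Bigr)^{-\eta}
\quad (\eta > 0),
\qquad
K_0(x) \;=\; -\log\frac{x}{2} \;-\; \gamma \;+\; o(1),
\qquad x \to 0^+,
```

which, for η = 0, is exactly the expansion −log x + log 2 − γ + o(1) stated in (b).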

Behaviour at infinity
In particular, for every η > 0, there exists c_η ∈ R such that In particular, F_η solves the differential equation Concerning the function φ_λ, in view of (5.1), we deduce In particular, φ_λ solves the differential equation
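The exponential bounds on F_η used repeatedly in section 5.2 (bounds of the form F_η(x) ≤ c_η e^{−x/2}) stem from the standard large-argument asymptotics of K_η:

```latex
K_\eta(x) \;\sim\; \sqrt{\frac{\pi}{2x}}\;e^{-x}, \qquad x \to +\infty,
```

so any polynomial prefactor appearing in the definition of F_η is absorbed by a factor e^{x/2}.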

Proof of Lemma 4.4
The proof of Lemma 4.4 is tedious but requires only elementary methods. We shall treat, in separate subsections, the assertions (a)-(e) when α < 1 and explain, in a last subsection, how to deal with the case α = 1.
We will use the following result extensively throughout the proof of Lemma 4.4.
Lemma 5.3. There exists ε > 0 such that where we used Corollary 2.5 to conclude on the finiteness of c_4. From Markov's inequality, we deduce that According to Corollary 3.2, for δ < α, we have E[σ^{1+δ}] < ∞, so Hölder's inequality gives which completes the proof of the lemma.

Proof of (a) of Lemma 4.4 when α < 1
Using the expression of µ(0) given by (4.10) and the relation (5.3) between F′_ν and F_{1−ν}, we have Thus, using the dominated convergence theorem, Furthermore, using (5.3) again, we get Therefore, we obtain which proves (a) of Lemma 4.4.

Proof of (b) of Lemma 4.4 when α < 1
Recall that Thus, µ_1(n) is positive and Moreover, for any y > 0, we have 1 − e^{−y} − ye^{−y} ≤ min(1, y²), thus where we used the fact that F_ν is decreasing for the last inequality. In view of (5.2), we also have F_ν(−3 log λ) ≤ c_ν λ^{3/2} and therefore On the one hand, according to (2.9), we have On the other hand, Proposition 2.3 states that P{Z_∞ ≥ x} ∼ c/x^α as x tends to infinity, thus This estimate and (5.5) yield

Proof of (c) of Lemma 4.4 when α < 1
Recall that In view of Lemma 4.3, the quantity |µ_2(n)|/λ^ν is smaller than M^α ‖f_1‖_∞ ‖F_{1−ν}‖_∞. Thus, using the dominated convergence theorem, we get It remains to prove that, for ε > 0 small enough, as We can rewrite the l.h.s. of (5.7) in the form On the one hand, the first term is bounded by where we used formula (5.3) for the expression of F′_{1−ν} for the second inequality. On the other hand, the second term of (5.8) is bounded by where we used Lemma 5.3 for the last inequality. Putting the pieces together, we conclude that (5.7) holds for ε > 0 small enough.

Proof of (d) of Lemma 4.4 when α < 1
Recall that Note that, since α ≤ 1, we have Z_n^{α−1} ≤ 1 when Z_n ≠ 0. The quantities f_2(Z_n), F_ν(√λ Z_n) and F_{1−ν}(√λ Z_n) are also bounded, so we check, using the dominated convergence theorem, that Furthermore, we have The first term is clearly bounded by c_9 λ^{1−ν}. We turn our attention to the second term. In view of (5.3), we have where we used 2 − 2ν = 1 − α for the last equality. Therefore, As for the third term of (5.10), with the help of Lemma 5.3, we find Putting the pieces together, we conclude that

Proof of (e) of Lemma 4.4 when α < 1
Recall that µ_4(n) = −e^{−λ ∑_{k=0}^{n} Z_k} E[θ_n | F_n]. (5.11) This term turns out to be the most difficult to deal with. The main reason is that we must now deal with Z_n and Z_{n+1} simultaneously. We first need the next lemma, stating that Z_{n+1} cannot be too "far" from Z_n.
Lemma 5.4. There exist two constants K_1, K_2 > 0 such that, for all n ≥ 0, Proof. This lemma follows from large deviation estimates. Indeed, with the notation of Section 2, in view of Cramér's theorem (c.f. Theorem 2.2.3 of [7]), we have, for any j ≥ M − 1, where we used the fact that (ξ_i) is a sequence of i.i.d. geometric random variables with mean 1. Similarly, recalling that A_{M−1} admits exponential moments of order β < 2, we also deduce, for j ≥ M − 1, with possibly extended values of K_1 and K_2, that Throughout this section, we use the notation, for t ∈ [0, 1] and n ∈ N, V_{n,t} := Z_n + t(Z_{n+1} − Z_n). In particular, V_{n,t} ∈ [Z_n, Z_{n+1}] (with the convention that, for a > b, [a, b] means [b, a]). With this notation, we can rewrite the expression of θ_n given in (4.8) in the form Using the expressions of φ′_λ and φ″_λ stated in Fact (5.2), we get ∫_0^1 (1 − t)(I_n^1(t) + I_n^2(t)) dt, (5.12) with Notice that the interchanging of ∫ and E is correct since we have the upper bounds which are both integrable. We want to estimate We deal with each term separately.
Dealing with I 1 : We prove that the contribution of this term is negligible, i.e.
To this end, we first notice that where we used (5.2) to find c_{1−ν} such that F_{1−ν}(x) ≤ c_{1−ν} e^{−x/2}. We now split (5.14) according to whether (a) On the one hand, Lemma 4.3 states that, for all p ∈ N and Z_n ≠ 0,
Hence, for 1 ≤ n ≤ σ − 1, we get where we used the fact that the function x^{(6+α)/4} e^{−x/4} is bounded on R+ for the last inequality. On the other hand, And therefore, since this expectation is finite, the proof of (5.13) is complete.
Dealing with I^2: It remains to prove that To this end, we write I_n^2(t) = −αλ^ν (J_n^1(t) + J_n^2(t) + J_n^3(t)), Again, we shall study each term separately. In view of (5.17) and (5.18), the proof of (e) of Lemma 4.4, when α < 1, will finally be complete once we establish the following three estimates: Proof of (5.19): Using a technique similar to that used for I^1, we split J^1 into two different terms according to whether For the first case (a), we write, for 1 ≤ n ≤ σ − 1, recalling that V_{n,t} ∈ [Z_n, Z_{n+1}], where we used (c) of Lemma 4.3 to get an upper bound for the conditional expectation.
For the second case (b), noticing that V_{n,t} ≥ (1 − t)Z_n and keeping in mind Lemma 5.4, we get Moreover, according to Corollary 2.5, the corresponding expectation is finite, which yields (5.19).
Proof of (5.20): We write J_n^2 Again, we split the expression of J_n^2 according to three cases: (5.25) We do not detail the case Z_{n+1} < (1/2) Z_n, which may be treated with the same method used in (5.23) and yields a similar bound which does not depend on Z_n: In particular, this estimate gives: In order to deal with the second term on the r.h.s. of (5.25), we write According to Corollary 2.5, when 1 In this case, we get an upper bound of order λ^{α/4}. Hence, for every α ∈ (0, 1), we can find ε > 0 such that It remains to give the upper bound for the last term on the r.h.s. of (5.25). We have On the one hand, when Z_n ≠ 0 and Z_{n+1} ≠ 0, we have |V_{n,t}^{α−1} − Z_n^{α−1}| ≤ 2, thus, for 1 ≤ n ≤ σ − 1, we get where we used (c) of Lemma 4.3 and Lemma 5.4 for the last inequality. On the other hand, These two bounds yield with β = min((1−α)/4, 1/8). Combining (5.26), (5.28) and (5.29), we finally obtain (5.20).
Proof of (5.21): Recall that In particular, J_n^3(t) does not depend on λ. We want to show that there exist C_4 ∈ R and ε > 0 such that We must first check that This may be done, using the same method as before, by distinguishing two cases: Since the arguments are very similar to those provided above, we feel free to skip the details. We find, for 1 ≤ n ≤ σ − 1, Since this quantity is finite, with the help of the dominated convergence theorem, we get Furthermore, we have Using Hölder's inequality, we get where we used Lemma 5.3 for the last inequality. This yields (5.21) and completes, at last, the proof of (e) of Lemma 4.4 when α ∈ (0, 1).

Proof of Lemma 4.4 when α = 1
The proof of the lemma when α = 1 is quite similar to that of the case α < 1. Giving a complete proof would be lengthy and redundant. We shall therefore provide only the arguments which differ from the case α < 1.
For α = 1, the main difference from the previous case comes from the fact that the function F_{1−ν} = F_0 is no longer bounded near 0, a property that was used extensively in the course of the proof when α < 1. To overcome this new difficulty, we introduce the function G defined by Using the properties of F_0 and F_1 stated in section 5.1, we easily check that the function G satisfies (2) There exists c_G > 0 such that G(x) ≤ c_G e^{−x/2} for all x ≥ 0.
Thus, each time we encounter F 0 (x) in the study of µ k (n), we will write G(x) − F 1 (x) log x instead. Let us also notice that F 1 and F ′ 1 are also bounded on [0, ∞). We now point out, for each assertion (a) -(e) of Lemma 4.4, the modification required to handle the case α = 1.
As in section 5.2.1, we have and by dominated convergence, Furthermore, using the fact that F′_1 is bounded, we get The beginning of the proof is the same as in the case α < 1. We get According to (2.8), there exists c_48 > 0 such that and therefore, for λ sufficiently small, We conclude that Using the definition of G, we now have Since f_1(x) is equal to 0 for x ≥ M − 1, we get the following (finite) limit Using the same idea as in (5.8), using also Lemma 5.3 and the fact that F′_1 is bounded, we deduce that which completes the proof of the assertion.
As in (c), this assertion will be proved as soon as we establish that and that We prove that the first term is o(λ ε ) using the same method as in (5.9). Concerning the second term, we write, with the help of (2.9), where we used (2.8) with β = 1/2 for the last inequality.
It is worth noticing that, when α = 1, the contribution of this remainder term is negligible compared with (a), (c) and (d) and does not affect the value of the constant in Proposition 4.1. This differs from the case α < 1. Recall that where θ n is given by (4.8). Recall also the notation V n,t def = Z n + t(Z n+1 − Z n ). Just as in (5.12), we write As in (5.14), we have In view of the relation and with techniques similar to those used in the case α < 1, we deduce It remains to estimate I 2 n (t), which we now decompose into four terms: We can obtain an upper bound of order λ ε Z 1−ε n for J 1 n (t) by considering again three cases: We deal with (2) by combining Lemma 5.4 and the fact that G ′ is bounded. Finally, the case of (3) may be treated by methods similar to those used for J 2 n (t) in the proof of (e) when α < 1 (i.e. we separate into two terms according to whether Z n+1 ≤ λ −1/4 or not).
Keeping in mind that F 1 is bounded and that |F ′ 1 (x)| = xF 0 (x) ≤ c 53 √ x e −x , the same method enables us to deal with J 2 n (t) and J 3 n (t). Combining these estimates, we get for ε > 0 small enough. Therefore, it simply remains to prove that (1 − t)J 4 n (t)dt (5.35) exists and is finite. In view of the dominated convergence theorem, it suffices to show that We consider separately the cases Z n+1 > Z n and Z n+1 ≤ Z n . On the one hand, using the inequality log(1 + x) ≤ x, we get On the other hand, we find Since E[ ∑ σ−1 n=1 √ Z n ] is finite, we deduce (5.36) and the proof of assertion (e) follows.
6 Proof of Theorem 1.1

Recall that X stands for the (M,p)-cookie random walk and Z stands for its associated branching process. We define the sequence of return times (σ n ) n≥0 by σ 0 def = 0, σ n+1 def = inf{k > σ n , Z k = 0}.
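For concreteness, the transition rule recalled in the introduction can be simulated directly. The following minimal sketch (the function name, parameters and bookkeeping are ours, not from the paper) makes the model easy to explore numerically:

```python
import random

def cookie_walk(p, n_steps, seed=0):
    """Simulate an (M, p)-cookie random walk started at 0.

    p       -- cookie strengths (p_1, ..., p_M), each in [1/2, 1)
    n_steps -- number of steps to simulate
    Upon the j-th visit to a site (j <= M) the walk steps right with
    probability p_j; on all later visits it is symmetric.
    """
    rng = random.Random(seed)
    visits = {}                 # visits[x] = number of visits to site x so far
    x, path = 0, [0]
    for _ in range(n_steps):
        j = visits.get(x, 0) + 1          # the current visit is the j-th one
        visits[x] = j
        p_right = p[j - 1] if j <= len(p) else 0.5
        x += 1 if rng.random() < p_right else -1
        path.append(x)
    return path
```

With such a simulation one can, for instance, estimate sup k≤n X k /n ν over many runs and compare it with the limit law described in the theorem.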
In particular, σ 1 = σ with the notation of the previous sections. We write where S ν denotes a positive, strictly stable law with index ν and where c is a strictly positive constant. Let us note that (6.1) may also be deduced directly from the convergence of the Laplace transform of $\frac{1}{n^{1/\nu}}\sum_{k=0}^{\sigma_n} Z_k$ (resp. $\frac{\log n}{n}\sum_{k=0}^{\sigma_n} Z_k$) using (4.1). Moreover, the random variables (σ n+1 − σ n , n ∈ N) are i.i.d. with finite expectation E[σ], thus Since T n is the inverse of sup k≤n X k , we conclude that This completes the proof of the theorem for sup k≤n X k . It remains to prove that this result also holds for X n and for inf k≥n X k . We need the following lemma.
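Spelled out, the inversion step runs as follows (a sketch under the notation above, with $c\,\mathcal{S}_\nu$ the limit law mentioned for the case $\alpha<1$):
\[
\Big\{\sup_{k\le n} X_k \ge r\Big\} = \{T_r \le n\},
\qquad\text{so}\qquad
\mathbb{P}\Big\{\frac{\sup_{k\le n} X_k}{n^{\nu}} < x\Big\}
= \mathbb{P}\big\{T_{\lceil x n^{\nu}\rceil} > n\big\}.
\]
If $T_r/r^{1/\nu}$ converges in law to $c\,\mathcal{S}_\nu$, the right-hand side converges to $\mathbb{P}\{c\,\mathcal{S}_\nu > x^{-1/\nu}\} = \mathbb{P}\{(c\,\mathcal{S}_\nu)^{-\nu} < x\}$, which identifies the limit law of $n^{-\nu}\sup_{k\le n} X_k$ as the Mittag-Leffler-type distribution of the theorem.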
Lemma 6.1. Let X be a transient cookie random walk. There exists a function f : N → R + with lim K→+∞ f (K) = 0 such that, for every n ∈ N, Proof. The proof of this lemma is very similar to that of Lemma 4.1 of [3]. For n ∈ N, let ω X,n = (ω X,n (i, x)) i≥1,x∈Z denote the random cookie environment at time T n "viewed from the particle", i.e. the environment obtained at time T n and shifted by n. With this notation, ω X,n (i, x) denotes the strength of the i-th cookie at site x: ω X,n (i, x) = p j if j = i + ♯{0 ≤ k < T n , X k = x + n} ≤ M, 1 2 otherwise.
Since the cookie random walk X has not visited the half-line [n, ∞) before time T n , the cookie environment ω X,n on [0, ∞) is the same as the initial cookie environment, that is, for x ≥ 0, ω X,n (i, x) = p i if 1 ≤ i ≤ M, 1 2 otherwise. (6.3) Given a cookie environment ω, we denote by P ω a probability under which X is a cookie random walk starting from 0 in the cookie environment ω. We also introduce the environment ω p,+ defined by ω p,+ (i, x) = 1 2 for all x < 0 and i ≥ 1, and ω p,+ (i, x) = p i for all x ≥ 0 and i ≥ 1 (with the convention p i = 1 2 for i > M). According to (6.3), the random cookie environment ω X,n is almost surely larger than the environment ω p,+ for the canonical partial order, i.e. ω X,n (i, x) ≥ ω p,+ (i, x) for all i ≥ 1, x ∈ Z, almost surely.
The monotonicity result of Zerner stated in Lemma 15 of [19] yields P ω X,n {X visits − K at least once} ≤ P ω p,+ {X visits − K at least once} almost surely.
Combining this with (6.4), we get We now complete the proof of Theorem 1.1. Let n, r, p ∈ N. Using {T r+p ≤ n} = {sup k≤n X k ≥ r + p}, we get Taking the probability of these sets, we obtain P{sup k≤n X k < r} ≤ P{ inf k≥n X k < r} ≤ P{sup k≤n X k < r + p} + P{ inf k≥T r+p X k < r}.
But, using Lemma 6.1, we have Fix x ≥ 0 and choose r = ⌊xn ν ⌋ and p = ⌊log n⌋; letting n tend to infinity, we get, for α < 1, lim n→∞ P{ inf k≥n X k /n ν < x } = lim n→∞ P{ sup k≤n X k /n ν < x }.
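With these choices the error terms can be made explicit (a sketch, assuming Lemma 6.1 provides a bound of the form $\mathbb{P}\{\inf_{k\ge T_n} X_k < n - K\}\le f(K)$):
\[
\mathbb{P}\Big\{\inf_{k\ge T_{r+p}} X_k < r\Big\}
\le f(p) = f(\lfloor \log n\rfloor) \xrightarrow[n\to\infty]{} 0,
\]
while $(r+p)/n^{\nu} = (\lfloor x n^{\nu}\rfloor + \lfloor \log n\rfloor)/n^{\nu} \to x$, so the two outer terms of the sandwich inequality converge to the same limit and squeeze the middle one.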
Of course, the same method also works when α = 1. This proves Theorem 1.1 for inf k≥n X k . Finally, the result for X n follows from inf k≥n X k ≤ X n ≤ sup k≤n X k .