Correlation lengths for random polymer models and for some renewal sequences

We consider models of directed polymers interacting with a one-dimensional defect line on which random charges are placed. More abstractly, one starts from a renewal sequence on $\mathbb Z$ and gives a random (site-dependent) reward or penalty to the occurrence of a renewal at any given point of $\mathbb Z$. These models are known to undergo a delocalization-localization transition, and the free energy $f$ vanishes when the critical point is approached from the localized region. We prove that the quenched correlation length $\xi$, defined as the inverse of the rate of exponential decay of the two-point function, does not diverge faster than $1/f$. We also prove an exponentially decaying upper bound for the disorder-averaged two-point function, with a good control of the sub-exponential prefactor. We discuss how, in the particular case where disorder is absent, this result can be seen as a refinement of the classical renewal theorem, for a specific class of renewal sequences.


Introduction and motivations
The present work is motivated by the following two problems:

• Critical behavior of the correlation lengths for directed polymers with (de-)pinning interactions. Take a homogeneous Markov chain {S_n}_{n≥0} on some discrete state space Σ, with S_0 = 0 and law P. A trajectory of S is interpreted as the configuration of a directed polymer in the space Σ × N.
In typical examples, S is a simple random walk on Σ = Z^d or a simple random walk conditioned to be non-negative on Σ = Z_+. Of particular interest is the case where the distribution of the first return time of S to zero, K(n) := P(min{k > 0 : S_k = 0} = n), decays like a power of n for n large. This holds in particular for the simple random walks mentioned above. We want to model the situation where the polymer gets a reward (or penalty) ω_n each time it touches the line S ≡ 0 (which is called the defect line). In other words, we introduce a polymer-line interaction energy of the form

H_N(S) = − Σ_{n=1}^{N} ω_n 1_{{S_n = 0}},

where N will tend to infinity in the thermodynamic limit. The defect line is attractive at points n where ω_n > 0 and repulsive where ω_n < 0. In particular, one is interested in the situation where the ω_n are IID quenched random variables. There is a large physics literature on these models (cf., e.g., [1]), due to their connection with, e.g., problems of (1 + 1)-dimensional wetting of a disordered wall or with the DNA denaturation transition.
In the localized phase, where the free energy (defined in the next section) is positive and the number of contacts between the polymer and the defect line, |{1 ≤ n ≤ N : S_n = 0}|, grows proportionally to N, one knows [10] that the two-point correlation function

|P_{∞,ω}(S_{n+k} = 0 | S_n = 0) − P_{∞,ω}(S_{n+k} = 0)|   (1.1)

decays exponentially in k, for almost every disorder realization. Here, P_{∞,ω}(.) is the Gibbs measure for a given randomness realization, and the index ∞ refers to the fact that the thermodynamic limit has been taken. The exponential decay of correlation functions has been applied, for instance, to prove sharp results on the maximal excursion length in the localized phase [10, Theorem 2.5] and bounds on the finite-size corrections to the thermodynamic limit of the free energy [10, Theorem 2.8].
The inverse of the rate of decay is identified as a correlation length ξ. A natural question concerns the relation between ξ and the free energy f, in particular near the delocalization-localization critical point, where the free energy tends to zero (see next section) and the correlation length is expected to tend to infinity. The disorder average of the two-point function (1.1) is also known [10] to decay exponentially in k, possibly with a different rate [18].
The important role played by the correlation length, and by its relation with the free energy, in understanding the critical properties of disordered pinning models was emphasized in a recent work by K. Alexander [2].

• Speed of convergence in the renewal theorem. Consider a recurrent renewal sequence τ = {τ_i}_{i∈N∪{0}} with τ_0 = 0 and IID inter-arrival times τ_i − τ_{i−1} distributed according to a probability law p(.) on N, and let u_n := P(n ∈ τ) be the associated renewal mass function. The classical renewal theorem states that

u_n → u_∞ := 1 / Σ_{n∈N} n p(n)  as n → ∞,   (1.2)

with the convention that 1/∞ = 0. It is natural (and quite useful in practice, especially in queuing theory applications) to study the speed of convergence in (1.2). In this respect, it is known (cf. for instance [4, Chapter VII.2], [17]) that, if

z_max := sup{z > 0 : Σ_{n∈N} e^{zn} p(n) < ∞} > 0,   (1.3)

then there exist r > 0 and C < ∞ such that

|u_n − u_∞| ≤ C e^{−rn}.   (1.4)

However, the relation between z_max and the largest possible r in Eq. (1.4), call it r_max, is not known in general. A lot of effort has been put into investigating this point, and in various special cases, where p(.) satisfies some structural ordering properties, it has been proven that r_max ≥ z_max (see for instance [5], where power-series methods are employed and explicit upper bounds on the prefactor C are given). In even more special cases, for instance when the τ_i are the return times of a Markov chain with some stochastic ordering properties, the optimal result r_max = z_max is proved (for details, see [15,18], which are based on coupling techniques). However, the equality r_max = z_max cannot be expected in general. In particular, if p(.) is a geometric distribution, p(n) = e^{−nc}(e^c − 1) with c > 0, then one sees that u_n = u_∞ for every n ∈ N, so that r_max = ∞, while z_max = c. On the other hand, if for instance p(1) = p(2) = 1/2 and p(n) = 0 for n ≥ 3, then z_max = ∞ while r_max is finite. These and other nice counterexamples are discussed in [5].
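The two counterexamples above can be checked numerically from the renewal equation u_n = Σ_{k=1}^{n} p(k) u_{n−k}, u_0 = 1. The following is a small sketch in Python (the helper name `renewal_mass` and the series cutoffs are ours, not from the text):

```python
import math

def renewal_mass(p, N):
    """u_n = P(n is a renewal point), via u_n = sum_k p(k) u_{n-k}, u_0 = 1."""
    u = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        u[n] = sum(p(k) * u[n - k] for k in range(1, n + 1))
    return u

# Geometric law p(n) = e^{-nc}(e^c - 1): u_n = u_inf for every n >= 1,
# so r_max = infinity although z_max = c.
c = 0.7
p_geo = lambda n: math.exp(-n * c) * math.expm1(c)
u_geo = renewal_mass(p_geo, 30)
u_inf_geo = 1.0 / sum(n * p_geo(n) for n in range(1, 200))  # 1 / mean gap

# p(1) = p(2) = 1/2: one finds u_n - u_inf = (-1/2)^n / 3 exactly,
# so r_max = log 2 is finite although z_max = infinity.
p_12 = lambda n: 0.5 if n in (1, 2) else 0.0
u_12 = renewal_mass(p_12, 30)
u_inf_12 = 2.0 / 3.0
```

For the geometric law the renewal points form a Bernoulli process, which is why u_n is exactly constant; for the two-point law the deviation u_n − u_∞ oscillates with the exact value (−1/2)^n/3.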
The two problems are known to be closely related: indeed, in the homogeneous situation (ω_n ≡ const) the law of the collection {n : S_n = 0} of polymer-defect contact points is given, in the thermodynamic limit, by a renewal process of the type described above, with p(n) proportional to K(n) e^{−nf} (cf., for instance, [9, Chapter 2]). In this case, therefore, the free energy f plays the role of z_max above.
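This correspondence can be made concrete numerically. In the homogeneous case, writing the per-contact reward as e^{h} with h > 0 (so our h here corresponds to −h, with β = 0, in the conventions of the next section), the localized-phase free energy is characterized by Σ_n K(n) e^{−fn} = e^{−h}, and the contact-set law is then the probability p(n) = K(n) e^{h − fn}. The cutoff NMAX and the bisection below are implementation choices, not from the text:

```python
import math

NMAX = 20000

# K(n) proportional to n^{-3/2}: the return-law exponent of the simple random walk
Zc = sum(n ** -1.5 for n in range(1, NMAX))
K = lambda n: n ** -1.5 / Zc

def free_energy(K, h, tol=1e-10):
    """Homogeneous pinning free energy: the unique f > 0 solving
    sum_n K(n) exp(-f n) = exp(-h), found by bisection."""
    def g(f):
        return sum(K(n) * math.exp(-f * n) for n in range(1, NMAX)) - math.exp(-h)
    lo, hi = 0.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)  # g is decreasing in f
    return 0.5 * (lo + hi)

h = 0.5
f = free_energy(K, h)
# the contact set is the renewal process with jump law p(n) = K(n) e^{h - f n};
# here f plays the role of z_max
total = sum(K(n) * math.exp(h - f * n) for n in range(1, NMAX))
```

The check that `total` is 1 verifies that the exponentially tilted law is indeed a probability on N, i.e., that f is the correct tilting rate.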
Concerning the first problem listed above, the main result of this paper is that, in the limit where f tends to zero (i.e., when the parameters of the model are varied in such a way that the critical point is approached from the localized phase), the correlation length ξ is at most of order 1/f, for almost every disorder realization. An exponentially decaying upper bound, with a good control of the sub-exponential prefactor, is also derived for the disorder average of the two-point function (1.1), cf. Equation (2.17) of Theorem 2.1 and the discussion in Remark 2.2.
As a corollary we obtain the following result for the second problem above: if the jump law p(.) of the renewal sequence is of the form p(n) = a_{z_max} L(n) n^{−α} e^{−z_max n}, with 1 ≤ α < ∞ and L(.) a slowly varying function (not depending on z_max), then for z_max small one has that r_max is at least of order z_max, and C is at most of order z_max^{−c} for some positive constant c (see Theorem 2.1 and Remarks 2.2, 4.1 below for the precise statements). In particular, this means that |u_n − u_∞| starts decaying exponentially (with rate at least of order z_max) as soon as n ≫ 1/z_max.

Notations and main result
We will define our "directed polymer" model in an abstract way where the Markov chain S mentioned in the introduction does not appear explicitly. In this way the intuitive picture of the Markov chain trajectory as representing a directed polymer configuration is somewhat hidden, but the advantage is that the connection with renewal theory becomes immediate. The link with the polymer model discussed in the introduction is made by identifying the renewal sequence τ below with the set of the return times of the Markov chain S to the site 0.
Let K(.) be a probability distribution on N := {1, 2, . . .}, i.e.,

K(n) ≥ 0 for n ∈ N and Σ_{n∈N} K(n) = 1.   (2.1)

We assume that

K(n) = L(n) n^{−α}   (2.2)

for some 1 ≤ α < ∞. Here, L(.) is a slowly varying function, i.e., a positive function L : R_+ ∋ x → L(x) ∈ (0, ∞) such that lim_{x→∞} L(xr)/L(x) = 1 for every r > 0. Given x ∈ Z, we construct a renewal process τ := {τ_i}_{i∈N∪{0}} with law P_x as follows: τ_0 = x, and the increments τ_i − τ_{i−1} are IID integer-valued random variables with law K(.). P_x can be naturally seen as a law on the set Ω_x of subsets of Z ∩ [x, ∞) containing x. Note that, thanks to (2.1), τ is a recurrent renewal process (possibly, null-recurrent). Now we modify the law of the renewal by switching on a random interaction as follows. We let {ω_n}_{n∈Z} be a sequence of IID centered random variables with law P and E ω_0² = 1. For simplicity, we require also ω_n to be bounded. Then, given h ∈ R, β ≥ 0, x, y ∈ Z with x < y and a realization of ω, we let

dP_{x,y,ω}/dP_x (τ) = (1/Z_{x,y,ω}) exp( Σ_{n ∈ τ ∩ (x,y]} (β ω_n − h) ) 1_{{y ∈ τ}},   (2.3)

where, of course,

Z_{x,y,ω} := E_x [ exp( Σ_{n ∈ τ ∩ (x,y]} (β ω_n − h) ) 1_{{y ∈ τ}} ],

and P_{x,y,ω} is still a law on Ω_x. Note that the normalization condition (2.1) is by no means a restriction: if we had Σ := Σ_{n∈N} K(n) < 1, we could perform the replacements K(n) → K(n)/Σ and h → h − log Σ, and the measure P_{x,y,ω} would be unchanged.
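A direct way to get acquainted with Z_{x,y,ω} is to compute it by dynamic programming over the rightmost renewal point. The sketch below (with a toy bounded disorder and illustrative parameters of our choosing) also checks two facts used later: the factorization lower bound Z_{x,z,ω} Z_{z,y,ω} ≤ Z_{x,y,ω} (first inequality of (B.3)) and the single-jump bound (B.1):

```python
import math
import random

def Z(x, y, omega, K, beta, h):
    """Constrained partition function Z_{x,y,omega}: sum over renewal sets
    containing x and y, each renewal point t in (x, y] contributing
    K(gap) * exp(beta*omega_t - h). Computed by dynamic programming."""
    z = {x: 1.0}
    for t in range(x + 1, y + 1):
        z[t] = sum(z[s] * K(t - s) * math.exp(beta * omega[t] - h)
                   for s in range(x, t))
    return z[y]

rng = random.Random(0)
N = 30
omega = {n: rng.choice([-1.0, 1.0]) for n in range(0, N + 1)}  # toy bounded disorder
Zc = sum(n ** -1.5 for n in range(1, 1000))
K = lambda n: n ** -1.5 / Zc
beta, h = 0.8, 0.2

z_full = Z(0, N, omega, K, beta, h)
```

Restricting the sum to configurations passing through an intermediate point z can only decrease Z_{x,y,ω}, which is exactly the first inequality of (B.3).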
One defines the free energy as

f(β, h) := lim_{N→∞} (1/N) log Z_{0,N,ω}.

The convergence holds almost surely and in L¹(P), and f(β, h) is P(dω)-a.s. constant (see [9, Chap. 4] and [3]). It is known that f(β, h) ≥ 0: to realize this, it is sufficient to observe that

(1/N) log Z_{0,N,ω} ≥ (1/N) log ( K(N) e^{β ω_N − h} ),

which tends to zero for N → ∞, thanks to (2.2) and the boundedness of ω. One then decomposes the phase diagram into localized and delocalized regions defined as

L := {(β, h) : f(β, h) > 0} and D := {(β, h) : f(β, h) = 0}.

In L the number of contacts |τ ∩ {1, . . . , N}| grows proportionally to N; in D, on the other hand, the contact density |τ ∩ {1, . . . , N}|/N tends to zero with N. Another quantity which will play an important role in the following is µ(β, h), the decay rate associated with the probability of long excursions of τ (see [10, Appendix B] for its precise definition). As is known (cf. [10, Theorem 2.5 and Appendix B]), for (β, h) ∈ L one has

0 < µ(β, h) ≤ f(β, h).

On the other hand, it is unknown whether the ratio f(β, h)/µ(β, h) remains bounded for h → h_c(β). µ(β, h) is related to the maximal excursion length in the localized phase, in the sense that essentially ∆_N ≃ log N/µ(β, h), see [10, Theorem 2.5] (cf. also [1] for a proof of the same fact in a related model, the heteropolymer at a selective interface).
As was proven in [10] (but see also [6] for the proof of the almost sure existence of the infinite-volume Gibbs measure for the heteropolymer model in the localized phase), the limit

P_{∞,ω} := lim_{x→−∞, y→∞} P_{x,y,ω}   (2.15)

exists, P(dω)-a.s., for every (β, h) ∈ L and for every bounded local observable f, and is independent of the way the limits x → −∞, y → ∞ are performed. A bounded local observable is a bounded function f : {τ : τ ⊂ Z} → R for which there exists a finite subset I of Z such that f(τ¹) = f(τ²) whenever τ¹ ∩ I = τ² ∩ I. The smallest possible I is called the support of f. An example of a local observable is |τ ∩ I|, the number of points of τ which belong to I. On the other hand, τ_1 is not a local observable.
A useful identity is the following: let a ∈ Z and let f, g be two local observables whose supports are contained in {. . . , a − 2, a − 1} and {a + 1, a + 2, . . .}, respectively. Then, if x < a < y,

E_{x,y,ω}( f g | a ∈ τ ) = E_{x,y,ω}( f | a ∈ τ ) E_{x,y,ω}( g | a ∈ τ ),   (2.16)

E_{x,y,ω} denoting the expectation with respect to P_{x,y,ω}. In other words, conditioning on the event that a belongs to τ makes the process to the left and to the right of a independent. This is easily checked from the definition (2.3) of the Boltzmann-Gibbs measure and from the IID character of the disorder variables ω. Our first result, Theorem 2.1, is an exponentially decaying upper bound, Eq. (2.17), on the disorder-averaged two-point correlation function in the localized phase, with decay rate at least of order µ(β, h)^{1+ǫ}. The constant C_1(ǫ, β, h) appearing there does not vanish at the critical line: for every bounded subset B of L one has inf_{(β,h)∈B} C_1(ǫ, β, h) > 0.

Remark 2.2. Note that Theorem 2.1 is more than just a bound on the rate of exponential decay of the disorder-averaged two-point correlation. Indeed, thanks to the explicit bound on the prefactor in front of the exponential, Eq. (2.17) says that the exponential decay, with rate at least of order µ^{1+ǫ}, commences as soon as k ≫ µ^{−1−ǫ} | log µ|. This observation reinforces the meaning of Eq. (2.17) as an upper bound on the correlation length of disorder-averaged correlation functions.
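The identity (2.16) can be verified by brute force on a small interval, enumerating all renewal configurations between x and y. The observables, disorder sample, and parameters below are illustrative choices of ours:

```python
import math
from itertools import combinations

def weight(tau, omega, K, beta, h):
    """Boltzmann weight of a renewal configuration tau = (x, ..., y):
    product of K(gap) * exp(beta*omega_t - h) over renewal points t > x."""
    w = 1.0
    for s, t in zip(tau, tau[1:]):
        w *= K(t - s) * math.exp(beta * omega[t] - h)
    return w

x, y, a = 0, 8, 4
omega = {n: 0.5 * (-1) ** n for n in range(x, y + 1)}  # a fixed disorder sample
Zc = sum(n ** -1.5 for n in range(1, 100))
K = lambda n: n ** -1.5 / Zc
beta, h = 1.0, 0.3

inner = range(x + 1, y)
configs = [(set((x,) + S + (y,)), weight((x,) + S + (y,), omega, K, beta, h))
           for r in range(len(inner) + 1) for S in combinations(inner, r)]

def cond_exp(obs, given):
    num = sum(w * obs(t) for t, w in configs if given(t))
    den = sum(w for t, w in configs if given(t))
    return num / den

f = lambda t: 1.0 if 2 in t else 0.0   # supported to the left of a
g = lambda t: 1.0 if 6 in t else 0.0   # supported to the right of a
lhs = cond_exp(lambda t: f(t) * g(t), lambda t: a in t)
rhs = cond_exp(f, lambda t: a in t) * cond_exp(g, lambda t: a in t)
```

The factorization is exact because, once a ∈ τ, the weight of a configuration splits into a product of a left part and a right part.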
It would be possible, via the Borel-Cantelli lemma, to extract from Eq. (2.17) the almost-sure exponential decay of the disorder-dependent two-point function. However, from [18] one expects the almost-sure exponential decay to be governed by f(β, h) rather than by µ(β, h). Indeed, we have the corresponding almost-sure statement, Theorem 2.3. Recalling that f ≥ µ, it is clear that Theorem 2.3 cannot be deduced from Theorem 2.1.

Remark 2.4.
It is quite tempting to expect that, in analogy with Theorem 2.1, the (random) prefactor C_2(ω) is bounded above by a quantity of the form C_5(ω, ǫ, β, h) f^{−c} for some random variable C_5 such that, say, E C_5(ω, ǫ, β, h) ≤ c(B, ǫ) < ∞ for (β, h) belonging to a bounded set B ⊂ L. This would mean that the almost-sure exponential decay with rate at least of order f^{1+ǫ} commences as soon as k ≫ n(ω) f^{−1−ǫ} | log f |, with n(ω) random but typically of order one even close to the critical point. However, this kind of result seems to be out of reach with the present techniques.
Once the exponential decay of the two-point function is proven, it is not difficult to obtain similar results for the correlation between any two given local observables (cf. Remark 5.1 below for some more details). This is the content of Theorem 2.6: if A and B are two bounded local observables with supports S_A and S_B, their correlation decays exponentially in the distance between S_A and S_B, where C_1 and C_2 are as in Theorems 2.1 and 2.3.

Sketch of the idea: auxiliary Markov process and coupling
In this section, we give an informal sketch of the basic ideas underlying the proof of the upper bounds for the two-point function. The actual proof is somewhat involved and takes Sections 4 to 7.
The basic trick is to associate to the renewal probability K(.) a Markov process {S_t}_{t≥x} such that, very roughly speaking, its trajectories are continuous "most of the time" and the random set of times {t ∈ Z ∩ [x, ∞) : S_t = 0} has the same distribution as the discrete renewal process {τ_i}_{i∈N∪{0}} associated to K(.), with law P_x. This construction is done in Section 4, where we see that S. is strictly related to the Bessel process [16] of dimension 2(α + 1). Once we have S., we switch on the interaction and, in the thermodynamic limit x → −∞, y → ∞, we obtain a new measure P̂_{∞,ω} on the paths {S_t}_{t∈R}. An important point will be that the process S., under P̂_{∞,ω}, is still Markovian, and that the marginal distribution of τ := {t ∈ Z : S_t = 0} is just the measure P_{∞,ω} defined in Eq. (2.15). At that point, we take two copies (S¹, S²) of the process, distributed according to the product measure P̂⊗²_{∞,ω}, and we define the coupling time T(S¹, S²) = inf{t ≥ 0 : S¹_t = S²_t}. The two-point function (1.1) is then bounded above by the probability that T exceeds k: indeed, if the two paths meet before time k, we can let them proceed together from then on, and they will either both touch zero at t = k, or both will not touch it. In [18] a sharp result was proven in a specific case: if P is the law of the zeros of the one-dimensional simple random walk conditioned to be non-negative (but that proof works also for the unconditioned simple random walk), then the limit in (2.18) exists for (β, h) ∈ L and equals exactly f(β, h). Similarly, for the disorder-averaged two-point function the analogous limit exists and equals µ(β, h). The simplification that occurs in the situation considered in [18] is that two trajectories of the Markov chain which is naturally associated to K(.), i.e., of the simple random walk, must necessarily meet whenever they cross each other. This avoids the construction of the auxiliary Markov chain and makes the coupling argument much more efficient.
Let us emphasize that, in general, it is not even proven that the rate of exponential decay of the (averaged or not) two-point correlation function tends to zero when the critical point is approached (although this is very intuitive, and known for instance in the case considered in [18], as already mentioned).

The Markov process
Let {ρ^{(s)}_t}_{t≥s} be the Bessel process of dimension δ > 2 started at time s, and denote its law by P^{(s)}. (The Bessel process is actually well defined also for δ ≤ 2, but we will not need that here.) For the application we have in mind, we choose the initial condition ρ^{(s)}_s = 1/2. The transition kernel of the Bessel process, which gives the probability of being in y at time t having started at x at time 0, is known explicitly [16] for t, x > 0, with a density in y with respect to the Lebesgue measure. Let T^{(s)} denote the first time after s at which ρ^{(s)} equals 1/2, and let ρ̂^{(s)} denote the process conditioned on T^{(s)} < ∞. Finally, for n ∈ N we set K^{(δ)}(n) := P̂^{(s)}(T^{(s)} − s ∈ (n − 1, n]). One can prove (cf. Appendix A; the proof is an immediate consequence of results in [13] and [12]) that K^{(δ)}(n) n^{δ/2} converges to a positive constant as n → ∞, the existence of the limit being part of the statement.
We choose the parameter of the Bessel process as δ = 2(1 + α + ǫ), with ǫ > 0 (this is the same ǫ which appears in the statement of Theorem 2.1). Then, from Eqs. (4.3), (4.4) and (2.2) it is immediate to realize that there exists p = p(ǫ), with 0 < p < 1, such that, for every n ∈ N,

K(n) = p K^{(δ)}(n) + (1 − p) K̃(n),   (4.6)

where K̃(n) ≥ 0 and, of course, Σ_{n∈N} K̃(n) = 1. The important point here is the non-negativity of K̃(n), which implies that both K^{(δ)}(.) and K̃(.) are probabilities on N, to which renewal processes are naturally associated. Note for later convenience that, as a consequence of (B.2), the ratio p K^{(δ)}(n)/K(n) admits the polynomial (in 1/n) lower bound of Eq. (4.7).

Remark 4.1. Note that, if the slowly varying function L(n) in (2.2) tends to a positive constant for n → ∞, one can choose ǫ = 0, and in that case the bound (4.7) can be improved correspondingly.

We now construct the process S^{(x)} = (φ^{(x)}, ψ^{(x)}), started at S^{(x)}_x = (0, 0). The construction is iterative: each time the process is at a renewal point, i.e., condition (4.9) holds at some time t, we extract (independently of {S^{(x)}_u}_{u≤t}) a random variable Ψ which takes value 0 with probability (1 − p), and 1 with probability p (p being defined in Eq. (4.6)). At that point (see Figure 1):
• If Ψ = 0, then we extract a random variable m ∈ N with probability law K̃(.) and we let φ^{(x)}_u = t + m − u for u ∈ (t, t + m]. In the same time interval, we let ψ^{(x)}_u = Ψ = 0. At time t + m, we are back to condition (4.9) and we start again the procedure with an independent extraction of Ψ.
• If Ψ = 1, then we let φ^{(x)}_u evolve like the process ρ̂^{(t)}_u for u ∈ (t, T^{(t)}), where, we recall, T^{(t)} is the (random, but almost surely finite) first time after t at which ρ̂^{(t)} equals 1/2. In the same time interval, we let ψ^{(x)}_u = Ψ = 1. At time T^{(t)} we are back to condition (4.9) and we start again with an independent extraction of Ψ.
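The decomposition (4.6) is a general fact about pairs of laws satisfying K(n) ≥ p K^{(δ)}(n): the sketch below builds it for two toy power laws (the exponents 3/2 and 1.7 stand in for K(.) and for the faster-decaying Bessel return law; they are our choices, not the text's):

```python
N = 100000
Z1 = sum(n ** -1.5 for n in range(1, N + 1))
Z2 = sum(n ** -1.7 for n in range(1, N + 1))
K  = [0.0] + [n ** -1.5 / Z1 for n in range(1, N + 1)]  # the law K(.)
KB = [0.0] + [n ** -1.7 / Z2 for n in range(1, N + 1)]  # faster-decaying law, playing K^(delta)

# largest p for which K(n) - p*KB(n) >= 0 for all n
p = min(K[n] / KB[n] for n in range(1, N + 1))
# the residual law K-tilde of (4.6): non-negative and normalized by construction
Kt = [0.0] + [(K[n] - p * KB[n]) / (1 - p) for n in range(1, N + 1)]
```

Non-negativity of the residual is exactly what makes K̃(.) a probability on N, so that a renewal process can be associated with it.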

The process S^{(x)} so constructed (whose law will be denoted by P̂_x) satisfies the following properties, which are easily checked:

C In the localized phase, the thermodynamic limit P̂_{∞,ω} := lim_{x→−∞, y→∞} P̂_{x,y,ω} exists, in the sense of convergence of the expectations of bounded local observables (i.e., bounded functions of {S_u}_{u∈I}, with I a bounded subset of R). This is a consequence of the fact that in the localized region τ has a nonzero density in Z and that the limit exists for functions depending only on τ, as discussed in Section 2. We will call simply S. = (φ., ψ.) the limit process obtained as x → −∞, y → ∞, and τ = {t ∈ Z : φ_t = 0}.

D The process S. is Markovian. More precisely: if A is a local event supported on [u, ∞), then P̂_{∞,ω}(A | {S_t}_{t≤u}) = P̂_{∞,ω}(A | S_u). (This property is easily checked for x, y finite, and then passes to the thermodynamic limit.)

E Let again τ = {t ∈ Z : φ_t = 0}, and let A_{a,b} be the event {a ∈ τ, b ∈ τ, {a+1, . . . , b−1} ∩ τ = ∅}, for a, b ∈ Z with x < a < b < y. Under the law P̂_{x,y,ω}, conditionally on A_{a,b}, the variable ψ_{a+} (= ψ_u for every u ∈ (a, b], from our construction of S.) is independent of {S_t}_{t∈(−∞,a)∪(b,∞)} and is a Bernoulli variable which equals 0 with probability (1 − p) K̃(b − a)/K(b − a) and 1 with probability p K^{(δ)}(b − a)/K(b − a), where the lower bound on the latter probability follows from (4.7). As for {φ_u}_{u∈(a,b]}, conditionally on A_{a,b} it is also independent of {S_t}_{t∈(−∞,a)∪(b,∞)}. If in addition we condition on ψ_{a+} = 0, then φ_u = b − u, while if we condition on ψ_{a+} = 1, then {φ_u}_{u∈(a,b]} has the same law as a trajectory of ρ̂^{(a)}, conditioned on T^{(a)} ∈ (b − 1, b].

The coupling inequality
Consider two independent copies S¹, S² of the process S., distributed according to the product measure P̂⊗²_{∞,ω}(.). As a consequence of property C of Section 4, we can rewrite the two-point function P_{∞,ω}(n + k ∈ τ | n ∈ τ) − P_{∞,ω}(n + k ∈ τ) in terms of the process S. under P̂_{∞,ω}; this is Eq. (5.1). Given two trajectories of S., define their first coupling time after time zero as

T(S¹, S²) := inf{t ≥ 0 : φ¹_t = φ²_t}.

It is important to remark that we are not requiring T(S¹, S²) to be an integer. From the Markov property of S it is clear that the r.h.s. of (5.1) is unchanged if the two copies are run together after their coupling time. Therefore, we conclude that the two-point function is bounded above, in absolute value, by the P̂⊗²_{∞,ω}-probability, conditionally on φ¹_0 = 0, that T(S¹, S²) > k. To proceed with the proof of Theorems 2.1 and 2.3, we are left with the task of giving upper bounds for the probability that the coupling time is large. This will be done in Section 7, but first we need results on the geometry of the set {t ∈ Z : φ_t = 0} ∩ {1, . . . , k}, for k large and close to the critical line.
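The coupling inequality can be seen in its simplest form on the renewal law p(1) = p(2) = 1/2 of the introduction: there the indicator process of τ is itself a two-state Markov chain, so no auxiliary continuous process is needed, and an exact computation with rational arithmetic shows |u_k − u_∞| ≤ P(T > k) (here, in fact, with equality). The encoding below is ours:

```python
from fractions import Fraction

# State 0 = "renewal at this time", state 1 = "no renewal". For p(1)=p(2)=1/2:
# from a renewal the next point is a renewal w.p. 1/2; from a non-renewal
# the next point is surely a renewal.
P = {0: {0: Fraction(1, 2), 1: Fraction(1, 2)},
     1: {0: Fraction(1), 1: Fraction(0)}}
pi = {0: Fraction(2, 3), 1: Fraction(1, 3)}  # stationary law; pi[0] = u_inf

def evolve(mu):
    out = {0: Fraction(0), 1: Fraction(0)}
    for s, m in mu.items():
        for t, pr in P[s].items():
            out[t] += m * pr
    return out

mu = {0: Fraction(1), 1: Fraction(0)}   # copy 1: conditioned on a renewal at 0
# product chain of two independent copies, copy 2 started from pi; mass on
# equal pairs is "coupled" and removed, so the remaining mass is P(T > k)
joint = {(0, 1): pi[1], (1, 0): Fraction(0)}

results = []
for k in range(1, 13):
    new = {(0, 1): Fraction(0), (1, 0): Fraction(0)}
    for (a, b), m in joint.items():
        for a2, p1 in P[a].items():
            for b2, p2 in P[b].items():
                if a2 != b2:          # equal pairs couple and are discarded
                    new[(a2, b2)] += m * p1 * p2
    joint = new
    mu = evolve(mu)
    results.append((abs(mu[0] - pi[0]), sum(joint.values())))  # (|u_k - u_inf|, P(T > k))
```

Once the two copies agree, they can be run together from then on, which is precisely the argument sketched above; discarding the coupled mass therefore yields an upper bound on the two-point function.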
The upper bounds of Section 7 on the probability of large coupling times therefore also imply Theorem 2.6 (indeed, the proof of Eqs. (7.1) and (7.6) can be easily repeated in the absence of the conditioning on the event φ¹_0 = 0).

Estimates on the distribution of returns in a long time interval
Ideas similar to those employed in this section have already been used in Ref. [10] and, more recently, in [2].
To simplify notations, we will from now on set v := (β, h), µ := µ(v) and f := f(v). Also, in the following whenever a constant c(v) is such that for every bounded B ⊂ L one has 0 < c − (B) ≤ inf v∈B c(v) ≤ sup v∈B c(v) ≤ c + (B) < ∞, we will say with some abuse of language that it is independent of v. In particular, this means that c(v) cannot vanish or diverge when the critical line is approached.
In this section we prove, roughly speaking, that if the interval {1, . . . , k} is large, there are sufficiently many points of τ in it, and these points are rather uniformly distributed. We will also need an analogous P(dω)-almost sure result. However, in this case the strategy has to be modified, and {1, . . . , k} has to be divided into blocks whose lengths depend on ω: namely, let i_0(ω) = 0, and let A_I be the event that τ intersects the blocks B_ℓ exactly for ℓ ∈ I. We can rewrite (in a unique way) B_I := ∪_{ℓ∈I} B_ℓ as a disjoint union of intervals {i_r, . . . , j_r}, r = 1, . . . , m(I). (If m(I) = 1, the formula is slightly modified in that the sum is only on x_1 ≤ i_1 and y_1 ≥ j_1; the estimates which follow hold also in this case.) Here we are using the fact that the disorder variables are bounded, say, |ω_n| ≤ ω_max. To obtain (6.9), observe that, if i⁻_r := max{τ_i : τ_i ≤ i_r} and j⁺_r := min{τ_i : τ_i ≥ j_r}, then

P_{∞,ω}(A_I ; i⁻_r = x_r, j⁺_r = y_r ∀ r = 1, . . . , m(I))
≤ P_{∞,ω}(A_I | i⁻_r = x_r, j⁺_r = y_r ∀ r = 1, . . . , m(I))   (6.10)
≤ Π_{r=1}^{m(I)} [ K(y_r − x_r) e^{β ω_{y_r} − h} / Z_{x_r, y_r, ω} ],   (6.11)

where we used (2.16) in the last step. It is clear that, on the event A_I, one has i⁻_r ≥ i_r − R if r > 1 (otherwise the block {i_r − R, . . . , i_r − 1} would be contained in B_I, which is not possible since i_r ≥ j_{r−1} + R) and, similarly, j⁺_r ≤ j_r + R if r < m(I). Then, (6.9) immediately follows. Note that, by the first inequality in (B.3), one can bound Z_{x_r,y_r,ω} ≥ Z_{x_r,i_r,ω} Z_{i_r,j_r,ω} Z_{j_r,y_r,ω}. Therefore, using Eqs. (B.1), (B.2) and (B.4), we obtain the bound (6.12) for some positive c_7, c_8; the factor µ^{−c_7} comes, through (B.4), from the sum over the possible values of the x_r and y_r. For the almost-sure statement, define

A^ω_I := {B^ω_ℓ ∩ τ ≠ ∅ for every ℓ ∈ I} ∩ {B^ω_ℓ ∩ τ = ∅ for every ℓ ∉ I}   (6.23)

and rewrite B_I := ∪_{ℓ∈I} B^ω_ℓ as ∪_r {i_{x_r}(ω) + 1, . . . , i_{y_r}(ω)}, where the indices x_r, y_r are chosen so that i_{x_r}(ω) ≥ i_{y_{r−1}}(ω) + 2. Then, with a conditioning argument similar to the one which led to Eq.
(6.12), one finds, for c sufficiently large, an analogous almost-sure bound. In the third inequality we used, once more, Jensen's inequality for the logarithm, and in the fourth one the monotonicity of x → x log(1/x) for x > 0 small, plus Eq. (6.4) and the assumption that |I| ≥ ηM(ω). Considering all possible sets I of cardinality not smaller than ηM(ω), we see that the l.h.s. of (6.5) is bounded above by the corresponding union bound.

Finally, we can go back to the problem of estimating from above the P̂⊗²_{∞,ω}-probability that the coupling time is larger than k, cf. Section 5. This will conclude the proof of Theorems 2.1, 2.3 and 2.6.

7.1. The average case. We wish first of all to prove the bound (7.1). To this purpose, observe that, if τ^a = {t ∈ Z : φ^a_t = 0}, a = 1, 2, this would be an immediate consequence of Proposition 6.1 if the conditioning on 0 ∈ τ¹ were absent. However, the proof of Proposition 6.1 can be repeated exactly in the presence of the conditioning, i.e., when the measure P_{∞,ω}(.) is replaced by P_{0,∞,ω}(.) := lim_{y→∞} P_{0,y,ω}(.) in Eq. (6.3). Therefore, the quantity to be bounded splits into a main term plus

E P̂⊗²_{∞,ω}( T(S¹, S²) > k + 1, U^c, φ¹_0 = 0 ),

where U^c is the complement of the event U. On the other hand, provided that η is chosen sufficiently small (but independent of v), it is clear that if the event U^c occurs, there exist at least, say, M/10 integers 1 < ℓ_i < M such that ℓ_i > ℓ_{i−1} + 2 and B_r ∩ τ^a ≠ ∅ for every a ∈ {1, 2} and r ∈ {ℓ_i − 1, ℓ_i, ℓ_i + 1}. The condition ℓ_i > ℓ_{i−1} + 2 simply guarantees that any two triplets of blocks of the kind {B_{ℓ_i−1}, B_{ℓ_i}, B_{ℓ_i+1}} are disjoint for different i, a condition we will need later in this section. We need to introduce Definition 7.1 (see Figure 2). Using also Lemma 7.2, one has then that, conditionally on U^c, the P̂⊗²_{∞,ω}-probability that T(S¹, S²) > k is exponentially small in the number of coupling attempts. Recalling the definitions (6.1) and (6.2) of R and M, one can bound this probability from above by exp( −d_5(ǫ) k µ^{1+5ǫ} / | log µ |² ).
A.1. Proof of Lemma 7.2. Let x, y be any pair of sites which satisfies the conditions required by Definition 7.1. Assume for definiteness that x ∈ τ¹, y ∈ τ². We assume also that x < y, otherwise the lemma is trivial. For technical reasons, it is also convenient to treat separately the case x = y − 1. In this case, the lemma follows immediately from (B.3): indeed, one easily deduces from it that, conditionally on y ∈ τ², the probability that also y − 1 ∈ τ² is greater than some positive constant, independent of ω.
As for the more difficult case x < y − 1, it is clear that there exists x ≤ t ≤ y such that φ¹_t = φ²_t whenever φ²_x ≥ 1 (we assume that x ∉ τ², otherwise the existence of t with φ¹_t = φ²_t is trivial). This follows (see also Figure 2) from the observation that φ¹_{x+} = 1, that φ¹_y ≥ 1/2, and that there exists y − 1 < s ≤ y with φ²_s = 1/2, together with the fact that the trajectories of the Bessel process are continuous almost surely. Therefore, the lemma follows if we can prove that the probability that φ²_x ≥ 1 is bounded below by a positive constant. This is the content of (A.4) below.
In order to state (A.4), we need to introduce the Bessel bridge process of dimension δ [16, Chapter XI.3]. Given u ≥ 0 and a, v > 0, the Bessel bridge is a continuous process {X_t}_{t∈[0,a]} (whose law is denoted by P^{a,δ}_{u,v}) which starts from u at time 0, ends at v at time a, and whose finite-dimensional distributions at 0 < s_1 < . . . < s_k < a have an explicit density. Then, what we need is

inf_{u,v ≥ 1/2} P^{2,δ}_{u,v}( X_1 ≥ 1 ) > 0.   (A.4)

Let p(x_1, . . . , x_{2n−1}) be the probability density of (X_{1/n}, . . . , X_{(2n−1)/n}). Given x^a := (x^a_1, . . . , x^a_{2n−1}), x^a_j > 0, a = 1, 2, define x¹ ∨ x² := ((x¹_1 ∨ x²_1), . . . , (x¹_{2n−1} ∨ x²_{2n−1})) and analogously x¹ ∧ x². Then, from the continuity and Markov property of the Bessel bridge process [16, Chapter XI.3], it is clear that p(x¹ ∨ x²) p(x¹ ∧ x²) ≥ p(x¹) p(x²). This is just the FKG inequality, which implies in particular that the probability in (A.10), for any given n, is not smaller than P^{2,δ}_{u,v}(X_1 ≥ 1).

In this section we collect some technical estimates which, in very similar form, have already been used in the previous literature. Let us first notice that, for every x < y and uniformly in ω,

Z_{x,y,ω} ≥ e^{β ω_y − h} K(y − x).
(B.1) Also, Eq. (2.2) and the property of slow variation imply that for every ǫ > 0 there exist positive constants d_1(ǫ), d_2(ǫ) such that, for every n ∈ N,

d_1(ǫ) n^{−(α+ǫ)} ≤ K(n) ≤ d_2(ǫ) n^{−(α−ǫ)}.   (B.2)

In Lemma A.1 of [10] it was proven that there exists c_1, which in the case of bounded disorder can be chosen independent of ω, such that for every x < z < y

Z_{x,z,ω} Z_{z,y,ω} ≤ Z_{x,y,ω} ≤ c_1 ((z − x) ∧ (y − z))^{c_1} Z_{x,z,ω} Z_{z,y,ω}.   (B.3)

As was shown in [10, Proposition 2.7], this immediately implies that there exists c′_1 > 0 such that, for every y > x,

| (1/|y − x|) E log Z_{x,y,ω} − f(v) | ≤ c′_1 log |y − x| / |y − x|.   (B.4)
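The FKG step used in the proof of (A.4) above can be illustrated in a discrete sketch: for a pinned chain whose pairwise log-density is supermodular (here −|a − b|, a stand-in of ours for the Bessel bridge kernel), the path densities satisfy p(x¹ ∨ x²) p(x¹ ∧ x²) ≥ p(x¹) p(x²) along every pair of paths:

```python
import random

def logp(path):
    """Log-density (up to normalization) of a pinned chain path; each edge
    term -|a - b| is supermodular, so the path law satisfies FKG."""
    return sum(-abs(a - b) for a, b in zip(path, path[1:]))

rng = random.Random(1)
ok = True
for _ in range(500):
    x1 = [rng.uniform(0, 3) for _ in range(7)]
    x2 = [rng.uniform(0, 3) for _ in range(7)]
    hi = [max(a, b) for a, b in zip(x1, x2)]   # componentwise maximum
    lo = [min(a, b) for a, b in zip(x1, x2)]   # componentwise minimum
    ok = ok and (logp(hi) + logp(lo) >= logp(x1) + logp(x2) - 1e-9)
```

The inequality holds term by term because −|a − b| has increasing differences in (a, b); summing over consecutive pairs gives the lattice inequality for the whole path.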
One can obtain similar estimates for related quantities in the same way.

This work originated from discussions with Giambattista Giacomin, to whom I am very grateful for several suggestions. Partial support from the GIP-ANR project JC05 42461 (POLINTBIO) is acknowledged.