Scaling limit of a one-dimensional polymer in a repulsive i.i.d. environment

The purpose of this paper is to study a one-dimensional polymer penalized by its range and placed in a random environment $\omega$. The law of the simple symmetric random walk up to time $n$ is modified by the exponential of the sum of $\beta \omega_z - h$ over its range, with~$h$ and $\beta$ positive parameters. It is known that, at first order, the polymer folds itself to a segment of optimal size $c_h n^{1/3}$ with $c_h = \pi^{2/3} h^{-1/3}$. Here we study how disorder influences finer quantities. If the random variables $\omega_z$ are i.i.d.\ with a finite second moment, we prove that the left-most point of the range is located near $-u_* n^{1/3}$, where $u_* \in [0,c_h]$ is a constant that depends only on the disorder. This contrasts with the homogeneous model (i.e.\ when $\beta=0$), where the left-most point has a random location between $-c_h n^{1/3}$ and $0$. Under an additional moment assumption, we show that the left-most point of the range is at distance $\mathcal U n^{2/9}$ from $-u_* n^{1/3}$ and the right-most point at distance $\mathcal V n^{2/9}$ from $(c_h-u_*) n^{1/3}$. Here again, $\mathcal{U}$ and $\mathcal{V}$ are constants that depend only on $\omega$.


Introduction
We study a simple symmetric random walk $(S_k)_{k \ge 0}$ on $\mathbb Z$, starting from $0$, with law $\mathbf P$. Let $\omega = (\omega_z)_{z \in \mathbb Z}$ be a collection of i.i.d. random variables with law $\mathbb P$, independent of the random walk $S$, which we call the environment or field. We also assume that $\mathbb E[\omega_0] = 0$ and $\mathbb E[\omega_0^2] = 1$. For $h > 0$, $\beta > 0$ and a given realization of the field $\omega$, we define the following Gibbs transformation of $\mathbf P$, called the (quenched) polymer measure:
$$\mathrm d P^{\omega,\beta}_{n,h}(S) := \frac{1}{Z^{\omega,\beta}_{n,h}}\, \exp\Big(\beta \sum_{z \in R_n} \omega_z - h |R_n|\Big)\, \mathrm d \mathbf P(S),$$
where $R_n = R_n(S) := \{S_0, \dots, S_n\}$ is the range of the random walk up to time $n$, and
$$Z^{\omega,\beta}_{n,h} := \mathbf E\Big[\exp\Big(\beta \sum_{z \in R_n} \omega_z - h |R_n|\Big)\Big]$$
is the partition function, so that $P^{\omega,\beta}_{n,h}$ is a (random) probability measure on the space of trajectories of length $n$. In other words, the polymer measure $P^{\omega,\beta}_{n,h}$ penalizes trajectories by their range and rewards visits to sites where the field $\omega$ takes greater values.
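For intuition, the partition function above can be computed exactly for very small $n$ by enumerating all $2^n$ walk trajectories; a minimal sketch (the helper name and the toy parameters are ours, not from the paper):

```python
import itertools
import math
import random

def partition_function(n, beta, h, omega):
    """Brute-force Z^{omega,beta}_{n,h} = E[exp(beta * sum_{z in R_n} omega_z - h*|R_n|)]
    by enumerating all 2^n nearest-neighbour paths (feasible only for small n)."""
    total = 0.0
    for steps in itertools.product((-1, 1), repeat=n):
        pos, visited = 0, {0}
        for s in steps:
            pos += s
            visited.add(pos)  # the range R_n is the set of visited sites
        total += math.exp(beta * sum(omega[z] for z in visited) - h * len(visited))
    return total / 2 ** n  # each path has probability 2^{-n} under P

rng = random.Random(0)
omega = {z: rng.gauss(0.0, 1.0) for z in range(-12, 13)}  # toy i.i.d. N(0,1) field
Z = partition_function(10, beta=0.5, h=1.0, omega=omega)
```

With $\beta = h = 0$ the weight is identically $1$, so the sum returns exactly $1$, a useful sanity check of the enumeration.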
In this setting, the disorder term $\sum_{z \in R_n} \omega_z$ is typically of order $|R_n|^{1/2}$: one can prove that $\beta \sum_{z \in R_n} \omega_z - h|R_n| \sim -h|R_n|$ for $\mathbb P$-almost all $\omega$, see [5]. Thus, disorder does not sufficiently impact the behavior of the polymer at a first approximation, as is seen in Theorem 1.1 below.

In the rest of the paper we use the standard Landau notation: as $x \to a$, we write $g(x) \sim f(x)$ if $\lim_{x \to a} g(x)/f(x) = 1$, $g(x) = o(f(x))$ if $\lim_{x \to a} g(x)/f(x) = 0$, $g(x) = O(f(x))$ if $\limsup_{x \to a} |g(x)/f(x)| < +\infty$, and $f \asymp g$ if $g(x) = O(f(x))$ and $f(x) = O(g(x))$.

We introduce the following notation: fix $\omega$ and let $\xi^\omega_n$ be $E$-valued random variables, with $(E, d)$ a metric space. For $\xi^\omega \in E$, we say that "$\xi^\omega_n$ converges in $P^{\omega,\beta}_{n,h}$-probability" to $\xi^\omega$ if $P^{\omega,\beta}_{n,h}\big(d(\xi^\omega_n, \xi^\omega) > \varepsilon\big) \to 0$ for every $\varepsilon > 0$, even though $P^{\omega,\beta}_{n,h}$ itself depends on $n$. If this holds for $\mathbb P$-almost all $\omega$, we say that $\xi_n$ converges in $P^{\omega,\beta}_{n,h}$-probability, $\mathbb P$-almost surely. In our results, we will take $(E, d)$ to be $(\mathbb R, |\cdot|)$, or the closed bounded subsets of $\mathbb R^d$ endowed with the Hausdorff distance.
Let us express the result of [5] with this notation; it states that $|R_n| \sim c_h n^{1/3}$ for $\mathbb P$-almost all realizations of $\omega$.

Theorem 1.1 ([5]). For all $h > 0$, define $c_h := (\pi^2 h^{-1})^{1/3}$. Then, for any $h, \beta > 0$, $\mathbb P$-almost surely we have the convergence
$$n^{-1/3} |R_n| \xrightarrow[n \to \infty]{} c_h \quad \text{in } P^{\omega,\beta}_{n,h}\text{-probability}. \tag{1.1}$$

The main goal of this paper is to extract further information on the polymer, notably the location of the segment where the random walk is folded, and how $|R_n|$ fluctuates at scales lower than $n^{1/3}$.
To do so, we will prove an expansion of the partition function: there are random variables $u_*, \mathcal U, \mathcal V$ and processes $X, Y$ such that a second- and third-order expansion of $\log Z^{\omega,\beta}_{n,h}$ holds $\mathbb P$-a.s., with the $o(1)$ error going to $0$ in $P^{\omega,\beta}_{n,h}$-probability. The random variables are given by variational problems in Theorems 1.3 and 1.6, which are the main results of this paper. Thanks to this expansion, we also get a precise description of $R_n$ at scale $n^{2/9}$ under $P^{\omega,\beta}_{n,h}$, namely
$$R_n \approx \big[-u_* n^{1/3} + \mathcal U n^{2/9},\ (c_h - u_*) n^{1/3} + \mathcal V n^{2/9}\big].$$

About the homogeneous setting
Since we are working in dimension one, we make use of the fact that the range is entirely determined by the position of its extremal points: $R_n$ is exactly the segment $[M^-_n, M^+_n]$, where $M^-_n := \min_{0 \le k \le n} S_k$ and $M^+_n := \max_{0 \le k \le n} S_k$. We also adopt the following notation:
$$T_n := |R_n| = M^+_n - M^-_n + 1, \qquad T^*_n := c_h n^{1/3}. \tag{1.2}$$
Hence, $T_n$ is the size of the range and $T^*_n$ is the typical size of the range at scale $n^{1/3}$ under $P^{\omega,\beta}_{n,h}$ that appears in (1.1).
In the homogeneous setting, that is when $\beta = 0$, it is proven in [7] that the location of the left-most point is random (on the scale $n^{1/3}$) with a density proportional to $\sin(\pi u / c_h)$. As far as the size of the range $T_n$ is concerned, it is shown to have Gaussian fluctuations. In fact, [7] treats the case of a parameter $h = h_n$ that may depend on the length of the polymer: in this case, fluctuations vanish when the penalty strength $h_n$ is too high. We state the full result for the sake of completeness.

Theorem 1.2 ([7, Theorem 1.1]). Recall the notation of (1.2) and replace $h$ by $h_n$ in the definition of $T^*_n$. Then for $\beta = 0$ and all $\omega$, we have the following results:
• Assume that $h_n \ge n^{-1/2} (\log n)^{3/2}$ and $\lim_{n \to \infty} n^{-1/4} h_n = 0$. Let $a_n := \frac{1}{\sqrt 3}(\cdots)$, which is such that $\lim_{n \to \infty} a_n = +\infty$. Then for any $r < s$ and any $0 \le a < b \le 1$, the pair $\big(a_n^{-1}(T_n - T^*_n),\ -M^-_n / T_n\big)$ falls in $[r,s] \times [a,b]$ with probability converging to
$$\frac{1}{\sqrt{2\pi}} \int_r^s e^{-u^2/2}\, \mathrm d u \,\cdot\, \frac{\pi}{2} \int_a^b \sin(\pi v)\, \mathrm d v.$$
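The sine-shaped limiting density mentioned above can be sampled exactly by inverting its CDF; a short illustration (function name ours; we assume the density is normalized to $\frac{\pi}{2 c_h}\sin(\pi u/c_h)$ on $[0, c_h]$):

```python
import math

def sample_leftmost_location(q, c_h):
    """Inverse-CDF sampling of the density proportional to sin(pi*u/c_h) on
    [0, c_h] (limiting left-most-point location in the homogeneous model).
    q in [0, 1] is a uniform sample; the CDF is (1 - cos(pi*u/c_h)) / 2."""
    return (c_h / math.pi) * math.acos(1.0 - 2.0 * q)

c_h = (math.pi ** 2 / 1.0) ** (1.0 / 3.0)  # c_h = (pi^2/h)^{1/3} with h = 1
u_median = sample_leftmost_location(0.5, c_h)  # median is c_h/2 by symmetry
```

The closed-form inverse exists because $\int_0^u \sin(\pi v/c_h)\,\mathrm dv$ is an explicit cosine, so no rejection step is needed.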
We will see that the disordered model displays a very different behavior: the locations of the left-most and right-most points are $\mathbb P$-deterministic, in the sense that they are completely determined by the disorder field $\omega$ (at least at the first two orders).

First convergence result
Akin to [5], we define the one-sided partial sums $\Sigma^-_j$ and $\Sigma^+_j$ of the field ($\Sigma^-$ over negative sites, $\Sigma^+$ over non-negative sites), for any $j \ge 0$ for which the sum is not empty. Using Skorokhod's embedding theorem (see [24, Chapter 7.2] and Theorem 4.1 below), we can define on the same probability space a coupling $\omega = \omega(n)$ of $\omega$ and two independent standard Brownian motions $X^{(1)}$ and $X^{(2)}$ such that for each $n$, $\omega(n)$ has the same law as the environment $\omega$ and
$$\big(n^{-1/6} \Sigma^{\pm}_{\lfloor v n^{1/3} \rfloor}\big)_{v \ge 0} \xrightarrow[n \to \infty]{} \big(X^{(1),(2)}_v(\omega)\big)_{v \ge 0}$$
in the Skorokhod metric on the space of all càdlàg real functions. With an abuse of notation, we will still denote by $\omega$ this coupling, while keeping in mind that the field now depends on $n$.
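As a quick numerical illustration of the $n^{-1/6}$ scaling behind this coupling, assuming $\Sigma^\pm_j$ denote one-sided partial sums of the field, the rescaled sum at index $\lfloor v n^{1/3} \rfloor$ has variance approximately $v$, matching $X^{(1),(2)}_v$; a sketch (parameters and helper name ours):

```python
import math
import random

def rescaled_sum(v, n, rng):
    """n^{-1/6} * (partial sum of floor(v*n^{1/3}) i.i.d. standard variables):
    since (v*n^{1/3})^{1/2} = v^{1/2} * n^{1/6}, this approximates X_v, Var = v."""
    j = int(v * n ** (1.0 / 3.0))
    return sum(rng.gauss(0.0, 1.0) for _ in range(j)) / n ** (1.0 / 6.0)

rng = random.Random(3)
n, v = 10 ** 4, 1.5
vals = [rescaled_sum(v, n, rng) for _ in range(500)]
var = sum(x * x for x in vals) / len(vals)  # empirical variance (mean is 0)
```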
Our first result improves estimates on the asymptotic behavior of Z ω,β n,h and (M − n , M + n ).
Theorem 1.3. For any $h, \beta > 0$, we have the following $\mathbb P$-a.s. convergence
$$n^{-1/6}\Big(\log Z^{\omega,\beta}_{n,h} + \tfrac32 h T^*_n\Big) \xrightarrow[n \to \infty]{} \beta \sup_{u \in [0, c_h]} \big(X^{(1)}_u + X^{(2)}_{c_h - u}\big), \tag{1.3}$$
where $X^{(1)}$ and $X^{(2)}$ are the two independent standard Brownian motions defined above. Furthermore, $u_* := \arg\max_{u \in [0, c_h]} \big(X^{(1)}_u + X^{(2)}_{c_h - u}\big)$ is $\mathbb P$-a.s. unique and
$$n^{-1/3}\big(M^-_n, M^+_n\big) \xrightarrow[n \to \infty]{} \big(-u_*,\ c_h - u_*\big) \quad \text{in } P^{\omega,\beta}_{n,h}\text{-probability, } \mathbb P\text{-a.s.} \tag{1.4}$$

Comment. Theorem 1.3 still holds if the $(\omega_z)$ are i.i.d. and in the domain of attraction of an $\alpha$-stable law with $\alpha \in (1,2)$, only replacing the Brownian motions $X^{(i)}$ by Lévy processes as in [5] and $n^{1/6}$ by $n^{1/(3\alpha)}$: we refer to Theorem A.1 and its proof in Appendix A. As most of the work in this paper requires stronger assumptions on the field $\omega$, we will not dwell further on this possibility and focus on the case where $\mathbb E[\omega_0^2] = 1$.

Heuristic.
Intuitively, the result of Theorem 1.3 is a consequence of the following reasoning: if we assume that the optimal size is $T^*_n$ (at a first approximation), the location of the polymer should be around the points $(x_n, y_n) \in \mathbb N^2$ such that $x_n + y_n \approx T^*_n$ and $\Sigma^-_{x_n} + \Sigma^+_{y_n}$ is maximized. Translating in terms of the processes $X^{(1)}, X^{(2)}$, we want to maximize $n^{-1/6}(\Sigma^-_{x_n} + \Sigma^+_{y_n})$, which is "close" to $X^{(1)}_{x_n n^{-1/3}} + X^{(2)}_{y_n n^{-1/3}}$. Since $x_n + y_n \sim T^*_n$ we have $y_n n^{-1/3} \sim c_h - x_n n^{-1/3}$, and we want to pick $x_n n^{-1/3}$ to maximize $u \mapsto X^{(1)}_u + X^{(2)}_{c_h - u}$.

Second order convergence result
To ease the notation, we will denote $X_u := X^{(1)}_u + X^{(2)}_{c_h - u}$ for $u \in [0, c_h]$; this process has the law of $\sqrt 2\, W_u + X^{(2)}_{c_h}$, where $W$ is a standard Brownian motion. Hence, the supremum on $[0, c_h]$ of $X_u$ is almost surely finite, attained at a unique $u_*$ which follows the arcsine law on $[0, c_h]$.
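The arcsine behavior of this maximizer can be probed numerically by discretizing the two Brownian motions on a grid; a rough Monte Carlo sketch (grid size, sample count and seed are arbitrary choices of ours):

```python
import math
import random

def argmax_location(c_h, m, rng):
    """Simulate X_u = X1_u + X2_{c_h - u} on a grid with m steps and return the
    grid argmax u*; rescaled by c_h, it approximates the arcsine law."""
    du = c_h / m
    x1, x2 = [0.0], [0.0]
    for _ in range(m):
        x1.append(x1[-1] + rng.gauss(0.0, math.sqrt(du)))
        x2.append(x2[-1] + rng.gauss(0.0, math.sqrt(du)))
    # X at grid point k: X1 at time k*du plus X2 at time (m-k)*du
    vals = [x1[k] + x2[m - k] for k in range(m + 1)]
    return du * max(range(m + 1), key=vals.__getitem__)

rng = random.Random(1)
c_h = math.pi ** (2.0 / 3.0)  # h = 1
samples = [argmax_location(c_h, 200, rng) for _ in range(2000)]
```

The arcsine law pushes mass toward the endpoints of $[0, c_h]$, so the empirical histogram should be U-shaped rather than flat.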
In order to extract more information on the typical behavior of the polymer, we need to go deeper into the expansion of $\log Z^{\omega,\beta}_{n,h}$. To do so, we factorize $Z^{\omega,\beta}_{n,h}$ by $e^{\beta n^{1/6} X_{u_*}}$ and we study the behavior of $\log Z^{\omega,\beta}_{n,h} + \frac32 h c_h n^{1/3} - \beta n^{1/6} X_{u_*}$, which is related to the behavior of $X$ near $u_*$. Studying Wiener processes near their maximum leads us to study both the three-dimensional Bessel process and the Brownian meander (see Appendix B).
Proposition 1.4. Conditional on $u_*$, there exist two independent standard Brownian meanders $(\mathcal M^\sigma)_{|\sigma| = 1}$ such that for any $u \in [0, c_h]$,
$$X_{u_*} - X_u = \sqrt{2 u_*}\, \mathcal M^{-1}\Big(\tfrac{u_* - u}{u_*}\Big)\, \mathbf 1_{\{u < u_*\}} + \sqrt{2 (c_h - u_*)}\, \mathcal M^{+1}\Big(\tfrac{u - u_*}{c_h - u_*}\Big)\, \mathbf 1_{\{u_* < u\}}. \tag{1.5}$$

Proof. Recall that $X$ has the law of $\sqrt 2\, W + X^{(2)}_{c_h}$, and define $M^<(t) := X_{u_*} - X_{u_* - t}$ and $M^>(t) := X_{u_*} - X_{u_* + t}$ for $t \ge 0$. By Proposition B.1, conditional on the value of $u_*$, the processes $M^<$ and $M^>$ are two independent Brownian meanders, with respective durations $u_*$ and $c_h - u_*$. By the scaling property of the Brownian meander, both $M^<$ and $M^>$ can be obtained from two independent standard Brownian meanders $\mathcal M^\sigma$, $\sigma \in \{-1, +1\}$. Some other technical results about the meander are presented in Appendix B.

We also define the following process, which we call the two-sided three-dimensional Bessel (BES$^3$) process.

Definition 1.1. We call two-sided three-dimensional Bessel process $B$ the concatenation of two independent three-dimensional Bessel processes $B^-$ and $B^+$. Namely, for all $s \in \mathbb R$,
$$B_s := B^+_s\, \mathbf 1_{\{s \ge 0\}} + B^-_{-s}\, \mathbf 1_{\{s < 0\}}.$$

Additionally, we will use a coupling between the pair $(X^{(1)}, X^{(2)})$ seen from $u_*$ and a two-sided BES$^3$ process together with a Brownian motion. This will allow us to obtain $\mathbb P$-almost sure results instead of convergences in distribution; in particular we obtain trajectorial results that depend on the realization of the environment. The proof is postponed to Appendix C and relies on the path decomposition of usual Brownian-related processes.
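A discrete version of the two-sided BES$^3$ process of Definition 1.1 is easy to simulate by concatenating the norms of two independent three-dimensional Brownian motions; a sketch (step sizes and seed are arbitrary choices of ours):

```python
import math
import random

def bes3_path(m, dt, rng):
    """A discrete three-dimensional Bessel path: the Euclidean norm of a
    3D Brownian motion sampled at times k*dt, k = 0..m."""
    x = [0.0, 0.0, 0.0]
    path = [0.0]
    for _ in range(m):
        x = [c + rng.gauss(0.0, math.sqrt(dt)) for c in x]
        path.append(math.sqrt(sum(c * c for c in x)))
    return path

def two_sided_bes3(m, dt, rng):
    """Concatenate two independent BES3 paths B^- and B^+ as in Definition 1.1:
    B_s = B^+_s for s >= 0 and B_s = B^-_{-s} for s < 0."""
    minus = bes3_path(m, dt, rng)
    plus = bes3_path(m, dt, rng)
    return list(reversed(minus)) + plus[1:]  # indices -m..m, sharing B_0 = 0

rng = random.Random(7)
B = two_sided_bes3(100, 0.01, rng)  # grid of 201 points on [-1, 1]
```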
Proposition 1.5. Let $X_u = X^{(1)}_u + X^{(2)}_{c_h - u}$. Then, conditionally on $u_*$, one can construct a coupling of $(X^{(1)}, X^{(2)})$ with $B$ a two-sided BES$^3$ and $Y$ a two-sided standard Brownian motion such that, almost surely, there is a $\delta_0 = \delta_0(\omega) > 0$ with, for all $|u| \le \delta_0$,
$$X_{u_*} - X_{u_* + u} = \sqrt 2\, \chi B_u,$$
where we have set $\chi = \chi(u, \omega)$, an explicit factor depending only on the sign of $u$ (and on $u_*$).

Comment. It should be noted that $\chi$ actually only depends on the sign of $u$, which means that the process $\chi B$ has the Brownian scaling invariance property. This will be used in Section 3.2 to get a suitable coupling.
Theorem 1.6. Suppose $\mathbb E[|\omega_0|^{3+\eta}] < \infty$ for some $\eta > 0$. With the coupling of Proposition 1.5, we have the $\mathbb P$-a.s. convergence (1.6), where the maximizer $(\mathcal U, \mathcal V)$ of the corresponding variational problem is $\mathbb P$-a.s. unique. In particular, we have
$$n^{-2/9}\big(M^-_n + u_* n^{1/3},\ M^+_n - (c_h - u_*) n^{1/3}\big) \xrightarrow[n \to \infty]{} (\mathcal U, \mathcal V) \quad \text{in } P^{\omega,\beta}_{n,h}\text{-probability, } \mathbb P\text{-a.s.}$$

Comment. We should be able to obtain a statement assuming only that $\mathbb E[|\omega_0|^{2+\eta}] < \infty$ for some positive $\eta$. The statement is a bit more involved, as we need to use a different coupling between $\omega$ and $X^{(1)}, X^{(2)}$. For any $K > 1$, we write $Z^{\le K}_{n,\omega}$ for the restricted partition function. In Section 4.2, we are able to prove the following convergence: writing $\mathcal W_2$ for the right-hand side of (1.6), then $\mathbb P$-almost surely $n^{-1/9} \log Z^{\le K}_{n,\omega}$ converges to the corresponding limit. However, we are not able to prove that $\lim_{K \to +\infty} \lim_{n \to \infty} Z^{\le K}_{n,\omega} / Z^{\omega,\beta}_{n,h} = 1$. We give in Section 4.2 some heuristics for why this second convergence should be true, and why our method fails to prove it.

Comments on the results, outline of the paper
Expansion of the log-partition function. One may think of our results as an expansion of $\log Z^{\omega,\beta}_{n,h}$ up to several orders, gaining each time some information on the location of the endpoints of the range. A way to formulate such a result is, for some real numbers $\alpha_1 > \cdots > \alpha_p \ge 0$, to define the following sequence of free energies, which we may call the $k$-th order free energy at scale $\alpha_k$:
$$f^{(k)}(h, \beta) := \lim_{n \to \infty} n^{-\alpha_k}\Big(\log Z^{\omega,\beta}_{n,h} - \sum_{i=1}^{k-1} n^{\alpha_i} f^{(i)}(h, \beta)\Big),$$
when these quantities exist and are in $\mathbb R \setminus \{0\}$. Theorems 1.1, 1.3 and 1.6 can be summarized in the following statement: assuming that $\mathbb E[|\omega_0|^{3+\eta}] < \infty$ for some positive $\eta$, then letting $\alpha_k = \frac{1}{3k}$ for $k \in \{1, 2, 3\}$, the first three free energies exist $\mathbb P$-a.s.
Note that the first two orders of $\log Z^{\omega,\beta}_{n,h}$, namely $f^{(1)}$ and $f^{(2)}$, are respectively called the free energy and the surface energy.

Coupling and almost sure results
Observe that we combine two different couplings that serve different purposes in proving our results:
• A coupling, for a given size $n$, between the environment $\omega$ and two Brownian motions $X^{(1)}$ and $X^{(2)}$. This coupling allows for the almost sure convergence in Theorem 1.3, and the assumption $\mathbb E[|\omega_0|^{3+\eta}] < +\infty$ is used to have a good enough control on the coupling. This will be detailed in Section 4.
• A coupling between $(X^{(1)}, X^{(2)}, u_*)$ and $(B, Y)$ to study the behavior of the Brownian motions $X^{(1)}$ and $X^{(2)}$ near $u_*$. This allows us to get the almost sure convergence of Theorem 1.6. This is the object of Proposition 1.5 and Appendix C.
We explain how these two couplings combine to yield the $\mathbb P$-almost sure results of Theorems 1.3 and 1.6 in Section 3.2. The main idea is to make the environment $\omega$ and the processes $X^{(1)}, X^{(2)}$ depend on $n$ in order to "fix" what we see uniformly in $n$ large enough.
Local limit conjecture

In Section 5 we study a simplified model where the random walk is constrained to be non-negative. By restricting it so, the processes involved are less complex, as they depend on only one variable (which represents the highest point of the polymer), which simplifies the calculations. The idea is to give some insight into what happens when studying $\log Z^{\omega,\beta}_{n,h} - \sum_{i=1}^{3} n^{\alpha_i} f^{(i)}(h, \beta)$, especially at the scale of the 4-th order free energy. The environment is taken to be Gaussian in order to get the coupling of $n^{-1/6} \Sigma^\pm_{z n^{1/3}}$ with no coupling error (otherwise the result may not be the same). We give in Section 5 a detailed justification for the following conjecture, which is a form of local limit theorem.
The main obstacle to proving this conjecture is that $\mathcal W$ is given by zooming in on the process $\tilde Y_{u,v} := Y_{\mathcal U, \mathcal V} - Y_{u,v}$ as $(u, v)$ gets close to $(\mathcal U, \mathcal V)$. With our methods, we would need to know some properties of $\tilde Y$, which is a seemingly complex process due to the already nontrivial nature of $Y$.
In Section 5, we study the model with a fixed minimum, that is, we replace the random walk by the random walk conditioned to stay positive. In this case, the process $Y$ becomes a Brownian motion with parabolic drift, which allows us to conjecture the law of the corresponding $\tilde Y$ as well as a local limit theorem.

Related works
The case of varying parameters $\beta, h$. As mentioned above, the present model has previously been studied in [5], with the difference that the parameters $\beta, h$ were allowed to depend on $n$, the size of the polymer. More precisely, the polymer measure considered was defined as above with $n$-dependent parameters $(\beta_n, h_n)$, for arbitrary $\hat h, \hat\beta \in \mathbb R$. The authors in [5] obtained $\mathbb P$-almost sure convergences of $n^{-\lambda} \log Z^{\omega,\beta_n}_{n,h_n}$ for some suitable $\lambda \in \mathbb R$, which corresponds to a first order expansion of the log-partition function. Afterwards, asymptotics for $\mathbb E\, E^{\omega,\beta_n}_{n,h_n}[|R_n|]$ as well as scaling limits for $(M^-_n, M^+_n)$ were established and displayed a wide variety of phases. In addition, the authors also investigated the case where the $(\omega_z)$ are i.i.d. and in the domain of attraction of an $\alpha$-stable law with $\alpha \in (0,1) \cup (1,2)$, unveiling an even richer phase diagram.
Theorems 1.3 and 1.6 confirm the conjecture of Comment 4 of [5] that, for a typical configuration $\omega$, the fluctuations of the log-partition function and of $n^{-1/3}(M^-_n, M^+_n)$ are not $P$-random for fixed $h, \beta > 0$. With our methods, it should be possible to extend our results to account for size-dependent $h = h_n$, $\beta = \beta_n$, with similar results for "reasonable" $h_n, \beta_n$ (meaning with sufficiently slow growth/decay).
Link with the random walk among Bernoulli obstacles. Take a Bernoulli site percolation with parameter $p$, meaning a collection $O = \{z \in \mathbb Z^d,\ \eta_z = 1\}$ where the $\eta_z$ are i.i.d. Bernoulli variables with parameter $p$, and write $\mathbb P = \mathcal B(p)^{\otimes \mathbb Z}$ for its law on $\mathbb Z$. Consider the random walk starting at $0$ and let $\tau$ denote the time it first encounters $O$ (called the set of obstacles): one is interested in the asymptotic behavior of the survival probability $\mathbf P(\tau > n)$ as $n \to \infty$, and in the behavior of the random walk conditionally on $\{\tau > n\}$, see for example [13] and references therein. The annealed survival probability is given by
$$\mathbb E\, \mathbf P(\tau > n) = \mathbf E\big[(1-p)^{|R_n|}\big],$$
and we observe that this is exactly $Z^{\omega,0}_{n,h_p}$ with $h_p = -\log(1-p)$. Thus, for $\beta = 0$, our model can be seen as an annealed version of the random walk among Bernoulli obstacles with common parameter $p = 1 - e^{-h}$.
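The identity $\mathbb E\,\mathbf P(\tau > n) = \mathbf E[(1-p)^{|R_n|}]$ behind this correspondence (each site of the range is obstacle-free with probability $1-p$, independently) can be checked numerically for tiny $n$, comparing exact path enumeration with Monte Carlo over obstacle fields; helper names and parameters are ours:

```python
import itertools
import random

def ranges_of_paths(n):
    """Ranges R_n of all 2^n nearest-neighbour paths of length n."""
    out = []
    for steps in itertools.product((-1, 1), repeat=n):
        pos, r = 0, {0}
        for s in steps:
            pos += s
            r.add(pos)
        out.append(frozenset(r))
    return out

def annealed_survival_exact(n, p):
    """E[(1-p)^{|R_n|}], i.e. Z^{omega,0}_{n,h_p} with h_p = -log(1-p)."""
    rs = ranges_of_paths(n)
    return sum((1.0 - p) ** len(r) for r in rs) / len(rs)

def annealed_survival_mc(n, p, trials, rng):
    """Average over sampled Bernoulli obstacle fields O of the probability
    that the walk's range avoids O (the range stays in [-n, n])."""
    rs = ranges_of_paths(n)
    acc = 0.0
    for _ in range(trials):
        O = {z for z in range(-n, n + 1) if rng.random() < p}
        acc += sum(1 for r in rs if not (r & O)) / len(rs)
    return acc / trials
```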
If we push the analogy a bit further and assume $\beta \omega_z - h \le 0$ for all $z \in \mathbb Z$, we can see $Z^{\omega,\beta}_{n,h}$ as the annealed survival probability of the random walk among obstacles $O^\omega = \{z \in \mathbb Z^d,\ \eta^\omega_z = 1\}$, where the $\eta^\omega_z$ are independent Bernoulli variables with random parameters $p^\omega_z = 1 - e^{\beta \omega_z - h}$. The averaging is done over the random walk (with law $\mathbf P$) and the Bernoulli variables (with law $P^\omega = \bigotimes_{z \in \mathbb Z} \mathcal B(p^\omega_z)$), while the parameters $p^\omega_z = 1 - e^{\beta \omega_z - h}$ (with law $\mathbb P$) are quenched.
Link with the directed polymer model. Another famous model is given by considering a doubly indexed field $(\omega_{i,z})_{(i,z) \in \mathbb N \times \mathbb Z}$ and the polymer measure that rewards the walk with $\omega_{i, S_i}$ at each step $i$. This is known as the directed polymer model (in contrast with our non-directed model) and has been the object of intense activity over the past decades, see [10] for an overview. Let us simply mention that the partition function solves (in a weak sense) a discretized version of the Stochastic Heat Equation (SHE) with multiplicative space-time noise, $\partial_t u = \Delta u + \beta \xi \cdot u$. Hence, the convergence of the partition function under a proper scaling $\beta = \beta(n)$, dubbed intermediate disorder scaling, has raised particular interest in recent years: see [1,8] for the case of dimension 1 and [9] for the case of dimension 2, where this approach enabled the authors to give a notion of solution to the SHE; see also [4] for the case of a heavy-tailed noise.
The main difference with our model is how the disorder $\omega$ enters the polymer measure. The directed polymer collects a new reward/penalty $\omega_{i,z}$ at each step it takes, whereas in our model such an event only happens upon reaching a new site of $\mathbb Z$, in some sense "consuming" $\omega_z$ when landing on $z$ for the first time.
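Because the directed model collects rewards step by step, its partition function factorizes over time and can be computed exactly by a transfer-matrix recursion, in contrast with the range-dependent weight of our model; a small sketch (function name and toy parameters ours):

```python
import math
import random

def directed_partition(omega, beta):
    """Transfer-matrix computation of the directed-polymer partition function
    Z_n = E[exp(beta * sum_{i=1}^n omega[i-1][S_i])] for the simple random walk
    started at 0; omega[i-1] maps sites reachable at step i to rewards."""
    weights = {0: 1.0}  # weights[z] = E[exp(...) ; S_i = z]
    for i in range(1, len(omega) + 1):
        new = {}
        for z, w in weights.items():
            for dz in (-1, 1):  # each step is +-1 with probability 1/2
                z2 = z + dz
                new[z2] = new.get(z2, 0.0) + 0.5 * w * math.exp(beta * omega[i - 1][z2])
        weights = new
    return sum(weights.values())

rng = random.Random(2)
n = 20
omega = [{z: rng.gauss(0.0, 1.0) for z in range(-i - 1, i + 2)} for i in range(n)]
Z = directed_partition(omega, 0.5)
```

With $\beta = 0$ every weight is $1$ and the recursion just propagates the walk's probability mass, so the result is exactly $1$.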

Outline of the paper.
This paper can be split into three parts. The first part, in Section 2, consists in the proof of Theorem 1.3. The second and main part focuses on the proof of Theorem 1.6. This proof is split into Section 3, where $\omega$ is assumed to be Gaussian, and Section 4, where we explain how to get the general statement thanks to a coupling. A third part, in Section 5, studies the simplified model where the random walk is constrained to be non-negative. Precise results under some technical assumption help us formulate the conjectures in (1.10) and (1.11).
Finally, we prove in Appendix A the generalization of Theorem 1.3 to the case when $\omega$ does not have a finite second moment, as announced. We also state in Appendix B some useful properties of the Brownian meander that we use in our proofs. In Appendix C we detail a way to couple Brownian meanders with a two-sided three-dimensional Bessel process so that they are equal near $0$ (i.e. we prove Proposition 1.5).

Second order expansion and optimal position
We extensively use the following notation: for a given event $A$ (which may depend on $\omega$), we write the partition function restricted to $A$ as
$$Z^{\omega,\beta}_{n,h}(A) := \mathbf E\Big[\exp\Big(\beta \sum_{z \in R_n} \omega_z - h |R_n|\Big) \mathbf 1_A\Big].$$
This section consists in the proof of Theorem 1.3 and is divided into two steps:
• We first make use of a coarse-graining approach at scale $\delta n^{1/3}$ to prove the convergence of the rescaled $\log Z^{\omega,\beta}_{n,h} + \frac32 h T^*_n$. At the same time, we locate the main contribution as coming from trajectories whose left-most point is around $-u_* n^{1/3}$, proving (1.3).
• We then prove that $\mathbb P$-a.s., $n^{-1/3} M^-_n$ converges in $P^{\omega,\beta}_{n,h}$-probability to $-u_*$, using the previous step and the fact that $P^{\omega,\beta}_{n,h}(A) = Z^{\omega,\beta}_{n,h}(A) / Z^{\omega,\beta}_{n,h}$. Since we also have the result of (1.1), we deduce (1.4) thanks to Slutsky's lemma, as $M^-_n$, $M^+_n$ and $T_n$ are defined on the same probability space.

A rewriting of the partition function
Theorem 1.1 implies that there is a vanishing sequence $(\varepsilon_n)_{n \ge 0}$ such that $\mathbb P$-almost surely, (2.1) holds; in particular, we may restrict our study to trajectories with $|\Delta^{x,y}_n| \le \varepsilon_n T^*_n$, where $\Delta^{x,y}_n := x + y - T^*_n$. Gambler's ruin formulae derived from [15, Chap. XIV] can be used to compute sharp asymptotics for $\mathbf P(R_n = [-x, y])$, see [7, Theorem 1.4]. In particular, with the function $\Theta_n(x, y)$ defined for $x + y = T$, a Taylor expansion gives (2.4), in which the $O((T^*_n)^{-1})$ is deterministic and uniform in $x, y$ such that $|\Delta^{x,y}_n| \le \varepsilon_n T^*_n$, and the $o(1)$ is deterministic as $n \to +\infty$. Then, writing $\varphi_n(T) := hT + \frac{n \pi^2}{2 T^2}$, we obtain (2.5). We also easily check that $T^*_n = (\pi^2 n / h)^{1/3}$ is the minimizer of $\varphi_n$, and that
$$\varphi_n(T^*_n) = \tfrac32 h T^*_n, \qquad \varphi''_n(T) = \frac{3 n \pi^2}{T^4}, \qquad \varphi'''_n(T) = -\frac{12 n \pi^2}{T^5}.$$
Thus, with a Taylor expansion, we have for $x, y \ge 0$ satisfying $|\Delta^{x,y}_n| \le \varepsilon_n T^*_n$,
$$\varphi_n(x + y) = \varphi_n(T^*_n) + \tfrac12 \varphi''_n(T^*_n) (\Delta^{x,y}_n)^2 + O\big(n^{-2/3} (\Delta^{x,y}_n)^3\big). \tag{2.6}$$
Assembling (2.5) and (2.6) with (2.4) yields (2.7), and we also get in particular that (2.8) holds $\mathbb P$-almost surely. More generally, for any event $A \subseteq \{|\Delta_n| \le \varepsilon_n T_n\}$, with the same considerations, we can write $Z^{\omega,\beta}_{n,h}(A)$ as the sum in (2.7) restricted to trajectories satisfying $A$.
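The tradeoff encoded in $\varphi_n(T) = hT + \frac{n\pi^2}{2T^2}$ (range penalty versus confinement cost) can be checked numerically: the minimizer is $T^*_n = (\pi^2 n / h)^{1/3} = c_h n^{1/3}$ and $\varphi_n(T^*_n) = \frac32 h T^*_n$. A sketch (parameter values ours):

```python
import math

def phi(T, n, h):
    """phi_n(T) = h*T + n*pi^2/(2*T^2): linear range penalty plus the
    confinement cost of folding the walk into a segment of size T."""
    return h * T + n * math.pi ** 2 / (2.0 * T ** 2)

def T_star(n, h):
    # phi_n'(T) = h - n*pi^2/T^3 vanishes at T = (pi^2*n/h)^{1/3} = c_h*n^{1/3}
    return (math.pi ** 2 * n / h) ** (1.0 / 3.0)

n, h = 10 ** 6, 2.0
Ts = T_star(n, h)
```

At the minimizer, $n\pi^2 = h (T^*_n)^3$, so the confinement term equals $\frac12 h T^*_n$ and the total is $\frac32 h T^*_n$, which is exactly the leading-order free energy appearing in the expansion.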

Convergence of the log partition function
In order to lighten notation, we always omit integer parts in the following.
Lemma 2.1. For any integers $k_1, k_2$ and any $\delta > 0$, we have $\mathbb P$-almost surely the convergence (2.10) of $n^{-1/6} \log Z^{\omega,\beta}_{n,h}(k_1, k_2, \delta)$.

Let us use this lemma to conclude the proof of the convergence (1.3). Since the sum in (2.10) has at most $2 c_h / \delta$ terms, we easily get the corresponding upper bound. Dividing by $\beta n^{1/6}$ and taking the limit $n \to \infty$, Lemma 2.1 yields the limsup bound. We write $u = k_1 \delta$ and $v = k_2 \delta$, belonging to the finite set $\mathcal U_\delta$; letting $\delta \to 0$, for the last identity we use the continuity of $X^{(1)}$ and $X^{(2)}$. The same goes for the liminf.

Proof of Lemma 2.1. The proof is inspired by that of Lemma 5.1 in [5]. Recall the definition (2.11) of $Z^{\omega,\beta}_{n,h}(k_1, k_2, \delta)$ and note that for $k_1 \delta n^{1/3} \le x < (k_1 + 1) \delta n^{1/3}$ and $k_2 \delta n^{1/3} \le y < (k_2 + 1) \delta n^{1/3}$, the disorder term is controlled up to the error term $R^\delta_n$, defined for $u, v \ge 0$. Using the coupling $\omega$ and Lemma A.5 of [5] (for Lévy processes), $\mathbb P$-a.s., for all $\varepsilon > 0$ and all $n$ large enough (how large depends on $\varepsilon, \delta, \omega$), the error is controlled uniformly in $u$ and $v$, since $\mathcal U_\delta$ is a finite set. Thus, letting $n \to \infty$ then $\varepsilon \to 0$, we obtain the upper bound $\mathbb P$-almost surely, in which we recall the definition (2.12) of $W^\pm(u, v, \delta)$.
On the other hand, since $Z^{\omega,\beta}_{n,h}(k_1, k_2, \delta)$ is a sum of non-negative terms, we get a simple lower bound by restricting to configurations with almost no fluctuation around $T^*_n$, in which the supremum is taken over the $(x, y)$ that satisfy the criteria of $Z^{\omega,\beta}_{n,h}(k_1, k_2, \delta)$, see (2.11). In the above, the $o(1)$ is deterministic and comes from the contribution of $n^{-1/6} \log \sin\big(\frac{x \pi}{x + y}\big)$; in the case where $k_1 = 0$, we restrict the supremum to additionally having $x \ne 0$, so that we always have $\sin\big(\frac{x \pi}{x + y}\big) \ge \frac{c}{x + y} \sim \frac{c}{n^{1/3}}$. After the exact same calculations as above, we get the lower bound.

Proof. Recall that $X$ has the same law as $\sqrt 2\, W + X^{(2)}_{c_h}$.

Path properties under the polymer measure
Proof of Theorem 1.3-(1.4). The proof essentially reduces to the following lemma.
By Slutsky's lemma (for a fixed $\omega$ in the set of $\omega$'s for which both convergences hold), Lemma 2.3 combined with (1.1) readily implies that (1.4) holds $\mathbb P$-a.s., in $P^{\omega,\beta}_{n,h}$-probability. Note that Slutsky's lemma can be used on $M^+_n, M^-_n, T_n$ since they are all defined on the same probability space.
Proof of Lemma 2.3. The proof is analogous to what is done in [5]. Define the set of locations at distance at least $\varepsilon$ from $-u_*$ (at scale $n^{1/3}$). We shall prove that for almost all $\omega$, its $P^{\omega,\beta}_{n,h}$-probability vanishes. For this, we denote by $A^{\varepsilon,\varepsilon'}_n$ the corresponding event; we only need to prove that $\lim_{n \to \infty} Z^{\omega,\beta}_{n,h}(A^{\varepsilon,\varepsilon'}_n) / Z^{\omega,\beta}_{n,h} = 0$. We apply the same decomposition we used in the proof of Theorem 1.3; by uniqueness of the supremum, we have thus proved that, $\mathbb P$-a.s., $n^{-1/3} M^-_n \to -u_*$ in $P^{\omega,\beta}_{n,h}$-probability.
Proof of Theorem 1.6 for a Gaussian environment

In this section we prove Theorem 1.6 under the assumption that $\omega_0$ has a Gaussian distribution.
We take full advantage of the fact that in this case, the coupling with the Brownian motions $X^{(1)}, X^{(2)}$ is just an identity: it thus creates no coupling error and allows us to work directly on these processes. The proof still requires some heavy calculations, as we must first find the relevant trajectories in the factorized log-partition function.
Going forward, we take the following setting: the random variables $\omega_z$ are i.i.d. with normal distribution $\mathcal N(0, 1)$, and $X^{(1)}, X^{(2)}$ are standard Brownian motions satisfying the relation (3.1). We will adapt the following proof to a general environment in Section 4 by controlling the error term due to the coupling.
We define $\tilde Z^{\omega,\beta}_{n,h} := e^{\frac32 h T^*_n - \beta n^{1/6} X_{u_*}}\, Z^{\omega,\beta}_{n,h}$, so that (1.6) can be rewritten as a statement regarding the convergence of $n^{-1/9} \log \tilde Z^{\omega,\beta}_{n,h}$. Here are the four steps of the proof:
• We first rewrite $\tilde Z^{\omega,\beta}_{n,h}$ to make $X_{x n^{-1/3}} - X_{u_*}$ appear. Having this negative quantity makes it easier to find the relevant trajectories. Indeed, when $|X_{|M^-_n| n^{-1/3}} - X_{u_*}|$ is too large for a given trajectory, the relative contribution of this trajectory to the partition function goes exponentially to $0$. This means that this configuration has a low $P^{\omega,\beta}_{n,h}$-probability.
• We prove the $\mathbb P$-almost sure convergence of $n^{-1/9} \log \tilde Z^{\omega,\beta}_{n,h}$, restricted to the event $A^{K,L}_{n,\omega}$, towards a positive value. It consists again of a coarse-graining approach, now with components estimated at scale $n^{2/9}$. This leads to defining $(\mathcal U, \mathcal V)$ via a variational problem.
• We prove that $n^{-1/9} \log \tilde Z^{\omega,\beta}_{n,h}$ restricted to $(A^{K,L}_{n,\omega})^c$ is almost surely negative as $n \to \infty$, as soon as $K$ or $L$ is sufficiently large. Combined with the previous convergence towards a positive limit, this proves that all of these trajectories have a negligible contribution.
• Afterwards, the convergences in $P^{\omega,\beta}_{n,h}$-probability are derived in the same way as for Theorem 1.3.

Corollary 3.1 (of Lemma 2.3). For any $\varepsilon > 0$, consider the corresponding localization event. There exists a vanishing sequence $(\varepsilon_n)_{n \ge 1}$ such that the stated restriction holds.

Going forward, we will work conditionally on u*
Recall that $(X_u)_{u \in [0, c_h]}$ has, up to a factor $\sqrt 2$ and an additive constant, the law of a standard Brownian motion; thus, according to Proposition B.1, the processes $(X_{u_*} - X_{u_* - t},\ t \ge 0)$ and $(X_{u_*} - X_{u_* + t},\ t \ge 0)$ are two Brownian meanders, respectively on $[0, u_*]$ and $[0, c_h - u_*]$. Recall that since $u_*$ follows the arcsine law on $[0, c_h]$, these intervals are $\mathbb P$-almost surely non-degenerate.

Rewriting the partition function
We define the factorized partition function, where for the last identity we have used the relation (3.1) between $X$ and $\omega$. Then, we can rewrite it as in (3.3). Since $u_* \notin \{0, c_h\}$ $\mathbb P$-a.s., using (2.8) and Theorem 1.3 we can write (3.5), where both $o(1)$ are deterministic and are $O(\varepsilon_n)$.
Note that $X_{u_*} - (X^{(1)}_{x n^{-1/3}} + X^{(2)}_{y n^{-1/3}})$ is not necessarily positive, since the supremum in (1.3) is taken over non-negative $u$ and $v$ such that $u + v = c_h$, whereas $x + y \ne c_h n^{1/3}$ in the general case. However, we can rewrite this quantity so that (3.6) holds. Note that it is not problematic that $c_h - x n^{-1/3}$ can be negative if $y$ is small enough, since $X^{(2)}$ can be defined on the whole real line. Although (3.6) may seem more complex to study than (3.3), having a term that is always non-positive is useful to isolate the main contributions to the partition function.
Recall that $X_{u_*} - X_{x n^{-1/3}}$ can be expressed in terms of Brownian meanders depending on the sign of $u_* - x n^{-1/3}$, see Proposition 1.4. More precisely, there exist $\mathcal M^+, \mathcal M^-$ two independent standard Brownian meanders such that (3.7) holds.

Heuristic. In (3.5) and in view of (3.6), the term inside the exponential can be split into three parts. The first part is $-\beta n^{1/6}(X_{u_*} - X_{x n^{-1/3}})$, which is negative and of order $n^{1/6} |u_* - x n^{-1/3}|^{1/2}$. We can then easily compare the second term to the last one: the dominant terms in (2.8) are all negative when $(\Delta_n)^2 n^{-1/3} \gg (\Delta_n)^{1/2}$, in other words if $\Delta_n \gg n^{2/9}$. Thus we will show that the corresponding trajectories have a negligible contribution to $Z^{\omega,\beta}_{n,h}$, and that we can restrict the partition function to trajectories such that $\Delta_n = O(n^{2/9})$. We can apply the same reasoning to the first term, which must satisfy $n^{1/6}(X_{u_*} - X_{x n^{-1/3}}) = O(n^{1/9})$, from which we will deduce that $|M^-_n + u_* n^{1/3}| = O(n^{2/9})$.

Coupling and construction
Here we explain how these two couplings combine to yield all the desired results. We start by picking $u_*$ according to the arcsine law on $[0, c_h]$ and by considering a two-sided three-dimensional Bessel process $B$ as well as an independent two-sided Brownian motion $Y$, both defined on $\mathbb R$. Since the process $(X_{u_*} - X_{u_* + u}) / \sqrt 2$ is a two-sided Brownian meander (with left interval $[0, u_*]$ and right interval $[0, c_h - u_*]$), using Proposition 1.5 we can find a $\delta_0(\omega)$ such that the coupling identity holds whenever $|u| \le \delta_0$. We are interested in a coupling such that this identity holds for all $n$ large enough and for any $u n^{-1/9}$ sufficiently close to $0$. To do so, for each $n$ we construct from $B$ a suitable $X^n$, with the same law as $X$, that satisfies the desired equality. Consider the pair $(\delta_0, B)$ previously defined, and let $n_0$ be such that $\varepsilon_{n_0} \le \delta_0$. Then, for any $n \ge n_0$, we paste the trajectory of $n^{-1/18} \sqrt 2\, \chi B_{u n^{1/9}}$, which is still a two-sided three-dimensional Bessel process multiplied by $\sqrt 2\, \chi$, in a neighborhood of $u_*$, and we complete the process $X^n$ with pieces of suitable law outside this neighborhood. We can similarly define $Y^n_{u_* + u} - Y^n_{u_*} := n^{-1/18} \sqrt 2\, Y_{u n^{1/9}}$, where no particular coupling is needed. From $X^n$ and $Y^n$, we can recover our new Brownian motions $X^{(1),n}, X^{(2),n}$.
In the case of a Gaussian environment, we can define the random variables $(\omega_z)_{z \in \mathbb Z}$ using (3.1). For the other cases, we construct the environment $\omega = \omega^n$ from the processes $(X^{(1),n}, X^{(2),n})$ using Skorokhod's embedding theorem (see Theorems 4.1 and 4.2).
In both cases, all of our processes are defined so as to have almost sure convergences, and when $n$ is greater than some $n_0(\omega)$ (which only depends on $\delta_0$) and $|u n^{-1/9}| \le \varepsilon_n \le \delta_0$, the coupling identity holds. Our construction will be used to prove Propositions 3.2 and 3.6 in order to get proper limits when zooming around $u_*$. This is done by making it so that for $n$ large enough, the process we study does not depend on $n$, as illustrated by Proposition 1.5.

Restricting the trajectories
We recall that in what follows, we work conditionally on the value of $u_*$, yet we still write $\mathbb P$ for the law of $\omega$ conditional to this value.
Our goal is now to characterize the main contribution to the partition function directly in terms of $M^-_n$ and $M^+_n$ or equivalent quantities, and not in terms of the processes. With this goal in mind, we define, for $K, L \ge 0$, the restricted partition functions $Z^{\le}_{n,\omega}(K, L)$ and $Z^{>}_{n,\omega}(K, L)$. In the next section, we will prove, in Proposition 3.6 and Lemma 3.9, that $\mathbb P$-a.s., $\lim_{n \to \infty} n^{-1/9} \log Z^{\le}_{n,\omega}(K, L) > 0$. The following proposition and its Corollary 3.3 show that, $\mathbb P$-a.s., for $K$ or $L$ large enough, $\lim_{n \to \infty} n^{-1/9} \log Z^{>}_{n,\omega}(K, L) < 0$, meaning that trajectories in $Z^{>}_{n,\omega}(K, L)$ have a negligible contribution.
Proposition 3.2. Uniformly in $n \ge 1$ such that $\varepsilon_n < \frac12$, we have the bounds stated in (3.10) and (3.11).

For any $K > 1$ we define the corresponding event, and using Proposition 3.2 we can prove the main result of this section.
Corollary 3.3. For $\mathbb P$-almost all $\omega$, there is a $K_0 > 1$ such that for all $K \ge K_0$, the stated limsup bound holds.

Write $p_K$ for the probability in (3.11); in particular, we proved that $p_K \to 0$. Thus, we can extract a subsequence $(K_k)$ such that $\sum_k p_{K_k} < \infty$. Using the Borel–Cantelli lemma, $\mathbb P$-almost surely there is a $K_0$ as claimed. In the rest of the section, we will prove Proposition 3.2. This proof boils down to upper bounds on both probabilities, but with a fixed $n$ instead of the limsup. The fact that the bounds are uniform in $n$, together with a use of our coupling, will give us the result.
We first explain a small argument that we will use repeatedly throughout the paper in order to transfer estimates from standard Brownian meanders to $X_{u_*} - X_{u_* + u}$. Take an interval $I$ and real numbers $\alpha, \lambda > 0$. In the following lemmas we will need to compute probabilities such as (3.12). To get bounds on those probabilities, we use (3.7), which gives an expression of $X_{u_*} - X_{u_* + u}$ as a concatenation of rescaled Brownian meanders $\mathcal M^\sigma$, $\sigma \in \{-1, +1\}$. Taking for example $\sigma = +1$, that is $u \ge 0$, we can reduce to the standard meander by scaling. In the following lemmas, we work conditionally on $u_*$ and we have no need for precise values of the constants. Thus, we can consider $u_*$ and $c_h - u_*$ as constants and remove them from our calculations in order to ease the notation. For example, instead of getting a bound on (3.12), it is sufficient to get a bound on $\mathbf P(\inf_{u \in I} \mathcal M^+_u \le \alpha)$.

Lemma 3.4.
There is a positive constant $C$ such that for any $n \ge 1$ with $\varepsilon_n < \frac12$ and any $L \ge 1$, the stated bound holds. Using (3.5), we write the restricted partition function in terms of the suprema $X^{(2)}_{k,l}$, with $u = x n^{-1/3}$ and $v = y n^{-1/3}$. A union bound then yields an estimate, where we have used that, for $n$ large enough (how large depends only on $h$), a lower bound holds for some constant $c'_h$, uniformly in $L \ge c_h^2 / \pi$ and $l \ge 0$. We now work out an upper bound on the probability that $X^{(2)}$ is large on the intervals where the suprema in $X^{(2)}_{k,l}$ are taken. We write $M^n_{k,l}$ and $X^{(2),u}_{k,l}$ for the first and the second term of the right-hand side of (3.14) respectively, as well as $\alpha_l := c'_h n^{-1/18} 2^{2l} L^2 / \beta$. We first need to control the term $k = 0$, in which case $M^n_{0,l} = 0$, and we are left to bound the remaining probability. By the reflection principle for Brownian motion, both of these variables are the modulus of a Gaussian of variance $2^{2l} L n^{-1/9}$ and $2^l L n^{-1/9}$ respectively. Thus, we have the upper bound, for some constants $c_0, c_1, c_2, c > 0$.
We now focus on k ≥ 1 and decompose according to whether M^n_{k,l} is less or greater than […]; for the latter, we use Hölder's inequality for p > 1: […]. Since k can be taken arbitrarily, in the definition of M^n_{k,l} we can replace X_{u_*} − X_{u_*+u} by M_u, for M a standard Brownian meander on [0, 1]. Thus, with the help of Corollary B.3 with λ = 2 and the previous argument (L and k can be taken up to a positive multiplicative constant), we compute […]. For the other probability, we use […]. Therefore, […], which has a finite sum over k ≥ 1 and l ≥ 0 that goes to 0 as L → +∞, provided p is sufficiently close to 1.
On the other hand, again by (3.14) and the Brownian reflection principle, […], which is again summable in k, l, with a sum that goes to 0 as L → +∞. In conclusion, we proved that P(Z^>_{n,ω}(0, L) ≥ e^{−n^{1/9}}) is bounded by ce^{−CL³}, uniformly in n large enough, thus proving the lemma.
Lemma 3.5. There is a positive C such that for any n ≥ 1 with ε_n < 1/2 and any K ≥ 1, […].
Proof. We use the same strategy as for Lemma 3.4. In particular, we control both M⁻_n + u_*n^{1/3} and Δ_n, instead of only M⁻_n + u_*n^{1/3}. Thus, we consider […], and when summing over l ≥ 0 we get a quantity Z^>_{n,ω}(K, […]) similar to the notation in Lemma 3.4. Let us introduce […]. With the same considerations as before, we have by a union bound […]. Observe that each probability in the sum is equal to […]. Since we have the restriction […], there is a constant c > 0 such that, for n large enough (how large depends only on h), uniformly in k, K, l, […]. Therefore, we get that […]. Recalling the definition of ξ_{k,l} above, we have […]. Let us now decompose over the values of C_{n,l}. Since X_{u_*} − X_u ≥ 0, when C_{n,l} < 0 the probability equals 0, so we can intersect with {C_{n,l} ≥ 0}. We have […], where we have used the Cauchy–Schwarz inequality. First, let us treat the last probability: using the Brownian scaling, we have […]. We can bound this probability using usual Gaussian bounds and the reflection principle: […]. Then, we substitute α with j − 1 + β^{−1}(cl² − 2) to get the upper bound […] for some constants c, c′ (that depend only on h, β).
For the other probability, with the argument explained previously (since K and j can again be taken up to a positive multiplicative constant), we only need to bound […]. For σ = −1 the same reasoning applies, thus we only need a bound for (3.17). We use Corollary B.3: for any λ > 0, we have […], which translates, for λ = (2^k K)^{1/3}, to […], where we used that 1/(x(1 − e^{−α/x})) is bounded for x ≥ 1, uniformly in α ≥ 1 (note that j ≥ 1). Together with the above, this yields the following upper bound for 2^k Kn^{−1/9} ≤ ε_n < 1/2: […]. The sum over j ≥ 1 is bounded from above by c″(l + 1)³ (where the constant c″ does not depend on l ≥ 0), so we finally get […]. The lemma follows since the last sum is finite.
Proof of Proposition 3.2. Use Proposition 1.5 and consider some n ≥ n_0(ω) such that ε_n < δ(ω) (recall the definitions in Proposition 1.5). Then, using the same notation as in the proofs of Lemmas 3.4 and 3.5, we have […]. On the other hand, […]. Thus, we see that the random quantities ξ_{k,l} and X^{(2)}_l do not depend on n when n is large enough (meaning n ≥ n_0), and the same is true for Z^>_{n,ω}(0, L). This proves the proposition by applying Lemmas 3.4 and 3.5 to n = n_0.

Convergence of the restricted log partition function
In this section we study the convergence of n^{−1/9} log Z^≤_{n,ω}(K, L) for fixed (large) K and L, where we recall that Z^≤_{n,ω}(K, L) is defined by […]. It is a bit more convenient to transform the condition |Δ_n| ≤ Ln^{2/9} into the condition |M⁺_n − (c_h − u_*)n^{1/3}| ≤ Ln^{2/9}, which restricts to the same trajectories after adjusting the value of L. Finally, since we plan to take the limit K, L → ∞, it is enough to treat the case K = L. Thus, we define […]. Note that for any K > 1, recalling the definition (3.9) of Z^{>K}_{n,ω}, we have […]. As explained at the beginning of this section, as K → ∞, Z^{≤K}_{n,ω} contains all the relevant trajectories giving the main contribution to the partition function.
Control of R̄_n(k_1, k_2, δ). We now seek a bound on R̄_n(k_1, k_2, δ). First we have […]. To control the random part ζ^{k_1,k_2}_{n,δ}(xn^{−1/3}, yn^{−1/3}), we use the following proposition, which we prove afterwards.
Proposition 3.7. Let δ_j = 2^{−j}, j ∈ N. Then, P-almost surely, there exists a positive C_ω such that for any n and j large enough and any k_1, k_2 ∈ [−K/δ_j, K/δ_j], we have […].
We will still denote this parameter by δ, keeping in mind that δ → 0 along a specific sequence. Assembling these results, we see that (3.25) is bounded by a function ε(ω, δ) that goes to 0 as δ ↓ 0, uniformly in […]. As in the proof of Theorem 1.3, we write u = k_1δ and v = k_2δ in (3.22); recalling the definition (3.3) of Ω, this leads to […]. Recall Proposition 1.5 and its notation. Set […], and denote by B a two-sided three-dimensional Bessel process and by Y a standard Brownian motion independent of B. Then for any n ≥ n_0(ω), […]. Assembling these results with (3.26), we establish the following convergence (which is an identity for n ≥ n_0(ω)): […], and, using the uniform continuity of Y_{u,v} and of (u + v)² on [−K, K]², we have […]. Finally, letting δ go to 0 proves the convergence of n^{−1/9} log Z^{≤K}_{n,ω}.
Proof of Proposition 3.7.
The proof essentially boils down to the following lemma and an application of the Borel–Cantelli lemma.
Lemma 3.8. There exist positive constants λ, μ such that for any δ ∈ (0, 1) and any […].
Combining Lemma 3.8 with a union bound immediately yields […]. Summing over δ_j = 2^{−j} gives a bound which is summable in C; this allows us to apply the Borel–Cantelli lemma. Hence, with P-probability 1, there is a positive C_ω such that for all […], thus proving the proposition.
Proof of Lemma 3.8. Recall the definition (3.23) and that 2X^{(2)} […], which simplifies to […]. We split ζ^{k_1,k_2}_{n,δ}(u, v) into four parts, corresponding to the terms in X and those in Y, which we call "meander parts" because of (3.7). We use a union bound to control separately the probability for each increment to be greater than Cδ^{1/4}/(8n^{1/18}).
Control of the Brownian parts. First, recall that by construction X and Y are independent. Since u_* is X-measurable, u_* and Y are also independent. Thus, the Brownian reflection principle yields […] = sup […], where W is a standard Brownian motion. This leads us to […], and similarly for the other term.
Control of the meander parts. We have to bound the following: […]; we focus on bounding (3.29), since the other bound follows from it^b. Recall (3.7) to get […].
Observe that χ = χ(u, ω) is a constant that depends only on the sign of k_1, and that, since K and C are arbitrarily chosen, we only need to bound the probability of |…| being greater than Cδ^{1/4}/(8n^{1/18}). Without loss of generality we can suppose that k_1 ≥ 0; the case k_1 ≤ 0 is obtained by the same proof with |k_1 + 1| instead. Use Lemma B.2 and Markov's inequality to get […]. We show below that there is a constant c = c(K) > 0 such that, for n large enough, uniformly in k_1 ∈ {0, 1, …, K/δ} and 0 < δ < 1, […], thus proving Lemma 3.8.
^b Observe that if vn^{1/3} ∈ C⁺_{n,δ}(k_2), writing ṽ := c_h − v and assuming v ≠ c_h − u_* + k_2δn^{−1/9}, we have […].
Proof of (3.30). We want an upper bound on quantities E[e^{α(M_v − M_u)}] for specific u < v and α > 0. To do so, we first condition on the value of M_u and use the transition probabilities of the Brownian meander to get an upper bound, which we then integrate with respect to the law of M_u. Let us set κ^n_δ := k_1δn^{−1/9}, with k_1 ≠ 0 (we treat the case k_1 = 0 at the end), and α := n^{1/18}/√δ.
Thus we have to bound […]. Writing Ψ(m) := […] e^{αm}, we can rewrite the above as […]. Usual bounds for Gaussian integrals (notice that Ψ(m) = e^{1/2} e^{−(αm−1)²/2}) then yield the upper bound […], which we simplify (using that Ψ(m) ≤ e^{1/2} for all m) to […].
Step 2: averaging over x = M_{k_1δn^{−1/9}}. To take the expectation in (3.31), we use the bounds given in (B.3). Recalling that κ^n_δ = k_1δn^{−1/9} ≤ ε_n → 0, we can use the above to get that, for n sufficiently large, there is a C > 0 such that […]. This proves the bound (3.30) in the case k_1 > 0.
3.5 Convergence of the log partition function, proof of Theorem 1.6-(1.6)
Lemma 3.9. P-almost surely, there exists a unique (U, V) such that […].
We get a positive lower bound since, almost surely, there are real numbers […]. To show that W_2 is almost surely finite, view B_u as the modulus of a three-dimensional standard Brownian motion and consider a one-dimensional Wiener process W. We use the fact that t^{−1}(|W_t| […]) ≤ 0 P-a.s. when |u + v| is large enough, meaning that the supremum of this continuous process is almost surely attained on a compact set, and is thus finite. The existence of (U, V) is also a consequence of the continuity of Y_{u,v} − c_{h,β}√2 (u + v)² and of the fact that the supremum is P-a.s. attained on a compact set. The uniqueness of the maximizer follows from standard methods for Brownian motion with parabolic drift (see [5, Appendix A.3]).
Comment. We could have taken another form of Z^{ω,β}_{n,h}(k_1, k_2, δ), given by (3.3), without using the process X, which was only useful to discard trajectories whose minimum is too far from −u_*n^{1/3}. This would have led us to the alternative form […], where the X(i)_u are Brownian-related processes provided by a suitable coupling. However, these limit processes are not independent and their distributions may not correspond to known processes, making W′_2 harder to exploit.
Proof of Theorem 1.6-(1.6). We first see that, P-almost surely, by Proposition 3.6 and Lemma 3.9, we have […]. On the other hand, using Corollary 3.3, […]. Therefore, P-almost surely we have […], which, according to Corollary 3.1, proves the convergence of Z^{β,ω}_{n,h}. Combining Proposition 3.2 with the fact that the right-hand side in Proposition 3.6 increases with K, and thus converges almost surely as K → ∞, yields Theorem 1.6 in the case of a Gaussian environment. It only remains to see that the convergence is towards a nontrivial quantity, which is the object of the following lemma.

Path properties at second order
Proof of Theorem 1.6-(1.7). The proof of (1.7) repeats the proof of Lemma 2.3, this time writing […]. Afterwards, using the definition of U^{ε,ε′}_2, we prove as above that […], and thus lim sup […].

4 Generalizing with the Skorokhod embedding

4.1 Proof of Theorem 1.6, case of a finite (3 + η)th moment

So far, Theorem 1.6 has only been established for a Gaussian environment ω, meaning that the variables (ω_z) are i.i.d. with a normal distribution. In the following, we explain how to generalize these results to any i.i.d. random field with sufficient moment conditions.
We first expand on the coupling between the random field ω and the Brownian motions X^{(i)}, i = 1, 2. Our starting point is the following statement from [24, Chapter 7.2].
Theorem 4.1 (Skorokhod). Let ξ_1, …, ξ_m be i.i.d. centered variables with finite second moment. For a Brownian motion W, there exist independent positive variables τ_1, …, τ_m such that […]. Moreover, for all k ≤ m, we have […].
The following theorem gives asymptotic estimates for the error of this coupling.
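For illustration, Theorem 4.1 can be checked numerically in the simplest case. The sketch below (a Monte Carlo illustration, not part of the proof; the Rademacher ±1 distribution, grid step and horizon are arbitrary choices) embeds i.i.d. ±1 variables into a discretized Brownian path via successive hitting times, and checks that the mean embedding time matches E[ξ_1²] = 1.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-4                      # grid step of the discretized Brownian path
n_grid = 3_000_000             # horizon 300, enough for ~300 embedded variables
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_grid))])

# Skorokhod embedding of Rademacher variables: tau_k is the first time the
# increment of W since tau_{k-1} reaches +/-1, so W_{tau_k} - W_{tau_{k-1}} = +/-1
# (up to the overshoot caused by the discretization).
taus, increments = [], []
last_i, last_w = 0, 0.0
for i in range(1, len(W)):
    if abs(W[i] - last_w) >= 1.0:
        taus.append((i - last_i) * dt)
        increments.append(W[i] - last_w)
        last_i, last_w = i, W[i]
taus = np.array(taus)
increments = np.array(increments)

# E[tau_1] should equal the variance of the embedded variable, here 1,
# consistently with the moment identity of Theorem 4.1.
print(len(increments), taus.mean())
```

On a fine grid the embedded increments are ±1 up to a small overshoot, and the empirical mean of the τ_k concentrates around E[ξ_1²] = 1.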
We can easily adapt this statement and choose the Wiener processes X^{(1)} and X^{(2)} to be independent Brownian motions such that, as n → ∞, […], as long as E[|ω_z|^p] < +∞ for some p ∈ (2, 4). Since in the partition function we can restrict to trajectories with x and y between 0 and (c_h + ε_n)n^{1/3} (recall (2.8)), we can obtain a uniform bound over every u, v we consider, meaning that P-a.s. there is some constant C(ω) such that for all n ≥ 1, […]. With u = xn^{−1/3} and v = yn^{−1/3}, equation (3.5) becomes […], with ō(1) deterministic and uniform in x, y, and Ω^{x,y}_n := n^{1/6}(X^{(1)}_u + […]). Take p = 3 + η with η ∈ (0, 1), and assume that E[|ω_0|^p] < +∞. Then, using (4.1), we have […] for all summed (x, y). Therefore, combining with (4.2), we get the claim P-a.s.

Adaptation to the case of a finite (2 + η)th moment
We now explain how to infer (1.8), i.e. a version of Theorem 1.6 where we only assume that E[|ω_0|^{2+η}] < ∞ for some positive η, by adapting the proofs of Section 3. We are able to prove that the relevant trajectories converge to the expected limit for Z^{ω,β}_{n,h}; however, some technicalities prevent us from getting the full theorem.
The key observation is the following: when subtracting β Σ ω_z instead of βn^{1/6}X_{u_*} from log Z^{ω,β}_{n,h} + (3/2)hc_h n^{1/3}, we precisely cancel out the corresponding term. This leaves us with a smaller sample of the variables (ω_z), of size at most 2ε_n n^{1/3} (see (3.5)), and of order n^{2/9} when restricting to trajectories giving the main contribution (see Proposition 3.2).
Informally, let Ω^{u_*}_{n,h}(x, y) be the sum of the ω_z that lie between u_*n^{1/3} and x, and between (c_h − u_*)n^{1/3} and y, with proper signs. With the same arguments as before, we are led to prove a convergence for […]. We now want to rewrite Ω^{u_*}_{n,h}(x, y) as Ω^{x,y}_n in (3.6), up to an additional error term that is ō(n^{1/9}). Since we are only interested in the (ω_z) that appear in Ω^{u_*}_{n,h}(x, y), we only need a good coupling between the environment and (X^{(1)}, X^{(2)}) near (u_*, c_h − u_*), instead of a global coupling like the one used in Section 4.1.
An application of Theorem 4.2 allows us to assume that the field ω satisfies […], where Z^{≤K}_{n,ω} is defined in the same way as in Section 3.6, only subtracting the sum of the ω_z's instead of X.
It remains to show that n^{−1/9} log Z^{>K}_{n,ω} has a non-positive limsup as K, n → +∞, in the same spirit as Proposition 3.2. However, we are not able to get a sufficient decay for the probabilities appearing in the proofs of Lemmas 3.4 and 3.5. The union bound thus fails to conclude the proof, although we have no doubt that the result is true.

Simplified model: range with a fixed bottom
In this section we focus on a simpler model in which one of the range's extremal points is fixed at 0. The main motivation is that this model is sufficiently close to our original model to give some insight into finer properties of the original polymer, while being easier to study because the range is then described by a single variable: the highest point of the polymer.
We give in this section a conjecture on the simplified model, supported by previously known results about Brownian motion with drift. This conjecture states that the fourth-order expansion of the log-partition function is given by a quantity of order one, with a limited dependence on n. It is natural to expect the same behavior for our original model, hence Conjecture 1.7.
Let us focus on this simplified polymer, which is modeled by a non-negative random walk. The polymer measure is given by the analogous Gibbs weight involving Σ_i ω_i, multiplied by the constraint 1_{{∀k≤n, S_k ≥ 0}}, under P(S).
For now, we continue to study the case where the field ω is composed of i.i.d. Gaussian variables.
We once again take a Brownian motion X such that […]. The partition function is given by Σ_T φ_n(T) e^{−hT + βΣ_{i≤T}ω_i − g(T)n}, with g(T) = π²/(2T²) (see [7]; this is analogous to what is done in Section 2.1). It is not difficult to see that our results up to Section 3 still hold: first we have […], which tells us that the range has size T_n ∼ c_h n^{1/3} at first order. Since M⁻_n = 0, it is natural to expect (and not hard to prove) […], which is an analogue of Theorem 1.3 with the knowledge that u_* = 0. Factorizing by e^{βn^{1/6}X_{c_h}} yields the following exponential term […].
Proposition 5.1. For any h, β > 0 there is a standard Brownian motion W such that, P-a.s., (5.1) […], which contains all the main contributions for K large. Splitting over kδn^{2/9} ≤ T*_n − T ≤ (k + 1)δn^{2/9}, the main contribution is given by the supremum over […], where we wrote s = kδ. We can conclude similarly to the proof of Theorem 1.6, changing the limit process Y_{u,v} to B, which is the limit of the processes […] and is a standard Brownian motion. Once again, we can couple the Brownian motion X = X^{(n)} so that the processes B^{(n)} are equal to B when n is large, in the same fashion as Proposition 1.5. We can prove that the right-hand side of (5.1) is P-a.s. positive and finite, and attained at a unique point s_*.
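As a sanity check on the first-order behavior T_n ∼ c_h n^{1/3}, one can optimize the homogeneous exponent −hT − g(T)n numerically; h and n below are illustrative values. The maximizer is (π²n/h)^{1/3} = c_h n^{1/3}, and the maximal value is −(3/2)hc_h n^{1/3}, consistent with the centering used for the original model.

```python
import numpy as np

h, n = 2.0, 10 ** 9                       # illustrative parameter values
c_h = np.pi ** (2 / 3) * h ** (-1 / 3)    # the constant of the first-order result

# Homogeneous exponent: -h*T - g(T)*n with g(T) = pi^2 / (2 T^2).
T = np.linspace(1.0, 5 * c_h * n ** (1 / 3), 2_000_000)
exponent = -h * T - (np.pi ** 2 / (2 * T ** 2)) * n

T_star = T[np.argmax(exponent)]
# The optimizer satisfies T^3 = pi^2 n / h, i.e. T_star = c_h n^{1/3},
# and the optimal exponent equals -(3/2) h c_h n^{1/3}.
print(T_star / n ** (1 / 3), c_h, exponent.max())
```

This deterministic computation only illustrates the homogeneous (β = 0 or averaged) first-order trade-off between the −hT penalty and the confinement cost g(T)n.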
To sum up the results for this simplified model, we state the following.
Theorem 5.2. Recall the notation of (1.9), this time with Z^{ω,β}_{n,h}. Then, P-almost surely, […].
Recall the following notation from (1.2): […]
Corollary 5.3. There is a vanishing sequence (ε_n) such that […].
Our goal is now to determine whether factorizing the partition function by this quantity leads to a bounded logarithm; in other words, we are looking for the fourth-order free energy, in the spirit of Section 1.9. We develop here a heuristic justifying that the fourth-order free energy is at scale α_4 = 0.
Going forward, we work conditionally on s_*. We define […]. We first rewrite the factorized partition function Z^{ω,β}_{n,h}. If we write T_n = c_h n^{1/3} + Δ_n and recall that, thanks to the coupling, for u in a neighborhood of 0 we have W_u = n^{1/18}(X_{c_h+un^{−1/9}} − X_{c_h}) for sufficiently large n, we can rewrite […]. The exponential term is non-positive, which means that the typical trajectories of the polymer are those that minimize the difference in (5.3).
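The maximizer s_* of a Brownian motion with parabolic drift can be simulated directly. The sketch below (with illustrative choices: drift constant c = 1 standing in for c_{h,β}, a finite window and grid) locates the argmax of a two-sided Brownian motion minus a parabola, and illustrates that it is a.s. finite and symmetrically distributed around 0.

```python
import numpy as np

rng = np.random.default_rng(1)
c, L, ds, N = 1.0, 5.0, 0.01, 2000        # drift constant, window, grid, sample size
s = np.arange(-L, L + ds / 2, ds)
zero = np.searchsorted(s, 0.0)            # grid index of s = 0

# Two-sided Brownian motion, pinned at s = 0, on each of the N sample paths.
steps = rng.normal(0.0, np.sqrt(ds), (N, len(s) - 1))
W = np.concatenate([np.zeros((N, 1)), np.cumsum(steps, axis=1)], axis=1)
W -= W[:, [zero]]

Y = W - c * s ** 2                        # Brownian motion with parabolic drift
s_star = s[np.argmax(Y, axis=1)]          # location of the maximum on the window
print(s_star.mean(), np.abs(s_star).max())
```

The empirical argmax stays well inside the window (the supremum is attained on a compact set) and its sample mean is close to 0, in line with the symmetry of its law.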
Comment. Previous works have studied to some extent the laws of s_* and Y_{s_*} (see [19]). In particular, s_* follows the so-called Chernoff distribution, which is symmetric. Writing Ai for the Airy function, [19, Theorem 1.1] states that […]. In all the following, we use the symmetry of the distribution of s_* to reduce to the case s_* > 0. We also work conditionally on the value of s_*, meaning on the location of the maximum of Y. We write α = 2c_{h,β}s_*; then observe that for any s > 0, […]. The asymmetry of the meander can be used to prove the following "reflection principle".
Lemma B.2 (Reflection principle for the meander). Let M be a Brownian meander. Then for all b > 0 and all 0 ≤ s < t ≤ 1, […].
Proof. Denoting by T_b the hitting time of b, we have […]. Now, write L_b for the time of the last visit to b before time t; on [L_b, t], the process M_r − b is a Brownian bridge conditioned to stay above −b. We only need to see that any trajectory of M from b to (0, b] which stays above 0 can be transformed into a trajectory from b to [b, 2b) that stays above 0, by reflecting the trajectory between the last visit L_b to b and t (see Figure 2). Since these two Brownian bridges have the same probability and [b, 2b) ⊂ [b, +∞), this operation is injective, and thus P(T_b ∈ ds, M_t < b) ≤ P(T_b ∈ ds, M_t ≥ b) for all s ≤ t (note that this is a consequence of the Brownian reflection principle). Therefore, we proved […]. If we study the supremum of an increment M_r − M_s, s ≤ r ≤ t, we only need to repeat the proof with a starting point M_s = x and integrate over all positions x. Since the meander is a Markov process, we get P(sup_{s≤r≤t} (M_r − M_s) ≥ b) ≤ 2P(M_t − M_s ≥ b). Afterwards, using again the asymmetry of M, we obtain […].
Corollary B.3. For any λ > 1, a > 0 and 0 ≤ s < t < 1/2, we have […], as well as […].
Proof. We decompose the probability according to whether M_s, M_t ≤ λa, so that we only have to consider P(inf_{s≤r≤t} M_r ≤ a, M_s > λa, M_t > λa). For this, we first use Brownian bridge estimates: for any z, w, T > 0, we have […], thus […]. For any α > 0 and z, w > α, we define […]. Then, using (B.4) with z, w, z − α, w − α > 0, we deduce […]. Consider the mapping f_T : (x, y) ↦ e^{−2xy/T}. Using the mean value theorem, there is a c ∈ […] such that […]. Injecting this in (B.5) yields […]. In particular, if we assume z, w ≥ λa for some λ > 1, then f_T(z, w) ≤ f_T(λa, λa) and we obtain […]. Therefore, for any λ > 1 and a > 0, […].
Let us mention that a process related to the meander is the three-dimensional Bessel process B.
It can be defined as the solution of the SDE dB_t = dW_t + B_t^{−1}dt, or as the sum B_t = |W_t| + L_t, where L is the local time of W at 0; it is a homogeneous Markov process with the Brownian scaling property (B_{αt})_t =_d (√α B_t)_t. We refer to [23] for these results. The link between the Bessel process and the meander is given by the following result.
Proposition B.4. The law P^{+,T} of the Brownian meander on [0, T] has a density with respect to P^B, the law of the three-dimensional Bessel process: if X is the canonical process, we have […]. In particular, for all α > 0 and all s ≤ T, P^{+,αT}(X_{αs} ∈ dx) = P^{+,T}(√α X_s ∈ dx).
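Proposition B.4 also gives a practical way to simulate meander functionals: sample Bessel-3 paths as moduli of three-dimensional Brownian motions and reweight by the density. The sketch below assumes the standard normalization √(π/2)/X_1 for that density (the displayed formula is not reproduced above, so this constant is an assumption here), and uses it to check the Rayleigh moments of the meander endpoint and the reflection bound of Lemma B.2; all numerical parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 4000, 400                          # number of paths and time steps
dt = 1.0 / n

# Bessel-3 paths on [0,1], realized as moduli of 3-dimensional Brownian motions.
steps = rng.normal(0.0, np.sqrt(dt), (N, n, 3))
R = np.sqrt((np.cumsum(steps, axis=1) ** 2).sum(axis=2))
R = np.concatenate([np.zeros((N, 1)), R], axis=1)

# Assumed density of the meander w.r.t. Bessel-3: w = sqrt(pi/2) / X_1.
w = np.sqrt(np.pi / 2) / R[:, -1]

# Under the reweighted law the endpoint is Rayleigh: E[M_1] = sqrt(pi/2), E[M_1^2] = 2.
m1 = np.average(R[:, -1], weights=w)
m2 = np.average(R[:, -1] ** 2, weights=w)

# Reflection bound of Lemma B.2 for s = 0.2, t = 0.8, b = 1 (illustrative choices):
# P(sup_{s<=r<=t} (M_r - M_s) >= b) <= 2 P(M_t - M_s >= b).
s_i, t_i, b = n // 5, 4 * n // 5, 1.0
inc = R[:, s_i:t_i + 1] - R[:, [s_i]]
p_sup = np.average(inc.max(axis=1) >= b, weights=w)
p_end = np.average(inc[:, -1] >= b, weights=w)
print(m1, m2, p_sup, 2 * p_end)
```

Since the weight depends only on the endpoint, the reweighted finite-dimensional marginals are exactly those of the meander on the grid, so only Monte Carlo noise enters the comparison.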
Proof. The formula for the density can be found in [17, Section 4]. Then, for any positive measurable function f and any α > 0, we have […].

C Coupling of a Brownian meander, a three-dimensional Bessel process and a Brownian excursion

In this section we expand on the way we construct our different processes so as to obtain the almost sure results of Theorems 1.3 and 1.6. In particular, we want the following: n^{−1/6} Σ_{z=−un^{1/3}}^{vn^{1/3}} ω_z → X^{(1)}_u + X^{(2)}_v a.s. as n → ∞, and n^{1/18}(X_{u_*+un^{−1/9}} − X_{u_*}) converges a.s.
Skorokhod's embedding theorem (Theorem 4.1) allows us to sample the Brownian motions X^{(i)}, i = 1, 2, together with a new environment ω(n), so as to obtain the first convergence. It remains to couple the processes X^{(i)} with the processes B, Y of Theorem 1.6, that is, to prove Proposition 1.5. This is based on two intermediate results, Lemmas C.1 and C.2 below, which couple a meander, resp. a Bessel-3 process, to a Brownian excursion.
Lemma C.2. For any T ∈ [0, 1], there exists a coupling of the Brownian excursion e on [0, 1] and the three-dimensional Bessel process B such that there is a positive ε(ω) for which B_t = e_t for any t ∈ [0, ε(ω)].
Proof. It is known (see for example [18, p. 79]) that the Brownian excursion can be decomposed into two Bessel bridges of duration 1/2, joining at a point V whose law has density (16/√(2π)) v²e^{−2v²}. Thus we only need to define a coupling between a 3d-Bessel process B and a 3d-Bessel bridge B′ of duration 1/2 and endpoint V. We use the fact that both processes can be realized as the modulus of a three-dimensional Brownian motion.
Consider two independent three-dimensional Brownian bridges X and Y of duration 1/2, such that X_0 = x ∈ R³ (resp. Y_0 = y ∈ R³) and X_{1/2} = Y_{1/2} = 0. Denote by τ the first time X and Y have the same modulus: τ := inf{0 ≤ t ≤ 1/2 : |X_t| = |Y_t|}. We have the following result.
Lemma C.3. Almost surely, there exists ε(ω) > 0 such that τ ≤ 1/2 − ε(ω).
Using this lemma, we can conclude the construction of the coupling. After time τ, we define a coupling by following the trajectory of X between τ and 1/2: the new process Ŷ is such that for every t ∈ [τ, 1/2] we have |X_t| = |Ŷ_t|. Recall that the Brownian bridge is a diffusion process (as the solution to an SDE), thus Markovian, and that τ is a stopping time for both X and Y. It follows that Ŷ is a Brownian bridge between y and 0.
To create the coupling between the two Bessel processes B and B′, we choose the starting points x and y so that they respectively correspond to W_{1/2} (with W a 3d-Brownian motion) and a uniform variable on the sphere of radius V centered at 0. Then the processes B_t = |X_t| […].
Proof of Lemma C.3. On [0, 1/2], consider B a three-dimensional Bessel process starting at 0 and e the Brownian excursion, which is a Bessel bridge of duration 1/2 starting at 0 and ending at V. We define I_{s,t} := {∀r ∈ (s, t), e_r ≠ B_r}, the event on which e and B never intersect between s and t. From [17, (3.1)], we have P_x(A, B_t ∈ dz) = (z/x)P_x(A, W_t ∈ dz, H_0 > t), where W is a Brownian motion and H_0 its first hitting time of 0. Then for any ε > 0, conditioning on the values of (e_ε, B_ε) and (V, B_t), we can write […]. We are interested in taking t = 1/2, but this result holds for any fixed t > 0, in the sense that the Bessel process and the Brownian excursion almost surely cross each other on (0, t]. Fix a constant C > 0, to be chosen later (we will take C = ε^{−1/8}). Then, using the Cauchy–Schwarz inequality twice, we first get the bound […] E[(e_tB_t)⁴]^{1/4} P(B_t ∨ V > C) […]. Since ε < t and e, B are independent, we have E[(e_tB_t)⁴] ≤ c(t), and […], where we used the transition probabilities of the Bessel process [23, VI §3, Prop. 3.1] and of the Brownian excursion [18, Section 2.9 (3a)]. This means that P(I_{0,t}) = 0; in particular, taking t = 1/2, one can almost surely find a positive ε for which Lemma C.3 holds.
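The two-Bessel-bridge decomposition invoked in the proof of Lemma C.2 can be sanity-checked numerically. The sketch below simulates the excursion via the Vervaat transform (shifting a Brownian bridge at its argmin — an alternative construction, used here only for simulation) and checks that the midpoint V = e_{1/2} has mean 2/√(2π) ≈ 0.798, as prescribed by the density (16/√(2π))v²e^{−2v²}.

```python
import numpy as np

rng = np.random.default_rng(4)
N, n = 4000, 400
dt = 1.0 / n

# Brownian bridges on [0,1]: W_t - t * W_1 on the grid.
W = np.concatenate([np.zeros((N, 1)),
                    np.cumsum(rng.normal(0.0, np.sqrt(dt), (N, n)), axis=1)], axis=1)
t = np.linspace(0.0, 1.0, n + 1)
bridge = W - t * W[:, [-1]]

# Vervaat transform: cyclically shift each bridge at its argmin to get an excursion.
rho = np.argmin(bridge, axis=1)
exc = np.empty_like(bridge)
for i in range(N):
    r = rho[i]
    exc[i] = np.concatenate([bridge[i, r:], bridge[i, 1:r + 1]]) - bridge[i, r]

V = exc[:, n // 2]                        # midpoint value of the excursion
print(V.mean())                           # should be close to 2 / sqrt(2*pi)
```

The excursion paths are nonnegative with zero endpoints by construction, and the empirical mean of V matches the first moment of the stated joining density.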
Proof of Lemma C.4. We can assume 0 < x < a and 0 < y < b (otherwise the probability is zero); then we have […].

Figure 1 :
Figure 1: A typical trajectory under the polymer measure, for a given u_* and large n; the associated limit is almost surely positive and finite, and attained at a unique point u_* of [0, c_h].

Figure 2 :
Figure 2: Reflection of the trajectory b → (0, b] with respect to the horizontal line at b

Lemma C.1 ([6, Theorem 2.3]). Let e be a standard Brownian excursion and U a uniform variable on [0, 1]. Then the process M_t = e_t 1_{{t≤U}} + (e_U + e_{1−(t−U)}) 1_{{t>U}} is a Brownian meander on [0, 1]. In particular, there exists a coupling of the Brownian meander M and the Brownian excursion e on [0, 1] such that M_t = e_t if t ≤ U.
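Lemma C.1 is directly implementable. The sketch below builds a meander from a simulated excursion (obtained via the Vervaat transform, an auxiliary construction assumed here) and an independent uniform time, and checks the coupling property M_t = e_t for t ≤ U, the nonnegativity of M, and the mean √(π/2) ≈ 1.253 of the meander endpoint M_1 = 2e_U.

```python
import numpy as np

rng = np.random.default_rng(5)
N, n = 4000, 400
dt = 1.0 / n
t = np.linspace(0.0, 1.0, n + 1)

def excursion(rng):
    """Brownian excursion on [0,1] via the Vervaat transform of a bridge."""
    W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
    bridge = W - t * W[-1]
    r = np.argmin(bridge)
    return np.concatenate([bridge[r:], bridge[1:r + 1]]) - bridge[r]

def meander_from_excursion(e, rng):
    """Lemma C.1: M_t = e_t for t <= U and M_t = e_U + e_{1-(t-U)} for t > U."""
    k = rng.integers(0, n + 1)            # grid index of the uniform time U
    M = e.copy()
    idx = np.arange(k + 1, n + 1)
    M[idx] = e[k] + e[n - (idx - k)]      # time 1 - (t - U) on the grid
    return M, k

ends, ok = [], True
for _ in range(N):
    e = excursion(rng)
    M, k = meander_from_excursion(e, rng)
    ok &= np.array_equal(M[:k + 1], e[:k + 1])   # coupling: M = e up to U
    ok &= M.min() >= 0.0                         # the meander stays nonnegative
    ends.append(M[-1])
print(bool(ok), np.mean(ends))            # mean endpoint close to sqrt(pi/2)
```

Note that the construction gives M_1 = 2e_U, whose mean 2∫₀¹E[e_t]dt equals the Rayleigh mean √(π/2) of the meander endpoint.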

I^{a→b}_{x→y}(T) = P(∀r ∈ [0, T], 0 < W^{x→y}_T […]). Observe that (C.5) is exactly the probability for the Brownian bridge W^{x→y,a→b}_T to stay in the cone C := {(x, y) ∈ R² : 0 ≤ x ≤ y} for a time T, meaning I^{a→b}_{x→y}(T) = P(∀t ∈ [0, T], W^{x→y,a→b}_T(t) ∈ C). The isotropy of Brownian motion allows us to consider instead Ĉ := {re^{iθ} : 0 ≤ θ ≤ π/4}.
Theorem 4.2 ([11, Theorem 2.2.4]). Let (θ_i) be i.i.d. centered variables, and assume that E[|θ_1|^p] < ∞ for some real number p ∈ (2, 4). Then, if the underlying probability space is rich enough, there is a Brownian motion W such that […].
Using Theorem 4.2, we can repeat the proof of Proposition 3.2 and restrict the trajectories; this leads to studying n^{1/9}B_{Δ_nn^{−2/9}}.
sup_{s∈R}(W_s − c_{h,β}s²). (5.3)
We define the process Y_s := B_s − c_{h,β}s², which is a Brownian motion with quadratic drift, and s_* the point at which it attains its maximum on R; (5.3) can thus be rewritten as exp(βn^{1/9}(Y_{kn^{−2/9}} − Y_{s_*})).