A NOTE ON DIRECTED POLYMERS IN GAUSSIAN ENVIRONMENTS

We study the problem of directed polymers in Gaussian environments on ℤ^d from the viewpoint of a Gaussian family indexed by the set of random walk paths. In the zero-temperature case, we give a numerical bound on the maximum of the Hamiltonian, whereas in the finite temperature case, we establish an equivalence between very strong disorder and the growth rate of the entropy associated to the model.


Finite temperature case
Let (g(i, x))_{i≥0, x∈ℤ^d} be i.i.d. standard real-valued Gaussian variables. We denote by P and E the corresponding probability and expectation with respect to g(·, ·). Let {S_k, k ≥ 0} be a simple symmetric random walk on ℤ^d, independent of g(·, ·). We denote by ℙ_x the law of (S_n)_{n≥0} starting at x ∈ ℤ^d and by 𝔼_x the corresponding expectation. We also write ℙ := ℙ_0 and 𝔼 := 𝔼_0.
The directed polymer measure in a Gaussian random environment, denoted by 〈·〉_(n), is a random probability measure defined as follows. Let Ω_n be the set of nearest-neighbor paths of length n,

Ω_n := {γ = (γ_1, ..., γ_n) : γ_i ∈ ℤ^d, |γ_1| = 1, |γ_i − γ_{i−1}| = 1 for 2 ≤ i ≤ n},

and set, for a bounded function f on Ω_n,

〈f〉_(n) := (1/Z_n) 𝔼[f(S) exp(β H_n(g, S) − nβ²/2)],   Z_n := 𝔼[exp(β H_n(g, S) − nβ²/2)],

where H_n(g, γ) := Σ_{i=1}^n g(i, γ_i) and β > 0 is the inverse temperature. We refer to Comets, Shiga and Yoshida [3] for a review on directed polymers. It is known (see e.g. [2], [3]) that the so-called free energy

p(β) := lim_{n→∞} (1/n) log Z_n

exists almost surely and in L¹; p(β) is a constant, and p(β) ≤ 0 by Jensen's inequality since EZ_n = 1. A central problem in the study of directed polymers is to determine the region {β > 0 : p(β) < 0}, also called the region of very strong disorder. It is an important problem: for instance, p(β) < 0 yields interesting information on the localization of the polymer itself. By using the FKG inequality, Comets and Yoshida [4] showed the monotonicity of β ↦ p(β); the problem is therefore to determine the critical value β_c := inf{β > 0 : p(β) < 0}. It has been shown by Imbrie and Spencer [8] that β_c > 0 for d ≥ 3 (its exact value remains unknown). Comets and Vargas [5] proved that (for a wide class of random environments)

β_c = 0 for d = 1.   (1.1)

Recently, Lacoin [10] skilfully used ideas developed for pinning models and solved the problem in the two-dimensional case: β_c = 0 for d = 2. Moreover, Lacoin [10] gave precise bounds on p(β) as β → 0 in both the one- and two-dimensional cases.
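As a quick numerical illustration (a sketch of ours, not part of the original argument), Z_n can be computed exactly for a sampled environment by a transfer-matrix recursion over lattice sites, and averaging log Z_n over independent environments estimates p_n(β) := (1/n) E log Z_n, which is ≤ 0 by Jensen's inequality. The dimension d = 1, the inverse temperature and the sample sizes below are arbitrary choices.

```python
import numpy as np

def log_Zn(beta, n, rng):
    """log of the normalized partition function Z_n (so that E Z_n = 1) for one
    sampled Gaussian environment in d = 1, via a transfer-matrix recursion."""
    z = np.zeros(2 * n + 1)          # weights Z_i(x) on sites x in {-n, ..., n}
    z[n] = 1.0                       # the walk starts at the origin
    for i in range(1, n + 1):
        g = rng.standard_normal(2 * n + 1)   # environment layer g(i, .)
        znew = np.zeros_like(z)
        znew[1:] += 0.5 * z[:-1]     # walk step x-1 -> x, probability 1/2
        znew[:-1] += 0.5 * z[1:]     # walk step x+1 -> x, probability 1/2
        z = znew * np.exp(beta * g - beta**2 / 2)   # normalized Gaussian weight
    return np.log(z.sum())

rng = np.random.default_rng(0)
n, beta, samples = 50, 1.0, 200
p_n = np.mean([log_Zn(beta, n, rng) for _ in range(samples)]) / n
print(f"estimate of p_n(beta): {p_n:.3f}")
```

With these parameters the estimate comes out strictly negative, in line with very strong disorder for every β > 0 in dimension one.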
In this note, we study this problem from the point of view of entropy (see also Birkner [1]). Let

e_n(β) := E[Z_n log Z_n] − (EZ_n) log(EZ_n) = E[Z_n log Z_n]

be the entropy associated to Z_n (recalling that EZ_n = 1).
Theorem 1.1. There is a numerical constant c_d > 0, depending only on d, such that the following assertions are equivalent: The proof of the implication (b) ⇒ (c) relies on a criterion for p(β) < 0 (cf. Fact 3.1 in Section 3) developed by Comets and Vargas [5] in a more general setting. Assertion (b) is easily checked in the one-dimensional case: in fact, we shall show in the sequel (cf. (3.7)) that in any dimension and for any β > 0, e_n(β) admits a lower bound of the order of the mean overlap 𝔼⊗𝔼[L_n(S¹, S²)], where S¹ and S² are two independent copies of S and L_n(γ, γ′) := Σ_{k=1}^n 1_{{γ_k = γ′_k}} is the number of common points of the two paths γ and γ′. It is well known that L_n(S¹, S²) is of order n^{1/2} when d = 1 and of order log n when d = 2. Therefore (b) holds in d = 1, and by the implication (b) ⇒ (c) we recover Comets and Vargas' result (1.1) in the one-dimensional Gaussian environment case.
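The stated orders of the mean overlap are easy to check by simulation. The following Monte Carlo sketch (the walk construction, sample sizes and seed are our arbitrary choices) estimates 𝔼⊗𝔼[L_n(S¹, S²)] in dimensions 1 and 2:

```python
import numpy as np

def sample_walk(d, n, rng):
    """One simple symmetric random walk path (S_1, ..., S_n) on Z^d."""
    axes = rng.integers(0, d, size=n)            # coordinate moved at each step
    signs = rng.choice([-1, 1], size=n)          # direction of the step
    steps = np.zeros((n, d), dtype=int)
    steps[np.arange(n), axes] = signs
    return np.cumsum(steps, axis=0)

def mean_overlap(d, n, pairs, rng):
    """Monte Carlo estimate of the mean overlap E x E [L_n(S^1, S^2)]."""
    total = 0
    for _ in range(pairs):
        s1, s2 = sample_walk(d, n, rng), sample_walk(d, n, rng)
        total += np.sum(np.all(s1 == s2, axis=1))   # common points at times 1..n
    return total / pairs

rng = np.random.default_rng(1)
n = 400
o1 = mean_overlap(1, n, 2000, rng)   # order sqrt(n) expected in d = 1
o2 = mean_overlap(2, n, 2000, rng)   # order log(n) expected in d = 2
print(o1, o2)
```

Already at n = 400 the one-dimensional overlap (order √n) is roughly an order of magnitude larger than the two-dimensional one (order log n).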

Zero temperature case
When β → ∞, the problem of directed polymers boils down to the problem of first-passage percolation. Let

H*_n := max_{γ∈Ω_n} H_n(g, γ),

where as before Ω_n = {γ : γ_i ∈ ℤ^d, |γ_1| = 1, |γ_i − γ_{i−1}| = 1 for 2 ≤ i ≤ n} is the set of nearest-neighbor paths of length n. The problem is to characterize the paths γ which maximize H_n(g, γ); see Johansson [9] for the solution in the case of Poisson points. We limit our attention here to some explicit bounds on H*_n. An easy subadditivity argument (see Lemma 2.2) shows that H*_n/n converges to a constant c*_d, both a.s. and in L¹.
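For a sampled environment, H*_n can be computed exactly by a max-plus (Viterbi-type) dynamic program over lattice sites, the zero-temperature analogue of the transfer-matrix recursion. The following is only an illustrative sketch of ours in d = 1, with arbitrary parameters:

```python
import numpy as np

def h_star(n, rng):
    """H*_n = max over nearest-neighbor paths of sum_i g(i, gamma_i) in d = 1,
    computed by a max-plus (Viterbi-type) recursion over sites."""
    h = np.full(2 * n + 1, -np.inf)      # best weight ending at each site
    h[n] = 0.0                           # paths start at the origin
    for i in range(1, n + 1):
        g = rng.standard_normal(2 * n + 1)   # environment layer g(i, .)
        hnew = np.full_like(h, -np.inf)
        hnew[1:] = np.maximum(hnew[1:], h[:-1])   # arrive from x-1
        hnew[:-1] = np.maximum(hnew[:-1], h[1:])  # arrive from x+1
        h = hnew + g                      # collect the Gaussian weight
    return h.max()

rng = np.random.default_rng(2)
n = 200
vals = [h_star(n, rng) for _ in range(50)]
c_est = np.mean(vals) / n                # estimate of E[H*_n] / n
print(f"E[H*_n]/n estimate: {c_est:.3f}")
```

The estimate sits strictly below √(2 log 2) ≈ 1.18, the value suggested by comparison with an independent family in d = 1.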
By Slepian's inequality ([12]),

E[H*_n] ≤ √n · E[max_{γ∈Ω_n} Y_γ],

where (Y_γ)_{γ∈Ω_n} is a family of i.i.d. centered Gaussian variables of variance 1. Since #Ω_n = (2d)^n, it is a standard exercise from extreme value theory that E[max_{γ∈Ω_n} Y_γ] = (1 + o(1)) √(2n log(2d)). Hence c*_d ≤ √(2 log(2d)). It is a natural problem to ask whether this inequality is strict; in fact, a strict inequality means that the Gaussian family {H_n(g, γ), γ ∈ Ω_n} is sufficiently correlated to be significantly different from an independent family, exactly as in the problem of determining whether p(β) < 0. We prove that the inequality is strict by establishing a numerical bound; here Φ(x) := (2π)^{−1/2} ∫_{−∞}^x e^{−u²/2} du is the distribution function of a standard Gaussian variable. The proofs of Theorems 1.1 and 1.2 are presented in two separate sections.
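The extreme-value step can be illustrated numerically: the maximum of N i.i.d. standard Gaussians concentrates near √(2 log N), so a family of (2d)^n independent variables of variance n has maximum of order n√(2 log(2d)). A minimal check of ours (sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2**18                                  # number of i.i.d. standard Gaussians
# average the maximum over 16 independent batches to damp the Gumbel fluctuation
m = rng.standard_normal((16, N)).max(axis=1).mean()
ratio = m / np.sqrt(2 * np.log(N))
print(f"max / sqrt(2 log N) = {ratio:.3f}")
```

The empirical ratio is slightly below 1, reflecting the negative second-order term (log log N + log 4π)/(2√(2 log N)) in the Gaussian extreme-value expansion.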

Proof of Theorem 1.2
We begin with several preliminary results. Recall first the following concentration-of-measure property of Gaussian processes (see Ibragimov et al. [7]).
where ||x|| denotes the Euclidean norm of x.
converges to c*_d := sup_{n≥1} E[H*_n]/n, a.s. and in L¹.
Proof: We first prove the concentration inequality. Define a function F : ℝ^m → ℝ by F(g) := max_{γ∈Ω_n} H_n(g, γ). By the Cauchy-Schwarz inequality, |F(g) − F(g′)| ≤ √n ||g − g′||, i.e. F is Lipschitz with constant √n. By the Gaussian concentration inequality of Fact 2.1, we get (2.2). Now we prove that n ↦ E[H*_n] is superadditive: for n, k ≥ 1, let γ* ∈ Ω_n be a path such that H_n(g, γ*) = H*_n; extending γ* by the best nearest-neighbor continuation of length k and conditioning on σ{g(i, ·), i ≤ n}, we get E[H*_{n+k}] ≥ E[H*_n] + E[H*_k].
Proof: Let τ_{n,x} be the time-space shift on the environment: (τ_{n,x} g)(i, y) := g(n + i, x + y). Writing for simplification H*_{n,x} := max_{γ∈Ω_n : γ_n = x} H_n(g, γ), we have for any n, k,

H*_{n+k} = max_x [H*_{n,x} + max_{γ∈Ω_k} H_k(τ_{n,x} g, γ)],

and for each x the shifted maximum is independent of σ{g(i, ·), i ≤ n} and distributed as H*_k. Then for any λ ∈ ℝ, iterating this decomposition, we get E[e^{λH*_{jn}}] ≤ e^{jφ_n(λ)}, j, n ≥ 1, λ ∈ ℝ, where φ_n(λ) := log Σ_x E[e^{λH*_{n,x}}].
Hence ζ n ≤ c * d + a and the lemma follows.
In fact, the case E_x[Z_m(x) log Z_m(x)] = 0 follows from Remark 3.5 in Comets and Vargas [5]. We have the claimed identity for all m ≥ 1, where the probability measure Q = Q^{(β)} is defined by

dQ|_{𝒢_n} := Z_n dP|_{𝒢_n}, ∀ n ≥ 1,

with 𝒢_n := σ{g(i, x), i ≤ n, x ∈ ℤ^d}, as desired. Let µ be the standard Gaussian measure on ℝ^m. The logarithmic Sobolev inequality (cf. Gross [6], Ledoux [11]) says that for any smooth f : ℝ^m → ℝ,

Ent_µ(f²) := ∫ f² log f² dµ − (∫ f² dµ) log(∫ f² dµ) ≤ 2 ∫ |∇f|² dµ.

Using the above inequality, we obtain Lemma 3.3. Let S¹ and S² be two independent copies of S. We have (3.5) and (3.6). Proof: Take f := Z_n^{1/2} and note that Z_n = f²(g) with g = (g(i, x), 1 ≤ i ≤ n, |x| ≤ i). Applying the log-Sobolev inequality yields the first estimate (3.5).
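To make the logarithmic Sobolev inequality concrete, here is a one-dimensional numerical check (an illustration of ours, using Gauss-Hermite quadrature for expectations under the standard Gaussian law). Exponential functions f(x) = e^{λx/2} are extremal: both sides of the inequality coincide.

```python
import numpy as np

# probabilists' Gauss-Hermite nodes/weights: weight function exp(-x^2/2)
x, w = np.polynomial.hermite_e.hermegauss(80)
w = w / np.sqrt(2 * np.pi)          # normalize so the weights integrate N(0,1)

def expect(vals):
    """E[h(g)] for a standard Gaussian g, h evaluated at the nodes."""
    return np.dot(w, vals)

lam = 0.7                            # illustrative choice of lambda
f = np.exp(lam * x / 2)              # f(x) = exp(lam x / 2), so f^2 = exp(lam x)
fp = (lam / 2) * f                   # derivative f'(x)

f2 = f**2
ent = expect(f2 * np.log(f2)) - expect(f2) * np.log(expect(f2))  # Ent_mu(f^2)
rhs = 2 * expect(fp**2)                                          # 2 E |f'|^2
print(ent, rhs)                      # the two sides agree for exponentials
```

For this family one computes Ent_µ(f²) = (λ²/2)e^{λ²/2} = 2∫|f′|² dµ exactly, so the quadrature values match to high precision.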
The other assertion follows from Gaussian integration by parts: for a standard Gaussian variable g and any differentiable function ψ such that both gψ(g) and ψ′(g) are integrable, we have E[gψ(g)] = E[ψ′(g)].
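The integration-by-parts identity is easy to verify numerically; ψ(t) = t³ below is our illustrative choice, for which both sides equal E[g⁴] = 3E[g²] = 3:

```python
import numpy as np

# probabilists' Gauss-Hermite quadrature: exact for polynomial integrands
x, w = np.polynomial.hermite_e.hermegauss(60)
w = w / np.sqrt(2 * np.pi)    # weights for expectations under N(0,1)

psi = lambda t: t**3          # test function, with psi'(t) = 3 t^2
lhs = np.dot(w, x * psi(x))   # E[g psi(g)] = E[g^4] = 3
rhs = np.dot(w, 3 * x**2)     # E[psi'(g)] = 3 E[g^2] = 3
print(lhs, rhs)
```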
Elementary computations based on the above formula yield (3.6); the details are omitted. If P and Q are two probability measures on (Ω, ℱ), the relative entropy of Q with respect to P is defined by

H(Q | P) := E_Q[log(dQ/dP)],

where the expression is understood to be infinite if Q is not absolutely continuous with respect to P, or if the logarithm of the derivative is not integrable with respect to Q. The following entropy inequality is well known: for any event A with P(A) > 0,

Q(A) log(1/P(A)) ≤ H(Q | P) + log 2.

This inequality is useful only if Q(A) ∼ 1. Recall (3.3) for the definition of Q. Note that for any δ > 0, Q(Z_n ≥ δ/(1 + δ)) ≥ 1/(1 + δ); it follows that the event {Z_n ≥ δ/(1 + δ)} has P-probability bounded below accordingly. On the other hand, the concentration of measure (cf. [2]) says that

P(|(1/n) log Z_n − p_n(β)| > u) ≤ exp(−nu²/(2β²)), ∀ u > 0,

where p_n(β) := (1/n) E[log Z_n].
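The entropy inequality can be sanity-checked numerically. We use the standard form Q(A) ≤ (H(Q | P) + log 2)/log(1/P(A)) (an assumption of this sketch), with Bernoulli measures and the event A = {1}:

```python
import numpy as np

def rel_entropy(q, p):
    """Relative entropy H(Q | P) for Q = Bernoulli(q), P = Bernoulli(p)."""
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

p, q = 0.01, 0.6          # P(A) small while Q(A) is fairly large
lhs = q                   # Q(A) with A = {1}
rhs = (np.log(2) + rel_entropy(q, p)) / np.log(1 / p)
print(lhs, rhs)           # lhs <= rhs
```

The bound is quite tight in this regime, which is why it is informative precisely when Q(A) is close to 1 and P(A) is small.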