Some percolations involving the Gaussian free fields

Abstract: Consider an infinite, connected, locally finite graph with vertex set V. Intuitively, a simple point process on V with attractive properties should percolate more easily than a Bernoulli point process with the same marginals. Although it seems too much to hope that this holds in general, we confirm the intuition on several examples involving Gaussian free fields and permanental free fields.


Introduction
Consider a non-oriented, infinite, connected, locally finite graph G, with vertex set V and edge set E. Given a family of Bernoulli variables (Y_x, x ∈ V), one may ask whether the random subgraph of G with vertex set {x ∈ V : Y_x = 1} and edge set {[x, y] ∈ E : Y_x = 1 and Y_y = 1} contains an infinite connected component. In short, does {x ∈ V : Y_x = 1} percolate?
There is a general answer to this question when the variables Y_x, x ∈ V, are i.i.d. Set p = P[Y_x = 1]. There exists a critical probability p_c^site(G) in [0, 1] such that for p > p_c^site(G), {x ∈ V : Y_x = 1} percolates a.s., and for p < p_c^site(G), {x ∈ V : Y_x = 1} a.s. does not.
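The dichotomy above can be illustrated numerically. The following sketch (ours, not part of the paper; all function names are illustrative) simulates Bernoulli site percolation on a finite n × n box of Z^2 and uses left-to-right crossings as a finite-size proxy for percolation; crossings are frequent well above p_c^site(Z^2) ≈ 0.593 and rare well below it.

```python
# Illustration (ours): Bernoulli site percolation on a finite n x n box of Z^2,
# with left-right crossings as a finite-size proxy for the infinite model.
import random

def open_sites(n, p, rng):
    """Each site of the n x n box is open (Y_x = 1) independently with prob. p."""
    return {(i, j) for i in range(n) for j in range(n) if rng.random() < p}

def has_left_right_crossing(n, sites):
    """DFS through open nearest-neighbour sites from the left to the right side."""
    stack = [(i, 0) for i in range(n) if (i, 0) in sites]
    seen = set(stack)
    while stack:
        i, j = stack.pop()
        if j == n - 1:
            return True
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if (ni, nj) in sites and (ni, nj) not in seen:
                seen.add((ni, nj))
                stack.append((ni, nj))
    return False

def crossing_frequency(n, p, trials, seed=0):
    rng = random.Random(seed)
    return sum(has_left_right_crossing(n, open_sites(n, p, rng))
               for _ in range(trials)) / trials

freq_super = crossing_frequency(30, 0.80, 50)   # well above p_c: frequent crossings
freq_sub = crossing_frequency(30, 0.40, 50)     # well below p_c: rare crossings
```

On a finite box the transition is of course smoothed out; sharpness only appears in the infinite-volume limit.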
When the Bernoulli variables are not independent, this question is much more difficult to solve. As an example of dependent Bernoulli variables, take Y_x = 1_{η_x > h}, x ∈ Z^d, where h is a fixed real level and (η_x, x ∈ Z^d) is a centered Gaussian process. One particular case has been intensively studied: the case when η is the Gaussian free field on Z^d for d ≥ 3, which means that η is a centered Gaussian process with the following covariance:

E[η_x η_y] = g(x, y) = Σ_{n≥0} P_x[U_n = y],  x, y ∈ Z^d,

where (U_n)_{n≥0} is a simple random walk on Z^d. By definition, this covariance is the Green function of (U_n)_{n≥0}. There exists a critical level h_* such that for h < h_*, {x ∈ Z^d : η_x > h} percolates a.s., while for h > h_*, it a.s. does not. Bricmont, Lebowitz and Maes [5] have shown that h_* ≥ 0 and that moreover in dimension 3: h_* < ∞. Then Rodriguez and Sznitman [20] have shown that h_* < ∞ in any dimension and that h_* > 0 in high dimension. Finally Drewitz, Prévost and Rodriguez [8] have shown that in any dimension h_* > 0 (i.e. P(η_0 > h_*) < 1/2). In [8], the authors suggest that their result could be the consequence of the following conjecture:

{x ∈ Z^d : η_x > h} percolates for every h such that P[η_0 > h] > p_c^site(Z^d),  (1.1)

based on the intuition that positive correlation should help in forming clusters, and hence an infinite cluster. Indeed the Gaussian free field η is positively correlated in the sense that its covariance is positive. But thanks to Pitt [19], this fact implies the much stronger following property called "positive association":

E[F(η)H(η)] ≥ E[F(η)] E[H(η)],  (1.2)

for any couple (F, H) of increasing functionals on functions from V into R (increasing with respect to each coordinate). For example, one has for every x_1, .., x_n in Z^d:

P[η_{x_1} > h, .., η_{x_n} > h] ≥ Π_{i=1}^n P[η_{x_i} > h],

which legitimates (1.1).
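The Green function appearing in the covariance can be estimated numerically. The sketch below (ours, purely illustrative; all names are ours) estimates g(0, 0) for the simple random walk on Z^3 by Monte Carlo, i.e. the expected number of visits to the origin, which is finite by transience (the known value is ≈ 1.516) and is in particular ≥ 1.

```python
# Illustration (ours): Monte Carlo estimate of the Green function g(0,0) of
# simple random walk on Z^3, i.e. the expected number of visits to the origin
# (counting time 0). Transience of the walk makes this quantity finite.
import random

def visits_to_origin(steps, rng):
    """Count visits to 0 (including time 0) of a simple random walk on Z^3,
    truncated at a large finite horizon."""
    x = y = z = 0
    visits = 1
    for _ in range(steps):
        d = rng.randrange(6)
        if d == 0: x += 1
        elif d == 1: x -= 1
        elif d == 2: y += 1
        elif d == 3: y -= 1
        elif d == 4: z += 1
        else: z -= 1
        if x == y == z == 0:
            visits += 1
    return visits

def green_00_estimate(walks=1500, steps=2500, seed=1):
    rng = random.Random(seed)
    return sum(visits_to_origin(steps, rng) for _ in range(walks)) / walks

g00 = green_00_estimate()   # ≈ 1.5; in particular g(0,0) >= 1
```

The truncation at a finite horizon only removes a negligible tail of returns, since the return probabilities decay like n^{-3/2}.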
We test this intuition on another positively associated process: (|η_x|, x ∈ Z^d), the absolute value of the Gaussian free field. Indeed (η_x^2, x ∈ Z^d) is infinitely divisible (see [11]) and is hence positively associated in the sense of (1.2) thanks to [6].
Rodriguez [21] has proved that there exists h_c < ∞ such that for h > h_c, a.s. the set {x ∈ Z^d : |η_x| > h} does not percolate. One obviously has h_* ≤ h_c; consequently 0 < h_c < ∞. In view of the above intuition, one would expect that:

{x ∈ Z^d : |η_x| > h} percolates for every h such that P[|η_0| > h] > p_c^site(Z^d).

We have the following result: {x ∈ Z^d : |η_x| > h} percolates for every h such that P[|N| > h] > p_c^site(Z^d), where N is a standard Gaussian variable, which gives the corresponding lower bound on h_c. Since E[η_0^2] = g(0, 0) ≥ 1, the above result is weaker than its intuition. But as shown in section 2 (Theorem 2.1), this result is "universal", in the sense that it is available for every centered Gaussian process associated to a symmetric transient homogeneous Markov chain on any infinite connected graph. In Theorem 2.1 the jumps of the Markov chain are not limited to nearest neighbors. One can extend Theorem 2.1 from the absolute value of Gaussian free fields to permanental free fields. This extension consists in relaxing the assumption of symmetry on the associated Markov chain. We recall that the permanental free fields are also positively associated [12]. The result is presented in section 2 (Theorem 2.2).
Finally, in section 3, we extend the result of Bricmont, Lebowitz and Maes [5] to transient simple symmetric random walks on any regular graph. This extension had already been noticed by Abächerli and Sznitman (Proposition A.2 in [1]). In [9], Drewitz, Prévost and Rodriguez go further by showing that h_* > 0 for a large class of graphs. The interest of our proof lies in its use of a basic isomorphism theorem of Dynkin type.

Extension to all Gaussian and Permanental free fields
Let G be a non-oriented, locally finite, infinite connected graph determined by (V, E), and let U = (U_n)_{n≥0} be a transient homogeneous Markov chain on V, with transition matrix P = (P(x, y))_{(x,y)∈V×V}. The Markov chain U is allowed to jump from x to y even when [x, y] is outside E; the paths of (U_n)_{n≥0} are thus not necessarily paths in G. Actually U lives on the graph G(U) with vertex set V and edge set E(U) = {[x, y] : x, y ∈ V, P(x, y) + P(y, x) > 0}. We draw attention to a result of Benjamini and Hermon (Theorem 2 in [4]) according to which, even if U is irreducible, transient and E(U) = E (i.e. G(U) = G), the simple random walk on G might not be transient. More precisely, if U has the additional property of covering G with positive probability, then the simple random walk on G is recurrent.
Since we are dealing only with homogeneous Markov chains, we will omit the term "homogeneous" from now on.
Theorem 2.1. Let G be an infinite connected graph with a locally finite vertex set V and edge set E. Let h_G be the nonnegative number such that:

P[|N| > h_G] = p_c^site(G),

where N is a standard Gaussian variable. Then for every h < h_G and every centered Gaussian field (η_x)_{x∈V} whose covariance is the Green function of a transient symmetric Markov chain on V, the subgraph of G with edge set {[x, y] ∈ E : |η_x| > h and |η_y| > h} has a.s. an infinite connected component.
In case G is such that p site c (G) = 1, Theorem 2.1 does not bring any information on the percolation properties of the absolute value of the Gaussian free fields. One can extend Theorem 2.1 from absolute value of Gaussian free fields to permanental free fields. This extension consists in relaxing the assumption of symmetry for the associated Markov chain.
We first recall that a permanental process (ϕ(x), x ∈ V) with index β > 0 and kernel k = (k(x, y), (x, y) ∈ V × V) is a nonnegative process whose finite-dimensional Laplace transforms satisfy, for every x_1, x_2, .., x_n in V and every α_1, .., α_n ≥ 0:

E[exp(−Σ_{i=1}^n α_i ϕ(x_i))] = det(I + αK)^{−β},

where α is the diagonal matrix with diagonal entries (α_i)_{1≤i≤n}, I is the n × n identity matrix and K is the matrix (k(x_i, x_j))_{1≤i,j≤n}.
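The determinant identity can be checked numerically in the classical Gaussian case. The sketch below (ours, not from the paper) takes β = 1/2 and a symmetric positive-definite 2 × 2 kernel K, and compares the closed form det(I + αK)^{−1/2} with a Monte Carlo estimate of the Laplace transform of (η_1^2/2, η_2^2/2); we use the normalization in which the index-1/2 permanental vector is the half-square of the Gaussian, as conventions differ between references by factors of 2.

```python
# Numerical sanity check (ours) of the Laplace-transform determinant identity
# for beta = 1/2 with kernel K = [[1, c], [c, 1]], using phi_i = eta_i^2 / 2
# for (eta_1, eta_2) centered Gaussian with covariance K.
import math
import random

c = 0.6                        # correlation of the 2 x 2 kernel K
a1, a2 = 0.3, 0.5              # diagonal entries of the matrix alpha

# det(I + alpha K), written out by hand for the 2 x 2 case.
det = (1 + a1) * (1 + a2) - a1 * a2 * c * c
closed_form = det ** (-0.5)    # det(I + alpha K)^(-beta), beta = 1/2

# Monte Carlo estimate of E[exp(-a1*phi_1 - a2*phi_2)], phi_i = eta_i^2 / 2.
rng = random.Random(7)
n, acc = 200_000, 0.0
s = math.sqrt(1 - c * c)
for _ in range(n):
    n1, n2 = rng.gauss(0, 1), rng.gauss(0, 1)
    e1, e2 = n1, c * n1 + s * n2          # (e1, e2) has covariance K
    acc += math.exp(-0.5 * (a1 * e1 * e1 + a2 * e2 * e2))
monte_carlo = acc / n
```

The bounded integrand makes the Monte Carlo error very small, so the two values agree to a few decimal places.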
Note that the kernel of a permanental process is not unique.
In case β = 1/2 and k is symmetric positive semi-definite, (ϕ_x, x ∈ V) equals in law (η_x^2, x ∈ V), where (η_x, x ∈ V) is a centered Gaussian process with covariance k. Consider a Markov chain with state space V and finite Green function (g(x, y), (x, y) ∈ V × V). For every β > 0, there exists a permanental process with index β and kernel (g(x, y), (x, y) ∈ V × V) (see [12]). The permanental processes obtained in this way can be called, by analogy with the Gaussian free fields, permanental free fields. Note that the permanental free fields are infinitely divisible and hence positively associated (by using again [6]).

Theorem 2.2. Let G be an infinite connected graph with a locally finite vertex set V and edge set E. Let h_G be the nonnegative number such that:

P[|N| > h_G] = p_c^site(G),

where N is a standard Gaussian variable. Then for every h < h_G^2 and every permanental field (ϕ_x)_{x∈V} with index 1/2 admitting for kernel the Green function of a transient Markov chain on V, the subgraph of G with edge set {[x, y] ∈ E : ϕ_x > h and ϕ_y > h} has a.s. an infinite connected component.
Proof of Theorem 2.1. Let (η_x, x ∈ V) be a centered Gaussian process whose covariance is the Green function of a transient symmetric Markov chain U = (U_n)_{n≥0} on G. The law of U is characterized by its transition matrix P = (P(x, y))_{(x,y)∈V^2}. By assumption: P(x, y) = P(y, x) for all x, y ∈ V. Denote by g = (g(x, y), (x, y) ∈ V^2) its Green function, i.e.

g(x, y) = Σ_{n≥0} P_x[U_n = y] = E_x[Σ_{n≥0} 1_{U_n = y}].
Note that for every x in V: g(x, x) ≥ 1.
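The relevance of this observation can be seen already in one dimension, in a sketch of ours (not from the paper): if η is a centered Gaussian variable with variance v = g(x, x) ≥ 1, the coupling η = √v · N with N standard Gaussian gives |η| ≥ |N| pointwise, hence P[|η| > h] ≥ P[|N| > h] for every h ≥ 0.

```python
# One-dimensional illustration (ours) of why g(x,x) >= 1 matters: a centered
# Gaussian with variance v >= 1 has |eta| stochastically larger than |N| for
# N standard Gaussian, as the tails below confirm.
import math

def abs_gauss_tail(h, v=1.0):
    """P[|eta| > h] for eta ~ N(0, v), via the complementary error function."""
    return math.erfc(h / math.sqrt(2.0 * v))

v = 1.5    # stands for some g(x, x) >= 1; the value is purely illustrative
dominated = all(abs_gauss_tail(h, v) >= abs_gauss_tail(h) for h in (0.5, 1.0, 2.0))
```

This is exactly the mechanism exploited below, made multidimensional by the Karlin–Rinott inequality.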
As usual, one can associate to U a continuous-time Markov chain X = (X_t)_{t≥0} by spending at each site of V an exponential time with parameter 1, independently of the jumps, which are then performed according to P to leave the site. Then X is a transient symmetric Markov process with the same Green function as U, admitting a local time process (L_t^x, x ∈ V, t ≥ 0). Fix a finite subset J = {x_1, x_2, .., x_n} of V and set: G = (g(x_i, x_j))_{1≤i,j≤n}. We now derive an alternative expression for G. To do so we reproduce an argument used in a particular case in [11] (proof of Theorem 2.1).
Under P_{x_i}, denote by σ the first hitting time by X of J \ {x_i}:

σ = inf{t ≥ 0 : X_t ∈ J \ {x_i}}.

The time σ may be infinite; in that case the value of X_σ is a cemetery point. Set, for 1 ≤ i, j ≤ n:

b_ij = E_{x_i}[L_σ^{x_j}]  and  q_ij = P_{x_i}[X_σ = x_j].

Thanks to the Markov property we have:

g(x_i, x_j) = b_ij + Σ_{k=1}^n q_ik g(x_k, x_j).

Let B and Q be the matrices defined by: B = (b_ij)_{1≤i,j≤n} and Q = (q_ij)_{1≤i,j≤n}. The above computation shows that G = B + QG, equivalently: B = (I − Q)G. Note that by definition of σ, b_ij = 0 for i ≠ j and q_ii = 0, so that B is diagonal. Since X spends an exponential time with parameter 1 at each site before leaving it, we have b_ii ≥ 1 for every i = 1, .., n. Consequently B is invertible, and hence so are (I − Q) and G. It then follows from a Gaussian inequality of Karlin and Rinott [16] that:

(|N_{x_1}|, .., |N_{x_n}|) ≺ (|η_{x_1}|, .., |η_{x_n}|),

where (N_x)_{x∈V} is a family of i.i.d. standard Gaussian variables and the symbol "≺" denotes a relation of stochastic domination.
Since the above stochastic domination is true for every finite subset J, by a simple limiting argument one obtains:

(|N_x|, x ∈ V) ≺ (|η_x|, x ∈ V).

Consequently, by Strassen's theorem, there exists a coupling (|η|, |Ñ|) of |η| and |N| such that for every x in V: |η_x| ≥ |Ñ_x|. One hence has, for every h ≥ 0:

P[{x ∈ V : |η_x| > h} percolates] ≥ P[{x ∈ V : |N_x| > h} percolates].

For h < h_G, the i.i.d. field on the right-hand side satisfies P[|N_x| > h] > p_c^site(G), so {x ∈ V : |N_x| > h} percolates a.s., which ends the proof of Theorem 2.1.

Proof of Theorem 2.2. We now use an extension of the result of Karlin and Rinott (Theorem 2.1 in [16]) to permanental vectors, established by Marcus and Rosen [17]. They call it the permanental inequality. For simplicity, we state it in the particular case we are interested in.
Permanental inequality. Let φ be a permanental vector with index 1/2 admitting for kernel a non-singular matrix K = (K_ij)_{1≤i,j≤n}. Assume that K^{−1} = I − Q with Q_ii = 0 for every i = 1, .., n. Then:

(N_1^2, .., N_n^2) ≺ (φ_1, .., φ_n),

where the variables N_i, i = 1, .., n, are i.i.d. centered real standard Gaussian variables.
One obtains:

(b_11 N_{x_1}^2, .., b_nn N_{x_n}^2) ≺ (φ_{x_1}, .., φ_{x_n}),

where (N_x)_{x∈V} is a family of i.i.d. standard Gaussian variables. Since b_ii ≥ 1 for every i = 1, .., n, one has

(N_{x_1}^2, .., N_{x_n}^2) ≺ (φ_{x_1}, .., φ_{x_n}),

which leads, as in the proof of Theorem 2.1, to the conclusion.

Remark 2.3. Actually Theorem 2.2 holds for any index β > 0. More precisely, define h_{G,β} as the number such that: P[Γ_β > h_{G,β}] = p_c^site(G), where Γ_β is a gamma random variable with shape parameter β and scale parameter 1 (i.e. with density x^{β−1} e^{−x} Γ(β)^{−1} 1_{x≥0}). Then for every h < h_{G,β} and every permanental process (ϕ_x, x ∈ V) with index β admitting for kernel the Green function of a transient random walk on G: {x ∈ V : ϕ_x > h} has a.s. an infinite connected component with respect to G. This is obtained similarly, thanks to the general version of Marcus and Rosen's permanental inequality [17].
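The level h_{G,β} of the remark can be computed numerically once p_c^site(G) is known. The sketch below (ours, purely illustrative) treats the tractable case β = 1/2, where the gamma tail has the closed form P[Γ_{1/2} > h] = erfc(√h), and solves P[Γ_{1/2} > h] = p by bisection; the value p = 0.25 stands in for p_c^site(G) and is ours.

```python
# Sketch (ours): computing the level of Remark 2.3 in the case beta = 1/2,
# where the gamma(1/2, 1) tail is P[Gamma_{1/2} > h] = erfc(sqrt(h))
# (substitute x = t^2 in the integral of x^{-1/2} e^{-x} / sqrt(pi)).
import math

def gamma_half_tail(h):
    """P[Gamma_{1/2} > h] for the gamma distribution with shape 1/2, scale 1."""
    return math.erfc(math.sqrt(h))

def level_for(p, lo=0.0, hi=50.0, iters=100):
    """Bisection: the tail is strictly decreasing in h, so the root is unique."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if gamma_half_tail(mid) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

h_level = level_for(0.25)   # the level solving erfc(sqrt(h)) = 0.25
```

For general β no closed form for the tail is available, and one would replace erfc by a numerical evaluation of the regularized incomplete gamma function.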

Critical level sets for the Gaussian free fields
The graphs that we are considering are always non-oriented, locally finite, infinite and connected. We assume that they are subsets of R^d for some d ≥ 1. The next proposition extends the result of Bricmont et al. [5] from the simple symmetric random walk on Z^d to any transient simple symmetric random walk on an infinite graph. By simple symmetric random walk on a graph G = (V, E), one means a homogeneous Markov chain whose transition matrix P = (P(x, y))_{(x,y)∈V^2} satisfies: P(x, y) = P(y, x); P(x, y) > 0 if [x, y] ∈ E; and P(x, y') = P(x, y'') for any y', y'' such that [x, y'], [x, y''] are in E. The existence of such a process on G implies that G is regular, in the sense that its vertices all have the same degree. One can hence assume, without changing the matrix P, that the edges all have the same length. Such a graph is said to be transient when the simple symmetric random walk on G is transient.
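The defining properties of the simple symmetric random walk can be checked mechanically on a small example. The sketch below (ours, not from the paper) builds the transition matrix on the complete graph K4, which is 3-regular, and verifies symmetry, uniformity over neighbours, and stochasticity.

```python
# Sketch (ours): transition matrix of the simple symmetric random walk on the
# complete graph K4 (3-regular), checking the three defining properties:
# P(x,y) = P(y,x); P(x,y) > 0 iff [x,y] in E; P(x,y') = P(x,y'') for neighbours.
edges = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)}

def neighbours(x):
    return [y for y in range(4) if y != x and (min(x, y), max(x, y)) in edges]

def P(x, y):
    nbrs = neighbours(x)
    return 1.0 / len(nbrs) if y in nbrs else 0.0

symmetric = all(P(x, y) == P(y, x) for x in range(4) for y in range(4))
uniform = all(len({P(x, y) for y in neighbours(x)}) == 1 for x in range(4))
stochastic = all(abs(sum(P(x, y) for y in range(4)) - 1.0) < 1e-12
                 for x in range(4))
```

On a non-regular graph the same construction (uniform choice among neighbours) would fail the symmetry condition P(x, y) = P(y, x), which is why regularity is forced.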
Given a graph G = (V, E) and a real valued process (η x , x ∈ V), to mean that the subgraph of G with vertex set {x ∈ V : η x > h} and edge set {[x, y] ∈ E : η x > h and η y > h} contains an infinite connected subset of G, we just write: {η > h} percolates.
Proposition 3.1. Let G = (V, E) be a transient regular graph as above, and let (η_x, x ∈ V) be a centered Gaussian field whose covariance is the Green function of the simple symmetric random walk on G. Then for every h < 0, a.s. {η > h} percolates; in other words, h_* ≥ 0.
Proof. Denote by (g(x, y), (x, y) ∈ V^2) the Green function of the simple symmetric random walk on G. Let (η_x, x ∈ V) be a centered Gaussian field with covariance (g(x, y), (x, y) ∈ V^2). We extend (η_x, x ∈ V) to the whole graph G. To do so, we make use of the Brownian motion B on the graph G. We refer to Varopoulos [23], Chacon and Baxter [3], Barlow, Pitman and Yor [2] or Enriquez and Kifer [13] for various ways to construct B. This process can be roughly described as follows.
Starting from a point in the interior of an edge of G, B moves along this edge as a real Brownian motion until it reaches one of the edge's endpoints, call it x_o. At this time B chooses uniformly an edge coming out of x_o, independently of the past, and moves like a real-valued Brownian motion on the chosen edge until it reaches one of its endpoints, and so on.
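A discrete caricature of this motion can be simulated. The sketch below (ours, purely illustrative; all names are ours) runs a nearest-neighbour walk on a star graph with three arms glued at a vertex, choosing an arm uniformly at the vertex as B does, and checks that by symmetry each arm end is reached first about one third of the time.

```python
# Sketch (ours): nearest-neighbour random-walk caricature of the Brownian
# motion B on a 3-armed star graph with arms of length L glued at a vertex 0.
# Inside an arm the walker moves like a simple walk; at the vertex it picks
# one of the 3 arms uniformly, mimicking B's uniform edge choice.
import random

def first_arm_reached(L, rng):
    """Run until some arm end (position L on an arm) is hit; return its index."""
    arm, k = None, 0              # k = distance from the central vertex
    while True:
        if k == 0:
            arm = rng.randrange(3)    # at the vertex: uniform arm choice
            k = 1
        else:
            k += 1 if rng.random() < 0.5 else -1
            if k == L:
                return arm

def arm_frequencies(L=10, trials=3000, seed=3):
    rng = random.Random(seed)
    counts = [0, 0, 0]
    for _ in range(trials):
        counts[first_arm_reached(L, rng)] += 1
    return [c / trials for c in counts]

freqs = arm_frequencies()    # each entry should be close to 1/3
```

Scaling the step size while refining the arms would converge to the Brownian motion on the metric star graph, the single-vertex model used below for the local-time continuity argument.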
Since the simple symmetric random walk on G is transient, B is transient. Denote by g̃ = (g̃(x, y), (x, y) ∈ G × G) the Green function of B. The restriction of g̃ to V × V coincides with g. Then define the extension (η̃_x, x ∈ G) of (η_x, x ∈ V) to G by E[η̃_x η̃_y] = g̃(x, y), for x, y in G. One chooses η̃ independent of B.
One defines a distance d on G by using the Lebesgue distance inside any given edge and by defining the distance between two vertices of V as the minimal sum of lengths of edges needed to connect one to the other.
The process B admits a local time process (L_t^x, x ∈ G, t ≥ 0) which is continuous with respect to d × Leb(R_+). Indeed, the local time process is obviously continuous at each point of G \ V. What about continuity at a point of V? Since this is a local question, it is equivalent to study the continuity at 0 of the local time process of a Brownian motion on a graph with one single vertex 0 and n infinite edges coming out of 0, each with equal probability of being chosen when starting from 0. According to Theorem 2.1 (2.18) in [15], the local time process at 0 of this process is continuous. Consequently the local time process of B is continuous on G. By Theorem 1 in [18], we hence know that (η̃_x, x ∈ G) is continuous with respect to d.

Fix an element a of V. Conditionally on (B_0 = a), the laws of (L_∞^x, x ∈ G) and (η̃_x, x ∈ G) can be connected thanks to a so-called isomorphism theorem established in [10], as follows: for every r ≠ 0,

(L_∞^x + (1/2)(η̃_x + r)^2, x ∈ G) has the same law as ((1/2)(η̃_x + r)^2, x ∈ G) under the measure (1 + η̃_a/r) dP,  (3.1)

where on the left-hand side B and η̃ are independent. Fix r > 0. Since B is transient, its range is an unbounded connected subset of G on which L_∞^x > 0; hence by (3.1), under the measure (1 + η̃_a/r) dP there is an infinite connected subset of G on which |η̃_x + r| > 0. Let C be an infinite connected subset of G such that |η̃_x + r| > 0 for all x ∈ C. Then, because of the continuity of η̃, either η̃_x < −r for every x ∈ C, or η̃_x > −r for every x ∈ C. Using the symmetry of η̃, one deduces in both cases that {η̃ > −r} percolates with positive probability, and hence a.s. Since r > 0 is arbitrary, this proves Proposition 3.1.

To show Proposition 3.1, one could also have used Sznitman's interlacement isomorphism theorem [22] instead of (3.1). The interest of (3.1) lies in the fact that it involves more elementary notions.