An isomorphism theorem for random interlacements

We consider continuous-time random interlacements on a transient weighted graph. We prove an identity in law relating the field of occupation times of random interlacements at level u to the Gaussian free field on the weighted graph. This identity is closely linked to the generalized second Ray-Knight theorem, and uniquely determines the law of occupation times of random interlacements at level u.


Introduction
In this note we consider continuous-time random interlacements on a transient weighted graph E. We prove an identity in law, which relates the field of occupation times of random interlacements at level u to the Gaussian free field on E. The identity can be viewed as a kind of generalized second Ray-Knight theorem, see [2], [4], and characterizes the law of the field of occupation times of random interlacements at level u.
We now describe our results and refer to Section 1 for details. We consider a countable, locally finite, connected graph with vertex set E, endowed with non-negative symmetric weights c_{x,y} = c_{y,x}, x, y ∈ E, which are positive exactly when x, y are distinct and {x, y} is an edge of the graph. We assume that the induced discrete-time random walk on E is transient. Its transition probability is defined by

(0.1)  p_{x,y} = c_{x,y} / λ_x, where λ_x = Σ_{y∈E} c_{x,y}, for x, y ∈ E.
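To make (0.1) concrete, here is a small numerical sketch (our illustration, with an arbitrary toy weight matrix, not taken from the text):

```python
import numpy as np

# Toy symmetric weights c_{x,y} on a 4-vertex graph: zero on the
# diagonal and on non-edges, positive on edges.
c = np.array([[0., 1., 2., 0.],
              [1., 0., 1., 1.],
              [2., 1., 0., 3.],
              [0., 1., 3., 0.]])

lam = c.sum(axis=1)        # lambda_x = sum of the weights at x
p = c / lam[:, None]       # p_{x,y} = c_{x,y} / lambda_x, as in (0.1)

# p is stochastic, and the walk is reversible with respect to lambda:
# lambda_x p_{x,y} = c_{x,y} = c_{y,x} = lambda_y p_{y,x}.
assert np.allclose(p.sum(axis=1), 1.0)
assert np.allclose(lam[:, None] * p, lam[None, :] * p.T)
```

Reversibility with respect to λ is what makes the Green function below symmetric.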
In essence, continuous-time random interlacements consist of a Poisson point process on a certain space of doubly infinite E-valued trajectories marked by their duration at each step, modulo time-shift. A non-negative parameter u plays the role of a multiplicative factor of the intensity of this Poisson point process, which is defined on a suitable canonical space (Ω, A, P). The field of occupation times of random interlacements at level u is then defined for x ∈ E, u ≥ 0, ω ∈ Ω, by (see (1.8) for the precise expression)

(0.2)  L_{x,u}(ω) = (1/λ_x) × the total duration spent at x by the trajectories modulo time-shift with label at most u in the cloud ω.

The Gaussian free field on E is the other ingredient of our isomorphism theorem. Its canonical law P^G on R^E is such that

(0.3)  under P^G, the canonical field ϕ_x, x ∈ E, is a centered Gaussian field with covariance E^{P^G}[ϕ_x ϕ_y] = g(x, y), for x, y ∈ E,

where g(·,·) stands for the Green function attached to the walk on E, see (1.3). The main result of this note is the next theorem:

Theorem 0.1. For every u ≥ 0,

(0.4)  (1/2 ϕ_x² + L_{x,u})_{x∈E} under P ⊗ P^G, has the same law as (1/2 (ϕ_x + √(2u))²)_{x∈E} under P^G.

This theorem provides for each u an identity in law very much in the spirit of the so-called generalized second Ray-Knight theorems, see Theorem 1.1 of [2] or Theorem 8.2.2 of [4]. Remarkably, although we are in a transient set-up, (0.4) corresponds to the recurrent case in the context of generalized Ray-Knight theorems. Let us underline that (0.4) uniquely determines the law of (L_{x,u})_{x∈E} under P, as the consideration of Laplace transforms readily shows. We also refer to Remark 3.1 for a variation of (0.4).
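As a quick consistency check (ours, not in the original text), taking first moments on both sides of the identity of Theorem 0.1, in the form (1/2 ϕ_x² + L_{x,u}) =law (1/2 (ϕ_x + √(2u))²), already pins down the mean of the occupation field:

```latex
% First moments of the two sides, using E^{P^G}[\varphi_x] = 0,
% E^{P^G}[\varphi_x^2] = g(x,x), and the independence of \varphi and
% L_{\cdot,u} under P \otimes P^G:
\mathbb{E}\otimes E^{P^G}\Big[\tfrac12\,\varphi_x^2 + L_{x,u}\Big]
   = \tfrac12\,g(x,x) + \mathbb{E}[L_{x,u}],
\qquad
E^{P^G}\Big[\tfrac12\big(\varphi_x+\sqrt{2u}\,\big)^2\Big]
   = \tfrac12\,g(x,x) + u .
% Matching the two expressions forces \mathbb{E}[L_{x,u}] = u, for all x \in E.
```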
The proof of Theorem 0.1 involves an approximation argument for the law of (L_{x,u})_{x∈E} stated in Theorem 2.1, which is of independent interest. This approximation has a similar flavor to what appears at the end of Section 4.5 of [7], when giving a precise interpretation of random interlacements as "loops going through infinity", see also [3], p. 85. The combination of Theorem 2.1 and the generalized second Ray-Knight theorem readily yields Theorem 0.1. As an application of Theorem 0.1, we give a new proof of Theorem 5.1 of [6] concerning the large-u behavior of (L_{x,u})_{x∈E}, see Theorem 4.1.
We now explain how this note is organized.
In Section 1, we provide precise definitions and recall useful facts. Section 2 develops the approximation procedure for (L_{x,u})_{x∈E}. We give two proofs of the main Theorem 2.1, and an extension appears in Remark 2.2. The short Section 3 contains the proof of Theorem 0.1, and a variation of (0.4) in Remark 3.1. In Section 4, we present an application to the study of the large-u behavior of (L_{x,u})_{x∈E}, see Theorem 4.1.

Notation and useful results
In this section we provide additional notation and recall some definitions and useful facts related to random walks, potential theory, and continuous-time interlacements.
We consider the spaces W_+ and W of infinite, and doubly infinite, E × (0, ∞)-valued sequences, such that the E-valued sequences form an infinite, respectively doubly infinite, nearest-neighbor trajectory spending finite time in any finite subset of E, and such that the (0, ∞)-valued components have an infinite sum in the case of W_+, and infinite "forward" and "backward" sums, when restricted to positive and negative indices, in the case of W.
We write Z_n, σ_n, with n ≥ 0, or n ∈ Z, for the respective E- and (0, ∞)-valued coordinates on W_+ and W. We denote by P_x, x ∈ E, the law on W_+, endowed with its canonical σ-algebra, under which Z_n, n ≥ 0, is distributed as simple random walk starting at x, and σ_n, n ≥ 0, are i.i.d. exponential variables with parameter 1, independent from the Z_n, n ≥ 0. We denote by E_x the corresponding expectation. Further, when ρ is a measure on E, we write P_ρ for the measure Σ_{x∈E} ρ(x) P_x, and E_ρ for the corresponding expectation.
We denote by X_t, t ≥ 0, the continuous-time random walk on E, with constant jump rate 1, defined for t ≥ 0, w ∈ W_+, by

(1.1)  X_t(w) = Z_k(w), when σ_0(w) + ⋯ + σ_{k−1}(w) ≤ t < σ_0(w) + ⋯ + σ_k(w), for some k ≥ 0

(by convention the term bounding t from below vanishes when k = 0).
Given U ⊆ E, we write H_U = inf{t ≥ 0; X_t ∈ U}, H̃_U = inf{t > 0; X_t ∈ U, and X_s ≠ X_0 for some s ∈ (0, t)}, and T_U = inf{t ≥ 0; X_t ∉ U}, for the entrance time in U, the hitting time of U, and the exit time from U. We denote by g_U(·,·) the Green function of the walk killed when exiting U:

(1.2)  g_U(x, y) = E_x[∫_0^{T_U} 1{X_s = y} ds] / λ_y, for x, y ∈ E.

The function g_U(·,·) is known to be symmetric and finite (due to the transience assumption we have made). When U = E, no killing takes place (i.e. T_U = ∞), and we simply write

(1.3)  g(x, y) = g_E(x, y), for x, y ∈ E,

for the Green function.
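The killed Green function can be computed by plain linear algebra; here is a sketch on the toy graph from above (our construction; on a finite graph, killing outside a proper subset U plays the role of the transience assumption). Since the holding times have mean 1, the expected time spent at y before T_U is the (x, y) entry of (I − P_U)^{−1}, and g_U divides by λ_y:

```python
import numpy as np

# Toy weights as before; U = {0, 1, 2} is a proper subset, so the
# walk killed when exiting U spends finite expected time in U.
c = np.array([[0., 1., 2., 0.],
              [1., 0., 1., 1.],
              [2., 1., 0., 3.],
              [0., 1., 3., 0.]])
lam = c.sum(axis=1)
p = c / lam[:, None]

U = [0, 1, 2]
p_U = p[np.ix_(U, U)]      # sub-stochastic: mass is lost when exiting U

# g_U(x,y) = E_x[time at y before T_U] / lambda_y, cf. (1.2).
g_U = np.linalg.inv(np.eye(len(U)) - p_U) / lam[U][None, :]

assert np.allclose(g_U, g_U.T)   # g_U is symmetric
assert (g_U > 0).all()           # and positive, U being connected here
```

The symmetry assertion is the numerical counterpart of the reversibility λ_x p_{x,y} = λ_y p_{y,x}.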
Given a finite subset K of U, the equilibrium measure and capacity of K relative to U are defined by

(1.4)  e_{K,U}(x) = λ_x P_x[H̃_K > T_U] 1{x ∈ K}, for x ∈ E, and

(1.5)  cap_U(K) = Σ_{x∈K} e_{K,U}(x).

When U = E, we simply drop U from the notation, and refer to e_K and cap(K) as the equilibrium measure and the capacity of K. Further, the probability to enter K before exiting U can be expressed as

(1.6)  P_x[H_K < T_U] = Σ_{y∈K} g_U(x, y) e_{K,U}(y), for x ∈ E.

We now turn to the description of continuous-time random interlacements on the transient weighted graph E. We write W* for the space W (introduced at the beginning of this section) modulo time-shift, i.e. W* = W/∼, where for w, w′ ∈ W,

(1.7)  w ∼ w′ when w(·) = w′(· + k) for some k ∈ Z.

We denote by π*: W → W* the canonical map, and endow W* with the σ-algebra consisting of sets with inverse image under π* belonging to the canonical σ-algebra of W.
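The sweeping identity for the hitting probability can be checked numerically; the following sketch (our toy example again, with U = {0, 1, 2} and K = {0}) computes both sides independently, the hitting probability via a harmonic-extension linear system and the equilibrium measure via one-step decomposition:

```python
import numpy as np

c = np.array([[0., 1., 2., 0.],
              [1., 0., 1., 1.],
              [2., 1., 0., 3.],
              [0., 1., 3., 0.]])
lam = c.sum(axis=1)
p = c / lam[:, None]
U, K = [0, 1, 2], [0]
UK = [1, 2]                                  # U \ K

# h(x) = P_x[H_K < T_U]: h = 1 on K, h = 0 off U, harmonic on U \ K.
A = np.eye(len(UK)) - p[np.ix_(UK, UK)]
h = np.zeros(4)
h[K] = 1.0
h[UK] = np.linalg.solve(A, p[np.ix_(UK, K)].sum(axis=1))

# Equilibrium measure e_{K,U}(x) = lambda_x P_x[tilde H_K > T_U], x in K:
# after one jump from x, the walk must exit U before returning to K.
e = np.zeros(4)
for x in K:
    e[x] = lam[x] * (p[x] @ (1.0 - h))
cap = e.sum()                                 # cap_U(K)

# Green function g_U as in the previous snippet.
g_U = np.linalg.inv(np.eye(len(U)) - p[np.ix_(U, U)]) / lam[U][None, :]

# Check the identity P_x[H_K < T_U] = sum_{y in K} g_U(x,y) e_{K,U}(y).
for i, x in enumerate(U):
    assert abs(sum(g_U[i, U.index(y)] * e[y] for y in K) - h[x]) < 1e-10
assert cap > 0
```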
The continuous-time interlacement point process is a Poisson point process on the space W* × R_+. Its intensity measure has the form ν(dw*) du, where ν is the σ-finite measure on W* such that for any finite subset K of E, the restriction of ν to the subset of W* consisting of those w* for which the E-valued trajectory modulo time-shift enters K is the image under π* of the finite measure on W under which, conditionally on Z_0 = x (with x charged by e_K), the forward trajectory (Z_n)_{n≥0}, the backward trajectory (Z_{−n})_{n≥0}, and the durations (σ_n)_{n∈Z} are independent, respectively distributed as simple random walk starting at x, as simple random walk starting at x conditioned never to return to K, and as a doubly infinite sequence of i.i.d. exponential variables with parameter 1.
As in [6], the canonical continuous-time random interlacement point process is then constructed similarly to (1.16) of [5], or (2.10) of [8], on a space (Ω, A, P), with ω = Σ_{i≥0} δ_{(w*_i, u_i)} denoting a generic element of Ω. A central object of interest in this note is the random field of occupation times of random interlacements at level u ≥ 0:

(1.8)  L_{x,u}(ω) = (1/λ_x) Σ_{i≥0} 1{u_i ≤ u} × (total duration spent at x by w*_i), for x ∈ E, ω ∈ Ω.

The Laplace transform of (L_{x,u})_{x∈E} has been computed in [6]. More precisely, given a function f on E such that Σ_{y∈E} g(x, y) |f(y)| < ∞ for all x ∈ E, we write

(1.9)  Gf(x) = Σ_{y∈E} g(x, y) f(y), for x ∈ E.

One knows from Theorem 2.1 and Remark 2.4 4) of [6] that when V: E → R_+ has finite support, one has the identity

(1.11)  E[exp{−⟨V, L_{·,u}⟩}] = exp{−u ⟨V, (I + GV)^{−1} 1_E⟩}, for u ≥ 0,

where the notation ⟨f, g⟩ stands for Σ_{x∈E} f(x) g(x), when f, g are functions on E such that the previous sum converges absolutely, and 1_E denotes the constant function identically equal to 1 on E.
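For orientation (our addition), the first two moments of the occupation field can be read off this Laplace transform: taking the identity in the form E[exp{−⟨V, L_{·,u}⟩}] = exp{−u⟨V, (I + GV)^{−1} 1_E⟩} recalled above, replacing V by tV and expanding in t gives:

```latex
% Expand (I + tGV)^{-1} = \sum_{k\ge 0} (-t)^k (GV)^k in the exponent:
\log \mathbb{E}\big[e^{-t\langle V, L_{\cdot,u}\rangle}\big]
  = -\,u\,t\,\langle V, 1_E\rangle
    + u\,t^2\,\langle V, G V\, 1_E\rangle + O(t^3).
% Comparing with -t\,\mathbb{E}[\langle V, L\rangle]
%   + \tfrac{t^2}{2}\operatorname{Var}(\langle V, L\rangle) + O(t^3),
% and letting V charge one or two points, one finds
\mathbb{E}[L_{x,u}] = u,
\qquad
\operatorname{Cov}\big(L_{x,u}, L_{y,u}\big) = 2u\, g(x,y).
```

These moments are consistent with the first-moment check after Theorem 0.1.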

An approximation scheme for random interlacements
In this section we develop an approximation scheme for (L_{x,u})_{x∈E} in terms of the fields of local times of certain finite state space Markov chains. The main result is Theorem 2.1, but Remark 2.2 states a by-product of the approximation scheme concerning the random interlacement at level u. This has a similar flavor to Theorem 4.17 of [7], where one gives one of several possible meanings to random interlacements viewed as "Markovian loops going through infinity", see also Le Jan [3], p. 85.
We consider a non-decreasing sequence U_n, n ≥ 1, of finite connected subsets of E, increasing to E, as well as some fixed point x* not belonging to E. We introduce the sets E_n = U_n ∪ {x*}, for n ≥ 1, and endow E_n with the weights c^n_{x,y}, x, y ∈ E_n, obtained by "collapsing U_n^c on x*", that is, for any n ≥ 1, and x, y ∈ U_n, we set

(2.1)  c^n_{x,y} = c_{x,y}, and c^n_{x,x*} = c^n_{x*,x} = Σ_{y′∈E\U_n} c_{x,y′},

and otherwise set c^n_{x,y} = 0 (i.e. c^n_{x*,x*} = 0). We also write

(2.2)  λ^n_x = Σ_{y∈E_n} c^n_{x,y}, for x ∈ E_n.

We tacitly view U_n as a subset of both E and E_n. We consider the canonical simple random walk in continuous time on E_n, attached to the weights c^n_{x,y}, x, y ∈ E_n, with jump rate equal to 1. We write X^n_t, t ≥ 0, for its canonical process, P^n_x for its canonical law starting from x ∈ E_n, and E^n_x for the corresponding expectation.
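The collapsing construction is a short matrix operation; here is a sketch on the toy graph from Section 1 (our illustration, with U_n = {0, 1, 2} and x* replacing the single outside vertex {3}):

```python
import numpy as np

c = np.array([[0., 1., 2., 0.],
              [1., 0., 1., 1.],
              [2., 1., 0., 3.],
              [0., 1., 3., 0.]])
Un, out = [0, 1, 2], [3]
m = len(Un)

cn = np.zeros((m + 1, m + 1))                 # E_n = U_n + {x*}; x* has index m
cn[:m, :m] = c[np.ix_(Un, Un)]                # weights inside U_n are kept
cn[:m, m] = c[np.ix_(Un, out)].sum(axis=1)    # c^n_{x,x*}: weight leaving U_n
cn[m, :m] = cn[:m, m]                         # symmetry; c^n_{x*,x*} = 0

# Collapsing preserves the total weight lambda_x at every x in U_n, so
# on U_n the chains X and X^n jump with the same rates until exiting U_n.
assert np.allclose(cn, cn.T)
assert np.allclose(cn[:m].sum(axis=1), c[Un].sum(axis=1))
```

The last assertion is the key point used below: starting in U_n, the walks on E and on E_n have the same evolution strictly before the exit time of U_n.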
The local time of this Markov chain is defined by

(2.3)  ℓ^{n,x}_t = (1/λ^n_x) ∫_0^t 1{X^n_s = x} ds, for x ∈ E_n and t ≥ 0.
The function t ≥ 0 → ℓ^{n,x}_t is continuous, non-decreasing, starts at 0, and P^n_y-a.s. tends to infinity, as t goes to infinity (the walk on E_n is irreducible and recurrent). By convention, when x ∈ E\U_n, we set ℓ^{n,x}_t = 0, for all t ≥ 0. We introduce the right-continuous inverse of ℓ^{n,x*}:

(2.4)  τ^n_u = inf{t ≥ 0; ℓ^{n,x*}_t > u}, for any u ≥ 0.
We are now ready for the main result of this section. We tacitly endow R^E with the product topology, and convergence in distribution, as stated below (and in the sequel), corresponds to convergence in law of all finite dimensional marginals.

Theorem 2.1. For any u ≥ 0,

(2.5)  (ℓ^{n,x}_{τ^n_u})_{x∈E} under P^n_{x*} converges in distribution to (L_{x,u})_{x∈E} under P, as n → ∞.

Proof. We give two proofs.
First proof: We denote by T the set of piecewise-constant, right-continuous, E ∪ {x*}-valued trajectories, which at a finite time reach x* and from that time onwards remain equal to x*. We endow T with its canonical σ-algebra.
Under P^n_{x*}, one has almost surely two infinite sequences R_ℓ, ℓ ≥ 1, and D_ℓ, ℓ ≥ 1, of successive returns R_ℓ to x* and departures D_ℓ from x*, which tend to infinity.
One introduces the random point measure Γ^n_u on T which collects the successive excursions of X^n_· (out of x* until first return to x*) that start before τ^n_u. By classical Markov chain excursion theory we know that

(2.8)  Γ^n_u is a Poisson point measure on T with intensity measure u P^n_{κ_n}[(X^n_{·∧T_{U_n}}) ∈ dw],

where T_{U_n} stands for the exit time of X^n_· from U_n and κ_n for the measure on U_n

(2.9)  κ_n(x) = c^n_{x,x*}, for x ∈ U_n.

When starting in U_n, the Markov chains X_· on E and X^n_· on E_n have the same evolution strictly before the exit time of U_n. Denoting by (X_·)_{0≤·<T_{U_n}} the random element of T which equals X_s, for 0 ≤ s < T_{U_n}, and x* for s ≥ T_{U_n}, we see that

(2.10)  for x ∈ U_n, (X^n_{·∧T_{U_n}}) under P^n_x has the same law as (X_·)_{0≤·<T_{U_n}} under P_x.

Let K be a finite subset of E, and assume n large enough so that K ⊆ U_n. We introduce the point measure on T obtained by selecting the excursions in the support of Γ^n_u that enter K, and only keeping track of their trajectory after they enter K, that is,

(2.11)  μ^n_{K,u} = Σ_{w ∈ supp Γ^n_u} 1{H_K(w) < ∞} δ_{θ_{H_K} w},

where θ_t, t ≥ 0, stands for the canonical shift on T, and we use similar notation on T as below (1.1). By (2.8), (2.10) it follows that

(2.12)  μ^n_{K,u} is a Poisson point measure on T with intensity measure u P_{ρ^n_K}[(X_·)_{0≤·<T_{U_n}} ∈ dw],

where ρ^n_K is the measure supported by K such that

(2.13)  ρ^n_K(x) = P_{κ_n}[H_K < T_{U_n}, X_{H_K} = x] = e_{K,U_n}(x), for x ∈ K,

where the last equality follows from (1.60) in Proposition 1.8 of [7]. Note that e_{K,U_n} and e_K are concentrated on K, and for x ∈ K, by (1.4),

(2.14)  e_{K,U_n}(x) decreases to e_K(x), as n → ∞.

Consider V: E → R_+ supported in K, and Φ: T → R_+, the map

Φ(w) = Σ_{x∈K} (V(x)/λ_x) ∫_0^∞ 1{w_s = x} ds, for w ∈ T.

The measure μ^n_{K,u} contains in its support the pieces of the trajectory X^n_· up to time τ^n_u where X^n_· visits K, see (2.11), and we have

(2.15)  E^n_{x*}[exp{−⟨V, ℓ^n_{τ^n_u}⟩}] = E^n_{x*}[exp{−⟨μ^n_{K,u}, Φ⟩}]
        = exp{−u E_{e_{K,U_n}}[1 − exp{−∫_0^{T_{U_n}} Σ_{x∈K} (V(x)/λ_x) 1{X_s = x} ds}]}
        →(n→∞) exp{−u E_{e_K}[1 − exp{−∫_0^∞ Σ_{x∈K} (V(x)/λ_x) 1{X_s = x} ds}]} = E[exp{−⟨V, L_{·,u}⟩}],

where we used (2.14) and the fact that T_{U_n} ↑ ∞, P_x-a.s., for x in E, for the limit in the last line, and a similar calculation as in (2.5) of [6] for the last equality. Since K and the function V: E → R_+, supported in K, are arbitrary, the claim (2.5) follows.
Second proof: We will now make direct use of (1.11). The argument is more computational, but also of interest. We consider K and V as above, as well as a positive number λ. We assume n large enough so that K ⊆ U_n. We further make a smallness assumption, (2.16), on the non-negative function V (supported in K). We define the operator G_n on R^{E_n} attached to the kernel g_n(·,·), introduced in (2.17), in a similar fashion to (1.9), where we have set V(x*) = 0, by convention, so that under (2.16) the operator I + G_n V is invertible.
We introduce the positive number a_n, where we recall that ℓ^{n,x}_t = 0, when x ∈ E\U_n. Using (2.93), (2.41), (2.71) of [7], or (8.44) and Remark 3.10.3 of Marcus-Rosen [4], we obtain an explicit expression for a_n. We then define the function h_n on E_n and the real number b_n in (2.20). We let G*_{U_n} be the operator on R^{E_n} attached to the kernel g_{U_n}(·,·) (on E_n × E_n), in a similar fashion to (1.9). By (2.17) and (2.20), we have (2.21), noting that the inverse appearing there is well defined by the same argument used below (2.17). By the second equality in (2.20) it follows that (2.22) holds, where we refer to below (1.11) for the notation, G_{U_n} is the operator on R^E attached to the kernel g_{U_n}(·,·) on E × E, and the last equality in (2.22) follows by writing the Neumann series for (I + G*_{U_n} V)^{−1} and (I + G_{U_n} V)^{−1}. We can now solve for b_n. Noting that a_n = h_n(x*) = 1 − b_n/λ, by (2.21), we find (2.23). Using the Neumann series for (I + G_{U_n} V)^{−1}, and applying dominated convergence together with the fact that g_{U_n}(·,·) ↑ g(·,·) on E × E, we obtain (2.24). Taking the identity (1.11) into account, we have shown that under (2.16),

(2.25)  lim_{n→∞} ∫_0^∞ e^{−λu} E^n_{x*}[exp{−⟨V, ℓ^n_{τ^n_u}⟩}] du = ∫_0^∞ e^{−λu} E[exp{−⟨V, L_{·,u}⟩}] du.
Note that when V: E → R_+ is supported in K and sup_{x∈E} GV(x) < 1, then (2.16) holds for λ large (depending on V). The expectation under the integral in the left-hand side of (2.25) is non-increasing in u, whereas the expectation under the integral in the right-hand side of (2.25) is continuous in u by (1.11). It then follows from [1], p. 193-194, that for V as above,

(2.26)  lim_{n→∞} E^n_{x*}[exp{−⟨V, ℓ^n_{τ^n_u}⟩}] = E[exp{−⟨V, L_{·,u}⟩}], for u ≥ 0.
This readily implies the tightness of the laws of (ℓ^{n,x}_{τ^n_u})_{x∈K} under P^n_{x*}, and uniquely determines the Laplace transform of their possible limit points, see Theorem 6.6.5 of [1]. Letting K vary, the claim (2.5) follows.
Remark 2.2. The approximation scheme introduced in this section can also be used to approximate the random interlacement at level u, as we now explain. We let I^n_u stand for the trace left on U_n by the walk on E_n up to time τ^n_u:

(2.27)  I^n_u = {x ∈ U_n; ℓ^{n,x}_{τ^n_u} > 0}.

By (2.12), (2.14), it follows that for any finite subset K of E and u ≥ 0,

(2.28)  lim_{n→∞} P^n_{x*}[I^n_u ∩ K = ∅] = exp{−u cap(K)} = P[I_u ∩ K = ∅],

where I_u stands for the random interlacement at level u, that is, the trace on E of the doubly infinite trajectories modulo time-shift in the Poisson cloud ω with label at most u. By an inclusion-exclusion argument, see for instance Remark 4.15 of [7] or Remark 2.2 of [5], it follows that, as n → ∞,

(2.29)  I^n_u under P^n_{x*} converges in distribution to I_u under P, for any u ≥ 0,

where the above distributions are viewed as laws on {0, 1}^E endowed with the product topology.

Proof of the isomorphism theorem
In this short section we combine Theorem 2.1 and the generalized second Ray-Knight theorem of [2] to prove Theorem 0.1.We also state a variation of (0.4) in Remark 3.1.
Proof of Theorem 0.1: For U ⊆ E we denote by P^{G,U} the law on R^E of the centered Gaussian field with covariance E^{G,U}[ϕ_x ϕ_y] = g_U(x, y), x, y ∈ E (in particular ϕ_x = 0, P^{G,U}-a.s., when x ∈ E\U). It follows from the generalized second Ray-Knight theorem, see Theorem 8.2.2 of [4], or Theorem 2.17 of [7], that for n ≥ 1, u ≥ 0, in the notation of Section 2,

(3.1)  (1/2 ϕ_x² + ℓ^{n,x}_{τ^n_u})_{x∈U_n} under P^n_{x*} ⊗ P^{G,U_n}, has the same law as (1/2 (ϕ_x + √(2u))²)_{x∈U_n} under P^{G,U_n}.

Since g_{U_n}(·,·) ↑ g(·,·), we see that P^{G,U_n} converges weakly to P^G (looking for instance at characteristic functions of finite dimensional marginals). Taking Theorem 2.1 into account, we thus see, letting n tend to infinity, that (0.4) holds.
Remark 3.1. Let us mention a variation on (0.4) of Theorem 0.1. By Theorem 1.1 of [2], one knows that for u ≥ 0, a ∈ R, n ≥ 1,

(3.2)  (1/2 (ϕ_x + a)² + ℓ^{n,x}_{τ^n_u})_{x∈U_n} under P^n_{x*} ⊗ P^{G,U_n}, has the same law as (1/2 (ϕ_x + √(a² + 2u))²)_{x∈U_n} under P^{G,U_n}.

Letting n tend to infinity, the same argument as above shows that for u ≥ 0, and a ∈ R,

(3.3)  (1/2 (ϕ_x + a)² + L_{x,u})_{x∈E} under P ⊗ P^G, has the same law as (1/2 (ϕ_x + √(a² + 2u))²)_{x∈E} under P^G.
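A quick check of this variation (our addition): assuming it takes the form (1/2 (ϕ_x + a)² + L_{x,u}) =law (1/2 (ϕ_x + √(a² + 2u))²), and using E[L_{x,u}] = u, the first moments again match:

```latex
\mathbb{E}\otimes E^{P^G}\Big[\tfrac12(\varphi_x+a)^2 + L_{x,u}\Big]
  = \tfrac12\,g(x,x) + \tfrac{a^2}{2} + u
  = E^{P^G}\Big[\tfrac12\big(\varphi_x + \sqrt{a^2+2u}\,\big)^2\Big],
% and at a = 0 one recovers the first-moment balance of (0.4).
```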

An application
We illustrate the use of Theorem 0.1 and show how one can study the large-u asymptotics of (L_{x,u})_{x∈E}, and in particular recover Theorem 5.1 of [6], see also Remark 5.2 of [6]. We denote by x_0 some fixed point of E.

Theorem 4.1.
As u → ∞,

(4.1)  (L_{x,u}/u)_{x∈E} converges in distribution to the constant field equal to 1,

(4.2)  ((L_{x,u} − u)/√(2u))_{x∈E} converges in distribution to (ϕ_x)_{x∈E} under P^G.

In particular, as u → ∞,

(4.3)  ((L_{x,u} − L_{x_0,u})/√(2u))_{x∈E} converges in distribution to (ϕ_x − ϕ_{x_0})_{x∈E} under P^G.
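These fluctuation statements can be read off the isomorphism theorem; here is a one-line sketch (ours), assuming the identity of Theorem 0.1 in the form (1/2 ϕ² + L_u) =law (1/2 (ϕ + √(2u))²):

```latex
% Subtract u and divide by \sqrt{2u} in the identity in law:
\Big(\frac{L_{x,u}-u}{\sqrt{2u}} + \frac{\varphi_x^2}{2\sqrt{2u}}\Big)_{x\in E}
\;\overset{\mathrm{law}}{=}\;
\Big(\varphi_x + \frac{\varphi_x^2}{2\sqrt{2u}}\Big)_{x\in E},
% with \varphi independent of L_{\cdot,u} on the left-hand side.
% The perturbations \varphi_x^2/(2\sqrt{2u}) vanish in probability as
% u \to \infty, so the finite-dimensional marginals of the fluctuation
% field converge to those of (\varphi_x)_{x\in E}, which is (4.2);
% (4.1) and (4.3) then follow directly.
```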