Representations of the Vertex Reinforced Jump Process as a mixture of Markov processes on $\mathbb{Z}^d$ and infinite trees

This paper concerns the Vertex Reinforced Jump Process (VRJP) and its representations as a Markov process in random environment. We show that all possible representations of the VRJP as a mixture of Markov processes can be expressed in a similar form, using a random potential and harmonic functions for an associated operator. This allows us to show that the VRJP on $\mathbb{Z}^d$ (with certain initial conditions) has a unique representation, by proving that an associated Martin boundary is trivial. Moreover, on infinite trees, we construct a family of representations that are all different when the VRJP is transient and the tree is $d$-regular (with $d\geq 3$).


Introduction
This paper concerns the Vertex Reinforced Jump Process (or VRJP) on infinite graphs and its representations as a Markov process in a random environment. In particular, we are interested in knowing if the VRJP admits several different representations, and what form they can take.
Let $G = (V, E)$ be a non-directed locally finite graph, i.e. each vertex $i \in V$ has finite degree. For $i, j \in V$, we write $i \sim j$ if $i$ and $j$ are neighbors, i.e. if $\{i, j\} \in E$. We endow $G$ with positive conductances $(W_e)_{e \in E}$, and denote $W_{i,j} = \mathbb{1}_{\{i,j\} \in E} W_{\{i,j\}}$. The VRJP on $G$, with respect to $W$, is the self-interacting random process $(Y_s)_{s \in \mathbb{R}_+}$ on $V$ defined as follows: the process starts at some vertex $i_0 \in V$ at time 0, and conditionally on the past at time $s$, jumps to a neighbor $j$ of $i = Y_s$ at rate $W_{i,j} L_j(s)$, where $L_j(s) = 1 + \int_0^s \mathbb{1}_{\{Y_u = j\}}\,du$.
In other words, as the local time $\int_0^s \mathbb{1}_{\{Y_u = i\}}\,du$ spent by the process at $i$ increases, the vertex $i$ becomes more attractive. This process was introduced in [8].
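As an informal illustration of this definition (a sketch with our own naming, not code from the paper), one can simulate the VRJP on a small finite graph. While the process sits at $i$, the rates $W_{i,j} L_j(s)$ to its neighbors are constant (only $L_i$ grows), so each holding time is exponential:

```python
import numpy as np

def simulate_vrjp(W, i0, n_jumps, rng):
    """Simulate n_jumps steps of the VRJP on the graph with conductance matrix W."""
    n = W.shape[0]
    L = np.ones(n)            # local times, L_j(0) = 1
    t, i, path = 0.0, i0, [i0]
    for _ in range(n_jumps):
        rates = W[i] * L      # jump rate to j is W[i,j] * L[j]; W[i,i] = 0
        total = rates.sum()
        tau = rng.exponential(1.0 / total)   # holding time at i
        L[i] += tau           # only the local time at the current vertex grows
        t += tau
        i = rng.choice(n, p=rates / total)   # pick the neighbor that rings first
        path.append(i)
    return t, L, path

# triangle graph with unit conductances
W = np.ones((3, 3)) - np.eye(3)
t, L, path = simulate_vrjp(W, 0, 1000, np.random.default_rng(0))
assert abs((L.sum() - 3) - t) < 1e-9   # accrued local time equals elapsed time
```

The final assertion checks the bookkeeping: the total local time accumulated above its initial value equals the elapsed time of the simulation.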
In [21], Sabot and Tarrès introduced a time change for the VRJP, defining the increasing function $D(s) = \sum_{i \in V} (L_i(s)^2 - 1)$ and setting $(Z_t)_{t \geq 0} = (Y_{D^{-1}(t)})_{t \geq 0}$. On finite graphs, this time-changed VRJP $Z$ started at a vertex $i_0$ is then a mixture of Markov processes, in the following sense: there exists a random field $(u_{i_0}(i))_{i \in V}$, whose distribution is explicit, such that the law of $Z$ is the same as that of a Markov process in a random environment given by jump rates $\frac{1}{2} W_{i,j} e^{u_{i_0}(j) - u_{i_0}(i)}$ from $i$ to $j$. The idea behind this time change is that the VRJP $(Y_s)_{s \geq 0}$ jumps faster and faster as the vertices become more attractive, and the time change $D$ is such that the jump times of $(Z_t)_{t \geq 0}$ are more stationary, which is necessary for it to be a mixture of Markov processes.
They also showed that the VRJP was related to another self-interacting process, the Edge Reinforced Random Walk (or ERRW), introduced in [6] by Coppersmith and Diaconis. On finite graphs, thanks to a de Finetti type theorem for Markov chains (see [10]), it can be seen as a mixture of Markov chains. This interpretation of the ERRW as a mixture of random walks was studied in [16], [17], [13], [14], [3]. The link between VRJP and ERRW proven in [21] gives an explicit representation of the ERRW as a mixture of random walks on finite graphs.
Finally, the explicit distribution of the random field (u i0 (i)) i∈V is related to a statistical mechanics model: the supersymmetric hyperbolic sigma model. It was studied by Disertori, Spencer and Zirnbauer in [11] and [12], in which they showed localization and delocalization theorems for the field u i0 . This provided results on the recurrence and transience of the VRJP and ERRW on lattices Z d .
In [20], Sabot, Tarrès and Zeng showed that the distributions of the fields $u_{i_0}$, $i_0 \in V$, can be coupled, using a potential $\beta = (\beta_i)_{i \in V}$ on $V$ and a random Schrödinger operator associated with $\beta$. Let us denote by $H_\beta = 2\beta - W$ this random Schrödinger operator, i.e. the $|V| \times |V|$ symmetric matrix such that $(H_\beta)_{i,j} = 2\beta_i \mathbb{1}_{i=j} - W_{i,j}$ for $i, j \in V$, and by $G = (H_\beta)^{-1}$ the associated Green function. Then $u_{i_0}$ can be defined by $e^{u_{i_0}(i)} = G(i_0, i)/G(i_0, i_0)$. This representation using the $\beta$ field allows a generalization to infinite graphs: in [23], Sabot and Zeng used a similar potential $\beta$ on infinite graphs to show that the VRJP is still a mixture of Markov processes. If we still denote by $H_\beta = 2\beta - W$ the operator associated with $\beta$, we can define the Green function $\hat{G} = (H_\beta)^{-1}$ in a certain sense. Moreover, there exists $\psi$, an $H_\beta$-harmonic function on $V$ (i.e. $H_\beta \psi = 0$), obtained as the limit of a martingale. Then if we define $G(i,j) = \hat{G}(i,j) + \frac{1}{2\gamma}\psi(i)\psi(j)$, where $\gamma$ is a Gamma random variable independent of $\beta$, the time-changed VRJP $(Z_t)$ started at $i_0$ is still a mixture of Markov processes, with jump rates from $i$ to $j$ given by $\frac{1}{2} W_{i,j} \frac{G(i_0,j)}{G(i_0,i)}$. The term $\frac{1}{2\gamma}$ corresponds to a boundary term. Indeed, to show the result for infinite graphs, the VRJP is first studied on finite subgraphs, endowed with a wired boundary condition. This representation also gave results for the ERRW on infinite graphs. In [18], Poudevigne used a coupling of potentials $\beta$ on graphs with different weights $W$ to show monotonicity results, which gave the existence of a phase transition between recurrence and transience of the VRJP.
In the case of infinite trees, there is another representation of (Z t ) as a mixture of Markov processes. This representation is obtained by using free boundary conditions on restrictions of the tree, since the representation of the VRJP on finite trees has a simpler expression. The particular structure of the tree already gave results for the ERRW (see [16]) and the VRJP (see [9], [4]). We show that in some cases, the representation of the VRJP obtained this way on the tree differs from the one defined in [23]. This raises the question of the classification of all possible representations of the VRJP as a mixture of Markov processes. This issue is related to the behavior of the VRJP at infinity, which was also studied by Merkl, Rolles and Tarrès in [15], using the point of view of random interlacements.
In this paper, we give several partial answers to the question of the classification of representations of the VRJP. We first show that any such representation can be expressed in the same form as before, using a $\beta$ field: the random jump rates are given by $\frac{1}{2} W_{i,j} \frac{G(i_0,j)}{G(i_0,i)}$, where $G(i_0, i) = \hat{G}(i_0, i) + h(i)$, with $h$ a random $H_\beta$-harmonic function.
In the case where the graph is the lattice $\mathbb{Z}^d$, this allows us to show that for certain initial conductances $W$, there is only one representation of the VRJP as a mixture of Markov processes. This holds for strong reinforcement (i.e. small $W$), since the VRJP is then recurrent, but also for weak reinforcement (i.e. large $W$). In the latter case, we use a local limit theorem for random walks in random environment to show that the only $H_\beta$-harmonic functions are constants, by proving that the associated Martin boundary is trivial.
In the case where the graph is an infinite tree, we already know two different representations of the VRJP. Using new boundary conditions, we construct a family of representations that are all different if the tree is regular enough.

Previous results
Let $G = (V, E)$ be a finite connected non-directed graph, endowed with conductances $(W_e)_{e \in E}$. We describe $(W_e)_{e \in E}$ with a matrix $(W_{i,j})_{i,j \in V}$, where $W_{i,j} = \mathbb{1}_{\{i,j\} \in E} W_{\{i,j\}}$. In [21], Sabot and Tarrès proved that the time-changed VRJP on $G$ with respect to $W$ can be represented as a mixture of Markov processes, i.e. as a random walk in random environment. With Zeng, they showed in [20] that this environment can be related to a random Schrödinger operator $H_\beta$, constructed from a random potential $\beta = (\beta_i)_{i \in V}$, in the following way.
For $\beta \in \mathbb{R}^V$, we will denote by $H_\beta = 2\beta - W$ the $|V| \times |V|$ symmetric matrix such that $(H_\beta)_{i,j} = 2\beta_i \mathbb{1}_{i=j} - W_{i,j}$ for $i, j \in V$. Let us define the set $\mathcal{D}^W_V = \{\beta \in \mathbb{R}^V, H_\beta > 0\}$, where $H_\beta > 0$ means that the matrix $H_\beta$ is positive definite. Note that if $\beta \in \mathcal{D}^W_V$, then $\beta_i > 0$ for all $i \in V$. The following proposition describes the probability distribution of the random potential that will be used to represent the VRJP.

EJP 25 (2020), paper 108.

Proposition 2.1 ([20], Lemma 4 in [23]).
(i) $\nu^{W,\eta}_V$ is a probability distribution. Its Laplace transform is, for $\lambda \in \mathbb{R}_+^V$,
$$\int e^{-\langle \lambda, \beta \rangle}\, \nu^{W,\eta}_V(d\beta) = e^{-\sum_{i \sim j} W_{i,j}\left(\sqrt{(1+\lambda_i)(1+\lambda_j)} - 1\right) - \sum_{i \in V} \eta_i\left(\sqrt{1+\lambda_i} - 1\right)} \prod_{i \in V} \frac{1}{\sqrt{1+\lambda_i}}.$$
When $\eta = 0$, we will write $\nu^W_V = \nu^{W,0}_V$.
(ii) Let us denote by $d_G$ the graph distance in $G$. Under $\nu^{W,\eta}_V(d\beta)$, if $V_1, V_2 \subset V$ are such that $d_G(V_1, V_2) \geq 2$, then $(\beta_i)_{i \in V_1}$ and $(\beta_j)_{j \in V_2}$ are independent. We will say that the potential with distribution $\nu^W_V$ is 1-dependent.
Let $C_r(\mathbb{R}_+, V)$ be the space of right-continuous functions from $\mathbb{R}_+$ to $V$. This will be the space of trajectories of the random processes we will study in this paper. These processes will be described by probability distributions on $C_r(\mathbb{R}_+, V)$. Let us denote by $(Z_t)$ the canonical process in $C_r(\mathbb{R}_+, V)$, where $Z_t(\omega) = \omega(t)$ for $\omega \in C_r(\mathbb{R}_+, V)$. Moreover for $i_0 \in V$, let $P^{VRJP(i_0)}$ denote the distribution of the time-changed VRJP on $(G, W)$, in the exchangeable time scale described in the introduction. The following theorem describes how to represent this process as a mixture of Markov processes, using a random environment that can be constructed from the $\beta$ field under $\nu^W_V(d\beta)$.

Theorem 2.2 (Theorem 2 in [21], Theorem 3 in [20]). Let $G = (V, E)$ be a finite graph, endowed with conductances $W$. We fix a vertex $i_0 \in V$. For $\beta \in \mathcal{D}^W_V$, we denote by $G = (H_\beta)^{-1}$ the Green function associated with $\beta$, and by $P^{\beta, i_0}_x$ the distribution of the Markov jump process started at $x \in V$, with jump rate from $i$ to $j$ given by $\frac{1}{2} W_{i,j} \frac{G(i_0,j)}{G(i_0,i)}$. Then for all $i_0 \in V$, the time-changed VRJP on $(G, W)$, started at $i_0$, is a mixture of these Markov jump processes under the distribution $\nu^W_V(d\beta)$.

Let now $G = (V, E)$ be an infinite locally finite graph, endowed with conductances $W$, and let $(V_n)_{n \in \mathbb{N}}$ be an increasing exhausting sequence of finite connected subsets of $V$. For $n \in \mathbb{N}$, we introduce a new vertex $\delta_n$, and define a new graph $G^{(n)} = (\tilde{V}^{(n)}, \tilde{E}^{(n)})$, where $\tilde{V}^{(n)} = V_n \cup \{\delta_n\}$, and $\tilde{E}^{(n)}$ is obtained from $E_n = \{\{i,j\} \in E, i, j \in V_n\}$ by adding an edge $\{i, \delta_n\}$ for each $i \in V_n$ having a neighbor outside $V_n$. The graph $G^{(n)}$ is called the restriction of $G$ to $V_n$ with wired boundary condition. We endow this graph with the conductances $\tilde{W}^{(n)}$ defined by $\tilde{W}^{(n)}_{i,j} = W_{i,j}$ for $i, j \in V_n$, and $\tilde{W}^{(n)}_{i,\delta_n} = \sum_{j \notin V_n} W_{i,j} = \eta^{(n)}_i$. Let $\beta^{(n)} = (\beta^{(n)}_i)_{i \in \tilde{V}^{(n)}}$ be a random potential on the graph $G^{(n)}$ distributed according to $\nu^{\tilde{W}^{(n)}}_{\tilde{V}^{(n)}}$. In fact, for a fixed $n \in \mathbb{N}$ and any $n' \geq n$, the restrictions $\beta^{(n')}_{V_n}$ have the same distribution $\nu^{W^{(n)}, \eta^{(n)}}_{V_n}$, where $W^{(n)} = W_{V_n, V_n}$. By the Kolmogorov extension theorem, this allows the construction of a potential $\beta$ on the whole of $V$.

Proposition 2.4 (Proposition 1 in [23]). Let $G = (V, E)$ be an infinite locally finite graph, endowed with conductances $W$. There exists a unique probability distribution $\nu^W_V$ on $\mathbb{R}^V$ such that $\beta_{V_n} \sim \nu^{W^{(n)}, \eta^{(n)}}_{V_n}$ for all $n \in \mathbb{N}$.

The wired boundary condition is not only useful to define $\nu^W_V$ on infinite graphs, but also to link this distribution to representations of the VRJP, by applying Theorem 2.2 to the graph $G^{(n)}$. Indeed from Proposition 2.4, for any $n \in \mathbb{N}$, under $\nu^W_V(d\beta)$ we have $\beta_{V_n} \sim \nu^{W^{(n)}, \eta^{(n)}}_{V_n}$. Hence, from Proposition 2.3, we can extend $\beta_{V_n}$ into a potential $\beta^{(n)}$ distributed according to $\nu^{\tilde{W}^{(n)}}_{\tilde{V}^{(n)}}$. From Theorem 2.2, we know that $\beta^{(n)}$ gives a representation of the VRJP on $G^{(n)}$.

Definition 2.5. For $\beta \in \mathcal{D}^W_V$ and $n \in \mathbb{N}$, we define $\hat{G}^{(n)} = ((H_\beta)_{V_n, V_n})^{-1}$ and $\psi^{(n)} = \hat{G}^{(n)} \eta^{(n)}$.
It is possible, using a decomposition of the Green function as a sum over paths (see [23], or Proposition 3.4), to write
$$G^{(n)}(i,j) = \hat{G}^{(n)}(i,j) + G^{(n)}(\delta_n, \delta_n)\, \psi^{(n)}(i)\, \psi^{(n)}(j)$$
for $i, j \in V_n$. Under $\nu^W_V(d\beta)$, $G^{(n)}(\delta_n, \delta_n)$ is independent of $\beta_{V_n}$, and $1/(2 G^{(n)}(\delta_n, \delta_n))$ is always distributed according to a Gamma(1/2, 1) distribution (see Proposition 3.1 (ii)). The following theorem describes how taking $n \to \infty$ in this expression gives a representation of the VRJP on infinite graphs.

Theorem 2.6 (Theorem 1 in [23]).
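The decomposition of $G^{(n)}$ into $\hat{G}^{(n)}$ plus a rank-one boundary term is an instance of the Schur complement formula for block matrix inversion. The following numerical sketch (with toy values of $\beta$ and $\eta$ chosen by us so that $H_\beta$ is positive definite, not data from the paper) checks it on a three-vertex path with one wired boundary point $\delta$:

```python
import numpy as np

# V_n = {0,1,2} is a path graph; vertex 3 plays the role of δ_n.
W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
eta = np.array([0.0, 0.0, 1.0])          # boundary weights W_{i,δ}
beta = np.array([2.0, 2.5, 2.2])         # toy potential, H_beta positive definite
A = 2 * np.diag(beta) - W                # (H_β)_{V_n, V_n}

# Full operator on V_n ∪ {δ}; the diagonal entry at δ is an arbitrary choice.
H = np.zeros((4, 4))
H[:3, :3] = A
H[:3, 3] = H[3, :3] = -eta
H[3, 3] = 3.0

G = np.linalg.inv(H)                     # Green function on the wired graph
Ghat = np.linalg.inv(A)                  # Ĝ^{(n)}
psi = Ghat @ eta                         # ψ^{(n)} = Ĝ^{(n)} η^{(n)}

# G(i,j) = Ĝ(i,j) + G(δ,δ) ψ(i) ψ(j)  (Schur complement identity)
assert np.allclose(G[:3, :3], Ghat + G[3, 3] * np.outer(psi, psi))
```

The identity is exact linear algebra, so the check holds for any positive definite choice of the blocks.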
(ii) Let $\mathcal{F}_n$ be the $\sigma$-field generated by $\beta_{V_n}$. Then under $\nu^W_V(d\beta)$, for all $i \in V$, $\psi^{(n)}(i)$ is a nonnegative $(\mathcal{F}_n)$-martingale which converges almost surely to an integrable random variable $\psi(i)$.
(iii) Let $\gamma$ be a Gamma(1/2, 1) random variable independent of $\beta$, set $G(i,j) = \hat{G}(i,j) + \frac{1}{2\gamma}\psi(i)\psi(j)$, and denote by $P^{\beta, \gamma, i_0}_x$ the distribution of the Markov jump process started at $x \in V$, where the jump rate from $i$ to $j$ is $\frac{1}{2} W_{i,j} \frac{G(i_0,j)}{G(i_0,i)}$. Then the time-changed VRJP on $(G, W)$, started at $i_0$, is a mixture of these Markov jump processes under $\nu^W_V(d\beta, d\gamma)$.
(iv) For $\nu^W_V$-almost all $\beta$ and all $i_0 \in V$, we have:
• The Markov process with law $P^{\beta,\gamma,i_0}_x$ is recurrent if and only if $\psi(i) = 0$ for all $i \in V$.
• The Markov process with law $P^{\beta,\gamma,i_0}_x$ is transient if and only if $\psi(i) > 0$ for all $i \in V$.
Note that for $i_0 \in V$ fixed, in this representation of the VRJP started at $i_0$, the $\beta$ field cannot be expressed as a function of the random jump rates $\frac{W_{i,j}}{2} \frac{G(i_0,j)}{G(i_0,i)}$ that define the environment. However, we can define the $\tilde\beta$ field rooted at $i_0$, where $\tilde\beta_i$ is the rate of the exponential holding time at $i$ for the associated Markov process.

A common form for all representations
We still consider $G = (V, E)$ to be an infinite connected graph, locally finite and endowed with conductances $(W_{i,j})_{i,j \in V}$. Thanks to Theorem 2.6, we already know that the time-changed VRJP with distribution $P^{VRJP(i_0)}$ can be written as a mixture of Markov jump processes, using the distribution $\nu^W_V$. We will refer to this as the standard representation. We are now interested in other possible random environments that represent the VRJP in the same sense, and in whether they can be expressed in a form similar to the standard representation.
We will denote by $\mathcal{J}^E_V$ the set of jump rates on $G$, i.e. the set of families $r = (r_{i,j})_{i \sim j}$ of positive real numbers.

Definition 2.8. Let $R(dr)$ be a probability distribution on $\mathcal{J}^E_V$. For $i_0 \in V$ fixed, we will say that $R(dr)$ is the distribution of a random environment representing the time-changed VRJP started at $i_0$ if
$$P^{VRJP(i_0)}(\cdot) = \int P_r(\cdot)\, R(dr),$$
where for $r \in \mathcal{J}^E_V$, $P_r$ is the distribution of the Markov jump process started at $i_0$, with jump rate from $i$ to $j$ given by $r_{i,j}$. We will also say that $R(dr)$ defines a representation of $P^{VRJP(i_0)}$.
The following result tells us that in fact, any representation of the VRJP can be expressed in a similar form as the standard representation, using a β field as well as H β -harmonic functions.
For $i \in V$ and $r \in \mathcal{J}^E_V$, we define $r_i = \sum_{j \sim i} r_{i,j}$.

Theorem 2.9. Let $i_0 \in V$ be fixed, and let $R(dr)$ be the distribution of a random environment representing the time-changed VRJP with law $P^{VRJP(i_0)}$. We write $R(dr, d\gamma)$ for the distribution obtained by adjoining an independent Gamma(1/2, 1) random variable $\gamma$. Then under $R(dr, d\gamma)$, there exists a potential $\beta$ with $\beta \sim \nu^W_V$, and there exists a random $H_\beta$-harmonic function $h : V \to \mathbb{R}_+$, such that for all $i \sim j$,
$$r_{i,j} = \frac{1}{2} W_{i,j} \frac{G(i_0,j)}{G(i_0,i)},$$
where $G(i_0, i) = \hat{G}(i_0, i) + h(i)$ for $i \in V$, and $\hat{G}$ is the function of $\beta$ defined in Theorem 2.6.
In order to classify all representations of the VRJP, we now need to identify $H_\beta$-harmonic functions, and to determine which ones can appear in the expression of a representation, as in Theorem 2.9. Two interesting cases arise, depending on $(G, W)$: when the VRJP is almost surely recurrent, or almost surely transient.
In the first case, we can use the law of large numbers to show that the representation of the VRJP as a mixture of Markov processes is unique.

Proposition 2.10. If $(G, W)$ is such that the VRJP is almost surely recurrent, then the representation of the time-changed VRJP started at $i_0$ as a mixture of Markov processes is unique, i.e. if $R(dr)$ and $R'(dr)$ define representations of $P^{VRJP(i_0)}$, then $R(dr) = R'(dr)$.
Note that in this case, according to Theorem 2.6 (iv), under $\nu^W_V(d\beta)$, we have a.s. $\psi(i) = 0$ for all $i \in V$, and the jump rates in the standard representation are given by $\frac{1}{2} W_{i,j} \frac{\hat{G}(i_0,j)}{\hat{G}(i_0,i)}$. Therefore, the $H_\beta$-harmonic function associated with the unique representation (by Theorem 2.9) is $h \equiv 0$.
In the second case, i.e. when the VRJP is almost surely transient, we can introduce a random conductance model, associated with $\psi$.

Proposition 2.11.
(ii) We define the random conductances $c^\psi_{i,j} = W_{i,j} \psi(i)\psi(j)$ for all $i, j \in V$. Then the associated reversible random walk is a.s. transient.
(iii) Let $\Delta_\psi$ be the discrete Laplacian associated with the random conductances $c^\psi_{i,j}$. Then a function $\varphi : V \to \mathbb{R}_+$ is $\Delta_\psi$-harmonic if and only if $\psi\varphi$ is $H_\beta$-harmonic.

Remark 2.12. The introduction of the operator $\Delta_\psi$ allows a more convenient expression of representations in the transient case. Indeed, if $R(dr)$ defines a representation of $P^{VRJP(i_0)}$, Theorem 2.9 allows us to construct a $\beta$ field distributed according to $\nu^W_V$, and to express the jump rates $r_{i,j}$ using $\beta$ and an $H_\beta$-harmonic function $h$. According to Proposition 2.11 (iii), we have $h = \psi\varphi$, where $\varphi$ is a $\Delta_\psi$-harmonic function, i.e. harmonic for a transient random walk. As a result, $\varphi$ can be expressed using the Martin boundary associated with $\Delta_\psi$, as described below.
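To make the random conductance model concrete, the following sketch (our own toy values of $\psi$ on a four-vertex path, not data from the paper) builds the transition matrix of the reversible walk attached to the conductances $c^\psi_{i,j} = W_{i,j}\psi(i)\psi(j)$ and checks reversibility with respect to $\pi_i = \sum_j c^\psi_{i,j}$:

```python
import numpy as np

# Path graph 0-1-2-3 with unit conductances.
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = W[i + 1, i] = 1.0

psi = np.array([1.0, 0.8, 1.3, 0.9])   # hypothetical positive ψ values
c = W * np.outer(psi, psi)             # c^ψ_{i,j} = W_{i,j} ψ(i) ψ(j)
pi = c.sum(axis=1)                     # reversing measure π_i = Σ_j c^ψ_{i,j}
P = c / pi[:, None]                    # transition matrix of the walk

assert np.allclose(P.sum(axis=1), 1.0)                       # P is stochastic
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)     # π_i P_{ij} = π_j P_{ji}
```

Reversibility holds because $\pi_i P_{i,j} = c^\psi_{i,j}$ is symmetric in $(i,j)$, whatever the (positive) values of $\psi$.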
The notion of Martin boundary is a useful tool to represent harmonic functions with respect to a transient random walk on a graph $G = (V, E)$. Indeed, $V$ admits a boundary $\mathcal{M}$ such that $V \cup \mathcal{M}$ is compact for a certain topology, and there is a kernel $K : V \times \mathcal{M} \to \mathbb{R}_+$ such that any positive harmonic function $h$ can be written as $h(x) = \int_{\mathcal{M}} K(x, \alpha)\, \mu_h(d\alpha)$ for some finite measure $\mu_h$ on $\mathcal{M}$. $\mathcal{M}$ is called the Martin boundary of $V$ with respect to the random walk, and $K$ is the Martin kernel, which is defined using the Green kernel associated with the random walk. For more details on Martin boundaries, see Section 3.3.
In order to study representations of the VRJP in the transient case, we want to describe ∆ ψ -harmonic functions, according to Remark 2.12. We will therefore need to identify the Martin boundary M ψ associated with ∆ ψ . This will be possible when G is Z d , or an infinite tree.

Representations of the VRJP on Z d
Let us consider the case where $G$ is the lattice $\mathbb{Z}^d$, i.e. $G = (V, E)$ with $V = \mathbb{Z}^d$ and $E = \{\{x, y\}, |x - y| = 1\}$, where $|x|$ is the Euclidean norm of $x$. Let us endow $G$ with constant initial conductances $W$. Since this model is invariant under the isometries of $\mathbb{Z}^d$, we will only consider the VRJP started at 0. We can identify several situations in which the representation is unique. For $d = 2$, or if $W$ is small enough, the VRJP is almost surely recurrent (see [19], and Corollary 1 in [21]), so that the representation of $P^{VRJP(0)}$ is unique according to Proposition 2.10. For $d \geq 3$ and $W$ large enough, the VRJP is almost surely transient (see Corollary 3 in [21]), hence we can introduce the operator $\Delta_\psi$ defined in Proposition 2.11. Since $(G, W)$ is vertex-transitive, from Proposition 3 of [23], under $\nu^W_V(d\beta)$, $\psi$ is stationary and ergodic.
This allows us to apply a local limit theorem for random walks in random conductances (from [1]), and show that the Martin boundary M ψ associated with ∆ ψ is almost surely trivial for W large enough. These cases are regrouped in the following result.
Theorem 2.13. Let $G$ be the $\mathbb{Z}^d$ lattice, endowed with constant edge weights, i.e. $W_{i,j} = W > 0$ for all $i \sim j$. We consider representations of $P^{VRJP(0)}$ as a mixture of Markov processes. Then:
• If $d \in \{1, 2\}$, there is a unique representation of $P^{VRJP(0)}$.
• If $d \geq 3$, there are constants $0 < \underline{W} < \overline{W}$ such that for $0 < W < \underline{W}$ or for $W > \overline{W}$, there is a unique representation of $P^{VRJP(0)}$.

A family of representations on infinite trees
Let us now consider the case where the graph is an infinite tree $\mathcal{T} = (T, E)$, that we assume to be locally finite, and endow with conductances $W$. In [5], Chen and Zeng described a representation of the time-changed VRJP with a different expression than the standard representation. Indeed, if $(T_n)_{n \in \mathbb{N}}$ is an increasing and exhausting sequence of finite connected subsets of $T$, the subgraphs $\mathcal{T}^{(n)} = (V_n, E_n)$ of $\mathcal{T}$ are finite trees (where $E_n = \{\{i,j\} \in E, i, j \in V_n\}$). These are called restrictions of $\mathcal{T}$ with free boundary conditions. Moreover, on finite trees, Theorem 2.2 gives a representation of the VRJP where jump rates are independent. Therefore, a representation of the VRJP on $\mathcal{T}$ can be obtained from representations on $\mathcal{T}^{(n)}$, using independent jump rates.
Theorem 2.14 (Theorem 3 in [5]). Let $\phi$ be an arbitrary root for $\mathcal{T}$. For all $i \in T \setminus \{\phi\}$, we denote by $\bar{i}$ the parent of $i$. Let also $(A_i)_{i \in T \setminus \{\phi\}}$ be independent random variables, where $A_i$ is an inverse Gaussian random variable with parameter $(W_{i,\bar{i}}, 1)$, i.e. with density
$$\mathbb{1}_{s > 0} \sqrt{\frac{W_{i,\bar{i}}}{2\pi s^3}}\, e^{-\frac{W_{i,\bar{i}}(s-1)^2}{2s}}\, ds.$$
Then the process with law $P^{VRJP(\phi)}$ on $\mathcal{T}$ is a mixture of Markov jump processes, in which the jump rate from $\bar{i}$ to $i$ is $\frac{1}{2} W_{i,\bar{i}} A_i$, and the jump rate from $i$ to $\bar{i}$ is $\frac{1}{2} W_{i,\bar{i}} \frac{1}{A_i}$, for all $i \in T \setminus \{\phi\}$.
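Since the $A_i$ are independent, this environment is easy to sample. In the sketch below (not code from the paper), we read the parameter $(W_{i,\bar{i}}, 1)$ as the inverse Gaussian law with mean 1 and shape $W_{i,\bar{i}}$, which is NumPy's `wald` law; this parametrization is our assumption:

```python
import numpy as np

Wc = 2.0                        # conductance W_{i, ī} on one edge (toy value)
rng = np.random.default_rng(0)

# A_i ~ inverse Gaussian with mean 1 and shape Wc (numpy's "wald" distribution;
# this identification of the parameter (W, 1) is an assumption).
A = rng.wald(1.0, Wc, size=200_000)

rate_down = 0.5 * Wc * A        # jump rate from the parent ī to the child i
rate_up   = 0.5 * Wc / A        # jump rate from i back to ī

# sanity checks: IG(mean=1, shape=W) has mean 1 and variance 1/W
assert abs(A.mean() - 1.0) < 0.02
assert abs(A.var() - 1.0 / Wc) < 0.02
```

Note that the product of the two rates on an edge is the deterministic quantity $\frac{1}{4} W_{i,\bar{i}}^2$; only their ratio is random.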
In some cases, this representation is different from the standard representation.
Proposition 2.15. Let T = (T, E) be an infinite d-regular tree with d ≥ 3, i.e. such that each vertex in T has exactly d neighbors. We endow T with constant conductances W . Then for W large enough, the distribution of the random environment described in Theorem 2.14 is different from the distribution of the standard representation.
We now know two ways of constructing representations of the VRJP on T , that are associated with different boundary conditions on restrictions to finite graphs, and can have distinct distributions. This leads us to introduce new boundary conditions in order to construct a family of different representations of the VRJP, following the same method as for the standard representation.
Let us start by giving a few notations on trees. For all $x, y \in T$, we denote by $d(x, y)$ the graph distance between $x$ and $y$, and by $[x, y]$ the unique shortest path between $x$ and $y$. Note that any path $\sigma$ from $x$ to $y$ necessarily crosses all the vertices $[x, y]_k$ for $0 \leq k \leq d(x, y)$. Let us fix an arbitrary root $\phi$ in $T$. Then, for all $x \in T$, we denote by $|x| = d(\phi, x)$ the depth of the vertex $x \in T$. If $x \neq \phi$, we also denote by $\bar{x} = [\phi, x]_{|x|-1}$ the parent of $x$. Finally, for any $x \in T$, we define the set of $x$'s children $S(x) = \{y \in T, \bar{y} = x\}$, and the set of its descendants $T_x = \{y \in T, \exists k \geq 0, [\phi, y]_k = x\}$. For $x, y \in T$, we denote by $x \wedge y$ the "closest common ancestor" of $x$ and $y$, i.e. the deepest vertex lying on both $[\phi, x]$ and $[\phi, y]$.
For $n \in \mathbb{N}$, we denote by $D^{(n)} = \{x \in T, |x| = n\}$ the tree's $n$th generation. Let us then define $T^{(n)} = \bigcup_{0 \leq k \leq n} D^{(k)}$, as well as $E^{(n)} = \{\{i,j\} \in E, i, j \in T^{(n)}\}$. The restriction of the tree to the first $n$ generations, with free boundary conditions, is the graph $(T^{(n)}, E^{(n)})$, that we endow with the induced conductances $W^{(n)} = W_{T^{(n)}, T^{(n)}}$. For $x \in T$ and $n \geq |x|$, we also denote by $T^{(n)}_x = T_x \cap T^{(n)}$ the set of descendants of $x$ in $T^{(n)}$. Finally, we define the set $\Omega$ of ends of $T$, i.e. the set of infinite self-avoiding paths (or rays) in $T$ starting at $\phi$. For $x \in T$, we denote by $\Omega_x$ the subset of $\Omega$ corresponding to the branch $T_x$, i.e. the set of rays in $T$ that cross $x$. Note that the Martin boundary associated with a transient walk on a tree is always $\Omega$, which depends only on the geometry of the tree. This will be convenient to express $\Delta_\psi$-harmonic functions, where $\Delta_\psi$ is the random Laplace operator introduced in Proposition 2.11.
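These notations can be made concrete with a small sketch (our own encoding, not from the paper), representing vertices of a rooted tree as tuples of child indices, with $\phi$ the empty tuple. The helpers implement $|x|$, the parent $\bar{x}$, the geodesic $[x, y]$, the meet $x \wedge y$, and $N_{x,y} = |x \wedge y|$:

```python
# Vertices are tuples: φ = (), and the children of x are x + (k,).

def depth(x):                  # |x| = d(φ, x)
    return len(x)

def parent(x):                 # x̄, defined for x ≠ φ
    return x[:-1]

def meet(x, y):                # x ∧ y: closest common ancestor
    k = 0
    while k < min(len(x), len(y)) and x[k] == y[k]:
        k += 1
    return x[:k]

def N(x, y):                   # N_{x,y} = |x ∧ y|
    return depth(meet(x, y))

def geodesic(x, y):            # [x, y]: unique shortest path from x to y
    m = meet(x, y)
    up = [x[:k] for k in range(len(x), len(m), -1)]      # climb from x to m
    down = [y[:k] for k in range(len(m), len(y) + 1)]    # descend from m to y
    return up + down

x, y = (0, 1, 0), (0, 0)
assert meet(x, y) == (0,) and N(x, y) == 1
assert geodesic(x, y) == [(0, 1, 0), (0, 1), (0,), (0, 0)]
```

The last assertion illustrates that $[x, y]$ has $d(x, y) + 1$ vertices and passes through $x \wedge y$.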
With these definitions out of the way, let us present new boundary conditions on trees, and the associated representations for the VRJP.
In the construction of the standard representation, the wired boundary condition was defined by adding a single boundary point δ to a finite graph, where δ could be interpreted as a point at infinity for the graph. We will now introduce a variant of this boundary condition, by adding multiple boundary points, each being a point at infinity for a different branch of the tree.
Let us first fix a generation $m \geq 0$, and to each vertex $x \in D^{(m)}$, let us associate a boundary point $\delta_x$, that will be the point at infinity for $T_x$. We denote by $B_m = \{\delta_x, x \in D^{(m)}\}$ the boundary set associated to this generation. For all $n \geq m$, let us then define the graph $G^{(n)}_m$, with vertex set $T^{(n)} \cup B_m$, obtained from $(T^{(n)}, E^{(n)})$ by adding, for each $x \in D^{(m)}$, an edge between $\delta_x$ and each vertex of $D^{(n)} \cap T_x$. This graph is the restriction of $\mathcal{T}$ to $T^{(n)}$ with a variant of the wired boundary condition. Note that we get the standard wired boundary condition by taking $m = 0$. We endow $G^{(n)}_m$ with the conductances $W^{(n)}_m$ defined by $(W^{(n)}_m)_e = W_e$ if $e \in E^{(n)}$, and $(W^{(n)}_m)_{i, \delta_x} = \sum_{j \in S(i)} W_{i,j}$ for $x \in D^{(m)}$ and $i \in D^{(n)} \cap T_x$.
As with the wired boundary condition, these weights are defined so that for $n \geq m$, the weights coming out of $T^{(n)}$ are given by $W_{T^{(n)}, (T^{(n)})^c} \mathbb{1}_{(T^{(n)})^c} = \eta^{(n)}$. This will allow for the compatibility of the $\beta^{(n)}_m$ fields defined on $G^{(n)}_m$ for $n \geq m$. Note that these weights do not depend on $m$, i.e. do not depend on the choice of the boundary condition.
For $\beta \in \mathcal{D}^W_T$, we still define $H_\beta = 2\beta - W$ and take $V_n = T^{(n)}$ for all $n \in \mathbb{N}$ in Definition 2.5. We then get $\hat{G}^{(n)} = ((H_\beta)_{T^{(n)}, T^{(n)}})^{-1}$ and $\psi^{(n)} = \hat{G}^{(n)} \eta^{(n)}$, which converge $\nu^W_V$-a.s. to $\hat{G}$ and $\psi$ respectively, according to Theorem 2.6. Moreover, for all $n \geq m$, under $\nu^W_T(d\beta)$, we can extend $\beta_{T^{(n)}}$ into a potential $\beta^{(n)}_m$ on $G^{(n)}_m$. Some new terms, defined below, appear in the corresponding representation of the VRJP on $G^{(n)}_m$.
Once again, we will study the limit of this expression as $n \to \infty$, to obtain a representation of the VRJP on $(\mathcal{T}, W)$. However under $\nu^W_T(d\beta)$, contrary to $\psi^{(n)}$, $\chi^{(n)}_m(\cdot, \delta_x)$ is not a martingale when $m \neq 0$. Moreover, the term $(G^{(n)}_m)_{B_m, B_m}$ is not independent of $\beta_{T^{(n)}}$ for $m \neq 0$. Therefore, we cannot use the same argument as in the proof of Theorem 2.6. However, we will still be able to show the almost sure convergence of $\chi^{(n)}_m$, using the structure of the tree and the associated Martin boundary, and deduce the convergence in distribution of $(G^{(n)}_m)_{B_m, B_m}$ conditionally on $\beta$. Let us give a few more details on the Martin boundary: we expect $\chi^{(n)}_m(\cdot, \delta_x)$ to converge to an $H_\beta$-harmonic function on $T$, for all $x \in D^{(m)}$ and $\nu^W_V$-almost all $\beta$. When $\psi > 0$, we can introduce the operator $\Delta_\psi$ in order to study $H_\beta$-harmonic functions (see Proposition 2.11), thanks to the associated Martin boundary $\mathcal{M}_\psi$. Since the graph is a tree, the Martin boundary is equal to the set $\Omega$ of ends of $T$, which is deterministic. Note that the boundary condition used to define $G^{(n)}_m$ corresponds to identifying $\Omega_x$ with a single point $\delta_x$, for all $x \in D^{(m)}$. We will see that the limit of $\chi^{(n)}_m(\cdot, \delta_x)$ can be expressed with the family of harmonic measures $(\mu^\psi_i)_{i \in T}$ defined, on the Martin boundary $\mathcal{M}_\psi = \Omega$, as the exit measures of the transient walk associated with $\Delta_\psi$ started at $i$ (for more details, see Section 3.3). For $\beta \in \mathcal{D}^W_T$ such that $\psi \equiv 0$, we adopt the convention that $\mu^\psi_i$ is the null measure on $\Omega$ for all $i \in T$.
The following theorem states the almost sure convergence of $\chi^{(n)}_m$, and the existence of a family of representations constructed as previously described.
From now on, let $G_m$ denote the kernel obtained in this limit, and denote by $P^{\beta, \rho_m, i_0}_x$ the distribution of the Markov jump process started at $x \in V$, where the jump rate from $i$ to $j$ is $\frac{1}{2} W_{i,j} \frac{G_m(i_0,j)}{G_m(i_0,i)}$. Then the process with law $P^{VRJP(i_0)}$ is a mixture of these Markov jump processes, under the mixing measure $\nu^W_{T, B_m}(d\beta, d\rho_m)$.
(iii) When $m \to \infty$, the distribution of the jump rates $\left(\frac{1}{2} W_{i,j} \frac{G_m(i_0,j)}{G_m(i_0,i)}\right)_{i \sim j}$ converges weakly to the distribution of the jump rates in the representation described in Theorem 2.14.
Let us now consider the case where $\mathcal{T}$ is a $d$-regular tree, with $d \geq 3$, endowed with constant conductances, i.e. $W_e = W > 0$ for all $e \in E$. Then $(\mathcal{T}, W)$ is vertex-transitive, and from Proposition 3 of [23], we know that under $\nu^W_T(d\beta)$, $\psi$ is stationary and ergodic. Therefore, depending on $d$ and $W$, we either have $\mathbb{P}[\forall i \in T, \psi(i) = 0] = 1$, or $\mathbb{P}[\forall i \in T, \psi(i) > 0] = 1$.
In the first case, from Theorem 2.6 (iv), this means that the VRJP is a.s. recurrent, and therefore admits a unique representation (see Proposition 2.10). Note that in Theorem 2.18, we then have a.s. $G_m = \hat{G}$ for all $m \in \mathbb{N}$, so that all the corresponding representations are indeed equal. The following proposition describes the second case, i.e. when the VRJP is a.s. transient. According to a result from [9], this is true for large enough initial weights $W$.

Open questions
A first question concerns the case of Z d with constant conductances W : is it possible to show that the Martin boundary associated with ∆ ψ is a.s. trivial for any W such that the VRJP is transient? In this case, it would prove the uniqueness of the representation of the VRJP on Z d for any constant initial conductances W .
Another question concerns a possible classification of all representations on trees using partitions of the Martin boundary. We have constructed a family of representations from different boundary conditions on the tree, corresponding to some finite partitions of the Martin boundary $\Omega$, more precisely the partitions $\Omega = \bigsqcup_{x \in D^{(m)}} \Omega_x$ for $m \in \mathbb{N}$. It should be possible to define more representations using the same method, with boundary conditions associated with other finite partitions of $\Omega$, where each set in the partition can be written as a finite union of sets $\Omega_x$. To generalize this, we can ask whether it is possible to determine which partitions give us a valid representation, and whether all representations can be written in this form, or as a limit of such representations, as in Theorem 2.18 (iii).

Organization of the paper
Section 3 presents some useful technical results concerning the $\beta$ field, as well as basic definitions and properties of the Martin boundary. In Section 4, we prove that all representations of the VRJP have a common form, i.e. Theorem 2.9. We use these results in Section 5 to study the case of the graph $\mathbb{Z}^d$, and show Theorem 2.13 using a local limit theorem in random environment. In Section 6, we construct a family of new representations of the VRJP on infinite trees (Theorem 2.18). Section 7 presents several properties of this family, in particular that the representations are all different in the case of a regular tree (Proposition 2.19).

The random potential β on finite graphs
Let $G = (V, E)$ be a finite connected non-directed graph, endowed with conductances $(W_e)_{e \in E}$. Let us give some useful properties of the distribution $\nu^W_V$.

Proposition 3.1 (Proposition 2, Theorem 3 in [20]). For $\beta \in \mathcal{D}^W_V$, let $G = (H_\beta)^{-1}$ be the Green function associated with $\beta$, and define $F(i, j) = G(i,j)/G(j,j)$ for $i, j \in V$. Then under $\nu^W_V(d\beta)$, for all $i_0 \in V$, we have the following properties:

(ii) if $\gamma = 1/(2G(i_0, i_0))$, then $\gamma$ is a Gamma random variable with parameter $(1/2, 1)$. Moreover, $\gamma$ is independent of $(\beta_i)_{i \neq i_0}$, and therefore independent of $(F(i, i_0))_{i \in V}$.
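The Gamma(1/2, 1) law of $\gamma$ can be probed numerically. The following sketch (ours, not from the paper) checks by Monte Carlo the identity $\mathbb{E}[e^{-t\gamma}] = (1+t)^{-1/2}$ for $\gamma \sim$ Gamma(1/2, 1), which is the Laplace transform used in the proofs of this section:

```python
import numpy as np

# Monte Carlo check of E[exp(-t γ)] = (1+t)^{-1/2} for γ ~ Gamma(1/2, 1)
rng = np.random.default_rng(1)
g = rng.gamma(shape=0.5, scale=1.0, size=500_000)
for t in (0.5, 1.0, 3.0):
    empirical = np.exp(-t * g).mean()
    assert abs(empirical - (1 + t) ** -0.5) < 5e-3
```

Since $e^{-t\gamma} \in (0, 1]$, the Monte Carlo error at this sample size is well below the tolerance used.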
This proposition explains the presence of $\gamma$ in the expression of $G$ in Theorem 2.6. Moreover, it allows us to prove Proposition 2.7, describing the distribution of the $\tilde\beta$ field.
Proof of Proposition 2.7. Let $G = (V, E)$ be an infinite connected non-directed graph, and $(V_n)_{n \in \mathbb{N}}$ an increasing exhausting sequence of finite connected subsets of $V$. For $n \in \mathbb{N}$, let $G^{(n)} = (\tilde{V}^{(n)}, \tilde{E}^{(n)})$ be the restriction of $G$ to $V_n$ with wired boundary condition, endowed with conductances $\tilde{W}^{(n)}$, defined as in Section 2.1. Moreover, for $n \in \mathbb{N}$, we still define $\hat{G}^{(n)}$ and $\psi^{(n)}$ as in Definition 2.5.

The proof of Theorem 2.6 (iii) uses the fact that under $\nu^W_V(d\beta, d\gamma)$, there exists a coupling of random fields $(\beta^{(n)})_{n \in \mathbb{N}}$, such that for all $n \in \mathbb{N}$, $\beta^{(n)} \sim \nu^{\tilde{W}^{(n)}}_{\tilde{V}^{(n)}}$ and $\beta^{(n)}_{V_n} = \beta_{V_n}$. We can then apply Proposition 3.1 to $\beta^{(n)}$ at $i_0$, using the fact that the Laplace transform of a Gamma(1/2, 1) variable is $(1+t)^{-1/2}$ for $t \geq 0$.

On finite graphs, the distribution $\nu^W_V$, and more generally $\nu^{W,\eta}_V$ for $\eta \in \mathbb{R}^V_+$, behaves well with respect to restriction, as shown in the next proposition, which is a generalization of Proposition 2.3.

Proposition 3.2 ([23]). Let us fix $U \subset V$ and $\eta \in (\mathbb{R}_+)^V$. Then, under $\nu^{W,\eta}_V(d\beta)$, we have:

Green function and sums over paths
Let us still consider a finite connected non-directed graph $G = (V, E)$ endowed with conductances $W$. For $\beta \in \mathcal{D}^W_V$, it will be useful to express the Green function $G = (H_\beta)^{-1}$ as a sum over paths in $G$. We first introduce some notations for sets of paths.

Definition 3.3.
(i) For $i, j \in V$, we denote by $\mathcal{P}^V_{i,j}$ the set of paths $\sigma$ from $i$ to $j$ in $V$, i.e. the set of finite sequences $\sigma = (\sigma_0, \ldots, \sigma_l)$ such that $\sigma_0 = i$, $\sigma_l = j$, and $\sigma_k \sim \sigma_{k+1}$ for $0 \leq k \leq l-1$. We denote by $|\sigma| = l$ the length of the path $\sigma$.
(iv) For $i, j \in V$ and $\sigma \in \mathcal{P}^V_{i,j}$, we define the following notations.

We get the following expressions, in terms of sums over paths, for $G$ and related quantities.

Proposition 3.4 (Proposition 6 in [20]). Let $\beta \in \mathcal{D}^W_V$.

On an infinite graph, we define $\hat{G}^{(n)} = \hat{G}_{V_n} = ((H_\beta)_{V_n, V_n})^{-1}$ as in Definition 2.5. Then from Proposition 3.4 (i), we get an expression of $\hat{G}^{(n)}(i,j)$ as a sum over paths, for all $i, j \in V$ and $n \geq \max(|i|, |j|)$. Then, from Proposition 3.4 (ii), we get the convergence of $\hat{G}^{(n)}$ to $\hat{G}$, where the convergence is true $\nu^W_V$-almost surely.

Martin boundary and harmonic functions
Let us give more details about the theory of Martin boundaries. The following results can be found in Chapter IV of [24].
Let $(X_n)_{n \in \mathbb{N}}$ be an irreducible Markov chain on $V$, with transition probabilities $P = (P_{x,y})_{x,y \in V}$ such that $P_{x,y} > 0$ only if $x \sim y$ (i.e. we assume that $(X_n)$ is a nearest-neighbor random walk). Moreover, we assume that $(X_n)$ is transient.
Let us denote by $P_x$ the distribution of the random walk started at $x \in V$, and by $g$ the associated Green function, i.e. $g(x, y) = \sum_{n \geq 0} P_x(X_n = y)$ for $x, y \in V$.
We also denote by $f(x, y) = P_x(\exists n \geq 0, X_n = y)$ the probability that the walk started at $x$ ever hits $y$. For all $y \in V$, $g(\cdot, y)$ is harmonic at any $x \in V \setminus \{y\}$, i.e. for all $x \neq y$, we have $g(x, y) = \sum_{z \sim x} P_{x,z}\, g(z, y)$. This is still true for $f(\cdot, y)$. The Martin kernel, defined below using $f$, as well as the Martin boundary, will allow us to represent all positive harmonic functions for the random walk.
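As a toy illustration of these definitions (our construction, not from the paper), one can compute $g$ exactly for a walk made transient by killing: with a substochastic transition matrix $P$, we have $g = \sum_{n \geq 0} P^n = (I - P)^{-1}$, and $g(\cdot, y)$ is harmonic off $y$:

```python
import numpy as np

# Nearest-neighbor walk on the path {0,...,5}, with probability 0.1 of being
# killed at each step (rows of P sum to 0.9), which makes the walk transient.
n = 6
P = np.zeros((n, n))
for x in range(n - 1):
    P[x, x + 1] = P[x + 1, x] = 0.45
P[0, 0] += 0.45          # stay put at the left end (toy choice)
P[n - 1, n - 1] += 0.45  # stay put at the right end

g = np.linalg.inv(np.eye(n) - P)    # Green function g = Σ_n P^n

# g(x,y) = δ_{x,y} + Σ_z P_{x,z} g(z,y): g(·,y) is harmonic away from y
assert np.allclose(g, np.eye(n) + P @ g)
assert (g > 0).all()
```

The assertion is the one-step decomposition of $g$; restricted to $x \neq y$, it is exactly the harmonicity stated above.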
Definition 3.6. Let us fix a reference point φ ∈ V .
(i) The Martin kernel is the function K : V × V → R_+ defined by K(x, y) = f(x, y)/f(φ, y).
(ii) The Martin compactification is the smallest compactification V̂ of V, with respect to the discrete topology, such that K(·, ·) extends continuously to V × V̂. It is unique up to homeomorphism. The Martin boundary is defined as M = V̂ \ V.
For the proof that this compactification exists, see Chapter IV in [24].
In the following, we still denote by K(·, ·) the extension of the Martin kernel to V × V̂. For all α ∈ M, the function K(·, α) : V → R_+ is harmonic with respect to the random walk. Conversely, the following representation theorem states that all positive harmonic functions can be expressed using the Martin kernel.
Remark 3.8. If, for all x ∈ V and for all sequences (y n ) n≥1 going to infinity, we have K(x, y n ) → 1, then the Martin boundary is trivial, i.e. reduced to a single point. According to Theorem 3.7, in this case, all positive harmonic functions are constant.
Since (X_n) is transient, we almost surely have X_n → ∞, in the following sense: for every finite subset U ⊂ V, the set {n ∈ N : X_n ∈ U} is almost surely finite. Thanks to the Martin boundary, we can now describe this convergence more precisely.
Theorem 3.9. There exists a random variable X_∞, with values in M, such that for all x ∈ V, P_x-almost surely, X_n → X_∞ in the topology of V̂. For x ∈ V, we denote by µ_x the distribution of X_∞ under P_x. The space (M, B(M), (µ_x)_{x∈V}) is called the Poisson boundary. Moreover, we call harmonic measures, or exit measures, the family (µ_x)_{x∈V}.
In the case where T = (T, E) is an infinite tree, the Martin compactification will coincide with another compactification, which does not depend on the random walk defined by P, but only on the geometry of the tree T.

Definition 3.10. (i) We call infinite ray in T an infinite self-avoiding path starting at φ, i.e. a sequence ω = (ω_k)_{k∈N} of distinct vertices in T, such that ω_k ∼ ω_{k+1} for k ∈ N and ω_0 = φ. The set of infinite rays, also called the set of ends of T, is denoted by Ω.
(ii) For ω, ω' ∈ Ω, we denote by N_{ω,ω'} the depth of the last common vertex of ω and ω', and for ω ∈ Ω and k ∈ N, we denote by O^k_ω the set of vertices and ends whose path from φ passes through ω_k.
(iii) We define the end topology on T ∪ Ω, which is discrete on T, and such that (O^k_ω)_{k∈N} is a basis of neighborhoods at each ω ∈ Ω.
Recall also that for x, y ∈ T, we define N_{x,y} in a similar way to Definition 3.10 (ii), as the depth of the closest common ancestor of x and y. The following proposition introduces the end compactification, which will coincide with the Martin compactification on the tree, as stated in Theorem 3.12.
Proposition 3.11. The end topology on T ∪ Ω does not depend on the choice of φ, and is induced by the following metric: d(x, y) = e^{−N_{x,y}} if x ≠ y, and d(x, x) = 0, for x, y ∈ T ∪ Ω. Moreover, T ∪ Ω is compact, and called the end compactification.

Theorem 3.12. (i) Let (X_n) be a nearest-neighbor random walk on T, that we assume to be transient. Then the Martin compactification coincides with the end compactification: we can identify M with Ω, and set T̂ = T ∪ Ω.
(ii) The Martin kernel on T × T̂ is locally constant in its second argument.

We also have an expression of the harmonic measures µ_x on the tree. For x ∈ T, we denote by Ω_x the set of ends of the subtree T_x, i.e. Ω_x = {ω ∈ Ω : ∃k ∈ N, ω_k = x}. Moreover, we denote U_x = T_x \ {x}. Proposition 3.13 then expresses the weights µ_x(Ω_y) in terms of the hitting probabilities f.

Remark 3.14. From Carathéodory's extension theorem, this entirely describes the measure µ_φ. From Theorem 3.9, we can then describe all harmonic measures using f.
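The combinatorics behind the end metric can be made concrete: labeling each vertex by its path from the root, N_{x,y} is the length of the longest common prefix of the two labels. A small sketch, where the vertex labels and depths are purely illustrative:

```python
import math

# End metric on a rooted tree: each vertex is a tuple of child indices read
# from the root phi, so the closest common ancestor is the longest common prefix.
def common_ancestor_depth(x, y):
    """N_{x,y}: depth of the closest common ancestor of vertices x and y."""
    n = 0
    for a, b in zip(x, y):
        if a != b:
            break
        n += 1
    return n

def end_metric(x, y):
    """d(x, y) = exp(-N_{x,y}) for x != y, which induces the end topology."""
    return 0.0 if x == y else math.exp(-common_ancestor_depth(x, y))

x = (0, 1, 1)       # a vertex at depth 3
y = (0, 1, 0, 0)    # a vertex at depth 4, in a sibling subtree below (0, 1)
z = (1,)            # a vertex branching off at the root
print(common_ancestor_depth(x, y))          # 2: the last common vertex is (0, 1)
print(end_metric(x, z) > end_metric(x, y))  # True: earlier branching means farther apart
```

The same prefix comparison extends verbatim to two infinite rays, which is how the metric compactifies T ∪ Ω.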

Distributions of arbitrary representations
This section contains the proofs of the results from Section 2.2, stating that any representation of the time-changed VRJP can be expressed in a similar way to the standard representation.

A common expression for jump rates: proof of Theorem 2.9
Let G = (V, E) be a locally finite connected graph, endowed with conductances (W_{i,j})_{i,j∈V} such that W_{i,j} = W_{j,i} > 0 if {i, j} ∈ E, and W_{i,j} = 0 otherwise. We still denote by P^{VRJP}_{i_0} the law of the VRJP on (G, W), in the exchangeable time scale, started at i_0 ∈ V.
Let us first show that the distribution of the β̃ field (see Proposition 2.7) appears in all representations of the VRJP. Recall that for all r ∈ J^E_V and i ∈ V, we define r_i = Σ_{j∼i} r_{i,j}.

Proposition 4.1. Let i_0 ∈ V be fixed, and let R(dr) be the distribution of a random environment representing P^{VRJP}_{i_0}, in the sense of Definition 2.8. Then under R(dr), (r_i)_{i∈V} has the same distribution as the field β̃ rooted at i_0.

Proof. Let i_0 ∈ V be fixed, and let R(dr) be the distribution of a random environment representing P^{VRJP}_{i_0}, i.e. such that P^{VRJP}_{i_0} = ∫ P^r R(dr), where P^r is the distribution of the Markov jump process started at i_0, with jump rate from i to j given by r_{i,j}. Let us prove that under R(dr), (r_i)_{i∈V} has the same distribution as the β̃ field from the standard representation.

Lemma 4.2.
There exists a random field (u_i)_{i∈V} ∈ R^V such that R-almost surely, r_{i,j} = (W_{i,j}/2) e^{u_j − u_i} for i ∼ j.

Remark 4.3.
Since the random field (u i ) i∈V is defined up to an additive constant, we can set u i0 = 0 a.s. without loss of generality.
Proof of Lemma 4.2. For r ∈ J^E_V, let us define t_{i,j} = (2/W_{i,j}) r_{i,j} for all i ∼ j. Then to prove the lemma, it is enough to show that for any cycle σ = (σ_0, ..., σ_n) with σ_n = σ_0, we have R-a.s. t_σ := Π_{k=0}^{n−1} t_{σ_k,σ_{k+1}} = 1. Since G is connected, we only need to prove this for cycles σ such that σ_0 = i_0.
Recall that we denote by (Z_t)_{t≥0} the canonical process on C_r(R_+, V). Let P^{MJP}_{i_0} be the distribution of the Markov jump process with constant jump rates (1/2)W_{i,j}, started at i_0. Then, according to Theorem 3 from [22], for all T ≥ 0 the law of (Z_t)_{t≤T} under P^{VRJP}_{i_0} is absolutely continuous with respect to its law under P^{MJP}_{i_0}, with an explicit Radon-Nikodym derivative depending only on W_i = Σ_{j∼i} W_{i,j} and on the local times l_i = ∫_0^T 1_{{Z_t=i}} dt. Let σ be a cycle such that σ_0 = σ_{|σ|} = i_0, and let us denote by σ^n the n-th concatenation of σ with itself. Moreover, for T ≥ 0, we define (Z)_T as the discrete path taken by the trajectory (Z_t)_{t≤T}. We then compute, for n ≥ 1 and T ≥ 0, the probability of the event {(Z)_T = σ^n} under P^{VRJP}_{i_0}. However, since the random environment (r_{i,j})_{i∼j} gives a representation of the VRJP as a mixture of Markov processes, the same probability can also be computed under ∫ P^r R(dr), where the discrete-path weights differ by the factor t_σ^n. Let us fix ε > 0, and define the event A_{σ,ε} = {t_σ ≥ 1 + ε}. Comparing the two expressions then bounds R[A_{σ,ε}] in terms of n and T.

EJP 25 (2020), paper 108.
Taking n → ∞ for fixed T > 0 shows that P[A_{σ,ε}] = 0. As a result, we have almost surely t_σ ≤ 1. For ε > 0, we now set A'_{σ,ε} = {t_σ ≤ 1 − ε}. Using the same notations as before, and the fact that a.s. t_σ ≤ 1, we get an analogous comparison. On the other hand, on the event {(Z)_T = σ^n}, we have l_i ≤ T for all i ∈ {σ_k, 0 ≤ k < |σ|} and l_i = 0 for all other i ∈ V. As a result, for such trajectories, the Radon-Nikodym derivative is bounded uniformly in n. As before, this yields a bound valid for all T > 0 and n ∈ N. Taking first n → ∞, then T → 0, we get that under R(dr), P[(A'_{σ,ε})^c] = 1. Therefore, we can conclude that R-almost surely, t_σ = 1.
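Lemma 4.2 says the rates form a gradient field: once every cycle product t_σ equals 1, the potential u can be recovered by integrating log-ratios along any spanning tree. A numerical sketch, where the 4-cycle, its conductances and the potential u are all made-up values:

```python
import math
from collections import deque

# Illustrative 4-cycle with invented conductances W and potential u (u_0 = 0).
W = {frozenset(e): w for e, w in [((0, 1), 1.0), ((1, 2), 0.5),
                                  ((2, 3), 0.8), ((3, 0), 1.2)]}
u = {0: 0.0, 1: 0.3, 2: -0.2, 3: 0.5}

# Gradient-type jump rates r_{i,j} = (W_{i,j}/2) exp(u_j - u_i), both directions.
r = {}
for e in W:
    i, j = tuple(e)
    r[(i, j)] = W[e] / 2 * math.exp(u[j] - u[i])
    r[(j, i)] = W[e] / 2 * math.exp(u[i] - u[j])

# Cycle condition: t_sigma = prod over the cycle of t_{i,j} = (2/W_{i,j}) r_{i,j} is 1.
cycle = [0, 1, 2, 3, 0]
t_sigma = 1.0
for i, j in zip(cycle, cycle[1:]):
    t_sigma *= 2 / W[frozenset((i, j))] * r[(i, j)]
print(abs(t_sigma - 1) < 1e-12)

# Recover u (normalized by u_0 = 0) by BFS: u_j = u_i + log(2 r_{i,j} / W_{i,j}).
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
rec, queue = {0: 0.0}, deque([0])
while queue:
    i = queue.popleft()
    for j in adj[i]:
        if j not in rec:
            rec[j] = rec[i] + math.log(2 * r[(i, j)] / W[frozenset((i, j))])
            queue.append(j)
print(all(abs(rec[i] - u[i]) < 1e-12 for i in u))
```

The cycle condition is exactly what makes the BFS reconstruction independent of the spanning tree chosen.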
In order to identify the distribution of (r_i)_{i∈V} under R(dr), we obtain its Laplace transform from the density of cyclic trajectories of (Z_t)_{t≥0} under P^{VRJP}_{i_0} with respect to P^{MJP}_{i_0}. Indeed, given a cyclic trajectory (z_t)_{t≤T} in G, started at i_0, we denote by σ the associated cyclic path in G, and by (l_i)_{i∈V} the local times, so that T = Σ_{i∈V} l_i, and l_i > 0 if and only if i ∈ {σ_k, 0 ≤ k < |σ|}. Then the Radon-Nikodym derivative at (z_t)_{t≤T} of P^{VRJP}_{i_0} with respect to P^{MJP}_{i_0} can be written in two ways: using the explicit density from Theorem 3 of [22], but also as ∫ (e^{−Σ_{i∈V} r_i l_i} / e^{−Σ_{i∈V} (1/2)W_i l_i}) R(dr), since t_σ = 1 R-almost surely. Therefore, the two expressions agree for every finite connected subset U of V and almost all (l_i)_{i∈V} ∈ (R_+^*)^U. Since these are continuous functions of (l_i)_{i∈V}, the equality is true for all (l_i)_{i∈V} ∈ R_+^V with finite support. As a result, under R(dr), (r_i)_{i∈V} has the same Laplace transform as the field β̃ associated with the standard representation of the VRJP started at i_0 (see Proposition 2.7), and therefore the same distribution.
Theorem 2.9 follows easily from Proposition 4.1, since adding an independent Gamma variable to the previous β̃ field yields a potential with distribution ν^W_V. This provides the expression of the jump rates using the associated Green function Ĝ and an H_β-harmonic function.
Proof of Theorem 2.9. Let i_0 ∈ V be fixed, and R(dr) be the distribution of a random environment representing P^{VRJP}_{i_0}. Thanks to Proposition 4.1, we know the distribution of (r_i)_{i∈V} under R(dr). Note that the distribution of a Gamma(1/2, 1) variable is 1_{{γ>0}} (πγ)^{−1/2} e^{−γ} dγ, and that its Laplace transform is given by E[e^{−tγ}] = (1 + t)^{−1/2} for t ≥ 0. Hence, if γ is a Gamma(1/2, 1) variable independent of r, the field β defined by β_i = r_i + 1_{{i=i_0}} γ is distributed according to ν^W_V (see Proposition 2.4). We can then define, R(dr, dγ)-a.s., Ĝ : V × V → R_+ and ψ : V → R_+ thanks to Theorem 2.6. Moreover, by analogy with the standard representation, let G(i_0, ·) : V → R_+ be defined by G(i_0, i) = e^{u_i}/(2γ), where (u_i)_{i∈V} was introduced in Lemma 4.2. This way, under R(dr, dγ), for all i ≠ j ∈ V the jump rate can be written as r_{i,j} = (W_{i,j}/2) G(i_0, j)/G(i_0, i). As a result, G(i_0, ·) can be written, for all i ∈ V, as G(i_0, i) = Ĝ(i_0, i) + h(i), where h : V → R_+ is a non-negative H_β-harmonic function.
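The Laplace transform identity invoked above can be checked by direct numerical integration: substituting γ = s² turns the Gamma(1/2, 1) density into a Gaussian integral. A quick sketch (the step count and integration cutoff are arbitrary choices):

```python
import math

def gamma_half_laplace(t, n_steps=200000, s_max=20.0):
    """Numerically compute E[exp(-t*gamma)] for gamma ~ Gamma(1/2, 1).

    With gamma = s^2, the integral
        int_0^inf exp(-t g) exp(-g) / sqrt(pi g) dg
    becomes (2/sqrt(pi)) int_0^inf exp(-(1+t) s^2) ds  (midpoint rule below).
    """
    h = s_max / n_steps
    total = sum(math.exp(-(1 + t) * ((k + 0.5) * h) ** 2) for k in range(n_steps))
    return 2 / math.sqrt(math.pi) * total * h

for t in (0.0, 0.5, 3.0):
    exact = 1 / math.sqrt(1 + t)     # the claimed Laplace transform (1+t)^{-1/2}
    print(abs(gamma_half_laplace(t) - exact) < 1e-6)
```

The Gaussian substitution also explains why the transform has the square-root form: the Gamma(1/2, 1) variable is the square of a centered Gaussian of variance 1/2.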

The recurrent and transient cases: proofs of Propositions 2.10 and 2.11
We now consider two particular cases for the weighted graph (G, W). The first one is when the VRJP on (G, W) is a.s. recurrent, in which case the representation is unique, as proved below.
Proof of Proposition 2.10. We assume that (G, W ) is such that the VRJP is almost surely recurrent.
Let (r_{i,j})_{i∼j} be fixed jump rates on V, such that the associated Markov chain is recurrent. We denote by P^r_{i_0} its distribution when started at i_0. Note that under P^r_{i_0}, the time spent at a vertex i before jumping is an exponential variable with parameter r_i, and the probability of then jumping to a specific neighbor j is r_{i,j}/r_i.
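This observation is what makes the recurrent case rigid: the trajectory visits every vertex infinitely often, so the environment can be read off from it by the law of large numbers. A simulation sketch, where the star graph and its rates are invented for the demonstration:

```python
import random
random.seed(1)

# Recurrent jump process on the star {0, 1, 2} with center 0; illustrative rates.
rates = {(0, 1): 1.2, (0, 2): 0.8, (1, 0): 1.0, (2, 0): 1.0}
r = {0: 2.0, 1: 1.0, 2: 1.0}          # r_i = sum of outgoing rates at i

holding, jumps = [], []
state = 0
for _ in range(200000):
    if state == 0:
        holding.append(random.expovariate(r[0]))   # holding time ~ Exp(r_0)
        nxt = 1 if random.random() < rates[(0, 1)] / r[0] else 2
        jumps.append(nxt)
        state = nxt
    else:
        state = 0                                   # leaves always jump back to 0

# The trajectory determines the rates: r_0 from holding times, r_{0,1} from jumps.
r0_hat = 1 / (sum(holding) / len(holding))
p01_hat = jumps.count(1) / len(jumps)
print(abs(r0_hat - 2.0) < 0.05, abs(p01_hat * r0_hat - 1.2) < 0.05)
```

Averaging the observed holding times and jump choices at each vertex is exactly the estimator that the uniqueness proof below turns into an almost-sure identity.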
Let us then define the following functions of the trajectory (Z_t): for i ∈ V and n ≥ 1, we define δt^{(n)}_i as the time spent by (Z_t) at the vertex i during its n-th visit to i, and v^{(n)}_i the neighbor of i towards which the process jumps after its n-th visit to i. Under P^r_{i_0}, since the process is recurrent, these random variables are well-defined for all i ∈ V and n ≥ 1. Moreover, conditionally on r, the sequences (δt^{(n)}_i)_{n≥1} and (v^{(n)}_i)_{n≥1} are i.i.d., with δt^{(n)}_i exponential of parameter r_i and P[v^{(n)}_i = j] = r_{i,j}/r_i; by the law of large numbers, the jump rates (r_{i,j})_{i∼j} are therefore a.s. measurable functions of the trajectory, so the mixing measure is uniquely determined.

The second interesting case is when the VRJP is a.s. transient. In this case, we introduce a random conductance model associated with ψ, which defines a transient random walk. We can then relate H_β-harmonic functions to harmonic functions for this walk. This will be useful to study the H_β-harmonic function appearing in the expression of any representation, according to Theorem 2.9.
Let us now consider the random conductance model with conductances c^ψ_{i,j} = W_{i,j} ψ(i)ψ(j). We denote by π^ψ_i = Σ_{j∼i} c^ψ_{i,j} the corresponding invariant measure, where π^ψ_i = ψ(i) Σ_{j∼i} W_{i,j} ψ(j) = 2β_i ψ(i)², since ψ is H_β-harmonic. Let P^ψ be the distribution of the associated random walk, whose transition probability from i to j is p^ψ_{i,j} = c^ψ_{i,j}/π^ψ_i. Moreover, let us denote by g^ψ the Green kernel associated with P^ψ, defined for i, j ∈ V as g^ψ(i, j) = Σ_{k∈N} P^ψ_i[X_k = j], where (X_k)_{k∈N} denotes the canonical process on V^N. Then we have g^ψ(i, j) = 2β_j ψ(j) Ĝ(i, j)/ψ(i), where under ν^W_V(dβ), Ĝ(i, j) is a.s. finite for all i, j ∈ V, from Theorem 2.6 (i). As a result, we have almost surely g^ψ(i, j) < ∞, therefore the random walk P^ψ is almost surely transient, proving (ii).
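For intuition, the following sketch checks on a small graph (the weights and the positive function ψ are illustrative, not H_β-harmonic) that π^ψ is reversible, hence invariant, for the conductance walk p^ψ_{i,j} = c^ψ_{i,j}/π^ψ_i; this holds for any positive conductances:

```python
# Detailed balance for a conductance model: pi_i p_{i,j} = c_{i,j} is symmetric.
V = range(3)
W = [[0.0, 1.0, 0.8], [1.0, 0.0, 0.5], [0.8, 0.5, 0.0]]   # illustrative weights
psi = [1.0, 0.7, 1.3]                                      # illustrative positive function

c = [[W[i][j] * psi[i] * psi[j] for j in V] for i in V]    # c_{i,j} = W_{i,j} psi_i psi_j
pi = [sum(c[i][j] for j in V) for i in V]
p = [[c[i][j] / pi[i] for j in V] for i in V]

detailed_balance = all(abs(pi[i] * p[i][j] - pi[j] * p[j][i]) < 1e-12
                       for i in V for j in V)
invariant = all(abs(sum(pi[i] * p[i][j] for i in V) - pi[j]) < 1e-12 for j in V)
print(detailed_balance, invariant)
```

Reversibility is the only structural input here; the specific choice c^ψ_{i,j} = W_{i,j}ψ(i)ψ(j) matters for the identity π^ψ_i = 2β_iψ(i)², which uses the harmonicity of ψ.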
Let Δ^ψ = (p^ψ_{i,j} − 1_{{i=j}})_{i,j∈V} be the discrete Laplacian associated with P^ψ. We will say that a function φ : V → R is Δ^ψ-harmonic if (Δ^ψφ)(i) = Σ_{j∼i} p^ψ_{i,j} φ(j) − φ(i) = 0 for all i ∈ V. Therefore, a function φ is Δ^ψ-harmonic if and only if, for any i ∈ V, 2β_i ψ(i)φ(i) = Σ_{j∼i} W_{i,j} ψ(j)φ(j), i.e. if and only if ψφ is H_β-harmonic, which concludes the proof of (iii).

Representations of the VRJP on Z d : proof of Theorem 2.13
Let us now consider the case where G = (V, E) is the Z^d lattice, endowed with constant edge weights, i.e. W_{i,j} = W > 0 for all i ∼ j. For x ∈ R^d, we will denote by |x| its Euclidean norm. We fix i_0 = 0.
The aim of this section is to prove Theorem 2.13, concerning the uniqueness of the representation on (Z d , W ). Let us first distinguish two regimes for the weighted graph: when the VRJP is a.s. recurrent and when it is a.s. transient.

Recurrence and transience of the VRJP on Z d
For d = 2, the VRJP on (G, W) is a.s. recurrent for all W > 0, according to Theorem 2 in [19]. Therefore, the representation of P^{VRJP}_0 as a mixture of Markov jump processes is unique (see Proposition 2.10). If d ≥ 3, Corollary 1 in [21] tells us that for small enough W, the VRJP is a.s. recurrent, in which case the representation of P^{VRJP}_0 is once again unique. Let us now show that for large enough W, even though the VRJP is almost surely transient, the representation is still unique.
From Corollary 3 in [21], we know that for W large enough, the VRJP is a.s. transient. From now on, we consider such a W. Then, thanks to Proposition 2.11, under ν^W_V(dβ) we have a.s. ψ(i) > 0 for all i ∈ V. Moreover, we can define the Markov operator Δ^ψ, and for h : V → R_+, h is H_β-harmonic if and only if h/ψ is Δ^ψ-harmonic. In light of Remark 2.12, in order to show that the representation of the VRJP is unique, we need to show that the only positive Δ^ψ-harmonic functions are the constants, i.e. that the Martin boundary M^ψ associated with Δ^ψ is almost surely trivial. To do this, we will need a local limit theorem in random environment, found in [1].

Local limit theorem for random walk in random conductances
Let us consider a random conductance model on G = (Z^d, E_d), with d ≥ 2. Let P be a distribution on the set of conductances (R_+^*)^{E_d}, such that under P(dω), we have a.s. 0 < ω_{i,j} < ∞ for all i ∼ j. For ω ∈ (R_+^*)^{E_d}, let P^ω be the distribution of the continuous-time constant-speed random walk associated with ω. This is the Markov jump process with jump rate from i to j given by ω_{i,j}/π^ω_i, where π^ω_i = Σ_{j∼i} ω_{i,j}. This way, under P^ω, the holding time of (Z_t)_{t≥0} at each point is an exponential variable of parameter 1, which justifies the term "constant speed". Finally, we denote by q^ω the heat kernel, i.e. the transition density of the walk with respect to π^ω: for x, y ∈ Z^d and t ≥ 0, q^ω(t, x, y) = P^ω_x[Z_t = y]/π^ω_y. The following theorem from [1] is a local limit theorem for q^ω, under ergodicity and integrability assumptions.
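Normalizing by π^ω has a concrete consequence: by reversibility of the constant-speed walk with respect to π^ω, the heat kernel is symmetric, q^ω(t, x, y) = q^ω(t, y, x). A sketch on a 3-cycle (conductances illustrative), computing the semigroup by a truncated exponential series:

```python
# Symmetry of the heat kernel q(t,x,y) = P_x[Z_t = y]/pi_y for a constant-speed
# conductance walk, checked on a 3-cycle via exp(tL) computed by its power series.
V = range(3)
omega = [[0.0, 1.0, 0.8], [1.0, 0.0, 0.5], [0.8, 0.5, 0.0]]  # illustrative conductances
pi = [sum(omega[i][j] for j in V) for i in V]
# Generator: off-diagonal rate omega_{i,j}/pi_i, total jump rate 1 at every vertex.
L = [[omega[i][j] / pi[i] - (i == j) for j in V] for i in V]

def matexp(A, t, terms=40):
    """exp(tA) by the truncated power series sum_k (tA)^k / k!."""
    n = len(A)
    P = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = [[sum(term[i][l] * A[l][j] for l in range(n)) * t / k
                 for j in range(n)] for i in range(n)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

Pt = matexp(L, 1.5)
q = [[Pt[x][y] / pi[y] for y in V] for x in V]
print(all(abs(q[x][y] - q[y][x]) < 1e-10 for x in V for y in V))
```

The symmetry comes from π_i (ω_{i,j}/π_i) = ω_{i,j} being symmetric in (i, j), i.e. detailed balance for the generator.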
Theorem 5.1 (Theorem 1.11 in [1]). Let us assume that P(dω) is stationary and ergodic with respect to translations of Z^d, and that there exist p, q ∈ (1, ∞] satisfying 1/p + 1/q < 2/d such that E[ω_{i,j}^p] < ∞ and E[ω_{i,j}^{−q}] < ∞ for all i ∼ j. Then for 0 < T_1 < T_2 and K > 0, we have P-a.s. lim_{n→∞} sup_{|x|≤K} sup_{t∈[T_1,T_2]} |n^d q^ω(n²t, 0, ⌊nx⌋) − a k_t(x)| = 0, where a = 1/E[π^ω_0] and k_t is the transition density of a Brownian motion with some deterministic covariance matrix Σ². Remark 5.2. If the distribution P(dω) is also invariant with respect to all permutations of the coordinates in Z^d, then the law of the limiting Brownian motion must be as well. Therefore its deterministic covariance matrix has the form Σ² = σ² I_d, where σ² > 0.
This also provides a local limit theorem for the Green kernel g^ω, defined for ω ∈ (R_+^*)^{E_d} and x, y ∈ Z^d by g^ω(x, y) = ∫_0^∞ q^ω(t, x, y) dt. This result was also mentioned in [1]; we give here the details of the proof of a slightly stronger result, which ensures uniform convergence for x in an annulus: P-a.s., n^{d−2} g^ω(0, ⌊nx⌋) converges to a g_BM(x), uniformly for 1 ≤ |x| ≤ 2, where g_BM is the Green kernel associated with the Brownian motion with covariance matrix Σ².
Proof. This result is obtained by integrating in Theorem 5.1 over t. Moreover, we will need the following bounds on q^ω, which hold almost surely.
Firstly, Theorem 1.6 in [2] gives a short-range bound, which also applies to k_t: P-a.s. there are constants C, c_1, c_2 > 0 such that the bound holds for t ≥ Cn|x| and for all t ≥ 0. Now, for a long-range bound: according to the proof of Theorem 10 in [7], for any λ > 0, and any t ≥ 0, x ∈ Z^d and n ∈ N, we have a bound in terms of the graph distance d_G on Z^d. Therefore, for |x| ≥ 1 and t ≤ 2Cn, and for n large enough so that d_G(0, ⌊nx⌋) ≥ n/2, the bound simplifies. As a result, there exists λ_0 > 0 small enough so that, for such x, n and t, the exponential term dominates. Hence there are constants c_3 > 0 and N_0 ∈ N such that for |x| ≥ 1, n ≥ N_0 and t ≤ 2Cn, the kernel is exponentially small in n. Note that the integrability assumption, combined with the ergodic theorem, provides P-a.s. constants c_4 > 0 and N_1 ≥ N_0 such that the required volume estimates hold for n ≥ N_1. This finally yields the following bound: for 1 ≤ |x| ≤ 2, n ≥ N_1 and t ≥ 2Cn, we have P-a.s. q^ω(t, 0, ⌊nx⌋) ≤ c_5 n^d e^{−c_3 n|x|}.

Using these bounds, we can now show that for n ≥ N_1 and 1 ≤ |x| ≤ 2, the contribution of times outside [T_1 n², T_2 n²] to the integral defining n^{d−2} g^ω(0, ⌊nx⌋) is P-a.s. uniformly small, so that from Theorem 5.1, there exists N ≥ N_1, independent of x, such that the approximation of Theorem 5.3 holds for n ≥ N, P-almost surely.
Remark 5.4. Let us fix conductances ω ∈ (R_+^*)^{E_d}. We denote by (Z_n)_{n∈N} the discrete skeleton of (Z_t)_{t≥0}. Then, for x, y ∈ Z^d, π^ω_y g^ω(x, y) is the Green kernel associated with (Z_n)_{n∈N} under P^ω_x. Indeed, since under P^ω_x the holding time of Z at each point is an exponential variable of parameter 1, the expected time spent by (Z_t)_{t≥0} at y is exactly the expected number of visits of y by (Z_n)_{n∈N}.

Martin boundary associated with ∆ ψ
We return to the VRJP on Z^d, d ≥ 3, with constant initial conductances W large enough so that the VRJP is almost surely transient. From Proposition 2.11, under ν^W_V(dβ), we then have a.s. ψ(i) > 0 for all i ∈ V. Moreover, the random conductance model associated with the conductances c^ψ_{i,j} = W_{i,j}ψ(i)ψ(j) almost surely defines a transient random walk. We still denote by Δ^ψ the discrete Laplacian, and define π^ψ_i = Σ_{j∼i} c^ψ_{i,j} = 2β_iψ(i)², as well as g^ψ the corresponding Green kernel. We want to identify the Martin boundary M^ψ associated with Δ^ψ, by studying the behavior at infinity of the Martin kernel K^ψ, defined by K^ψ(x, y) = g^ψ(x, y)/g^ψ(0, y) for all x, y ∈ Z^d. In order to do this, we will use Theorem 5.3.
Proof. The proof is the same as for Lemma 7(i) in [23], which states the above result in the case p = 1. It uses a delocalization theorem for the supersymmetric hyperbolic sigma model, from [12]. Let us give an outline of the proof.
For n ∈ N, define V_n = {−n, ..., n}^d, and let G^{(n)} be the restriction of G to V_n with wired boundary conditions, i.e. G^{(n)} = (V_n ∪ {δ_n}, Ẽ^{(n)}) (see Section 2.1 for more details). Recall that for i ∈ V_n, ψ(i) was defined in Theorem 2.6 as the limit of ψ^{(n)}(i) = Ĝ^{(n)}(δ_n, i)/Ĝ^{(n)}(δ_n, δ_n), where Ĝ^{(n)} is the Green function associated with a β^{(n)} potential on the graph (G^{(n)}, W̃^{(n)}).
The field u^{(n)} associated with this potential, rooted at δ_n, has the same distribution as in the supersymmetric hyperbolic sigma model on (G^{(n)}, W̃^{(n)}), rooted at δ_n.
This model was studied in [11] and [12] by Disertori, Spencer and Zirnbauer. In particular, Theorem 1 of [12] is a delocalization result for the model, and states that the fluctuations of u^{(n)} (rooted at 0) are uniformly bounded in n: for any m > 0 and for all W large enough (depending on m), a uniform bound holds for all n ∈ N and x, y ∈ V_n. In the proof of Lemma 7 (i) in [23], Sabot and Zeng showed that this is still true when rooting the model at δ_n, and that this implies the existence of W_1 > 0 such that the conclusion holds for W > W_1. Since the delocalization result is true for arbitrarily large m, the same proof can be adapted to show Lemma 5.6.
We now fix the threshold W̄ given by Lemma 5.6 applied with p = d + 1. From now on, we assume that W > W̄, so that, thanks to Lemma 5.6, the assumptions of Theorem 5.3 are satisfied under ν^W_V(dβ) by the conductances c^ψ, with a = 1/E[π^ψ_0] and g_BM the Green kernel associated with a Brownian motion with covariance matrix σ²I_d.
Using this result, we have ν^W_V-almost surely: g^ψ(0, y_n) ∼_{n→∞} a π^ψ_{y_n} g_BM(0, y_n), for any sequence (y_n)_{n≥1} such that |y_n| → ∞. Indeed, for such a sequence (y_n), let us define m_n = |y_n| and z_n = y_n/m_n; then 1 ≤ |z_n| ≤ 2 for all n ≥ 1, and the uniform convergence applies. Moreover, for fixed x ∈ Z^d, let ψ_x be the translated function defined by ψ_x(y) = ψ(y − x). Then ψ_x and ψ have the same distribution under ν^W_V(dβ), therefore we have ν^W_V-a.s., for all (y_n)_{n≥1} such that |y_n| → ∞, g^ψ(x, y_n) = g^{ψ_x}(0, y_n − x) ∼_{n→∞} a π^ψ_{y_n} g_BM(0, y_n − x), since |y_n − x| → ∞ and π^{ψ_x}_{y_n−x} = π^ψ_{y_n}. Let us denote by A_x the ν^W_V-almost sure event where this is true. Since Z^d is countable, ∩_{x∈Z^d} A_x is still ν^W_V-almost sure. Therefore, we have ν^W_V-a.s. that for all x ∈ Z^d, and all (y_n)_{n≥1} such that |y_n| → ∞, K^ψ(x, y_n) = g^ψ(x, y_n)/g^ψ(0, y_n) → 1, since g_BM(0, y_n − x)/g_BM(0, y_n) → 1. As a result, from Remark 3.8, the Martin boundary associated with Δ^ψ is ν^W_V-a.s. trivial.
Let R(dr) be the distribution of an environment representing P^{VRJP}_0 on Z^d, endowed with constant initial conductances W > W̄.
For r ∈ J^E_V and γ > 0, we define β by β_i = r_i + 1_{{i=0}}γ. According to Theorem 2.9, under R(dr, dγ) we then have β ∼ ν^W_V. We define Ĝ and ψ as functions of β, as in Theorem 2.6, and we can write r_{i,j} = (W_{i,j}/2) G(0, j)/G(0, i), where G(0, i) = Ĝ(0, i) + h(i) for all i ∈ Z^d, with h an H_β-harmonic function. Since W is large enough so that under ν^W_V(dβ), ψ(i) > 0 for all i ∈ Z^d almost surely, the operator Δ^ψ is well-defined, and h/ψ is Δ^ψ-harmonic. However, according to Proposition 5.5, the Martin boundary associated with Δ^ψ is ν^W_V-a.s. trivial, therefore positive Δ^ψ-harmonic functions are a.s. constant. Hence there exists a random variable g ≥ 0 such that for all i ∈ Z^d, we have R-a.s. G(0, i) = Ĝ(0, i) + gψ(i).
In particular, g = (G(0, 0) − Ĝ(0, 0))/ψ(0), so g can be written as a function of (β, 1/(2G(0, 0))), and therefore as a function of ((r_i)_{i∈Z^d}, γ). Since, according to Proposition 4.1, under R(dr, dγ) the distribution of ((r_i)_{i∈Z^d}, γ) does not depend on the chosen representation R, this shows that the distribution of the jump rates r_{i,j} = (W_{i,j}/2) G(0, j)/G(0, i) is uniquely determined, i.e. that the representation is unique.
Remark 5.7. Note that we can identify the distribution of g using the standard representation. This shows that under R(dr, dγ), we have g = ψ(0)/(2γ'), where γ' is a Gamma(1/2, 1) random variable independent of (β_i)_{i∈Z^d}.

Construction of new representations on trees
Let now T = (T, E) be an infinite tree, that is locally finite. We fix an arbitrary root φ, and endow the edges of T with positive weights (W e ) e∈E . The aim of this section is to prove points (i) and (ii) of Theorem 2.18, which introduces a family of new representations for the VRJP on infinite trees.
We will proceed as in the construction of the standard representation, by considering the VRJP and the associated β potential on finite restrictions of the tree, with the new boundary conditions introduced in Section 2.3.2. Theorem 2.2 provides an expression for the representation on these finite graphs G^{(n)}_m, and we will show that the associated jump rates converge to a representation on the infinite graph.

Representation of the VRJP on G^{(n)}_m
We endow this graph with edge weights W̃^{(n)}_m. According to Theorem 2.2, the mixing measure for the VRJP on (G^{(n)}_m, W̃^{(n)}_m) is given by a random potential β^{(n)}_m. As with the standard representation, we will construct a coupling of such potentials for all n ∈ N, using the potential defined on the whole infinite tree with distribution ν^W_V. For β ∈ D^W_T, we still define H_β = 2β − W. For n ∈ N, let us take V_n = T^{(n)} in Definition 2.5, so that we get Ĝ^{(n)} = Ĝ_{T^{(n)}} = ((H_β)_{T^{(n)},T^{(n)}})^{−1} and ψ^{(n)} = Ĝ^{(n)}η^{(n)}.
We denote by H^{(n)}_m the Schrödinger operator associated with β^{(n)}_m. Let us first simplify the expressions of these parameters. Given the definition of W̃^{(n)}_m, note that Ŵ^{(n)}_m has the same expression as in the statement of the lemma. Let us now show that under ν^W_T(dβ)ν^{W̌}_{B_m}(dβ'), the field β^{(n)}_m has the announced distribution. Moreover, conditionally on (β, β'), the jump rate from i to j is the one given in the statement.
In order to obtain a representation on the infinite tree T, we will need to show that the Green function G^{(n)}_m converges as n → ∞.

Lemma 6.2. For i, j ∈ T, let n_0 ≥ m be such that i, j ∈ T^{(n_0)}. Then for n ≥ n_0, G^{(n)}_m(i, j) decomposes into the contribution Ĝ^{(n)}(i, j) of paths avoiding B_m and the contribution of paths hitting B_m.

Proof. We show this result by expressing G^{(n)}_m as a sum over paths in T. We will use notations and results presented in Section 3.2.
For n ≥ n_0, we apply Proposition 3.4 (i) to β^{(n)}_m. The resulting sum over paths can be decomposed as follows: a path σ from i to j can either hit some vertex in B_m, or never hit any vertex in B_m, in which case σ ∈ P^{T^{(n)}}_{i,j}. As a result, the decomposition follows from Proposition 3.4. Note moreover that for x ∈ D^{(m)} and y ∈ T^{(n)}, the corresponding boundary weights can be identified thanks to Definition 2.16. We will show that the distribution of G^{(n)}_m under ν^W_T(dβ)ν^{W̌}_{B_m}(dβ') converges when n → ∞. From Theorem 2.6 (i), we already know that Ĝ^{(n)}(i, j) converges a.s. to Ĝ(i, j); let us now study the respective limits of the remaining factors χ^{(n)}_m.

(i)
Let us still consider a fixed generation m ∈ N. We will show that χ^{(n)}_m(i, δ_x) converges a.s. for all x ∈ D^{(m)} and i ∈ T. Moreover, the limit has a simple expression in terms of the harmonic measures associated with the random Markov operator Δ^ψ. We will first describe the Martin boundary associated with Δ^ψ, and the harmonic measures (µ^ψ_i)_{i∈T}. First, let us fix β ∈ D^W_T, and consider the function ψ defined in Theorem 2.6. We either have ψ(i) > 0 for all i ∈ T, or ψ ≡ 0. In the first case, we can define the conductances (c^ψ_{i,j})_{i∼j} as in Proposition 2.11, as well as the corresponding Markov operator Δ^ψ. Recall that a function h is Δ^ψ-harmonic if and only if ψh is H_β-harmonic. The associated random walk is transient, since the associated Green function g = g^ψ, given by g^ψ(i, j) = 2β_jψ(j)Ĝ(i, j)/ψ(i) for i, j ∈ T, is finite. This allows us to apply the results regarding the Martin boundary of a tree.
From Theorem 3.12, the Martin boundary M^ψ associated with Δ^ψ is the set Ω of ends of T. Note that it does not depend on β. We also get the Martin kernel K = K^ψ, for x ∈ T and ω ∈ Ω. Moreover, we denote by (µ^ψ_i)_{i∈T} the associated family of harmonic measures on Ω. From Proposition 3.13, we have an explicit expression of µ^ψ_i(Ω_x) for i, x ∈ T.
Note that we have only defined (µ^ψ_y)_{y∈T} for β ∈ D^W_T such that ψ > 0. In the other case, we adopt the convention that µ^ψ_y is the null measure on Ω for all y ∈ T. Let us now show that almost surely, for all x ∈ D^{(m)} and i ∈ T, χ^{(n)}_m(i, δ_x) converges.

Proof of Theorem 2.18 (i). From Theorem 2.6, we know that ν^W_T(dβ)-almost surely, for all i, j ∈ T, Ĝ^{(n)}(i, j) converges to Ĝ(i, j) and ψ^{(n)}(i) converges to ψ(i). Let β ∈ D^W_T be such that these convergences hold. Let us show that for such β, and for all x ∈ D^{(m)} and i ∈ T, χ^{(n)}_m(i, δ_x) converges to ψ(i)µ^ψ_i(Ω_x); this will show that the convergence holds ν^W_T-almost surely.
In the case ψ ≡ 0, the convergence is immediate. We now assume that β is such that ψ(i) > 0 for all i ∈ T.
Let us fix i ∈ T and x ∈ D^{(m)}. Recall that for n ≥ max(|i|, m), χ^{(n)}_m(i, δ_x) is given by a sum over paths from i to the vertices y ∈ T_x ∩ D^{(n)}. Let us decompose the paths σ ∈ P^{T^{(n)}}_{i,y}, in order to write χ^{(n)}_m(i, δ_x) as a function of F̂^{(n)} and ψ^{(n)}. We will distinguish two cases.
The first case is when i ∉ U_x = T_x\{x}. Then for all y ∈ T_x ∩ D^{(n)}, any path from i to y in T^{(n)} necessarily visits x, i.e. P^{T^{(n)}}_{i,y} = P^{T^{(n)}}_{i,{x},y}. Therefore, from Proposition 3.4 (iii), the corresponding sum factorizes at x. Let us express Ĝ^{(n)}(x, y) in a more convenient way.
Denote by x̄ the parent of x, and decompose the paths from x to y according to their number of visits to x̄. If a path σ ∈ P^{T^{(n)}}_{x,y} is such that c_{x̄}(σ) = C ≥ 1, then it has to visit x̄ at least once; as a result, σ can be written as the concatenation of a path σ_1 ∈ P^{T^{(n)}\{x̄}}_{x,x̄} with a path σ_2 ∈ P^{T^{(n)}}_{x̄,y} such that c_{x̄}(σ_2) = C − 1. Since x̄ ∉ U_x, the path σ_2 has to visit x, so it can be written as the concatenation of a path σ_3 ∈ P^{T^{(n)}\{x}}_{x̄,x} with a path σ_4 ∈ P^{T^{(n)}}_{x,y} such that c_{x̄}(σ_4) = C − 1. This holds for all C ≥ 1. Moreover, note that the paths σ ∈ P^{T^{(n)}}_{x,y} such that c_{x̄}(σ) = 0 are those that stay in the subtree T^{(n)}_x, i.e. the set P^{T^{(n)}_x}_{x,y}. By induction, we can sum the resulting geometric series: since Ĝ^{(n)}(x, y) < ∞, we have F̂^{(n)}(x, x̄)F̂^{(n)}(x̄, x) < 1, which gives the expected result.
From Lemma 6.3, we get the corresponding expression of χ^{(n)}_m(i, δ_x). In order to express this last sum, recall that P^{T^{(n)}}_{x,y} = P^{T^{(n)}}_{x,{x̄},y} ∪ P^{T^{(n)}_x}_{x,y}, where we have separated the paths that go from x to y by visiting x̄ from those that stay in T^{(n)}_x. From Proposition 3.4 (iii), the first set of paths contributes a factor F̂^{(n)}(x, x̄).
Moreover, if y ∈ D^{(n)}\T_x, then P^{T^{(n)}_x}_{x,y} is empty. As a result, we get the announced expression of χ^{(n)}_m(i, δ_x).
In the second case, i.e. if i ∈ U_x, then for y ∈ T_x ∩ D^{(n)}, there are paths from i to y in T^{(n)} that do not visit x. More precisely, we can partition P^{T^{(n)}}_{i,y} according to whether the path visits x or not. As a result, we get an analogous decomposition of the sum over y.
In the same way as above, we can show the corresponding identity for all i, j ∈ T. As a result, we finally obtain the convergence of χ^{(n)}_m(i, δ_x) to ψ(i)µ^ψ_i(Ω_x). Let us also define, for all i ∈ T, the measure χ(i, ·) = ψ(i)µ^ψ_i. Note that χ(i, ·) is absolutely continuous with respect to χ(φ, ·), and its Radon-Nikodym derivative is ω ↦ F̂(i, i ∧ ω).

Convergence to a representation on T : proof of Theorem 2.18 (ii)
To show that the distribution of the jump rates converges, we first study the boundary term. For ρ_m ∈ D^{Č_m}_{B_m}, we define Ǧ_m = (2ρ_m − Č_m)^{−1}. Then the distribution of (G^{(n)}_m)_{B_m,B_m} under ν^W_T(dβ)ν^{W̌}_{B_m}(dβ') converges weakly, when n → ∞, to the distribution of Ǧ_m under ν^W_T(dβ)ν^{Č_m}_{B_m}(dρ_m), which we will also denote by ν^W_{T,B_m}(dβ, dρ_m).
Proof. We can write (G^{(n)}_m)_{B_m,B_m} as the inverse of a Schur complement, and apply a suitable change of variables. Let us fix β ∈ D^W_T, as well as x ≠ y ∈ D^{(m)}, and i ∼ δ_x, j ∼ δ_y. A path from i to j in T^{(n)} necessarily crosses x ∧ y, since i ∈ T_x and j ∈ T_y.

Proof. Let η ∈ R^V_+ be fixed. We compute the Laplace transform of ⟨η, Gη⟩, using that the relevant density is a probability measure. Let us now compute, for γ ∼ Gamma(1/2, 1), the Laplace transform of 1/(2γ), using that the corresponding density is that of an Inverse Gaussian variable. This proves the result.
Proof of Theorem 2.18 (iii). We will use Lemma 7.1 to prove that the sequence of random jump rates (r^{(m),φ})_{m∈N} is tight, then identify the only possible limit distribution for each converging subsequence, which will provide both the weak convergence and the expression of the limit distribution.
For m ≥ 0 and β ∈ D^W_T, let us define, for i ∈ T, the vector μ̂^{(m)}_i = (χ(i, Ω_x))_{x∈D^{(m)}}. Then, for ρ_m ∈ D^{Č_m}_{B_m} and i, j ∈ T, we denote by a^{(m)}_{i,j} the corresponding boundary term. Therefore, we can write (G_m(i, j))_{i,j∈T} = Φ((Ĝ(i, j))_{i,j∈T}, (ψ(i))_{i∈T}, (a^{(m)}_{i,j})_{i,j∈T}), where Φ is a continuous function.
As a result, there is an extraction (m_k)_{k∈N} such that (Z_{m_k})_{k∈N} converges in distribution under ν^W_T(dβ, dρ). Since G_{m_k} = Φ(Z_{m_k}) where Φ is continuous, (G_{m_k}(i, j))_{i,j∈T} also converges in distribution under ν^W_T(dβ, dρ), as do the random jump rates (r^{(m_k),φ}_{i,j})_{i,j∈T}. Let us show that the limit distribution of the environment does not depend on the extraction, which will imply the convergence of the whole sequence.

Proof. Let us fix 1 ≤ m ≤ n. For i ∈ T^{(m)}\{φ}, we denote by ī the parent of i, and g_i = G^{(n)}_m(φ, i)/G^{(n)}_m(φ, ī). Since |i| ≤ m, any path in G^{(n)}_m from φ to i crosses ī, so Proposition 3.4 (ii) and (iii) give an expression of g_i. For i ∈ T^{(m)}\{φ}, let us denote by T̃_i the connected component of i in T̃_m. This way, we get a factorization of G^{(n)}_m(φ, i) over the ancestors of i. To prove that the (g_i)_{i∈T^{(m)}\{φ}} are independent, it will be enough to see that for i ∈ T^{(m)}\{φ}, g_i is independent of g_{U^{(m)}_i}, and that for x ∈ T^{(m−1)}, the restrictions (g_{T̃_i})_{i∈S(x)} are independent.
Writing G^{(n)}_m(i, i) as a Schur complement, we see that, if we set Ũ_i = T̃_i\{i}, then by a change of variables, the distribution of g_i conditionally on β^{(n)}_{Ũ_i} is explicit. Moreover, for x ∈ T^{(m−1)}, the sets (T̃_i)_{i∈S(x)} are all at distance 2 from one another in G^{(n)}_m. Since β^{(n)} is 1-dependent, the restrictions (β^{(n)}_{T̃_i})_{i∈S(x)} are independent. For j ∈ T̃_i, we have T̃_j ⊂ T̃_i, so g_j is β^{(n)}_{T̃_i}-measurable. Therefore the restrictions (g_{T̃_i})_{i∈S(x)} are independent, which concludes the proof.
We can now use Lemma 7.3 to show that any converging subsequence of (G_m)_{m∈N} has the same limit in distribution, which corresponds to the representation from Theorem 2.14.
For m ≥ 1, the distribution of the ratios g_i is therefore explicit, with g_i ∼ IG(W_{ī,i}, 1) for i ∈ T^{(m)}\{φ}. Recall that the random environment associated with G_m is given by the following jump rates: r^{β,ρ_m,φ}_{i,ī} and r^{β,ρ_m,φ}_{ī,i} for all i ∈ T\{φ}.
Let now (m_k)_{k∈N} be an extraction such that under ν^W_T(dβ, dρ), (r^{β,ρ_{m_k},φ}_{i,j})_{i,j∈T} converges in distribution to a limit environment (r_{i,j})_{i,j∈T}, which implies that the ratios (g_i) converge as well. The random environment given by these jump rates is in fact the one described in Theorem 2.14, hence its distribution does not depend on the extraction (m_k)_{k∈N}. Since the sequence of jump rates ((r^{β,ρ_m,φ}_{i,j})_{i,j∈T})_{m≥1} is tight, this implies that under ν^W_T(dβ, dρ), it converges in distribution to the random environment given in Theorem 2.14.

Distinct representations on a regular tree: proofs of Propositions 2.15 and 2.19
Let us start by proving that on regular trees where the VRJP is transient, the standard representation and the one given in Theorem 2.14 are different.
Proof of Proposition 2.15. Let $T = (T, E)$ be a $d$-regular tree, where $d \geq 3$. It was shown in [9] that there exists $\overline{W} > 0$ such that for $W > \overline{W}$, the VRJP on $T$ endowed with constant conductances $W$ is almost surely transient. Note that the VRJP is defined in a slightly different manner in [9], but it can be related to the definition used here thanks to a time rescaling described in Appendix B of [20]. From now on, we take $W > \overline{W}$.
We consider jump rates $(r_{i,j})_{i\sim j}$ on the tree $T$. Let $\phi$ be an arbitrary root for $T$, and let $(i_k)_{k\geq 0}$ be an infinite self-avoiding path (or ray) in $T$ such that for $k \geq 0$, $|i_k| = k$. Let us define $S_n = \prod_{k=1}^n \frac{2}{W} r_{i_{k-1},i_k}$. We will compare the distribution of $S_n$ under two distributions of jump rates.
Let $R_{ind}(dr)$ be the distribution of jump rates in the representation described in Theorem 2.14. Under $R_{ind}(dr)$, we know that $S_n$ has the distribution of $\prod_{k=1}^n A_{i_k}$, where the $A_{i_k}$ are independent inverse Gaussian variables with parameter $(W, 1)$. Note that $E[A_1] = 1$, so by Jensen's inequality, $E[\log(A_1)] < 0$. By the law of large numbers, we then have a.s. that $\frac{1}{n}\log S_n \to E[\log(A_1)] < 0$, and therefore $S_n \to 0$ a.s.

Let now $R_{st}(dr)$ be the distribution of jump rates in the standard representation of the VRJP started at $\phi = i_0$. Under $R_{st}(dr)$, Theorem 2.6 tells us that $S_n$ has the same distribution as
$$\prod_{k=1}^n \frac{G(i_0, i_k)}{G(i_0, i_{k-1})} = \frac{G(i_0, i_n)}{G(i_0, i_0)} = \frac{\hat{G}(i_0, i_n) + \frac{1}{2\gamma}\psi(i_0)\psi(i_n)}{G(i_0, i_0)}$$
under $\nu^W_V(d\beta, d\gamma)$, where according to Proposition 2.11, $\psi(i) > 0$ a.s. for all $i \in T$. Moreover, since the distribution of $\psi$ under $\nu^W_V(d\beta)$ is stationary for the group of transformations of $T$ (see Proposition 3 in [23]), $\psi(i_n)$ has the same distribution as $\psi(i_0)$ for all $n \in \mathbb{N}$, and cannot tend to 0 a.s. when $n \to \infty$. Therefore, neither can $S_n$ under $R_{st}(dr)$, which proves that $R_{st}$ and $R_{ind}$ are different.
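The first half of this argument can be illustrated numerically. Sampling i.i.d. inverse Gaussian variables of mean $1$ (via NumPy's `wald`; here we assume the parametrization $IG(W,1)$ corresponds to mean $1$ and shape $W$, consistent with $E[A_1] = 1$, and $W = 2$ is an arbitrary illustrative value), the empirical mean of $\log A_k$ is negative, so the partial products $S_n = \prod_{k\leq n} A_k$ decay geometrically:

```python
import numpy as np

rng = np.random.default_rng(1)
W = 2.0       # illustrative conductance value (assumption, not from the paper)
n = 200_000   # number of factors along the ray

# Inverse Gaussian with mean 1 and shape W (assumed reading of IG(W, 1)).
A = rng.wald(1.0, W, size=n)

# Jensen: E[log A_1] < log E[A_1] = 0, so the empirical mean of log A is < 0,
# and log S_n = sum log A_k drifts to -infinity, i.e. S_n -> 0.
mean_log = np.log(A).mean()
print(mean_log < 0)
print(np.log(A).sum() < 0)
```

This matches the law-of-large-numbers step: $\frac1n \log S_n$ concentrates around the negative number $E[\log A_1]$.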
Proof of Proposition 2.19. Let $T$ be a $d$-regular tree, with $d \geq 3$, endowed with constant conductances $W$ such that $P[\forall i \in T, \psi(i) > 0] = 1$. Note that $(T, W)$ is vertex-transitive, so it is enough to show the proposition for $i_0 = \phi$. The following lemma is a consequence of the symmetries of $(T, W)$, and guarantees that almost surely, the exiting measure gives positive weight to the whole boundary $\Omega$.
Note that $y$ is $\beta_{T_x}$-measurable. Therefore, taking the limit when $n \to \infty$ shows that $\chi_x$ is also $\beta_{T_x}$-measurable. As a result, given a fixed $m \geq 1$, the random variables $(\chi_x)_{x\in D^{(m)}}$ are independent, since $\nu^W_V$ is 1-dependent, and have the same distribution, since $\nu^W_V$ is invariant under the symmetries of $(T, W)$. Let $x \in D^{(m)}$ be fixed; note that $x \neq \phi$. We define the following events: $\dots$ Let us first show that the event $A^{(m)}_x$ is $r^{\beta,\rho_m,\phi}$-measurable. Note that the exiting measure $\mu^{(m)}$ is measurable with respect to the corresponding environment $r^{\beta,\rho_m,\phi}$. Moreover, for $i \neq \phi$, $\beta_i = \check{\beta}_i = \sum_{j\sim i} r^{\beta,\rho_m,\phi}_{i,j}$ is $r^{\beta,\rho_m,\phi}$-measurable. Therefore, we just have to show that $P[A^{(m)}_x] = 0$. This will prove that the distributions of $r^{\beta,\rho_m,\phi}$ under $\nu^W_{T,B_m}(d\beta, d\rho_m)$ and $r^{\beta,\rho_{m'},\phi}$ under $\nu^W_{T,B_{m'}}(d\beta, d\rho_{m'})$ are different. Since $|x| = m > m'$, we have $|\bar{x}| \geq m'$, so there exists $z \in D^{(m')}$ such that $x \in T_z$, i.e. $\Omega_x \subset \Omega_z$. Then for all $\tau \in \Omega_x$, $\int_\Omega \chi(\phi, d\omega)\,\check{g}_{m'}(\omega, \tau) = \sum_{b\in B_{m'}} \chi_{m'}(\phi, b)\,\check{G}_{m'}(b, \delta_z)$. As a result, $\dots$ Let us denote, for $y \in D^{(m)}$, $u_y = \sum_{b\in B_m} \check{G}_m(\delta_y, b)\,\chi_m(\phi, b)$. Then $\frac{\chi(\phi,\Omega_y)}{\chi(\phi,\Omega_x)}\, v_y - v_x$ $\dots$ is a linear form $f_\beta$ conditionally on $\beta$, which almost surely has rank 1 according to Lemma 7.5, so that $\ker(f_\beta)$ is a hyperplane of $\mathbb{R}^{|D^{(m)}|}$. Let us show that conditionally on $\beta$, the distribution of $(u_y)_{y\in D^{(m)}}$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^{|D^{(m)}|}$, and therefore $P[A^{(m)}_x \mid \beta] = 0$. For all $\rho_m$ such that $2\rho_m - \check{C}_m > 0$, $\Phi$ is differentiable, and its differential is $\dots$, which is invertible, with $(d_{\rho_m}\Phi)^{-1}(w) = -\left(\frac{(\check{G}^{-1}_m w)_y}{2u_y}\right)_{y\in D^{(m)}}$. Note that this is well-defined since $u_y > 0$ for all $y \in D^{(m)}$, thanks to Lemma 7.5. As a result, $\Phi$ is a local diffeomorphism. Therefore, the distribution of $u = \Phi(\rho_m)$, conditionally on $\beta$, admits a density with respect to the Lebesgue measure on $\mathbb{R}^{|D^{(m)}|}$. We deduce that almost surely, $P[A^{(m)}_x \mid \beta] = P[u \in \ker(f_\beta) \mid \beta] = 0$, and therefore $P[A^{(m)}_x] = 0$, which concludes the proof.
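The absolute-continuity step at the end is the standard change-of-variables formula; a sketch, under the assumption that $\Phi$ is injective on the domain considered (locally, this is what the local-diffeomorphism property provides):

```latex
If $\Phi : O \subset \mathbb{R}^k \to \mathbb{R}^k$ is a $C^1$ diffeomorphism
onto its image and $\rho$ has a density $f$ on $O$, then $u = \Phi(\rho)$ has
density
\[
  g(u) \;=\; f\bigl(\Phi^{-1}(u)\bigr)\,
  \bigl|\det d_{\Phi^{-1}(u)}\Phi\bigr|^{-1},
  \qquad u \in \Phi(O),
\]
with respect to the Lebesgue measure. In particular, the law of $u$ charges no
Lebesgue-null set, hence no hyperplane, which is exactly what is needed to
conclude that $P[u \in \ker(f_\beta) \mid \beta] = 0$.
```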