Random stable looptrees

We introduce a class of random compact metric spaces L(\alpha) indexed by \alpha \in (1,2), which we call stable looptrees. They are made of a collection of random loops glued together along a tree structure, and can informally be viewed as dual graphs of \alpha-stable Lévy trees. We study their properties and prove in particular that the Hausdorff dimension of L(\alpha) is almost surely equal to \alpha. We also show that stable looptrees are universal scaling limits, for the Gromov-Hausdorff topology, of various combinatorial models. In a companion paper, we prove that the stable looptree of parameter 3/2 is the scaling limit of cluster boundaries in critical site-percolation on large random triangulations.

Figure 1: An α = 1.1 stable tree, and its associated looptree $L_{1.1}$, embedded non-isometrically in the plane (this embedding of $L_{1.1}$ contains intersecting loops, even though they are disjoint in the metric space).

Introduction
In this paper, we introduce and study a new family $(L_\alpha)_{1<\alpha<2}$ of random compact metric spaces which we call stable looptrees (in short, looptrees). Informally, they are constructed from the stable tree of index α introduced in [17,26] by replacing each branch-point of the tree by a cycle of length proportional to the "width" of the branch-point, and then gluing the cycles along the tree structure (see Theorem 2.3 below). We study their fractal properties and compute in particular their Hausdorff dimension. We also prove that looptrees naturally appear as scaling limits, for the Gromov-Hausdorff topology, of various discrete random structures, such as the Boltzmann-type random dissections introduced in [23].
Perhaps more unexpectedly, looptrees appear in the study of random maps decorated with statistical physics models. More precisely, in a companion paper [15], we prove that the stable looptree of parameter 3/2 is the scaling limit of cluster boundaries in critical site-percolation on large random triangulations and on the uniform infinite planar triangulation of Angel & Schramm [2]. We also conjecture a more general statement for O(n) models on random planar maps.
Stable looptrees as limits of discrete looptrees. In order to explain the intuition leading to the definition of stable looptrees, we first introduce them as limits of random discrete graphs (even though they will later be defined without any reference to discrete objects). To this end, with every rooted oriented tree (or plane tree) τ, we associate a graph, denoted by Loop(τ), constructed by replacing each vertex u ∈ τ by a discrete cycle of length given by the degree of u in τ (i.e. the number of neighbors of u) and gluing all these cycles according to the tree structure provided by τ, see Figure 2 (by discrete cycle of length k, we mean a graph on k vertices $v_1, \ldots, v_k$ with edges $v_1v_2, \ldots, v_{k-1}v_k, v_kv_1$). We endow Loop(τ) with the graph distance (every edge has unit length).
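To make the discrete construction concrete, here is a small illustrative sketch (ours, not the paper's). We use the convention that the vertices of Loop(τ) are the edges of τ, labelled by their bottom endpoint, with each vertex u of τ contributing a cycle through its deg(u) incident edges; details of the gluing convention may differ from the paper's Figure 2, but the vertex and edge counts match the description above.

```python
from collections import deque

def loop_of_tree(children):
    """Build Loop(tau) for a plane tree given as {vertex: [children in planar order]}.

    Vertices of Loop(tau) are the edges of tau, labelled by their bottom endpoint;
    a vertex u of tau with incident edges e_1,...,e_k (edge to its parent first,
    then edges to its children) contributes the cycle e_1 e_2 ... e_k e_1.
    """
    adj = {}  # adjacency lists; parallel edges and self-loops are kept
    def add_edge(a, b):
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)

    parent = {}
    for u, ch in children.items():
        for v in ch:
            parent[v] = u
    for u in set(children) | set(parent):
        incident = ([u] if u in parent else []) + children.get(u, [])
        k = len(incident)
        for i in range(k):  # discrete cycle of length k (a self-loop if k = 1)
            add_edge(incident[i], incident[(i + 1) % k])
    return adj

def graph_distance(adj, src):
    """Graph distances in Loop(tau) from src, by breadth-first search."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        a = queue.popleft()
        for b in adj[a]:
            if b not in dist:
                dist[b] = dist[a] + 1
                queue.append(b)
    return dist

# a path tree on 4 vertices: Loop has n-1 = 3 vertices and 2(n-1) = 6 edges
children = {0: [1], 1: [2], 2: [3]}
adj = loop_of_tree(children)
assert sum(len(v) for v in adj.values()) // 2 == 6
assert graph_distance(adj, 1) == {1: 0, 2: 1, 3: 2}
```

Note that each tree edge is shared by the two cycles of its endpoints, so Loop(τ) for a tree with n vertices has n − 1 vertices and 2(n − 1) edges.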
Fix α ∈ (1, 2) and let $\tau_n$ be a Galton-Watson tree conditioned on having n vertices, whose offspring distribution µ is critical and satisfies $\mu([k,\infty)) \sim |\Gamma(1-\alpha)|^{-1} \cdot k^{-\alpha}$ as k → ∞. The stable looptree $L_\alpha$ then appears (Theorem 4.1) as the scaling limit in distribution, for the Gromov-Hausdorff topology, of the discrete looptrees Loop($\tau_n$):
$$ n^{-1/\alpha}\cdot \mathrm{Loop}(\tau_n) \;\xrightarrow[n\to\infty]{(d)}\; L_\alpha, $$
where c · M stands for the metric space obtained from M by multiplying all distances by c > 0. Recall that the Gromov-Hausdorff topology gives a sense to convergence of (isometry classes of) compact metric spaces; see Section 3.2 below for the definition.
It is known that the random trees $\tau_n$ converge, after suitable scaling, towards the so-called stable tree $T_\alpha$ of index α (see [16,17,26]). It thus seems natural to try to define $L_\alpha$ directly from $T_\alpha$ by mimicking the discrete setting (see Figure 1). However, this construction is not straightforward, since the countable collection of loops of $L_\alpha$ does not form a compact metric space: one has to take its closure. In particular, two different cycles of $L_\alpha$ never share a common point. To overcome these difficulties, we define $L_\alpha$ by using the excursion $X^{exc,(\alpha)}$ of an α-stable spectrally positive Lévy process (which also codes $T_\alpha$).
Properties of stable looptrees. Stable looptrees possess a fractal structure whose dimension is identified by the following theorem:

Theorem 1.1 (Dimension). For every α ∈ (1, 2), almost surely, $L_\alpha$ is a compact metric space of Hausdorff dimension α.
The proof of this theorem uses fine properties of the excursion $X^{exc,(\alpha)}$. We also prove that the family of stable looptrees interpolates between the circle of unit length $C_1 := (2\pi)^{-1}\cdot S^1$ and the 2-stable tree $T_2$, which is the Brownian Continuum Random Tree introduced by Aldous [1] (up to a constant multiplicative factor).

Theorem 1.2 (Interpolation). The following two convergences hold in distribution for the Gromov-Hausdorff topology:
(i) $L_\alpha \to C_1$ as α ↓ 1;
(ii) $L_\alpha \to \frac{1}{2}\cdot T_2$ as α ↑ 2.

See Figure 3 for an illustration. The proof of (i) relies on a new "one big-jump principle" for the normalized excursion of the α-stable spectrally positive Lévy process, which is of independent interest: informally, as α ↓ 1, the random process $X^{exc,(\alpha)}$ converges towards the deterministic affine function on [0, 1] which is equal to 1 at time 0 and 0 at time 1. We refer to Theorem 3.6 for a precise statement. Notice also the appearance of the factor 1/2 in (ii).

Figure 3: On the left $L_{1.01}$, on the right $L_{1.9}$.
Looptrees as scaling limits of random dissections. We now turn to the Boltzmann dissections of [23]. Before giving a precise statement, we need to introduce some notation. For n ≥ 3, let $P_n$ be the convex polygon inscribed in the unit disk of the complex plane whose vertices are the n-th roots of unity. By definition, a dissection is the union of the sides of $P_n$ and of a collection of diagonals that may intersect only at their endpoints, see Figure 11. The faces are the connected components of the complement of the dissection in the polygon. Following [23], if $\mu = (\mu_j)_{j\ge0}$ is a probability distribution on {0, 2, 3, 4, ...} of mean 1, we define a Boltzmann-type probability measure $P^\mu_n$ on the set of all dissections of $P_{n+1}$ by setting, for every dissection ω of $P_{n+1}$:
$$ P^{\mu}_n(\omega) \;=\; \frac{1}{Z_n}\prod_{f \text{ face of } \omega} \mu_{\deg(f)-1}, $$
where deg(f) is the degree of the face f, that is, the number of edges in the boundary of f, and $Z_n$ is a normalizing constant. Under mild assumptions on µ, this definition makes sense for every n large enough. Let $D^\mu_n$ be a random dissection sampled according to $P^\mu_n$. In [23], the second author studied the asymptotic behavior of $D^\mu_n$, viewed as a random closed subset of the unit disk, as n → ∞ in the case where µ has a heavy tail. The limiting object (the so-called stable lamination of index α) is a random compact subset of the disk which is the union of infinitely many non-intersecting chords and has faces of infinite degree. Its Hausdorff dimension is a.s. $2 - \alpha^{-1}$.

In this paper, instead of considering $D^\mu_n$ as a random compact subset of the unit disk, we view $D^\mu_n$ as a metric space by endowing its vertices with the graph distance (every edge of $D^\mu_n$ has length one). From this perspective, the scaling limit of the random Boltzmann dissections $D^\mu_n$ is a stable looptree (see Figure 4): after rescaling the graph distance by a factor of order $n^{-1/\alpha}$, $D^\mu_n$ converges in distribution, for the Gromov-Hausdorff topology, towards the stable looptree $L_\alpha$.

Looptrees in random planar maps. Another area where looptrees appear is the theory of random planar maps. The goal of this very active field is to understand large-scale properties of planar maps or graphs chosen uniformly in a certain class (triangulations, quadrangulations, etc.), see [2,11,27,25,30]. In a companion paper [15], we prove that the scaling limit of cluster boundaries of critical site-percolation on large random triangulations and on the UIPT introduced by Angel & Schramm [2] is $L_{3/2}$ (by the boundary of a cluster, we mean the graph formed by the edges and vertices of a connected component which are adjacent to its exterior; see [15] for a precise definition and statement).
We also give a precise conjecture relating the whole family of looptrees $(L_\alpha)_{\alpha\in(1,2)}$ to cluster boundaries of critical O(n) models on random planar maps. We refer to [15] for details.
Looptrees in preferential attachment. As another motivation for introducing looptrees, we mention the subsequent work [13], which studies looptrees associated with random trees built by linear preferential attachment, also known in the literature as Barabási-Albert trees or plane-oriented recursive trees. As the number of nodes grows, it is shown in [13] that these looptrees, appropriately rescaled, converge in the Gromov-Hausdorff sense towards a random compact metric space called the Brownian looptree, which is a quotient space of Aldous' Brownian Continuum Random Tree.
Finally, let us mention that stable looptrees implicitly appear in [27], where Le Gall and Miermont considered scaling limits of random planar maps with large faces. The limiting continuous objects (the so-called α-stable maps) are constructed via a distance process which is closely related to looptrees. Informally, the distance process of Le Gall and Miermont is formed by a looptree $L_\alpha$ whose cycles support independent Brownian bridges of the corresponding lengths. However, the definition and the study of the underlying looptree structure is interesting in itself and has various applications. Even though we do not rely explicitly on the article of Le Gall and Miermont, this work would not have been possible without it.
Outline. The paper is organized as follows. In Section 2, we give a precise definition of $L_\alpha$ using the normalized excursion of the α-stable spectrally positive Lévy process. Section 3 is then devoted to the study of stable looptrees, and in particular to the proofs of Theorems 1.1 and 1.2. In the last section, we establish a general invariance principle concerning discrete looptrees, from which Theorem 1.3 follows.

Defining stable looptrees
This section is devoted to the construction of stable looptrees using the normalized excursion of a stable Lévy process, and to the study of their properties.In this section, α ∈ (1, 2) is a fixed parameter.

The normalized excursion of a stable Lévy process
We follow the presentation of [16] and refer to [5] for the proofs of the results mentioned here. By α-stable Lévy process we will always mean a stable spectrally positive Lévy process X of index α, normalized so that for every λ > 0,
$$ \mathbb E\big[\exp(-\lambda X_t)\big] = \exp(t\lambda^{\alpha}). $$
The process X takes values in the Skorokhod space $D(\mathbb R_+, \mathbb R)$ of right-continuous with left limits (càdlàg) real-valued functions, endowed with the Skorokhod topology (see [8, Chap. 3]). The dependence of X on α will be implicit in this section. Recall that X enjoys the following scaling property: for every c > 0, the process $(c^{-1/\alpha} X_{ct},\ t \ge 0)$ has the same law as X. Also recall that the Lévy measure Π of X is
$$ \Pi(dr) = \frac{\alpha(\alpha-1)}{\Gamma(2-\alpha)}\, r^{-\alpha-1}\,\mathbf 1_{(0,\infty)}(r)\, dr. \qquad (2.1) $$
Following Chaumont [12], we define the normalized excursion of X above its infimum as the renormalized excursion of X above its infimum straddling time 1. More precisely, set
$$ g_1 = \sup\Big\{ s \le 1 : X_s = \inf_{[0,s]} X\Big\}, \qquad d_1 = \inf\Big\{ s > 1 : X_s = \inf_{[0,s]} X\Big\}. $$
Note that $X_{d_1} = X_{g_1}$, since a.s. X has no jump at time $g_1$ and X has no negative jumps. Then the normalized excursion $X^{exc}$ of X above its infimum is defined by
$$ X^{exc}_s = (d_1-g_1)^{-1/\alpha}\,\big(X_{g_1+s(d_1-g_1)}-X_{g_1}\big) \qquad (2.2) $$
for every s ∈ [0, 1]. We shall see later in Section 3.1.2 another useful description of $X^{exc}$ using the Itô excursion measure of X above its infimum. Notice that $X^{exc}$ is a.s. a random càdlàg function on [0, 1] such that $X^{exc}_0 = X^{exc}_1 = 0$ and $X^{exc}_s > 0$ for every s ∈ (0, 1). If Y is a càdlàg function, we set $\Delta Y_t = Y_t - Y_{t-}$, and to simplify notation, for 0 < t ≤ 1, we write $\Delta_t = \Delta X^{exc}_t$, and set $\Delta_0 = 0$ by convention.
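For instance, assuming the normalization $\mathbb E[\exp(-\lambda X_t)] = \exp(t\lambda^\alpha)$ (the standard one in this setting), the scaling property follows from a one-line computation with Laplace transforms:

```latex
\mathbb{E}\left[\exp\!\left(-\lambda\, c^{-1/\alpha} X_{ct}\right)\right]
  = \exp\!\left( ct \,\big(c^{-1/\alpha}\lambda\big)^{\alpha} \right)
  = \exp\!\left( t \lambda^{\alpha} \right),
```

so that $(c^{-1/\alpha}X_{ct},\ t\ge0)$ has the same one-dimensional marginals as X; since both processes have stationary independent increments, they have the same law.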

The stable Lévy tree
We now discuss the construction of the α-stable tree $T_\alpha$, which is closely related to the α-stable looptree. Even though it is possible to define $L_\alpha$ without mentioning $T_\alpha$, this sheds some light on the intuition behind the formal definition of looptrees.

The stable height process
By the work of Le Gall & Le Jan [26] and Duquesne & Le Gall [17,18], it is known that the random excursion $X^{exc}$ encodes a random compact R-tree $T_\alpha$, called the α-stable tree. To define $T_\alpha$, we need to introduce the height process associated with $X^{exc}$. We refer to [17] and [18] for details and proofs of the assertions contained in this section.
The height process $H^{exc}$ associated with $X^{exc}$ is defined by the approximation formula
$$ H^{exc}_t = \lim_{\varepsilon\to 0} \frac{1}{\varepsilon}\int_0^t \mathbf 1_{\{X^{exc}_s < I^t_s+\varepsilon\}}\, ds, \qquad \text{where } I^t_s=\inf_{[s,t]}X^{exc}, $$
where the limit exists in probability. The process $(H^{exc}_t)_{0\le t\le1}$ has a continuous modification, which we consider from now on. Then $H^{exc}$ satisfies $H^{exc}_0 = H^{exc}_1 = 0$ and $H^{exc}_t > 0$ for t ∈ (0, 1). It is standard to define the R-tree coded by $H^{exc}$ as follows. For every $h : [0, 1] \to \mathbb R_+$ and 0 ≤ s, t ≤ 1, we set
$$ d_h(s,t) = h(s)+h(t)-2\min_{r\in[s\wedge t,\, s\vee t]}h(r), $$
and write s ∼ t if $d_h(s,t)=0$.
The random stable tree $T_\alpha$ is then defined as the quotient metric space $([0,1]/\!\sim,\ d_{H^{exc}})$, which is indeed a random compact R-tree [18, Theorem 2.1]. Let $\pi : [0, 1] \to T_\alpha$ be the canonical projection. The tree $T_\alpha$ has a distinguished point $\rho = \pi(0)$, called the root or the ancestor of the tree. If $u, v \in T_\alpha$, we denote by $[[u, v]]$ the unique geodesic between u and v. This allows us to define a genealogical order on $T_\alpha$: for every $u, v \in T_\alpha$, we say that u is an ancestor of v, and write $u \preceq v$, if $u \in [[\rho, v]]$.
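The pseudo-distance coding an R-tree by a function can be checked concretely. The snippet below (our own illustration, on a discrete "height" array) implements the standard formula $d_h(s,t) = h(s)+h(t)-2\min_{[s\wedge t, s\vee t]} h$ and verifies exhaustively on a small sample that it is a pseudo-metric, in particular that the triangle inequality holds.

```python
import itertools
import random

def tree_dist(h, s, t):
    """Pseudo-distance d_h(s, t) = h(s) + h(t) - 2 * min of h on [s^t, s v t]."""
    lo, hi = min(s, t), max(s, t)
    return h[s] + h[t] - 2 * min(h[lo:hi + 1])

random.seed(0)
h = [0] + [random.randint(0, 5) for _ in range(20)] + [0]  # toy height function
n = len(h)
for s in range(n):
    assert tree_dist(h, s, s) == 0            # vanishes on the diagonal
for s, t, r in itertools.product(range(n), repeat=3):
    # triangle inequality: points r on/off the geodesic between s and t
    assert tree_dist(h, s, t) <= tree_dist(h, s, r) + tree_dist(h, r, t)
```

Quotienting the index set by {d_h = 0} then yields the coded tree, exactly as in the continuous setting.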

Genealogy of T α and X exc
The genealogical order of $T_\alpha$ can easily be recovered from $X^{exc}$ as follows. We define a partial order on [0, 1], still denoted by $\preceq$, which is compatible with the projection $\pi : [0, 1] \to T_\alpha$, by setting, for every s, t ∈ [0, 1],
$$ s \preceq t \quad\text{if}\quad s\le t \ \text{ and }\ X^{exc}_{s-} \le \inf_{[s,t]} X^{exc}, $$
where by convention $X^{exc}_{0-} = 0$. It is a simple matter to check that $\preceq$ is indeed a partial order which is compatible with the genealogical order on $T_\alpha$, meaning that a point $a \in T_\alpha$ is an ancestor of b if and only if there exist $s \preceq t$ in [0, 1] with $a = \pi(s)$ and $b = \pi(t)$. For every s, t ∈ [0, 1], let s ∧ t be the most recent common ancestor (for the relation $\preceq$ on [0, 1]) of s and t. Then $\pi(s \wedge t)$ is also the most recent common ancestor of $\pi(s)$ and $\pi(t)$ in the tree $T_\alpha$.
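The discrete analogue of this partial order is easy to test. For a plane tree, the Łukasiewicz path W is defined by $W_0 = 0$ and $W_{i+1} = W_i + c(u_i) - 1$, where $c(u_i)$ is the number of children of the i-th vertex in depth-first order; the discrete counterpart of the condition above reads: $u_i$ is an ancestor of $u_j$ iff $i \le j$ and $\min_{i<k\le j} W_k \ge W_i$. The sketch below (our own) checks this against a direct parent-chain computation on a small tree.

```python
def lukasiewicz(child_counts):
    """Lukasiewicz path of a plane tree given by children counts in DFS order."""
    W = [0]
    for c in child_counts:
        W.append(W[-1] + c - 1)
    return W

def is_ancestor_path(W, i, j):
    """Discrete analogue of: s <= t and X_{s-} <= inf of X over [s, t]."""
    return i <= j and all(W[k] >= W[i] for k in range(i + 1, j + 1))

def parents(child_counts):
    """Parent of each vertex (DFS indices), via a stack of open vertices."""
    par, stack = {}, []
    for i, c in enumerate(child_counts):
        if stack:
            par[i] = stack[-1][0]
            stack[-1][1] -= 1        # one fewer child still to attach
            if stack[-1][1] == 0:
                stack.pop()
        if c > 0:
            stack.append([i, c])
    return par

def is_ancestor_tree(par, i, j):
    while j != i and j in par:
        j = par[j]
    return j == i

counts = [2, 2, 0, 0, 1, 0]          # preorder children counts of a small plane tree
W = lukasiewicz(counts)
par = parents(counts)
for i in range(len(counts)):
    for j in range(len(counts)):
        assert is_ancestor_path(W, i, j) == (i <= j and is_ancestor_tree(par, i, j))
```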
We now recall several well-known properties of $T_\alpha$. By definition, the multiplicity (or degree) of a vertex $u \in T_\alpha$ is the number of connected components of $T_\alpha \setminus \{u\}$. Vertices of $T_\alpha \setminus \{\rho\}$ which have multiplicity 1 are called leaves, and those with multiplicity at least 3 are called branch-points. By [18, Theorem 4.6], the multiplicity of every vertex of $T_\alpha$ belongs to {1, 2, ∞}. In addition, the branch-points of $T_\alpha$ are in one-to-one correspondence with the jumps of $X^{exc}$ [29, Proposition 2]. More precisely, a vertex $u \in T_\alpha$ is a branch-point if and only if there exists a unique s ∈ [0, 1] such that $u = \pi(s)$ and $\Delta X^{exc}_s = \Delta_s > 0$. In this case, $\Delta_s$ intuitively corresponds to the "number of children" (although this does not formally make sense) or width of $\pi(s)$.
We finally introduce a last piece of notation, which will be crucial in the definition of stable looptrees in the next section. If s, t ∈ [0, 1] and $s \preceq t$, set
$$ x^t_s = \inf_{[s,t]}X^{exc} - X^{exc}_{s-}, $$
which belongs to $[0, \Delta_s]$. Roughly speaking, $x^t_s$ is the "position" of the ancestor of $\pi(t)$ among the $\Delta_s$ "children" of $\pi(s)$.

Definition of stable looptrees
Informally, the stable looptree $L_\alpha$ is obtained from the tree $T_\alpha$ by replacing every branch-point of width x by a metric cycle of length x, and then gluing all these cycles along the tree structure of $T_\alpha$ (in a very similar way to the construction of discrete looptrees from discrete trees explained in the Introduction, see Figures 1 and 2). But making this construction rigorous is not so easy, because there are countably many loops (none of them being adjacent).
Recall that the dependence on α is implicit through the process $X^{exc}$. For every t ∈ [0, 1], we equip the segment $[0, \Delta_t]$ with the pseudo-distance $\delta_t$ defined by
$$ \delta_t(a,b) = \min\big(|a-b|,\ \Delta_t-|a-b|\big), \qquad a,b\in[0,\Delta_t]. $$
Note that if $\Delta_t > 0$, then $([0, \Delta_t], \delta_t)$ is isometric to a metric cycle of length $\Delta_t$ (this cycle will be associated with the branch-point $\pi(t)$ in the looptree $L_\alpha$, as promised in the previous paragraph).
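In code, this cycle pseudo-distance is one line, and its defining properties are easy to check (an illustrative sketch of ours):

```python
def delta(Delta, a, b):
    """Distance on a cycle of perimeter Delta between positions a, b in [0, Delta]."""
    return min(abs(a - b), Delta - abs(a - b))

Delta = 5.0
assert delta(Delta, 0.0, Delta) == 0.0            # endpoints of [0, Delta] are glued
assert delta(Delta, 0.0, Delta / 2) == Delta / 2  # antipodal points are at distance Delta/2
# triangle inequality, checked on a grid of positions
pts = [i * Delta / 20 for i in range(21)]
for a in pts:
    for b in pts:
        for c in pts:
            assert delta(Delta, a, c) <= delta(Delta, a, b) + delta(Delta, b, c) + 1e-12
```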
For s, t ∈ [0, 1], we write $s \prec t$ if $s \preceq t$ and $s \ne t$. It is important to keep in mind that $\prec$ does not correspond to the strict genealogical order in $T_\alpha$, since there exist $s \prec t$ with $\pi(s) = \pi(t)$. The stable looptree $L_\alpha$ will be defined as the quotient of [0, 1] by a certain pseudo-distance d involving $X^{exc}$, which we now define. First, if $s \preceq t$, set
$$ d_0(s,t) = \sum_{s\prec r\preceq t}\delta_r(0, x^t_r). \qquad (2.4) $$
In this sum, only jump times give a positive contribution, since $\delta_r(0, x^t_r) = 0$ when $\Delta_r = 0$. Note that even if t is a jump time, its contribution to (2.4) is null, since $\delta_t(0, x^t_t) = 0$, and we could as well have summed over $s \prec r \prec t$. Deliberately, we do not allow r = s in (2.4). Also, it may happen that there is no r ∈ (s, t] such that both $s \prec r$ and $r \preceq t$ (e.g. when s = t), in which case the sum (2.4) is equal to zero. Heuristically, if $s \prec r \preceq t$, the term $\delta_r(0, x^t_r)$ represents the length of the portion of the path going from (the images in the looptree of) s to t belonging to the loop coded by the branch-point r (see Figure 5). Then, for every s, t ∈ [0, 1], set
$$ d(s,t) = \delta_{s\wedge t}\big(x^s_{s\wedge t},\, x^t_{s\wedge t}\big) + d_0(s\wedge t,\, s) + d_0(s\wedge t,\, t). \qquad (2.5) $$

Figure 5: Illustration of the definition of d. The geodesic between the images of s and t in the looptree is in bold. Here, $s \wedge t \prec r \prec t$. This is a simplified picture, since in stable looptrees no loops are adjacent.
Let us give an intuitive meaning to this definition. The distance d(s, t) contains contributions from the loops corresponding to branch-points belonging to the geodesic $[[\pi(s), \pi(t)]]$ in the tree: the third (respectively second) term on the right-hand side of (2.5) measures the contributions from branch-points belonging to the interior of $[[\pi(s\wedge t), \pi(t)]]$ (respectively $[[\pi(s\wedge t), \pi(s)]]$), while the term $\delta_{s\wedge t}(x^s_{s\wedge t}, x^t_{s\wedge t})$ represents the length of the portion of the path going from (the images in the looptree of) s to t belonging to the (possibly degenerate) loop coded by $\pi(s \wedge t)$ (this term is equal to 0 if $\pi(s \wedge t)$ is not a branch-point), see Figure 5.
In particular, if $s \preceq t$, note that
$$ d(s,t) = \delta_s(0, x^t_s) + d_0(s,t). \qquad (2.6) $$
Proof. The first assertion is obvious from the definition of d, where for the last inequality we have used the fact that $I^t_s \le X^{exc}_{r_0-}$ since $s < r_0 < t$. Since $d_0(s, t) = \sum_{s\prec r\preceq t} \delta_r(0, x^t_r)$, this gives (2.7). Let us return to the proof of (ii). Let s < t. If $s \prec t$, then by (2.6), treating the jump at s separately, we can use (2.7). Then by (2.5) and (2.7) we conclude. This completes the proof.
Proposition 2.2. Almost surely, the function d is a continuous pseudo-distance on [0, 1].

Proof. By the definition of d and Theorem 2.1, for every s, t ∈ [0, 1] we have $d(s, t) \le 2 \sup X^{exc} < \infty$. The fact that d satisfies the triangle inequality is a straightforward but cumbersome consequence of its definition (2.5). We leave the details to the reader.
Let us now show that the function d is almost surely continuous. By symmetry, it is sufficient to show that $d(s, s_n) \to 0$ as n → ∞ whenever $s_n \to s$. Suppose for a moment that $s_n \uparrow s$ with $s_n < s$; then Theorem 2.1 (ii) gives the desired convergence. The other case, when $s_n \downarrow s$ with $s_n > s$, is treated similarly. This proves the proposition.
We are finally ready to define the looptree coded by X exc .
The random stable looptree of index α is defined as the quotient metric space
$$ L_\alpha = \big([0,1]/\{d=0\},\ d\big). $$
We will denote by $p : [0, 1] \to L_\alpha$ the canonical projection. Since d is a.s. continuous by Proposition 2.2, it immediately follows that p is a.s. continuous. The metric space $L_\alpha$ is thus a.s. compact, as the image of a compact metric space under an a.s. continuous map.
With this definition, it is maybe not clear why $L_\alpha$ contains loops. For the sake of clarity, let us give an explicit description of these. Fix s ∈ [0, 1] with $\Delta_s > 0$, and for $u \in [0, \Delta_s]$ set $s_u = \inf\{t \ge s : X^{exc}_t = X^{exc}_s - u\}$. It is easy to check that the image of $\{s_u\}_{u\in[0,\Delta_s]}$ under p in $L_\alpha$ is isometric to a circle of length $\Delta_s$, which corresponds to the loop attached to the branch-point $\pi(s)$ in the tree $T_\alpha$.
To conclude this section, let us mention that it is possible to construct $L_\alpha$ directly from the stable tree $T_\alpha$ in a measurable fashion. For instance, if $u = \pi(s)$, one can recover the jump $\Delta_s$ from the asymptotic behavior of the Mass-measure of small neighborhoods of u (see [29, Eq. (1)]), where Mass is the push-forward of the Lebesgue measure on [0, 1] by the projection $\pi : [0, 1] \to T_\alpha$. However, we believe that our definition of $L_\alpha$ using Lévy processes is simpler and more amenable to computations (recall also that the stable tree is itself defined through the height process $H^{exc}$ associated with $X^{exc}$).

Properties of stable looptrees
The goal of this section is to prove Theorems 1.1 and 1.2. Before doing so, we introduce some more background on spectrally positive stable Lévy processes; this will be our toolbox for studying fine properties of looptrees. The interested reader should consult [4,5,12] for additional details.
Let us stress that, to our knowledge, the limiting behavior of the normalized excursion of α-stable spectrally positive Lévy processes as α ↓ 1 (Theorem 3.6) seems to be new.

Excursions above the infimum
In Section 2.1, the normalized excursion $X^{exc}$ was introduced as the normalized excursion of X above its infimum straddling time 1. Let us present another definition of $X^{exc}$ using Itô's excursion theory (we refer to [5, Chapter IV] for details). If X is an α-stable spectrally positive Lévy process, denote by $\underline X_t = \inf\{X_s : 0 \le s \le t\}$ its running infimum process. Note that $\underline X$ is continuous, since X has no negative jumps. The process $X - \underline X$ is strong Markov, and 0 is regular for itself, allowing the use of excursion theory. We may and will choose $-\underline X$ as the local time of $X - \underline X$ at level 0. Let $(g_j, d_j)$, j ∈ I, be the excursion intervals of $X - \underline X$ away from 0. For every j ∈ I and s ≥ 0, set $\omega^j_s = X_{(g_j+s)\wedge d_j} - X_{g_j}$. We view $\omega^j$ as an element of the excursion space E, defined by
$$ E = \Big\{\omega \in D(\mathbb R_+,\mathbb R_+) : \omega(0)=0 \ \text{ and }\ \zeta(\omega):=\sup\{s>0:\omega(s)>0\} \in (0,\infty)\Big\}. $$
If ω ∈ E, we call ζ(ω) the lifetime of the excursion ω. From Itô's excursion theory, the point measure
$$ \mathcal N = \sum_{j\in I}\delta_{(-\underline X_{g_j},\ \omega^j)} $$
is a Poisson measure with intensity $dt\, n(d\omega)$, where $n(d\omega)$ is a σ-finite measure on the set E called the Itô excursion measure. This measure admits the following scaling property. For every λ > 0, define $S^{(\lambda)} : E \to E$ by $S^{(\lambda)}(\omega) = \big(\lambda^{1/\alpha}\, \omega(s/\lambda),\ s \ge 0\big)$. Then (see [12] or [5, Chapter VIII.4] for details) there exists a unique collection of probability measures $(n^{(a)},\ a > 0)$ on the set of excursions such that the following properties hold:
(i) For every a > 0, $n^{(a)}(\zeta = a) = 1$.
(ii) For every λ > 0 and a > 0, we have $S^{(\lambda)}(n^{(a)}) = n^{(\lambda a)}$.
(iii) For every measurable subset A of the set of all excursions,
$$ n(A) = \int_0^\infty n^{(a)}(A)\ n(\zeta \in da). $$
In addition, the probability distribution $n^{(1)}$, which is supported on the càdlàg paths with unit lifetime, coincides with the law of $X^{exc}$ as defined in Section 2.1, and is also denoted by $n(\,\cdot\,|\zeta = 1)$. Thus, informally, $n(\,\cdot\,|\zeta = 1)$ is the law of an excursion under the Itô measure conditioned to have unit lifetime.
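The decomposition of a path into excursions above its running infimum has a transparent discrete counterpart, which may help intuition. The sketch below (our illustration, not the paper's) cuts a skip-free-below walk, with down-steps of size one only, mirroring the absence of negative jumps, at its running minimum; the role of the local time $-\underline X$ is played by minus the running minimum.

```python
def excursions_above_min(S):
    """Split a walk S (S[0] = 0, down-steps of size 1 only) into its
    excursions above the running minimum, listed as heights above that minimum."""
    excs, cur, runmin = [], [], 0
    for v in S:
        runmin = min(runmin, v)
        if v > runmin:
            cur.append(v - runmin)   # strictly above the infimum: inside an excursion
        else:
            if cur:                  # back at the infimum: the excursion is over
                excs.append(cur)
            cur = []
    if cur:
        excs.append(cur)
    return excs

# walk 0,1,0,-1,1,0,-1,-2: one excursion above level 0, then one above level -1
assert excursions_above_min([0, 1, 0, -1, 1, 0, -1, -2]) == [[1], [2, 1]]
```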

Absolute continuity relation for X exc
We will use a path transformation due to Chaumont [12], relating the bridge of a stable Lévy process to its normalized excursion, which generalizes the Vervaat transformation of the Brownian case. If U is a uniform random variable on [0, 1] independent of $X^{exc}$, then the process $X^{br}$ defined by
$$ X^{br}_t = \begin{cases} X^{exc}_{U+t} - X^{exc}_U, & 0\le t\le 1-U,\\[2pt] X^{exc}_{U+t-1} - X^{exc}_U, & 1-U< t\le 1,\end{cases} $$
is distributed according to the bridge of the stable process X, which can informally be seen as the process $(X_t;\ 0 \le t \le 1)$ conditioned to be at level zero at time one. See [5, Chapter VIII] for definitions. In the other direction, to recover $X^{exc}$ from $X^{br}$, we just re-root $X^{br}$ by performing a cyclic shift at the (a.s. unique) time $u(X^{br})$ at which it attains its minimum.
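In the discrete setting, the Vervaat transformation is easy to verify. The following sketch (ours) re-roots a ±1 walk bridge at its first minimum and checks that the result is a nonnegative path with the same multiset of increments, a discrete instance of the cycle lemma.

```python
import random

def vervaat(bridge):
    """Cyclic shift of a discrete bridge (W[0] = W[n] = 0) at its first minimum."""
    n = len(bridge) - 1
    incs = [bridge[i + 1] - bridge[i] for i in range(n)]
    m = bridge.index(min(bridge))          # first time the minimum is attained
    shifted = incs[m:] + incs[:m]          # rotate the increments at that time
    exc = [0]
    for x in shifted:
        exc.append(exc[-1] + x)
    return exc

random.seed(1)
incs = [1] * 10 + [-1] * 10
random.shuffle(incs)
bridge = [0]
for x in incs:
    bridge.append(bridge[-1] + x)          # a +-1 walk bridge from 0 to 0
exc = vervaat(bridge)
assert exc[0] == 0 and exc[-1] == 0
assert min(exc) == 0 and all(v >= 0 for v in exc)   # re-rooted path is nonnegative
assert sorted(incs) == sorted(exc[i + 1] - exc[i] for i in range(len(incs)))
```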
We finally state an absolute continuity property relating $X^{br}$ to the unconditioned process X. Fix a ∈ (0, 1) and let $F : D([0, a], \mathbb R) \to \mathbb R$ be a bounded continuous function. We have (see [5, Chapter VIII.3, Formula (8)]):
$$ \mathbb E\big[F(X^{br}_t,\ 0\le t\le a)\big] = \mathbb E\left[F(X_t,\ 0\le t\le a)\,\frac{p_{1-a}(-X_a)}{p_1(0)}\right], $$
where $p_t$ is the density of $X_t$. Note that by time reversal, the law of $(X^{br}_{(1-t)-},\ 0 \le t \le a)$ satisfies a similar property. The previous two results will be used to reduce the proof of a statement concerning $X^{exc}$ to a similar statement involving X (which is usually easier to obtain). More precisely, a property concerning X will first be transferred to $X^{br}$ by absolute continuity, and then to $X^{exc}$ by using the Vervaat transformation.

Descents
Let $Y : \mathbb R \to \mathbb R$ be a càdlàg function. We will describe the law of the descents (from a typical point) of an α-stable Lévy process by using excursion theory. To this end, denote by $\overline X_t = \sup\{X_s : 0 \le s \le t\}$ the running supremum process of X. The process $\overline X - X$ is strong Markov, and 0 is regular for itself. Let $(L_t,\ t \ge 0)$ denote a local time of $\overline X - X$ at level 0, normalized in such a way that $\mathbb E[\exp(-\lambda \overline X_{L^{-1}(t)})] = \exp(-t\lambda^{\alpha-1})$. Note that by [5, Chapter VIII, Lemma 1], $L^{-1}$ is a stable subordinator of index $1 - 1/\alpha$. Finally, to simplify notation, set $x_s = \overline X_s - X_{s-}$ and $u_s = (\overline X_s - X_{s-})/(X_s - X_{s-})$ for every s ≥ 0 such that $X_s > \overline X_{s-}$. In order to describe the law of descents from a fixed point of an α-stable process, we need to introduce the two-sided stable process. If $X^1$ and $X^2$ are two independent stable processes on $\mathbb R_+$, set $X_t = X^1_t$ for t ≥ 0 and $X_t = -X^2_{(-t)-}$ for t < 0.
(i) Let $(X_t : t \in \mathbb R)$ be a two-sided spectrally positive α-stable process. Then the collection has the same distribution as (ii) the point measure.

Proof. The first assertion follows from the fact that the dual process $\hat X$, defined by $\hat X_s = -X_{(-s)-}$ for s ≥ 0, has the same distribution as X, and from the description of the times s ≥ 0 such that $-s \preceq 0$, or equivalently $\hat X_s > \overline{\hat X}_{s-}$. For (ii), denote by $(g_j, d_j)_{j\in J}$ the excursion intervals of $\overline X - X$ above 0.

We now state a technical but useful consequence of the previous proposition, which will be required in the proof of the lower bound of the Hausdorff dimension of stable looptrees.
The conclusion follows from the corresponding tail estimate for $L^{-1}$. We conclude this section with a lemma which will be useful in the proof of Theorem 4.1. See also [27, Proof of Proposition 7] for a similar statement.

Lemma 3.3. Almost surely, for every t ≥ 0 we have (3.3).

Proof. The left-hand side of the equality appearing in the statement of the lemma is clearly a càdlàg function. It is also simple, but tedious, to check that the right-hand side is a càdlàg function as well. It thus suffices to prove that (3.3) holds almost surely for every fixed t ≥ 0.
Set $\hat X_s = X_{(t-s)-} - X_{t-}$ for 0 ≤ s ≤ t, and to simplify notation set $S_u = \sup_{[0,u]} \hat X$. In particular, $(X_s,\ 0 \le s \le t)$ and $(\hat X_s,\ 0 \le s \le t)$ have the same distribution, and (3.4) follows. Then notice that the ladder height process $(S_{L^{-1}_t},\ t \ge 0)$ is a subordinator without drift [5, Chapter VIII, Lemma 1], hence a pure jump process. This implies that $S_t$ is the sum of its jumps, i.e. a.s. $S_t = \sum_{0\le s\le t} \Delta S_s$. This completes the proof of the lemma.
The following result is the analogous statement for the normalized excursion.

Corollary 3.4. Almost surely, for every t ∈ [0, 1] we have the corresponding identity.

Proof. This follows from the previous lemma and the construction of $X^{exc}$ as the normalized excursion of X above its infimum straddling time 1 (Section 2.1). We leave the details to the reader.

In particular, Corollary 3.4 implies that almost surely, for every 0 ≤ t ≤ 1, (3.5) holds. By (2.6), a similar equality, which will be useful later, holds almost surely for every $s \preceq t$. In this section we study the behavior of $X^{exc}$ as α → 1 or α → 2. In order to stress the dependence on α, we add a superscript (α): thus $X^{(\alpha)}$, $X^{br,(\alpha)}$, $X^{exc,(\alpha)}$ will respectively denote the α-stable spectrally positive process, its bridge and its normalized excursion, and $\Pi^{(\alpha)}$, $n^{(\alpha)}$ will respectively denote the Lévy measure and the excursion measure above the infimum of $X^{(\alpha)}$.
Limiting case α ↑ 2. We prove that $X^{exc,(\alpha)}$ converges, as α ↑ 2, towards a multiple of the normalized Brownian excursion, denoted by e (see Figure 6 for an illustration). This is standard and should not be surprising, since the α = 2 stable Lévy process is just √2 times Brownian motion.
Proposition 3.5. The following convergence holds in distribution for the topology of uniform convergence on every compact subset of $\mathbb R_+$:
$$ X^{exc,(\alpha)} \;\xrightarrow[\alpha\uparrow 2]{(d)}\; \sqrt 2\, e. \qquad (3.7) $$
Proof. We first establish an unconditioned version of this convergence. Specifically, if B is a standard Brownian motion, we show that
$$ X^{(\alpha)} \;\xrightarrow[\alpha\uparrow 2]{(d)}\; \sqrt 2\, B, $$
where the convergence holds in distribution for the uniform topology on D([0, 1], R).
Since B is almost surely continuous, by [33, Theorems V.19, V.23] it is sufficient to check that the following three conditions hold as α ↑ 2: (a) the convergence of the one-dimensional marginals; (c) for every δ > 0, there exist η, ε > 0 such that for 0 ≤ s ≤ t ≤ 1 the corresponding moment bound holds. It is clear that Condition (a) holds. The scaling property of $X^{(\alpha)}$ reduces this to the behavior of $X^{(\alpha)}_1$. On the other hand, for every u ∈ R, we have the corresponding estimate.
The convergence (3.7) is then a consequence of the construction of $X^{exc,(\alpha)}$ from the excursion of $X^{(\alpha)}$ above its infimum straddling time 1. Similarly, define $g^{(2)}_1$ and $d^{(2)}_1$ from B. Since time 1 is a.s. not a local minimum of B (this follows from the Markov property applied at the stopping time $d^{(2)}_1$), we get that $d^{(\alpha)}_1 \to d^{(2)}_1$ in distribution as α ↑ 2. The desired convergence (3.7) then follows from (2.2).
Limiting case α ↓ 1. The limiting behavior of the normalized excursion $X^{exc,(\alpha)}$ as α ↓ 1 is very different from the case α ↑ 2. Informally, we will see that in this case $X^{exc,(\alpha)}$ converges towards the deterministic affine function on [0, 1] which is equal to 1 at time 0 and 0 at time 1. Some care is needed in the formulation of this statement, since the function $x \mapsto \mathbf 1_{\{0<x\le1\}}(1 - x)$ is not càdlàg. To cope with this technical issue, we reverse time:

Theorem 3.6. The following convergence holds in distribution in D([0, 1], R):
$$ \big(X^{exc,(\alpha)}_{(1-t)-}\big)_{0\le t\le 1} \;\xrightarrow[\alpha\downarrow 1]{(d)}\; \big(t\,\mathbf 1_{\{0\le t<1\}}\big)_{0\le t\le 1}. $$

Remark 3.7. Let us mention here that the case α ↓ 1 is not (directly) related to Neveu's branching process [32], which is often considered as the limit of stable branching processes as α → 1. Indeed, contrary to the latter, the limit of $X^{exc,(\alpha)}$ as α ↓ 1 is deterministic. The reason is that Neveu's branching process has Lévy measure $r^{-2}\,\mathbf 1_{(0,\infty)}\,dr$, but, recalling our normalization (2.1), in the limit α ↓ 1 the Lévy measure $\Pi^{(\alpha)}$ does not converge to $r^{-2}\,\mathbf 1_{(0,\infty)}\,dr$.
Theorem 3.6 is thus a new "one big-jump principle" (see Figure 6 for an illustration); this is a well-known phenomenon in the context of subexponential distributions (see [20] and references therein). See also [3,19] for similar one-big-jump principles. The strategy to prove Theorem 3.6 is first to establish the convergence of $X^{exc,(\alpha)}$ on every fixed interval of the form [ε, 1] with ε ∈ (0, 1), and then to study the behavior near 0.
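The one-big-jump phenomenon for heavy tails is easy to observe numerically: conditionally on an atypically large sum of i.i.d. heavy-tailed variables, a single term carries most of the total. The experiment below (our illustration, with arbitrary parameters, not from the paper) uses Pareto variables and rejection sampling.

```python
import random

random.seed(0)
alpha, n, trials = 1.5, 50, 20000
threshold = 1500.0          # far above the typical sum n * alpha/(alpha-1) = 150
ratios = []
for _ in range(trials):
    xs = [random.paretovariate(alpha) for _ in range(n)]  # P(X > x) = x^(-alpha)
    s = sum(xs)
    if s > threshold:       # condition on an atypically large sum ...
        ratios.append(max(xs) / s)   # ... and record the share of the biggest term

assert ratios, "rejection sampling found no large-sum trial"
assert sum(ratios) / len(ratios) > 0.7   # one big jump dominates the conditioned sum
```

With light tails (e.g. exponential), the same experiment would show the excess spread over many terms instead, which is precisely what the shuffling argument below rules out here.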
Proof of Lemma 3.8. Following the spirit of the proof of Proposition 3.5, we first establish an analogous statement for the unconditioned process $X^{(\alpha)}$, by proving that (3.9) holds, where the convergence is in distribution for the uniform topology on every compact subset of $\mathbb R_+$; to establish (3.9), we again rely on [33]. To prove the lemma, we show a convergence for every ε > 0; by the scaling property of the measure $n^{(\alpha)}$ (see property (iii) in Section 3.1.1), it is sufficient to show (3.10). For t ≥ 0, denote by $q^{(\alpha)}_t(dx)$ the entrance measure at time t under $n^{(\alpha)}$, defined by the corresponding relation for every measurable function $f : \mathbb R_+ \to \mathbb R_+$. Then, using the fact that, for every t > 0, under the conditional probability measure $n^{(\alpha)}(\,\cdot\,|\zeta > t)$, the process $(\omega_{t+s})_{s\ge0}$ is Markovian with entrance law $q^{(\alpha)}_t(dx)$ and the transition kernels of $X^{(\alpha)}$ stopped upon hitting 0, we get (3.11), where $P^{(\alpha)}_x$ denotes the distribution of a standard α-stable process $X^{(\alpha)}$ started from x and stopped at the first time τ at which it hits 0. From (3.9) it follows that, for every δ ∈ (0, ε), the convergence (3.12) holds. On the other hand, provided that 2δ < ε (notice that 1 − ε + 2δ > ε), we can bound the quantity of interest in terms of a probability $g(x, \alpha)$ defined under $P^{(\alpha)}_x$. Convergence (3.9) then entails the limiting behavior of $g(x, \alpha)$; finally, since $g(x, \alpha)$ is bounded by 1, we get by dominated convergence and the last display the lower bound (3.13). Combining (3.12) and (3.13) with (3.11), we deduce (3.14). By property (iii) in Section 3.1.1, the right-hand side of (3.14) tends to 1 as δ → 0. This completes the proof.
We have seen in Theorem 3.8 that X^{exc,(α)} converges to the deterministic function x ↦ 1 − x over the interval [ε, 1], for every ε > 0. Still, this does not imply Theorem 3.6 because, as α ↓ 1, the difference of magnitude roughly 1 between times 0 and ε could be caused by the accumulation of many small jumps of total sum of order 1, and not by a single big jump of order 1. We shall show that this is not the case by using the Lévy bridge X^{br,(α)} and a shuffling argument.
Applying the Vervaat transformation to X^{br,(α)}, we deduce from Theorem 3.8 that for every ε > 0 we have P(J(X^{br,(α)}, ε)) → 1 as α ↓ 1. (3.15) We then rely on the following result: Lemma 3.9. For every α ∈ (1, 2), let (B^{(α)}) be processes such that the following two conditions hold: (i) for every ε > 0, we have P(J(B^{(α)}, ε)) → 1 as α ↓ 1; (ii) for every α ∈ (1, 2) and every n ≥ 1, the increments satisfy the stated shuffling invariance, where the convergence holds in distribution for the Skorokhod topology on D([0, 1], R) and where U is an independent uniform variable over [0, 1]. If we assume this lemma for the moment, the proof of Theorem 3.6 is completed as follows. The Lévy bridges X^{br,(α)} satisfy the assumptions of Lemma 3.9. Indeed, (i) is satisfied thanks to (3.15) and (ii) follows from absolute continuity. Lemma 3.9 entails that X^{br,(α)} → (1_{U≤t} − t; 0 ≤ t ≤ 1), where the convergence holds in distribution for the Skorokhod topology as α ↓ 1. It then suffices to apply the Vervaat transform to the latter convergence to get the desired result.
It remains to establish Lemma 3.9.
First step: at most one large jump. We first show that for every δ > 0, the probability that B^{(α)} has two jumps larger than δ tends to 0 as α ↓ 1. To this end, we argue by contradiction and assume that there exists η > 0 such that, along a subsequence α_k ↓ 1, with probability at least η the bridge B^{(α_k)} has two jump times of size larger than δ. But, conditionally on this event, with probability tending to one as k → ∞, these two jumps will fall in different time intervals of the form [i/n_k, (i + 1)/n_k] in the shuffled process B^{(α_k),n_k}. Hence, we deduce that with probability asymptotically larger than η/100 (this value is not optimal), there exist two such jump times. If one chooses ε ∈ (0, δ ∧ 1/4), this contradicts the fact that P(J(B^{(α_k),n_k}, ε)) → 1 as k → ∞.
Second step: one jump of size roughly 1. We only sketch the argument and leave the details to the reader. Denote by T_α the time when B^{(α)} achieves its largest jump. Let α_k ↓ 1 be a sequence, and then let n_k → ∞ be a sequence of integers such that the following three convergences hold in probability as k → ∞. Indeed, this is possible since, by the first step, we know that all the jumps of B^{(α_k)}, its largest jump excluded, converge in probability to 0 as k → ∞. Denote by B^{(α_k),n_k} the function on [0, 1] obtained by doing a random shuffle of B^{(α_k)} of length 1/n_k after discarding the time interval that contains T_{α_k}, and then scaling time by a factor n_k/(n_k − 1) so that B^{(α_k),n_k} is defined on [0, 1]. The proof is completed if we manage to check that B^{(α_k),n_k} converges in probability towards the function t ↦ −t and δ_k → 1 in probability.
To do so, let us introduce the empirical variance Σ_k of the small increments. We shall first establish that Σ_k → 0 in probability as k → ∞. To this end, suppose by contradiction that Σ_k does not converge to 0 in probability as k → ∞. Then, up to extraction, there exists a fixed c > 0 such that P(Σ_k ≥ c) ≥ c for every k large enough. Then consider the family of n_k − 1 increments.
Observe that we have (3.17). For the second and third convergences, we use (ii) and (iii).
Then the proofs of [7, Theorems 24.1 and 24.2] give that the random function converges in probability towards the constant function equal to 0 on [0, 1], denoted by 0. As before, using (iii), we deduce the convergence to 1 in probability. It follows that B^{(α_k),n_k} indeed converges to t ↦ −t in probability. The details are left to the reader.

Other lemmas
Denote by ∆*(Y) the size of the largest jump of a càdlàg function Y. This quantity is of interest since, by construction, the length of the longest cycle in the stable looptree L_α is equal to ∆*(X^{exc,(α)}).
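For a path known only through its values on a discrete time grid (as in simulations), ∆*(Y) can be approximated by the largest increment between consecutive grid points; the helper below is our own illustrative sketch, not part of the paper's construction (jumps are positive here, as for a spectrally positive process).

```python
def largest_jump(values):
    """Approximate Delta*(Y) for a cadlag path sampled at grid points:
    the largest positive increment between consecutive values."""
    return max(b - a for a, b in zip(values, values[1:]))
```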
The conclusion immediately follows.
We now prove Theorem 1.2, concerning the limiting behavior of L_α as α ↓ 1 and α ↑ 2. Since L_α is coded by X^{exc,(α)}, it should not be surprising that these results are consequences of Theorems 3.5 and 3.6, which describe the limiting behavior of X^{exc,(α)} as α ↑ 2 and α ↓ 1 respectively. We will see that this is indeed the case when α → 1, but that some care is needed when α → 2 because of the presence of an additional factor 1/2. Before proving Theorem 1.2 we briefly recall the definition of the Gromov-Hausdorff topology. We refer to [10] for additional details.
The Gromov-Hausdorff topology. If (E, d) and (E′, d′) are two compact metric spaces, the Gromov-Hausdorff distance between E and E′ is d_GH(E, E′) = inf δ_H(φ(E), φ′(E′)), where the infimum is taken over all choices of the metric space (F, δ) and of isometric embeddings φ : E → F and φ′ : E′ → F, and δ_H denotes the Hausdorff distance in F. The Gromov-Hausdorff distance can be expressed in terms of correspondences by the formula d_GH(E, E′) = (1/2) inf_R dis(R), where the infimum is over all correspondences R between E and E′ and dis(R) is the distortion of R. The Gromov-Hausdorff distance is indeed a metric on the space of all isometry classes of compact metric spaces, and this space is separable and complete.
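To make the correspondence formulation concrete, here is a small sketch (the helper name and data layout are ours) that evaluates the upper bound d_GH ≤ dis(R)/2 for finite metric spaces given by distance matrices:

```python
def gh_upper_bound(d1, d2, R):
    """Half the distortion of a correspondence R between two finite
    metric spaces with distance matrices d1 and d2.  R is a list of
    index pairs (x, xp) covering every point of both spaces."""
    dis = max(abs(d1[x][y] - d2[xp][yp]) for x, xp in R for y, yp in R)
    return dis / 2.0
```

For two two-point spaces of diameters 1 and 3, the diagonal correspondence gives the bound (3 − 1)/2 = 1, which here is the exact Gromov-Hausdorff distance.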
We now establish (ii). Recall from (2.3) the definition of the pseudo-distance d_h for a function h : [0, 1] → R_+. We will prove a convergence in distribution of these distances. A density and continuity argument shows that, in order to identify the limit of any convergent subsequence of (d^{(α)}), by [8, Theorem 7.3] (this reference covers the case of [0, 1], but the extension to [0, 1]^2 is straightforward), it is sufficient to check the convergence evaluated at (U, V), where U, V are independent uniform random variables on [0, 1]. We claim that it suffices to prove (3.22). Indeed, the reader may either strengthen the following proof by splitting at the most recent common ancestor U ∧ V, or invoke a re-rooting property of X^{exc,(α)} at a uniform location, see Theorem 4.6. We now establish (3.22). For a càdlàg function Y, the quantities Q^b_a(Y) are defined as above. By (3.5) and (3.6), and by using the Vervaat transformation (recall Section 3.1.2), the quantity of interest equals Q^1_0(X^{br,(α)}). It is thus sufficient to show that the latter converges in probability to 1/2 as α ↑ 2.
As usual, we replace the bridge X^{br,(α)} by the α-stable process X^{(α)} and first prove the unconditioned analog. To this end, note that by Theorem 3.1, the collection {u^1_s(X^{(α)}) : s ∈ [0, 1], s ⪯ 1} is an i.i.d. collection of uniform variables, also independent of the jumps {∆X_s}.
On the other hand, for every ε > 0 we have a quantity which converges to 0 as α ↑ 2 by (2.1). Setting S = {∆_s(X^{(α)}) : 0 ≤ s ≤ 1}, it follows that sup S converges in probability towards 0 as α ↑ 2, and the sum of all the elements of S converges in probability towards a positive random variable as α ↑ 2. We are thus in a position to apply a classical weak law of large numbers (for example by using an L^2 estimate) and get the two desired convergences. This proves (3.24). We now complete the proof of (3.22) by an absolute continuity argument. We claim that there exists η ∈ (0, 1) such that the required bound holds for every α ∈ (1, 2) sufficiently close to 2; by Theorem 3.12, it follows that there exists a constant C > 0 (depending on η) such that the bound holds for every α ∈ (3/2, 2). Thus, putting the pieces together, for every α sufficiently close to 2, a minor adaptation of (3.24) shows that Q^1_η(X^{br,(α)}) converges in probability to 1/2 as α ↑ 2. This completes the proof of Theorem 1.2 (ii).

Hausdorff dimension of looptrees
In this section, we study fractal properties of looptrees and prove in particular Theorem 1.1, which identifies the Hausdorff dimension of L_α (see [28, Sec. 4] for the definition of and background on the Hausdorff dimension). Recall the definition of L_α from X^{exc} in Section 2.3. In this section, the dependence of X^{exc} on α is implicit.
Instead of proving (3.27) directly, we will first prove a similar statement involving the unconditioned process X. Let (t^{(ε),*}_i)_{i≥1} be an increasing enumeration of the times at which X makes a jump larger than ε^{1/α} (with the convention t^{(ε),*}_0 = 0). By standard arguments involving continuity relations between X and the Lévy bridge X^{br}, as well as the Vervaat transformation between X^{br} and X^{exc} (see Section 3.1.2), (3.27) holds if we manage to prove the analogous statement (3.28) for X. The advantage of dealing with the unconditioned process is that N*_ε is now distributed according to a Poisson random variable of parameter Π(ε^{1/α}, ∞), that is, using (2.1), (α − 1)/Γ(2 − α) · ε^{−1}. Furthermore, by the Markov property of the process X, the random variables t^{(ε),*}_{i+1} − t^{(ε),*}_i are independent and identically distributed. By the scaling property of X, their common distribution can be written as ε^{1/α} · A, where the variable A is defined in terms of the Lévy process X̃, the process X conditioned not to make jumps larger than 1 (that is, with Lévy measure Π(dx) 1_{(0,1)}(x)), and of an independent exponential variable E of parameter (α − 1)/Γ(2 − α).
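As a sanity check on this Poisson parameter, the sketch below (function names ours) evaluates the tail of the Lévy measure, assuming the normalization (2.1) takes the standard form Π(dr) = [α(α − 1)/Γ(2 − α)] r^{−1−α} 1_{(0,∞)}(r) dr for the spectrally positive α-stable process:

```python
import math

def levy_tail(x, alpha):
    """Tail Pi((x, infinity)) of the stable Levy measure, assuming the
    normalization Pi(dr) = alpha*(alpha-1)/Gamma(2-alpha) * r^(-1-alpha) dr."""
    return (alpha - 1) / math.gamma(2 - alpha) * x ** (-alpha)

def poisson_parameter(eps, alpha):
    """Parameter of N*_eps: intensity of jumps larger than eps^(1/alpha);
    it equals (alpha-1)/Gamma(2-alpha) * eps^(-1)."""
    return levy_tail(eps ** (1.0 / alpha), alpha)
```

In particular the parameter blows up like ε^{−1} as ε → 0, for every fixed α ∈ (1, 2).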
We claim that E[exp(λA)] < ∞ for some λ > 0. To this end, it is sufficient to check two finiteness properties. The first is a consequence of the discussion of [5, p. 188] applied to the spectrally negative process −X̃. For the second one, we slightly adapt these arguments: since ∆X̃_s < 1 for every s ≥ 0, by the Markov property applied at T_{[1,∞)} = inf{t > 0 : X̃_t ≥ 1} and by the lack of memory of the exponential law, we obtain the required bound. To establish (3.28), write the deviation probability and note that, since A has exponential moments and by (3.29), the right-hand side of the last display vanishes as ε → 0. This implies (3.28) and completes the proof of the upper bound.

Lower bound
Proof. Denote by ν the probability measure on L_α obtained as the push-forward of the Lebesgue measure on [0, 1] by the projection p. We will show that for every δ ∈ (0, α), almost surely, for ν-almost every u, the lim sup bound (3.30) holds, where B_r(u) is the ball of center u and radius r > 0 in the metric space L_α. By standard density theorems for Hausdorff measures [28, Theorem 8.8] (this reference covers the case of measures on R^n, but the proof remains valid here), this implies that dim_H(L_α) ≥ α − δ almost surely. The lower bound will thus follow. Fix δ ∈ (0, α). Let U be a uniform variable over [0, 1] independent of L_α. We shall prove that almost surely, for every r > 0 sufficiently small, we have ν(B_r(p(U))) ≤ 2r^{α−δ}. By Fubini's theorem, this indeed implies (3.30). We will use the following lemma: Lemma 3.13. Fix η > 0. Almost surely, as ε → 0, there exists a jump time T_ε of X^{exc} such that three conditions (i), (ii) and (iii) hold, the third being inf_{[U, U+ε^{1−η}]} X^{exc} < X^{exc}_{T_ε−}.
Assuming (i), (ii) and (iii), let us show the claimed bound, which, together with the statement of the lemma, will imply our goal. Indeed, it is sufficient to check that whenever s ∉ [U − ε, U + ε^{1−η}] we have d(s, U) ≥ ε^{1/α+η}. To this end, note that if s ∉ [U − ε, U + ε^{1−η}], then (iii) and (i) show that s ∧ U < T_ε and hence s ∧ U ≺ T_ε ≺ U. By the definition of d and Theorem 2.1 (i), we get the desired lower bound. It thus remains to show Lemma 3.13. Since the statement we intend to prove is a local statement around the point U in X^{exc}, by standard arguments involving continuity relations between X and the Lévy bridge X^{br}, as well as the Vervaat transformation between X^{br} and X^{exc} (see Section 3.1.2), it suffices to prove Lemma 3.13 when X^{exc} is replaced by a two-sided Lévy process (X_t)_{t∈R} and the point U by the point 0. Recall from the statement of Theorem 3.2 the definition of the event A_ε. By Theorem 3.2, there exist C, γ > 0 such that P(A^c_ε) < Cε^γ. The Borel-Cantelli lemma implies that a.s. A_{2^{−k}} holds for every k sufficiently large. This proves (i) and (ii) (with a slightly larger η). Next, by [5, Chapter VIII, Theorem 6 (i)], a.s. there exists c > 0 such that for every ε sufficiently small, sup_{[0,ε^{1−η}]}(−X) ≥ c ε^{(1−η/2)/α}, and by the last line of the proof of Theorem 5 in [5, Chapter VIII], a.s. there exists C > 0 such that for every ε sufficiently small, sup_{[0,ε]}(−X) ≤ C ε^{(1−η/3)/α}. It follows that a.s., for every ε sufficiently small, the required inequality holds. Combined with (i), this implies (iii) and completes the proof.

We now briefly recall the formalism of plane trees, which can for instance be found in [31, 24]. Let N = {0, 1, ...} be the set of nonnegative integers, N* = {1, 2, ...}, and let U = ∪_{m≥0} (N*)^m be the set of labels, where by convention (N*)^0 = {∅}. An element of U is a sequence u = u^1 ⋯ u^m of positive integers, and we set |u| = m, which represents the "generation" or height of u.
If u, v ∈ U, we write uv for the concatenation of u and v. Finally, a plane tree τ is a finite subset of U such that:
1. ∅ ∈ τ;
2. if v ∈ τ and v = uj for some j ∈ N*, then u ∈ τ;
3. for every u ∈ τ, there exists an integer k_u(τ) ≥ 0 (the number of children of u) such that, for every j ∈ N*, uj ∈ τ if and only if 1 ≤ j ≤ k_u(τ).
In the following, by tree we will always mean plane tree. We denote the set of all trees by T. We will often view each vertex of a tree τ as an individual of a population of which τ is the genealogical tree. If u, v ∈ τ, we denote by [[u, v]] the discrete geodesic path between u and v in τ. The total progeny of τ, which is the total number of vertices of τ, will be denoted by |τ|. The number of leaves (vertices u of τ such that k_u(τ) = 0) of the tree τ is denoted by λ(τ), and the height of the tree (the maximal generation) is denoted by H(τ).
We now recall the classical coding of plane trees by the so-called Lukasiewicz path. This coding is crucial in the understanding of scaling limits of discrete looptrees associated with large trees. Let τ be a plane tree whose vertices are listed in lexicographical order: ∅ = u(0) < u(1) < ⋯ < u(|τ| − 1).
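Recall that the Lukasiewicz path of τ is the walk W_0 = 0, W_{i+1} = W_i + k_{u(i)}(τ) − 1, which satisfies W_i ≥ 0 for i < |τ| and W_{|τ|} = −1. A minimal sketch (with our own encoding of plane trees as nested lists, each node being the list of its children):

```python
def lukasiewicz(tree):
    """Lukasiewicz path of a plane tree given as nested lists (a leaf
    is []): preorder traversal is the lexicographical order, and each
    vertex contributes the increment (number of children) - 1."""
    w = [0]
    stack = [tree]
    while stack:
        node = stack.pop()
        w.append(w[-1] + len(node) - 1)
        stack.extend(reversed(node))   # leftmost child is visited next
    return w
```

For the tree whose root has two children, the second of which has two leaf children, the path is [0, 1, 0, 1, 0, -1].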

Invariance principles for discrete looptrees
Recall from the Introduction that a discrete looptree Loop(τ) is associated with every plane tree τ ≠ ∅ (see Figure 2). In this section, we give a sufficient condition on a sequence of trees (τ_n)_{n≥1} ensuring that the associated looptrees (Loop(τ_n))_{n≥1}, appropriately rescaled, converge towards the stable looptree L_α. Theorem 4.1 (Invariance principle). Let (τ_n)_{n≥1} be a sequence of random trees for which there exists a sequence (B_n)_{n≥1} of positive real numbers satisfying conditions (i) and (ii), where the first convergence holds in distribution for the Skorokhod topology on D([0, 1], R) and the second convergence holds in probability. Then B_n^{-1} · Loop(τ_n) converges in distribution to L_α for the Gromov-Hausdorff topology.
Of course, the main applications of this result concern Galton-Watson trees.If ρ is a probability measure on N such that ρ(1) < 1, we denote by GW ρ the law of a Galton-Watson tree with offspring distribution ρ.We say that ρ is critical if it has mean equal to 1.
If ρ is a critical offspring distribution in the domain of attraction of a stable law 1 of index α ∈ (1, 2), Duquesne [16] showed that GW ρ trees conditioned to have n vertices (provided this conditioning makes sense) satisfy the assumptions of Theorem 4.1 ((i) follows from Proposition 4.3 and the proof of Theorem 3.1 in [16], and (ii) follows from the fact that H(τ n ) • B n /n converges in distribution to a positive real valued random variable as n → ∞ by [16,Theorem 3.1]).Recently, the second author [22] proved the same result for GW ρ trees conditioned to have n leaves.Remark 4.2.Let us mention that a different phenomenon happens when the offspring distribution ρ is critical and has finite variance: in this case, if τ n denotes a GW ρ tree conditioned to have n vertices, it is shown in [14] that Loop(τ n )/ √ n converges in distribution towards a constant times the Brownian CRT, and the constant depends this time on the offspring distribution in a rather complicated fashion (in [14] this is actually established under the condition that ρ has a finite exponential moment).The main difference is that in the finite variance case, B n is a constant times √ n, and H(τ n )/B n does not converge in probability to 0 any more, but converges in distribution to a positive real-valued random variable.
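To experiment with such trees, one can sample a Galton-Watson tree directly through its Lukasiewicz path: run a walk with i.i.d. increments ξ_i − 1 until it first hits −1. The sketch below is ours, for illustration only; we use a critical geometric(1/2) offspring law, which has finite variance and thus sits in the regime of the finite-variance remark rather than the stable one.

```python
import random

def gw_lukasiewicz(rng, max_size=10**6):
    """Lukasiewicz path of a critical Galton-Watson tree with
    geometric(1/2) offspring: W_{i+1} = W_i + xi_i - 1, stopped when
    the walk first hits -1.  The tree has len(path) - 1 vertices."""
    while True:                           # resample the (rare) runs over the cap
        w = [0]
        while w[-1] >= 0 and len(w) <= max_size:
            k = 0
            while rng.random() < 0.5:     # P(xi = k) = 2^{-(k+1)}, mean 1
                k += 1
            w.append(w[-1] + k - 1)
        if w[-1] == -1:
            return w
```

Since the increments are at least −1, the walk's first negative value is exactly −1, so the stopping rule recovers the coding of the previous subsection.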
Remark 4.3. Condition (ii) of the above theorem ensures that the height of τ_n is negligible compared to the typical size of loops in Loop(τ_n), so that asymptotically distances in τ_n do not contribute to the distances in Loop(τ_n). Also observe that, in the boundary case α = 2, when ρ has infinite variance (so that ρ is in the domain of attraction of the Gaussian law), we still have H(τ_n)/B_n → 0 (by the same argument that follows (4.9)). In analogy with Theorem 1.2 (ii), we believe that, in this case, B_n^{-1} · Loop(τ_n) converges in distribution as n → ∞ towards (1/2) · T_2. An immediate corollary of Theorem 4.1 is that L_α is a length space (see [10, Chapter 2] for the definition of a length space). (Footnote 1: Recall that this means that ρ([j, ∞)) = j^{−α} L(j), where L : R_+ → R_+ is a function such that L(x) > 0 for x large enough and lim_{x→∞} L(tx)/L(x) = 1 for all t > 0; such a function is called slowly varying. We refer to [9] for details.) Proof. This is a consequence of [10, Theorem 7.5.1], since by Theorem 4.1 the space L_α is a Gromov-Hausdorff limit of finite metric spaces.
Proof of Theorem 4.1. Let (τ_n)_{n≥1} be a sequence of random trees and (B_n)_{n≥1} a sequence satisfying assumptions (i) and (ii). Note that necessarily B_n → ∞ as n → ∞. The Skorokhod representation theorem allows us to assume that the convergences (i) and (ii) hold almost surely, and we aim at proving an almost sure convergence of B_n^{-1} · Loop(τ_n) towards L_α. We first define a sequence of finite metric spaces, denoted by Loop'(τ_n), which are slightly different from Loop(τ_n) but more convenient to work with. Let u^n_0, u^n_1, ..., u^n_{|τ_n|−1} be the vertices of τ_n listed in lexicographical order. Then Loop'(τ_n) is by definition the graph on the set of vertices of τ_n such that two vertices u and v are joined by an edge if and only if one of the following three conditions is satisfied in τ_n: u and v are consecutive children of a same parent, or v is the first child (in the lexicographical order) of u, or v is the last child of u. In particular, if u has a unique child v in τ_n, then u and v are joined by two edges in Loop'(τ_n). See Figure 9 for an example. We equip Loop'(τ_n) with the graph metric. It is easy to check that Loop'(τ_n) is at Gromov-Hausdorff distance at most 2 from Loop(τ_n) (compare Figures 2 and 9). Since B_n → ∞ as n → ∞, it is thus sufficient to show the convergence for Loop'(τ_n). Recall that p : [0, 1] → L_α denotes the canonical projection. For every n ≥ 1, we let R_n be the correspondence between L_α and B_n^{-1} · Loop'(τ_n) made of all the pairs (p(s), u^n_i) such that i = ⌊|τ_n| s⌋ ± 1, where s ∈ [0, 1] and i ∈ {0, 1, 2, ..., |τ_n| − 1}. It is easy to check that R_n is indeed a correspondence, and we will show that, under our assumptions, its distortion vanishes as n → ∞.
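A minimal sketch of this construction (with our own data layout: each vertex keyed by its Ulam-Harris label and mapped to its ordered list of children), building the edge multiset of Loop'(τ) and computing its graph distance by breadth-first search:

```python
from collections import deque

def looptree_edges(children):
    """Edges of Loop'(tau): around each vertex u with ordered children
    c_1, ..., c_k, put the cycle u - c_1 - ... - c_k - u."""
    edges = []
    for u, cs in children.items():
        if cs:
            edges.append((u, cs[0]))        # parent to first child
            edges.extend(zip(cs, cs[1:]))   # consecutive siblings
            edges.append((cs[-1], u))       # last child back to parent
    return edges

def graph_distance(edges, a, b):
    """Breadth-first-search distance in the (multi)graph given by edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    dist, queue = {a: 0}, deque([a])
    while queue:
        u = queue.popleft()
        if u == b:
            return dist[u]
        for w in adj.get(u, []):
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return None
```

For the tree whose root has three children, Loop'(τ) is a single cycle of length 4, so opposite vertices are at distance 2.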
To do so, we shall first see that the graph distance d_n of Loop'(τ_n) can be expressed in a very similar way to (2.5). To simplify notation, we denote by (W^n_k)_{0≤k≤|τ_n|} the Lukasiewicz path associated with τ_n. By definition of W^n, the vertex u^n_i has ∆W^n_i + 1 children. In addition, the discrete genealogical order (also denoted by ⪯) on u^n_0, ..., u^n_{|τ_n|−1} can be read off W^n. Furthermore, when u^n_i ≺ u^n_j, that is when u^n_i ⪯ u^n_j and i ≠ j, the quantity x^j_{n,i} informally gives the "position" of the ancestral line of u^n_j with respect to u^n_i; more precisely, the (∆W^n_i − x^j_{n,i} + 1)-th child of u^n_i (in the lexicographical order) is an ancestor of u^n_j. Similarly to the continuous setting, one checks that the distance between u^n_i and u^n_j in Loop'(τ_n) is given by (4.2) when u^n_i is an ancestor of u^n_j; if u^n_i is not an ancestor of u^n_j, then the distance between u^n_i and u^n_j in Loop'(τ_n) can be computed by breaking the geodesic between u^n_i and u^n_j in three parts at their most recent common ancestor, as in the continuous case. By compactness, we may assume without loss of generality that i_n/|τ_n| → s and j_n/|τ_n| → t. Because i_n = ⌊s_n |τ_n|⌋ ± 1, we also have s_n → s and similarly t_n → t. We make the additional assumption that u^n_{i_n} ⪯ u^n_{j_n} and s_n ⪯ t_n for every n sufficiently large. Note that this entails s ⪯ t. The general case is more tedious and can be solved by breaking at the most recent common ancestor and using (4.3) instead of (4.2). We leave the details to the reader.
The idea is now clear: on the one hand, the jumps of W^n converge after scaling towards the jumps of X^{exc}, and on the other hand, d and d_n have similar expressions involving these jumps (compare (2.6) and (4.2)). Thus, intuitively, the inequality (4.4) cannot hold for n sufficiently large. Let us prove this carefully. Since {r ∈ [0, 1] : s ⪯ r ≺ t and ∆_r > 0} is countable, by (2.6) there exists η > 0 such that
∑_{s ⪯ r ≺ t, δ_r(0, x^t_r) > η} δ_r(0, x^t_r) ≥ d(s, t) − ε/4. (4.5)
Note that the sum appearing in the last expression contains a finite number of terms.

(i) B_n^{-1} δ_{n,k_n(r)}(0, x^{j_n}_{n,k_n(r)}) → δ_r(0, x^t_r) as n → ∞, and (ii) {k_n(r_0), ..., k_n(r_m)} = {k : i_n ⪯ k ≺ j_n and δ_{n,k}(0, x^{j_n}_{n,k}) > η · B_n}. This implies (4.6). In (i), when r = r_0, we use the fact that s_n ⪯ t_n for every n ≥ 1. In order to get the desired contradiction, we show that the second term in the last display can be made less than ε/4 provided that η > 0 is small enough. Indeed, the following equality will be useful. In the case W^n_{j_n}/B_n → X^{exc}_{t−}, the same argument applies after replacing every occurrence of X^{exc}_t by X^{exc}_{t−} and every occurrence of r ⪯ t by r ≺ t. This completes the proof of the claim and of Theorem 4.1.

Application to scaling limit of discrete non-crossing configurations
We now give an application of the invariance principle established in the previous section by showing that stable looptrees appear as Gromov-Hausdorff limits of random Boltzmann dissections of [23].
For every integer n ≥ 3, recall from the Introduction that a dissection of the regular polygon P_n is the union of the sides of P_n and of a collection of diagonals that may intersect only at their endpoints; see Figure 11. Duality with trees. The main tool is a bijection with trees. Indeed, the dual tree of D^µ_n is a Galton-Watson tree, as we now explain.
Given a dissection D ∈ D_n, we construct a (rooted ordered) tree φ(D) as follows. Consider the "dual" graph of D, obtained by placing a vertex inside each face of D and outside each side of the polygon P_{n+1}, and by joining two vertices if the corresponding faces share a common edge, thus giving a connected graph without cycles. Then remove the dual edge intersecting the side of P_{n+1} which connects 1 to e^{2iπ/(n+1)}. Finally, root the tree at the corner adjacent to the latter side (see Figure 11).

Figure 3: On the left, L_{1.01}; on the right, L_{1.9}.

Figure 4: A large dissection and a representation of its metric space.

Figure 7: Setup of Lemma 3.13. The red line shows the ancestral path of U towards 0 and the loops encountered during this descent.

Figure 8: A tree and its Lukasiewicz path.

Figure 10: Illustration of conditions (i) and (ii) above. In the figure on the right, the black process is W^n/B_n and the grey one is X^{exc}. To simplify, here we have set k̄_n(i) = k_n(r_i)/|τ_n| and k̄_n(t) = k_n(t)/|τ_n|.
The faces of a dissection are the connected components of the complement of the dissection in the polygon. Recall from the Introduction the Boltzmann probability measure P^µ_n on D_n, the set of all dissections of P_{n+1}. Our goal is to study scaling limits of random dissections D^µ_n sampled according to P^µ_n and prove Theorem 1.3. Recall that D^µ_n is viewed as a metric space by endowing the vertices of D^µ_n with the graph distance.

Figure 11: The dual tree of a dissection of P_8; note that the tree has 7 leaves.
To this end, remark that if s ⪯ r ⪯ t and s ⪯ r′ ⪯ t, then r ⪯ r′ or r′ ⪯ r. It follows that (4.7) holds. (Note that X^{exc}_{t−} − I^t_s ≥ 0 because s ≠ t.) The quantities x^t_s(Y) and u^t_s(Y) are defined whenever s ⪯_Y t and s ≠ t; when there is no ambiguity, we write x^t_s instead of x^t_s(Y), etc. For t ∈ R, the collection {(x^t_s(Y), u^t_s(Y)) : s ⪯ t} is called the descent of t in Y. As the reader may have noticed, this concept is crucial in the definition of the distance involved in the definition of stable looptrees.
Since j_n/n → t, it is sufficient to treat the case where either W^n_{j_n}/B_n → X^{exc}_t or W^n_{j_n}/B_n → X^{exc}_{t−}. We first suppose that W^n_{j_n}/B_n → X^{exc}_t. At this point, we crucially use Corollary 3.4 and assume that η > 0 has been chosen sufficiently small. Note that we have used the fact that W^n_{j_n}/B_n → X^{exc}_t in order to capture the term of the right-hand side corresponding to r = t. Consequently, combining the last display with (4.8) and assumption (ii) of the theorem, we deduce that, for every n sufficiently large, the sum in (4.7) restricted to jumps larger than η is at least X^{exc}_t − ε/4.
EJP 19 (2014), paper 108.