A new family of Markov branching trees: the alpha-gamma model

We introduce a simple tree growth process that gives rise to a new two-parameter family of discrete fragmentation trees, extending Ford's alpha model to multifurcating trees and including the trees obtained by uniform sampling from Duquesne and Le Gall's stable continuum random tree. We call these new trees the alpha-gamma trees. In this paper, we obtain their splitting rules and their dislocation measures both in ranked order and in size-biased order, and we study their limiting behaviour.


Introduction
Markov branching trees were introduced by Aldous [3] as a class of random binary phylogenetic tree models and extended to the multifurcating case in [16]. Consider the space T_n of combinatorial trees without degree-2 vertices, with one degree-1 vertex called the root and exactly n further degree-1 vertices labelled by [n] = {1, ..., n} and called the leaves; we call the other vertices branch points. Distributions on T_n of random trees T*_n are determined by the distribution of the delabelled tree T°_n on the space T°_n of unlabelled trees and by conditional label distributions, e.g. exchangeable labels. A sequence (T°_n, n ≥ 1) of unlabelled trees has the Markov branching property if for all n ≥ 2, conditionally given that the branching adjacent to the root is into tree components whose numbers of leaves are n_1, ..., n_k, these tree components are independent copies of T°_{n_i}, 1 ≤ i ≤ k. The distributions of the sizes in the first branching of T°_n, n ≥ 2, are denoted by q(n_1, ..., n_k) and referred to as the splitting rule of (T°_n, n ≥ 1). Aldous [3] studied in particular a one-parameter family (β ≥ −2) that interpolates between several models known in various biology and computer science contexts (e.g. β = −2 comb, β = −3/2 uniform, β = 0 Yule) and that he called the beta-splitting model; for β > −2 he sets, for splits into subtrees of sizes i and n − i,

q(i, n − i) ∝ Γ(β + i + 1) Γ(β + n − i + 1) / (Γ(i + 1) Γ(n − i + 1)),   1 ≤ i ≤ n − 1.

For exchangeably labelled Markov branching models (T_n, n ≥ 1) it is convenient to set

p(n_1, ..., n_k) := m_1! ⋯ m_n! / (n choose n_1, ..., n_k) · q((n_1, ..., n_k)↓),   n_j ≥ 1, j ∈ [k]; k ≥ 2 : n = n_1 + ... + n_k,   (1)

where (n_1, ..., n_k)↓ is the decreasing rearrangement, m_r is the number of rs in the sequence (n_1, ..., n_k), and (n choose n_1, ..., n_k) is the multinomial coefficient. The function p is called the exchangeable partition probability function (EPPF) and gives the probability that the branching adjacent to the root splits into tree components with label sets {A_1, ..., A_k} partitioning [n], with block sizes n_j = #A_j. Note that p is invariant under permutations of its arguments. It was shown in [19] that Aldous's beta-splitting models for β > −2 are the only binary Markov branching models for which the EPPF is of Gibbs type, and that the multifurcating Gibbs models form an extended Ewens-Pitman two-parameter family of random partitions (2), of Gibbs type with weights w_{n_j}, for 0 ≤ α ≤ 1, θ ≥ −2α, or −∞ ≤ α < 0, θ = −mα for some integer m ≥ 2, boundary cases by continuity.

Ford [12] introduced a different binary model, the alpha model, using simple sequential growth rules starting from the unique elements T_1 ∈ T_1 and T_2 ∈ T_2:

(i)_F given T_n for n ≥ 2, assign a weight 1 − α to each of the n edges adjacent to a leaf, and a weight α to each of the n − 1 other edges;
(ii)_F select at random, with probabilities proportional to the weights assigned in step (i)_F, an edge of T_n, say a_n → c_n, directed away from the root;
(iii)_F to create T_{n+1} from T_n, replace the selected edge a_n → c_n by three edges a_n → b_n, b_n → c_n and b_n → n + 1, where b_n is a new branch point and n + 1 a new leaf.

It was shown in [12] that these trees are Markov branching trees, but that the labelling is not exchangeable. The splitting rule was calculated and shown to coincide with Aldous's beta-splitting rules if and only if α = 0, α = 1/2 or α = 1, interpolating differently between Aldous's corresponding models for β = 0, β = −3/2 and β = −2. This study was taken further in [16, 23].
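As an illustration, Ford's growth rules (i)_F-(iii)_F translate directly into a simulation. The sketch below uses our own encoding (not part of the model's definition): the root is vertex 0, leaves carry their positive labels, and branch points get fresh negative ids.

```python
import random

def grow_alpha_tree(n, alpha, seed=None):
    """Grow a binary tree with n >= 2 leaves by Ford's growth rules:
    leaf edges get weight 1 - alpha, the n - 1 other edges weight alpha.
    Returns (children, parent); vertex 0 is the root, positive vertices
    are leaves, negative vertices are branch points."""
    rng = random.Random(seed)
    children = {0: [-1], -1: [1, 2]}        # the unique two-leaf tree T_2
    parent = {-1: 0, 1: -1, 2: -1}
    nxt = -2                                 # next branch point id
    for leaf in range(3, n + 1):
        edges = [(p, v) for v, p in parent.items()]
        weights = [1 - alpha if v > 0 else alpha for (p, v) in edges]
        a, c = rng.choices(edges, weights=weights, k=1)[0]
        b, nxt = nxt, nxt - 1
        # replace a -> c by a -> b, b -> c and b -> leaf
        children[a][children[a].index(c)] = b
        children[b] = [c, leaf]
        parent[b], parent[c], parent[leaf] = a, b, b
    return children, parent
```

For n leaves the construction always yields n − 1 branch points, each binary, matching the edge counts in rule (i)_F.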
In this paper, we introduce a new model by extending the simple sequential growth rules to allow multifurcation. Specifically, we also assign weights to vertices, cf. Figure 1:

(i) given T_n for n ≥ 2, assign a weight 1 − α to each of the n edges adjacent to a leaf, a weight γ to each of the n − 1 other edges, and a weight (k − 1)α − γ to each vertex of degree k + 1 ≥ 3;
(ii) select at random, with probabilities proportional to the weights assigned in step (i),
• an edge of T_n, say a_n → c_n, directed away from the root,
• or, as the case may be, a vertex of T_n, say v_n;
(iii) to create T_{n+1} from T_n, do the following:
• if an edge a_n → c_n was selected, replace it by three edges a_n → b_n, b_n → c_n and b_n → n + 1, so that two new edges connect the vertices a_n and c_n to a new branch point b_n and a further edge connects b_n to a new leaf labelled n + 1;
• if a vertex v_n was selected, add an edge v_n → n + 1 to a new leaf labelled n + 1.

We call the resulting model the alpha-gamma model. These growth rules define probability distributions for all 0 ≤ α ≤ 1 and 0 ≤ γ ≤ α. They contain the growth rules of the alpha model for γ = α. They also contain the growth rules of a model [18, 20] based on the stable tree of Duquesne and Le Gall [7], for the cases γ = 1 − α, 1/2 ≤ α < 1, where all edges are given the same weight; we show here that these cases γ = 1 − α, 1/2 ≤ α ≤ 1, as well as α = γ = 0, form the intersection with the extended Ewens-Pitman-type two-parameter family of models (2).

Proposition 1. Let (T_n, n ≥ 1) be alpha-gamma trees with distributions as implied by the sequential growth rules (i)-(iii) for some 0 ≤ α ≤ 1 and 0 ≤ γ ≤ α. Then
(a) the delabelled trees T°_n, n ≥ 1, have the Markov branching property; in the case 0 ≤ α < 1, the splitting rules are of the form q^seq_{α,γ} ∝ w q^{PD*}_{α,−α−γ} for an explicit weight function w depending on (α, γ), where q^{PD*}_{α,−α−γ} is the splitting rule associated via (1) with p^{PD*}_{α,−α−γ}, the Ewens-Pitman-type EPPF given in (2), and LHS ∝ RHS means equality up to a multiplicative constant depending on n and (α, γ) that makes the LHS a probability function;
(b) the labelling of T_n is exchangeable if and only if γ = 1 − α.

For any function (n_1, ..., n_k) → q(n_1, ..., n_k) that is a probability function for all fixed n = n_1 + ... + n_k, n ≥ 2, we can construct a Markov branching model (T°_n, n ≥ 1). A condition called sampling consistency [3] requires that the tree T°_{n,−1}, constructed from T°_n by removal of a uniformly chosen leaf (and of the adjacent branch point if its degree is reduced to 2), has the same distribution as T°_{n−1}, for all n ≥ 2. This is appealing for applications with incomplete observations. It was shown in [16] that all sampling consistent splitting rules admit an integral representation (c, ν) for an erosion coefficient c ≥ 0 and a dislocation measure ν on S↓ = {s = (s_i)_{i≥1} : s_1 ≥ s_2 ≥ ... ≥ 0, s_1 + s_2 + ... ≤ 1} with ν({(1, 0, 0, ...)}) = 0 and ∫_{S↓} (1 − s_1) ν(ds) < ∞, as in Bertoin's continuous-time fragmentation theory [4, 5, 6]. In the most relevant case, when c = 0 and ν({s ∈ S↓ : s_1 + s_2 + ... < 1}) = 0, the EPPF associated with the splitting rule via (1) is

p(n_1, ..., n_k) = (1/Z_n) ∫_{S↓} Σ_{i_1, ..., i_k ≥ 1 distinct} s_{i_1}^{n_1} ⋯ s_{i_k}^{n_k} ν(ds),   (4)

where Z_n = ∫_{S↓} (1 − Σ_{i≥1} s_i^n) ν(ds), n ≥ 2, are the normalization constants. The measure ν is unique up to a multiplicative constant. In particular, it can be shown [20, 17] that for the Ewens-Pitman EPPFs p^{PD*}_{α,θ} we obtain ν = PD*_{α,θ}(ds) of Poisson-Dirichlet type (hence our superscript PD* for the Ewens-Pitman-type EPPF), where for 0 < α < 1 and θ > −2α we can express PD*_{α,θ} in terms of an α-stable subordinator σ with Laplace exponent − log E(e^{−λσ_1}) = λ^α and its ranked sequence of jumps Δσ_{[0,1]} = (Δσ_t, t ∈ [0, 1])↓. For α < 1 and θ = −2α, a similar expression holds. This is the relevant range for this paper. For θ > −α, the measure PD*_{α,θ} just defined is a multiple of the usual Poisson-Dirichlet probability measure PD_{α,θ} on S↓, so for the integral representation of p^{PD*}_{α,θ} we could also take ν = PD_{α,θ} in this case, and this is also an appropriate choice for the two cases α = 0 and m ≥ 3; the case α = 1 is degenerate, q^{PD*}_{1,θ}(1, 1, ..., 1) = 1 (for all θ), and can be associated with ν = PD*_{1,θ} = δ_{(0,0,...)}, see [19].

Theorem 2. The alpha-gamma splitting rules q^seq_{α,γ} are sampling consistent. For 0 ≤ α < 1 and 0 ≤ γ ≤ α, the measure ν in the integral representation can be chosen as a measure ν_{α,γ} identified in Section 3.1. The case α = 1 is discussed in Section 3.2. We refer to Griffiths [14], who used discounting of Poisson-Dirichlet measures by quantities involving Σ_{i≠j} s_i s_j to model genic selection.
In [16], Haas and Miermont's self-similar continuum random trees (CRTs) [15] are shown to be scaling limits for a wide class of Markov branching models. See Sections 3.3 and 3.6 for details. This theory applies here to yield:

Corollary 3. Let (T°_n, n ≥ 1) be delabelled alpha-gamma trees, represented as discrete R-trees with unit edge lengths, for some 0 < α < 1 and 0 < γ ≤ α. Then n^{−γ} T°_n → T_{α,γ} in distribution for the Gromov-Hausdorff topology, where the scaling n^{−γ} is applied to all edge lengths, and T_{α,γ} is a γ-self-similar CRT whose dislocation measure is a multiple of ν_{α,γ}.
We observe that every dislocation measure ν on S↓ gives rise to a measure ν^sb on the space of summable sequences under which fragment sizes are in size-biased random order, just as the GEM_{α,θ} distribution can be defined as the distribution of a PD_{α,θ} sequence rearranged in size-biased random order [22]. We similarly define GEM*_{α,θ} from PD*_{α,θ}. One of the advantages of size-biased versions is that, as for GEM_{α,θ}, we can calculate marginal distributions explicitly.

Proposition 4. For 0 < α < 1 and 0 ≤ γ < α, the distributions ν^sb_k of the first k ≥ 1 marginals of the size-biased form ν^sb_{α,γ} of ν_{α,γ} admit explicit densities in x = (x_1, ..., x_k). For the other boundary values of the parameters the situation is trivial: there are at most two non-zero parts.
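For a finite fragment vector, the passage from ranked to size-biased order admits a simple sketch: repeatedly pick one of the remaining entries with probability proportional to its size. The function below is illustrative only; the name and encoding are our own.

```python
import random

def size_biased_order(weights, seed=None):
    """Return the positive entries of `weights` in size-biased random
    order: at each step, pick a remaining entry with probability
    proportional to its size."""
    rng = random.Random(seed)
    remaining = [w for w in weights if w > 0]
    out = []
    while remaining:
        w = rng.choices(remaining, weights=remaining, k=1)[0]
        remaining.remove(w)
        out.append(w)
    return out
```

In particular, the largest fragment appears first with probability equal to its size when the entries sum to 1.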
We can investigate the convergence of Corollary 3 when labels are retained. Since labels are not exchangeable in general, it is not clear how to nicely represent a continuum tree with infinitely many labels other than by a consistent sequence R_k of trees with k leaves labelled by [k], k ≥ 1. See however [23] for developments in the binary case γ = α on how to embed R_k, k ≥ 1, in a CRT T_{α,α}. The following theorem extends Proposition 18 of [16] to the multifurcating case.
Theorem 5. Let (T_n, n ≥ 1) be a sequence of trees resulting from the alpha-gamma tree growth rules for some 0 < α < 1 and 0 < γ ≤ α. Denote by R(T_n, [k]) the subtree of T_n spanned by the root and the leaves [k], reduced by removing degree-2 vertices, represented as a discrete R-tree with graph distances in T_n as edge lengths. Then n^{−γ} R(T_n, [k]) → R_k a.s. in the sense that all edge lengths converge, for some discrete tree R_k with shape T_k and edge lengths specified in terms of three random variables, conditionally independent given that T_k has k + ℓ edges; here g_γ is the Mittag-Leffler density, the density of σ_1^{−γ} for a subordinator σ with Laplace exponent λ^γ, and D_k contains edge length proportions, first with parameter (1 − α)/γ for edges adjacent to leaves and then with parameter 1 for the other edges, each enumerated e.g. by depth-first search.
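The reduced tree R(T_n, [k]) can be computed for any discrete rooted tree: keep the vertices on root-to-leaf paths, retain the root, the chosen leaves and the branch points where these paths diverge, and contract the remaining degree-2 vertices, recording graph distances as edge lengths. A sketch with our own encoding (`parent` maps each non-root vertex to its parent; the root is 0):

```python
def reduced_subtree(parent, leaves):
    """Subtree spanned by the root and the given leaves, with degree-2
    vertices suppressed.  Returns {v: (reduced_parent, edge_length)},
    edge lengths counted in original graph distance."""
    keep = {0}
    for leaf in leaves:
        v = leaf
        while v != 0:
            keep.add(v)
            v = parent[v]
    nkids = {v: 0 for v in keep}        # number of kept children
    for v in keep:
        if v != 0:
            nkids[parent[v]] += 1
    retained = {v for v in keep if v == 0 or v in set(leaves) or nkids[v] >= 2}
    red = {}
    for v in retained - {0}:
        u, length = parent[v], 1
        while u not in retained:        # contract suppressed vertices
            u, length = parent[u], length + 1
        red[v] = (u, length)
    return red
```

Rescaling the returned edge lengths by n^{−γ} gives the quantities whose a.s. convergence Theorem 5 asserts.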
In fact, 1 − W_k captures the total limiting leaf proportions of the subtrees attached to the vertices of T_k, and we can study further how this mass is distributed between the branch points, see Section 4.2.
We conclude this introduction by giving an alternative description of the alpha-gamma model, obtained by adding colouring rules to the alpha model growth rules (i)_F-(iii)_F, so that in T^col_n each edge, except those adjacent to leaves, carries either a blue or a red colour mark.
(iv)_col To turn T_{n+1} into a colour-marked tree T^col_{n+1}, keep the colours of T^col_n and do the following:
• if an edge a_n → c_n adjacent to a leaf was selected, mark a_n → b_n blue;
• if a red edge a_n → c_n was selected, mark both a_n → b_n and b_n → c_n red;
• if a blue edge a_n → c_n was selected, mark a_n → b_n blue, and mark b_n → c_n red with probability c and blue with probability 1 − c.

When (T^col_n, n ≥ 1) has been grown according to (i)_F-(iii)_F and (iv)_col, crush all red edges, i.e.
(cr) identify all vertices connected via red edges, remove all red edges and remove the remaining colour marks; denote the resulting sequence of trees by (T̃_n, n ≥ 1).

Proposition 6. Let (T̃_n, n ≥ 1) be a sequence of trees obtained via the growth rules (i)_F-(iii)_F, (iv)_col and the crushing rule (cr). Then (T̃_n, n ≥ 1) is a sequence of alpha-gamma trees with γ = α(1 − c).
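The colouring and crushing rules can be simulated directly. The sketch below grows the binary tree with the alpha weights, colours internal edges according to (iv)_col, and then crushes red edges by identifying each red edge's endpoints; we assume (our assumption, for the sketch) that the single internal edge of T_2 starts blue.

```python
import random

def grow_coloured_and_crush(n, alpha, c, seed=None):
    """Grow a coloured binary alpha-model tree, then crush red edges.
    Returns the multifurcating tree as {vertex: children}; root is 0,
    leaves are 1..n, branch points are negative ids."""
    rng = random.Random(seed)
    parent = {1: -1, 2: -1, -1: 0}
    colour = {-1: 'blue'}    # colour of the edge parent[v] -> v, v internal
    nxt = -2
    for leaf in range(3, n + 1):
        edges = [(p, v) for v, p in parent.items()]
        weights = [1 - alpha if v > 0 else alpha for (p, v) in edges]
        a, cv = rng.choices(edges, weights=weights, k=1)[0]
        b, nxt = nxt, nxt - 1
        parent[b], parent[cv], parent[leaf] = a, b, b
        if cv > 0:                         # edge adjacent to a leaf
            colour[b] = 'blue'
        elif colour[cv] == 'red':          # red internal edge
            colour[b] = 'red'
        else:                              # blue internal edge
            colour[b] = 'blue'
            colour[cv] = 'red' if rng.random() < c else 'blue'

    def crush(v):                          # follow red edges upwards
        while colour.get(v) == 'red':
            v = parent[v]
        return v

    children = {}
    for v, p in parent.items():
        if colour.get(v) != 'red':
            children.setdefault(crush(p), []).append(v)
    return children
```

With c = 0 no edge is ever red and the output stays binary (γ = α, the alpha model); larger c produces more multifurcation, consistent with γ = α(1 − c).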
The structure of this paper is as follows. In Section 2 we study the discrete trees grown according to the growth rules (i)-(iii) and establish Proposition 6 and Proposition 1, as well as the sampling consistency claimed in Theorem 2. Section 3 is devoted to the limiting CRTs: we obtain the dislocation measure stated in Theorem 2 and deduce Corollary 3 and Proposition 4. In Section 4 we study the convergence of labelled trees and prove Theorem 5.
2 Sampling consistent splitting rules for the alpha-gamma trees

2.1 Notation and terminology of partitions and discrete fragmentation trees
For B ⊆ N, let P_B be the set of partitions of B into disjoint non-empty subsets called blocks. Consider a probability space (Ω, F, P) that supports a P_B-valued random partition Π_B. If the probability function of Π_B only depends on its block sizes, we call Π_B exchangeable. Then

P(Π_B = {A_1, ..., A_k}) = p(#A_1, ..., #A_k),

where #A_j denotes the block size, i.e. the number of elements of A_j. This function p is called the exchangeable partition probability function (EPPF) of Π_B. Equivalently, a random partition Π_B is exchangeable if its distribution is invariant under the natural action on partitions of B by the symmetric group of permutations of B.
Let B ⊆ N. We say that a partition π ∈ P_B is finer than π′ ∈ P_B, and write π ⪯ π′, if every block of π is included in some block of π′. This defines a partial order on P_B. A process or a sequence with values in P_B is called refining if it is decreasing for this partial order. Refining partition-valued processes are naturally related to trees. Suppose that B is a finite subset of N and that t is a collection of subsets of B, with an additional member called the root, such that
• B ∈ t; we call B the common ancestor of t;
• {i} ∈ t for all i ∈ B; we call {i} a leaf of t;
• for all A ∈ t and C ∈ t, we have either A ∩ C = ∅, A ⊆ C or C ⊆ A.
If we equip t with the parent-child relation, and also root → B, then t is a rooted connected acyclic graph, i.e. a combinatorial tree. We denote the space of such trees t by T_B, and write T_n = T_[n]. For t ∈ T_B and A ∈ t, the rooted subtree s_A of t with common ancestor A is given by s_A = {root} ∪ {C ∈ t : C ⊆ A} ∈ T_A. In particular, we consider the subtrees s_j = s_{A_j} of the common ancestor B of t, i.e. the subtrees whose common ancestors A_j, j ∈ [k], are the children of B. In other words, s_1, ..., s_k are the rooted connected components of t \ {B}.
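This subset encoding is convenient to manipulate programmatically: the children of the common ancestor B are the maximal proper subsets of B in t, and each subtree s_A collects the members of t contained in A. A minimal sketch (function name ours):

```python
def root_components(t):
    """t: collection of frozensets, the vertices of a fragmentation tree
    (the common ancestor B, all singletons, and intermediate subsets).
    Returns {A: s_A} for each child A of B, where s_A is the list of
    vertices C of t with C contained in A."""
    B = max(t, key=len)                       # the common ancestor
    kids = [A for A in t if A < B and not any(A < D < B for D in t)]
    return {A: [C for C in t if C <= A] for A in kids}
```

For example, the tree on B = {1, 2, 3, 4} whose first split is ({1, 2}, {3}, {4}) has three root components, of sizes 2, 1 and 1.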
Let (π(t), t ≥ 0) be a P_B-valued refining process for some finite B ⊂ N with π(0) = 1_B and π(t) = 0_B for some t > 0, where 1_B is the trivial partition into a single block B and 0_B is the partition of B into singletons. We define t_π = {root} ∪ {A ⊆ B : A ∈ π(t) for some t ≥ 0} as the associated labelled fragmentation tree.

Definition 1. Let B ⊂ N with #B = n and t ∈ T_B. We associate the relabelled tree σ(t) ∈ T_n for any bijection σ : B → [n], and define the combinatorial tree shape of t as the equivalence class t° = {σ(t) : σ : B → [n] bijection}. We denote by T°_n the collection of all tree shapes with n leaves, which we will also refer to in their own right as unlabelled fragmentation trees.

Note that the number of subtrees of the common ancestor of t ∈ T_n and the numbers of leaves in these subtrees are invariants of the equivalence class t°. With this notation and terminology, a sequence of random trees (T°_n, n ≥ 1) has the Markov branching property as described in the introduction.

2.2 Colour-marked trees and the proof of Proposition 6
The growth rules (i)_F-(iii)_F construct binary combinatorial trees T^bin_n with a vertex set V and an edge set E ⊂ V × V. We write v → w if (v, w) ∈ E. In Section 2.1, we identified leaf i with the set {i} and vertex b_i with {j ∈ [n] : b_i → ... → j}, the edge set E then being identified by the parent-child relation. In this framework, a colour mark for an edge v → b_i can be assigned to the vertex b_i, so that a coloured binary tree as constructed in (iv)_col can be represented by (T^bin_n, χ_n), where χ_n assigns colours to the internal vertices, 0 representing red and 1 representing blue.
Proof of Proposition 6. We only need to check that the growth rules (i)_F-(iii)_F and (iv)_col for (T^col_n, n ≥ 1) imply that the uncoloured multifurcating trees (T̃_n, n ≥ 1) obtained from (T^col_n, n ≥ 1) via crushing (cr) satisfy the growth rules (i)-(iii). Let therefore t^col_{n+1} be a tree with P(T^col_{n+1} = t^col_{n+1}) > 0. It is easily seen that there are a unique tree t^col_n, a unique insertion edge a^col_n → c^col_n in t^col_n and, if any, a unique colour χ_{n+1}(c^col_n) to create t^col_{n+1} from t^col_n. Denote the trees obtained from t^col_n and t^col_{n+1} via crushing (cr) by t_n and t_{n+1}. If χ_{n+1}(c^col_n) = 0, denote by k + 1 ≥ 3 the degree of the branch point of t_n with which c^col_n is identified in the first step of the crushing (cr).
Because these conditional probabilities do not depend on t^col_n and have the required form, we conclude that (T̃_n, n ≥ 1) obeys the growth rules (i)-(iii) with γ = α(1 − c).

2.3 The Chinese Restaurant Process
An important tool in this paper is the Chinese Restaurant Process (CRP), a partition-valued process (Π_n, n ≥ 1) due to Dubins and Pitman, see [22], which generates the Ewens-Pitman two-parameter family of exchangeable random partitions Π_∞ of N. In the restaurant framework, each block of a partition is represented by a table and each element of a block by a customer at a table. The construction rules are as follows. The first customer sits at the first table, and each subsequent customer is seated at an occupied table or at a new one. Given n customers at k tables with n_j ≥ 1 customers at the jth table, customer n + 1 is placed at the jth table with probability (n_j − α)/(n + θ), and at a new table with probability (θ + kα)/(n + θ). The parameters α and θ can be chosen either as α < 0 and θ = −mα for some m ∈ N, or as 0 ≤ α ≤ 1 and θ > −α. We refer to this process as the CRP with the (α, θ)-seating plan.
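The seating plan translates directly into a simulation; the sketch below records table sizes in order of appearance.

```python
import random

def crp(n, alpha, theta, seed=None):
    """Table sizes (in order of appearance) after seating n customers by
    the CRP with the (alpha, theta)-seating plan: an occupied table of
    size n_j attracts the next customer with weight n_j - alpha, a new
    table has weight theta + k*alpha."""
    rng = random.Random(seed)
    tables = [1]                               # the first customer
    for m in range(1, n):                      # m customers seated so far
        k = len(tables)
        weights = [nj - alpha for nj in tables] + [theta + k * alpha]
        j = rng.choices(range(k + 1), weights=weights, k=1)[0]
        if j == k:
            tables.append(1)
        else:
            tables[j] += 1
    return tables
```

In the regime α < 0, θ = −mα, the new-table weight θ + kα = (m − k)(−α) vanishes once k = m tables are occupied, so the number of blocks never exceeds m.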
In the CRP (Π_n, n ≥ 1) with Π_n ∈ P_[n], we can study the block sizes, which leads us to consider the proportion of customers at each table relative to the total number of customers. These proportions converge to limiting frequencies as follows.

Lemma 7 (Theorem 3.2 in [22]). For each pair of parameters (α, θ) subject to the constraints above, the Chinese restaurant with the (α, θ)-seating plan generates an exchangeable random partition Π_∞ of N. The corresponding EPPF is

p^PD_{α,θ}(n_1, ..., n_k) = [Π_{i=1}^{k−1} (θ + iα) / (θ + 1)_{n−1}] Π_{j=1}^{k} (1 − α)_{n_j−1},

where (x)_m = x(x + 1) ⋯ (x + m − 1), boundary cases by continuity. The corresponding limiting frequencies of block sizes, in size-biased order of least elements, are GEM_{α,θ} and can be represented as

(P_1, P_2, ...) = (W_1, (1 − W_1)W_2, (1 − W_1)(1 − W_2)W_3, ...),

where the W_i are independent and W_i has a beta(1 − α, θ + iα) distribution. The distribution of the associated ranked sequence of limiting frequencies is Poisson-Dirichlet PD_{α,θ}.
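The stick-breaking representation of the limiting frequencies in Lemma 7 can be sketched as follows, drawing W_i ~ beta(1 − α, θ + iα).

```python
import random

def gem_frequencies(k, alpha, theta, seed=None):
    """First k limiting table frequencies in size-biased order
    (GEM(alpha, theta)) via stick breaking:
    P_i = W_i * prod_{j<i} (1 - W_j), W_i ~ beta(1 - alpha, theta + i*alpha)."""
    rng = random.Random(seed)
    freqs, stick = [], 1.0
    for i in range(1, k + 1):
        w = rng.betavariate(1 - alpha, theta + i * alpha)
        freqs.append(stick * w)     # break off a W_i-fraction of the stick
        stick *= 1 - w              # remaining stick length
    return freqs
```

For instance, E[P_1] = (1 − α)/(1 + θ), since P_1 = W_1 is beta(1 − α, θ + α) distributed.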
We also associate with the EPPF p^PD_{α,θ}, via (1), the distribution q^PD_{α,θ} of block sizes in decreasing order. Because the Chinese restaurant EPPF can lead to a single block and is therefore not the EPPF of a splitting rule with k ≥ 2 blocks (we use the notation q^{PD*}_{α,θ} for the splitting rules induced by conditioning on k ≥ 2 blocks), we also set q^PD_{α,θ}(n) = p^PD_{α,θ}(n). The asymptotic properties of the number K_n of blocks of Π_n under the (α, θ)-seating plan depend on α: if α < 0 and θ = −mα for some m ∈ N, then K_n = m for all sufficiently large n a.s.; if α = 0 and θ > 0, then K_n / log n → θ a.s. The most relevant case for us is α > 0, where K_n / n^α converges a.s. to a random limit.
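These growth rates can be checked numerically. Since the new-table probability (θ + kα)/(n + θ) is linear in k, the expectation E[K_n] satisfies an exact recursion, a small sketch of which follows (the function name is ours).

```python
def expected_tables(n, alpha, theta):
    """E[K_n], the expected number of occupied tables after n customers
    under the (alpha, theta)-seating plan, via the exact recursion
    E[K_{m+1}] = E[K_m] + (theta + alpha * E[K_m]) / (m + theta),
    valid because the new-table probability is linear in K_m."""
    ek = 1.0                      # the first customer opens a table
    for m in range(1, n):
        ek += (theta + alpha * ek) / (m + theta)
    return ek
```

For α = 0, θ = 1 the recursion returns the harmonic number H_n, matching the θ log n growth; for α > 0, θ = 0 it reduces to E[K_n] = Γ(n + α)/(Γ(1 + α)Γ(n)), of order n^α.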
As an extension of the CRP, Pitman and Winkel [23] introduced the ordered CRP. Its seating plan is as follows. The tables are ordered from left to right. Place the second table to the right of the first with probability θ/(α + θ) and to its left with probability α/(α + θ). Given k tables, place the (k + 1)st table to the right of the right-most table with probability θ/(kα + θ), and to the left of the left-most table or between two adjacent tables with probability α/(kα + θ) each.
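A sketch of the ordered seating plan, reading the composition of table sizes from left to right; customers are seated at existing tables exactly as in the unordered CRP, and only new tables involve the gap choice above.

```python
import random

def ordered_crp(n, alpha, theta, seed=None):
    """Left-to-right table sizes after n customers in the ordered CRP of
    Pitman and Winkel with parameters (alpha, theta), theta >= 0:
    a new table goes to the right of the right-most table with
    probability theta/(k*alpha + theta) and into each of the k other
    gaps with probability alpha/(k*alpha + theta)."""
    rng = random.Random(seed)
    tables = [1]
    for m in range(1, n):
        k = len(tables)
        weights = [nj - alpha for nj in tables] + [theta + k * alpha]
        j = rng.choices(range(k + 1), weights=weights, k=1)[0]
        if j < k:
            tables[j] += 1                 # existing table
        else:
            # gaps 0..k-1 (left of left-most or between adjacent tables)
            # have weight alpha each; gap k (right-most) has weight theta
            gap = rng.choices(range(k + 1), weights=[alpha] * k + [theta], k=1)[0]
            tables.insert(gap, 1)
    return tables
```

The resulting left-to-right sizes form the composition C_n of Lemma 9 below.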
A composition of n is a sequence (n_1, ..., n_k) of positive integers with sum n. A sequence of random compositions C_n of n is called regenerative if, conditionally given that the first part of C_n is n_1, the remaining parts of C_n form a composition of n − n_1 with the same distribution as C_{n−n_1}. Given any decrement matrix (q^dec(n, m), 1 ≤ m ≤ n), there is an associated sequence C_n of regenerative random compositions of n, defined by specifying that q^dec(n, ·) is the distribution of the first part of C_n. Thus, for each composition (n_1, ..., n_k) of n,

P(C_n = (n_1, ..., n_k)) = Π_{j=1}^{k} q^dec(n_j + ... + n_k, n_j).

Lemma 9 (Proposition 6(i) in [23]). For each (α, θ) with 0 < α < 1 and θ ≥ 0, denote by C_n the composition of block sizes in the ordered Chinese restaurant partition with parameters (α, θ). Then (C_n, n ≥ 1) is regenerative, with a decrement matrix which we denote by q^dec_{α,θ}.

2.4 The splitting rule of alpha-gamma trees and the proof of Proposition 1

Proposition 1 claims that the unlabelled alpha-gamma trees (T°_n, n ≥ 1) have the Markov branching property, identifies the splitting rule and studies the exchangeability of labels. In preparation for the proof of the Markov branching property, we use CRPs to compute the probability function of the first split of T°_n in Proposition 10. We then establish the Markov branching property from a spinal decomposition result (Lemma 11) for T°_n.

Proposition 10. Let T°_n be an unlabelled alpha-gamma tree for some 0 ≤ α < 1 and 0 ≤ γ ≤ α. Then the probability function of the first split of T°_n is the one displayed in Proposition 1(a).

Proof. We start from the growth rules of the labelled alpha-gamma trees T_n. Consider the spine from the root to leaf 1. By joining together the subtrees of the spinal vertex v_i, we form the ith spinal bush S^sp_i = S^sp_{i1} * ... * S^sp_{iK_{n,i}}. If a bush S^sp_i consists of k subtrees with m leaves in total, then its weight is (m − kα) + (kα − γ) = m − γ according to growth rule (i); recall that the total weight of the tree T_n is n − α.
Now we consider each bush as a table and each leaf n = 2, 3, ... as a customer, 2 being the first customer. Adding a new leaf to a bush or to an edge on the spine corresponds to adding a new customer to an existing or to a new table. The weights are such that we construct an ordered Chinese restaurant partition of N \ {1} with parameters (γ, 1 − α).
Suppose that the first split of T_n is into tree components with numbers of leaves n_1 ≥ ... ≥ n_k ≥ 1, and suppose further that leaf 1 is in the subtree with n_i leaves in the first split; then the first spinal bush S^sp_1 has n − n_i leaves. In the terminology of the ordered CRP, this event is the event that n − n_i customers sit at the first table when a total of n − 1 customers are present. According to Lemma 9, the probability of this is q^dec_{γ,1−α}(n − 1, n − n_i). Next, consider the probability that the first bush S^sp_1 joins together subtrees with n_1 ≥ ... ≥ n_{i−1} ≥ n_{i+1} ≥ ... ≥ n_k ≥ 1 leaves, conditional on the event that leaf 1 is in a subtree with n_i leaves.
The first bush has weight n − n_i − γ and each subtree in it has weight n_j − α, j ≠ i. Consider these k − 1 subtrees as tables and the leaves in the first bush as customers. According to the growth procedure, they form a second (this time unordered) Chinese restaurant partition with parameters (α, −γ). Let m_j be the number of js in the sequence (n_1, ..., n_k). By the exchangeability of the second Chinese restaurant partition, we can compute the probability that the first bush consists of subtrees with n_1 ≥ ... ≥ n_{i−1} ≥ n_{i+1} ≥ ... ≥ n_k ≥ 1 leaves, conditional on the event that leaf 1 is in one of the m_{n_i} subtrees with n_i leaves. Multiplying, we obtain the joint probability (7) that the first split is (n_1, ..., n_k) and that leaf 1 is in a subtree with n_i leaves. The splitting rule is then the sum of (7) over the distinct values n_i (not over i) in (n_1, ..., n_k); since the summands contain factors m_{n_i}, we can also write it as a sum over i ∈ [k].

We can use the nested Chinese restaurants described in the proof to study the subtrees of the spine of T_n. We have decomposed T_n into the subtrees S^sp_{ij} of the spine from the root to leaf 1 and can, conversely, build T_n from the S^sp_{ij}. We will write ⊛_{i,j} S°_{ij} when we join together unlabelled trees S°_{ij} along a spine. The following unlabelled version of a spinal decomposition theorem will entail the Markov branching property.

Lemma 11 (Spinal decomposition). Let (T^•1_n, n ≥ 1) be alpha-gamma trees, delabelled apart from label 1. For all n ≥ 2, the tree T^•1_n has the same distribution as ⊛_{i,j} S°_{ij}, where
• the sizes of the spinal bushes form a regenerative composition with decrement matrix q^dec_{γ,1−α};
• conditionally given the bush sizes, the sizes of the subtrees within each bush are given by independent Chinese restaurant partitions with parameters (α, −γ);
• conditionally given also all subtree sizes, the S°_{ij} are independent unlabelled alpha-gamma trees of these sizes.

Proof. For an induction on n, note that the claim is true for n = 2, since T^•1_n and ⊛_{i,j} S°_{ij} are deterministic for n = 2. Suppose then that the claim is true for some n ≥ 2 and consider T^•1_{n+1}. The growth rules (i)-(iii) of the labelled alpha-gamma tree T_n are such that
• leaf n + 1 is inserted into a new bush or into one of the bushes S^sp_i selected according to the rules of the ordered CRP with the (γ, 1 − α)-seating plan,
• further into a new subtree or into one of the subtrees S^sp_{ij} of the selected bush S^sp_i according to the rules of a CRP with the (α, −γ)-seating plan,
• and further within the subtree S^sp_{ij} according to the weights assigned by (i) and the growth rules (ii)-(iii).
These selections do not depend on T_n except via T^•1_n. In fact, since labels do not feature in the growth rules (i)-(iii), they are easily seen to induce growth rules for the partially labelled alpha-gamma trees T^•1_n, and also for unlabelled alpha-gamma trees such as S°_{ij}. From these observations and the induction hypothesis, we deduce the claim for T^•1_{n+1}.
Proof of Proposition 1. (a) Firstly, the distributions of the first splits of the unlabelled alpha-gamma trees T°_n were calculated in Proposition 10, for 0 ≤ α < 1 and 0 ≤ γ ≤ α. Secondly, let 0 ≤ α ≤ 1 and 0 ≤ γ ≤ α. By the regenerative property of the spinal composition C_{n−1} and the conditional distribution of T^•1_n given C_{n−1} identified in Lemma 11, we obtain that, conditionally given the sizes n_{1j} of the subtrees in the first bush, the subtrees S°_{1j} are independent alpha-gamma trees distributed as T°_{n_{1j}}, also independent of the remaining tree S_{1,0} := ⊛_{i≥2,j} S°_{ij}, which, by Lemma 11, has the same distribution as T°_{n−m}, where m is the number of leaves in the first bush. This is equivalent to saying that, conditionally given that the first split is into subtrees with n_1 ≥ ... ≥ n_i ≥ ... ≥ n_k ≥ 1 leaves and that leaf 1 is in a subtree with n_i leaves, the delabelled subtrees S°_1, ..., S°_k of the common ancestor are independent and distributed as T°_{n_j}, j ∈ [k], respectively. Since this conditional distribution does not depend on i, we have established the Markov branching property of T°_n.
(b) Notice that if γ = 1 − α, the alpha-gamma model is the model related to stable trees, whose labelling is known to be exchangeable, see Section 3.4. On the other hand, if γ ≠ 1 − α, let us turn to the distribution of T_3. One computes that two labelled trees in T_3 with the same unlabelled tree have different probabilities. So if γ ≠ 1 − α, T_n is not exchangeably labelled.

2.5 Sampling consistency and strong sampling consistency
Recall that an unlabelled Markov branching tree T°_n, n ≥ 2, has the property of sampling consistency if, when we select a leaf uniformly at random and delete it (together with the adjacent branch point if its degree is reduced to 2), the new tree, denoted by T°_{n,−1}, is distributed as T°_{n−1}. Denote by d : D_n → D_{n−1} the induced deletion operator on the space D_n of probability measures on T°_n, so that for the distribution P_n of T°_n, we define d(P_n) as the distribution of T°_{n,−1}. Sampling consistency is then equivalent to d(P_n) = P_{n−1}. This property is also called deletion stability in [12].

Proposition 12. The unlabelled alpha-gamma trees for 0 ≤ α ≤ 1 and 0 ≤ γ ≤ α are sampling consistent.
Proof. The sampling consistency formula (14) in [16] states that d(P_n) = P_{n−1} is equivalent to an identity (8) between the splitting rules q of T°_n ~ P_n for adjacent values of n. In terms of EPPFs (1), formula (8) is equivalent to an identity (9). Now, according to Proposition 1, the EPPF of the alpha-gamma model with α < 1 can be expressed in terms of Γ_α(n) = Γ(n − α)/Γ(1 − α), and so can each p^seq_{α,γ}(n_1, ..., n_i + 1, ..., n_k). Summing these expressions, the right-hand side of (9) reduces to the left-hand side. Hence the splitting rules of the alpha-gamma model satisfy (9), which implies sampling consistency for α < 1. The case α = 1 is postponed to Section 3.2.

Moreover, sampling consistency can be enhanced to strong sampling consistency [16] by requiring that (T°_{n−1}, T°_n) has the same distribution as (T°_{n,−1}, T°_n).

Proposition 13. The alpha-gamma model is strongly sampling consistent if and only if γ = 1 − α.
Proof. For γ = 1 − α, the model is known to be strongly sampling consistent, cf. Section 3.4. For the converse, consider a tree t°_4 and the tree t°_3 obtained by deleting one of the two leaves at the first branch point of t°_4; from this we can compute P(T°_{4,−1} = t°_3, T°_4 = t°_4). On the other hand, if T°_3 = t°_3, we have to add the new leaf at the first branch point to get t°_4, which yields P(T°_3 = t°_3, T°_4 = t°_4). It is easy to check that these two probabilities differ when γ ≠ 1 − α, which means that the alpha-gamma model is then not strongly sampling consistent.
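The deletion operation T°_n → T°_{n,−1} used throughout this section can be sketched on a children-map encoding of a rooted tree (our own encoding; the root is vertex 0 and is never suppressed).

```python
def delete_leaf(children, leaf):
    """Remove the given leaf from a rooted tree (children maps each
    vertex to the list of its children) and suppress the adjacent
    branch point if its degree is reduced to 2."""
    children = {v: list(kids) for v, kids in children.items()}   # copy
    par = {c: v for v, kids in children.items() for c in kids}
    b = par[leaf]
    children[b].remove(leaf)
    if len(children[b]) == 1 and b != 0:     # b became a degree-2 vertex
        (only,) = children[b]
        g = par[b]
        children[g][children[g].index(b)] = only
        del children[b]
    return children
```

Applying this to a uniformly chosen leaf of T°_n and comparing with T°_{n−1} is exactly the sampling consistency condition d(P_n) = P_{n−1}.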
3 Dislocation measures and asymptotics of alpha-gamma trees

3.1 Dislocation measures associated with the alpha-gamma-splitting rules
Theorem 2 claims that the alpha-gamma trees are sampling consistent, which we proved in Section 2.5, and identifies the integral representation of the splitting rule in terms of a dislocation measure, which we will now establish.
Proof of Theorem 2. Firstly, we rearrange the coefficient of the sampling consistent splitting rules of the alpha-gamma trees identified in Proposition 10, in terms of the normalisation constant in (4) for ν = PD*_{α,−α−γ}, as can be read from [17, Formula (17)]. Applying (4), and arguing similarly for each remaining term, the EPPF p^seq_{α,γ}(n_1, ..., n_k) of the sampling consistent splitting rule takes the form (4) with a measure ν_{α,γ} as claimed, up to the appropriate normalisation.

3.2 The alpha-gamma model when α = 1: spine with bushes of singleton trees

Within the discussion of the alpha-gamma model so far, we have restricted to 0 ≤ α < 1. In fact, we can still obtain some interesting results when α = 1. The weight of each leaf edge is 1 − α in the growth procedure of the alpha-gamma model. If α = 1, the weight of each leaf edge vanishes, which means that a new leaf can only be inserted at internal edges or branch points.
Starting from the two-leaf tree, leaf 3 must be inserted into the root edge or at the branch point. Similarly, any subsequent leaf must be inserted on the spine leading from the root to the common ancestor of leaves 1 and 2. Hence, the shape of the tree is a spine with bushes of one-leaf subtrees rooted on it. Moreover, the first split of an n-leaf tree will be (n − k + 1, 1, ..., 1) for some 2 ≤ k ≤ n − 1. The cases γ = 0 and γ = 1 lead to degenerate trees with, respectively, all leaves connected to a single branch point and all leaves connected to a spine of binary branch points (comb).
(a) The model is sampling consistent, with splitting rules concentrated on splits of the form (n − k + 1, 1, ..., 1).
(b) The dislocation measure associated with the splitting rules can be expressed explicitly. In particular, it does not satisfy ν({s ∈ S↓ : s_1 + s_2 + ... < 1}) = 0.

Proof. (a) We start from the growth procedure of the alpha-gamma model when α = 1. Consider a first split into (n − k + 1, 1, ..., 1) for some labelled n-leaf tree. Suppose its first branch point is created when leaf l is inserted into the root edge, for some l ≥ 3. At this time the first split is (l − 1, 1), with probability γ/(l − 2), since α = 1. In the subsequent insertions, leaves l + 1, ..., n are added either at the first branch point or into the subtree that has l − 1 leaves at this time.

3.3 Continuum random trees and self-similar trees
Let B ⊂ N be finite. A labelled tree with edge lengths is a pair ϑ = (t, η), where t ∈ T_B is a labelled tree and η = (η_A, A ∈ t \ {root}) is a collection of marks: every edge C → A of t is associated with the mark η_A ∈ (0, ∞), which we interpret as the edge length of C → A. Let Θ_B be the set of such trees (t, η) with t ∈ T_B. We now introduce continuum trees, following the construction of Evans et al. [9]. A complete separable metric space (τ, d) is called an R-tree if it satisfies the following two conditions:
1. for all x, y ∈ τ, there is an isometry ϕ_{x,y} : [0, d(x, y)] → τ such that ϕ_{x,y}(0) = x and ϕ_{x,y}(d(x, y)) = y;
2. for every injective path c : [0, 1] → τ with c(0) = x and c(1) = y, one has c([0, 1]) = ϕ_{x,y}([0, d(x, y)]).
We will consider rooted R-trees (τ, d, ρ), where ρ ∈ τ is a distinguished element, the root.We think of the root as the lowest element of the tree.
We denote the range of ϕ x,y by [[x, y]] and call the quantity d(ρ, x) the height of x. We say that x is an ancestor of y whenever x ∈ [[ρ, y]]. We let x ∧ y be the unique element of τ such that [[ρ, x]] ∩ [[ρ, y]] = [[ρ, x ∧ y]], and call it the highest common ancestor of x and y in τ. We denote by (τ x , d| τ x , x) the set of y ∈ τ such that x is an ancestor of y; this is an R-tree rooted at x that we call the fringe subtree of τ above x.
Two rooted R-trees (τ, d, ρ) and (τ ′, d ′, ρ ′) are called equivalent if there is a bijective isometry between the two metric spaces that maps the root of one to the root of the other. We denote by Θ the set of equivalence classes of compact rooted R-trees. We define the Gromov-Hausdorff distance between two rooted R-trees (or their equivalence classes) as d GH (τ, τ ′) = inf d H (τ, τ ′), where the infimum is over all metric spaces E and isometric embeddings τ ⊂ E and τ ′ ⊂ E with common root ρ ∈ E; the Hausdorff distance on compact subsets of E is denoted by d H . Evans et al. [9] showed that (Θ, d GH ) is a complete separable metric space.
We call an element x ∈ τ, x ≠ ρ, of a rooted R-tree τ a leaf if its removal does not disconnect τ, and we let L(τ) be the set of leaves of τ. On the other hand, we call an element of τ a branch point if it is of the form x ∧ y, where x is neither an ancestor of y nor vice versa. Equivalently, branch points are the points whose removal disconnects τ into three or more connected components. We let B(τ) be the set of branch points of τ.
A continuum random tree (CRT) is a random variable whose values are continuum trees, defined on some probability space (Ω, A, P). Several methods to formalize this have been developed [2, 10, 13]. For technical simplicity, we use the method of Aldous [2]. Let the space ℓ 1 = ℓ 1 (N) be the base space for defining CRTs. We endow the set of compact subsets of ℓ 1 with the Hausdorff metric, and the set of probability measures on ℓ 1 with any metric inducing the topology of weak convergence, so that the set of pairs (T, µ), where T is a rooted R-tree embedded as a subset of ℓ 1 and µ is a measure on T, is endowed with the product σ-algebra.
the random variable Π(t + s) has the same law as the random partition whose blocks are those of π i ∩ Π (i) (|π i | a s), i ≥ 1, where (Π (i) , i ≥ 1) is a sequence of i.i.d. copies of (Π(t), t ≥ 0). The process (|Π(t)| ↓ , t ≥ 0) is an S ↓ -valued self-similar fragmentation process. Bertoin [5] proved that the distribution of a P N -valued self-similar fragmentation process is determined by a triple (a, c, ν), where a ∈ R, c ≥ 0 and ν is a dislocation measure on S ↓ . In this article, we are only interested in the case c = 0 with ν({s ∈ S ↓ : s 1 + s 2 + . . . < 1}) = 0. We call (a, ν) the characteristic pair. When a = 0, the process (Π(t), t ≥ 0) is also called a homogeneous fragmentation process.
A CRT (T , µ) is a self-similar CRT with index a = −γ < 0 if for every t ≥ 0, given (µ(T i t ), i ≥ 1), where T i t , i ≥ 1, are the connected components of the open set {x ∈ τ : d(x, ρ(τ)) > t} in ranked order, the continuum random trees µ(T i t ) −γ T i t , i ≥ 1, are i.i.d. copies of (T , µ); here µ(T i t ) −γ T i t is the tree that has the same set of points as T i t but whose distance function is divided by µ(T i t ) γ . Haas and Miermont [15] have shown that there exists a self-similar continuum random tree T (γ,ν) characterized by such a pair (γ, ν), which can be constructed from a self-similar fragmentation process with characteristic pair (γ, ν).

3.4 The alpha-gamma model when γ = 1 − α: sampling from the stable CRT

Let (T , ρ, µ) be the stable tree of Duquesne and Le Gall [7]. The distribution on Θ of any CRT is determined by its so-called finite-dimensional marginals: the distributions of R k , k ≥ 1, the subtrees R k ⊂ T defined as the discrete trees with edge lengths spanned by ρ, U 1 , . . ., U k , where, given (T , µ), the sequence U i ∈ T , i ≥ 1, of leaves is sampled independently from µ. See also [21, 8, 16, 17, 18] for various approaches to stable trees. Let us denote the discrete tree without edge lengths associated with R k by T k and note the Markov branching structure.
Lemma 15 (Corollary 22 in [16]). Let 1/α ∈ (1, 2]. The trees T n , n ≥ 1, sampled from the (1/α)-stable CRT are Markov branching trees, whose splitting rule has EPPF We recognise p stable as the EPPF of the alpha-gamma model with γ = 1 − α.

Proof. These properties follow from the representation by sampling from the stable CRT, in particular from the exchangeability of the sequence U i , i ≥ 1. Specifically, since U i , i ≥ 1, are conditionally independent and identically distributed given (T , µ), they are exchangeable. If we denote by L n,−1 the random set of leaves L n = {U 1 , . . ., U n } with a uniformly chosen member removed, then (L n,−1 , L n ) has the same conditional distribution as (L n−1 , L n ). Hence the pairs of (unlabelled) tree shapes spanned by ρ and these sets of leaves have the same distribution; this is strong sampling consistency as defined before Proposition 13.

Dislocation measures in size-biased order
In actual calculations, the splitting rules of Proposition 1 are quite unwieldy and the corresponding dislocation measure ν remains inexplicit, which leads us to transform ν into a more explicit form. The method proposed here is to change the space S ↓ into the space [0, 1] N and to rearrange the elements s ∈ S ↓ under ν into size-biased random order, which places s i 1 first with probability s i 1 (its size) and then, successively, places the remaining entries s i j into the following positions with probabilities s i j /(1 − s i 1 − . . . − s i j−1 ) proportional to their sizes, j ≥ 2.

Definition 2. We call a measure ν sb on the space [0, 1] N the size-biased dislocation measure associated with the dislocation measure ν, if for any subset ν(ds) (16) for any k ∈ N, where ν is a dislocation measure on S ↓ satisfying ν({s ∈ S ↓ : s 1 + s 2 + . . . < 1}) = 0. We also denote by The sum in (16) is over all possible rank sequences (i 1 , . . ., i k ) determining the first k entries of the size-biased vector. The integral in (16) is over the decreasing sequences whose jth re-ordered entry falls into A j , j ∈ [k]. Notice that the support of such a size-biased dislocation measure ν sb is a subset of S sb := {s ∈ [0, 1] N : Σ i≥1 s i = 1}. If we denote by s ↓ the sequence s ∈ S sb rearranged into ranked order, then substituting (16) into formula (4) yields the following.

Proposition 17. The EPPF associated with a dislocation measure ν can be represented as This is a simple σ-finite extension of the GEM distribution, and (17) can be derived analogously to Lemma 7. Applying Proposition 17, we can obtain an explicit form of the size-biased dislocation measure associated with the alpha-gamma model.
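The size-biased reordering underlying Definition 2 can be illustrated concretely. The sampler below is our own sketch (not from the paper): it draws the entries of a ranked sequence one by one, each pick made proportional to its size among the remaining entries, exactly the scheme described above.

```python
import random

def size_biased_order(s, rng=None):
    """Return the entries of s in size-biased random order."""
    rng = rng or random.Random(0)
    s, out = list(s), []
    while s:
        r = rng.uniform(0, sum(s))       # pick a point in the remaining total mass
        for i, x in enumerate(s):
            r -= x
            if r <= 0:
                out.append(s.pop(i))     # entry i chosen with probability x / sum(s)
                break
        else:                            # guard against floating-point round-off
            out.append(s.pop())
    return out
```

For s = (0.6, 0.3, 0.1), the first entry of the reordered vector is 0.6 with probability 0.6, matching the definition of the size-biased order.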
Proof of Proposition 4. We start from the dislocation measure associated with the alpha-gamma model. According to (5) and (16), the first k marginals of ν sb α,γ are given by where Applying (17) to F (and setting θ = −α − γ), then integrating out x k+1 , we get: Summing over D, E, F, we obtain the formula stated in Proposition 4.
Since the model related to stable trees is the special case γ = 1 − α of the alpha-gamma model, its size-biased dislocation measure is For general (α, γ), the explicit form of the dislocation measure in size-biased order, specifically the density g α,γ of the first marginal of ν sb α,γ , immediately yields the tagged-particle Lévy measure [4] associated with a fragmentation process with alpha-gamma dislocation measure.

Convergence of alpha-gamma trees to self-similar CRTs
In this subsection, we prove that the delabelled alpha-gamma trees T • n , represented as R-trees with unit edge lengths and suitably rescaled, converge to CRTs as n tends to infinity.
Proof of Corollary 3. The splitting rules of the two families of delabelled trees are the same, which gives the identity in distribution for the whole trees. The preceding lemma then yields convergence in distribution for T • n .
4 Limiting results for labelled alpha-gamma trees

In this section we suppose 0 < α < 1 and 0 < γ ≤ α. In the boundary case γ = 0, trees grow logarithmically and do not possess non-degenerate scaling limits; for α = 1, the study in Section 3.2 can be refined to give results analogous to the ones below, but with degenerate tree shapes.

The scaling limits of reduced alpha-gamma trees
For τ a rooted R-tree with root ρ and x 1 , . . ., x n ∈ τ, we denote by R(τ, x 1 , . . ., x n ) = ∪ i∈[n] [[ρ, x i ]] the reduced subtree associated with τ, x 1 , . . ., x n .
As a fragmentation CRT, the limiting CRT (T α,γ , µ) is naturally equipped with a mass measure µ and contains subtrees R k , k ≥ 1, spanned by k leaves chosen independently according to µ. Denote the associated discrete tree without edge lengths by T n ; it has exchangeable leaf labels. Then R k is the almost sure scaling limit of the reduced trees R(T n , [k]), by Proposition 7 in [16].
On the other hand, if we denote by T n the (non-exchangeably) labelled trees obtained via the alpha-gamma growth rules, the above result does not apply. However, similarly to the result for the alpha model in Proposition 18 of [16], we can still establish a.s. convergence of the reduced subtrees in the alpha-gamma model, as stated in Theorem 5, and the convergence result can be strengthened as follows.
Proposition 21. In the setting of Theorem 5, in the sense of Gromov-Hausdorff convergence, where W n,k is the total number of leaves in subtrees of T n \ R(T n , [k]) that are linked to the present branch points of R(T n , [k]).
Proof of Theorem 5 and Proposition 21. The labelled discrete tree R(T n , [k]) with edge lengths removed is T k for all n. Thus, it suffices to prove the convergence of its total length and of its edge length proportions. Let us consider a first urn model, cf. [11], where at level n the urn contains a black ball for each leaf in a subtree that is directly connected to a branch point of R(T n , [k]), and a white ball for each leaf in one of the remaining subtrees connected to the edges of R(T n , [k]). We will partition the white balls further. Extending the notions of spine, spinal subtrees and spinal bushes from Proposition 10 (the case k = 1), we call, for k ≥ 2, the tree S(T n , [k]) spanned by the root and the leaves [k], including the degree-2 vertices, the skeleton; for each such degree-2 vertex v ∈ S(T n , [k]), we consider the skeletal subtrees S sk vj , which we join together into a skeletal bush S sk v . Note that the total length L (n) k of the skeleton S(T n , [k]) increases by 1 whenever leaf n + 1 in T n+1 is added to any of the edges of S(T n , [k]); also, L (n) k equals the number of skeletal bushes (denoted by K n ) plus the original total length k + ℓ of T k . Hence, as n → ∞ The partition of leaves (associated with white balls), in which each skeletal bush gives rise to a block, follows the dynamics of a Chinese Restaurant Process with (γ, w)-seating plan: given that the number of white balls in the first urn is m and that there are K m := K n skeletal bushes on the edges of S(T n , [k]) with n i leaves on the ith bush, the next leaf associated with a white ball is inserted into any particular bush with n i leaves with probability proportional to n i − γ, and creates a new bush with probability proportional to w + K m γ. Hence, the EPPF of this partition of the white balls is Applying Lemma 8 in connection with (20), we get the probability density of L k /W γ k as specified. Finally, we set up another urn model that is updated whenever a new skeletal
bush is created. This model records the edge lengths of R(T n , [k]). The alpha-gamma growth rules assign weight 1 − α + (n i − 1)γ to leaf edges of R(T n , [k]) and weight n i γ to other edges of length n i , and each new skeletal bush makes one of these weights increase by γ. Hence, the conditional probability that the edge lengths are (n 1 , . . ., n k+ℓ ) at stage n is Then D (n) k converges a.s. to the Dirichlet limit as specified. Moreover, L (n) k D (n) k → L k D k a.s., and it is easily seen that this implies convergence in the Gromov-Hausdorff sense.
The above argument in fact gives the conditional distribution of L k /W γ k given T k and W k , which does not depend on W k . Similarly, the conditional distribution of D k given T k , W k and L k does not depend on W k and L k . Hence, the conditional independence of W k , L k /W γ k and D k given T k follows.
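The first urn model in the proof can be illustrated by the classical Pólya urn. The sketch below is an assumption-laden toy (unit reinforcement, rather than the activities used in the actual proof); it only demonstrates the almost sure convergence of the white-ball proportion W n /n on which arguments of this kind rely.

```python
import random

def polya_urn(b0, w0, steps, rng=None):
    """Return the white-ball proportion after `steps` reinforced draws.

    Classical Polya urn: draw a ball with probability proportional to its
    colour's current mass, then return it together with one extra ball of
    the same colour.
    """
    rng = rng or random.Random(0)
    b, w = float(b0), float(w0)
    for _ in range(steps):
        if rng.uniform(0, b + w) < w:
            w += 1.0                 # drew white: add one white ball
        else:
            b += 1.0                 # drew black: add one black ball
    return w / (b + w)
```

Averaging the terminal proportion over many runs recovers the mean of the Beta limit, w0/(b0 + w0) in this classical case.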

Further limiting results
Alpha-gamma trees carry not only edge weights but also vertex weights, and the latter are in correspondence with the vertex degrees. We can obtain a result on the limiting ratio between the degree of each vertex and the total number of leaves.

Proof. Recall the first urn model in the preceding proof, which assigns colour black to leaves in subtrees attached to branch points of T k . We will partition the black balls further. The partition of leaves (associated with black balls), in which each subtree S sk vj of a branch point v ∈ R(T n , [k]) gives rise to a block, follows the dynamics of a Chinese Restaurant Process with (α, w)-seating plan. Hence, C tot k (n)/W̄ α n,k → M k a.s., where C tot k (n) is the sum of degrees in T n of the branch points of T k , and W̄ n,k = n − k − W n,k is the total number of leaves of T n that are in subtrees directly connected to the branch points of T k .
Similarly to the discussion of edge length proportions, we now see that the sequence of degree proportions converges a.s. to the Dirichlet limit as specified, since 1 − W k is the a.s. limiting proportion of leaves in subtrees connected to the vertices of T k .
Given an alpha-gamma tree T n , if we decompose along the spine that connects the root to leaf 1, the leaf numbers of the subtrees connected to the spine form a Chinese restaurant partition of {2, . . ., n} with parameters (α, 1 − α). Applying Lemma 7, we obtain the following result.

Proposition 23. Let (T n , n ≥ 1) be alpha-gamma trees. Denote by (P 1 , P 2 , . . .) the limiting frequencies of the leaf numbers of the subtrees of the spine connecting the root to leaf 1, in order of appearance. These can be represented as where the W i are independent and W i has beta(1 − α, 1 + (i − 1)α) distribution. Observe that this result does not depend on γ. This observation also follows from Proposition 6, because colouring (iv) col and crushing (cr) do not affect the partition of leaf labels according to subtrees of the spine.
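The representation in Proposition 23 is the standard order-of-appearance stick-breaking scheme, and it can be sketched numerically. The helper below (a hypothetical name, plain Monte Carlo, not the authors' code) generates frequencies P i = W i (1 − W 1 ) · · · (1 − W i−1 ) with independent W i having beta(1 − α, 1 + (i − 1)α) distribution, as in the proposition.

```python
import random

def spine_frequencies(alpha, n_terms, rng=None):
    """First n_terms frequencies P_i = W_i * prod_{j<i} (1 - W_j)."""
    rng = rng or random.Random(0)
    freqs, stick = [], 1.0
    for i in range(1, n_terms + 1):
        # W_i ~ beta(1 - alpha, 1 + (i - 1) * alpha), independent over i
        w = rng.betavariate(1 - alpha, 1 + (i - 1) * alpha)
        freqs.append(w * stick)      # P_i is a W_i-fraction of the remaining stick
        stick *= 1.0 - w
    return freqs
```

For instance, E P 1 = E W 1 = (1 − α)/(2 − α), which a Monte Carlo average over many runs reproduces; note that, as the proposition states, γ does not enter anywhere.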

(iii) to create T n+1 from T n , replace the edge a n → c n by three edges a n → b n , b n → c n and b n → n + 1, so that two new edges connect the two vertices a n and c n to a new branch point b n and a further edge connects b n to a new leaf labelled n + 1.

Figure 1: Sequential growth rule: displayed is one branch point of T n with degree k + 1, hence vertex weight (k − 1)α − γ, with k − r leaves L r+1 , . . ., L k ∈ [n] and r bigger subtrees S 1 , . . ., S r attached to it; all edges also carry weights, with weight 1 − α and weight γ displayed here for one leaf edge and one inner edge only; the three associated possibilities for T n+1 are displayed.