Leaves on the line and in the plane

The Dead Leaves Model (DLM) provides a random tessellation of $d$-space, representing the visible portions of fallen leaves on the ground when $d=2$. For $d=1$, we establish formulae for the intensity, two-point correlations, and asymptotic covariances for the point process of cell boundaries, along with a functional CLT. For $d=2$ we establish analogous results for the random surface measure of cell boundaries, and also determine the intensity of cells in a more general setting than in earlier work of Cowan and Tsang. We introduce a general notion of Dead Leaves Random Measures and give formulae for means, asymptotic variances and functional CLTs for these measures; this has applications to various other quantities associated with the DLM.


Overview
The dead leaves model (or DLM for short) in d-dimensional space (d ∈ N), due originally to Matheron [21], is defined as follows [4,34]. Leaves fall at random onto the ground and the visible parts of leaves on the ground (i.e., those parts that are not covered by later-arriving leaves) tessellate R^d (typically with d = 2; see Figure 1). Motivation for studying the DLM from the modelling of natural images, and from materials science, is discussed in [4], [15], and [8], for example. The DLM provides a natural way of generating a stationary random tessellation of the plane with non-convex cells having possibly curved boundaries.
To define the model more formally, let Q be a probability measure on the space C of compact sets in R^d (equipped with the σ-algebra generated by the Fell topology, as defined in Section 1.3 below), assigning strictly positive measure to the collection of sets in C having non-empty interior. A collection of 'leaves' arrives as an independently marked homogeneous Poisson point process P = Σ_{i≥1} δ_{(x_i, t_i)} in R^d × R of unit intensity, with marks (S_i)_{i≥1} taking values in C with common mark distribution Q. Each point (x_i, t_i, S_i) of this marked point process is said to have arrival time t_i, and the associated leaf covers the region S_i + x_i ⊂ R^d from that time onwards (where S + x := {y + x : y ∈ S}). At a specified time, say time 0, and at spatial location x ∈ R^d, the most recent leaf to arrive before time 0 that covers location x is said to be visible (or exposed) at x. For each i ∈ N, the visible portion of leaf i at time 0 is the set of sites x ∈ R^d such that leaf i is visible at x at time 0. The connected components of visible portions of leaves (at time 0) form a tessellation of R^d, which we call the DLM tessellation.

Figure 1: A realization of the DLM tessellation, restricted to a window, where all the leaves are unit disks. The numbers indicate the reverse order of arrival of the leaves visible within the window. In this paper we view the two visible components of leaf 5 as being separate components of the DLM tessellation.

Properties of the DLM itself are discussed in [4,6,15,34], while percolation on the DLM tessellation has been considered in [2,24]. In some of these works the authors call the DLM the 'confetti' model. Note that in the present paper, all cells of our DLM tessellation are connected; the tessellation where cells are taken to be the visible portions of leaves (rather than their connected components) is also of interest, and is considered in some of the works just mentioned.
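As a quick illustration (ours, not code from the paper), the following sketch rasterizes the construction in its time-reversed form, discussed later in this overview: leaves (all unit disks, as in Figure 1) are dropped one by one in time order, and each pixel is labelled by the first leaf to cover it. The window size, pixel size, and the slightly enlarged dropping region are our arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
W, h, r = 10.0, 0.05, 1.0          # window side, pixel size, disk radius (our choices)
xs = np.arange(h / 2, W, h)
gx, gy = np.meshgrid(xs, xs)
label = np.full(gx.shape, -1)      # -1 marks pixels not yet covered

# Time-reversed DLM: a pixel's visible leaf is the first leaf to cover it.
# Disk centres are dropped uniformly on an enlarged region so that leaves
# centred just outside the window can still cover boundary pixels.
i = 0
while (label < 0).any():
    cx, cy = rng.uniform(-r, W + r, size=2)
    mask = (label < 0) & ((gx - cx) ** 2 + (gy - cy) ** 2 <= r * r)
    label[mask] = i
    i += 1

print(len(np.unique(label)))       # number of leaves visible within the window
```

The distinct labels correspond to visible leaves; connected runs of equal labels approximate the cells of the DLM tessellation within the window.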
In this paper we consider the DLM for d = 1 and for d = 2. For d = 1, we develop the second order theory for the point process of cell boundaries. That is, we determine its second factorial moment measure, two-point correlation functions, asymptotic variance and a spatial central limit theorem (CLT). Moreover, we can and do consider the point process to be evolving as leaves continue to rain down; we establish a functional CLT showing that the (evolving) number of cells in a large window approximates to an Ornstein-Uhlenbeck process. For d = 2 we carry out a similar programme (asymptotic variance and functional CLT) for the surface measure of cell boundaries within a large window. We state our results for d = 1 in Section 2, and for d = 2 in Section 3.
For general d, we also develop (in Section 4) an extension of the DLM which we call the dead leaves random measure (DLRM). Suppose now that each point (x_i, t_i) of P is marked with not only a random closed set S_i as before, but also a random measure M_i, for example the surface measure of ∂S_i. The DLRM at time t is the sum, over those i with t_i ≤ t, of the measures M_i + x_i := M_i(· − x_i), restricted to the complement of leaves arriving between times t_i and t. We give results on its intensity, limiting covariances and functional CLTs. This provides a general framework from which we may deduce the results already mentioned as special cases, and is also applicable to other DLM functionals and to variants of the DLM including the colour DLM and the dead leaves random function, as discussed in Section 4.
As well as the new results already mentioned, we provide some extensions to known first-order results, giving the intensity of cell boundaries in d = 1, and the intensity of cells in d = 2. These were already in the literature in the special cases when all of the leaves are connected (for d = 1; see [21]), and when they all have the same shape (for d = 2; see [6]). Finally, for d = 1 we discuss the distribution of cell sizes; essentially this was given in [21] but we give a bit more detail here.
Exact formulae for second moment measures and for two-point correlation functions are rarely available for non-trivial point processes in Euclidean space, and one contribution of this work is to provide such formulae in one class of such models. Our CLTs could be useful for providing confidence intervals for parameter estimation in the DLM and related models. Our general functional CLT shows that the DLRMs are a class of off-lattice interacting particle systems for which the limiting process of fluctuations can be identified explicitly as an Ornstein-Uhlenbeck process. In earlier works [28,30], functional CLTs were obtained for certain general classes of particle systems but without any characterisation of the limit process.
The proofs of some of our results for d = 2 use the following facts. For rectifiable curves γ and γ′ in R^2 of respective lengths |γ| and |γ′|, let N(γ, γ′) (respectively, Ñ(γ, γ′)) denote the number of times they cross each other (respectively, touch each other). If one integrates N(γ, T(γ′)) (resp. Ñ(γ, T(γ′))) over all rigid motions T of the plane, one obtains the value 4|γ| × |γ′| (resp. zero). We discuss these results, which are related to the classic Buffon's needle problem, in Section 5.
Since we prefer to work with positive rather than negative times, in this paper we often consider a time-reversed version of the DLM where, for each site x ∈ R d , the first leaf to arrive at x after time 0 is taken to be visible at x. Imagine leaves falling onto a glass plate which can be observed from below, starting from time 0. Clearly this gives a tessellation with the same distribution as the original DLM. This observation dates back at least to [12]. In [17], it is the basis for a perfect simulation algorithm for the DLM. The time-reversed DLM is illustrated for d = 1 in Figure 2.
We shall state our results in Sections 2-5, and prove them in Sections 6-8.

Motivation
We now discuss further the motivation for considering the DLM. Since we consider the case d = 1 in some detail, we discuss the motivation for this at some length. The phrase 'leaves on the line' entered British folklore in the early 1990s as a corporate justification for delays on the railways. To quote Wikipedia, this phrase is 'a standing joke ... seen by members of the public who do not understand the problem as an excuse for poor service.' This paper is a mathematical contribution to said public understanding.
A one-dimensional DLM is obtained whenever one takes the restriction of a higher-dimensional DLM (in R^d, say, with d > 1) to a specified one-dimensional subspace of R^d. Such restrictions are considered in [34] and [4].
Moreover, the one-dimensional DLM is quite natural in its own right. For example, to quote [34], 'Standing at the beginning of a forest, one sees only the first few trees, the others being hidden behind.' A less pleasant interpretation is that if an explosion takes place in a crowded spot, one might be interested in the number of people directly exposed to the blast (rather than shielded by others). In these two interpretations, the 'time' dimension in fact represents a second spatial dimension.
In another interpretation of the one-dimensional time-reversed DLM, consider a rolling news television or radio station. Suppose news stories arise as a homogeneous Poisson process in the product space R × R_+, where the first coordinate represents the time at which the story starts, and the second coordinate represents its 'newsworthiness' (a lower score representing a more newsworthy story), and each story is active for a random duration. Suppose at any given time the news station presents the most newsworthy currently active story. Then the stories presented form a sequence of intervals, each story presented continuing until it finishes or is superseded by a more newsworthy story. The continuum time series of stories presented forms a DLM in time rather than in space, with 'newsworthiness' taking the role of 'time' in the original DLM. One can imagine a similar situation with, for example, the time series of top-ranked tennis or golf players.
In these interpretations, we are taking the trajectory of a news story's newsworthiness, or a tennis player's standard of play, to be flat but of possibly random duration. It would be interesting in future work to allow for other shapes of trajectory. If the trajectory is taken to be a fixed wedge-shape, then the sequence of top-ranked stories/players is the sequence of maximal points (actually minimal points in our formulation), which has been considered, for example, in [18,37].
The two-dimensional DLM has received considerable attention in applications; see [4] and references therein. For any two-dimensional image of a three-dimensional particulate material with opaque particles, the closest particles obscure those lying behind, and the DLM models this phenomenon. See for example [15,14] for applications to analysis of images of powders. Jeulin [13,15] extends the DLM to a dead leaves random function model for further flexibility in modelling greyscale images, and some of our results are applicable to this model. See Section 4.
Another reason to study the DLM, in arbitrary dimensions, is as an analogue to the car parking model of random sequential adsorption. In the one-dimensional and infinite-space version of the latter model, unit intervals ('cars') arrive at locations in space-time given by the points of a homogeneous Poisson process in R × R_+. Each car is accepted if the position (a unit interval) where it arrives does not intersect any previously accepted cars. Ultimately, at time infinity one ends up with a random maximal (i.e., saturated) packing of R by unit intervals. The higher-dimensional version of the car parking model has also been studied, for example in [9] and references therein.
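The car parking dynamics just described are easy to simulate. The following sketch (ours; the window length and attempt budget are arbitrary choices) fills a long interval by greedy acceptance and approaches Rényi's jamming density ≈ 0.7476 as saturation is reached.

```python
import bisect, random

random.seed(2)
L, attempts = 1000.0, 200000   # window length and number of arrival attempts (our choices)

# Random sequential adsorption: unit cars arrive at uniform positions and are
# kept only if they do not overlap any previously accepted car.
lefts = []                     # sorted left endpoints of accepted cars
for _ in range(attempts):
    x = random.uniform(0.0, L - 1.0)
    i = bisect.bisect_left(lefts, x)
    if i > 0 and lefts[i - 1] > x - 1.0:
        continue               # overlaps the car on the left
    if i < len(lefts) and lefts[i] < x + 1.0:
        continue               # overlaps the car on the right
    lefts.insert(i, x)

print(len(lefts) / L)          # approaches Renyi's jamming density ~0.7476
```

With 200 attempts per unit length the packing is very close to, but slightly below, the jamming density, since saturation is only reached in the limit of infinitely many attempts.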
The problem of covering can be viewed as in some sense dual to that of packing (see for example [31,39]), and in this sense the (time-reversed) DLM is dual to the car parking model; in each case, objects of finite but positive size (cars/leaves) arrive sequentially at random in d-space, and are accepted in a greedy manner subject to a hard-core constraint (for the packing) or a visibility constraint (for the DLM).

Notation and terminology
Let B^d denote the Borel σ-algebra on R^d. For k ∈ {0, 1, . . . , d}, let H^k denote the k-dimensional Hausdorff measure of sets in R^d. This is a measure on (R^d, B^d). In particular, H^0 is the counting measure and H^d is Lebesgue measure. See [19].
Let ‖·‖ denote the Euclidean norm in R^d. Given x ∈ R^d, let δ_x denote the Dirac measure at x, i.e. δ_x(A) = 1 if x ∈ A, and otherwise δ_x(A) = 0. For r > 0, let B(r) := {y ∈ R^d : ‖y‖ ≤ r}, the closed Euclidean ball of radius r centred on the origin. Set π_d := H^d(B(1)), the Lebesgue measure of the unit ball in d dimensions.
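For concreteness, π_d can be computed from the standard closed form π^{d/2}/Γ(d/2 + 1); this formula is classical and is not stated in the text.

```python
import math

def unit_ball_volume(d: int) -> float:
    """Lebesgue measure pi_d of the unit ball in R^d, via the classical
    closed form pi^{d/2} / Gamma(d/2 + 1)."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

# volumes for d = 1, 2, 3: an interval of length 2, a disc of area pi,
# and a ball of volume 4*pi/3
print(unit_ball_volume(1), unit_ball_volume(2), unit_ball_volume(3))
```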
We say a set γ ⊂ R^2 is a rectifiable curve if there exists a continuous injective function Γ : [0, 1] → R^2 such that γ = Γ([0, 1]) and H^1(γ) < ∞. If moreover there exist k ∈ N ∪ {0} and numbers 0 = x_0 < x_1 < · · · < x_{k+1} = 1 such that for 1 ≤ i ≤ k + 1 the restriction of Γ to [x_{i−1}, x_i] is continuously differentiable with derivative that is nowhere zero, we say that γ is a piecewise C^1 curve. We then refer to the points Γ(x_1), . . . , Γ(x_k) (where k is assumed to be taken as small as possible) as the corners of γ. If we can take k = 0 (so there are no corners), then we say γ is a C^1 curve. We say that Γ(0) and Γ(1) are the endpoints of γ. We define a rectifiable Jordan curve (respectively, a piecewise C^1 Jordan curve) similarly to a rectifiable curve (respectively, piecewise C^1 curve) except that now Γ must satisfy Γ(1) = Γ(0) but be otherwise injective.
For σ ≥ 0, let N (0, σ 2 ) denote a normally distributed random variable having mean zero and variance σ 2 if σ > 0, and denote a random variable taking the value 0 almost surely if σ = 0.
We now review some concepts from the theory of point processes and random measures that we shall be using. See for example [19] or [33] for more details.
Let M be the space of locally finite measures on (R^d, B^d), equipped with the smallest σ-algebra that makes measurable all of the functions from M to R of the form µ → µ(A), with A ∈ B^d. For µ ∈ M we shall often write |µ| for µ(R^d). A random measure on R^d is a measurable function from an underlying probability space to M, or equivalently, a measurable kernel from the probability space to (R^d, B^d). A random measure on R^d is said to be stationary if its distribution is shift invariant, in which case the expected value of the measure it assigns to a Borel set B ⊂ R^d is proportional to the Lebesgue measure of B; the constant of proportionality is called the intensity of the random measure.
A random measure on R^d taking integer values is called a point process on R^d, and the notions of intensity and stationarity for random measures carry through to point processes. A point process is said to be simple if it has no multiple points.
The second factorial moment measure of a point process η in R^d is a Borel measure α_2 on R^d × R^d, defined at [19, eqn (4.22)]. For disjoint Borel sets A, B ⊂ R^d it satisfies α_2(A × B) = E[η(A)η(B)]. If η is simple then for x, y ∈ R^d with x ≠ y, loosely speaking, α_2(d(x, y)) is the probability of seeing a point of η in dx and another one in dy. If α_2(d(x, y)) = ρ(y − x)dxdy for some Borel function ρ : R^d → R_+, then the pair correlation function ρ_2(·) of the point process η is defined by ρ_2(z) := ρ(z)/γ², where γ is the intensity of η.
The Fell topology on C is the topology generated by all sets of the form {F ∈ C : F ∩ G ≠ ∅} with G ⊂ R^d open, together with all sets of the form {F ∈ C : F ∩ K = ∅} with K ⊂ R^d compact. A random closed set in R^d is a measurable map from a probability space to the space of closed sets in R^d, equipped with the σ-algebra generated by the Fell topology; see [19].

We now elaborate on the definition of the DLM given already. Let Q be a probability measure on C, which we call the grain distribution of the model. Assume Q assigns strictly positive measure to the collection of sets in C having non-empty interior. Let P be a homogeneous Poisson process in R^d × R of unit intensity, written P = Σ_{i≥1} δ_{(x_i, t_i)}; this can be done: see [19, Corollary 6.5]. Independently of P, let (S_i)_{i≥1} be a sequence of independent random elements of C with common distribution Q. By the Marking theorem (see [19, Theorem 5.6]), the resulting marked point process is again a Poisson process.

For A ⊂ R^d, let A̅ denote its closure and A^o its interior; let ∂A := A̅ \ A^o, the topological boundary of A. The boundary of the DLM tessellation at time t is denoted by Φ_t (see (1.1)), and the boundary of the time-reversed DLM tessellation is denoted by Φ (see (1.2)). The cells of our (time-reversed) DLM tessellation are then defined to be the closures of the connected components of R^d \ Φ. Clearly Φ_t has the same distribution as Φ for all t.

Throughout, we let S denote a random element of C with distribution Q; that is, a measurable function from an underlying probability space (denoted (Ω, F, P)) to C. Set λ := E[H^d(S)] and R := sup{‖y‖ : y ∈ S}, taking R = 0 if S is empty; for x ∈ R^d we set λ_x := E[H^d(S ∪ (S + x))]. Observe that 2λ − λ_x equals the expected value of H^d(S ∩ (S + x)), sometimes called the covariogram of S. The function λ_x will feature in certain formulae for limiting covariances below. It also features in certain formulae for limiting variances arising from the Boolean model (see for example [10]). The Boolean model, that is the random set ∪_{i : 0 ≤ t_i ≤ λ}(S_i + x_i) for some fixed λ, is another fundamental model in stochastic geometry.

Some of our results require the following measurability condition.
For d = 1, we shall show in Lemma 6.2 that Condition 1.1 can actually be deduced from our earlier assumption that S is a random element of C, that is, a measurable map from a probability space to C. However, we do not know whether this is also the case for d ≥ 2.
Given d, for n > 0 let W_n := [0, n^{1/d}]^d, a cube of volume n in d-space. Let R_0 denote the class of bounded measurable real-valued functions on R^d that are Lebesgue-almost everywhere continuous and have compact support (the R stands for 'Riemann integrable'). For f ∈ R_0 and n > 0, define the rescaled function T_n f by T_n f(x) := f(n^{−1/d}x), x ∈ R^d.

Leaves on the line
In this section we take d = 1 and state our results for the 1-dimensional DLM. We shall prove them in Section 7.
We define a visible interval to be a cell of the DLM tessellation. In the special case where all of the leaves are single intervals of fixed length, a visible interval is simply the visible part of a leaf, because this visible part cannot be disconnected.
The endpoints of the visible intervals form a stationary point process in R. In terms of earlier notation this point process is simply the random set Φ (if viewed as a random subset of R) or the measure H 0 (Φ ∩ ·) (if viewed as a random point measure). We denote this point process (viewed as a random measure) by η, as illustrated in Figure 2. The point process η is simple, even if some of the leaves include constituent intervals of length zero (recall that we assume Q is such that some of the leaves have non-empty interior).
Our main results for d = 1 concern the second factorial moment measure and the pair correlation function of η (Theorem 2.2), asymptotic covariances and a CLT for the total number of points of η in a large interval (Theorems 2.3, 2.4, and 2.5), and a functional CLT for this quantity as the DLM evolves in time (Theorem 2.6).
We shall also give formulae for the intensity of η, the distribution of the length of the visible interval containing the origin and the length of a typical visible interval (Propositions 2.1, 2.7 and 2.8). These propositions are to some extent already known, as we shall discuss.
Recall that λ and R are defined by (1.3) and (1.4), respectively.

Remarks. In the special case where all the leaves are intervals of strictly positive length, the intensity of η simplifies to just 2/λ. This special case of Proposition 2.1 is already documented; see [21, page 4], or [34, (XIII.41)].
Our more general statement (and proof) of Proposition 2.1 allows for disconnected leaves. If H^0(∂S) is finite, then S consists of finitely many disjoint intervals, and H^0(∂S) equals twice the number of constituent intervals of S, minus the number of these intervals having length zero. It is quite natural to allow a leaf to have several components; for example if the one-dimensional DLM is obtained as the restriction of a higher-dimensional DLM (in R^d, say, for some d > 1) to a specified one-dimensional subspace of R^d. If the leaves in the parent DLM in R^d are not restricted to be convex, but have 'nice' boundaries (for example the polygonal boundaries considered in [6]), then the one-dimensional DLM induced in this manner will typically include leaves with more than one component. The proof given here, based on a general result for DLRMs, and ultimately on the Mecke formula from the theory of Poisson processes [19], is quite simple and may be new.
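The 2/λ intensity can be checked by simulation. The following minimal sketch (ours, not from the paper) runs the time-reversed one-dimensional DLM on a fine grid, with all leaves unit intervals (so λ = 1), and counts cell boundaries per unit length. The window length and grid step are arbitrary choices, and cells shorter than the grid step are missed, so the estimate is slightly biased downwards.

```python
import numpy as np

rng = np.random.default_rng(0)
L, ell, h = 200.0, 1.0, 0.005       # window length, leaf length, grid step (our choices)
grid = np.arange(0.0, L, h)
label = np.full(grid.size, -1)      # -1 marks sites not yet covered

# Time-reversed 1-d DLM: leaves arrive in time order with iid uniform
# positions, and a site's visible leaf is the first one to cover it.
i = 0
while (label < 0).any():
    x = rng.uniform(-ell, L)                       # left endpoint of leaf i
    lo = np.searchsorted(grid, x, side='left')
    hi = np.searchsorted(grid, x + ell, side='right')
    seg = label[lo:hi]
    seg[seg < 0] = i                               # visible only where nothing landed yet
    i += 1

boundaries = np.count_nonzero(np.diff(label))      # cell boundaries in (0, L)
print(boundaries / L)                              # ≈ 2 / ell = 2, as in Proposition 2.1
```

Since all leaves here have equal length, the visible part of each leaf is an interval, so counting label changes along the grid counts exactly the boundary points of the visible intervals.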
In the next two results we use the following notation. Let ν denote the distribution of the Lebesgue measure of a leaf under the measure Q, with cumulative distribution function F.

Theorem 2.2 (Second moment measure of η). Suppose that Q is concentrated on connected sets (i.e., on intervals), and that F(0) = 0 and λ < ∞. Then the second factorial moment measure α_2 of the point process η is given, for x < y, by an explicit formula. If ν has a probability density function f, then the pair correlation function ρ_2 of η is likewise given explicitly; in the particular case where ν = δ_λ for some λ > 0, it takes a closed form for x < y.

We now give some limit theorems for η([0, n]) as n → ∞. In these results, n does not need to be integer-valued.
In (2.4) the last integrand on the right-hand side admits a more explicit form.

Figure 3: Illustration of the evolving DLM tessellation in d = 1 with the point processes η_t and η_u shown. Here t < u.
By the case s = 1 of Theorem 2.5, the finite-dimensional distributions of the process (n^{−1/2}(η_t(n) − E[η_t(n)]), t ∈ R) converge to those of a Gaussian process (X_t)_{t∈R} with covariance function σ_1² exp(−λ|u − t|). This limiting process is a stationary Ornstein-Uhlenbeck process; see [16, page 358]. That is, it is the solution to the stochastic differential equation dX_t = −λX_t dt + σ_1 √(2λ) dB_t, where (B_t) is a standard Brownian motion. Under a stronger moment condition, we can improve this finite-dimensional convergence to a functional CLT; that is, to convergence in the space D(−∞, ∞) of right-continuous functions on R with left limits. We give this space the Skorohod topology, as described in [3] and extended to noncompact time intervals in [36].
Theorem 2.6 (Functional CLT for η_t(n)). Suppose there exist ε, K ∈ (0, ∞) such that E[(H^0(∂S))^{4+ε}] < ∞ and P[R ≤ K] = 1. Then as n → ∞, the stochastic process (n^{−1/2}(η_t(n) − E[η_t(n)]), t ∈ R) converges in distribution, in D(−∞, ∞), to the stationary Ornstein-Uhlenbeck Gaussian process with covariance function σ_1² exp(−λ|u − t|), t, u ∈ R.

The limiting random field in Theorem 2.5 is an Ornstein-Uhlenbeck process in Wiener space. See for example [26] for a definition, or [22] for a more detailed discussion of this infinite-dimensional Ornstein-Uhlenbeck process. It would be interesting, in future work, to try to extend the finite-dimensional convergence of Theorem 2.5 to convergence in an appropriate two-parameter function space. This is beyond the scope of the methods used here, since our proof is based ultimately on the classical approach of [3] for showing convergence of a sequence of processes with a single time parameter, namely finite-dimensional convergence plus tightness via moment bounds.
It would also be of interest to extend these CLTs to cases where the leaves are intervals of unbounded length but satisfy a moment condition.
We conclude this section with results on the length of the interval of the DLM tessellation containing the origin, and the length of a 'typical interval'. These follow from results in [21], as we discuss in Section 7. For an alternative proof, see the earlier version [29] of the present paper.
Proposition 2.7 (Exposed interval length distribution). Assume that Q is concentrated on intervals of strictly positive length, and that λ < ∞. Let ν denote the distribution of the length of a leaf under the measure Q, and X the length of the visible interval covering the origin. The distribution of X is then given by an explicit formula, which simplifies in the special case where the measure ν is the Dirac measure δ_λ for some λ > 0.

If ν is a Dirac measure, then X is the length of the visible part of the leaf visible at the origin. In general, X counts the length only of the connected component of the visible part of this leaf that includes 0, ignoring any other components.
A typical visible interval, loosely speaking, is obtained by fixing a very large region of R, and choosing at random one of the inter-point intervals of η that lie in that region. The distribution of the length Y of a typical visible interval is the inverse-size-biased distribution of X (see [19, Proposition 9.7]).
If all the leaves are intervals of length λ, i.e. ν = δ_λ for some λ > 0, then Y has a mixed distribution, with an atom at λ corresponding to fully visible leaves.

Leaves in the plane
In this section we take d = 2, and state our results for the two-dimensional DLM. We shall prove them in Section 8. We shall say that our grain distribution Q has the rectifiable Jordan property if it is concentrated on nonempty regular compact sets having a rectifiable Jordan curve as their boundary. Here, we say a compact set in R^2 is regular if it is the closure of its interior. We say Q has the piecewise C^1 Jordan property if it is concentrated on sets having a piecewise C^1 Jordan curve as their boundary.
Recalling the definitions (1.1) and (1.2), define the measures φ := H^1(Φ ∩ ·), the restriction of the one-dimensional Hausdorff measure to the boundaries of the DLM tessellation, and φ_t := H^1(Φ_t ∩ ·). Then φ is a stationary random measure; its intensity is given in Theorem 3.1.

As mentioned earlier, the cells of the (time-reversed) DLM tessellation are the closures of the connected components of the set R^2 \ Φ. We now define Ξ to be the set of points in R^2 which lie in three or more cells of this tessellation. Later we shall view Φ as a planar graph with the points of Ξ as the nodes, which we call branch points, in this graph. We define the measure χ := H^0(Ξ ∩ ·).
For A ⊂ R^2 and θ ∈ (−π, π], let ρ_θ(A) denote the image of A under an anticlockwise rotation through an angle θ about the origin (elsewhere we are using ρ_2 to denote the pair correlation function, but this clash of notation should not be confusing).

Theorem 3.2 (Intensity of branch points). Assume either that Q has the piecewise C^1 Jordan property, or that Q has the rectifiable Jordan property and is rotation invariant. Assume also that λ < ∞ and E[R²] < ∞. Then χ is a stationary point process, and we give an explicit formula for its intensity, which we denote β_3.

The next two results require Q to have a further property. We say Q has the non-containment property if for (Q ⊗ Q)-almost all pairs (σ, σ′) ∈ C × C, the set of x ∈ R^2 such that σ + x ⊂ σ′ is Lebesgue-null. One way to guarantee the non-containment property is to have Q be such that, under Q, all of the leaves have the same area.

Theorem 3.3 (Connectivity of Φ). Suppose Q has the rectifiable Jordan and non-containment properties, and that E[R²] < ∞. Then Φ is almost surely a connected set.
Let Ψ be the set of centroids of cells of the DLM tessellation, and define the measure ψ := H 0 (Ψ ∩ ·). While we would expect that ψ is a point process (i.e., that it is measurable), we have not proved this in general (unlike in the case of χ), so we leave this as an open problem and include the measurability as an assumption in the next result.
Theorem 3.4 (Intensity of cells). Suppose Q has the rectifiable Jordan and non-containment properties, and either has the piecewise C^1 Jordan property or is rotation invariant. Assume that β_3 < ∞ and E[R²] < ∞, and that ψ is a point process. Then ψ is a stationary point process, and its intensity, denoted β_1, is given by β_1 = β_3/2. In particular, if Q is rotation invariant, then β_1 is given by the explicit formula (3.3).

Remarks. Our formula for the intensity of φ in Theorem 3.1 agrees with that of [6, p. 57] but is considerably more general. In [6] it is assumed that Q is such that a random set S with distribution Q is a uniform random rotation of a fixed polygon S_0. In [6, Sec. 7] there is some discussion of generalising to the case where S_0 is non-polygonal, but it is still taken to be a fixed set. Similarly, equations (3.2) and (3.3) also generalize formulae in [6]. Theorem 3.3 is perhaps intuitively obvious, but a careful proof seems to require some effort. As well as being of interest in itself, the connectivity of Φ is required for the proof of Theorem 3.4.
The reason we require Q to have the Jordan and non-containment properties in Theorem 3.4 is that the proof relies on a topological argument based on the cell boundaries of the DLM tessellation forming a connected planar graph with all vertices of degree 3. The Jordan property (requiring all leaves to be connected with a Jordan curve boundary) could be relaxed to a requirement that every leaf has a finite (and uniformly bounded) number of components, each with a Jordan curve boundary; the key requirement here is to avoid having leaves which are one-dimensional sticks or have boundary shaped like a figure 8 or the letter b, for example, since then there would be vertices of degree other than 3. The non-containment condition is needed to ensure that the planar graph of boundaries is connected. If it fails (but the Jordan condition holds), then one may still deduce, in a more general version of Theorem 3.4, that β_3/2 is the density of faces minus the density of 'holes', where by a 'hole' we mean a bounded component of the union of cell boundaries.
Examples. If the leaves are all translates of a fixed convex set S_0 with 0 < H²(S_0) < ∞, then using Theorem 3.4 and (3.1) we obtain a formula for β_1 involving the reflected set Š_0 := {−x : x ∈ S_0}. Therefore, if moreover S_0 is symmetric (i.e. S_0 = Š_0; for example if S_0 is a fixed rectangle or circle centred on the origin), then β_1 = 4/H²(S_0).
On the other hand, if each leaf is a uniformly distributed random rotation of a unit square, then (3.3) gives us β 1 = 16/π.
Recall the definitions of R 0 , W n , T n and R from Section 1.3, and of φ, φ t from the start of this section.

where the limiting variances are given by (3.5) and (3.6), and the quantities v_1, v_2, v_3 are all finite. More generally, for t, u ∈ R and f, g ∈ R_0, the limiting covariance κ_2((f, t), (g, u)) is given by (3.7).

We now provide a central limit theorem for φ(W_n), under the assumption that the leaves are uniformly bounded, together with a moment condition on H^1(∂S).
Theorem 3.6 (CLT for the length of tessellation boundaries). Suppose there exist ε, K ∈ (0, ∞) such that E[(H^1(∂S))^{4+ε}] < ∞ and P[R ≤ K] = 1. Then n^{−1/2}(φ(W_n) − E[φ(W_n)]) converges in distribution to N(0, σ_2²), where σ_2² is as given in Theorem 3.5. More generally, the finite-dimensional distributions of the random field converge to those of a centred Gaussian random field with covariance function κ_2((f, t), (g, u)) given by (3.7).
Remarks. The limiting Gaussian process in the preceding theorem is a stationary Ornstein-Uhlenbeck process. Similar remarks to those made after Theorem 2.5, regarding possible extensions to the result above, apply here.
It should be possible to adapt the conditional variance argument of Avram and Bertsimas [1] to show that the proportionate variance of φ(W_n) is bounded away from zero, so that σ_2² is strictly positive.

Dead leaves random measures
In this section we present some results for the DLM in arbitrary dimension d ∈ N (which we shall prove in Section 6); these enable us to consider some of the results already stated in a unified framework, and also to indicate further results on dead-leaves-type models that can be derived similarly.
It is convenient here to consider a slightly more general setting than before. We augment our mark space (previously taken to be C) to now be the space C × M. Let Q′ denote a probability measure on C × M with first marginal Q. Assume now that each point (x_i, t_i) of our Poisson process P is independently marked with a pair (S_i, M_i), with common mark distribution Q′. For each i the measure M_i + x_i is added at time t_i but is then restricted to the complement of regions covered by later-arriving leaves S_j + x_j, as they arrive. Thus, at time t ∈ R we end up with a measure ξ_t, which we call the dead leaves random measure (DLRM) at time t. We also define the time-reversed DLRM ξ (at time zero), as at (4.2).

Here are some examples of how to specify a distribution Q′ that yields a resulting DLRM of interest. In these examples, to describe Q′ we let (S, M) denote a random element of C × M having the distribution Q′, and we describe the interpretation of the resulting DLRM. Often we take M to be supported by S, but this is not essential.

• Take M to be the restriction of the surface measure H^{d−1} to ∂S. For d = 1, this ξ is the same as the measure η considered earlier. For d = 2, this ξ is the same as the measure φ considered earlier.
• Take d = 2 and let M be the counting measure supported by the set of corners of S (counting measures are defined in e.g. [19]). Here we could be assuming that the shape S is almost surely polygonal, or more generally, that its boundary is almost surely a piecewise C^1 Jordan curve. We defined a 'corner' of such a curve in Section 1.3. The resulting measure ξ is the counting measure supported by the set of corners of the boundaries of the DLM tessellation, which has been considered in [6].
• Colour Dead Leaves Model (CDLM). Let each leaf have a 'colour' (either 1 or 0) and let M be Lebesgue measure restricted to S (if the colour is 1) or the zero measure (if the colour is 0). Then ξ is Lebesgue measure restricted to those visible leaves which are coloured 1. The CDLM was introduced by Jeulin in [12] (see also [15]), and is the basis of the percolation problems considered in [2,24].
• Dead Leaves Random Function (DLRF). Let M have a density given by a random function f : R^d → R_+ with support S (representing, for example, the level of 'greyscale' on the leaf S). Then ξ is a measure with density at each site x ∈ R^d given by the level of greyscale on the leaf visible at x. The DLRF has been proposed by Jeulin [15] for modelling microscopic images.
• Seeds and leaves model. Imagine that at each 'event' of our Poisson process the arriving object is either a random finite set of points (seeds) or a leaf. Thus, for the random Q′-distributed pair (S, M), either M is a finite sum of Dirac measures and S is the empty set, or M is the zero measure and S is a non-empty set in C (a leaf). The point process ξ_t will then represent the set of locations of seeds on the ground that are visible (i.e., not covered by leaves) at time t. It might be that these are the seeds which have potential to grow into new trees, or that they are the seeds which get eaten.
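The construction underlying all of these examples — each leaf's contribution restricted to the part not covered by later arrivals — is easy to sketch in dimension one, where leaves are intervals. The following is an illustrative sketch, not code from the paper; all names and the sample configuration are ours.

```python
# Sketch (not from the paper): visible portions of leaves in the 1-d DLM.
# Each leaf is an interval [a, b] with arrival time t; its visible portion
# is the leaf minus the union of all later-arriving leaves.

def subtract(intervals, cover):
    """Remove the interval `cover` = (c, d) from a list of intervals."""
    c, d = cover
    out = []
    for a, b in intervals:
        if b <= c or a >= d:          # no overlap with the cover
            out.append((a, b))
        else:
            if a < c:
                out.append((a, c))    # surviving piece to the left
            if d < b:
                out.append((d, b))    # surviving piece to the right
    return out

def visible_portions(leaves):
    """leaves: list of (t, a, b).  Returns the visible pieces of each leaf,
    processing from the latest arrival (fully visible) backwards in time."""
    covered = []                      # leaves already processed (later arrivals)
    visible = {}
    for t, a, b in sorted(leaves, key=lambda leaf: -leaf[0]):
        pieces = [(a, b)]
        for c in covered:
            pieces = subtract(pieces, c)
        visible[t] = pieces
        covered.append((a, b))
    return visible

# Three leaves; the latest arrival (t = -1) is fully visible.
leaves = [(-3, 0.0, 2.0), (-2, 1.0, 3.0), (-1, 2.5, 4.0)]
vis = visible_portions(leaves)
# vis[-1] == [(2.5, 4.0)], vis[-2] == [(1.0, 2.5)], vis[-3] == [(0.0, 1.0)]
```

A DLRM is then obtained by restricting each mark measure M_i + x_i to the corresponding visible pieces; for the surface-measure example, the boundary points of the visible pieces form the point process η.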
We now give some general results on the DLRM. In applying these results elsewhere in this paper, we concentrate on the first of the examples just listed; however, the general results could similarly be applied to the other examples. In the following results, (S, M) denotes a random element of C × M with distribution Q′, and we write |M| for M(R^d). We define λ, λ_x, R and W_n as in Section 1.3.
Then ξ defined at (4.2) is a stationary random measure and its intensity, denoted α, is given by

α = λ^{−1} E[|M|].    (4.3)

The quantities v_4, v_5, v_6 defined at (4.4), (4.5) and (4.6) are all finite. More generally, for t, u ∈ R and f, g ∈ R_0, the covariance formula (4.7) holds. Theorem 4.2 does not rule out the possibility that σ_0 could be zero. Our formula for σ_0^2 bears some resemblance to the formula for the asymptotic variances of certain measures associated with the Boolean model in [10, eqn (7.3)]. There is a certain similarity between the manner in which these measures are defined in [10] and the DLRMs considered here; however, there is no time parameter in the definition of the Boolean model.
Our proof of (4.9) provides a rate of convergence (in the Kolmogorov distance) to the normal in (4.9), and hence also in Theorems 2.5 and 3.6. Under a stronger moment condition, namely E[|M|^{3+ε}] < ∞, one can adapt the proof (which is based on the Chen–Stein method) to obtain a rate of convergence that is presumably optimal.
It would be of interest to derive a functional CLT for the DLRM started from the zero measure at time 0 (rather than from equilibrium, as we have taken here). It may be possible to do this using [28, Theorem 3.3]; the evolving DLRM fits into the general framework of the spatial birth, death, migration and displacement processes in [28, Section 4.1]. It is less clear whether results from [28] can be used directly in the present setting, where the DLM starts in equilibrium, although the argument used here is related to that in [28].
It would also be of interest to extend these CLTs to cases where there is no uniform bound r_0 on the range of the support of M and the value of R. We would expect that the uniform boundedness condition could be replaced by appropriate moment conditions, but we leave this for future work. There are several approaches to proving central limit theorems for Boolean models (see for example [10, 27, 11]) which allow for unbounded grains and might be adaptable to the dead leaves setting.
Our last result in this section confirms that the surface measure H^{d−1}(Φ ∩ ·) of the DLM can be obtained as a special case of the DLRM.

Buffon's noodle and Poincaré's formula
The classical Buffon's needle problem may be phrased as follows. If one throws a stick (straight, and of zero thickness) at random onto a wooden floor (so both its location and its orientation are random and uniform), then how often does one expect to see it cross the cracks between floorboards? The generalization to a possibly curved stick has been wittily christened Buffon's noodle. What we require here is a further variant, concerned with the expected number of crossings for two curved sticks thrown at random onto a carpeted floor. This has been referred to as Poincaré's formula [20], although in an earlier version of the present paper [29] we called it the two noodle formula.
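The formula in question can be checked numerically in the simplest case where both "noodles" are straight unit needles. The following Monte Carlo sketch is ours (the region, needle placement and sample size are arbitrary test choices, not quantities from the paper): a uniformly rotated needle with midpoint uniform on a region A crosses a fixed needle of length L an expected (2/π) · L · L′ / area(A) times.

```python
import math
import random

# Monte Carlo check (illustrative) of Poincare's formula for two straight
# unit needles: expected crossings = (2/pi) * L * L' / area(A).

def segments_cross(p1, p2, q1, q2):
    """True if the open segments p1-p2 and q1-q2 cross properly."""
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (orient(p1, p2, q1) * orient(p1, p2, q2) < 0 and
            orient(q1, q2, p1) * orient(q1, q2, p2) < 0)

random.seed(0)
side = 4.0                                  # A = [0, side]^2
fixed = ((1.5, 2.0), (2.5, 2.0))            # fixed unit needle, well inside A
n, hits = 200_000, 0
for _ in range(n):
    x, y = random.uniform(0, side), random.uniform(0, side)
    th = random.uniform(0, 2 * math.pi)
    dx, dy = 0.5 * math.cos(th), 0.5 * math.sin(th)
    if segments_cross(fixed[0], fixed[1], (x - dx, y - dy), (x + dx, y + dy)):
        hits += 1

estimate = hits / n
theory = (2 / math.pi) * 1.0 * 1.0 / side**2    # about 0.0398
```

Because the fixed needle sits well inside A, every translate that could produce a crossing lies in A, so here the formula holds exactly rather than only in the large-A limit.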
Lemma 5.1 (Poincaré's 'two noodle' formula). Let γ and γ′ be rectifiable curves in R^2. If γ′ is rotated uniformly at random, and translated by a random amount uniformly distributed over a large region A, then the expected number of times it intersects γ is equal to (2/π) times the product of the lengths of γ and γ′, divided by the area of A. Lemma 5.1 follows from [38, Theorem 1.5.1]. In the special case where γ and γ′ are piecewise C^1, an elementary proof of (5.1) may be found in [29].
Given rectifiable curves γ and γ′ in R^2, we say that γ and γ′ cross at a point x ∈ γ ∩ γ′ if x is not an endpoint of γ or γ′, and γ′ passes from one side of γ to the other at x, where the 'sides' of γ in a neighbourhood of x can be defined by extending γ to a Jordan curve and taking the two components of its complement. We say that γ and γ′ touch at x if x ∈ γ ∩ γ′ but γ and γ′ do not cross at x.
We say that γ and γ′ touch if there exists z ∈ R^2 such that they touch at z. As well as Lemma 5.1, in the proof of (3.2) and (3.3) we require the following.
We shall prove Lemmas 5.2 and 5.3 later in this section. The proof of Lemma 5.2 is very short, but relies heavily on results in [20, 38]. In the case where γ and γ′ are piecewise C^1, the conclusion of Lemma 5.2 can alternatively be derived from Lemma 5.3; we shall provide an elementary proof of the latter result.
As a slight digression, we also state the Buffon's noodle result mentioned above.
This result is well known, though not all of the proofs in the literature are complete. It can be deduced from Lemma 5.1, but we do not give the details here. For further discussion and a proof of Theorem 5.4 in the piecewise C 1 case, see [29].
In the rest of this section we prove Lemma 5.3. Given C^1 curves γ and γ′ in R^2, we shall say that they graze at a point z ∈ R^2 if z ∈ γ ∩ γ′, z is not one of the endpoints of γ or γ′, and γ, γ′ have a common tangent line at z (in [29] we used the term 'touch' for this notion). We say that γ and γ′ graze if they graze at z for some z ∈ R^2.
We say that a C^1 curve γ in R^2 is almost straight if all lines tangent to γ are at an angle of at most π/99 to each other. Observe that if γ is almost straight, then there exists θ ∈ [−π, π) such that ρ_θ(γ) is the graph of a C^1 function defined on an interval.
Lemma 5.5. Suppose γ and γ′ are C^1 curves in R^2, and γ is almost straight. Assume there exist an interval [a, b] and a function f ∈ C^1([a, b]) such that γ is the graph of f, as in (5.3). Then (5.4) holds.

Proof. Without loss of generality we may assume γ′ is also almost straight, since if not, we may break γ′ into finitely many almost straight pieces. We claim that we may also assume that the locus of γ′ takes a similar form to that of γ, namely that it is the graph of a C^1 function g : [a′, b′] → R for some a′ < b′. Indeed, if γ′ cannot be expressed in this form then it must have a vertical tangent line somewhere; but in this case, since both γ and γ′ are assumed almost straight and γ has at least one horizontal tangent line, it is impossible for any translate of γ′ to be tangent to γ. Since h ∈ C^1, the derivative h′ is uniformly continuous on [a, b] ∩ [a′, b′]. Therefore, given ε > 0, we can choose n large enough so that for all i ∈ I_n we have |h′(x)| ≤ ε for all x ∈ I_{n,i}. Hence, for all such i, by the mean value theorem, with H^1 denoting Lebesgue measure, we have H^1(h(I_{n,i})) ≤ ε H^1(I_{n,i}). Since ε is arbitrarily small, it follows that H^1(h(A)) = 0. Therefore by (5.6) we have (5.4), as required.
Proof of Lemma 5.3. We can and do assume without loss of generality that both γ and γ′ are C^1 (not just piecewise C^1), and moreover that they are almost straight. It is enough to prove the result for ρ_θ(γ) and ρ_θ(γ′) for some θ ∈ (−π, π] (rather than for the original γ, γ′), and therefore we can (and do) also assume there exist an interval [a, b] and a function f ∈ C^1([a, b]) such that (5.3) holds. Under these assumptions, applying (5.4), the set of z = (x, y) such that γ′ + z grazes γ is Lebesgue null.
Let γ_0, γ_1 denote the endpoints of γ, and γ′_0, γ′_1 the endpoints of γ′. If γ and γ′ + z touch but do not graze for some z ∈ R^2, then either γ_i ∈ γ′ + z or γ′_i + z ∈ γ for some i ∈ {0, 1}. Since γ and γ′ are rectifiable (and hence H^2-null), we have for i = 0, 1 that ∫_{R^2} 1_{γ′+z}(γ_i) dz = 0, and similarly ∫_{R^2} 1_γ(γ′_i + z) dz = 0. Hence the set of z ∈ R^2 such that γ and γ′ + z touch but do not graze is also Lebesgue null.

Lemma 6.1. Suppose E[H^d(S ⊕ B(1))] < ∞. Then for any K ∈ (0, ∞), with probability 1 only finitely many of the sets S_j + x_j with −K ≤ t_j ≤ K have non-empty intersection with B(K).
Proof. The number of sets S_j + x_j that intersect B(1) and have |t_j| ≤ K is Poisson distributed with mean 2K E[H^d(S ⊕ B(1))], which is finite by assumption. Hence, almost surely, (S_j + x_j) ∩ B(1) ≠ ∅ for only finitely many j with −K ≤ t_j ≤ K. Since we can cover B(K) with finitely many translates of B(1), the result follows.

Lemma 6.2. Suppose E[H^d(S ⊕ B(1))] < ∞. Then the time-reversed DLRM ξ is indeed a random measure, and so is the DLRM ξ_t for all t ∈ R.
Proof. We prove just the first assertion (the proof of the second is similar). It suffices to show, for arbitrary bounded Borel A ⊂ R^d, that ξ(A) is a random variable. By the definition (4.2), ξ(A) is a sum of contributions from the individual leaves, and it suffices to prove that each summand is a random variable. Choose r_1 such that A ⊂ B^o(r_1), where B^o(r_1) is the interior of the ball B(r_1). Fix i ∈ N. By Lemma 6.1, with probability 1 only finitely many of the sets S_j + x_j with 0 ≤ t_j < t_i have non-empty intersection with B^o(r_1). Therefore the set U of points of B^o(r_1) not covered by any such S_j + x_j is open.
Given n ∈ N, partition R^d into cubes of the form [0, 2^{−n})^d + 2^{−n}z with z ∈ Z^d. Let the cubes in the partition that are contained in B^o(r_1) be denoted Q_{n,1}, . . . , Q_{n,m_n}. Let U_n be the union of those cubes Q_{n,k}, 1 ≤ k ≤ m_n, with Q_{n,k} ⊂ U.
Since U is open we have U_n ↑ U, and so by monotone convergence the summand is the limit of the corresponding quantities for U_n; we claim that each of these is a random variable. For example, if Q_{n,k} = [0, 1)^d, then for each j the relevant condition on S_j + x_j determines an event, by the definition of the Fell topology and the fact that S_j + x_j is a random element of K by [33, Theorem 2.4.3], for example.
Lemma 6.3. For x ∈ R^d and t ≥ 0, let E_{x,t} be the event that the site x is exposed (i.e., not already covered) just before time t, in the time-reversed DLM. Then, with λ and λ_x defined at (1.3), for all x, y ∈ R^d, t ≥ 0 and u ∈ [t, ∞), we have

P[E_{x,t} ∩ E_{y,u}] = exp(−λ_{x−y} t − λ(u − t)).    (6.2)

In particular, P[E_{x,t}] = exp(−λt).
Proof. We first prove (6.2) in the case u = t. The number of i with 0 ≤ t_i < t and (S_i + x_i) ∩ {x, y} ≠ ∅ is Poisson distributed with mean t λ_{x−y}, and since λ_{x−y} = λ_{y−x} this gives us (6.2) for u = t. Taking y = x gives us also that P[E_{x,t}] = e^{−λt}. Finally, for u > t, by the independence property and time-homogeneity of the Poisson process ∑_i δ_{(x_i, t_i, S_i)}, the probability that y remains exposed throughout (t, u] is e^{−λ(u−t)}, independently of the configuration up to time t, which gives us (6.2) in general.
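The exposure probability P[E_{x,t}] = exp(−λt) is easy to probe numerically in d = 1. The sketch below is illustrative and not from the paper: leaves are unit intervals, so the origin is covered at rate λ = E[H^1(S)] = 1, and the fraction of runs in which it stays exposed up to time t should approach e^{−t}. All parameter values are arbitrary test choices.

```python
import math
import random

# Simulation sketch: P[E_{0,t}] = exp(-lambda * t) for the 1-d DLM with
# unit-interval leaves (lambda = E[H^1(S)] = 1).

def poisson(lam, rng):
    """Knuth's method for a Poisson(lam) sample (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(1)
t, lo, hi = 0.7, -2.0, 1.0   # leaves with left endpoint outside [lo, hi] miss 0
n, exposed = 200_000, 0
for _ in range(n):
    m = poisson(t * (hi - lo), rng)          # arrivals in [lo, hi] x [0, t]
    lefts = (rng.uniform(lo, hi) for _ in range(m))
    # the origin stays exposed iff no leaf [a, a+1] covers it, i.e. no a in [-1, 0]
    if all(not (-1.0 <= a <= 0.0) for a in lefts):
        exposed += 1

estimate = exposed / n
theory = math.exp(-t)        # about 0.4966
```

The spatial window [lo, hi] is chosen to contain every left endpoint whose leaf could cover the origin, so no covering arrivals are missed.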
Proof of Theorem 4.1. By Lemma 6.2, ξ is a random measure, and it is easy to see that this random measure is stationary. Recalling (4.2), and using the Mecke formula (see [19]) and the notation from (6.1), we obtain an expression for the expectation of ξ(A); hence, by Lemma 6.3 and Fubini's theorem, we obtain (4.3).
Proof of Theorem 4.2. Let f ∈ R_0 and t ∈ R. We first prove (4.7) in the special case where g = f and u = t. For each n set f_n := T_n(f) and Z_n := ξ(f_n). Then E[Z_n^2] = a_n + b_n, where a_n denotes the diagonal contribution (from single leaves) and b_n the contribution from pairs of distinct leaves. In what follows, we adopt the convention that any unspecified domain of integration is taken to be R^d. By the Mecke formula, using the notation E_{x,t} from Lemma 6.3, and changing variables to ỹ = y − x and z̃ = z − x, then writing y for ỹ and z for z̃, using Fubini's theorem and Lemma 6.3, followed by a change of variable x̃ = x + y and the further change of variables x′ := n^{−1/d} x̃, we have for almost all x′ ∈ R^d and all (z, y) that f_n(n^{1/d} x′ + z − y) → f(x′) as n → ∞ (because we assume f ∈ R_0). So by the dominated convergence theorem, n^{−1} a_n → v_4 ‖f‖_2^2, with v_4 given by (4.4); moreover v_4 is finite because we assume E[|M|^2] < ∞, and because λ_z ≥ λ > 0 for all z ∈ R^d.

Then, by Fubini's theorem and the changes of variables x̃ and ỹ,
By the law of total probability, the two outer integrals may be written as expectations with respect to (S, M) and (S′, M′). Using also Lemma 6.3, and writing just x for x̃ and y for ỹ, we obtain an expression for b_n. On the other hand, (E Z_n)^2 = λ^{−2} (E|M|)^2 (∫ f_n(x) dx)^2 by Theorem 4.1 and Campbell's formula (see e.g. [19, page 128]). Setting w = v − u and ũ = u + x, we may deduce an expression for b_n − (E Z_n)^2, involving the factor 1{w + y ∉ S} − λ^{−1}. Now take u′ := n^{−1/d} ũ. For almost every u′ ∈ R^d, and all (w, x, y), we have that f_n(n^{1/d} u′ + w + y − x) → f(u′) as n → ∞. Hence, using dominated convergence, we find the limit of n^{−1}(b_n − (E Z_n)^2). Setting w′ := w + y, we obtain the term with v_6 given by (4.6); also, setting v = w + y − x yields the term with v_5 given by (4.5). The integral in (4.5) is finite because (2λ − λ_x)/λ_x is bounded above by a constant times E[H^d(S ∩ (S + x))], which, with R given by (1.4), is integrable over x. Thus we have the case t = u and f = g of (4.7). We can then deduce the case of (4.7) with t = u but with general f, g ∈ R_0, by polarisation (see e.g. [19, page 192]). Finally, we need to prove (4.7) in general. Without loss of generality, we assume u > t. Write g_n for T_n(g). Then we may write ξ_u(g_n) := X + Y, where X denotes the contribution from leaves arriving at or before time t, and Y denotes the sum, over those i for which t < t_i ≤ u, of the integral of g_n with respect to the measure M_i + x_i restricted to regions which do not subsequently get covered between times t_i and u.
Let F_t denote the σ-algebra generated by all Poisson arrivals and associated marks up to time t. Then E[X | F_t] may be computed by Lemma 6.3. Also, Y is independent of F_t, and E[Y] may be computed by the Mecke formula and Fubini's theorem. Hence the covariance of ξ_t(f_n) and ξ_u(g_n) reduces to a covariance involving X, and by the case of (4.7) already proved, n^{−1} times this tends to e^{λ(t−u)} σ_0^2 ⟨f, g⟩ as n → ∞. Thus we obtain the general case of (4.7).
Then B(y_k, r_2) ⊂ S_{j(N_k)} + x_{j(N_k)} for each k, and hence N ≤ max(N_1, . . . , N_m). Also, each of N_1, . . . , N_m has a geometric distribution with strictly positive parameter. It follows that N has finite moments of all orders as claimed, since for each r ∈ N we have N^r ≤ ∑_{k=1}^m N_k^r and E[N_k^r] < ∞ for each k. For k ∈ N set Z_k := |M_{j(k)}|. Then ξ(W_1) ≤ ∑_{i=1}^N Z_i, and the moments of this sum may be bounded via Jensen's inequality. For each i ∈ N, let N_i′ be defined analogously to N but using only the arrivals after the i-th, so that N_i′ has the same distribution as N. Then N ≤ i + N_i′ for each i, so that the tail of N may be controlled. For each i, the three random variables Z_i, 1_{N≥i} and N_i′ are mutually independent.

Given a finite graph G with vertex set V, we say G is a dependency graph for a collection of random variables (or random vectors) {X_i, i ∈ V} if, for all pairs of disjoint subsets V_1, V_2 of V such that there are no edges connecting V_1 to V_2, the random vectors (X_i, i ∈ V_1) and (X_i, i ∈ V_2) are independent of each other. Let |V| denote the number of elements of V. To derive a central limit theorem we shall use the following result from [5, Theorem 2.7].

Lemma 6.5. Let 2 < q ≤ 3. Let X_i, i ∈ V, be random variables indexed by the vertices of a dependency graph with maximum degree D. Let W = ∑_{i∈V} X_i. Assume that E[W^2] = 1, E[X_i] = 0, and E[|X_i|^q] ≤ θ^q for all i ∈ V and some θ > 0. Then the Kolmogorov distance from W to the standard normal admits the bound given in [5, Theorem 2.7], depending only on q, D, θ and |V|.

Let k ∈ N, f_1, . . . , f_k ∈ R_0, and t_1, . . . , t_k ∈ R. Write f_{n,i} for T_n(f_i), 1 ≤ i ≤ k. By the Cramér–Wold theorem [3, page 49], it suffices to prove (6.5).
Suppose the right hand side of (6.6) is strictly positive. Then the denominator in (6.7) is Θ(n^{1/2}), by Lemma 6.4 and our assumptions. By our assumption that the random set S and the support of the random measure M are uniformly bounded, the indices 1 ≤ m ≤ m_n of the random variables X_{n,m} have a dependency graph structure with all vertex degrees bounded by a constant independent of n. Therefore we obtain the required normal convergence from Lemma 6.5. Using (6.6) again, we thus have (6.5) if the right hand side of (6.6) is strictly positive. If in fact this limit is zero, we still have (6.5), by Chebyshev's inequality.
The proof of Theorem 4.3(b) is based on the following lemma, in which we suppose that the assumptions of Theorem 4.3(b) hold. Let f ∈ R_0, and for n ∈ N ∪ {0} set f_n := T_n(f). Then there are constants C > 0, ε > 0 such that (6.8) holds for all s, t, u with a ≤ s < t < u ≤ b.

Proof. Assume initially that 0 ≤ f(x) ≤ 1 for all x ∈ R^d. Also let g ∈ R_0 with 0 ≤ g(x) ≤ 1 for all x ∈ R^d, and set g_n := T_n(g). Partition R^d into half-open rectilinear unit cubes and, for n > 0, denote those cubes in this partition which intersect the support of f_n or the support of g_n by Q_{n,1}, . . . , Q_{n,m_n}, with the centres of these cubes denoted q_{n,1}, . . . , q_{n,m_n} respectively. Then m_n = O(n) as n → ∞.
Let a ≤ s < t < u ≤ b. For 1 ≤ i ≤ m_n, let R_i and Y_i denote the contributions of the cube Q_{n,i} to ξ_t(f_n) − ξ_s(f_n) and to ξ_u(g_n) − ξ_t(g_n), respectively. Throughout this proof, C denotes a positive constant independent of s, t, u, n, i, j, k, which may change from line to line (or even within a line). Given i and j, let N_{ij} (respectively, N′_{ij}) be the number of arrivals of P between times s and t (respectively, between times t and u) within Euclidean distance r_0 of Q_{n,i} ∪ Q_{n,j}. Then N_{ij} and N′_{ij} are independent Poisson variables, each with parameter bounded by c(u − s), where we may take the constant c to be 2(2r_0 + 1)^d. Since we assume a ≤ s < u ≤ b, by the law of the unconscious statistician, for any constant β > 0 and any i, j, we have the exponential moment bound (6.10); by Jensen's inequality, followed by (6.10), we obtain (6.11). Given any random variable X, let X^+ := max(X, 0) and X^− := max(−X, 0) be its positive and negative parts. Since we assume 0 ≤ f ≤ 1 and 0 ≤ g ≤ 1 pointwise, we have the estimates (6.12) and (6.13), which we shall use repeatedly. Let i, j ∈ {1, . . . , m_n}. By (6.12) and (6.13), using (6.11), Lemma 6.4, and the fact that N_{ij} is independent of ξ_s(Q_{n,i}), we have that E[R_i^2] ≤ C(u − s). By the same argument we may also deduce that E[Y_i^2] ≤ C(u − s), and then by the Cauchy–Schwarz inequality we obtain (6.14). Given now i, j, k, ℓ ∈ {1, . . . , m_n}, by (6.12) we have a product bound whose two factors are independent of each other, and hence (6.11) and a similar estimate apply; also (6.13) applies. Since we assume for some ε > 0 that E[|M|^{4+ε}] < ∞, we have by Lemma 6.4 that E[ξ_s(Q_{n,i})^{4+ε}] and E[ξ_t(Q_{n,j})^{4+ε}] are bounded by a constant, independent of n, i, j, s and t.
Hence, by Hölder's inequality, taking p = 1 + (ε/4), we obtain (6.15). Also, by (6.12) and (6.13), by independence of N′_{kℓ}, and by the Cauchy–Schwarz inequality, followed by (6.11), Lemma 6.4, our (4 + ε)-th moment assumption on |M|, and the Cauchy–Schwarz inequality again, we obtain (6.16). Next, observe that by (6.12) and (6.13), since the last factor on the right is independent of the other factors, using the Cauchy–Schwarz inequality and Lemma 6.4 we obtain (6.17). Next, note from (6.12) and (6.13) that, since the last factor on the right is independent of the other factors, using the Cauchy–Schwarz inequality, (6.11) and Lemma 6.4 again yields (6.18) and (6.19). Combining (6.15), (6.16), (6.17), (6.18) and (6.19) gives us (6.20), where we take ε′ := min(ε/4, 1/2). Using (6.20), (6.14), and the fact that m_n = O(n), gives us from (6.9) the bound (6.8) for 0 ≤ f ≤ 1 and 0 ≤ g ≤ 1 pointwise (the case g = f gives (6.8) in the special case with 0 ≤ f ≤ 1 pointwise). Now we drop the assumption that f ≥ 0, but still assume |f| ≤ 1 pointwise. Write ξ_{s,t}(f) for ξ_t(f) − ξ_s(f). Using the fact that for any real A, B we have (A + B)^2 ≤ (2 max(|A|, |B|))^2 ≤ 4(A^2 + B^2), we obtain the required bound.

This comes to zero because, almost surely, H^{d−1}(∂S) < ∞ and H^d(∂S) = 0. The claim follows.
By (1.2) and the preceding claim, almost surely the surface measure H^{d−1}(Φ ∩ ·) is equal to ξ, by (4.2).

Proof of results for the DLM in d = 1

We start this section with a measurability result that we shall use more than once.
Proof. In the notation of [33, page 51], the set X is a random element of F^f. For bounded Borel A ⊂ R^d and k ∈ N ∪ {0}, let F_{A,k} be the set of locally finite sets σ ⊂ R^d such that H^0(σ ∩ A) = k. Then, using notation from [33, Lemma 3.1.4], we have F_{A,k} ∈ B(F^f) by [33, Lemma 3.1.4], so that the event {H^0(X ∩ A) = k} = {X ∈ F_{A,k}} is measurable (that is, it is indeed an event). Hence H^0(X ∩ ·) is a point process.
Throughout the rest of this section we take d = 1. We prove the results stated in Section 2.
Lemma 7.2. Suppose Q is such that ∂S is almost surely finite. Then Condition 1.1 holds; that is, H^0(∂S ∩ ·) is a point process in R.

Proof. We are now assuming d = 1. By [33, Theorem 2.1.1], the set ∂S is a random closed set, and it is almost surely finite by assumption. Therefore H^0(∂S ∩ ·) is a point process in R, by Lemma 7.1.
Proof of Proposition 2.1. By definition η = H^0(Φ ∩ ·), where Φ is the set of boundary points of our time-reversed DLM tessellation. We aim to apply Theorem 4.1. We are given the measure Q, and let Q′ be the measure on C × M whereby a random pair (S, M) under Q′ is such that S has distribution Q and M = H^0((∂S) ∩ ·). Note that M is a random element of M by Lemma 7.2. Then by Proposition 4.4, η is the dead leaves random measure ξ, defined at (4.1). Hence by Theorem 4.1, η is a point process and its intensity is equal to E[H^0(∂S)]/λ.
Proof of Theorem 2.2. Assume without loss of generality that Q is concentrated on intervals of the form [0, x] with x ≥ 0. Let A, B ∈ B^1 with 0 < H^1(A) < ∞ and 0 < H^1(B) < ∞, and with x < y for all x ∈ A, y ∈ B. The product η(A)η(B) equals the number of pairs of exposed endpoints of intervals (i.e., leaves) in the time-reversed DLM, one arriving in A and the other in B. We can split this into several contributions according to whether the endpoints in question are left or right endpoints, whether they belong to the same or different intervals, and (in the latter case) which of the two endpoints arrives first.
Consider first the contribution from pairs consisting of an exposed right endpoint arriving in A before an exposed left endpoint arriving in B. Let N_1 denote the number of such pairs, which may be computed by the multivariate Mecke formula, where E_{x,t} is defined in Lemma 6.3 and the range of integration, when unspecified, is (−∞, ∞). By Lemma 6.3, for 0 < s < t and x, y ∈ R we have P[E_{x,s} ∩ E_{y,t}] = exp(−λ_{y−x} s − λ(t − s)). Hence, using the change of variables z = x + u, we may evaluate E[N_1]. We get the same contribution as E[N_1] from pairs consisting of a right endpoint in A arriving before a right endpoint in B, and also from a left endpoint in B arriving before a left endpoint in A, and also from a left endpoint in B arriving before a right endpoint in A.
Let N_2 denote the number of pairs that consist of an exposed left endpoint arriving in A before an exposed left endpoint arriving in B. In this case the first of these arrivals has to avoid covering the second endpoint, for the pair to contribute. We compute E[N_2] by the multivariate Mecke formula and Lemma 6.3, where we use the fact that ν({z}) = 0 for all but countably many z ∈ R, so that ∫_B ν({y − x}) dy = 0. We get the same contribution as E[N_2] from pairs consisting of a left endpoint arriving in A before a right endpoint in B, from pairs consisting of a right endpoint in B arriving before a left endpoint arriving in A, and from pairs consisting of a right endpoint in B arriving before a right endpoint arriving in A.
Let N_3 be the number of pairs consisting of an exposed left endpoint in A and an exposed right endpoint in B, both being endpoints of the same leaf. Using the change of variable z = y + x along with Lemma 6.3, we may evaluate E[N_3]. Now suppose also that Q is concentrated on connected intervals and F(0) = 0. Then we can derive (2.4) using either (2.1) or the formula for σ_0^2 given in Theorem 4.2. We take the first of these options, and leave it to the reader to check that the latter option gives the same value for σ_1^2. Since η is a simple point process, we have by (2.1) and Proposition 2.1 that, in the last integral on the right hand side of (2.4), the integrand can be re-written so as to equal the expression in (2.5).
Our assumptions ensure that the expression in (2.5) is integrable, and therefore the right hand side of (2.4) is indeed finite.
In the special case with ν = δ_1 (so that λ = 1), the expression in (2.5) comes to −u/(1 + u) for u < 1 (and zero for u ≥ 1), and therefore in this case the right hand side of (2.4) can be evaluated explicitly.

Proof of Propositions 2.7 and 2.8. We use formulae from [21], and also provide some extra details compared to [21]. Given h ≥ 0, let K(h) and P(h) be as defined in [21, page 3]; that is, using our notation from Section 2, K(h) and P(h) may be expressed in terms of F. In particular, K′(0) = −1 under our present assumptions. Also K(0) = λ. We assert that (7.4) holds. Moreover, the time T to the first arrival of a leaf that intersects but does not cover [0, h] is also exponential, with mean 1/(μ_1 − μ_2), and independent of T′. Then P(h) = P[T < T′], which gives us (7.4) by a well-known result on the minimum of independent exponentials. Let X and Y be as in the statement of Propositions 2.7 and 2.8. By stationarity, given X = x, the first point of η to the right of 0 is uniformly distributed on (0, x).
By the discussion just before Proposition 2.8, we may identify the cumulative distribution function of Y, where the last equality comes from Fubini's theorem. Hence by (7.4), we obtain a formula which appears on [21, page 10] (Matheron's F_0 is our F, and Matheron's ν is the intensity of η, which is 2/λ by Proposition 2.1). By the product rule, the expression inside the square brackets in the resulting right hand side is equal to ∫_y^∞ (λ + u) dF(u), and so we have Proposition 2.8. The argument just before Proposition 2.8 shows that we can then deduce Proposition 2.7.
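The fact about the minimum of independent exponentials used above — that if T ~ Exp(a) and T′ ~ Exp(b) are independent, then P[T < T′] = a/(a + b) — can be checked with a quick simulation. The rates below are arbitrary test values, not quantities from the paper.

```python
import random

# Quick numerical check (illustrative): for independent T ~ Exp(a) and
# T' ~ Exp(b), the probability that T arrives first is a / (a + b).
rng = random.Random(7)
a, b, n = 2.0, 3.0, 200_000
wins = sum(rng.expovariate(a) < rng.expovariate(b) for _ in range(n))
estimate = wins / n
theory = a / (a + b)    # = 0.4
```

This is the same competition-of-rates computation as P(h) = P[T < T′] above, with a and b playing the roles of the two arrival rates.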

Proofs for the DLM in d = 2
Throughout this section we take d = 2. Also, S, S′, S′′ denote independent random elements of C with common distribution Q, and Θ denotes a random variable uniformly distributed over (−π, π], independent of (S, S′).
Proof of Theorem 3.1. We obtain the result by application of Theorem 4.1. Here we are given Q, and we take Q′ to be the probability measure on C × M with first marginal Q such that if (S, M) is Q′-distributed then M = H^1(∂S ∩ ·).
Our proof of Theorem 3.2 requires a series of lemmas. The first is concerned with random closed sets in R^2 (or, more generally, in R^d).
Lemma 8.1. Any countable intersection of random closed sets in R^2 is a random closed set in R^2.

Proof. Let X_1, X_2, . . . be random closed sets in R^2. For n ∈ N set Y_n = ∩_{i=1}^n X_i. Then Y_n is a random closed set by [33, Theorem 2.1.1]. Set X = ∩_{n=1}^∞ X_n = ∩_{n=1}^∞ Y_n. Then for any compact K ⊂ R^2, we have the event equality {X ∩ K ≠ ∅} = ∩_{n=1}^∞ {Y_n ∩ K ≠ ∅} (the sets Y_n ∩ K being nested compacts), which is an event because each Y_n is a random closed set (see [23, Definition 1.1.1]). Therefore X is also a random closed set.
For (b), note that for K > 0, by the multivariate Mecke formula, writing ∑^{≠}_{i,j,k∈N} for the sum over ordered triples (i, j, k) of distinct elements of N, we may express the relevant expectation as an integral, where the range of integration is taken to be R^2 whenever it is not specified explicitly. Taking y′ = y − x and z′ = z − x, we find that in the resulting expression we may interchange the innermost two integrals, because almost surely, and for almost all y′, the innermost integral ∫_{∂S∩(∂S′+y′)} H^0(dw) is finite, by the assumption that β_3 < ∞. Therefore the last expression is zero, because almost surely ∂S′ is a rectifiable curve, so that E[H^2(∂S′)] = 0. Since K is arbitrary, this gives us part (b).
Lemma 8.3. Assume either that Q has the piecewise C^1 Jordan property, or that Q has the rectifiable Jordan property and is rotation invariant. Then, almost surely, there is no pair {i, j} of distinct elements of N such that ∂S_i + x_i touches ∂S_j + x_j.
Proof. Let K ∈ (0, ∞), and let N_K denote the number of ordered pairs (i, j) of distinct elements of N such that ∂S_i + x_i touches ∂S_j + x_j, with the touching point lying in B(K). Let us say, for any two piecewise C^1 Jordan curves γ and γ′, that γ grazes γ′ if there exists z ∈ γ ∩ γ′ that is not a corner of either γ or γ′, such that γ grazes γ′ at z.
Suppose Q has the piecewise C^1 Jordan property. Then, by the bivariate Mecke formula, E[N_K] may be expressed as an integral involving ∫ dy 1{∂σ + x touches ∂σ′ + y}, which is zero by Lemma 5.3.
Suppose instead that Q has the rectifiable Jordan property and is rotation invariant. Then E[N_K] equals a quantity which is zero by Lemma 5.2. Thus, in both cases, N_K = 0 almost surely for all K, and the result follows.
Before proving Theorem 3.2, we introduce some further notation. For any random closed set X in R^2 and event A, let X^A be the random closed set that equals X if A occurs and R^2 if not, and let X_A be the random closed set that equals X if A occurs and ∅ if not.
Given i ∈ N, write X_i for the set S_i + x_i, and X_i^o for the interior of X_i. Then X_i is a random element of C, by [33, Theorem 2.4.3], for example. Hence, given also j ∈ N \ {i}, the set ∂X_i ∩ ∂X_j is also a random element of C. Recall from Section 3 that we define Ξ to be the set of points in R^2 which lie in three or more cells of the time-reversed DLM tessellation, and χ to be the measure H^0(Ξ ∩ ·).
Lemma 8.4. Assume that Q either has the piecewise C^1 Jordan property, or is rotation invariant and has the rectifiable Jordan property. Assume also that β_3 < ∞, where β_3 is given by (3.1), and that E[H^2(S ⊕ B(1))] < ∞. Then, almost surely, Ξ = ∪_{1≤i<j<∞} Y_{ij}.

Proof. Assume the times t_1, t_2, . . . are distinct (this occurs almost surely). Given x ∈ Ξ, x must lie on the boundary of the first two shapes X_i to arrive after time zero and contain x, and this gives us the inclusion Ξ ⊂ ∪_{1≤i<j<∞} Y_{ij}. For the reverse inclusion, let E be the event that there is no triple (i, j, k) of distinct elements of N with ∂X_i ∩ ∂X_j ∩ ∂X_k ≠ ∅, and let E′ be the event that there is no pair (i, j) of distinct elements of N such that ∂X_i touches ∂X_j. Then E and E′ occur almost surely, by Lemmas 8.2 and 8.3. Let E′′ be the event that for all K ≥ 0 the number of shapes X_j with X_j ∩ B(K) ≠ ∅ and −K ≤ t_j ≤ K is finite. This event also occurs almost surely, by Lemma 6.1.
Also, the times t_i are all distinct, almost surely. Assume from now on that the events E, E′ and E′′ all occur and that all of the times t_i are distinct. Suppose x ∈ Y_{ij} for some i, j with 0 < t_i < t_j. Let t_k := inf{t_ℓ : t_ℓ > 0, x ∈ X_ℓ^o}, so that k is the index of the first-arriving shape whose interior contains x. Since we assume E occurs, x ∉ ∂X_ℓ for all ℓ ∈ N \ {i, j}. Since we assume E′′ occurs, only finitely many of the shapes X_ℓ with 0 ≤ t_ℓ ≤ t_k have non-empty intersection with B(1) + x. Hence x ∈ ∂X_i ∩ ∂X_j ∩ X_k^o, and there exists a constant ε > 0 such that B(x, ε) ⊂ ∩_{ℓ∉{i,j,k}: 0≤t_ℓ≤t_k} X_ℓ^c. Now, x ∈ X_i, and since X_i is a regular set, x is an accumulation point of the interior of X_i, which is connected by the Jordan curve theorem. Thus x is on the boundary of a component of Φ^c which is contained in the interior of X_i.
Since we assume that E′ occurs, ∂X_j crosses ∂X_i at x rather than touching it. Hence there is an arc within ∂X_j, with an endpoint at x, that lies outside X_i except for this endpoint. On one side of this arc is a part of the interior of X_j, and hence there is a component of X_j^o \ X_i with an accumulation point at x, and hence a component of Φ^c that is contained in X_j^o \ X_i with an accumulation point at x. Moreover, on the other side of the arc just mentioned is a region of X_j^c ∩ X_i^c with an accumulation point at x. Hence there is a component of Φ^c that is contained in X_k^o \ (X_i ∪ X_j) and has an accumulation point at x. Therefore x ∈ Ξ, so that ∪_{1≤i<j<∞} Y_{ij} ⊂ Ξ, as claimed. Then, since E is assumed to occur, the sets Y_{i′j′} and Y_{ij} are disjoint for all (i′, j′) ≠ (i, j).

Proof of Theorem 3.2 (a). For each k ∈ N \ {i, j}, the set (X_k^o)^c is a random closed set by [33, Theorem 2.1.1]. Therefore ((X_k^o)^c)^{\{0<t_k<max(t_i,t_j)\}} is also a random closed set, and hence by Lemma 8.1 the set Y_{ij} is also a random closed set. By Lemma 8.2, Y_{ij} is almost surely finite. By Lemma 7.1, H^0(Y_{ij} ∩ ·) is a point process in R^2. Since χ = ∑_{1≤i<j<∞} H^0(Y_{ij} ∩ ·), χ is also a point process. The stationarity of χ is clear.
Denote the intensity of the stationary point process χ by $\tilde\beta_3$. By Lemma 8.4 and the multivariate Mecke formula, using the notation $E_{x,t}$ from Lemma 6.3, we may express $\tilde\beta_3$ as an integral over configurations of two leaves. Substituting x' = x − y and z' = z − y, using Lemma 6.3, and then taking the y-integral inside the x'-integral and the sum, we obtain that $\tilde\beta_3 = \beta_3$, as asserted.
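The Mecke formula invoked above admits a simple numerical sanity check in its univariate form: for a homogeneous Poisson process P of intensity λ on [0,1]^2, one has E[Σ_{x∈P} f(x)] = λ ∫ f(x) dx. A minimal Monte Carlo sketch in pure Python (all names are our own; this is an illustration, not the paper's computation):

```python
import math
import random

random.seed(0)

LAM = 50.0  # intensity of a homogeneous Poisson process on [0, 1]^2

def sample_poisson(lam):
    # Knuth's multiplication method for a Poisson(lam) sample
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def f(x, y):
    # an arbitrary bounded test function
    return x * x + y

# Monte Carlo estimate of E[sum over points of f] for the Poisson process
trials = 20000
total = 0.0
for _ in range(trials):
    for _ in range(sample_poisson(LAM)):
        total += f(random.random(), random.random())
estimate = total / trials

# Mecke formula (univariate case): E[sum f] = LAM * integral of f over [0,1]^2
exact = LAM * (1.0 / 3.0 + 1.0 / 2.0)  # since the integral of x^2 + y is 5/6
```

The estimate agrees with `exact` up to Monte Carlo error; the multivariate version used in the proof replaces f by a function of several points of the process.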
Proof of Theorem 3.2 (b). Assume now that Q is rotation-invariant. Then $\rho_\Theta(S') \overset{D}{=} S'$, so the expression (3.1) for β_3 simplifies, and applying the 'two noodle' formula (Lemma 5.1) then yields (3.2).
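For disk-shaped leaves, the sets Y_ij can be computed explicitly: they consist of the circle–circle intersection points of ∂X_i and ∂X_j that avoid the interiors of all leaves arriving before max(t_i, t_j). A hedged simulation sketch (unit-radius disks and uniform arrival times are our own simplifying assumptions; helper names are hypothetical):

```python
import math
import random

def circle_intersections(c1, r1, c2, r2):
    # Intersection points of two circles (empty if disjoint, nested or coincident)
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0.0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1 * r1 - r2 * r2 + d * d) / (2.0 * d)
    h2 = r1 * r1 - a * a
    if h2 < 0.0:
        return []
    h = math.sqrt(h2)
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ux, uy = (y2 - y1) / d, -(x2 - x1) / d
    return [(mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)]

random.seed(2)
# leaves: (centre, arrival time t > 0); all unit-radius disks for simplicity
leaves = [((random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)), random.random())
          for _ in range(40)]

def visible_corner(p, i, j):
    # p lies in Y_ij if it avoids the interior of every leaf arriving
    # strictly before max(t_i, t_j), other than leaves i and j themselves
    tmax = max(leaves[i][1], leaves[j][1])
    for k, (c, t) in enumerate(leaves):
        if k in (i, j) or t >= tmax:
            continue
        if math.hypot(p[0] - c[0], p[1] - c[1]) < 1.0:
            return False
    return True

n = len(leaves)
chi = [p for i in range(n) for j in range(i + 1, n)
       for p in circle_intersections(leaves[i][0], 1.0, leaves[j][0], 1.0)
       if visible_corner(p, i, j)]
```

The list `chi` approximates (inside the sampling window, ignoring edge effects) the degree-3 vertex process whose intensity β_3 is computed in Theorem 3.2.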
We now work towards proving Theorem 3.3. Recall from (1.2) that Φ denotes the boundary of our time-reversed DLM tessellation. It is helpful to represent Φ in terms of the following sets. For each i with t_i ≥ 0, define the set $P_i := (S_i + x_i) \setminus \bigcup_{k \in \mathbb{N} :\, 0 \le t_k < t_i} (S_k^o + x_k)$ (here the P stands for 'patch'). Set P_i = ∅ if t_i < 0.
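In simulation, the patches P_i are easy to approximate on a pixel grid: in the time-reversed model, the leaf visible at x is the first leaf arriving after time 0 whose support contains x, and P_i collects the pixels labelled i. A rasterised sketch under the assumption of disk-shaped leaves (grid size, radius and counts are arbitrary choices of ours):

```python
import random

random.seed(3)
N = 60         # pixels per side over the unit square [0, 1]^2
NUM = 200      # number of leaves
RAD = 0.2      # disk radius (disk-shaped leaves assumed)

# leaves: (centre, arrival time t > 0); centres padded so boundary effects vanish
leaves = [((random.uniform(-RAD, 1.0 + RAD), random.uniform(-RAD, 1.0 + RAD)),
           random.random()) for _ in range(NUM)]

# time-reversed DLM: the visible leaf at x is the FIRST leaf (smallest t > 0)
# whose support contains x; the patch P_i collects the pixels labelled i
label = [[None] * N for _ in range(N)]
for ix in range(N):
    for iy in range(N):
        x, y = (ix + 0.5) / N, (iy + 0.5) / N
        best = None
        for i, ((cx, cy), t) in enumerate(leaves):
            if (x - cx) ** 2 + (y - cy) ** 2 <= RAD * RAD:
                if best is None or t < leaves[best][1]:
                    best = i
        label[ix][iy] = best

covered = sum(v is not None for row in label for v in row)
```

With these parameters the square is covered with very high probability, matching the coverage argument used in the proof of Lemma 8.6 below.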
Lemma 8.5. Under the assumptions of Theorem 3.3, almost surely $\Phi = \bigcup_{i=1}^\infty \partial P_i$.

Proof. Let y ∈ Φ. Then by (1.2), y lies on the boundary of some leaf that arrives at a non-negative time; let i be the index of the earliest-arriving leaf (at or after time 0) that contains y in its boundary. Then by (1.2) again, y does not lie in (either the interior or the boundary of) any leaf arriving between times 0 and t_i, so y ∈ P_i, and since y ∈ ∂S_i + x_i, moreover y ∈ ∂P_i. Thus $\Phi \subset \bigcup_{i=1}^\infty \partial P_i$.

Conversely, let j ∈ N be such that P_j ≠ ∅, and let z ∈ ∂P_j. Then z ∈ S_j + x_j (since S_j is closed). Also z ∉ $S_k^o + x_k$ for all k ∈ N with 0 ≤ t_k < t_j (else some neighbourhood of z is disjoint from P_j). If z ∈ ∂S_j + x_j, then z ∈ Φ by (1.2). If z ∈ $S_j^o + x_j$, then (since z ∈ ∂P_j) there exists some k with 0 ≤ t_k < t_j and z ∈ ∂S_k + x_k. Hence, again z ∈ Φ by (1.2). Thus $\bigcup_{i=1}^\infty \partial P_i \subset \Phi$.

Lemma 8.6. Let A ⊂ R^2 be bounded. Under the assumptions of Theorem 3.3, it is almost surely the case that (a) each component of R^2 \ Φ is contained in the interior of one of the patches P_i, and (b) the union of all components of R^2 \ Φ that intersect A is a bounded set.
Proof. Suppose y ∈ R^2 \ Φ. Then y must lie in the interior of the first-arriving leaf (after time 0) to contain y. Suppose this leaf has index i. Then y ∈ $P_i^o$, and since ∂P_i ⊂ Φ by Lemma 8.5, the component of R^2 \ Φ containing y is contained in $P_i^o$. This gives us part (a). Moreover, the patches P_i are almost surely bounded sets. Therefore, for part (b) it suffices to prove that the number of patches P_i which intersect [0, 1]^2 is almost surely finite.
The number of leaves S_i + x_i having non-empty intersection with [0, 1]^2 and arrival time t_i ∈ [0, 1] is Poisson distributed with mean λ_0, which is finite since we assume E[R^2] < ∞, where R is given by (1.4). Set $I := \{i \in \mathbb{N} : (S_i + x_i) \cap [0,1]^2 \neq \emptyset\}$; then $\sum_{i \in I} \delta_{t_i}$ is a one-dimensional Poisson process of intensity λ_0.
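This Poisson count can be checked numerically for a concrete leaf distribution. Assuming disk leaves with R ~ Uniform[0, 1/2] (our choice, not the paper's general Q), the Steiner formula gives the mean number of leaves hitting [0,1]^2 per unit time as E[area([0,1]^2 ⊕ B(R))] = 1 + 4E[R] + πE[R^2] = 2 + π/12. A Monte Carlo sketch:

```python
import math
import random

random.seed(4)

def dist_to_unit_square(x, y):
    # Euclidean distance from (x, y) to the square [0, 1]^2
    dx = max(0.0 - x, 0.0, x - 1.0)
    dy = max(0.0 - y, 0.0, y - 1.0)
    return math.hypot(dx, dy)

def sample_poisson(lam):
    # Knuth's multiplication method for a Poisson(lam) sample
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# leaves arriving in one unit of time with centres in [-1, 2]^2, a box large
# enough to catch every disk of radius <= 1/2 that can hit the unit square
trials = 20000
hits = 0
for _ in range(trials):
    for _ in range(sample_poisson(9.0)):  # unit intensity, area 9, unit time
        cx, cy = random.uniform(-1.0, 2.0), random.uniform(-1.0, 2.0)
        R = random.uniform(0.0, 0.5)  # leaf radius; E[R^2] < infinity trivially
        if dist_to_unit_square(cx, cy) <= R:
            hits += 1
mean_hits = hits / trials

# Steiner formula: E[area([0,1]^2 ⊕ B(R))] = 1 + 4 E[R] + pi E[R^2]
lam0 = 1.0 + 4.0 * 0.25 + math.pi / 12.0  # for R ~ Uniform[0, 1/2]
```

The empirical mean `mean_hits` matches `lam0` ≈ 2.26 up to Monte Carlo error.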
Define N as at (6.4); by a similar argument to the one given in the proof of Lemma 6.4, N is almost surely finite. That is, almost surely the square [0, 1]^2 is completely covered within a finite (random) time, denoted T say. The number of i for which $(S_i + x_i) \cap [0,1]^2 \neq \emptyset$ and 0 ≤ t_i ≤ T is almost surely finite, and provides an upper bound for the number of patches P_i which intersect [0, 1]^2, so this number is also almost surely finite, as required.

Lemma 8.7. Under the assumptions of Theorem 3.3, almost surely Φ is closed and, for every connected component Z of Φ, both Z and Φ \ Z are closed.

Proof. Let Q ⊂ R^2 be compact. Then by Lemma 8.5, and by the proof of Lemma 8.6, the number of patches P_i which intersect Q is almost surely finite. Therefore Φ ∩ Q is a finite union of closed sets, so is closed. This holds for any compact Q, and hence Φ is itself closed, almost surely. Now suppose Z is a connected component of Φ. It is easy to see from the definition of a connected component (see e.g. [32]) that any limit point of Z must be in Z, and therefore Z is closed.
It remains to prove that Φ \ Z is closed. Suppose this were not the case. Then there would exist z ∈ Z, and a sequence (z_n)_{n≥1} of points in Φ \ Z, such that z_n → z as n → ∞. Then z lies on the boundary of some leaf arriving after time 0; let i be the index of the earliest-arriving such leaf. Also let k be the index of the earliest-arriving leaf with z in its interior. Then without loss of generality we may assume that for all n we have z_n ∈ $S_k^o + x_k$.
For each n ∈ N, choose j(n) such that t j(n) < t k and z n ∈ ∂S j(n) + x j(n) . Since there are almost surely only finitely many leaves arriving that intersect any given compact region between times 0 and t k , the numbers j(n) run through a finite set of indices, and hence by taking a subsequence if necessary we can assume there is a single index j such that j(n) = j for all n. Since ∂S j + x j is closed, we also have z ∈ ∂S j + x j . By our choice of i we then have t i ≤ t j .
Suppose t i = t j ; then i = j and z n ∈ ∂S i + x i for all n. Hence for all large enough n, there is a path from z n to z along ∂S i + x i that does not meet any leaf arriving before time t i , so this path lies in Φ, and hence z n ∈ Z, which is a contradiction.
Therefore we may assume that t_i < t_j, and z_n ∈ ∂S_j + x_j for all n. Then z ∈ (∂S_i + x_i) ∩ (∂S_j + x_j). By Lemma 8.2 (b), z ∉ S_ℓ + x_ℓ for all ℓ ∈ N \ {i, j} with t_ℓ < t_k. Hence z has a neighbourhood that is disjoint from $\bigcup_{\ell \in \mathbb{N} \setminus \{i,j\} :\, t_\ell < t_k} (S_\ell + x_\ell)$. Hence for all large enough n, there is a path in ∂S_j + x_j from z_n to z that does not intersect any leaf arriving before t_k other than possibly leaf i; by taking this path from z_n as far as the first intersection with leaf i, and then (if this intersection is not at z) concatenating it with a path from there along ∂S_i + x_i to z, we obtain a path in Φ from z_n to z, and therefore also z_n ∈ Z, which is a contradiction. Thus Φ \ Z must be closed, as claimed.
In the next two proofs, we shall use the fact that R^2 is unicoherent. The unicoherence property says that for any two closed connected sets in R^2 having union R^2, the intersection of these two sets is connected. See e.g. [25, page 177], or [7].

Lemma 8.8. Under the assumptions of Theorem 3.3, almost surely Φ has no bounded component.

Proof. Suppose that Φ has at least one bounded component; pick one of these bounded components, and denote it by Z. Given ε > 0, let Z_ε denote the closed ε-neighbourhood of Z, that is, the set of x ∈ R^2 such that ‖x − y‖ ≤ ε for some y ∈ Z. By Lemma 8.7, Z is compact and Φ \ Z is closed. Hence we can and do choose ε > 0 such that
$Z_\varepsilon \cap (\Phi \setminus Z) = \emptyset$. (8.3)
Denote by V_ε the unique unbounded connected component of R^2 \ Z_ε. The set R^2 \ V_ε is connected; we can think of it as 'Z_ε with the holes filled in'. Then R^2 \ V_ε and the closure of V_ε are connected closed sets with union R^2, so by unicoherence their intersection, which is simply ∂V_ε, is a connected set (a kind of loop surrounding Z). Moreover, (∂V_ε) ∩ Φ = ∅ by (8.3), since every element of ∂V_ε is at distance ε from Z. Hence ∂V_ε is contained in a single component of R^2 \ Φ, so by Lemma 8.6 (a), there exists j_0 ∈ N such that ∂V_ε ⊂ $P_{j_0}^o$. Then by the assumed Jordan property, and the Jordan curve theorem, ∂V_ε is surrounded by the boundary of $S_{j_0} + x_{j_0}$. Also Z is contained in a bounded component of R^2 \ ∂V_ε. Pick x ∈ Z. Then x must lie on the boundary of some leaf i arriving before leaf j_0 (in the time-reversed DLM), since otherwise x would be in the interior of P_{j_0} and not in Φ at all. Therefore there is some i such that 0 ≤ t_i < t_{j_0} and the leaf boundary ∂S_i + x_i includes a point in Z. Let i_0 be the index of the first-arriving such leaf.
By the Jordan property the leaf $S_{i_0} + x_{i_0}$ is connected, and it does not intersect ∂V_ε, since ∂V_ε ⊂ $P_{j_0}$ and $t_{i_0} < t_{j_0}$. Therefore it is contained in a bounded component of R^2 \ ∂V_ε, and hence this leaf is entirely surrounded by the boundary $\partial S_{j_0} + x_{j_0}$. Thus there would exist distinct i_0, j_0 ∈ N such that $S_{i_0} + x_{i_0} \subset S_{j_0} + x_{j_0}$; but the expected number of such pairs is zero by our non-containment assumption, together with an argument using the Mecke formula similar to the proof of Lemma 8.3. Thus Φ almost surely has no bounded component.
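For disk-shaped leaves the non-containment condition is concrete: B(c_1, r_1) ⊂ B(c_2, r_2) exactly when ‖c_1 − c_2‖ + r_1 ≤ r_2, an event of probability zero when the radii have a continuous joint law. A small numerical check of this criterion (disk leaves and the helper names are our own assumptions):

```python
import math
import random

def disk_contained(c1, r1, c2, r2):
    # Exact criterion: B(c1, r1) is a subset of B(c2, r2) iff |c1 - c2| + r1 <= r2
    return math.hypot(c1[0] - c2[0], c1[1] - c2[1]) + r1 <= r2

def disk_contained_sampled(c1, r1, c2, r2, n=2000):
    # Brute-force check on n points of the boundary circle of the first disk
    for k in range(n):
        a = 2.0 * math.pi * k / n
        x, y = c1[0] + r1 * math.cos(a), c1[1] + r1 * math.sin(a)
        if math.hypot(x - c2[0], y - c2[1]) > r2 + 1e-12:
            return False
    return True

random.seed(5)
for _ in range(100):
    c1 = (random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0))
    c2 = (random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0))
    r1, r2 = random.uniform(0.1, 1.0), random.uniform(0.1, 1.0)
    gap = r2 - (math.hypot(c1[0] - c2[0], c1[1] - c2[1]) + r1)
    if abs(gap) > 0.01:  # skip borderline cases where discretisation could disagree
        assert disk_contained(c1, r1, c2, r2) == disk_contained_sampled(c1, r1, c2, r2)
```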
Proof of Theorem 3.3. Suppose that Φ has at least two unbounded components. Pick two unbounded components of Φ, and denote them by Z_0 and Z_1. By Lemma 8.7, both Z_0 and Φ \ Z_0 are closed. By Urysohn's lemma, we can (and do) pick a continuous function g : R^2 → [0, 1] taking the value 0 for all x ∈ Z_0 and 1 for all x ∈ Φ \ Z_0. For example, we could take $g(x) = \mathrm{dist}(x, Z_0)/(\mathrm{dist}(x, Z_0) + \mathrm{dist}(x, \Phi \setminus Z_0))$. Define the set F := {x ∈ R^2 : g(x) ≤ 1/2}. Then F ∩ (Φ \ Z_0) = ∅, and F is a closed subset of R^2. Let V denote the component of R^2 \ F containing Z_1. Then R^2 \ V and the closure of V are closed connected sets with union R^2, so by the unicoherence of R^2, their intersection, which is ∂V, is connected; moreover, ∂V ⊂ ∂F, so g(x) = 1/2 for all x ∈ ∂V, and hence (∂V) ∩ Φ = ∅.
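The Urysohn-type function used here has the standard explicit form g(x) = d(x, A)/(d(x, A) + d(x, B)) for disjoint closed sets A and B. A toy numerical illustration, with finite point sets standing in for Z_0 and Φ \ Z_0 (the sets and names are hypothetical):

```python
import math

def dist(p, S):
    # Euclidean distance from the point p to the finite set S
    return min(math.hypot(p[0] - q[0], p[1] - q[1]) for q in S)

def urysohn(p, A, B):
    # g = d(., A) / (d(., A) + d(., B)): continuous, 0 on A and 1 on B,
    # well defined whenever the closed sets A and B are disjoint
    da, db = dist(p, A), dist(p, B)
    return da / (da + db)

A = [(0.0, 0.0), (1.0, 0.0)]   # stands in for the component Z_0
B = [(5.0, 0.0), (6.0, 1.0)]   # stands in for Phi \ Z_0

# evaluate g on a coarse grid; the values always lie in [0, 1]
vals = [urysohn((0.5 * ix, 0.5 * iy), A, B) for ix in range(-4, 16) for iy in range(-4, 8)]
```

The level set {g ≤ 1/2} is then a closed set separating A from B, exactly as F separates Z_0 from Φ \ Z_0 in the proof.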
By Lemma 8.6 (b), all components of R 2 \ Φ are bounded, almost surely. Since ∂V is connected, and disjoint from Φ, it is contained in a single component of R 2 \Φ, and therefore ∂V is bounded.
We now show, however, that the set ∂V is unbounded, which is a contradiction. Let r > 0 and recall that B(r) := {y ∈ R^2 : ‖y‖ ≤ r}. Since Z_0 and Z_1 are unbounded, we can pick points z_0 ∈ Z_0 \ B(r) and z_1 ∈ Z_1 \ B(r). We may then take a polygonal path in R^2 \ B(r) from z_0 to z_1. The last point z on this path for which g(z) = 1/2 lies in ∂V. Hence ∂V \ B(r) is non-empty. Since r is arbitrary, ∂V is unbounded.
We have proved by contradiction that Φ almost surely has at most one unbounded component. Combined with Lemma 8.8, this shows that Φ is almost surely connected.
Proof of Theorem 3.4. We now view the random set Φ (the boundaries of the DLM tessellation) as a planar graph. By Lemmas 8.2 and 8.3, there are no vertices of degree 4 or more in this graph.
The planar graph Φ has no vertices of degree 1, by the Jordan assumption. Thus, we may view Φ as a planar graph with all of its vertices having degree 3, and χ is the point process of these vertices.
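The counting identities used for cubic planar graphs can be checked on any finite example; for instance K_4 (drawn in the plane, so connected, planar and 3-regular) satisfies both the handshaking identity |E| = 3|V|/2 and Euler's formula V − E + F = 2. A quick check (a finite illustration of the identities, not the paper's infinite stationary graph):

```python
# K4 embedded in the plane: connected, planar, and 3-regular (cubic),
# mirroring the degree-3 vertices of the tessellation boundary graph
V = [0, 1, 2, 3]
E = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

deg = {v: sum(v in e for e in E) for v in V}
assert all(deg[v] == 3 for v in V)          # every vertex has degree 3

# handshaking lemma: sum of degrees = 2|E|, hence |E| = 3|V|/2 for a cubic graph
assert sum(deg.values()) == 2 * len(E)
assert 2 * len(E) == 3 * len(V)

# Euler's formula V - E + F = 2 determines the number of faces (outer one included)
F = 2 - len(V) + len(E)
```

The first identity is the finite analogue of the intensity relation τ = 3β_3/2 below.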
Moreover, by Theorem 3.3 this planar graph is almost surely connected. Let τ denote the intensity of the point process of midpoints of edges in this planar graph. By the handshaking lemma, τ = 3β_3/2. Also, by an argument based on Euler's formula (see [35, eqn. ...]).

Proof of Theorem 3.6. We apply Theorem 4.3, using the same choice of Q as in the first part of the proof of Theorem 3.1.