Lower bounds for variances of Poisson functionals

Lower bounds for variances are often needed to derive central limit theorems. In this paper, we establish a lower bound for the variance of Poisson functionals that uses the difference operator of Malliavin calculus. Poisson functionals, i.e. random variables that depend on a Poisson process, are frequently studied in stochastic geometry. We apply our lower variance bound to statistics of spatial random graphs, the $L^p$ surface area of random polytopes and the volume of excursion sets of Poisson shot noise processes. In this way we not only bound variances from below but also show positive definiteness of asymptotic covariance matrices and provide associated results on the multivariate normal approximation.


Introduction and main result
As the variance quantifies the fluctuations of a random variable around its mean, upper bounds for variances are an important topic of probability theory. A main motivation to study lower bounds comes from the problem of establishing central limit theorems. Here, after applying quantitative bounds for the normal approximation to standardised random variables, one has to divide by powers of the variance, whence it is essential to have lower bounds for the variance. In this paper, we derive such lower bounds for random variables that depend only on an underlying Poisson process. These so-called Poisson functionals play a crucial role in stochastic geometry but also appear in other branches of probability theory.
Let η be a Poisson process on a measurable space (X, X) with a σ-finite intensity measure λ. The underlying probability space is denoted by (Ω, F, P). Let N denote the set of all σ-finite counting measures, equipped with the σ-field generated by the mappings ν → ν(B) for B ∈ X. The Poisson process can be seen as a random element in N. A detailed introduction to Poisson processes can be found in e.g. [22]. A Poisson functional F is a real-valued measurable function on Ω that can be written as F = f(η), where f is a real-valued measurable function on N, called a representative.
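To make the setting concrete, the following minimal Python sketch (an illustration of our own, not part of the paper; the function names and the choice of state space [0, 1]² are assumptions) samples a homogeneous Poisson process on the unit square and evaluates a simple Poisson functional F = f(η), namely the number of point pairs within distance r:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_poisson_process(s, d=2):
    """Homogeneous Poisson process of intensity s on [0, 1]^d:
    a Poisson(s)-distributed number of i.i.d. uniform points."""
    n = rng.poisson(s)
    return rng.random((n, d))

def pair_count(eta, r=0.1):
    """A simple Poisson functional F = f(eta): the number of
    unordered point pairs of eta at distance at most r."""
    diff = eta[:, None, :] - eta[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    close = int((dist <= r).sum()) - len(eta)  # drop the diagonal
    return close // 2                          # each pair counted twice
```

Averaging `pair_count` over many independent samples of η approximates its expectation; the fluctuations of such functionals around their mean are exactly what the variance bounds below control.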
For simplicity and by a slight abuse of notation, we denote a Poisson functional in the following by F = F(η). If F is square-integrable, we write $F \in L^2_\eta$. Throughout this paper we are mostly interested in the asymptotic behaviour of Poisson functionals in two frameworks, namely increasing intensity or increasing observation window. More precisely, we study for s → ∞ a family of Poisson functionals $F_s$, s ≥ 1, where $F_s$ is either a Poisson functional on a homogeneous Poisson process with intensity s or a functional that considers only points of a fixed Poisson process in an observation window that extends to the full space as s → ∞.
* Hamburg University of Technology, Germany, matthias.schulte@tuhh.de
† Hamburg University of Technology, Germany, vanessa.trapp@tuhh.de
Central limit theorems for some Poisson functionals were established, for example, in [2,4,5,10,16,17,18,20,25,27,28,30,33]. Since the proofs require lower variance bounds as discussed above, these papers also study the asymptotic behaviour of the variance. Often convergence of the variance to a non-degenerate (i.e. non-zero) asymptotic variance constant is shown. Investigating the behaviour of the variance usually requires considerable effort. For this reason, we treat the problem of lower variance bounds in this paper as an issue separate from establishing central limit theorems. To this end, we provide a lower variance bound, which can be seen as a counterpart to the Poincaré inequality.
As mentioned above, a common problem is to show that the asymptotic variance constant is positive. But even if one has an explicit representation for the latter, it can be hard to show positivity because positive and negative terms could cancel out. Therefore, proving the non-degeneracy of the asymptotic variance can be a different problem than computing the limiting constant of the variance. In this case, it can be helpful to employ lower bounds for variances to deduce positivity of the asymptotic variance constant.
Since the covariance matrix $\Sigma_s \in \mathbb{R}^{m \times m}$ of Poisson functionals $F_s^{(1)}, \dots, F_s^{(m)}$ satisfies
$$\alpha^T \Sigma_s \alpha = \operatorname{Var}\Big( \sum_{i=1}^m \alpha_i F_s^{(i)} \Big)$$
for all $\alpha = (\alpha_1, \dots, \alpha_m) \in \mathbb{R}^m$, one can use lower bounds for variances to establish positive definiteness of the asymptotic covariance matrix $\Sigma = \lim_{s \to \infty} \Sigma_s$ if it exists. Knowing the positive definiteness of Σ is of interest since it ensures that none of the Poisson functionals can asymptotically be written as a linear combination of the others. Furthermore, some bounds for the quantitative multivariate normal approximation (see e.g. [33]) require the positive definiteness of the covariance matrix of the limiting normal distribution.
In order to present our main result, we need some notation and some further background on Poisson functionals. For x ∈ X the difference operator of a Poisson functional F = F(η) is defined by
$$D_x F = F(\eta + \delta_x) - F(\eta),$$
where $\delta_x$ denotes the Dirac measure concentrated at x. In general, the n-th iterated difference operator $D^n$ is recursively defined by
$$D^n_{x_1, \dots, x_n} F = D_{x_1} \big( D^{n-1}_{x_2, \dots, x_n} F \big)$$
for n > 1 and $x_1, \dots, x_n \in \mathbb{X}$. In particular, for x, y ∈ X the iterated, second-order difference operator equals
$$D^2_{x,y} F = D_x (D_y F) = F(\eta + \delta_x + \delta_y) - F(\eta + \delta_x) - F(\eta + \delta_y) + F(\eta).$$
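As a simple illustration (an example of our own, not taken from the text above): for the point count $F = \eta(B)$ of a set $B \in \mathcal{X}$ with $\lambda(B) < \infty$, the difference operators can be computed explicitly:

```latex
D_x F = (\eta + \delta_x)(B) - \eta(B) = \mathbb{1}\{x \in B\},
\qquad
D^2_{x,y} F = D_x \mathbb{1}\{y \in B\} = 0 ,
```

since $D_y F = \mathbb{1}\{y \in B\}$ does not depend on η, so that applying a further difference operator yields zero.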
For $F \in L^2_\eta$ define
$$f_n(x_1, \dots, x_n) = \frac{1}{n!} \mathbb{E}\big[ D^n_{x_1, \dots, x_n} F \big]$$
for $x_1, \dots, x_n \in \mathbb{X}$ and n ∈ N. Then $f_n$ is symmetric and square-integrable for all n ∈ N, and the Fock space representation of F yields
$$\operatorname{Var} F = \sum_{n=1}^{\infty} n! \, \|f_n\|_n^2, \qquad (1.1)$$
where $\|\cdot\|_n$ denotes the norm on $L^2(\lambda^n)$ (see, for example, [21, Theorem 1.1] or [22, Theorem 18.6]). Using this representation, one can directly derive
$$\operatorname{Var} F \ge \|f_1\|_1^2 = \int_{\mathbb{X}} \big( \mathbb{E}[D_x F] \big)^2 \, \lambda(\mathrm{d}x). \qquad (1.2)$$
The problem with this lower variance bound is that the difference operator can in general be positive or negative and, thus, can have expectation zero. To overcome this issue, we provide in this paper a counterpart to the well-known Poincaré inequality
$$\operatorname{Var} F \le \int_{\mathbb{X}} \mathbb{E}\big[ (D_x F)^2 \big] \, \lambda(\mathrm{d}x) \qquad (1.3)$$
for $F \in L^2_\eta$ (see, for example, [22, Theorem 18.7]). In the following main result we give a condition under which the variance of F can be bounded from below by a constant times the right-hand side of the Poincaré inequality, whence we can think of it as a reversed Poincaré inequality.
Theorem 1.1. Let $F \in L^2_\eta$ be such that $\int_{\mathbb{X}} \mathbb{E}[(D_x F)^2] \, \lambda(\mathrm{d}x) < \infty$ and
$$\int_{\mathbb{X}^2} \mathbb{E}\big[ (D^2_{x,y} F)^2 \big] \, \lambda^2(\mathrm{d}(x,y)) \le \alpha \int_{\mathbb{X}} \mathbb{E}\big[ (D_x F)^2 \big] \, \lambda(\mathrm{d}x) \qquad (1.4)$$
for some constant α ≥ 0. Then
$$\operatorname{Var} F \ge \frac{1}{\alpha + 1} \int_{\mathbb{X}} \mathbb{E}\big[ (D_x F)^2 \big] \, \lambda(\mathrm{d}x). \qquad (1.5)$$
The inequality (1.5) provides a non-trivial lower bound for the variance as soon as one can show that the difference operator is non-zero with positive probability. To this end, one can construct special point configurations that lead to a non-zero difference operator and occur with positive probability. This is often much easier than verifying that the expectation of the difference operator is non-zero, as required in (1.2).
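As a consistency check (again an example of our own): for a linear functional $F = \int g \,\mathrm{d}\eta$ with $g \in L^1(\lambda) \cap L^2(\lambda)$ one has

```latex
D_x F = g(x), \qquad D^2_{x,y} F = 0 ,
```

so condition (1.4) is satisfied with α = 0, and the resulting lower bound $\operatorname{Var} F \ge \int_{\mathbb{X}} g(x)^2 \, \lambda(\mathrm{d}x)$ is attained with equality, since $\operatorname{Var} \int g \,\mathrm{d}\eta = \int g^2 \,\mathrm{d}\lambda$ by the Campbell formula. In this sense the reversed Poincaré inequality is sharp.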
Let us discuss some alternative approaches to derive lower variance bounds for Poisson functionals or statistics arising in stochastic geometry. In [20, Theorem 5.2], a general lower bound for variances of Poisson functionals is established, where, for fixed k ∈ N and $I_1, I_2 \subseteq \{1, \dots, k\}$, one has to bound from below an expected difference of the Poisson functional evaluated at η augmented by the points $(x_i)_{i \in I_1}$ and at η augmented by the points $(x_i)_{i \in I_2}$, for $x_1, \dots, x_k \in \mathbb{X}$. Since here more than one point can be added, which allows one to enforce particular point configurations, this expression is often easier to control than the expectation of the first difference operator in (1.2). But one still has the problem that the difference within the expectation can be both positive and negative.
In [5,25,27,28], lower bounds for variances of so-called stabilising functionals of Poisson processes, and sometimes also of binomial point processes, were deduced. These results all have in common that generalised difference or add-one-cost operators are required to be non-degenerate. This is similar to our work, but the random variable that has to be non-degenerate is more involved than the difference operator and, moreover, the results apply only to stabilising functionals and not to general Poisson functionals.
A further approach is to condition on some σ-field and to bound the variance from below by the expectation of the conditional variance with respect to this σ-field. In the context of stochastic geometry this was used, for example, in [2] or [3,30]. By conditioning on the σ-field it is sufficient to consider some particular point configurations, similarly as in our Theorem 1.1. In the recent preprint [11], a condition requiring that some conditional expectations are not degenerate is used to establish lower variance bounds for stabilising functionals.
In order to demonstrate how Theorem 1.1 can be applied, we derive lower variance bounds for specific examples from stochastic geometry:

Spatial random graphs. We consider degree and component counts of random geometric graphs as well as edge length functionals and degree counts of k-nearest neighbour graphs. By proving lower bounds for variances of linear combinations of such statistics, we show the positive definiteness of asymptotic covariance matrices. Combining these findings with the results from [33, Section 3] provides quantitative multivariate central limit theorems for the corresponding random vectors.

Random polytopes. By taking the convex hull of the points of a homogeneous Poisson process in the d-dimensional unit ball, one obtains a random polytope. We study the $L^p$ surface area, which generalises volume and surface area. For two different $L^p$ surface areas we show positive definiteness of the asymptotic covariance matrix and, as a consequence, a result for the multivariate normal approximation. In particular, this allows us to study the joint behaviour of volume and surface area of the random polytope.
Poisson shot noise processes. We provide a lower variance bound for the volume of excursion sets of a Poisson shot noise process. In comparison to the works [9], [16] or [17], we modify the assumptions on the kernel function of the Poisson shot noise process.
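To fix ideas, here is a small Python sketch (our own illustration; the Gaussian-shaped kernel and all parameter values are assumptions, not taken from the paper) of a one-dimensional shot noise process $X(t) = \sum_{y \in \eta} g(t - y)$ and a Monte Carlo estimate of the volume of an excursion set $\{t : X(t) \ge u\}$:

```python
import numpy as np

rng = np.random.default_rng(42)

def shot_noise(t, points, g):
    """Poisson shot noise process X(t) = sum over points y of g(t - y)."""
    return sum(g(t - y) for y in points)

def excursion_volume(points, g, u, a=0.0, b=1.0, n_grid=1000):
    """Riemann-sum estimate of the volume of the excursion set
    {t in [a, b] : X(t) >= u}."""
    grid = np.linspace(a, b, n_grid)
    values = np.array([shot_noise(t, points, g) for t in grid])
    return (values >= u).mean() * (b - a)

# Example: Poisson points of intensity 10 on [0, 1], Gaussian-shaped kernel.
points = rng.random(rng.poisson(10.0))
g = lambda t: np.exp(-50.0 * t ** 2)
vol = excursion_volume(points, g, u=0.5)
```

Repeating the simulation over independent copies of the point configuration gives an empirical variance of the excursion volume, the quantity bounded from below in Section 5.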
The considered statistics of spatial random graphs fit into the framework of stabilising functionals of Poisson processes, whence the results for the non-degeneracy of the asymptotic variance of stabilising functionals discussed above might be applicable. The $L^p$ surface area is still stabilising, but here the variance does not scale like the intensity of the underlying Poisson process, whence the previously mentioned results are not available any more. Finally, in the case of general Poisson shot noise processes we do not have stabilisation at all. In order to apply Theorem 1.1, one has to bound the left-hand side of (1.4) from above. In the case of the spatial random graphs and the random polytope, this can be done easily by employing results from [18] due to stabilisation.
This paper is organised as follows. Our main result, Theorem 1.1, is proven in Section 2. The following three sections are devoted to applications: statistics of spatial random graphs in Section 3, the $L^p$ surface area of random polytopes in Section 4 and the excursion sets of a Poisson shot noise process in Section 5. Finally, we recall some facts about stabilising functionals in the appendix.

Proof of Theorem 1.1
The proof of Theorem 1.1 relies on the Fock space representations of F and of its first two difference operators.
Proof of Theorem 1.1. For n ∈ N let $f_n$ denote the kernels of the Fock space representation of F. Recall from (1.1) that
$$\operatorname{Var} F = \sum_{n=1}^{\infty} n! \, \|f_n\|_n^2.$$
First we assume α > 0. Then we know by assumption (1.4) that $F, D_x F, D^2_{x,y} F \in L^2_\eta$ for λ-a.e. x, y ∈ X. Using Fubini's theorem, the monotone convergence theorem and applying the Fock space representation (1.1) to the first and second order difference operators provides
$$\int_{\mathbb{X}} \mathbb{E}\big[ (D_x F)^2 \big] \, \lambda(\mathrm{d}x) = \sum_{n=1}^{\infty} n \cdot n! \, \|f_n\|_n^2$$
and, similarly,
$$\int_{\mathbb{X}^2} \mathbb{E}\big[ (D^2_{x,y} F)^2 \big] \, \lambda^2(\mathrm{d}(x,y)) = \sum_{n=2}^{\infty} n(n-1) \cdot n! \, \|f_n\|_n^2.$$
Therefore, assumption (1.4) means that
$$\sum_{n=2}^{\infty} n(n-1) \cdot n! \, \|f_n\|_n^2 \le \alpha \sum_{n=1}^{\infty} n \cdot n! \, \|f_n\|_n^2.$$
Combining this with the Cauchy–Schwarz inequality
$$\Big( \sum_{n=1}^{\infty} n \cdot n! \, \|f_n\|_n^2 \Big)^2 \le \Big( \sum_{n=1}^{\infty} n! \, \|f_n\|_n^2 \Big) \Big( \sum_{n=1}^{\infty} n^2 \cdot n! \, \|f_n\|_n^2 \Big)$$
and $n^2 = n + n(n-1)$ provides the lower bound for the variance in (1.5) for α > 0.
For α = 0 we have that $D^2_{x,y} F = 0$ almost surely for λ-a.e. x, y ∈ X. Hence, all difference operators of order greater than or equal to 2 vanish almost surely for λ-a.e. x, y ∈ X. Therefore, $\|f_n\|_n = 0$ for all n ∈ N with n ≥ 2. It follows from the representation of the difference operator in terms of the kernels of the Fock space representation (see e.g. [19, Theorem 3]) that $D_x F = f_1(x)$ almost surely for λ-a.e. x ∈ X, which provides the bound in Theorem 1.1 for α = 0.

Remark 2.1. Note that Fock space representations also exist for functionals of isonormal Gaussian processes and for functionals of Rademacher sequences (i.e. sequences of independent random variables with values ±1). For these one can also define operators D and D² whose Fock space representations are as in the Poisson case. Since our proof of Theorem 1.1 only requires the Fock space representations of F, DF and D²F, the statement of Theorem 1.1 continues to hold for functionals of isonormal Gaussian processes and for functionals of Rademacher sequences if we rewrite the integrals with respect to λ in a proper way. For more details on the Fock space representations and the operators D and D² we refer the reader to, for example, [24] for the Gaussian case and [15] for the Rademacher case.

Spatial random graphs
In the following sections we apply our main result to problems from stochastic geometry. Therefore, we interpret Poisson processes as collections of random points in X, which is why, slightly abusing notation, we write from now on x ∈ η if the point x is charged by η, and, for A ⊆ X, we use η ∩ A and η\A for the corresponding restrictions of η. Throughout this paper, we denote by $\lambda_d$ the d-dimensional Lebesgue measure and by $\kappa_d$ the volume of the d-dimensional unit ball for d ≥ 1. The d-dimensional closed ball with centre x and radius r is denoted by $B^d(x, r)$.
Let $W \subset \mathbb{R}^d$ be a non-empty compact convex set with $\lambda_d(W) > 0$. For s ≥ 1 let $\eta_s$ be a homogeneous Poisson process on W with intensity s, i.e. a Poisson process on $\mathbb{R}^d$ with intensity measure $\lambda = s\lambda_d|_W$, where $\lambda_d|_W$ denotes the restriction of the Lebesgue measure to W. In the following we study the asymptotic behaviour as s → ∞.

Random geometric graph
In this section we consider the vector of degree counts and the vector of component counts of a random geometric graph. For both examples we know from [33, Section 3.2] that, after centering and with a scaling of $s^{-1/2}$, they fulfil a quantitative central limit theorem in $d_2$- and $d_{convex}$-distance if the corresponding asymptotic covariance matrix is positive definite. In the following we show that the asymptotic covariance matrix is indeed positive definite.
Let $G_{r_s}$ denote the random geometric graph that is generated by $\eta_s$ and has radius $r_s = \varrho s^{-1/d}$ for a fixed ϱ > 0, i.e. the vertex set of the graph is $\eta_s$ and two distinct vertices $v_1, v_2 \in \eta_s$ are connected by an edge if $\|v_1 - v_2\| \le r_s$. For $j \in \mathbb{N}_0$ let $V^{r_s}_j$ be the number of vertices of degree j in $G_{r_s}$, i.e.
$$V^{r_s}_j = \sum_{x \in \eta_s} \mathbb{1}\{\deg(x, \eta_s) = j\}.$$
Theorem 3.1. a) For s → ∞ the asymptotic covariance matrix of the vector of degree counts $\frac{1}{\sqrt{s}}(V^{r_s}_{j_1}, \dots, V^{r_s}_{j_n})$ for distinct $j_i \in \mathbb{N}_0$, i ∈ {1, . . ., n}, is positive definite, i.e. for any $\alpha = (\alpha_1, \dots, \alpha_n) \in \mathbb{R}^n \setminus \{0\}$ there exists a constant c > 0 such that for s sufficiently large
$$\operatorname{Var}\Big( \frac{1}{\sqrt{s}} \sum_{i=1}^n \alpha_i V^{r_s}_{j_i} \Big) \ge c.$$
Before we prove the theorem, we introduce the following lemma that provides condition (1.4). It gives an estimate for the expected integral of the squared second-order difference operator of a stabilising Poisson functional. We call a Poisson functional $F_s$ stabilising if it can be written as a sum of scores, i.e.
$$F_s = \sum_{y \in \eta_s} \xi_s(y, \eta_s), \qquad (3.1)$$
where the scores $\xi_s$ fulfil certain stabilisation properties that are recalled in the appendix.
Proof of Theorem 3.1. For x ∈ W and $j \in \mathbb{N}_0$ the difference operators are given by
$$D_x V^{r_s}_j = \mathbb{1}\{\deg(x, \eta_s + \delta_x) = j\} + \sum_{y \in \eta_s} \big( \mathbb{1}\{\deg(y, \eta_s + \delta_x) = j\} - \mathbb{1}\{\deg(y, \eta_s) = j\} \big).$$
Let $m = \operatorname{argmax}_{i \in \{1,\dots,n\}: \alpha_i \neq 0} j_i$ and x ∈ W. For a) we consider configurations where the added point x has exactly $j_m$ neighbours. Then it follows for any y ∈ η_s with $\|y - x\| \le r_s$ that its degree increases by one, while the degrees of all the other points are not affected by adding x. Thus, in this situation only the numbers of points with degree $j_m$ and $j_m + 1$ change. Due to the choice of m, we obtain a lower variance bound of order s, where c > 0 depends on W, α, ϱ, k and d.
Both functionals can be written as sums of scores as in (3.1). For $j \in \mathbb{N}_0$, y ∈ η_s and s ≥ 1 the score for the degree count of degree j is given by $\xi_s(y, \eta_s) = \mathbb{1}\{\deg(y, \eta_s) = j\}$, and for j ∈ N, y ∈ η_s and s ≥ 1 the score for the number of components of size j is $\xi_s(y, \eta_s) = \frac{1}{j}\mathbb{1}\{|C(y, \eta_s)| = j\}$.
These scores clearly fulfil a (4 + p)-th moment condition and are exponentially stabilising by [33, proofs of Theorem 3.5 (b) and Theorem 3.6 (b)]. Therefore, we can apply Lemma 3.2, which together with Theorem 1.1 completes the proof.
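The degree counts studied above can be computed directly; the following Python sketch (our own illustration, with hypothetical helper names) builds the adjacency of a geometric graph with radius r and returns the vector of degree counts $(V_0, V_1, \dots)$:

```python
import numpy as np

def degree_counts(points, r):
    """Degree counts of the geometric graph with radius r:
    entry j of the returned array is V_j, the number of
    vertices of degree j."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    adj = (dist <= r) & ~np.eye(len(pts), dtype=bool)  # no self-loops
    degrees = adj.sum(axis=1)
    return np.bincount(degrees)
```

Simulating η_s for growing s and estimating $\operatorname{Var}(\sum_i \alpha_i V^{r_s}_{j_i})$ empirically from such counts illustrates the linear-in-s growth asserted in Theorem 3.1.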

k-nearest neighbour graph
Central limit theorems for the total edge length of a k-nearest neighbour graph of a Poisson process are derived in e.g. [2,5,18,20,28,29,33]. The first quantitative result can be found in [2]. Its rate of convergence was further improved in [29], before the presumably optimal rate was shown in [20]. In [33] this result was transferred to the multivariate case of a vector of edge length functionals, but it was left open to show in general that its asymptotic covariance matrix is positive definite. For edge length functionals of non-negative powers this is proven in the following.
We consider the k-nearest neighbour graph for k ∈ N that is generated by the Poisson process η_s, i.e. the undirected graph with vertex set η_s, where each vertex is connected with its k nearest neighbours. For q ∈ [0, ∞) let $L_q$ denote the edge length functional of power q of the k-nearest neighbour graph generated by η_s, which is defined by
$$L_q = \frac{1}{2} \sum_{(y_1, y_2) \in \eta^2_{s,\neq}} \mathbb{1}\{ y_1 \in N(y_2, \eta_s) \text{ or } y_2 \in N(y_1, \eta_s) \} \, \|y_1 - y_2\|^q,$$
where $\eta^2_{s,\neq}$ denotes the set of all pairs of distinct points of η_s and $N(y, \eta_s)$ is the set of all k nearest neighbours of y in the k-nearest neighbour graph generated by η_s. Let $F_q = s^{q/d} L_q$ be its scaled version.

Theorem 3.3. For s → ∞ the asymptotic covariance matrix of $\frac{1}{\sqrt{s}}(F_{q_1}, \dots, F_{q_n})$ for distinct $q_i \ge 0$, i ∈ {1, . . ., n}, is positive definite, i.e. for any $\alpha = (\alpha_1, \dots, \alpha_n) \in \mathbb{R}^n \setminus \{0\}$ there exists a constant c > 0 such that for s sufficiently large
$$\operatorname{Var}\Big( \frac{1}{\sqrt{s}} \sum_{i=1}^n \alpha_i F_{q_i} \Big) \ge c.$$
In order to prove this theorem, we need the following lemma, which considers a slightly more general situation since it will also be employed in a further proof.

Lemma 3.4. Let k ∈ N and j ≥ 1 be fixed. Then there exist constants $c_1, c_2 > 0$ depending on k, j, d and W such that the asserted probability bound holds for all ε > 0 and x ∈ W with $B^d(x, 2(j+1)\varepsilon) \subset W$.

Proof. By [20, Lemma 7.4] there is a constant $c_W > 0$ only depending on W that controls the volume of the relevant regions for some constant c > 0. For $t \in \mathbb{N}_0$ there exist constants $\tilde{c}_1, \tilde{c}_2 > 0$ such that $z^t e^{-z} \le \tilde{c}_1 e^{-\tilde{c}_2 z}$ for all z > 0. Hence, using the Mecke formula and spherical coordinates, we obtain the asserted bound.

Proof of Theorem 3.3. Let $e_i$ denote the d-dimensional standard unit vector in the i-th direction, and let $A_{j,\varepsilon}(x, y)$ be defined as in Lemma 3.4. Then, for q ≥ 0 the difference operator of $F_q$ is given by the $s^{q/d}$-scaled sum of the terms $\|x - y\|^q$ over the edges created or destroyed by adding x.
Inserting j = 1 in Lemma 3.4 provides the required probability bound for some constants $c_1, c_2 > 0$. Now, let $m = \operatorname{argmax}_{i \in \{1,\dots,n\}: \alpha_i \neq 0} q_i$ and assume without loss of generality $\alpha_m > 0$. If $\alpha_i \ge 0$ for all i ∈ {1, . . ., n}, we choose $\varepsilon = c s^{-1/d}$ with c ≥ 1 large enough such that the desired estimate holds for the configurations mentioned above and $c_1 e^{-s c_2 \varepsilon^d} < \frac{1}{2}$. Otherwise, let $\ell = \operatorname{argmax}_{i \in \{1,\dots,n\}: \alpha_i < 0} q_i$. Then $q_m > q_\ell$, and a corresponding estimate follows for the configurations mentioned above for $s^{1/d}\varepsilon \ge 1$. In this case, choose $\varepsilon = s^{-1/d} c > 0$ with c ≥ 1 large enough such that $c_1 e^{-s c_2 \varepsilon^d} < \frac{1}{2}$, and consider $\eta_s(A_{1,\varepsilon}(x, y))$ for $y \in \eta_s \setminus B^d(x, \varepsilon)$ and $x \in A_s$; by Lemma 3.4 the required bounds hold for s large enough. Our functionals can be written as sums of scores as in (3.1). For y ∈ η_s, q ≥ 0 and s ≥ 1 the corresponding score of $F_q$ is given by the $s^{q/d}$-scaled half-sum of the q-th powers of the lengths of the edges incident to y. The scores $(\xi_s)_{s \ge 1}$ fulfil a (4 + p)-th moment condition (see the proof of [18, Theorem 3.1]) and are exponentially stabilising by [33, proof of Theorem 3.1]. Therefore, we can apply Lemma 3.2, which together with Theorem 1.1 completes the proof.
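The edge length functional $L_q$ can be evaluated on a finite configuration as follows (a sketch of our own with hypothetical names; each undirected edge is counted once):

```python
import numpy as np

def knn_edge_length(points, k=1, q=1.0):
    """Edge length functional L_q of the k-nearest neighbour graph:
    the sum of |e|^q over the undirected edges e."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)         # a point is not its own neighbour
    edges = set()
    for i in range(len(pts)):
        for j in np.argsort(dist[i])[:k]:  # the k nearest neighbours of point i
            edges.add(frozenset((i, int(j))))
    return sum(dist[min(e), max(e)] ** q for e in edges)
```

For q = 0 this returns the number of edges, and for q = 1 the total edge length, the two most common special cases of $L_q$.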
In the following we consider a second statistic of k-nearest neighbour graphs, namely the number of vertices with a given degree. Similarly to the previous example, it was shown in [33, Theorem 3.3] that a vector of these degree counts fulfils a quantitative multivariate central limit theorem in $d_2$- and $d_{convex}$-distance if its asymptotic covariance matrix is positive definite.
For $j \in \mathbb{N}_0$ let $V^k_j$ denote the number of vertices of degree j in the k-nearest neighbour graph generated by η_s, i.e.
$$V^k_j = \sum_{x \in \eta_s} \mathbb{1}\{\deg(x, \eta_s) = j\}.$$
Proof. First note that the degrees $j_1, \dots, j_n$ are chosen in such a way that they can occur in a k-nearest neighbour graph. A vertex can have k neighbours if it is only connected to its k nearest neighbours, and it can have up to $k_{max}$ neighbours by the definition of $k_{max}$.
All degrees in between can occur as well, as can be seen from the following construction.
Assume we have a configuration where x has $k_{max}$ neighbours. Then we delete $1 \le t \le k_{max} - k$ vertices which are connected to x but are not among the k nearest neighbours of x, and all other vertices that are not connected to x. Consequently, we obtain a configuration where x has degree $k_{max} - t$. This means that $\mathbb{P}(\deg(x, \beta_{j_i} \cup \{x\}) = j_i) > 0$ for i ∈ {1, . . ., n}, where $\beta_{j_i}$ denotes a binomial point process of $j_i$ independent random points uniformly distributed in $B^d(0, 1)$. Obviously, these probabilities do not change if we take a binomial point process on any other ball.
For x ∈ W the difference operator of $V^k_j$ can be computed explicitly. Denote $I = \{i \in \{1, \dots, n\} : \alpha_i \neq 0\}$ and $m = \operatorname{argmin}_{i \in I} j_i$. We can assume $\alpha_m > 0$ without loss of generality. In the following we distinguish several cases that are illustrated in Figure 1.
Case 1: $j_m > k$. With $A_{3,\varepsilon}(x, y)$ defined as in Lemma 3.4, applying Lemma 3.4 for j = 3 provides the required bound on the configuration probability. Then, using independence properties, we obtain a lower bound in terms of $p_m = \mathbb{P}(\deg(x, \beta_{j_m} \cup \{x\}) = j_m)$.

Case 2: $j_m = k$.
If it exists, we denote by $\ell \in \{1, \dots, n\}$ the index with $\alpha_\ell \neq 0$ and $j_\ell \neq j_m$. Let ε > 0 and let x ∈ W be such that $B^d(x, 8\varepsilon) \subset W$. We consider four different configurations to deal with all possible vectors $\alpha = (\alpha_1, \dots, \alpha_n) \in \mathbb{R}^n \setminus \{0\}$ (see Figure 1). Let $e_i$ denote the d-dimensional standard unit vector in the i-th direction.
The scores $(\xi_s)_{s \ge 1}$ clearly fulfil a (4 + p)-th moment condition and are exponentially stabilising by [33, proof of Theorem 3.3]. Therefore, we can apply Lemma 3.2, which together with Theorem 1.1 completes the proof.

Remark 3.6. Throughout this section we assume that the underlying Poisson processes have the intensity measures $s\lambda_d|_W$ for s ≥ 1. However, we can generalise our results from these homogeneous Poisson processes to a large class of inhomogeneous Poisson processes. Let μ be a measure with a density g : W → [0, ∞) such that $\underline{c} \le g(x) \le \overline{c}$ for all x ∈ W and constants $\underline{c}, \overline{c} > 0$. All results of this section continue to hold for Poisson processes with intensity measures sμ for s ≥ 1. We only have to modify the proofs slightly by bounding the intensity measure from below by $s\underline{c}\lambda_d|_W$ or from above by $s\overline{c}\lambda_d|_W$, depending on whether a lower or an upper bound is required in our estimates. Consequently, some of the constants might change.
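The inhomogeneous processes of Remark 3.6 can be simulated by the standard thinning construction (a sketch of our own; the density g below is a made-up example): sample a homogeneous process of intensity $s\overline{c}$ and keep a point x independently with probability $g(x)/\overline{c}$.

```python
import numpy as np

rng = np.random.default_rng(7)

def inhomogeneous_poisson(s, g, c_bar, d=2):
    """Poisson process with intensity measure s*g(x)dx on [0, 1]^d,
    obtained by thinning a homogeneous process of intensity s*c_bar.
    Requires g(x) <= c_bar on [0, 1]^d."""
    n = rng.poisson(s * c_bar)
    pts = rng.random((n, d))
    keep = rng.random(n) < np.array([g(x) for x in pts]) / c_bar
    return pts[keep]

# Example density bounded between 0.5 and 1.5 (an assumption for illustration).
g = lambda x: 1.0 + 0.5 * np.sin(2 * np.pi * x[0])
points = inhomogeneous_poisson(200.0, g, c_bar=1.5)
```

The bounds $\underline{c} \le g \le \overline{c}$ from the remark are exactly what guarantees that the thinning probability is well-defined and bounded away from zero.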

Random polytopes
The study of the convex hull of random points started with the works [31] and [32]. In [30], central limit theorems for the volume and the number of k-faces as well as variance bounds were shown. Variance asymptotics and central limit theorems for all intrinsic volumes of the convex hull in a ball were derived in [10]. In [18], the rates of convergence for the central limit theorems were further improved.
The $L^p$ surface area measure for a convex body was introduced in [23], where the $L^p$ Minkowski problem was described. The Minkowski problem asks for conditions on a Borel measure on the sphere under which this measure is the $L^p$ surface area measure of a convex body. The discrete $L^p$ Minkowski problem is obtained in the special case where this convex body is a polytope. This situation can, for example, be found in [14] and the references therein. In [13], the expected $L^p$ surface area of random polytopes was considered as a special case of T-functionals of random polytopes.
In this section the two-dimensional vector of $L^p$ surface areas of a random polytope for different $p_1, p_2 \in [0, 1]$ is considered, and lower variance bounds for linear combinations as well as a result on the multivariate normal approximation are derived. For s ≥ 1 let $\eta_s$ be a homogeneous Poisson process on $B^d(0, 1)$ with intensity s, i.e. a Poisson process on $\mathbb{R}^d$ with intensity measure $\lambda = s\lambda_d|_{B^d(0,1)}$, where $\lambda_d|_{B^d(0,1)}$ denotes the restriction of the Lebesgue measure to $B^d(0, 1)$. We consider the random polytope Q generated by $\eta_s \cup \{0\}$, i.e. Q is the convex hull $\operatorname{Conv}(\eta_s \cup \{0\})$. For p ∈ [0, 1] its $L^p$ surface area is given by
$$A_p = A_p(Q) = \sum_{F \in \mathcal{F}(Q)} \operatorname{dist}(0, F)^{1-p} \, \lambda_{d-1}(F), \qquad (4.1)$$
where $\mathcal{F}(Q)$ denotes the set of facets of Q and dist(0, F) stands for the distance of F to the origin 0 (see for instance [13, Section 1]).
Theorem 4.1. The asymptotic covariance matrix of the vector $s^{(d+3)/(2(d+1))}(A_{p_1}, A_{p_2})$ for $p_1, p_2 \in [0, 1]$ with $p_1 \neq p_2$ is positive definite, i.e. for any $\alpha = (\alpha_1, \alpha_2) \in \mathbb{R}^2 \setminus \{0\}$ there exists a constant c > 0 such that for s sufficiently large
$$\operatorname{Var}\big( s^{(d+3)/(2(d+1))} (\alpha_1 A_{p_1} + \alpha_2 A_{p_2}) \big) \ge c.$$
Note that we add the origin as an extra point to the Poisson process mainly for technical reasons, to ensure a useful definition of the $L^p$ surface area. However, since we are in this section only interested in asymptotic statements for s → ∞, this does not make a difference. Let $\widetilde{Q}$ denote the random polytope that is generated by $\eta_s$, i.e. $\widetilde{Q} = \operatorname{Conv}(\eta_s)$, and let $A_p(\widetilde{Q})$ be defined by the right-hand side of (4.1), which is also well-defined if the origin does not belong to the polytope. Since one can choose m disjoint sets $U_1, \dots, U_m \subset B^d(0, 1)$ for some m ∈ N with $\lambda_d(U_i) > 0$, i ∈ {1, . . ., m}, such that $0 \in \operatorname{Conv}(\xi)$ for all $\xi \in \mathbf{N}$ with $\xi \cap U_i \neq \emptyset$ for all i ∈ {1, . . ., m}, we have for s ≥ 1, with suitable constants $c_{1,q}, c_{2,q} > 0$, an exponentially small bound on the probability that Q and $\widetilde{Q}$ differ. Therefore, the triangle inequality and the corresponding estimate for $|A_p(Q) - A_p(\widetilde{Q})|$, and similarly for x ∈ η_s, where $\mathcal{F}$ denotes the set of all facets of Q, allow us to pass between Q and $\widetilde{Q}$. We establish that the scores $\tilde{\xi}_s$ have some crucial properties. For exact definitions we refer to Appendix A.
Lemma 4.2. The scores $\tilde{\xi}_s$ are exponentially stabilising with $\alpha_{stab} = d + 1$, decay exponentially fast with the distance to the boundary $\partial B^d(0, 1)$ with $\alpha_K = d + 1$, and fulfil a q-th moment condition for q ≥ 1.
Proof. Analogously to [18, Lemma 3.10, Lemma 3.11 and Lemma 3.12] one can show that the scores are exponentially stabilising and decay exponentially fast with the distance to the boundary $\partial B^d(0, 1)$.
Let $R(x, \eta_s \cup \{x\})$ denote the corresponding radius of stabilisation with respect to the $d_{max}$-distance that is derived in [18, p. 963], and let $\tilde{\xi}_{d-1,s}$ denote the slightly adjusted version of the score $\xi_{d-1,s}$, which is defined as $\tilde{\xi}_s$ in (4.5).
In order to show a q-th moment condition for p ∈ [0, 1], we use an estimate in terms of the stabilisation radius, where $B^d_{max}$ denotes the ball with respect to the $d_{max}$-distance. Recall that $\mathcal{F}$ stands for the set of all facets of the random polytope. Hence, due to the monotonicity of the surface area of convex sets, we can bound the surface area of the facets containing x. Let H be the hyperplane through $\partial B^d(x, R(x, \eta_s \cup \{x\})) \cap \partial B^d(0, 1)$. By the definition of the radius of stabilisation in [18, p. 963], we know that for each vertex x of the random polytope with $R(x, \eta_s \cup \{x\}) \le 1$, the segment [0, x] intersects H, where [0, x] denotes the line segment connecting 0 and x. Moreover, we get with [18, p. 963] that for a vertex x the distance of the origin to a facet that contains x is at least as large as the distance from the origin to the hyperplane H. Hence, for a facet F that contains x we obtain the bound (4.7), since the radius of the (d − 1)-dimensional ball $H \cap B^d(0, 1)$ can be bounded from above by $R(x, \eta_s \cup \{x\})$. The bound in (4.7) is obviously also true for $R(x, \eta_s \cup \{x\}) > 1$.
In order to derive Theorem 4.1 from Theorem 1.1, we consider the situation that adding an additional point increases the random polytope by exactly one simplex over an existing facet. Lemma 4.3 allows us to control the corresponding change of the $L^p$ surface area. The main challenge of the following proof is to show that the described situation is sufficiently likely.

$\sqrt{a}\, s^{-1/(d+1)}$ for s sufficiently large. Therefore, we can choose $\varepsilon_\ell, \varepsilon_h > 0$ small enough such that the corresponding estimate holds for all t ∈ [0, 1/2] and s sufficiently large, with a constant $c_{h,l} > 0$. Here the constant $c_{h,l}$ does not depend on a, while the lower bound on s for which the inequality holds may depend on a. The same applies to the inequalities and constants in the sequel if not stated otherwise. Moreover, using again (4.15) as well as $\varepsilon_h, \varepsilon_\ell \le 1/4$, we obtain an analogous upper bound for a suitable constant $c_{h,u} > 0$, t ∈ [0, 1/2] and s sufficiently large. By e.g. [7, Section 6, p. 367] the k-dimensional volume $\lambda_k$ of a k-dimensional regular simplex $S_k$ with edge length 2ℓ is
$$\lambda_k(S_k) = \frac{\sqrt{k+1}}{k!} \big( \sqrt{2}\, \ell \big)^k$$
for k ∈ N. By definition, $T_i$ with i ∈ {1, . . ., d} is a regular (d − 2)-dimensional simplex of side length $2\ell = 2\sqrt{a}\, s^{-1/(d+1)}$. We know that the (d − 2)-dimensional volume of a (d − 2)-dimensional regular simplex of side length $2\sqrt{a}$ in $\mathbb{R}^d$ is continuous with regard to translations of the vertices. Therefore, we can choose a cube around each vertex small enough such that moving each vertex within the corresponding cube changes the (d − 2)-dimensional volume of the (d − 2)-dimensional simplex only slightly. Due to homogeneity we can transfer this result to a regular simplex of side length $2\sqrt{a}\, s^{-1/(d+1)}$ for all s ≥ 1, where each side of the cubes is scaled by $s^{-1/(d+1)}$. Hence, we can choose $\varepsilon_h, \varepsilon_\ell \in (0, 1/4)$ small enough such that, with (4.18), the volume of $T_i$ is bounded from above for s sufficiently large with a suitable constant $c_{T,u} > 0$. Together with (4.17) and (4.14), this yields bounds for i ∈ {1, . . ., d}. Hence, we have for j ∈ {1, . . ., d + 1} and s sufficiently large an upper bound with a suitable constant $c_{F,u} > 0$, and analogously, for s sufficiently large, a lower bound with a suitable constant $c_{F,l} > 0$ and j ∈ {1, . . ., d + 1}. Due to the fundamental theorem of calculus we have for x > y > 0 a difference estimate for powers of x and y. We can assume without loss of generality that $p_1 < p_2$ and a suitable normalisation of $(\alpha_1, \alpha_2)$. In the following we distinguish the cases $\alpha_1 \neq -\alpha_2$ and $\alpha_1 = -\alpha_2$. For $\alpha_1 \neq -\alpha_2$ we have, by Lemma 4.3, for suitable constants $\tilde{c}_d, \tilde{c}_{d,p_1,p_2} > 0$, an estimate in which we used that $\rho_{d+1} \ge \frac{1}{2}$ for s sufficiently large. Hence, we can fix a > 0 large enough such that this estimate provides for $\alpha_1 \neq -\alpha_2$ the existence of a constant $\tilde{c}_1 > 0$ for which the desired lower bound holds for s sufficiently large and t ∈ [0, 1/2]. For $\alpha_1 = -\alpha_2$ we fix a ∈ (0, 1). To use the second part of Lemma 4.3, we need an estimate for $\rho_i - \rho_{d+1}$ for i ∈ {1, . . ., d}. Let $u_i$ be the projection of 0 to $F_i$ for i ∈ {1, . . ., d + 1} (see Figure 4: decomposition of the projection of 0 to $F_i$), and note that $\tilde{x}_{d+1}$, which we introduced as the projection of $x^{(d+1)}$ to $F_{d+1}$, is also the projection of 0 onto $F_{d+1}$. Then, for every i ∈ {1, . . ., d}, there exist a constant $\beta_i \ge 0$ and a vector $v_i$ orthogonal to $u_{d+1}$ such that $u_i$ can be decomposed accordingly. Let $\bar{u}$ be the projection of $u_{d+1}$ to $F_{d+1}$, while $\tilde{z}_0$ is the intersection point of $F_{d+1}$ with the line through 0 and $z^{(d+1)}$ (see Figure 3). We show that we can choose $\varepsilon_h > 0$ small enough such that $u_{d+1}$ is very close to $\tilde{z}_0$, to ensure a minimum distance from $u_{d+1}$ to $T_i$.
Hence, we can choose $\varepsilon_h \in (0, 1/4)$ small enough such that the corresponding estimate holds, since a ∈ (0, 1), and $\varepsilon_\ell > 0$ small enough for s sufficiently large. Then, with the intercept theorem together with (4.14) and (4.17), we altogether obtain for s sufficiently large a lower bound with a suitable constant $c_{\rho,l,a} > 0$ that depends on a.
For the application of Theorem 1.1 we consider the situation that $z^{(1)}, \dots, z^{(d)}$ are points of the Poisson process and the point $z^{(d+1)}$ is added. To ensure that the change of $\alpha_1 A_{p_1} + \alpha_2 A_{p_2}$ is given by $s(\alpha_1 \Delta_{p_1} + \alpha_2 \Delta_{p_2})$, we require that no further points of η_s are present which would prevent $z^{(1)}, \dots, z^{(d)}$ from spanning a facet of the random polytope or which could be connected to $z^{(d+1)}$ by edges. Therefore, we consider a corresponding exclusion set, for some constant $c_a > 0$ which might depend on a.
Due to translation invariance, the same configuration of sets can be constructed for each x, with the suitably rotated regions. Combining our previous considerations leads to a lower bound for s sufficiently large. Together with (4.29) we obtain a corresponding estimate for s sufficiently large. Due to the definition of the sets, the volume of the relevant region is at most of order $s^{-1}$. Therefore, since the Poisson process has intensity s, the order of the whole term in (4.33) can be bounded from below by a multiple of $s^{-1}\lambda_d(A)$, where the set A satisfies $\lambda_d(A) \ge c$ for a suitable constant c > 0 and s sufficiently large. Altogether we obtain the asserted lower bound for some constant C > 0 and s sufficiently large.
Next, we check condition (1.4). Due to Lemma 4.2 we can apply the results in [18, Lemma 5.5 and Lemma 5.9], i.e. there exists a constant C > 0 such that the corresponding estimate holds for $U \subset B^d(0, 1)$ with |U| ≤ 1 and, for any β > 0, with some constants $C_\beta, c_\beta > 0$ and $x \in B^d(0, 1)$. Note that the statements of [18, Lemma 5.9] contain typos, since the exponent α of $d_s(x_1, K)$ is missing in the upper bounds.
As a consequence of the lower variance bound in Theorem 4.1, one can derive bounds for the multivariate normal approximation of two L^p surface areas. To this end, we define the d_convex-distance. Let I be the set of indicator functions of measurable convex sets in R^2. Then, for two-dimensional random vectors Y and Z, the d_convex-distance is defined as d_convex(Y, Z) = sup_{h∈I} |E h(Y) − E h(Z)|.
Theorem 4.4. Let (A_{p_1}, A_{p_2}) be the vector of L^p surface areas for p_1, p_2 ∈ [0, 1] with p_1 ≠ p_2. Denote by Σ(s) the covariance matrix of s^{(d+3)/(2(d+1))} (A_{p_1}, A_{p_2}). Let N_{Σ(s)} be a centred Gaussian random vector with covariance matrix Σ(s). Then there exists a constant c > 0 such that for s ≥ 1.
Analogously to the calculation at the end of the proof of Theorem 4.1 one can show (i), while (ii) follows from (4.8).
In order to establish (iii), we assume that there is a subsequence (s_n)_{n∈N} such that ‖Σ(s_n)^{−1}‖_op → ∞ and s_n → ∞ as n → ∞. From the Poincaré inequality (see (1.3)), (4.34), [18, (5.8) in Lemma 5.10] and (i), one deduces that all variances and, thus, all covariances of the components of Z_s are uniformly bounded for s ≥ 1. By (ii) the same holds for the entries of Σ(s). Thus, there exist a subsequence (s_{n_k})_{k∈N} and a matrix Σ ∈ R^{2×2} such that Σ(s_{n_k}) → Σ as k → ∞. From Theorem 4.1 it follows that Σ is positive definite since α^T Σ α = lim_{k→∞} α^T Σ(s_{n_k}) α > 0 for any α ∈ R^2 \ {0}. Thus, ‖Σ^{−1}‖_op is well-defined and ‖Σ(s_{n_k})^{−1}‖_op → ‖Σ^{−1}‖_op as k → ∞. This contradicts the assumption, so ‖Σ(s)^{−1}‖_op is uniformly bounded for s sufficiently large, which establishes (iii) and completes the proof of (4.36).
Moreover, let Z_s = s^{(d+3)/(2(d+1))} (A_{p_1}, A_{p_2}) = s^{−(d−1)/(2(d+1))} (sA_{p_1}, sA_{p_2}). It follows from the triangle inequality that Since the first term on the right-hand side vanishes exponentially fast by (4.8) and the second one was treated in (4.36), it remains to study the third term. We have that where N_I is distributed according to a two-dimensional standard normal distribution.
From [6, Corollary 3.2] one obtains that the right-hand side is bounded by a constant times
Lower and upper variance bounds of the same order as in Theorem 4.1 were already derived for the volume in [30]. For binomial input, analogous variance bounds for intrinsic volumes were shown in [3]. The case of an underlying Poisson process and, in particular, variance asymptotics for intrinsic volumes were discussed in [10]. We expect that variance asymptotics for the L^p surface area and, especially, the positivity of the asymptotic variance can be derived by the same method as in [10]. However, the proof in [10] cannot be directly transferred to linear combinations of two L^p surface areas because for linear combinations with scalars of different signs the monotonicity argument in [10, p. 100] does not work.
In [12] the multivariate normal approximation of the vector of all intrinsic volumes and all numbers of lower-dimensional faces of the convex hull of Poisson points in a smooth convex body is considered. As in Theorem 4.4, one compares with a multivariate normal distribution with the same covariance matrix, but since the so-called d_3-distance is studied, no information about the regularity of the asymptotic covariance matrix is required. In the same work only positive linear combinations of intrinsic volumes were considered, since for coefficients with different signs it could not be ensured that the corresponding asymptotic variance is positive. For the special case of volume and surface area and an underlying ball, this problem is resolved by Theorem 4.1. In contrast to the findings in [12], Theorem 4.4 deals with non-smooth test functions and the obtained bounds are of a better order since a logarithmic factor could be removed. The rates of convergence derived in [18, Section 3] for the univariate normal approximation of intrinsic volumes in Kolmogorov distance are also of the order s^{−(d−1)/(2(d+1))}.
Remark 4.5. The results of this section remain valid if we assume that the Poisson processes have underlying intensity measures sµ for s ≥ 0, where µ is a measure with a density g : B^d(0, 1) → [0, ∞) satisfying c ≤ g(x) ≤ c̄ for all x ∈ B^d(0, 1) and some constants c, c̄ > 0 (see also Remark 3.6). Moreover, we expect that it is possible to replace the d-dimensional unit ball by a non-empty compact convex subset of R^d with C²-boundary and positive Gaussian curvature. Since the boundaries of these sets, like the boundary of the unit ball, are locally sandwiched between two paraboloids, we believe that arguments similar to those in [18, Subsection 3.4] allow us to prove our results for this larger class of underlying bodies. However, we did not pursue this approach in order not to further increase the length and complexity of the proofs in this section.

Excursion sets of Poisson shot noise processes
Excursion sets of random fields are an important topic of probability theory and have many applications, for example in biology or engineering. For an introduction to this topic see for instance [1]. The most common underlying random fields are Gaussian random fields, but a further prominent choice are the Poisson shot noise processes we consider in this section, defined in (5.1) below for x ∈ R^d. We call (f_η(x))_{x∈R^d} a Poisson shot noise process and note that it is translation invariant. Its excursion set at level u > 0 consists of all x ∈ R^d such that f_η(x) ≥ u. The corresponding volume of the excursion set in an observation window B^d(0, s) with s ≥ 1 is given by Now one is interested in the behaviour of F_s as s → ∞, i.e. as the observation window grows. In [9] variance asymptotics and central limit theorems for the volume of excursion sets of quasi-associated random fields were considered, which include a large class of Poisson shot noise processes (see [9, Proposition 1]). More recently, asymptotics for the variance and central limit theorems for the volume, the perimeter and the Euler characteristic of the excursion sets of Poisson shot noise processes were shown in [16, Section 4], while the paper [17] studied the same questions for smoothed versions of volume and perimeter.
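As a quick numerical illustration of these objects (not part of the paper's argument), the following Python sketch simulates a one-dimensional Poisson shot noise process and approximates the excursion volume F_s on a grid. All concrete choices here are our own assumptions for illustration: a Gaussian kernel g, unit intensity, level u = 0.8 and a padded sampling window to damp boundary effects on [−s, s].

```python
import math
import random

def shot_noise(points, g, x):
    """f_eta(x) = sum over Poisson points y of g(x - y), cf. (5.1)."""
    return sum(g(x - y) for y in points)

def excursion_volume(points, g, u, s, grid=400):
    """Riemann-sum approximation (d = 1) of the volume of the
    excursion set {x in [-s, s] : f_eta(x) >= u}."""
    h = 2 * s / grid
    return h * sum(
        shot_noise(points, g, -s + (k + 0.5) * h) >= u for k in range(grid)
    )

# Illustrative choices of our own: intensity-1 Poisson points on a
# padded window and a Gaussian kernel g (not the paper's kernel).
rng = random.Random(0)
s, pad, u = 5.0, 5.0, 0.8
g = lambda x: math.exp(-x * x)

def sample_points():
    lam = 2 * (s + pad)                 # expected number of points
    L, n, p = math.exp(-lam), 0, 1.0
    while p > L:                        # Knuth's Poisson sampler
        n += 1
        p *= rng.random()
    return [rng.uniform(-s - pad, s + pad) for _ in range(n - 1)]

# Monte Carlo estimate of E[F_s] and Var[F_s] for this toy model.
samples = [excursion_volume(sample_points(), g, u, s) for _ in range(100)]
mean = sum(samples) / len(samples)
var = sum((v - mean) ** 2 for v in samples) / (len(samples) - 1)
```

Repeating this for several window sizes s gives an empirical impression of the growth of Var[F_s] in the volume of the window, which is exactly what the lower bounds of this section control from below.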
We use the following assumption on the kernel function g.
Assumption 1. There exist constants c_g, c̄_g, δ, γ > 0 and C_g ≥ 1 such that δ + d/2 > γ ≥ δ > 3d and for all x ∈ R^d with ‖x‖ ≥ C_g.
By using our Theorem 1.1, we derive lower bounds for variances, which complement the findings from [9, 16]; see the discussion below for more details. Replacing g by g(· − z) for any z ∈ R^d leads to a translation of the Poisson shot noise field and, thus, by translation invariance, to a Poisson shot noise process with the same distribution. Hence, the assumption g(0) > 0 is no loss of generality: any g that takes positive values somewhere can be modified accordingly, while the case of a non-positive function g is trivial because then the excursion set for u > 0 is empty.
Since the volume of the excursion set can be written as an integral over indicator functions, one obtains with Fubini's theorem and translation invariance of the Poisson shot noise process Note that λ_d({y ∈ R^d : y, y + z ∈ B^d(0, s)})/λ_d(B^d(0, s)) ≤ 1 for all z ∈ R^d and that it converges to one as s → ∞ for all z ∈ R^d. Thus, the dominated convergence theorem yields if the integral on the right-hand side is well-defined. However, this explicit formula for the asymptotic variance does not imply the statement of Theorem 5.1 since the difference under the integral could take negative as well as positive values in such a way that the integral becomes zero. Since statements of the form that the variance is at least of the order of the volume of the observation window, as in Theorem 5.1, were already proven in [9, Proposition 1] and [16, Theorem 4.1], let us compare the assumptions of Theorem 5.1 a) with those made before. In [9, Proposition 1] it is required that g is a bounded and uniformly continuous function on R^d with |g(x)| ≤ c‖x‖^{−α} for some constant c > 0 and α > 3d (as in our Assumption 1). A crucial difference is that we allow g to take positive and negative values, while it has to be non-negative in [9], where this assumption might be essential since it ensures that the Poisson shot noise process is positively associated. A lower bound on the decay of |g| as in Assumption 1 is not present in [9], but we use it only to ensure the boundedness of the density of f_η(0), which is assumed in [9]. The result in [9] deals with marks in the sense that in (5.1) each summand is multiplied by an i.i.d. copy of a non-negative random variable. It might be possible to generalise our results in this direction as well. The assumptions in [16, Theorem 4.1] seem to be more restrictive than ours: there it is supposed that g depends only on the norm of its argument and has an upper bound as in Assumption 1 but with δ = 11d. Instead of a lower bound on |g|, a rather technical
assumption (see (4.3) in [16]) is made, which even requires differentiability of g. We are not aware of any results dealing with the situation of part b) of Theorem 5.1. The compact support implies that f_η(0) does not possess a density. We prepare the proof of Theorem 5.1 with the following lemma.
Lemma 5.2. Let g : R^d → R be a continuous, bounded function with g(0) > 0 that fulfils Assumption 1. Then, f_η(x) has a bounded density for x ∈ R^d.
Proof. We use the fact that f_η(x) has a bounded density if its characteristic function ϕ is integrable. By [8, Chapter 1, Lemma 3.7] the characteristic function ϕ of f_η(x) is given by and, therefore, f_η(x) has a bounded density.
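For orientation, the identity the proof relies on can be sketched as follows. For a stationary Poisson process with intensity measure λ_d, the exponential formula for Poisson processes gives (this display is our own sketch of the standard computation, presumably the formula from [8, Chapter 1, Lemma 3.7] referred to above):

```latex
\varphi(t) \;=\; \mathbb{E}\exp\bigl(\mathrm{i} t f_\eta(x)\bigr)
\;=\; \exp\Bigl(\int_{\mathbb{R}^d}\bigl(e^{\mathrm{i} t g(y)}-1\bigr)\,\mathrm{d}y\Bigr),
\qquad t\in\mathbb{R},
```

which does not depend on x by stationarity. If ∫_R |ϕ(t)| dt < ∞, Fourier inversion yields a density of f_η(x) bounded by (2π)^{−1} ∫_R |ϕ(t)| dt, which is the fact used at the start of the proof.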
Proof of Theorem 5.1. Since g is continuous and g(x) → 0 as ‖x‖ → ∞, there exists a ball B^d(t̃, r) with centre t̃ ∈ R^d and radius r > 0 such that g(t) ∈ [c_1, c_2] for all t ∈ B^d(t̃, r) with 0 < c_1 < c_2 < g(0). For z ∈ R^d we shall consider D_z F_s. The following inequalities are independent of the choice of z. Let ε ∈ (0, min{1, r}) be small enough such that g(x − z) ≥ g(0) − c_1 for all x ∈ B^d(z, ε). for some constant c_{d,ε} > 0.
In the following we consider the second-order difference operator to check (1.4).
For z_1, z_2 ∈ R^d with z_1 ≠ z_2 we have From Assumption 1 and the continuity of g it follows that g is bounded by a constant C_2 > 0. Using the decay of |g| and δ > 3d in Assumption 1, we have for x ∈ B^d(0, s) that where d(x, K) denotes the distance from x to K with respect to the semi-metric d and γ is from (A.1). In contrast to the definitions in [18], those in [33] and in this Appendix A require that one can add up to nine additional points instead of seven, but this difference is not essential and all results from [18] we refer to throughout this paper remain valid. For more details on stabilising functionals we refer to [18] or [33] and the references therein.

Theorem 1.1.
Let F ∈ L²_η be a Poisson functional satisfying {y ∈ η_s : deg(y, η_s) = j}, where deg(y, η_s) stands for the degree of y in G_{r_s}. Moreover, let C_j^{r_s} denote the number of components of size j in G_{r_s}, defined via {y ∈ η_s : |C(y, η_s)| = j}, where |C(y, η_s)| is the number of vertices of the component C(y, η_s) of y in G_{r_s}.
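The degree and component-size counts just described can be computed directly on a simulated geometric graph. The following Python sketch is our own illustration (helper names are ours, not the paper's): it builds the graph connecting points within distance r and tallies, for each j, the vertices of degree j and the components of size j.

```python
import math
import random
from collections import Counter

def graph_statistics(points, r):
    """Degree and component-size counts of the geometric graph on
    `points`, joining two points whenever their Euclidean distance
    is at most r (components via union-find)."""
    n = len(points)
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= r:
                degree[i] += 1
                degree[j] += 1
                parent[find(i)] = find(j)

    comp_sizes = Counter(find(i) for i in range(n))  # root -> size
    # (# vertices of degree j), (# components of size j)
    return Counter(degree), Counter(comp_sizes.values())

def sample_poisson(rng, lam):
    """Knuth's method for a Poisson(lam) sample (fine for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

# Simulated Poisson process with intensity s on [0, 1]^2, radius r_s:
rng = random.Random(1)
s, r_s = 50.0, 0.1
pts = [(rng.random(), rng.random()) for _ in range(sample_poisson(rng, s))]
deg_counts, comp_counts = graph_statistics(pts, r_s)
```

Summing the component-size counter weighted by j recovers the number of points, which is a convenient sanity check for such simulations.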

Thus, we consider A_p(Q̃) instead of A_p(Q) throughout this section and, especially, in the proof of Theorem 4.1. We work in the general framework described in Appendix A with the underlying space X = B^d(0, 1) and the metric d_max(x, y) = max{‖x − y‖, |‖x‖ − ‖y‖|} for x, y ∈ B^d(0, 1). To prove condition (1.4), we start by writing the difference between the surface area of the ball B^d(0, 1) and the L^p surface area of the random polytope Q̃ as a sum of scores. The following arguments are mostly analogous to [18, Section 3.4], where similar representations for intrinsic volumes were derived. In particular, because the surface area is twice the (d − 1)-st intrinsic volume, it was shown in [18, Lemma 3.8] that s(λ_{d−1}(∂B^d(0, 1)) − λ_{d−1}(∂Q̃)) = 2 Σ_{x∈η_s} ξ_{d−1,s}(x, η_s) with the scores ξ_{d−1,s} as in [18, last display on p. 960] for s ≥ 1, where ∂A denotes the boundary of a set A ⊆ B^d(0, 1). We consider analogous scores ξ_s for the L^p surface area, i.e.

Theorem 4.1 and Theorem 4.4 especially provide a lower variance bound and a result on the multivariate normal approximation for the vector of surface area and volume of a random polytope, since A_0 = dV_d and A_1 = S_{d−1}, where V_d and S_{d−1} denote the volume and surface area, respectively.
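As a toy illustration of this special case in d = 2 (our own sketch, not the paper's construction), the convex hull of simulated Poisson points in the unit disk plays the role of the random polytope: its perimeter is S_1 = A_1 and twice its area is 2V_2 = A_0, and both tend to 2π as the intensity grows.

```python
import math
import random

def convex_hull(points):
    """Andrew's monotone-chain algorithm; returns the hull vertices of
    a finite planar point set in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def perimeter(hull):
    """Boundary length of the hull polygon, i.e. A_1 = S_1 in d = 2."""
    return sum(math.dist(hull[i], hull[(i + 1) % len(hull)])
               for i in range(len(hull)))

def area(hull):
    """Shoelace formula; A_0 = dV_d = 2 * area(hull) in d = 2."""
    return 0.5 * abs(sum(
        hull[i][0] * hull[(i + 1) % len(hull)][1]
        - hull[(i + 1) % len(hull)][0] * hull[i][1]
        for i in range(len(hull))))

# Points sampled uniformly in the unit disk by rejection sampling; the
# hull's perimeter and twice its area both approach 2*pi from below.
rng = random.Random(2)
pts = []
while len(pts) < 500:
    x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
    if x * x + y * y <= 1:
        pts.append((x, y))
hull = convex_hull(pts)
```

Since the hull is contained in the disk, convexity guarantees perimeter(hull) ≤ 2π and area(hull) ≤ π, mirroring the deficits λ_{d−1}(∂B^d(0,1)) − λ_{d−1}(∂Q̃) studied above.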

For a stationary Poisson process η on R^d with intensity measure λ_d and an integrable function g : R^d → R let

f_η(x) = Σ_{y∈η} g(x − y)    (5.1)

Theorem 5.1. Let g : R^d → R be a continuous function with g(0) > 0.
a) If g fulfils Assumption 1, there exists a constant c > 0 such that Var[F_s] ≥ c s^d for s ≥ 1.
b) Assume that g has compact support S. Then, there exists a constant c > 0 such that Var[F_s] ≥ c s^d for s ≥ 1.