Thin-shell theory for rotationally invariant random simplices

Thin-shell theory for rotationally invariant random simplices

For fixed functions $G,H:[0,\infty)\to[0,\infty)$, consider the rotationally invariant probability density on $\mathbb{R}^n$ of the form \[ \mu^n(ds) = \frac{1}{Z_n} G(\|s\|_2)\, e^{ - n H( \|s\|_2)} ds. \] We show that when $n$ is large, the Euclidean norm $\|Y^n\|_2$ of a random vector $Y^n$ distributed according to $\mu^n$ satisfies a Gaussian thin-shell property: the distribution of $\|Y^n\|_2$ concentrates around a certain value $s_0$, and the fluctuations of $\|Y^n\|_2$ are approximately Gaussian of order $1/\sqrt{n}$. We apply this thin-shell property to the study of rotationally invariant random simplices, that is, simplices whose vertices consist of the origin as well as independent random vectors $Y_1^n,\ldots,Y_p^n$ distributed according to $\mu^n$. We show that the logarithmic volume of the resulting simplex exhibits highly Gaussian behavior, providing a generalizing and unifying setting for the objects considered in Grote-Kabluchko-Th\"ale [Limit theorems for random simplices in high dimensions, ALEA, Lat. Am. J. Probab. Math. Stat. 16, 141--177 (2019)]. Finally, by relating the volumes of random simplices to random determinants, we show that if $A^n$ is an $n \times n$ random matrix whose entries are independent standard Gaussian random variables, then there are explicit constants $c_0,c_1\in(0,\infty)$ and an absolute constant $C\in(0,\infty)$ such that \[\sup_{ s \in \mathbb{R}} \left| \mathbb{P} \left[ \frac{ \log |\mathrm{det}(A^n)| - \frac{1}{2}\log(n-1)! - c_0 }{ \sqrt{ \frac{1}{2} \log n + c_1 }}<s \right] - \int_{-\infty}^s \frac{e^{ - u^2/2} du}{ \sqrt{ 2 \pi }} \right|<\frac{C}{\log^{3/2}n}, \] sharpening the $1/\log^{1/3 + o(1)}n$ bound in Nguyen and Vu [Random matrices: Law of the determinant, Ann. Probab. 42 (1) (2014), 146--167].

1. Introduction

1.1. High-dimensional probability and random simplices. High-dimensional probability theory is concerned with random objects, their characteristics, and the phenomena that accompany both as the dimension of the ambient space tends to infinity. It is a flourishing area of mathematics, not least because of numerous applications in modern statistics and machine learning related to high-dimensional data, for instance, in the form of dimensionality reduction [14], clustering [46], principal component regression [53], community detection in networks [22,42], topic discovery [20], or covariance estimation [16,58]. High-dimensional probability bears strong connections to geometric functional analysis and convex geometry, and this propinquity is typically reflected both in the flavor of a result and in the methods used to obtain it. One of the early results of the theory is commonly known as the Poincaré-Maxwell-Borel Lemma (see [19]) and states that the first $k$ coordinates of a point uniformly distributed over the $n$-dimensional Euclidean ball (or sphere) of radius $\sqrt{n}$ are, in the limit as $n \to \infty$ with $k$ fixed, independent standard normal variables. Ever since, a variety of limit theorems has been obtained, many of them with the purpose of understanding the geometry of high-dimensional convex bodies. Among others, there is Schmuckenschläger's central limit theorem related to the volume of intersections of $\ell_p^n$-balls [52] and its multivariate version by Kabluchko, Prochno, and Thäle, who also obtained moderate and large deviations principles [36,37].
Then there is the prominent central limit theorem for convex bodies proved by Klartag, showing that most lower-dimensional marginals of a random vector uniformly distributed in an isotropic convex body are approximately Gaussian [41], and a number of other results in which limit theorems related to analytic and geometric aspects of high-dimensional objects have been established [3,4,5,7,10,12,15,23,31,33,35,38,39,40,44,50,51,54,56,57].

1.2. Rotationally invariant random simplices. The focus of the current paper is rotationally invariant random simplices. Suppose $p, n \in \mathbb{N}$ with $1 \le p \le n$ and that $y_1, \ldots, y_p$ are vectors in $\mathbb{R}^n$, and consider the simplex \[ \Delta(y_1,\ldots,y_p) := \left\{ \sum_{i=1}^p s_i y_i : s_1,\ldots,s_p \ge 0 \text{ and } \sum_{i=1}^p s_i \le 1 \right\}, \] whose vertices are given by $\{0, y_1, \ldots, y_p\}$. Whenever $p \le n$ and the vectors $y_1,\ldots,y_p$ are linearly independent, this simplex is a $p$-dimensional convex body within $n$-dimensional Euclidean space with non-zero $p$-volume, and this volume admits the representation \[ \mathrm{Vol}_p(\Delta(y_1,\ldots,y_p)) = \frac{1}{p!} \left( \det_{i,j=1}^p \langle y_i, y_j \rangle \right)^{1/2}, \tag{1} \] where $\langle \cdot,\cdot\rangle$ is the standard Euclidean inner product on $\mathbb{R}^n$ and $\mathrm{Vol}_p$ the $p$-dimensional Lebesgue measure. The primary focus of this paper is the study of the asymptotics of the random variable \[ W_{n,p} := \log \mathrm{Vol}_p\,\Delta(Y_1^n,\ldots,Y_p^n), \] given by the logarithmic volume of a simplex whose vertices $Y_1^n,\ldots,Y_p^n$ are independent random vectors in $\mathbb{R}^n$.
Before proceeding, let us mention here that various related models for random simplices have been considered in the literature. Recently Akinwande and Reitzner obtained multivariate central limit theorems for random simplicial complexes [1], Gusakova and Thäle studied the logarithmic volume of simplices in high-dimensional Poisson-Delaunay tessellations and obtained several types of limit theorems [26], and Grote, Kabluchko, and Thäle [25] investigated the logarithmic volume for other classes of random simplices, such as those generated by the Gaussian, beta, or spherical distribution. We should remark here that, with a view to drawing on connections with random matrices, we study random simplices for which the origin is a fixed vertex, whereas the chief focus of Grote, Kabluchko, and Thäle is on random simplices all of whose vertices are random. A central limit theorem for random simplices arising from product distributions with sub-exponential tails was treated by Alonso-Gutiérrez et al. in [2].
In view of the recent works [2] and [25], we work more generally, making the sole restriction that the law of the simplex is invariant under rotations of the underlying space, which occurs whenever the vectors $Y_1^n, \ldots, Y_p^n$ are drawn independently according to a probability distribution $\mu$ on $\mathbb{R}^n$ that is rotationally invariant, in the sense that $\mu(T(A)) = \mu(A)$ for every Borel subset $A$ of $\mathbb{R}^n$ and every linear orthogonal transformation $T : \mathbb{R}^n \to \mathbb{R}^n$. Given that a random variable $Y$ is distributed according to a rotationally invariant probability distribution, we can decompose $Y$ so that \[ Y \stackrel{d}{=} R\,\Theta, \tag{2} \] where $R \stackrel{d}{=} \|Y\|_2$ is a $[0,\infty)$-valued random variable independent of the random vector $\Theta$ uniformly distributed on the Euclidean unit sphere $S^{n-1} := \{ (x_i)_{i=1}^n : \sum_{i=1}^n x_i^2 = 1 \}$. Here and elsewhere $\stackrel{d}{=}$ refers to equality in distribution. We would like to emphasize that this framework encompasses the spherical, Gaussian, beta and beta prime models considered in [25], where in these natural contexts, as in many others, the distribution of the radial part $R$ tends to vary with the underlying dimension $n$.
The radial decoupling (2) behaves agreeably with the determinant: if $\{Y_i = R_i \Theta_i : i=1,\ldots,p\}$ are independent and identically distributed decompositions of the law $\mu$, by (1) we have the decoupling of the volume \[ \mathrm{Vol}_p(\Delta(Y_1,\ldots,Y_p)) = \frac{1}{p!} \left( \prod_{k=1}^p R_k \right) \left( \det_{i,j=1}^p \langle \Theta_i, \Theta_j \rangle \right)^{1/2}. \tag{3} \] In particular, there are two independent sources of variance that contribute to the simplicial volume: the radial product $\prod_{k=1}^p R_k$ and the spherical determinant $\big( \det_{i,j=1}^p \langle \Theta_i, \Theta_j \rangle \big)^{1/2}$. Before going any further, we take a moment to focus on this latter term, which we would obtain in (3) if $\mu$ were the uniform distribution on the unit sphere $S^{n-1}$ in $\mathbb{R}^n$, so that in the above decomposition each $R_i$ would equal $1$ almost surely. In this case it was first observed by Miles [45] that we have the distributional identity \[ \det_{i,j=1}^p \langle \Theta_i, \Theta_j \rangle \stackrel{d}{=} \prod_{j=1}^{p-1} \beta_{(n-j)/2,\,j/2}, \tag{4} \] where the terms in the product on the right-hand side are independent random variables such that each $\beta_{(n-j)/2,\,j/2}$ is beta distributed with shape parameters $((n-j)/2, j/2)$. Recall now that a random variable follows a beta distribution with shape parameters $\alpha, \beta > 0$ if it has Lebesgue density on $[0,1]$ given by \[ f_{\alpha,\beta}(x) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, x^{\alpha-1} (1-x)^{\beta-1}. \] In view of (3) and (4), the logarithmic volume of the spherical random simplex thus satisfies \[ \log \mathrm{Vol}_p\,\Delta(\Theta_1,\ldots,\Theta_p) \stackrel{d}{=} -\log p! + \frac{1}{2} \sum_{j=1}^{p-1} \log \beta_{(n-j)/2,\,j/2}. \] This representation of the log-volume of a spherical random simplex as a sum of independent random variables is utilized by Grote et al. [25] to obtain Berry-Esseen bounds in the Kolmogorov-Smirnov distance, where \[ d_{\mathrm{KS}}(X, N) := \sup_{s \in \mathbb{R}} \left| \mathbb{P}[X \le s] - \mathbb{P}[N \le s] \right| \] is the Kolmogorov-Smirnov distance between a random variable $X$ and $N$. Throughout this paper, $N$ denotes a standard Gaussian random variable.
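Miles' identity (4) is easy to probe numerically. The following sketch (ours, not from the paper) checks the smallest nontrivial case $p = 2$, $n = 3$, where the Gram determinant equals $1 - \langle \Theta_1, \Theta_2 \rangle^2$ and the right-hand side of (4) is a single $\mathrm{Beta}(1, 1/2)$ variable; we compare sample means against the exact value $\mathbb{E}[\mathrm{Beta}(1,1/2)] = 2/3$.

```python
import random

# Monte Carlo sanity check of Miles' identity (4) for p = 2, n = 3: the 2x2
# Gram determinant of two independent uniform points on S^2 should have the
# same law as a Beta((n-1)/2, 1/2) = Beta(1, 1/2) random variable. The sample
# size and the choice p = 2, n = 3 are arbitrary (our own illustration).
random.seed(0)

def uniform_sphere_point(n):
    """Sample a uniform point on S^{n-1} by normalizing a Gaussian vector."""
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v]

N_SAMPLES = 100_000
gram_dets, betas = [], []
for _ in range(N_SAMPLES):
    a, b = uniform_sphere_point(3), uniform_sphere_point(3)
    dot = sum(x * y for x, y in zip(a, b))
    gram_dets.append(1.0 - dot * dot)           # 2x2 Gram determinant
    betas.append(random.betavariate(1.0, 0.5))  # Beta((n-j)/2, j/2), n=3, j=1

mean_gram = sum(gram_dets) / N_SAMPLES
mean_beta = sum(betas) / N_SAMPLES
# Both means should be close to E[Beta(1, 1/2)] = 1/(1 + 1/2) = 2/3.
print(mean_gram, mean_beta)
```

Only the means are compared here; a more careful check would compare the whole empirical distributions.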
In fact, Grote et al. [25] find representations analogous to (4) suitable for simplices with Gaussian, beta, and beta prime distributed vertices, and prove analogous Berry-Esseen bounds in these settings.

1.3. An overview of our results. In this paper we work in a broad setting, considering random simplices whose vertices are random vectors distributed according to one of a large class of rotationally invariant probability distributions. We find that the volumes of these simplices exhibit a certain interplay of high-dimensional phenomena creating what might be described as extremely Gaussian behavior. With a view to outlining these phenomena here, by combining the decoupling representation (3) with the spherical identity (4) and taking logarithms, we obtain the distributional identity \[ W_{n,p} \stackrel{d}{=} -\log p! + \sum_{k=1}^p \log R_k^n + \frac{1}{2} \sum_{j=1}^{p-1} \log \beta_{(n-j)/2,\,j/2}, \tag{5} \] constituting the starting point for our analysis. Since $W_{n,p}$ has a representation as a sum of order $p$ independent random variables, it is natural to expect, provided the moments are sufficiently regular, that when appropriately normalized, $W_{n,p}$ converges at a speed $1/\sqrt{p}$ in distribution to a standard Gaussian random variable; see Figure 1 for an illustration.

Figure 1. Histograms of $\log \mathrm{Vol}_p\,\Delta(Y_1^n, \ldots, Y_p^n)$, properly centered and standardized (as in [48]), for $p = 300$ and $n = 1000$. Left: $Y_i^n$ uniformly distributed on the unit sphere. Right: $Y_i^n$ with i.i.d. standard normal components.
We find that in a host of natural settings (including Gaussian, beta and beta prime simplices) the dimension $n$ of the ambient space also contributes to creating Gaussian behavior, at a speed much faster than the $1/\sqrt{p}$ speed predicted by the Berry-Esseen theorem. We prove this by way of a pincer strategy, handling differently the distinct sums on the right-hand side of (5). More concretely, we will focus on random simplices in which the random vectors $Y_1^n, \ldots, Y_p^n$ are distributed according to a probability measure of the form \[ \mu^n(ds) = \frac{1}{Z_n}\, G(\|s\|_2)\, e^{-n H(\|s\|_2)}\, ds, \tag{6} \] where $G, H : [0,\infty) \to [0,\infty)$ are functions satisfying some mild conditions and $Z_n$ is a normalization constant. This class of probability measures includes the beta and beta prime models, as well as the Gaussian model (which is obtained after rescaling by $1/\sqrt{n}$). Let us briefly outline our results here in the case where $p \le \theta n$ for a fixed $\theta < 1$:

• Consider the random variable $W^{\mathrm{Sph}}_{n,p}$, the standardized log-volume of the simplex generated by $p$ independent points chosen uniformly at random from the unit sphere. Using a Fourier-analytic approach, we prove a fast Berry-Esseen bound for the random variable $W^{\mathrm{Sph}}_{n,p}$, the word 'fast' being used here to indicate that the speed of the bound exceeds $1/\sqrt{p}$ for $p$ uniformly bounded away from $n$.

• The next major step in our work is a thin-shell type result for rotationally invariant probability distributions of the form (6). Namely, we show that if $X^1, X^2, X^3, \ldots$ is a sequence of independent random vectors such that each $X^i$ takes values in $\mathbb{R}^i$ and is distributed according to $\mu^i$, then the sequence of their standardized log-radii $\tilde{R}^i$ converges in distribution to a standard Gaussian random variable as $i \to \infty$. Theorem B below says something stronger, however: there is a constant $L = L(G,H) \in (0,\infty)$ depending on the functions $G$ and $H$ but independent of $p$ and $n$ such that if $X_1^n, \ldots, X_p^n$ are independent and identically distributed according to $\mu^n$, and $\tilde{R}_1^n, \ldots, \tilde{R}_p^n$ are their associated normalized log-radii, then their standardized sum satisfies a fast Berry-Esseen bound with constants governed by $L$ and an absolute constant $c > 0$.
• The two prior results state that $\sum_{j=1}^p \log R_j^n$ and $W^{\mathrm{Sph}}_{n,p}$ are both within certain Kolmogorov-Smirnov distances of Gaussian random variables with certain means and variances. These results may be combined fairly quickly using a triangle inequality for Kolmogorov-Smirnov distances, leading to a proof of Theorem C.

Finally, we drop assumption (6), which is used to relate the distribution of the log-radius of the vectors $Y_i^n$ to a Gaussian distribution, and consider more general rotationally invariant random vectors $Y_i^n$. In particular, the log-radius might be in the domain of attraction of an infinite variance stable distribution. Depending on the relation between the tails of the log-radius of $Y_i^n$ and the variance of $W^{\mathrm{Sph}}_{n,p}$, we find that the properly normalized $\log \mathrm{Vol}_p\,\Delta(Y_1^n,\ldots,Y_p^n)$ converges either to a normal limit, an infinite variance stable limit, or a mixture of the two.
Overview of the remainder of the paper. The rest of the paper is structured as follows. In Section 2, we present our main results. We begin with a fast Berry-Esseen theorem for the log-volume of the spherical simplex, which is then extended to rotationally invariant random simplices. As a byproduct, we prove a Berry-Esseen type result for the sum of i.i.d. random variables whose density resembles the Gaussian density.
In Section 2.3, we provide limit theory for the log-volume of the rotationally invariant random simplices under general conditions, also allowing for very heavy-tailed distributions. Section 2.4 highlights the connection of our findings to random matrix theory. As an application of our results, we prove convergence of the logarithmic determinant of an i.i.d. standard Gaussian random matrix at speed $(\log n)^{-3/2}$. Sections 3-7 are devoted to the proofs of the results in Section 2. In Section 3, we begin with a careful analysis of random simplices whose vertices are $p$ points chosen uniformly on the unit sphere $S^{n-1}$, culminating in a proof of Theorem A. In Section 4, we introduce our probabilistic approach to the Laplace method, ultimately working towards a proof of Theorem B. Section 5 combines our work in the prior two sections to prove Theorem C. In the next section, Section 6, we give a short proof of Theorem G, using some of the machinery developed in Section 3. Finally, all results from Section 2.3 are proved in Section 7.

2. Main results
In this section we state our results in full.

2.1. A fast Berry-Esseen theorem for the log-volume of the spherical simplex. Our first result is a Berry-Esseen bound for the spherical random simplex. Here, for integers $p \le n$, let \[ W^{\mathrm{Sph}}_{n,p} := \frac{ \log \mathrm{Vol}_p\,\Delta(\Theta_1^n,\ldots,\Theta_p^n) - \mathbb{E}\left[ \log \mathrm{Vol}_p\,\Delta(\Theta_1^n,\ldots,\Theta_p^n) \right] }{ \sqrt{ \mathrm{Var}\left( \log \mathrm{Vol}_p\,\Delta(\Theta_1^n,\ldots,\Theta_p^n) \right) } } \] denote the standardized log-volume of the spherical random simplex associated with $p$ points $\Theta_1^n, \ldots, \Theta_p^n$ chosen independently and uniformly at random from $S^{n-1}$.
Theorem A. Let $p, n \in \mathbb{N}$ such that $p \le n$ and let $W^{\mathrm{Sph}}_{n,p}$ be the normalized log-volume of a spherical simplex. Then there is a universal constant $C \in (0,\infty)$ such that whenever $p \ge 41$, \[ \sup_{s \in \mathbb{R}} \left| \mathbb{P}\left[ W^{\mathrm{Sph}}_{n,p} \le s \right] - \mathbb{P}[N \le s] \right| \le \frac{C}{ \sqrt{n-p+1}\, \left( \log\frac{1}{1-\theta} - \theta \right)^{3/2} }, \] where $\theta := \theta(p,n) := \frac{p-1}{n}$. In fact, we may take $C = 28$.

We take a moment to unpack the bound in Theorem A by looking at the following easily verified consequences:

• Fix $\varphi \in (0,1)$. Using the inequality $\log\frac{1}{1-\theta} - \theta \ge \theta^2/2$ for $\theta \in [0,1)$, it is easily verified that whenever $\theta \le \varphi$, \[ \sup_{s \in \mathbb{R}} \left| \mathbb{P}\left[ W^{\mathrm{Sph}}_{n,p} \le s \right] - \mathbb{P}[N \le s] \right| \le \frac{C_\varphi}{\sqrt{n}\, \theta^3}, \] where $C_\varphi := 2\sqrt{2}\, C/(1-\varphi)$.

• On the other hand, for all $p \le n$, by setting $q := n - p + 1$ (so that $q = n(1-\theta)$), we have \[ \sup_{s \in \mathbb{R}} \left| \mathbb{P}\left[ W^{\mathrm{Sph}}_{n,p} \le s \right] - \mathbb{P}[N \le s] \right| \le \frac{C'}{\sqrt{q}\, \log^{3/2}(n/q)}. \tag{9} \]

Let us remark here that an analogous result to Theorem A appears in Section 3 of Grote et al. [25, Theorem 3.6], who in contrast to us consider random simplices not having the origin as a fixed vertex. They obtain a similar bound in the case where $\frac{p-1}{n}$ is bounded away from one, though their bound is weaker in the $n - p = o(n)$ case; they obtain $C/\log^{1/2}(n/q)$ in the setting of (9).
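The inequality $\log\frac{1}{1-\theta} - \theta \ge \theta^2/2$ invoked above admits a one-line proof, which we record here for completeness (our own remark, not from the text):

```latex
% For \theta \in [0,1), writing the left-hand side as an integral,
\[
  \log\frac{1}{1-\theta} - \theta
  = \int_0^\theta \left( \frac{1}{1-s} - 1 \right) ds
  = \int_0^\theta \frac{s}{1-s}\, ds
  \;\ge\; \int_0^\theta s\, ds
  = \frac{\theta^2}{2}.
\]
```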

2.2. A Berry-Esseen theorem for the Laplace method. Our next result concerns the highly Gaussian behavior of the sums of the log-radii. Here we take a moment to give a brief digression on the Laplace method, which states that when $g$ and $h$ are suitably regular functions with $h$ attaining a global minimum at some $x_0 \in (a,b)$, then we have the asymptotics \[ \int_a^b g(x)\, e^{-n h(x)}\, dx \sim g(x_0)\, e^{-n h(x_0)} \sqrt{ \frac{2\pi}{n\, h''(x_0)} } \qquad \text{as } n \to \infty. \tag{10} \] See e.g. [6]. The key conceptual point in the Laplace method is that, thanks to the Taylor expansion $n h(x) \approx n h(x_0) + \frac{n}{2} h''(x_0) (x - x_0)^2$, the integral in (10) behaves roughly like a Gaussian integral around $x_0$.
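As a standard illustration of (10) (a textbook example, not drawn from the text), Stirling's formula follows by putting the Gamma integral into Laplace form:

```latex
% Substituting x = n s in the Gamma integral puts n! into the form (10):
\[
  n! = \int_0^\infty x^n e^{-x}\,dx
     = n^{n+1} \int_0^\infty e^{-n(s - \log s)}\,ds .
\]
% Here h(s) = s - \log s attains its minimum at s_0 = 1 with h''(s_0) = 1,
% and g \equiv 1, so the Laplace asymptotics (10) give
\[
  n! \sim n^{n+1} e^{-n h(1)} \sqrt{\frac{2\pi}{n\,h''(1)}}
      = \sqrt{2\pi n}\; n^n e^{-n},
  \qquad n \to \infty .
\]
```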
Theorem B develops this idea further, stating that when $n$ is large, random variables whose probability densities take the form of the integrand in (10) are approximately Gaussian. To set this up, we require some conditions on the functions. For a fixed pair $(g,h)$, we consider probability density functions of the form \[ \rho_n(x) = \frac{1}{Z_n}\, g(x)\, e^{-n h(x)}, \qquad x \in \mathbb{R}, \tag{11} \] where the $Z_n \in (0,\infty)$ are normalization constants, and the ordered pair of functions $g, h : \mathbb{R} \to [0,\infty)$ is admissible per the following definition.

Definition 2.1. (a) The density function $\rho_n$ is differentiable almost-everywhere, and has a unique maximum at a point $x_0$ in $\mathbb{R}$ such that $x_0$ is a minimum of $h$. Moreover, we assume that $\rho_n$ is increasing on $(-\infty, x_0]$ and decreasing on $[x_0, \infty)$. (b) In a neighborhood of $x_0$, a quantitative Taylor-type control of $h$ around its minimum holds. (c) Outside of this neighborhood, i.e., for each $x$ bounded away from $x_0$, the density $\rho_n$ decays suitably fast.

As an immediate consequence of Theorem B below, all probability distributions with densities of the form (11) that satisfy Definition 2.1 are in the domain of attraction of the normal law. In particular, they include the Gaussian distribution, the Gamma distribution and the beta distribution.
Our Berry-Esseen theorem for the Laplace method states that when $n$ is large, the normalized sum of $p$ independent random variables distributed according to an admissible density $\rho_n$ is close in distribution to a standard Gaussian random variable $N$.
Theorem B. For an admissible pair $(g,h)$ there is a constant $C_{g,h} \in (0,\infty)$ and $n_0 \in \mathbb{N}$ such that for all $n \ge n_0$ we have the following: if $X_1^n, \ldots, X_p^n$ are independent random variables with density $\rho_n$ given by (11), then \[ \sup_{s \in \mathbb{R}} \left| \mathbb{P}\left[ \frac{ \sum_{i=1}^p \left( X_i^n - \mathbb{E}[X_1^n] \right) }{ \sqrt{ p\, \mathrm{Var}(X_1^n) } } \le s \right] - \mathbb{P}[N \le s] \right| \le C_{g,h}\, \frac{\sqrt{p}}{n}. \]

The value of Theorem B lies in its application to a sort of thin-shell property for a large class of radial densities on $\mathbb{R}^n$. To this end, we say a pair $(G, H)$ of functions $[0,\infty) \to [0,\infty)$ is radially admissible if the associated pair $(g,h)$ given by $g(x) := G(e^x)$ and $h(x) := H(e^x) - x$ is admissible in the sense of Definition 2.1. Suppose now that, for a fixed radially admissible pair $(G,H)$, for each $n \in \mathbb{N}$ we have a rotationally invariant probability density on $\mathbb{R}^n$ of the form \[ \mu^n(ds) = \frac{1}{Z_n}\, G(\|s\|_2)\, e^{-n H(\|s\|_2)}\, ds, \tag{12} \] where $Z_n$ is the normalizing constant. Then by virtue of a straightforward calculation involving the polar integration formula, if $X^n$ is a random vector distributed according to $\mu^n$, then its log-radius $\log \|X^n\|_2$ has the density \[ \rho_n(x) = \frac{1}{Z_n'}\, G(e^x)\, e^{-n \left( H(e^x) - x \right)}, \qquad x \in \mathbb{R}, \] on the real line, where $Z_n'$ is again a normalizing constant. This observation is one of the key ingredients in synthesizing Theorem B with Theorem A to obtain the following general result.
Theorem C. For each radially admissible pair $(G,H)$ there is a constant $C_{G,H} \in (0,\infty)$ and $n_0 \in \mathbb{N}$ such that for all $n \ge n_0$ we have the following: if $Y_1^n, \ldots, Y_p^n$ are $p$ independent random vectors in $\mathbb{R}^n$ distributed according to $\mu^n$ as it appears in (12), then for $W^{G,H}_{n,p} := \log \mathrm{Vol}_p\,\Delta(Y_1^n, \ldots, Y_p^n)$ it holds that \[ \sup_{s \in \mathbb{R}} \left| \mathbb{P}\left[ \frac{ W^{G,H}_{n,p} - \mathbb{E}\left[ W^{G,H}_{n,p} \right] }{ \sqrt{ \mathrm{Var}\left( W^{G,H}_{n,p} \right) } } \le s \right] - \mathbb{P}[N \le s] \right| \le C_{G,H} \left( \frac{\sqrt{p}}{n} + \frac{1}{ \sqrt{n-p+1}\, \left( \log\frac{1}{1-\theta} - \theta \right)^{3/2} } \right), \] where $\theta := \theta(p,n) := \frac{p-1}{n}$.

We take a moment to highlight two special cases of Theorem C.
• The case where $G(x) = 1$ and $H(x) = x^2/2$ corresponds to the Gaussian distribution with covariance matrix $\frac{1}{n} I_n$, where $I_n$ denotes the $n \times n$ identity matrix.

• The case where $G(x) = 1$ and $H(x) = \frac{1+\varphi}{2} \log(1 + |x|^2)$ corresponds to the so-called beta prime distribution on $\mathbb{R}^n$ with parameter $\nu = \varphi n$, where $\varphi > 0$.
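For concreteness, here is the (easily checked) thin-shell computation in the Gaussian special case; the identification of $h$ below comes from the polar substitution $x = \log \|s\|_2$, and is recorded here for illustration:

```latex
% In the Gaussian case G(x) = 1, H(x) = x^2/2, the log-radius density is
% proportional to exp(-n h(x)) with
\[
  h(x) = H(e^x) - x = \tfrac{1}{2} e^{2x} - x ,
\]
% which is minimized where h'(x) = e^{2x} - 1 = 0, i.e. at x_0 = 0. The
% Euclidean norm of Y^n therefore concentrates around s_0 = e^{x_0} = 1,
% as expected for a Gaussian vector with covariance matrix n^{-1} I_n.
```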

2.3. Fluctuations of the log-volume under general conditions. In this subsection, we work more generally and drop assumption (11), which was used to relate the distribution of the log-radius of the vectors $Y_i^n$ to a Gaussian distribution. We consider i.i.d., rotationally invariant random vectors $Y_i^n$, which we collect in the data matrix \[ \mathbf{Y} := \mathbf{Y}^n := (Y_1^n, \ldots, Y_p^n). \tag{13} \]
The main focus is no longer on deriving fast Berry-Esseen bounds for convergence to the Gaussian distribution; our goal is to study the asymptotic distribution of $\log \mathrm{Vol}_p(\Delta \mathbf{Y})$ for a wide range of radial laws. For the number of points constituting our simplex, we consider the asymptotic regime \[ p = p_n \to \infty \quad \text{and} \quad p \le n, \qquad \text{as } n \to \infty. \tag{14} \] To simplify notation, we define the random variable $R^{(n)} := \|Y_1^n\|_2$ and set $\overline{R}^{(n)} := \log R^{(n)}$. For the field $(\beta_{i/2,j/2})_{i,j \in \mathbb{N}}$ of independent random variables such that $\beta_{i/2,j/2}$ is $\mathrm{Beta}(i/2, j/2)$ distributed, we write $\overline{\beta}_{i/2,j/2} := \log \beta_{i/2,j/2}$.
Our next result provides conditions on the radius R (n) under which the fluctuations of the log-volume about its mean are asymptotically Gaussian.
Theorem D (Normal limit). Under the growth condition (14), consider the data matrix $\mathbf{Y}$ defined in (13) with independent and rotationally invariant columns, i.e. (2) holds. Assume there exists a sequence of positive constants $\sigma_n$ satisfying conditions (15) and (16) as $n \to \infty$, and let $(b_n)_{n \ge 1}$ be a centering sequence satisfying (17) as $n \to \infty$. Then the conclusion (18) holds: the log-volume $\log \mathrm{Vol}_p(\Delta \mathbf{Y})$, centered by $b_n$ and properly normalized, converges in distribution to a standard Gaussian random variable.

Theorem D characterizes the distributions of radii such that the logarithmic volume satisfies a central limit theorem. In fact, since Petrov's [49] infinite smallness condition is always satisfied in our model, a slightly stronger result holds under the assumptions of Theorem D and a sequence of positive constants $\sigma_n$. Namely, the existence of a non-random sequence $(b_n)$ such that (18) holds is equivalent to $(\sigma_n)$ satisfying (15) and (16). If (15) and (16) hold, we may choose $b_n$ as in (17).
Our next result shows that the logarithmic volume can also have an $\alpha$-stable limit. In particular, this is the case when $R^{(n)}$ has power law tails with index $\alpha < 2$. To the best of our knowledge, the most general setting in which the limiting distribution of the log-volume (or equivalently the log-determinant) has been derived is that of [9,59], whose authors assumed that the entries of $\mathbf{Y}$ possess a finite fourth moment, which is the typical assumption in papers on linear spectral statistics. We refer to [8,18,27,28,21,11,29,30] for collections of results which show the stark differences in the asymptotic behavior under infinite fourth moments.
In order to present our stable limit theorem, we introduce the auxiliary sequence \[ \omega_n := \left( \mathrm{Var}\left( \frac{1}{2} \sum_{j=1}^{p-1} \log \beta_{(n-j)/2,\,j/2} \right) \right)^{1/2}, \] which one may interpret as the critical variance sequence.
Theorem E ($\alpha$-stable limit). Under the growth condition (14), consider the data matrix $\mathbf{Y}$ defined in (13) with independent and rotationally invariant columns, i.e. (2) holds. For some $\alpha \in (0,2)$ and $c_1, c_2 \ge 0$ with $c_1 + c_2 > 0$, assume that there exists a sequence of positive constants $\sigma_n$ such that, as $n \to \infty$, $\omega_n/\sigma_n \to 0$ and the tails of the log-radius satisfy a regular variation condition with index $\alpha$ and tail balance parameters $(c_1, c_2)$ at scale $\sigma_n$. Then the log-volume, properly centered and normalized by $\sigma_n$, converges weakly to an $\alpha$-stable limit.

Finally, there is an interesting mixed case, when the variances of the two sums on the right-hand side of (5) are of the same order.

2.4. The random matrix perspective. While we have discussed our results so far from the perspective of the volumes of random simplices, the framework we consider is intimately related to the determinants of random matrices. Indeed, we saw in (1) that the volume of a simplex with vertices $Y_1^n, \ldots, Y_p^n$ in $\mathbb{R}^n$ may be expressed in terms of a determinant. Developing this equation slightly, we may write \[ \mathrm{Vol}_p\,\Delta(Y_1^n, \ldots, Y_p^n) = \frac{1}{p!} \sqrt{ \det\left( \mathbf{Y}^\top \mathbf{Y} \right) }, \tag{20} \] where $\mathbf{Y} := \mathbf{Y}^n$ is the $n \times p$ matrix whose columns are given by $Y_1^n, \ldots, Y_p^n$. In particular, we may invert the relation to obtain various statements about the log-determinants of random matrices $\mathbf{Y}$ whose columns are rotationally invariant random vectors. We remark that for our decomposition in (3) it is important that the columns of $\mathbf{Y}$ are independent rotationally invariant random vectors. If instead the rows of $\mathbf{Y}$ were rotationally invariant, one could not separate the radius from the direction as in (3), even though $\mathbf{Y}^\top \mathbf{Y}$ and $\mathbf{Y} \mathbf{Y}^\top$ have the same non-zero eigenvalues. The phenomenon that the roles of rows and columns are not interchangeable is illustrated in Figure 2.

Before discussing the applications of our results to the determinants of random matrices, we take a moment to highlight just a single result from the large body of work on the asymptotic distribution of the logarithms of such determinants [24,55,9,59,48]. Namely, Nguyen and Vu [47] consider the log-determinant of an $n \times n$ random matrix $A_n$ with independent and identically distributed entries with zero mean, unit variance and finite fourth moment. They show that, as $n \to \infty$, \[ \sup_{s \in \mathbb{R}} \left| \mathbb{P}\left[ \frac{ \log |\det(A_n)| - \frac{1}{2} \log (n-1)! }{ \sqrt{ \frac{1}{2} \log n } } \le s \right] - \mathbb{P}[N \le s] \right| \le \log^{-1/3 + o(1)} n. \] Nguyen and Vu speculate that $(\log n)^{-1/3}$ could be the optimal rate of convergence for such a theorem, though suggest that this could potentially be improved to $(\log n)^{-1/2}$ with a finer correction for the expectation of the log-determinant. It transpires that when the entries are further assumed to be independent standard Gaussians, the rate of convergence can be improved to $(\log n)^{-3/2}$.
To this end, we require estimates on the mean and variance of $\log|\det(A_n)|$ that are fine up to constant order. Let $\gamma := \lim_{n\to\infty} \left( \sum_{k=1}^n 1/k - \log n \right)$ denote the Euler-Mascheroni constant, in terms of which the explicit constants $c_0$ and $c_1$ appearing below are defined. We believe the following result to be new.
Theorem G. Let $n \in \mathbb{N}$ and $A_n$ be an $n \times n$ matrix whose entries are independent standard Gaussian random variables. Then we have \[ \sup_{s \in \mathbb{R}} \left| \mathbb{P}\left[ \frac{ \log |\det(A_n)| - \frac{1}{2} \log (n-1)! - c_0 }{ \sqrt{ \frac{1}{2} \log n + c_1 } } \le s \right] - \int_{-\infty}^s \frac{ e^{-u^2/2} }{ \sqrt{2\pi} }\, du \right| \le \frac{C}{ \log^{3/2} n }, \] where $C \in (0,\infty)$ is an absolute constant.
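The distributional structure underlying results of this type can be probed by simulation. The sketch below (our own illustration, not code from the paper) uses the classical fact that for an $n \times n$ matrix $A_n$ with i.i.d. standard Gaussian entries, $|\det(A_n)|^2$ has the same law as a product of independent chi-square variables with $1, \ldots, n$ degrees of freedom; we compare the sample mean of $\log|\det(A_n)|$ computed directly with the one computed from this representation, for $n = 4$.

```python
import math
import random

# Monte Carlo comparison for n = 4 of log|det(A_n)| computed two ways:
# directly, and via |det(A_n)|^2 =_d chi2_1 * chi2_2 * ... * chi2_n.
random.seed(1)
N_DIM, N_SAMPLES = 4, 50_000

def log_abs_det(m):
    """log|det| of a small square matrix via Gaussian elimination with
    partial pivoting (pure Python; adequate for tiny matrices)."""
    n = len(m)
    a = [row[:] for row in m]
    out = 0.0
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        out += math.log(abs(p))
        for r in range(col + 1, n):
            f = a[r][col] / p
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
    return out

direct = chi = 0.0
for _ in range(N_SAMPLES):
    m = [[random.gauss(0.0, 1.0) for _ in range(N_DIM)] for _ in range(N_DIM)]
    direct += log_abs_det(m)
    # a chi-square with k degrees of freedom is Gamma(k/2, scale 2)
    chi += 0.5 * sum(math.log(random.gammavariate(k / 2.0, 2.0))
                     for k in range(1, N_DIM + 1))
direct /= N_SAMPLES
chi /= N_SAMPLES
print(direct, chi)
```

Both empirical means should agree, and both should be close to the exact value $\frac{1}{2}\sum_{k=1}^4 (\psi_0(k/2) + \log 2) \approx 0.346$.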
Theorem G is proved directly in Section 6. A weaker version of Theorem G, without explicit estimates for the mean and variance of $\log|\det(A_n)|$, is actually an indirect consequence of a more general result concerning the log-determinants of random matrices whose columns are distributed according to a rotationally invariant probability density $\mu^n$ on $\mathbb{R}^n$. Namely, the following result is an immediate corollary of Theorem C, using (20) to restate the result in terms of determinants of random matrices rather than volumes of random simplices.
Theorem H. Let $A_{n,p}$ be an $n \times p$ matrix whose $p$ columns $Y_1^n, \ldots, Y_p^n$ are independent and identically distributed according to a probability density of the form $\mu^n$, with $\mu^n$ as in (12) and $(G,H)$ a radially admissible pair. Then there is a constant $C_{G,H} \in (0,\infty)$ such that the centered and standardized random variable $\frac{1}{2} \log \det\left( A_{n,p}^\top A_{n,p} \right)$ satisfies a Berry-Esseen bound with the same rate as in Theorem C, where $\theta := \theta(p,n) := \frac{p-1}{n}$. That completes the section on random matrices.

3. Extremely Gaussian behavior for spherical random simplices
The chief focus of this section is the analysis of the Gaussian behavior of the log-volume of random simplices whose vertices are uniformly distributed on the unit sphere. We begin in the next subsection by discussing the polar integration formula and radial laws.
3.1. The polar integration formula and radial laws. Throughout we will use the following polar integration formula. Let $f : \mathbb{R}^n \to [0,\infty)$ be an integrable function on $\mathbb{R}^n$ depending only on the Euclidean norm, in the sense that $f(s) = \bar{f}(\|s\|_2)$ for some $\bar{f} : [0,\infty) \to [0,\infty)$. Then the polar integration formula states that \[ \int_{\mathbb{R}^n} f(s)\, ds = \frac{ 2 \pi^{n/2} }{ \Gamma(n/2) } \int_0^\infty \bar{f}(r)\, r^{n-1}\, dr. \tag{23} \] Given a Borel subset $A$ of $[0,\infty)$, define the Borel subset $\mathrm{rad}_n(A)$ of $\mathbb{R}^n$ by setting $\mathrm{rad}_n(A) := \{ s \in \mathbb{R}^n : \|s\|_2 \in A \}$.
Given any probability distribution $\mu$ on $\mathbb{R}^n$, we define the radial law $\nu$ associated with $\mu$ to be the probability measure on $[0,\infty)$ defined by setting \[ \nu(A) := \mu\left( \mathrm{rad}_n(A) \right). \] We now record the following simple lemma on the radial laws of rotationally invariant distributions of standard form.
Lemma 3.1. Let $\mu_n$ be a rotationally invariant probability distribution on $\mathbb{R}^n$ of the form \[ \mu_n(ds) = C_n\, g(\|s\|_2)\, e^{-n h(\|s\|_2)}\, ds. \] Then the radial law $\nu_n$ associated with $\mu_n$ is given by \[ \nu_n(dr) = \frac{ 2 \pi^{n/2} }{ \Gamma(n/2) }\, C_n\, g(r)\, e^{-n h(r)}\, r^{n-1}\, dr. \]

Proof. Let $A$ be a Borel subset of $[0,\infty)$. Then \[ \nu_n(A) = \mu_n\left( \mathrm{rad}_n(A) \right) = \int_{\mathbb{R}^n} f(s)\, ds, \] where for $s \in \mathbb{R}^n$, $f(s) := \mathbf{1}_{\mathrm{rad}_n(A)}(s)\, C_n\, g(\|s\|_2)\, e^{-n h(\|s\|_2)}$. The result follows after applying (23).

3.2. Miles' identity. Integral to our analysis is the distributional identity (4), which is a consequence of the following proposition, recently given in Grote, Kabluchko and Thäle [25, Theorem 2.4(d)], though similar identities date back (at least) to Miles [45].

Proposition 3.2. Let $\Theta_1^n, \ldots, \Theta_p^n$ be points chosen independently and uniformly from the Euclidean unit sphere $S^{n-1}$ in $\mathbb{R}^n$. Then we have the following identity in law: \[ \det_{i,j=1}^p \langle \Theta_i^n, \Theta_j^n \rangle \stackrel{d}{=} \prod_{j=1}^{p-1} \beta_{(n-j)/2,\,j/2}. \]

It is immediate from Proposition 3.2 that the log-volume of the spherical random simplex may be written as \[ \log \mathrm{Vol}_p\,\Delta(\Theta_1^n, \ldots, \Theta_p^n) \stackrel{d}{=} -\log p! + \frac{1}{2} \sum_{j=1}^{p-1} \log \beta_{(n-j)/2,\,j/2}. \tag{24} \]

3.3. Polygamma functions. For complex $\zeta$ with positive real part, let $\Gamma(\zeta) := \int_0^\infty u^{\zeta-1} e^{-u}\, du$ be the gamma function. Then the $k$-th polygamma function is given by \[ \psi_k(\zeta) := \frac{ d^{k+1} }{ d\zeta^{k+1} } \log \Gamma(\zeta). \tag{25} \] The zeroth polygamma function $\psi_0$, better known as the digamma function, has the integral representation \[ \psi_0(\zeta) = \int_0^\infty \left( \frac{ e^{-u} }{ u } - \frac{ e^{-\zeta u} }{ 1 - e^{-u} } \right) du, \tag{26} \] and for $k \ge 1$ we have \[ \psi_k(\zeta) = (-1)^{k+1} \int_0^\infty \frac{ u^k\, e^{-\zeta u} }{ 1 - e^{-u} }\, du. \tag{27} \] A simple calculation involving the gamma integral tells us that, for real $\zeta > 0$ and $k \ge 1$, we have the sandwich inequality \[ \frac{ (k-1)! }{ \zeta^k } + \frac{ k! }{ 2 \zeta^{k+1} } \le (-1)^{k+1} \psi_k(\zeta) \le \frac{ (k-1)! }{ \zeta^k } + \frac{ k! }{ \zeta^{k+1} }. \tag{28} \] Finally, we note from (27) that for $\zeta$ in $\mathbb{C}$ with $\mathrm{Re}(\zeta) > 0$ we have $|\psi_k(\zeta)| \le |\psi_k(\mathrm{Re}(\zeta))|$. In particular, with $C_k = (k-1)! + 2 k!$ we may extract from (28) the upper bound \[ |\psi_k(\zeta)| \le \frac{ C_k }{ \mathrm{Re}(\zeta)^k }, \qquad \mathrm{Re}(\zeta) \ge \tfrac{1}{2}. \tag{29} \]

3.4. Moments of log-beta random variables. In this subsection, we provide all moments of the log-beta random variables in terms of combinatorial expressions involving the polygamma functions. To this end, we need the one-dimensional Faà di Bruno formula (see, e.g., [34] for this and its multivariate form). To set this up, recall that a partition of $\{1,\ldots,k\}$ is a collection of disjoint subsets (called blocks) of $\{1,\ldots,k\}$ whose union is equal to $\{1,\ldots,k\}$. Let $P_k$ be the collection of set partitions of $\{1,\ldots,k\}$. For partitions $\pi$ in $P_k$, let $\#\pi$ denote the number of blocks in $\pi$. For a block $\Gamma$ of some $\pi$, let $\#\Gamma$ denote the number of elements of $\{1,\ldots,k\}$ contained in $\Gamma$. If $k \in \mathbb{N}$ and $f, g : \mathbb{R} \to \mathbb{R}$ are $k$ times differentiable functions, then Faà di Bruno's formula states that the $k$-th derivative of the composition $f \circ g$ is given by \[ \frac{ d^k }{ d\zeta^k } f(g(\zeta)) = \sum_{ \pi \in P_k } f^{(\#\pi)}(g(\zeta)) \prod_{ \Gamma \in \pi } g^{(\#\Gamma)}(\zeta), \] where for $j \in \mathbb{N}$, $f^{(j)}$ and $g^{(j)}$ denote the $j$-th derivatives of $f$ and $g$ respectively.
We note that in particular, when $f(\zeta) = e^\zeta$, we have \[ \frac{ d^k }{ d\zeta^k } e^{g(\zeta)} = e^{g(\zeta)} \sum_{ \pi \in P_k } \prod_{ \Gamma \in \pi } g^{(\#\Gamma)}(\zeta). \tag{30} \] We are now equipped to give a combinatorial representation for the moments of the logarithm of a beta random variable in terms of set partitions and the polygamma functions.

Lemma 3.3. Let $\beta_{\zeta,\eta}$ be beta distributed with parameters $(\zeta, \eta)$. Then \[ \mathbb{E}\left[ \left( \log \beta_{\zeta,\eta} \right)^k \right] = \sum_{ \pi \in P_k } \prod_{ \Gamma \in \pi } q_{\#\Gamma}(\zeta, \eta), \] where for integers $j \in \mathbb{N}$, $q_j(\zeta,\eta) := \psi_{j-1}(\zeta) - \psi_{j-1}(\zeta + \eta)$.
Proof. First we make the observation that \[ \mathbb{E}\left[ \beta_{\zeta,\eta}^t \right] = \frac{ \Gamma(\zeta + t)\, \Gamma(\zeta + \eta) }{ \Gamma(\zeta)\, \Gamma(\zeta + \eta + t) } = e^{ g(\zeta + t) - g(\zeta) }, \qquad t > -\zeta, \] where $g(\zeta) := \log \Gamma(\zeta) - \log \Gamma(\zeta + \eta)$. In particular, by taking the derivative outside the integral, we may write \[ \mathbb{E}\left[ \left( \log \beta_{\zeta,\eta} \right)^k \right] = \frac{ d^k }{ dt^k } \mathbb{E}\left[ \beta_{\zeta,\eta}^t \right] \Big|_{t=0} = e^{-g(\zeta)} \frac{ d^k }{ d\zeta^k } e^{ g(\zeta) }. \] The result follows by using (30) and the definition (25) of the polygamma functions, which give $g^{(j)}(\zeta) = q_j(\zeta,\eta)$.
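The $k = 1$ case of Lemma 3.3 reads $\mathbb{E}[\log \beta_{\zeta,\eta}] = \psi_0(\zeta) - \psi_0(\zeta+\eta)$. The sketch below is our own numerical check; the `digamma` routine is a hypothetical helper built from the standard recurrence $\psi_0(x) = \psi_0(x+1) - 1/x$ and a truncated asymptotic series, not anything taken from the paper.

```python
import math
import random

def digamma(x):
    """psi_0(x) for x > 0: shift into the asymptotic regime, then use
    psi_0(x) ~ ln x - 1/(2x) - 1/(12x^2) + 1/(120x^4) - 1/(252x^6)."""
    s = 0.0
    while x < 10.0:          # recurrence psi_0(x) = psi_0(x + 1) - 1/x
        s -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return s + math.log(x) - 0.5 / x - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

# Check E[log beta_{zeta,eta}] = psi_0(zeta) - psi_0(zeta + eta) by Monte Carlo
# for the (arbitrary) choice (zeta, eta) = (2, 3), where the exact value is
# -(1/2 + 1/3 + 1/4) = -13/12.
random.seed(2)
zeta, eta = 2.0, 3.0
analytic = digamma(zeta) - digamma(zeta + eta)
N_SAMPLES = 200_000
mc = sum(math.log(random.betavariate(zeta, eta)) for _ in range(N_SAMPLES)) / N_SAMPLES
print(analytic, mc)
```

The same pattern (with a `polygamma` helper) would check the variance identity $q_2(\zeta,\eta) = \psi_1(\zeta) - \psi_1(\zeta+\eta)$.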
Our second lemma gives us the centered moments.

Lemma 3.4. Assume that $\beta_{\zeta,\eta}$ is beta distributed with parameters $(\zeta, \eta)$. Then \[ \mathbb{E}\left[ \left( \log \beta_{\zeta,\eta} - q_1(\zeta,\eta) \right)^k \right] = \sum_{ \substack{ \pi \in P_k \\ \pi \text{ has no singleton blocks} } } \prod_{ \Gamma \in \pi } q_{\#\Gamma}(\zeta, \eta). \]

Proof. We begin with the observation that \[ \mathbb{E}\left[ \left( \log \beta_{\zeta,\eta} - q_1 \right)^k \right] = \sum_{ S \subseteq \{1,\ldots,k\} } (-q_1)^{ k - \#S }\, \mathbb{E}\left[ \left( \log \beta_{\zeta,\eta} \right)^{\#S} \right], \tag{31} \] where we wrote $q_j := q_j(\zeta,\eta)$ for simplicity. For each subset $S$ of $\{1,\ldots,k\}$, we may expand $\mathbb{E}[(\log \beta_{\zeta,\eta})^{\#S}]$ using Lemma 3.3, so that the summand associated with $S$ becomes a sum over set partitions $\pi$ of $S$. Each partition $\pi$ of $S$ has a canonical extension $\bar\pi$ to $\{1,\ldots,k\}$ by letting \[ \bar\pi := \pi \cup \left\{ \{x\} : x \in T \right\}, \qquad T := \{1,\ldots,k\} - S. \] Let $A(\bar\pi)$ be the set of $x \in \{1,\ldots,k\}$ such that the singleton $\{x\}$ is a block of $\bar\pi$. It follows that $T$ is a subset of $A(\bar\pi)$, and the term associated with the pair $(S, \pi)$ in (31) is $(-1)^{\#T} \prod_{\Gamma \in \bar\pi} q_{\#\Gamma}$. In particular, reindexing the sum in (31), we have \[ \mathbb{E}\left[ \left( \log \beta_{\zeta,\eta} - q_1 \right)^k \right] = \sum_{ \bar\pi \in P_k } \prod_{ \Gamma \in \bar\pi } q_{\#\Gamma} \sum_{ T \subseteq A(\bar\pi) } (-1)^{\#T}. \tag{32} \] Now note that \[ \sum_{ T \subseteq A(\bar\pi) } (-1)^{\#T} = \mathbf{1}\left\{ A(\bar\pi) = \emptyset \right\}. \] It follows that the sum in (32) is supported only on partitions $\bar\pi$ in $P_k$ such that $A(\bar\pi)$ is empty, i.e., $\bar\pi$ contains no singletons.
Lemma 3.5. Let $\beta_{\zeta,\eta}$ be beta distributed with parameters $(\zeta, \eta)$. Then \[ \mathbb{E}\left[ \log \beta_{\zeta,\eta} \right] = \psi_0(\zeta) - \psi_0(\zeta + \eta) \qquad \text{and} \qquad \mathrm{Var}\left( \log \beta_{\zeta,\eta} \right) = \psi_1(\zeta) - \psi_1(\zeta + \eta). \] Moreover, we have the following upper bound on the centered absolute third moment: \[ \mathbb{E}\left[ \left| \log \beta_{\zeta,\eta} - \mathbb{E}\left[ \log \beta_{\zeta,\eta} \right] \right|^3 \right] \le \left( 3\, q_2(\zeta,\eta)^2 + q_4(\zeta,\eta) \right)^{3/4}. \]

Proof. The equations for the mean and variance follow from respectively setting $k = 1$ in Lemma 3.3 and $k = 2$ in Lemma 3.4. The upper bound for the centered absolute third moment is obtained by setting $k = 4$ in Lemma 3.4 (the partitions of $\{1,2,3,4\}$ without singleton blocks are the three pairings and the single four-element block) and using Lyapunov's inequality.
If $W^{\mathrm{Sph}}_{n,p}$ is the log-volume of a spherical random simplex associated with $p$ points sampled independently and uniformly from $S^{n-1}$, then by Lemma 3.5 and (24) its mean and variance are explicit. At several stages below we will require the following lower bound on the variance $(\sigma^{\mathrm{Sph}}_{n,p})^2$, which follows easily from (28).
Corollary 3.6. Let $p, n \in \mathbb{N}$ with $p \le n$. Then we have the lower bound (33). Setting $\theta := \theta(p,n) := \frac{p-1}{n}$, whenever $p \ge 7$ we have the rougher bound below.

Proof. Using (28) to obtain the first inequality below, we may bound $(\sigma^{\mathrm{Sph}}_{n,p})^2$ from below by a sum, where $\lfloor \zeta \rfloor$ denotes the largest integer less than or equal to $\zeta$. Using the fact that $1/\lfloor \zeta \rfloor \ge 1/\zeta$ for $\zeta \ge 1$, and then performing the resulting integral, the bound (33) follows.
As for the second bound, suppose $p \ge 7$. Rewriting (33) to obtain the first inequality below, and using the fact that $p \ge 7$ to obtain the second, we arrive at the claimed estimate. The result now follows from the inequality $\log \frac{1}{1-\theta} - \theta - \theta^2/2 \ge 0$. That completes our treatment of the moments of the log-beta random variables. In the next section we undertake a careful analysis of the characteristic function of the log-beta random variable, which is the most delicate step in proving Theorem A.
3.5. The characteristic function of the log-beta random variable. Our proof of Theorem A takes a Fourier-analytic approach based on a careful analysis of the characteristic function of $W^{\mathrm{Sph}}_{n,p}$. We begin with the following lemma, which gives a useful representation for the characteristic function of a recentering of $\log \beta_{(n-j)/2,\, j/2}$.

Lemma 3.7. For $j, n \in \mathbb{N}$ such that $j < n$, let $\beta_{\frac{n-j}{2}, \frac{j}{2}}$ be a beta distributed random variable with shape parameters $(\frac{n-j}{2}, \frac{j}{2})$, let $Y_{n,j} := \log \beta_{\frac{n-j}{2}, \frac{j}{2}}$, and set $V_{n,j} := (Y_{n,j} - \mathbb{E}[Y_{n,j}])/\sigma_{n,j}$. Then, for all $t \in \mathbb{R}$, the characteristic function of $V_{n,j}$ is given by the stated formula, where $\psi_3$ is as in (25) and $\sigma^2_{n,j} := \psi_1\big(\frac{n-j}{2}\big) - \psi_1\big(\frac{n}{2}\big)$.

Proof. We begin by studying the characteristic function of $Y_{n,j}$ for $t \in \mathbb{R}$. It is a straightforward computation using the beta integral to see that
\[ \mathbb{E}\big[e^{itY_{n,j}}\big] = \mathbb{E}\Big[\beta_{\frac{n-j}{2}, \frac{j}{2}}^{\,it}\Big] = \frac{\Gamma\big(\frac{n-j}{2} + it\big)\,\Gamma\big(\frac{n}{2}\big)}{\Gamma\big(\frac{n-j}{2}\big)\,\Gamma\big(\frac{n}{2} + it\big)}. \]
By integrating in the complex plane and using the definition of the digamma function $\psi_0$, we obtain a first representation, and similarly (by setting $t = 0$ in the previous display and changing the sign) a second. In view of (35), this means that we may rewrite the characteristic function accordingly. Performing a second integral in the complex plane and using the definition of $\psi_1$, we obtain a further representation. We now turn to extracting the characteristic function of $V_{n,j}$ from that of $Y_{n,j}$. From the definition of $V_{n,j}$ we plainly have
\[ \mathbb{E}\big[e^{itV_{n,j}}\big] = e^{-it\mu_{n,j}/\sigma_{n,j}}\, \mathbb{E}\big[e^{i(t/\sigma_{n,j})Y_{n,j}}\big], \]
where $\mu_{n,j}$ and $\sigma^2_{n,j}$ denote respectively the mean and variance of $\log \beta_{(n-j)/2,\, j/2}$, which were identified in Lemma 3.5 above. We now note that we may usefully represent $\mu_{n,j}$ as an integral via (37), so that plugging (37) into (36) to obtain the first equality below, and performing another integration to obtain the second, we arrive at the penultimate form. Using $\sigma^2_{n,j} = \psi_1\big(\frac{n-j}{2}\big) - \psi_1\big(\frac{n}{2}\big)$ and plugging this into (38), we have $\mathbb{E}\big[e^{itV_{n,j}}\big] = \exp(-t^2/2 + \cdots)$. The result follows from a final integration step.
We now turn to studying the characteristic function of the sum of the log-beta random variables. First we note that if $W_{n,p} := (W^{\mathrm{Sph}}_{n,p} - \mu^{\mathrm{Sph}}_{n,p})/\sigma^{\mathrm{Sph}}_{n,p}$, then with $\varphi_{n,j}$ as in Lemma 3.7, the characteristic function of $W_{n,p}$ is given by (39), where the final line of (39) follows from a brief calculation using Lemma 3.7. Clearly, by virtue of the centering, the random variable $W_{n,p}$ has zero mean and unit variance. The following lemma compares the logarithms of the characteristic functions of $W_{n,p}$ and a standard Gaussian random variable, where we recall that the latter is $t \mapsto e^{-t^2/2}$.

Lemma 3.8. Let $p, n \in \mathbb{N}$ with $p \le n$ and let $\varphi_{n,p}$ be the characteristic function of $W_{n,p}$. Then, for all $t \in \mathbb{R}$,
\[ \big| \log \varphi_{n,p}(t) + t^2/2 \big| \le \varepsilon_{n,p} |t|^3, \]
where $\varepsilon_{n,p} := \frac{7}{96}\, \frac{1}{(\sigma^{\mathrm{Sph}}_{n,p})^3}$.

Proof. Using (39) to obtain the first inequality below, and using the upper bound in (29) with $k = 3$ (and hence $C_3 = 14$) to obtain the second, we arrive at a triple integral. The latter integrand is independent of $t_1, t_2, t_3$. In particular, since the simplex $\{(t_1, t_2, t_3) \in \mathbb{R}^3 : 0 < t_3 < t_2 < t_1 < a\}$ has volume $a^3/6$, we have $\big| \log \varphi_{n,p}(t) + t^2/2 \big| \le \frac{7}{48} \cdots$. The result in question follows by performing the $s$ integral.
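The Gaussian approximation underlying Lemma 3.8 can be illustrated empirically. The sketch below (ours; the parameter choices $n = 50$, $p = 20$ and the sample size are arbitrary) standardizes a sum of independent $\log \beta_{(n-j)/2,\, j/2}$ variables and compares its empirical characteristic function with $e^{-t^2/2}$.

```python
import cmath
import math
import random

random.seed(1)
n, p, N = 50, 20, 20_000

def log_beta_sum():
    # sum of independent log beta_{(n-j)/2, j/2} variables, j = 1, ..., p - 1
    return sum(math.log(random.betavariate((n - j) / 2, j / 2))
               for j in range(1, p))

samples = [log_beta_sum() for _ in range(N)]
m = sum(samples) / N
s = math.sqrt(sum((x - m) ** 2 for x in samples) / N)
W = [(x - m) / s for x in samples]  # empirically standardized sum

for t in (0.5, 1.0):
    phi = sum(cmath.exp(1j * t * w) for w in W) / N
    print(t, abs(phi - math.exp(-t * t / 2)))
```

The printed discrepancies are small, of the size one expects from the cubic error term together with Monte Carlo noise.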

Proof of Theorem A.
We are now ready to prove Theorem A.
Proof of Theorem A. Theorem A follows from the following statement, which we now prove. By the Berry smoothing inequality (see, e.g., [17, Section 7.4]), the Kolmogorov-Smirnov distance between $W_{n,p}$ and a standard Gaussian random variable $N$ may be bounded via an integral over $[-T, T]$ for any $T > 0$. Setting $T := (4\varepsilon_{n,p})^{-1}$ and appealing to Lemmas 3.10 and 3.8, we obtain the bound. The result follows by using the fact that $\int_{-\infty}^{\infty} t^2 e^{-t^2/4}\,dt = 4\sqrt{\pi} \le 8$, and then using that $48/\pi \le 16$.

Central limit theory and the Laplace method
4.1. Statement. With a view to proving Theorem B, in this section we consider probability density functions of the form displayed below, where $Z_n \in (0, \infty)$, $n \in \mathbb{N}$, are normalization constants, and the ordered pair of functions $g, h : [0,\infty) \to [0,\infty)$ is admissible in the sense of Definition 2.1. Recall in particular that the function $h$ has a global minimum at a point $x_0 \in \mathbb{R}$. By changing the normalization constant $Z_n$ if necessary, we may assume without loss of generality that $h(x_0) = 0$ and $g(x_0) = 1$. Moreover, since the random variables in the statement of Theorem B are recentered, whenever the statement of Theorem B holds for a density $\frac{1}{Z_n} g(x) e^{-nh(x)}$, it also holds for the rescaled and recentered density $\frac{\lambda}{Z_n} g(\lambda x + \mu) e^{-nh(\lambda x + \mu)}$. In particular, we may assume without loss of generality that the global minimum occurs at zero, i.e. $x_0 = 0$, and that $h''(x_0) = 1$.
In summary, without loss of generality we restrict ourselves to considering densities of the form displayed below, where $Q_n \in (0,\infty)$ is a normalizing constant and where, by the assumptions of Definition 2.1, the functions $r, q : \mathbb{R} \to \mathbb{R}$ have the following properties: first, by part (b) of Definition 2.1 there exists some $\delta > 0$ such that (45) holds, whereas by part (c) there exist constants $\alpha, c, C \in (0,\infty)$ such that (46) holds. Again without loss of generality (since the random variables in the statement of Theorem B are centered), we may change variable $x \mapsto x/\sqrt{n}$, so that for $n \in \mathbb{N}$ we consider densities $J_n : \mathbb{R} \to [0,\infty)$ of the form displayed below, where $D_n \in (0,\infty)$, $n \in \mathbb{N}$, are normalizing constants. For a moment it will be useful to consider the unnormalized function $\tilde J_n(x) := (D_n)^{-1} J_n(x)$. Our next lemma states two things: first, that on a large interval containing the origin, $\tilde J_n$ is within distance $O(1/\sqrt{n})$ of the standard Gaussian density; second, that outside of this interval, $\tilde J_n$ has well-behaved tails. All $O(\cdot)$ terms refer to a constant that may depend on $g$ and $h$ but is independent of $x$ and $n$.

Lemma 4.1. We have the following three bounds.
• For all $|x| \in [0, (\delta\sqrt{n}) \wedge n^{1/6}]$, we have the first bound.
• For all $|x| \in [(\delta\sqrt{n}) \wedge n^{1/6}, \delta\sqrt{n}]$, we have the second bound.
• For all $|x| \in [\delta\sqrt{n}, \infty)$, we have the third bound.

Proof. First we control $\tilde J_n(x)$ for local $x$. With $\delta$ as in (45), we observe that whenever $|x| \le (\delta\sqrt{n}) \wedge n^{1/6}$, we may Taylor expand. Thus, in particular, $e^{-nr(x/\sqrt{n})} = 1 + O(|x|^3/\sqrt{n})$ uniformly for $|x| \le (\delta\sqrt{n}) \wedge n^{1/6}$. Moreover, again by (45), we clearly have $q(x/\sqrt{n}) = O(|x|/\sqrt{n})$ uniformly for $|x| \le (\delta\sqrt{n}) \wedge n^{1/6}$. It follows that uniformly for $|x| \le (\delta\sqrt{n}) \wedge n^{1/6}$ we obtain (48). Moreover, by (45) we have (49). Combining (48) with (49) in the definition of $\tilde J_n$, we obtain (50). In particular, restricting the bound in (50) to $|x| \ge n^{1/6}$, we obtain (51). In particular, since $\frac{5}{4} \cdot \frac{1}{\sqrt{2\pi}} \le 1$, we obtain (47). Finally, we note that for $|x| \ge \delta\sqrt{n}$, the tail bound follows from (46), as required.
Our next result utilizes Lemma 4.1, which stated that the unnormalized function $\tilde J_n$ is close to the Gaussian density, to control the moments of the probability density $J_n(x) = D_n \tilde J_n(x)$.
Proof. By the first point in Lemma 4.1, for k = 0, 1, 2 we have By the second point in Lemma 4.1, for k = 0, 1, 2 we have Finally, by the third point in Lemma 4.1 there is a constant C ∈ (0, ∞) independent of x and n such that for k = 0, 1, 2 we have By changing variable, we now show that the integral on the right-hand side of (53) decays exponentially in n. Indeed, which decays exponentially as n increases. In particular, combining (53) and (54) we obtain It now follows from setting k = 0 in (51), (52) and (55) that The claimed facts about the mean and variance of J n follow from combining (56) with setting k = 1 and k = 2 in (51), (52) and (55).
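To illustrate Lemma 4.2 numerically, one can take the stand-in choice $h(x) = \cosh(x) - 1$ (so that $h(0) = 0$, $h'(0) = 0$ and $h''(0) = 1$) with $g \equiv 1$, and compute the moments of $J_n$ by quadrature. This choice of $(g,h)$ is ours, purely for illustration.

```python
import math

n = 100
sqrt_n = math.sqrt(n)

def h(x):
    # illustrative admissible-type choice: h(0) = 0, h'(0) = 0, h''(0) = 1
    return math.cosh(x) - 1.0

dx = 0.001
xs = [i * dx for i in range(-10_000, 10_001)]
ws = [math.exp(-n * h(x / sqrt_n)) for x in xs]  # unnormalized J_n with g = 1
Z = sum(ws) * dx
mu_n = sum(x * w for x, w in zip(xs, ws)) * dx / Z
var_n = sum((x - mu_n) ** 2 * w for x, w in zip(xs, ws)) * dx / Z
print(mu_n, var_n)
```

The mean vanishes by symmetry, and the variance is $1 + O(1/n)$, in line with the lemma.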
So far, in Lemmas 4.1 and 4.2 we have seen that, roughly speaking, $J_n$ is within $O(1/\sqrt{n})$ of the standard Gaussian density. In the following, we consider a version of $J_n$ corrected to have zero mean and unit variance. Indeed, with $\mu_n$ and $\sigma_n$ as in Lemma 4.2, we define, for all $x \in \mathbb{R}$,
\[ I_n(x) := \sigma_n J_n(\mu_n + \sigma_n x). \]
It is plain from the definition that the moments of orders $k = 0, 1, 2$ are as required. Our next lemma is essentially an analogue of Lemma 4.1 for $I_n$ rather than $J_n$, stating that $I_n$ is close to the Gaussian density on a large interval containing the origin, and that $I_n$ has well-behaved tails. Lemma 4.3. We have the following three bounds.
• For all $|x| \in [0, (\delta\sqrt{n}) \wedge n^{1/6} - 1]$, we have the first bound.
• For all $|x| \in [(\delta\sqrt{n}) \wedge n^{1/6} - 1, \delta\sqrt{n} - 1]$, we have the second bound.

Proof. To prove the first point, note that $\sigma_n$ and $\mu_n$ satisfy the estimates of Lemma 4.2, so that the claim follows from the first part of Lemma 4.1. As for the second point, we note that since the relevant inclusion of intervals holds for sufficiently large $n$, we may use $I_n(x) = \sigma_n D_n \tilde J_n(\mu_n + \sigma_n x)$ in conjunction with the second part of Lemma 4.1.
Finally, the third claim follows quickly from the third claim of Lemma 4.1. To recapitulate the work of this section: we have shown that if $X^n$ is a random variable distributed according to $\rho_n$ as in Equation (11), where $(g, h)$ is an admissible pair, then the normalized variable is distributed according to a probability density $I_n$ that has zero mean and unit variance, and is close to the standard Gaussian density in the sense that Lemma 4.3 holds.
In the next section we utilize the similarity of I n with the Gaussian density in order to show that the characteristic function of X n is similar to that of the standard Gaussian density.

4.2.
Characteristic functions. Our next lemma states that when $n$ is large, the characteristic function of $X^n$ is close to that of the standard Gaussian.

Lemma 4.4. Recall from (57) that $I_n$ is a rescaling of $\rho_n$ with zero mean and unit variance. Let $\varphi_n$ be the associated characteristic function, that is,
\[ \varphi_n(t) := \int_{-\infty}^{\infty} e^{itx} I_n(x)\,dx. \]
Then for a constant $C_{g,h}$ independent of $n$ and $t$ we have the stated bound.

Proof. We begin by expressing $e^{-t^2/2}$ as an integral. Now by Taylor's theorem, there is a function $\theta : \mathbb{R} \to \mathbb{C}$ satisfying $|\theta(u)| \le 1$ for all $u \in \mathbb{R}$ such that the second-order expansion holds for all $t, x \in \mathbb{R}$. In particular, since the mean and variance of $I_n$ agree with those of the standard Gaussian density, i.e., (58) holds, we may compare the two integrals directly. The claim now follows by virtue of Lemma 4.3.
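A quadrature sketch in the same illustrative setting as before ($h(x) = \cosh x - 1$ and $g \equiv 1$, our choice, not the paper's) shows the characteristic function of the standardized density tracking $e^{-t^2/2}$ closely.

```python
import math

n = 100
sqrt_n = math.sqrt(n)
h = lambda x: math.cosh(x) - 1.0  # illustrative admissible-type h

dx = 0.001
xs = [i * dx for i in range(-10_000, 10_001)]
ws = [math.exp(-n * h(x / sqrt_n)) for x in xs]
Z = sum(ws) * dx
mu = sum(x * w for x, w in zip(xs, ws)) * dx / Z
sigma = math.sqrt(sum((x - mu) ** 2 * w for x, w in zip(xs, ws)) * dx / Z)

def phi(t):
    # characteristic function of the standardized density (real by symmetry)
    return sum(math.cos(t * (x - mu) / sigma) * w for x, w in zip(xs, ws)) * dx / Z

for t in (0.5, 1.0, 2.0):
    print(t, abs(phi(t) - math.exp(-t * t / 2)))
```

Since the density is standardized exactly, the discrepancy is of fourth order in $t$ near the origin, and numerically very small at moderate $t$.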
While Lemma 4.4 was concerned with the characteristic function of $X^n$, in our next lemma we look at the characteristic function of the normalized sum
\[ S^n_p := \frac{1}{\sqrt{p}} \sum_{i=1}^{p} X^n_i, \]
where $X^n_1, \ldots, X^n_p$ are independent and identically distributed according to the probability density $I_n$. Roughly speaking, where Lemma 4.4 stated that the characteristic function of $X^n$ is within $O(1/\sqrt{n})$ of the Gaussian characteristic function, our next result states that this bound improves to $O(1/\sqrt{pn})$ when considering a normalized sum of $p$ copies.

Lemma 4.5. Whenever $p \ge 2$, $n \ge 64 e^4 C^2$, and $|t| \le 2\sqrt{p}$, we have the stated bound, where $C = C_{g,h} \in (0,\infty)$ is as in Lemma 4.4.
Proof. Note that whenever $u, v$ are complex numbers such that $|u|, |v| \le a$, we have the bound $|u^p - v^p| \le p\,|u - v|\,a^{p-1}$. With this inequality in mind, let $u := \varphi_n(t/\sqrt{p})$ and $v := e^{-t^2/2p}$. Then, using Lemma 4.4, both $u$ and $v$ are bounded in modulus by $a := e^{-t^2/2p} + C\,\frac{|t|^3}{\sqrt{n}\, p^{3/2}}$. In particular, again using Lemma 4.4 to bound $|u - v|$, we obtain the first estimate. Now provided $|t| \le 2\sqrt{p}$, we may bound the internal term in the exponent. Provided $n \ge 64 e^4 C^2$, we have $1 - \frac{4 e^2 C}{\sqrt{n}} \ge 1/2$. Moreover, whenever $p \ge 2$, $\frac{p-1}{p} \ge 1/2$, so that under these conditions the claimed bound holds. We would ultimately like to use the Berry smoothing inequality to show that $S^n_p$ is within $O(1/\sqrt{pn})$ of a standard Gaussian random variable. To this end, we need control over the characteristic function $\varphi_n(t/\sqrt{p})^p$ of $S^n_p$ in a region of size of order $\sqrt{np}$.
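The elementary inequality $|u^p - v^p| \le p\,|u - v|\,a^{p-1}$ used at the start of the proof (a consequence of the telescoping factorization of $u^p - v^p$) can be spot-checked numerically for random complex inputs:

```python
import random

random.seed(2)
p = 12

def slack(u, v):
    # |u^p - v^p| - p * |u - v| * a^(p-1) with a = max(|u|, |v|)
    a = max(abs(u), abs(v))
    return abs(u ** p - v ** p) - p * abs(u - v) * a ** (p - 1)

worst = max(
    slack(complex(random.uniform(-1, 1), random.uniform(-1, 1)),
          complex(random.uniform(-1, 1), random.uniform(-1, 1)))
    for _ in range(1000)
)
print(worst)  # never (meaningfully) positive
```

The maximum slack stays at or below rounding error, as the factorization argument guarantees.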
Proof. Whenever a density function $f$ is differentiable on $\mathbb{R}$, it is easily verified by integration by parts that $|\hat f(t)| \le \frac{1}{|t|} \int_{-\infty}^{\infty} |f'(x)|\,dx$ for $t \neq 0$. Now since $\rho_n$ has a unique maximum, so does the normalized density $I_n$, and since $D_n = 1 + O(1/\sqrt{n})$, this maximum takes the form $\frac{1}{\sqrt{2\pi}} + O(1/\sqrt{n})$, which is at most $1/2$ for all sufficiently large $n$. Since for a unimodal density $\int |I_n'(x)|\,dx = 2 \max I_n \le 1$, the characteristic function $\varphi_n$ satisfies the inequality $|\varphi_n(t)| \le 1/|t|$ for all $t \in \mathbb{R} \setminus \{0\}$. The result for $\varphi_n(t/\sqrt{p})^p$ follows.
In the next section we complete the proof of Theorem B.

4.3.
Proof of Theorem B. We now prove Theorem B.
Proof of Theorem B. By setting $T = \infty$ in the Berry smoothing inequality (see, e.g., [17, Section 7.4]), the Kolmogorov-Smirnov distance between $S^n_p$ and a standard Gaussian random variable $G$ may be bounded by an integral. Using Lemmas 4.5 and 4.6 to respectively control the integrand inside and outside of $[-2\sqrt{p}, 2\sqrt{p}]$, we obtain a two-part bound. Performing each of the integrals, we find that there is a constant $C \in (0,\infty)$ independent of $n$ and $p$ bounding the distance, completing the proof.

Proof of Theorem C
In this section we prove Theorem C. Let $W^{G,H}_{n,p}$ be the log-volume of a random simplex whose vertices $Y^n_1, \ldots, Y^n_p$ are independent and identically distributed according to $\mu^n$ as in (12). Then, by the distributional equality (5), we obtain the decomposition (60), where the $\log R^n_j$ are independent and identically distributed with the law of $\log \|Y^n\|_2$, where $Y^n \sim \mu^n$. The proof of Theorem C hinges on the idea that both terms on the right-hand side of (60) are close in distribution to a standard Gaussian random variable, and these facts may be synthesized by the following parallelogram-type inequality for Kolmogorov-Smirnov distances.
Lemma 5.1. Let $X, X', Y, Y'$ be independent real-valued random variables. Then (61) holds.

Proof. It is immediate from the definition that Kolmogorov-Smirnov distances satisfy the triangle inequality.
The inequality (61) may be proved by letting $A = X + Y$, $B = X' + Y$ and $C = X' + Y'$, and subsequently using (62) followed by (63).
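For Gaussian shifts, the quantities in Lemma 5.1 are available in closed form via the error function, which permits a direct check of the inequality in a special case of our choosing:

```python
import math

def Phi(s):
    # standard Gaussian CDF via the error function
    return 0.5 * (1.0 + math.erf(s / math.sqrt(2.0)))

def ks_gauss_shift(m, sigma):
    # exact KS distance between N(0, sigma^2) and N(m, sigma^2),
    # attained at the midpoint of the two means
    return 2.0 * Phi(abs(m) / (2.0 * sigma)) - 1.0

# Lemma 5.1 with X = N(0,1), X' = N(a,1), Y = N(0,1), Y' = N(b,1),
# so that X + Y = N(0,2) and X' + Y' = N(a+b, 2):
for a, b in ((0.1, 0.3), (0.5, 0.5), (1.0, 2.0)):
    lhs = ks_gauss_shift(a + b, math.sqrt(2.0))
    rhs = ks_gauss_shift(a, 1.0) + ks_gauss_shift(b, 1.0)
    print(a, b, lhs, rhs)
```

In each case the left-hand side is dominated by the sum on the right, as the lemma asserts.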
Specializing to distances from Gaussian random variables, we have the following corollary.
Corollary 5.2. Let $X, Y$ be independent random variables with zero mean and unit variance. Then for real numbers $\sigma, \tau$ (not both zero) and $N$ a standard Gaussian, we have the stated bound. We are now ready to prove Theorem C.
Proof of Theorem C. We will show that when $p$ and $n$ are large, both terms on the right-hand side of (60) are close in distribution to standard Gaussian random variables. Indeed, considering the sum over $j$ first, it follows from the polar integration formula that for $r > 0$ we have
\[ \mathbb{P}\big( \|Y^n_1\|_2 \in dr \big) = C_n\, n\, r^{n-1} G(r)\, e^{-nH(r)}\, dr \]
for some constant $C_n \in (0,\infty)$. A change of variables then verifies that $\log \|Y^n_1\|_2$ is distributed according to the probability measure on $\mathbb{R}$ whose density function is proportional to $g(r) e^{-n h(r)}$, where we recall that $g(r) = G(e^r)$ and $h(r) = H(e^r) - r$.
In particular, since $(G, H)$ is radially admissible, the pair $(g, h)$ is admissible, so that Theorem B applies. Hence there is a constant $C_{G,H} \in (0,\infty)$ and an $n_0 \in \mathbb{N}$ depending on $(G, H)$ such that for all $n \ge n_0$ we have (64). On the other hand, using Theorem A we have (65). We may then combine (64) and (65) by means of Corollary 5.2. The result follows from the observation that the former bound is finer than the latter; that is, for all $1 \le p \le n$ there is a constant $C \in (0,\infty)$ such that, with $\theta = \frac{p-1}{n}$, the error term of order $\frac{1}{\sqrt{pn}}$ is absorbed. That completes the proof of Theorem C.

Proof of Theorem G
In this section we provide a direct proof of Theorem G, which states that if $A^n$ is an $n \times n$ matrix with independent standard Gaussian entries, then the normalized log-determinant satisfies a Berry-Esseen-type bound. With the exception of a few definitions and bounds relating to the polygamma functions that we import from Section 3.3, this section is independent of the remainder of the paper, though several parts run closely parallel to ideas seen in Section 3. Now let $A^n$ be an $n \times n$ matrix whose entries are independent standard Gaussian random variables. The starting point of our analysis is the well-known identity in law, dating back to Goodman [24],
\[ \det(A^n)^2 \stackrel{d}{=} 2^n \prod_{j=1}^{n} R_{j/2}, \]
where $R_{1/2}, \ldots, R_{n/2}$ are independent random variables such that $R_{j/2}$ has the gamma distribution with shape parameter $j/2$ and unit scale parameter. Taking logarithms in (66), we may express the logarithm of $|\det(A^n)|$ in terms of an independent sum of log-gamma random variables:
\[ \log |\det(A^n)| \stackrel{d}{=} \frac{n}{2} \log 2 + \frac{1}{2} \sum_{j=1}^{n} \log R_{j/2}. \]
We now compute the characteristic function of a normalized log-gamma random variable. The following lemma is an analogue of Lemma 3.7 with the log-gamma random variable in place of the log-beta random variable. Since the proof is rather similar, and simpler, we will be content to sketch just a few key details.
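Under the reading of (66) adopted above (so that $\det(A^n)^2$ has the law of $2^n \prod_j R_{j/2}$, equivalently a product of chi-square variables), the identity can be probed by Monte Carlo for $n = 3$; the sketch below compares the empirical means of $\log \det(A^n)^2$ under the two descriptions.

```python
import math
import random

random.seed(3)
n, N = 3, 100_000

def det3(m):
    # cofactor expansion of a 3x3 determinant
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

lhs = rhs = 0.0
for _ in range(N):
    A = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    lhs += math.log(det3(A) ** 2)  # E[log det(A)^2], sampled directly
    rhs += n * math.log(2.0) + sum(math.log(random.gammavariate(j / 2, 1.0))
                                   for j in range(1, n + 1))
lhs /= N
rhs /= N
print(lhs, rhs)
```

The two empirical means agree to within Monte Carlo error, consistent with the identity in law.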
Lemma 6.1. Let $W_\lambda$ be gamma distributed with shape parameter $\lambda > 0$, and define the recentered and normalized variable
\[ V_\lambda := \frac{\log W_\lambda - \psi_0(\lambda)}{\sqrt{\psi_1(\lambda)}}. \]
Then, for all $t \in \mathbb{R}$,

Proof. A basic calculation tells us that
In particular, pulling out a factor of $t^2/2$ from the integrand to obtain the first inequality below, and packaging the difference as an integral to obtain the second, we arrive at the desired bound, completing the proof of Lemma 6.1.
We now note that if $\phi_n(t)$ is the characteristic function of the recentered and normalized $\log |\det(A^n)|$, then using (67) it factorizes as a product, where the $\varphi_{j/2}$ are defined as in Lemma 6.1 and $S_n^2 := \sum_{k=1}^{n} \psi_1(k/2)$. Our next lemma expresses how close the recentered and normalized $\log |\det(A^n)|$ is to a standard Gaussian random variable.
Proof. Using (68) in conjunction with Lemma 6.1, we obtain a first bound. Using (29), and the fact that the simplex $\{(t_1, t_2, t_3) \in \mathbb{R}^3 : 0 < t_3 < t_2 < t_1 < a\}$ has volume $a^3/6$, we have $\big| \log \phi_n(t) + t^2/2 \big| \le \frac{1}{6} \cdots$. Now using the lower bound in (28), we have $S_n^2 = \sum_{k=1}^{n} \psi_1(k/2) \ge \sum_{k=1}^{n} 2/k \ge 2 \log n$. Using the fact that $\frac{32\pi^2}{36} \cdot \frac{1}{2^{3/2}} \le 4$, the result follows. We are now equipped to prove a version of Theorem G with implicit means and variances.

Theorem 6.3. Let $A^n$ be an $n \times n$ matrix with independent standard Gaussian entries. Then there is an absolute constant $C \in (0,\infty)$ such that
\[ d_{KS}\left( \frac{\log \det(A^n) - \mathbb{E}[\log \det(A^n)]}{\sqrt{\mathrm{Var}[\log \det(A^n)]}},\; N \right) \le C/\log^{3/2} n. \]
In order to complete the proof of Theorem G, we require fine estimates on the mean and variance of log det(A n ). To this end we have the following lemma.

Proof. Recall that
We begin with the integral formula where It is easily verified that and that moreover there is a universal constant C ∈ (0, ∞) such that We turn to the proof of (69), which is similar. Recall first that Differentiating through (71) and using the identity 1 z 2 = ∞ 0 t e −zt dt, we have In particular, It is easily verified that and that moreover there is a universal constant C ∈ (0, ∞) such that Again using the fact that n j=1 1/j = log n + γ + O(1/n), the second equation follows. We are almost ready to prove Theorem G from its implicit version, Theorem 6.3. The final tool in sewing our work together is the following lemma, the proof of which we relegate to the appendix. Lemma 6.5. Let σ,σ > 0 and µ,μ ∈ R. Assume that X is a random variable such that d KS ((X−µ)/σ, N ) ≤ ε, where N is a standard Gaussian. Then it holds We now prove Theorem G.
Proof of Theorem G. The proof of Theorem G follows immediately from combining Theorem 6.3 with Lemmas 6.4 and 6.5.
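As a numerical companion to the variance estimates above (a sketch under our reading of (67), by which $\mathrm{Var}[\log|\det(A^n)|] = \frac{1}{4}\sum_{j=1}^{n} \psi_1(j/2)$), one can check that this variance minus $\frac{1}{2}\log n$ stabilizes as $n$ grows, consistent with the constant $c_1$ appearing in Theorem G.

```python
import math

def psi1(x, h=1e-3):
    # trigamma via a second central difference of math.lgamma (illustrative)
    return (math.lgamma(x + h) - 2 * math.lgamma(x) + math.lgamma(x - h)) / h**2

def var_logdet(n):
    # (1/4) * sum_{j=1}^{n} psi_1(j/2): the variance under our reading of (67)
    return 0.25 * sum(psi1(j / 2.0) for j in range(1, n + 1))

c = {n: var_logdet(n) - 0.5 * math.log(n) for n in (250, 500, 1000)}
print(c)  # the three values are nearly identical
```

The successive differences between the three values are tiny, reflecting the $O(1/n)$ convergence of the constant.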

Proofs of Theorems D, E and F
We use the notation $R_i = \|Y^n_i\|_2$, $\widetilde R_i = \log R_i$ and $\bar R_i = \log R_i - \mathbb{E}[\log R_i]$, as well as $\widetilde\beta_{i/2,j/2} = \log \beta_{i/2,j/2}$ and $\bar\beta_{i/2,j/2} = \log \beta_{i/2,j/2} - \mathbb{E}[\log \beta_{i/2,j/2}]$.
All limits and asymptotic equivalences in this section are for n → ∞ unless stated otherwise.
From [25, Theorem 3.1] and its proof we obtain the following lemma.
Lemma 7.1. If $(\beta_{i/2,j/2})_{i,j \in \mathbb{N}}$ are independent random variables such that $\beta_{i/2,j/2}$ is $\mathrm{Beta}(i/2, j/2)$ distributed and $p = p_n \to \infty$ is an integer sequence, then the stated convergence holds.

Proof. Let $X_1, \ldots, X_{p+1}$ be independent random points in $\mathbb{R}^n$ that are uniformly distributed on the sphere of radius $1$ centered at the origin of $\mathbb{R}^n$. Let $V_{n,p}$ denote the $p$-dimensional volume of the simplex with vertices $X_1, \ldots, X_{p+1}$. Then we have by Theorem 2.5(d) in [25] the identity (73), where the random variable $\xi \sim \mathrm{Beta}(n/2, p(n-2)/2)$ is independent of everything else. As in [25], we set $L_{n,p} = \log(p!\, V_{n,p})$. Taking logarithms in (73), we obtain a decomposition in which the leading term is a sum of independent random variables; an application of Theorem B.1 shows that $T_n$ converges in distribution to a standard normal variable $N$ as $n \to \infty$. In conjunction with (74), the desired result (18) follows.

7.2. Proof of Theorem E. Define
\[ c_n = a_n + \int_{-\infty}^{\infty} \frac{x}{1 + x^2}\; d\mathbb{P}\big( R^{(n)} \le \sigma_n (x + a_n) \big). \]
From (5) we obtain a decomposition, and we treat the terms on the right-hand side separately. In view of (7) and (8), and observing that by Lemma 7.1 we have $\mathrm{Var}[W^{\mathrm{Sph}}_{n,p}]/\omega_n \to 1$, we may combine this with Theorem A to get that $Z_{n,p} \stackrel{d}{\to} N$ as $n \to \infty$ for a standard normal random variable $N$. Using $\omega_n/\sigma_n \to 0$, we conclude the corresponding convergence. By virtue of Slutsky's theorem (see, e.g., [13]) and (75), it remains to show (76), where the limit random variable $Z_\alpha = Z_\alpha(c_1, c_2)$ has the characteristic function (19). Since $p \to \infty$, condition (85) is satisfied, so that an application of Theorem B.2 proves (76). The proof of Theorem E is now complete.
7.3. Proof of Theorem F. Recall the notation from the proof of Theorem E. Using $\omega_n/\sigma_n \to q \in (0,\infty)$, we see that
\[ \int \frac{1}{\sqrt{2\pi\tau^2}}\, e^{-u^2/2\tau^2}\, du \le \frac{3}{8}. \]

Proof. For a proof of (81), see Lemma 2.5 and Proposition 2.6 of [33]. The KS distance between two Gaussians with the same variance but different means may be bounded as follows.
Lemma A.2. Let $x > 0$. Then the KS distance between two unit-variance Gaussian random variables, one with mean $x$ and the other with mean $0$, is bounded above by $x$. Expanding the power series for $\sinh$ and using the triangle inequality, this is bounded further, where $N$ is a standard Gaussian. The proof concludes with an application of Jensen's inequality.
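Lemma A.2 can be verified directly for a few shifts: for Gaussians of unit variance, the KS distance between $N(x,1)$ and $N(0,1)$ equals $2\Phi(x/2) - 1$ (attained at $s = x/2$), which is indeed at most $x$. A grid-based sketch:

```python
import math

def Phi(s):
    # standard Gaussian CDF via the error function
    return 0.5 * (1.0 + math.erf(s / math.sqrt(2.0)))

def ks_shifted(x, grid=4000, lo=-8.0, hi=8.0):
    # sup over a fine grid of |Phi(s - x) - Phi(s)|
    step = (hi - lo) / grid
    return max(abs(Phi(lo + i * step - x) - Phi(lo + i * step))
               for i in range(grid + 1))

for x in (0.05, 0.2, 1.0):
    d = ks_shifted(x)
    print(x, d, 2.0 * Phi(x / 2.0) - 1.0)  # the two values agree, and d <= x
```

For small $x$ the distance behaves like $x/\sqrt{2\pi}$, which is the sharper constant behind the crude bound of the lemma.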
Proof. Without loss of generality we may restrict ourselves to the case $\sigma_n = 1$; otherwise replace $X_{nk}$ with $X_{nk}/\sigma_n$. For $\sigma_n = 1$, and noting that (85) is Petrov's so-called infinite smallness condition, [49, Theorem IV.2.8] yields the existence of a sequence of constants $b_n$ such that $\sum_{k=1}^{k_n} X_{nk} - b_n$ converges in distribution to an infinitely divisible random variable $Z_\alpha$ with Lévy spectral function $L(x) = c_1 |x|^{-\alpha} 1_{\{x<0\}} - c_2 x^{-\alpha} 1_{\{x>0\}}$ for $x \in \mathbb{R}$. By [49, Theorem IV.2.5], $b_n$ may be chosen as in (86). From the form of $L(x)$ we can deduce by [49, Theorem IV.3.11] that the limit variable $Z_\alpha$ has a stable distribution with characteristic function as displayed, where $L$ is the Lévy spectral function from above. Finally, by parts (i) and (iv) of [32, Theorem 3.3] (with $c_2 = c_+$ and $c_1 = c_-$), this expression equals the right-hand side in (87). We mention that an alternative proof of the last step can be furnished by using [