Fortuin-Kasteleyn representations for threshold Gaussian and stable vectors



Abstract
We study the question of when a {0, 1}-valued threshold process associated to a mean zero Gaussian or a symmetric stable vector corresponds to a divide and color (DC) process. This means that the process corresponding to fixing a threshold level h and letting a 1 correspond to the variable being larger than h arises from a random partition of the index set followed by coloring all elements in each partition element 1 or 0 with probabilities p and 1 − p, independently for different partition elements. For example, the Ising model with zero external field (as well as a number of other well known processes) is a DC process where the random partition corresponds to the famous FK-random cluster model.
We first determine in general the exact lack of uniqueness of the representing random partition. This amounts to determining the dimensions of the kernels of a certain family of linear operators.
We obtain various results in both the Gaussian and symmetric stable cases. For example, it turns out that all discrete Gaussian free fields yield a DC process when the threshold is zero; this follows quite easily from known facts. For general n-dimensional mean zero, variance one Gaussian vectors with nonnegative covariances, the zero-threshold process is always a DC process for n = 3 but this is false for n = 4.
The answers in general are quite different depending on whether the threshold level h is zero or not. We show that there is no general monotonicity in h in either direction. We also show that all discrete Gaussian free fields yield DC processes for large thresholds. Moving to n = 3, we characterize exactly which mean zero, variance one Gaussian vectors yield DC processes for large h.
In the stable case, among other results, if we stick to the simplest case of a permutation invariant, symmetric stable vector with three variables, we obtain a phase transition in the stability exponent α at the surprising point 1/2; if the index of stability is larger than 1/2, then the process yields a DC process for large h while if the index of stability is smaller than 1/2, then this is not the case.

1 Introduction, notation, summary of results and background

Introduction
A very simple mechanism for constructing random variables with a (positive) dependency structure is the so-called generalized divide and color model introduced in its general form in [12] but having already arisen in many different contexts.
Definition 1.1. A {0, 1}-valued process X := (X_i)_{i∈S} is a generalized divide and color model or color process if X can be generated as follows. First choose a random partition π of S according to some arbitrary distribution, and then, independently of this and independently for different partition elements in the random partition, assign, with probability p, all the variables in a partition element the value 1 and, with probability 1 − p, assign them all the value 0. This final {0, 1}-valued process is then called the color process associated to π and p. We also say that (π, p) is a color representation of X.
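Since Definition 1.1 is purely mechanistic, it is easy to simulate. The following sketch (our illustration in Python; the function name and the 1-based block encoding are our choices, not the paper's) colors a fixed, deterministic partition; in general the partition π would itself be drawn at random first.

```python
import random

def color_process(partition, p, rng=random):
    """Sample a color process on [n] from a (here deterministic)
    partition, given as a list of blocks of 1-based indices: each
    block is independently colored 1 with probability p and 0
    otherwise, and every index in a block receives the block's color."""
    n = sum(len(block) for block in partition)
    x = [0] * n
    for block in partition:
        color = 1 if rng.random() < p else 0
        for i in block:
            x[i - 1] = color  # indices in [n] are 1-based
    return x

# Indices 1 and 2 share a block, so any sample has X_1 = X_2:
sample = color_process([[1, 2], [3]], p=0.5)
```

Note that positive pairwise correlations come for free from this mechanism: coordinates in a common block always agree.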
As detailed in [12], many processes in probability theory are color processes; examples are the Ising model with zero external field, the fuzzy Potts model with zero external field, the stationary distributions for the Voter Model and random walk in random scenery.
While the distribution of a color process certainly determines p, it does not in general determine the distribution of π. This was seen in small cases in [12], and this lack of uniqueness will essentially be completely characterized in Section 2.
Since the dependency mechanism in a color process is so simple, it seems natural to ask which {0, 1}-valued processes fall into this context. We mention that it is trivial to see that any color process has nonnegative pairwise correlations and so this is a trivial necessary condition.
In this paper, our main goal is to study the question of which threshold Gaussian and threshold stable processes fall into this context. More precisely, in the Gaussian situation, we ask the following question. Given a set of random variables (X_i)_{i∈I} which is jointly Gaussian with mean zero, and given h ∈ R, is the {0, 1}-valued process (X_i^h)_{i∈I} defined by X_i^h := 1_{X_i > h} a color process? In the stable situation, we simply replace the Gaussian assumption by (X_i)_{i∈I} having a symmetric stable distribution. (We will review the necessary background concerning stable distributions in Subsection 1.4.) For the very special case that I is infinite, h = 0 and the process is exchangeable, this question was answered positively, both in the Gaussian and stable cases, in [12]. As we will see later (see Theorem 1.15), the set of threshold stable vectors is a much richer class than the set of threshold Gaussian vectors. As such, it is reasonable to study both classes. Since all the marginals in a color process are necessarily equal, if h ≠ 0, then a necessary condition in the Gaussian case for (X_i^h)_{i∈I} to be a color process is that all the X_i's have the same variance. Therefore, when considering h ≠ 0, we will assume that all the X_i's have variance one. However, it will be convenient not to make this latter assumption when considering h = 0. For the stable case, we will simply assume that all the marginals are the same.
It has been seen in [12] that p = 1/2 and p ≠ 1/2 (corresponding to h = 0 and h ≠ 0 in the Gaussian setting) behave very differently, generally speaking. We will continue to see this difference throughout this paper.
We finally note that the questions looked at here significantly differ from those studied in [12]. In the latter paper, one looked at what types of behavior (ergodic, stochastic domination, etc.) color processes possess while in the present paper, we analyze which random vectors (primarily among threshold Gaussian and threshold stable vectors) are in fact color processes.

Notation and some standard assumptions
Given a set S, we let B_S denote the collection of partitions of the set S. We denote {1, 2, 3, . . . , n} by [n] and if S = [n], we write B_n for B_S. |B_n| is called the nth Bell number. We denote by P_n the set of partitions of the integer n.
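As an aside, the Bell numbers |B_n|, which control the size of the linear-algebra problems appearing in Section 2, satisfy the standard recurrence B_{m+1} = Σ_k C(m, k) B_k; a small sketch (our illustration, not part of the text):

```python
from math import comb

def bell(n):
    """Compute |B_n|, the number of partitions of [n], via the
    recurrence B_{m+1} = sum_k C(m, k) * B_k with B_0 = 1."""
    b = [1]  # b[k] holds B_k
    for m in range(n):
        b.append(sum(comb(m, k) * b[k] for k in range(m + 1)))
    return b[n]

# e.g. |B_3| = 5 and |B_4| = 15, the sizes relevant for n = 3, 4 below
```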
While perhaps not standard terminology, we adopt the following definition.
Definition 1.2. We call a Gaussian vector standard if each marginal has mean zero and variance one.
Standing assumption. Whenever we consider a Gaussian or symmetric stable vector, we will assume it is nondegenerate in the sense that for all i ≠ j, P(X_i = X_j) < 1. Some further notation which we will use is the following.
ν_{0^k}(h) or ν_h(0^k), as an illustration, will denote, given a Gaussian or stable vector (X_1, . . . , X_n), the probability that the h-threshold process is identically zero in the first k coordinates; i.e., ν_h(0^k) = P(X_1 ≤ h, X_2 ≤ h, . . . , X_k ≤ h). More generally, for h ∈ R, we let ν_h = L(X^h).
q_{13,2}, as an illustration, will denote, given a random partition with n = 3, the probability that 1 and 3 are in the same partition element and 2 is in its own partition element.
If we have a partition of a set of more than three elements, q_{13,2} will then mean the above but with regard to the induced (marginal) random partition of {1, 2, 3}.
N(0, A) will denote a Gaussian vector with mean zero and covariance matrix A.

Description of results
In Section 2, the linear operator A_{p,n} (introduced in the general set up below) is studied in detail. The range is characterized and we obtain formulas for the rank and hence the nullity in all cases. The formula differs depending on whether p = 1/2 or p ≠ 1/2. The notions of a formal solution and a nonnegative solution will be given in that section. A nontrivial kernel of A_{p,n} corresponds exactly to nonuniqueness of a formal solution. A nontrivial kernel is also very closely related to, but not exactly the same as, nonuniqueness of a nonnegative solution. Nonuniqueness of a nonnegative solution corresponds exactly to a color process arising from more than one random partition. We then restrict to the situation where we have invariance under all permutations of [n] and we again characterize the range and give the rank and hence the nullity of the appropriate operator.
In Section 3, we present positive results concerning the question of the existence of a color representation in the threshold zero case for discrete Gaussian free fields and, more generally, for Gaussian vectors whose covariance matrices are so-called inverse Stieltjes, meaning that the off-diagonal elements of the inverse covariance matrix are nonpositive. (See Subsubsection 3.2.1 for the definition of a discrete Gaussian free field.) This essentially follows from the known fact that the signs of a discrete Gaussian free field, conditioned on its absolute values, form an Ising model with nonnegative interaction constants depending on the conditioned absolute values. The latter fact was observed in [6]. However, it turns out that a threshold zero Gaussian process can be a color process even if its covariance matrix is not inverse Stieltjes. We also relate the class of inverse Stieltjes vectors to the set of tree-indexed Gaussian Markov chains.
In Section 4, we provide an alternative proof that threshold zero tree-indexed Gaussian Markov chains are color processes using the Ornstein-Uhlenbeck process. This proof has the advantage that the method leads to our first result for stable vectors, namely that a threshold zero tree-indexed symmetric stable Markov chain is also a color process; in this case, we however use subordinators.
In Section 5, we view our Gaussian vectors from a more geometric perspective and obtain a number of negative (and some positive) results for thresholds h ≠ 0. In this section, we will obtain our first example where we have a nontrivial phase transition in h. This will be elaborated on in more detail in Theorem 5.9, but we state here what is perhaps the main import of that result.

Theorem 1.3. There exists a four-dimensional standard Gaussian vector X so that X^h is a color process for small positive h but is not a color process for large h.
Remark 1.4. Given the above, it is natural to ponder the possible monotonicity properties in h. Proposition 5.5 implies that there is no three-dimensional Gaussian vector with such a phase transition among those that are not fully supported, while simulations indicate that there is also no fully supported three-dimensional Gaussian vector with such a phase transition. On the other hand, Corollary 8.5(iii) tells us that there are three-dimensional Gaussian vectors which are not color processes for small h but are color processes for large h. This, together with the previous result, rules out any type of monotonicity in either direction. Perhaps, however, monotonicity holds (in one direction) for fully supported vectors. See the open questions section.
Returning to the threshold zero case, we recall that Proposition 2.12 in [12] implies that for any three-dimensional Gaussian vector with nonnegative correlations, the corresponding zero threshold process is a color process. Our next result says that this is not necessarily the case for four-dimensional Gaussian vectors.
Theorem 1.5. There exists a four-dimensional standard Gaussian vector X with nonnegative correlations so that X^0 is not a color process. X can be taken to either be fully supported or not.
In Section 6, we extend the study of the example given in the proof of the previous theorem to the stable case and derive as a consequence some surprising properties of integrals, which we state here. Two ingredients used in the proof are results from [8] and [12].
Theorem 1.7. If X := (X_1, X_2, . . . , X_n) is a discrete Gaussian free field which is standard, then X^h is a color process for all sufficiently large h.
In Section 8, we obtain detailed results concerning the existence of a color representation when the threshold h → 0 and when h → ∞ in the general Gaussian case when n = 3. In the fully supported case, we have the following result, which gives an exact characterization of which Gaussian vectors have a color representation for large h. Note that if two of the covariances are zero, then we trivially have a color representation for all h.

Theorem 1.8. Let X be a fully supported three-dimensional standard Gaussian vector with Cov(X_i, X_j) = a_ij ∈ [0, 1) for 1 ≤ i < j ≤ 3 and positive definite covariance matrix A = (a_ij).
If a_ij > 0 for all i < j, then X^h has a color representation for sufficiently large h if and only if one of the following (nonoverlapping) conditions holds.
Furthermore, if exactly one of the covariances is equal to zero, then X^h does not have a color representation for large h.
The assumption in (i) of Theorem 1.8, i.e. that 1^T A^{-1} > 0, is sometimes called the Savage condition (with respect to the vector 1 = (1, 1, . . . , 1)). When A = (a_ij) is the covariance matrix of a (nontrivial) two-dimensional standard Gaussian vector, then (1^T A^{-1})(1) = (1^T A^{-1})(2) = (1 + a_12)^{-1} > 0, and hence the Savage condition always holds in this case. If A = (a_ij) is the covariance matrix of a three-dimensional standard Gaussian vector, then one can compute 1^T A^{-1} explicitly, and it follows that the Savage condition holds if and only if an explicit inequality in the a_ij's holds. When 1^T A^{-1} ≥ 0, we will refer to this as the weak Savage condition. This for example holds for all discrete Gaussian free fields.

The rest of the results we describe in this section concern the stable (non-Gaussian) case. To justify their study, we would want to know that the collection of threshold processes which can be obtained from stable vectors is not the same as those which can be obtained from Gaussian vectors. In fact, as described later on in this subsection, the former is a much larger class, thereby justifying the study of the stable case.
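The two-dimensional computation of 1^T A^{-1} quoted above is easy to verify in exact arithmetic; the following sketch (our code; the function name is ours) confirms that each coordinate equals (1 + a_12)^{-1}.

```python
from fractions import Fraction

def savage_vector_2d(a12):
    """For A = [[1, a12], [a12, 1]], return 1^T A^{-1} as a pair.
    Since A^{-1} = (1/(1 - a12^2)) * [[1, -a12], [-a12, 1]], each
    coordinate should equal (1 - a12)/(1 - a12^2) = 1/(1 + a12)."""
    a12 = Fraction(a12)
    det = 1 - a12 ** 2
    inv = [[1 / det, -a12 / det],
           [-a12 / det, 1 / det]]
    # coordinates of 1^T A^{-1} are the column sums of A^{-1}
    return tuple(inv[0][j] + inv[1][j] for j in range(2))
```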
In Section 9, we first look at the n = 2 case. While it is trivial that having a color representation is equivalent to having a nonnegative correlation when n = 2, in the stable case it is not obvious, even when n = 2, which spectral measures yield a threshold vector with a nonnegative correlation. This contrasts with the Gaussian case where nonnegative correlation in the threshold process is simply equivalent to the Gaussian vector having a nonnegative correlation.
The following result from [10] is relevant in this context. Theorem 4.6.1 in [10] and its proof yield part (i), and Theorem 4.4.1 in [10] (see also (4.4.2) on p. 188 there) easily yields part (ii). We denote the standard one-dimensional symmetric α-stable distribution with scale one by S_α(1, 0, 0); see the next subsection for precise definitions.

Theorem 1.9. Let α ∈ (0, 2) and let X be a symmetric 2-dimensional α-stable random vector with spectral measure Λ, which is such that both marginals have distribution S_α(1, 0, 0). Then the following hold.
(i) Λ has support only in the first and third quadrants if and only if X_1^{h_1} and X_2^{h_2} are nonnegatively correlated for all h_1, h_2 ∈ R.
(ii) If Λ has some support strictly inside the first quadrant, then X_1^h and X_2^h have strictly positive correlation for all sufficiently large h.
Interestingly, (i) is not equivalent to having X_1^h and X_2^h nonnegatively correlated for all h ∈ R. While we have not obtained a characterization of this in terms of the spectral measure, the following natural example shows that one does not need to have the spectral measure supported only in the first and third quadrants.

Proposition 1.10. Let S_1, S_2 ∼ S_α(1, 0, 0) be independent and let a ∈ (0, 1). Set (This ensures that X_1, X_2 ∼ S_α(1, 0, 0).) Then the following are equivalent.
(i) X_1^0 and X_2^0 have nonnegative correlation.

We now study the question of the existence of a color representation in the symmetric stable case when h → ∞. Our first result shows that there is a fairly large class for which the answer is affirmative, and here the method of proof comes from that used in Theorem 1.7.

Theorem 1.11. Let X be a symmetric stable vector whose spectral measure has some support properly inside each orthant. Furthermore, assume that where x_(2) denotes the second largest coordinate of the vector x. Then X^h is a color process for all sufficiently large h.
The integral condition in (3) will hold for example if the spectral measure is supported sufficiently close to the coordinate axes.
Next, we surprisingly obtain, in the simplest nontrivial stable vector with n = 3, a certain phase transition in the stability exponent where the critical point is α = 1/2. We state it here although the relevant definitions will be given later on.

Theorem 1.12. Let α ∈ (0, 2) and let S_0, S_1, S_2, S_3 be i.i.d., each with distribution S_α(1, 0, 0). Furthermore, let a ∈ (0, 1) and, for i = 1, 2, 3, define . (X_α is then a symmetric α-stable vector which is invariant under permutations; it is one of the simplest such vectors other than an i.i.d. process.)

(i) If α > 1/2, then X^h is a color process for all sufficiently large h.
(ii) If α < 1/2, then X h is not a color process for any sufficiently large h.
The critical value of 1/2 above is independent of the parameter a, as long as a ∈ (0, 1). If we however move to a family which has two parameters, but is still symmetric and transitive, we can obtain a phase transition at any point in (0, 2).
(i) If c_2 ≤ c_1, then, for all α ∈ (c_1, 2), X_α^h is a color process for all sufficiently large h.
(ii) If c_2 ≥ 2, then, for all α ∈ (c_1, 2), X_α^h is not a color process for any sufficiently large h.
(iii) If c_2 ∈ (c_1, 2), then, for all α ∈ (c_1, c_2), X_α^h is not a color process for any sufficiently large h, while for all α ∈ (c_2, 2), X_α^h is a color process for all sufficiently large h.
In particular, for any α_c ∈ (0, 2) and < α_c, we can choose a and b so that c_1 = and c_2 = α_c, in which case X_α is defined for all α ∈ ( , 2) and the question of whether the large h threshold is a color process has a phase transition at α_c.

Remark 1.14. The case a > b = 0, which is not included above, corresponds to the fully symmetric case studied in Theorem 1.12.
In Section 10, we look at a somewhat different question. We ask which {0, 1}-valued random vectors (X 1 , . . . , X n ), which are {0, 1}-symmetric, arise as the zero threshold of a Gaussian or stable vector. The answer turns out to be completely different in the two cases, which perhaps is not surprising since the stable vectors have a much larger parameter space.
We first point out that given any symmetric n-dimensional Gaussian vector X and any α ∈ (0, 2), there is an n-dimensional α-stable symmetric vector Y so that X^0 and Y^0 have the same distribution. Given X, we simply let Y := W^{1/2} X where W ∼ S_{α/2}((cos(πα/4))^{2/α}, 1, 0). By [10], p. 78, Y is an α-stable symmetric vector, and clearly X^0 and Y^0 have the same distribution since W ≥ 0 as α/2 < 1. In fact, more generally, if we let TD_{α,n} be the set of distributions on {0, 1}^n which can be obtained by taking the zero threshold of some symmetric α-stable random vector, then for any α' > α we have that TD_{α',n} ⊆ TD_{α,n}.

Next, we give a simple example of a {0, 1}-symmetric process with n = 4 which cannot be represented by a zero-threshold Gaussian process. Let (X_1, X_2, X_3, X_4) be uniformly distributed on the set of eight configurations which have three 1's and one −1 or three −1's and one 1. One quickly checks that (X_1, X_2, X_3, X_4) is pairwise independent. Hence if it corresponded to a zero-threshold Gaussian process, all of the six correlations would have to be zero, in which case (X_1, X_2, X_3, X_4) would be i.i.d., a contradiction.
The above process is not a color process, but there are even color processes which cannot be represented by a zero-threshold Gaussian process when n = 4, based on dimension counting. The space of standard Gaussian vectors with n = 4 is 6-dimensional. On the other hand, Theorem 2.2(ii) implies that the set of color processes is 7-dimensional when n = 4.
When n = 2, it is clear that all {0, 1}-symmetric processes arise from some zero-threshold Gaussian vector. When n = 3, the relevant sets are each 3-dimensional and one can in fact show that, unlike when n = 4, the set of {0, 1}-symmetric processes corresponds bijectively to the set of zero-threshold Gaussian processes.
For stable vectors, the situation is completely different as stated in the following theorem.
Theorem 1.15. Given any probability measure µ on {0, 1}^n and any ε > 0, there exists an n-dimensional stable vector X such that the distribution of X^0 is within ε of µ. If desired, we could scale things so that each marginal has scale one. Moreover, if µ has a {0, 1}-symmetry, then X can be taken to be symmetric stable.
Finally, we mention the following interesting application of Theorem 1.15 which again distinguishes stable vectors from Gaussian vectors. We know this result is false for n = 3 in the sense that if n = 3 then α_c = 0.

Theorem 1.16. (i) For all n ≥ 4, there exists a critical value α_c(n) ∈ (0, 2] so that for all α ∈ (α_c(n), 2), the zero-threshold process of any fully transitive and symmetric α-stable vector of length n whose marginals have scale one is a color process provided the threshold process has pairwise nonnegative correlations, while for all α ∈ (0, α_c(n)), there exists a fully transitive symmetric α-stable vector of length n whose marginals have scale one and whose zero-threshold process has pairwise strictly positive correlations but does not have a color representation.
(ii) For all n ≥ 4 and ε > 0, there exists α_c(n, ε) < 2 such that for all α ∈ (α_c(n, ε), 2), the zero-threshold process of any fully transitive and symmetric α-stable vector of length n, whose marginals have scale one and whose distance (in some fixed metric) to an i.i.d. standard normal vector is at least ε, is a color process provided the threshold process has pairwise nonnegative correlations.

Background on symmetric stable vectors
We refer the reader to [10] for the theory of stable distributions and will just present here the background needed for our results.
Definition 1.17. A random vector X := (X_i)_{1≤i≤d} in R^d has a stable distribution if for all n, there exist a_n > 0 and b_n ∈ R^d so that if X^{(1)}, . . . , X^{(n)} are n i.i.d. copies of X, then X^{(1)} + · · · + X^{(n)} =_D a_n X + b_n.
See [10] for the formula when α = 1. One should be careful and keep in mind that different authors use different parameterizations for the family of stable distributions. Throughout this paper, we will only consider symmetric stable random variables, corresponding to β = µ = 0, and sometimes assume σ = 1. The above then simplifies to a random variable having distribution S_α(σ, 0, 0), which means its characteristic function is f(θ) = e^{−σ^α |θ|^α}. In the symmetric case, this formula is also valid for α = 1.
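As a hedged numerical illustration (simulation is not discussed in the paper), the characteristic function e^{−σ^α|θ|^α} with σ = 1 can be checked against the standard Chambers-Mallows-Stuck sampler for symmetric α-stable variables:

```python
import math
import random

def sample_sas(alpha, rng=random):
    """Chambers-Mallows-Stuck sampler for S_alpha(1, 0, 0), the
    symmetric alpha-stable law with scale one (0 < alpha <= 2)."""
    v = (rng.random() - 0.5) * math.pi          # Uniform(-pi/2, pi/2)
    w = -math.log(rng.random())                  # Exponential(1)
    if alpha == 1.0:
        return math.tan(v)                       # standard Cauchy
    return (math.sin(alpha * v) / math.cos(v) ** (1 / alpha)
            * (math.cos((1 - alpha) * v) / w) ** ((1 - alpha) / alpha))

def empirical_cf(alpha, theta, n=200_000, seed=0):
    """Estimate E[cos(theta * X)], which should be exp(-|theta|^alpha)."""
    rng = random.Random(seed)
    return sum(math.cos(theta * sample_sas(alpha, rng)) for _ in range(n)) / n
```

For instance, empirical_cf(1.5, 1.0) should be close to e^{−1} ≈ 0.368, up to Monte Carlo error.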
The structure of stable vectors is much more complicated than for Gaussian vectors which we illustrate in three ways.
First, the set of two-dimensional Gaussian vectors, where each marginal has mean zero and variance one, is one-dimensional, parameterized by the correlation. However, for fixed α, the set of symmetric two-dimensional stable vectors (symmetric means that the distribution is invariant under x → −x) where each marginal has distribution S α (1, 0, 0) is infinite dimensional, corresponding essentially to the set of probability measures on the unit circle which are invariant under x → −x.
Secondly, if (X_1, X_2) is jointly Gaussian with X_1 =_D X_2, then we know that (X_1, X_2) =_D (X_2, X_1); this is false in general for symmetric stable random vectors.
Thirdly, if (X_1, X_2) is jointly Gaussian, then either (i) (X_1^h, X_2^h) is nonnegatively correlated for all h (which corresponds to X_1 and X_2 being nonnegatively correlated) or (ii) (X_1^h, X_2^h) is nonpositively correlated for all h (which corresponds to X_1 and X_2 being nonpositively correlated); this is false in general for symmetric stable vectors in that the thresholds can be positively correlated for some h and negatively correlated for other h. This will be further discussed in Section 9.
Finally, a random vector in R^d has a symmetric stable distribution with stability exponent α if and only if its characteristic function f(θ) has the form f(θ) = exp(−∫_{S^{d−1}} |⟨θ, x⟩|^α Λ(dx)) for some finite measure Λ on S^{d−1} which is invariant under x → −x. Λ is called the spectral measure corresponding to the α-stable vector. For α ∈ (0, 2) fixed, different Λ's yield different distributions. This is not true for α = 2.
In a number of cases, we will have a symmetric α-stable vector X := (X_1, . . . , X_d) which is obtained by setting X := AY, where A is a d×m matrix and Y = (Y_1, . . . , Y_m) are i.i.d. random variables with distribution S_α(1, 0, 0). In such a case, there is a simple formula for the spectral measure Λ of X. Consider the columns of A as elements of R^d, denoted by x_1, . . . , x_m. Then Λ is obtained by placing, for each i ∈ [m], a mass of weight ‖x_i‖_2^α / 2 at ±x_i/‖x_i‖_2. See p. 69 in [10].
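This recipe for Λ is easy to implement; the sketch below (our code) also checks it against the standard fact from the general theory in [10], not restated above, that the marginal X_j has scale σ_j with σ_j^α = Σ_i |A_{ji}|^α.

```python
import math

def spectral_atoms(A, alpha):
    """Given X := A Y with Y i.i.d. S_alpha(1,0,0), return the atoms of
    the spectral measure of X as (point, weight) pairs: each column x_i
    of A contributes mass ||x_i||_2^alpha / 2 at +-x_i / ||x_i||_2."""
    d, m = len(A), len(A[0])
    atoms = []
    for i in range(m):
        x = [A[r][i] for r in range(d)]
        norm = math.sqrt(sum(c * c for c in x))
        if norm == 0:
            continue  # a zero column contributes nothing
        s = [c / norm for c in x]
        w = norm ** alpha / 2
        atoms.append((tuple(s), w))
        atoms.append((tuple(-c for c in s), w))
    return atoms

def marginal_scale_alpha(atoms, j, alpha):
    """sigma_j^alpha = integral over the sphere of |s_j|^alpha dLambda."""
    return sum(w * abs(s[j]) ** alpha for s, w in atoms)
```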

General set up
In order to understand when one has uniqueness of a color representation, it is natural to consider signed measures on B n so that one can place the question in a vector space context where uniqueness (essentially) corresponds to the kernel of the associated operator being trivial (see [12]). This is done as follows.
We let Φ_{n,p} be the map taking random partitions of [n] to probability vectors on {0, 1}^n; this just sends the random partition to its corresponding color process. For each n and p, we have a natural linear mapping A_{p,n} from R^{B_n} to R^{{0,1}^n} which extends Φ_{n,p}. We can identify B_n with the natural basis for R^{B_n} and hence A_{p,n} is uniquely determined by describing the image of each σ ∈ B_n. Given σ ∈ B_n and a binary string ρ ∈ {0, 1}^n, we write σ ⊴ ρ if ρ is constant on the partition elements of σ. We clearly have (A_{p,n} σ)(ρ) = 1_{σ ⊴ ρ} p^c (1 − p)^{|σ|−c}, where |σ| is equal to the number of partition elements in the partition σ and c = c(σ, ρ) is the number of partition elements on which ρ is 1. For ρ ∈ {0, 1}^n, we write −ρ to denote the binary string where the zeros and ones in ρ are switched, i.e. −ρ = 1 − ρ.
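For a single (deterministic) partition σ, the image A_{p,n}σ can be computed by brute force over {0,1}^n; a sketch (our code, with σ encoded as a list of blocks of 1-based indices):

```python
from itertools import product

def image_of_partition(sigma, p, n):
    """Return (A_{p,n} sigma)(rho) for all rho in {0,1}^n: the law of
    the color process obtained by coloring each block of sigma 1 with
    probability p and 0 with probability 1 - p, independently."""
    nu = {}
    for rho in product((0, 1), repeat=n):
        # rho must be constant on every block of sigma
        if any(len({rho[i - 1] for i in block}) > 1 for block in sigma):
            nu[rho] = 0.0
        else:
            weight = 1.0
            for block in sigma:
                weight *= p if rho[block[0] - 1] == 1 else 1 - p
            nu[rho] = weight
    return nu

# For sigma = {1,2}{3}: nu(1,1,1) = p^2, nu(1,1,0) = p(1-p), nu(1,0,1) = 0
```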
In [12], the dimension of the kernel of A_{p,n}, and hence of the range, was determined for a few small values of n. The following result gives the formula for these in general, and also gives an explicit description of the range of A_{p,n} for all n and p. The analysis of the p = 1/2 case is much easier and we do that in the next subsection, while the p ≠ 1/2 case is done in the subsequent subsection. In the last subsection, we look at the situation where there is invariance with respect to permutations of [n].
We end this subsection with a few more elementary remarks. A_{p,n} sends nonnegative vectors to nonnegative vectors and if q = (q_σ)_{σ∈B_n} is sent to ν under A_{p,n}, then ∑_{ρ∈{0,1}^n} ν_ρ = ∑_{σ∈B_n} q_σ.

Definition 2.1. Given n and p, if ν ∈ R^{{0,1}^n} corresponds to a probability vector (i.e., the coordinates of ν are nonnegative and add to one), an element q ∈ R^{B_n} so that A_{p,n} q = ν is called a formal solution, while a nonnegative element q ∈ R^{B_n} so that A_{p,n} q = ν is called a nonnegative solution.
Of course, there is a nonnegative solution if and only if ν is a color process. It will however be convenient in the analysis to allow formal solutions (which are not necessarily nonnegative) which one might show afterwards are in fact nonnegative solutions.
Lastly, we explain the relationship between nontriviality of the kernel of A_{p,n} and uniqueness of a color representation (i.e., uniqueness of a nonnegative solution). First observe that A_{p,n} has a nontrivial kernel if and only if for any ν in the range, there are an infinite number of formal solutions. Hence if the kernel is trivial, there is always at most one nonnegative solution. The converse is not true since an i.i.d. process clearly has at most one color representation even if the kernel is nontrivial. However, as also explained in [12], if the kernel is nontrivial, ν is in the range and there exists a nonnegative solution all of whose coordinates are positive, then one has infinitely many nonnegative solutions, since we can add a small constant times an element in the kernel. More generally, if ν is in the range and there exists a nonnegative solution q for ν, then there is another nonnegative solution (and then infinitely many) if and only if there is an element q' in the kernel whose negative-valued coordinates are contained in the support of q.
(Obviously, this condition is not necessary for there to be some color representation.)

(ii). The range of A_{1/2,n} is As a consequence, A_{1/2,n} has rank 2^{n−1} and hence nullity |B_n| − 2^{n−1}.

Proof of (i). It is elementary to check that {q_σ} above yields a formal solution. Note that one has This clearly yields the second statement of (i).
Proof of (ii). Clearly every element of the range must satisfy the symmetry condition (7) since p = 1/2. Using (i) and the fact that the symmetric probability vectors in (i) generate the subspace given by (7), it follows that the range of A_{1/2,n} is precisely this set. The statements concerning the rank and nullity follow immediately.
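The rank claim in (ii) can be confirmed for small n by exact linear algebra; the following self-contained sketch (our code) enumerates B_n, builds A_{1/2,n} column by column, and computes its rank over the rationals:

```python
from fractions import Fraction
from itertools import product

def partitions(n):
    """All partitions of [n] = {1, ..., n}, each as a list of blocks."""
    if n == 0:
        return [[]]
    out = []
    for part in partitions(n - 1):
        for i in range(len(part)):           # put n into an existing block
            out.append([b + [n] if j == i else list(b)
                        for j, b in enumerate(part)])
        out.append([list(b) for b in part] + [[n]])  # n as a new singleton
    return out

def column(sigma, p, n):
    """Column of A_{p,n} indexed by sigma: the law of sigma's color process."""
    col = []
    for rho in product((0, 1), repeat=n):
        if any(len({rho[i - 1] for i in b}) > 1 for b in sigma):
            col.append(Fraction(0))          # rho not constant on some block
        else:
            w = Fraction(1)
            for b in sigma:
                w *= p if rho[b[0] - 1] == 1 else 1 - p
            col.append(w)
    return col

def rank(cols):
    """Exact rank via Gaussian elimination over the rationals."""
    rows = [list(r) for r in zip(*cols)]
    r = 0
    for c in range(len(cols)):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def rank_Apn(p, n):
    return rank([column(s, p, n) for s in partitions(n)])
```

For n = 3 and 4 with p = 1/2, the computed ranks should be 2^{n−1} = 4 and 8.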
Remark 2.3. We check what (7) amounts to when X := (X_i)_{1≤i≤n} is a standard Gaussian vector with covariance matrix (a_ij), for which Y = X^0 is of course {0, 1}-symmetric. We will see later (see (21)) that P(Y_i ≠ Y_j) = arccos(a_ij)/π, which is known as Sheppard's formula (see [11]). Hence the inequality needed in Theorem 2.2 will be satisfied whenever
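Sheppard's formula itself, P(Y_i ≠ Y_j) = arccos(a_ij)/π for a standard Gaussian pair with correlation a_ij, is easy to test by Monte Carlo; the sketch below (ours) compares an empirical sign-mismatch frequency with arccos(a)/π:

```python
import math
import random

def sheppard_check(a, n=200_000, seed=1):
    """Monte Carlo estimate of P(sign(X_1) != sign(X_2)) for a standard
    bivariate Gaussian with correlation a; Sheppard's formula says this
    probability equals arccos(a) / pi."""
    rng = random.Random(seed)
    mismatches = 0
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x1 = z1
        x2 = a * z1 + math.sqrt(1 - a * a) * z2  # Corr(x1, x2) = a
        mismatches += (x1 > 0) != (x2 > 0)
    return mismatches / n

# arccos(0.5)/pi = 1/3, so sheppard_check(0.5) should be close to 0.333
```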

2.3 Formal solutions for the p ≠ 1/2 case

Theorem 2.4. For p ∉ {0, 1/2, 1}, A_{p,n} has rank 2^n − n and hence nullity |B_n| − (2^n − n). The range of A_{p,n} is equal to (The vector space defined by (8) is the analogue of the marginal distributions each being pδ_1 + (1 − p)δ_0.) In particular, if ν is a probability vector on {0, 1}^n, all of whose marginals are pδ_1 + (1 − p)δ_0, then ν is in the range of A_{p,n}. (Of course, there might be no probability vector q = (q_σ)_{σ∈B_n} which maps to ν; i.e. ν need not be a color process.)

Proof.
Step 1. The rank of A_{p,n} is at least 2^n − n.
Proof of Step 1. We will modify our system of equations in such a way that things become more transparent. First, for σ ∈ B n and T ⊆ [n], we let σ T be the restriction of σ to T . Also, let We now consider the system of linear equations given by By an inclusion-exclusion argument, one sees that (9) is equivalent to (5). Letting A p,n be the corresponding 2 n × |B n | matrix for this system, it suffices to show that the rank of A p,n is at least 2 n − n.
Let $\sigma_\emptyset$ be the partition into singletons and, for each $T \subseteq [n]$ with $|T| > 1$, let $\sigma_T \in B_n$ be the unique partition whose only non-singleton partition element is $T$. If e.g. $n = 5$ we would have $\sigma_{\{1,2,3\}} = (123, 4, 5)$. One easily verifies that, for $T = \emptyset$ or $|T| > 1$, the number of partition elements of $\sigma_T$ intersecting $S$ is $|S\setminus T| + (1 \wedge |S \cap T|)$. Consider the corresponding equation system, let $A'$ be the corresponding $2^n \times (2^n - n)$ matrix, and define the auxiliary matrix $B = (B(S,T))_{S,T\subseteq[n]}$. If we order the rows (from top to bottom) and columns (from left to right) of $B$ so that the sizes of the corresponding sets are increasing, then $B$ is a lower triangular matrix with $B(S,S) = 1$ for all $S \subseteq [n]$. In particular, this implies that $B$ is invertible for all $p \in (0,1)$, and hence $A'$ and $BA'$ (also a $2^n \times (2^n - n)$ matrix) have the same rank. Moreover, one can compute $(BA')(S,T)$ for any $S, T \subseteq [n]$ with $|T| \ne 1$, and in the case $S \subseteq T$ the expression simplifies further. Note that since $p \ne 1/2$, if $S \subseteq T$, then $(BA')(S,T) = 0$ if and only if $|S| = 1$.
If we order the rows (from top to bottom) and columns (from left to right) of $BA'$ so that the corresponding sets are increasing in size, it is clear that the $(2^n - n) \times (2^n - n)$ submatrix of $BA'$ obtained by removing the rows corresponding to $|S| = 1$ has full rank. This implies that $BA'$ has rank at least $2^n - n$, which implies the same for $A'$ since $B$ is invertible. Finally, since $A'$ is a submatrix of $\tilde{A}_{p,n}$, we obtain the desired lower bound on the rank of the latter.
Step 2. The rank of $A_{p,n}$ is at most $2^n - n$.
Proof of Step 2. We first claim that if $\nu = \{\nu_\rho\}_{\rho\in\{0,1\}^n}$ is in the range, then it is in the set defined in (8). To see this, let $\nu = A_{p,n}q$ and fix an $i$. With $\sigma$ fixed, let $T^i_\sigma : \{\rho : \rho(i) = 0\} \to \{\rho : \rho(i) = 1\}$ be the bijection which flips $\rho$ on the partition element of $\sigma$ which contains $i$. Comparing, for each partition $\sigma$, the contributions of $\rho$ and $T^i_\sigma(\rho)$ for all $\rho$ with $\rho(i) = 0$, one finds that $(1-p)\sum_{\rho : \rho(i)=1}\nu_\rho = p\sum_{\rho : \rho(i)=0}\nu_\rho$, which is exactly the $i$th equation in (8). Next, $A_{p,n}$ maps into a $2^n$-dimensional vector space and each of the $n$ equations in (8) gives one linear constraint. It is easy to see that these $n$ constraints are linearly independent (for example, one can see this by just looking at the number of times each of the vectors $0^k1^{n-k}$ appears on the two sides). It follows that the rank of $A_{p,n}$ is at most $2^n - n$.
With Steps 1 and 2 completed, together with the claim at the start of Step 2, we conclude that the rank is as claimed and the range is characterized as claimed. Finally, the claim concerning probability vectors follows immediately.
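The rank statements are easy to verify numerically for small $n$. The sketch below is our own illustration: it builds the matrix $A_{p,n}$ for $n = 3$ by enumerating the five partitions of $\{1,2,3\}$ and checks that the rank is $2^n - n = 5$ for a generic $p$ and drops to $2^{n-1} = 4$ at $p = 1/2$.

```python
import itertools
import numpy as np

def color_matrix(partitions, p, n):
    """Column sigma: distribution on {0,1}^n obtained by coloring each
    block of sigma 1 with probability p and 0 with probability 1 - p,
    independently over blocks."""
    rows = list(itertools.product([0, 1], repeat=n))
    M = np.zeros((2 ** n, len(partitions)))
    for j, blocks in enumerate(partitions):
        for r, rho in enumerate(rows):
            prob = 1.0
            for block in blocks:
                vals = {rho[i] for i in block}
                if len(vals) > 1:  # rho not constant on this block
                    prob = 0.0
                    break
                prob *= p if vals == {1} else (1 - p)
            M[r, j] = prob
    return M

# The five partitions of {0, 1, 2} (0-indexed), i.e. B_3.
B3 = [
    [[0, 1, 2]],
    [[0, 1], [2]],
    [[0, 2], [1]],
    [[1, 2], [0]],
    [[0], [1], [2]],
]

rank_generic = np.linalg.matrix_rank(color_matrix(B3, 0.3, 3))
rank_half = np.linalg.matrix_rank(color_matrix(B3, 0.5, 3))
print(rank_generic, rank_half)
```

Since $|B_3| = 5$, the kernel is trivial for $p \notin \{0, 1/2, 1\}$ and one-dimensional at $p = 1/2$, matching the free variable $t$ appearing below.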
(i) The argument for the $p \ne 1/2$ case can equally well be carried out, with minor modifications, for the $p = 1/2$ case, but we preferred the simpler argument, which even gives more.
(ii) This last proof shows that, when dealing with formal solutions, we only need to use partitions which have at most one non-singleton partition element. This is in sharp contrast to the earlier proof for the $p = 1/2$ case, where we only needed to use partitions which have at most two partition elements.
(iii) The rank of an operator as a function of its matrix elements is not continuous but it is easily seen to be lower semicontinuous. We see this lack of continuity at p = 1/2 as well as of course at p = 0 and p = 1.
Since we will often deal with the case $n = 3$, it will be convenient to have at our disposal the formula for the unique formal solution when (1) $p \ne 1/2$ and (2) the distribution $\nu$ of $(X_1, X_2, X_3)$ satisfies (8) with marginals $p$, as well as the general solution for the case when the distribution is $\{0,1\}$-symmetric (i.e., satisfies (7), and hence $p = 1/2$). One can immediately check the validity of the following. The form of the kernel (that there is one free variable and it appears as it does) is discussed in [12]. If $p \notin \{0, 1/2, 1\}$, then $A_{p,3}q = \nu$ has a unique solution $(q_\sigma)_{\sigma\in B_3}$, while in the $\{0,1\}$-symmetric case the general solution contains one free variable $t \in \mathbb{R}$.

The fully invariant case
It is sometimes natural to consider situations where one has some further invariance property. One natural case is the following. The symmetric group $S_n$ acts naturally on $B_n$, $\{0,1\}^n$, $P(B_n)$, $P(\{0,1\}^n)$, $\mathbb{R}^{B_n}$ and $\mathbb{R}^{\{0,1\}^n}$, where $P(X)$ denotes the set of probability measures on $X$. (Of course $P(B_n) \subseteq \mathbb{R}^{B_n}$ and the action on the former is just the restriction of the action on the latter; similarly for $P(\{0,1\}^n) \subseteq \mathbb{R}^{\{0,1\}^n}$.) To understand uniqueness of a color representation when we restrict to $S_n$-invariant probability measures, it is natural to again extend to the vector space setting, which is done as follows.
Let $Q^{\mathrm{Inv}}_n := \{q \in \mathbb{R}^{B_n} : g(q) = q\ \forall g \in S_n\}$ and $V^{\mathrm{Inv}}_n := \{\nu \in \mathbb{R}^{\{0,1\}^n} : g(\nu) = \nu\ \forall g \in S_n\}$. We next let $A^{\mathrm{Inv}}_{p,n}$ be the restriction of $A_{p,n}$ to $Q^{\mathrm{Inv}}_n$. It is elementary to check that $A^{\mathrm{Inv}}_{p,n}$ maps into $V^{\mathrm{Inv}}_n$, and furthermore it is easy to check (see for example Proposition 3.11 in [12] for something related) that (12) holds. Recalling that $P_n$ is the set of partitions of the integer $n$, we have an obvious mapping from $B_n$ to $P_n$, denoted by $\sigma \mapsto \pi(\sigma)$. $\mathbb{R}^{P_n}$ can then be canonically identified with $Q^{\mathrm{Inv}}_n$, where, letting $a_\pi$ denote the number of $\sigma$'s for which $\pi(\sigma) = \pi$, we divide by $a_\pi$ in order that probability vectors be identified with probability vectors. In an analogous way, $V^{\mathrm{Inv}}_n$ can be canonically identified with $\mathbb{R}^{n+1}$ in such a way that probability vectors are identified with probability vectors; namely, the $k$th coordinate collects the common value of $\nu_\rho$ over binary strings $\rho$ with exactly $k$ ones.
The following is the analogue of Theorem 2.4. Again, in [12], this was done for some small values of $n$. Theorem 2.6. (i) For $p \notin \{0, 1/2, 1\}$, $A^{\mathrm{Inv}}_{p,n}$ has rank $n$ and hence nullity $|P_n| - n$. The range of $A^{\mathrm{Inv}}_{p,n}$ (after identifying $V^{\mathrm{Inv}}_n$ with $\mathbb{R}^{n+1}$) is the set defined by (13). (ii) $A^{\mathrm{Inv}}_{1/2,n}$ has rank $\lfloor n/2\rfloor + 1$ and hence nullity $|P_n| - (\lfloor n/2\rfloor + 1)$. Proof. (i) Denoting by $U_n$ the subset of $\mathbb{R}^{n+1}$ satisfying (13), we claim that (15) holds. Since $U_n$ is clearly an $n$-dimensional subspace of $\mathbb{R}^{n+1}$, the proof of (i) will then be done. To see this, first take $\nu^{\mathrm{Inv}} \in V^{\mathrm{Inv}}_n$ and let $\nu$ be the corresponding element in $U_n$. We first need to show that (8) is satisfied by $\nu^{\mathrm{Inv}}$. Fixing any $i$ and rewriting the $i$th equation of (8) in terms of $\nu$, we see that, since $\nu \in U_n$, (8) holds. In view of (12), this shows that $\subseteq$ in (15) holds. Now fix $\nu^{\mathrm{Inv}} \in A^{\mathrm{Inv}}_{p,n}(Q^{\mathrm{Inv}}_n)$. Clearly $\nu^{\mathrm{Inv}} \in V^{\mathrm{Inv}}_n$ and by Theorem 2.4, (8) holds. The above computation shows that the corresponding $\nu \in \mathbb{R}^{n+1}$ satisfies (13) and hence is in $U_n$. This shows that $\supseteq$ in (15) holds as well.
(ii) Denoting now by $U_n$ the subset of $\mathbb{R}^{n+1}$ satisfying (14), we claim that the analogue of (15) holds. Since $U_n$ is clearly a $(\lfloor n/2\rfloor + 1)$-dimensional subspace of $\mathbb{R}^{n+1}$, the proof of (ii) will then be done. However, in view of (7), Theorem 2.4 and (12), this is immediate.
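One can likewise verify the ranks in Theorem 2.6 numerically. The sketch below is our own illustration: for $n = 4$ it groups the 15 set partitions of $[4]$ by their integer partition type in $P_4$, maps each type to the vector $(\nu_0, \ldots, \nu_4)$ recording the distribution of the number of ones, and checks that the rank is $n = 4$ for a generic $p$ and $\lfloor n/2\rfloor + 1 = 3$ at $p = 1/2$.

```python
import itertools
import numpy as np

def set_partitions(elems):
    """All set partitions of a list, as lists of blocks."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield [[first]] + part

def nu_column(blocks, p, n):
    """Distribution of the number of ones for the color process of a
    fixed partition, i.e. the image column in R^{n+1}."""
    col = np.zeros(n + 1)
    sizes = [len(b) for b in blocks]
    for colors in itertools.product([0, 1], repeat=len(blocks)):
        prob = np.prod([p if c == 1 else 1 - p for c in colors])
        ones = sum(s for s, c in zip(sizes, colors) if c == 1)
        col[ones] += prob
    return col

def invariant_rank(p, n):
    by_type = {}  # integer partition type -> summed column
    for blocks in set_partitions(list(range(n))):
        key = tuple(sorted(len(b) for b in blocks))
        by_type[key] = by_type.get(key, np.zeros(n + 1)) + nu_column(blocks, p, n)
    M = np.column_stack(list(by_type.values()))
    return M.shape[1], np.linalg.matrix_rank(M)

n_types, rank_generic = invariant_rank(0.3, 4)  # |P_4| = 5 columns
_, rank_half = invariant_rank(0.5, 4)
print(n_types, rank_generic, rank_half)
```

Summing (rather than averaging by $a_\pi$) the columns within a type only rescales columns and therefore does not affect the rank.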
3 Stieltjes matrices, discrete Gaussian free fields and tree-indexed Gaussian Markov chains

3.1 Inverse Stieltjes covariance matrices give rise to color processes for h = 0

Definition 3.1. A Stieltjes matrix is a symmetric positive definite matrix with non-positive off-diagonal elements.
We will see later that the following result (Theorem 3.2) implies that for all discrete Gaussian free fields $X$ (to be defined later), $X^0$ is a color process.
The proof is based upon noting that the observation from [6], that the signs of a discrete Gaussian free field form an average of Ising models, remains true if one only assumes that one has a Stieltjes matrix.
Proof. Note first that, writing $(b_{ij})$ for the inverse covariance matrix and conditioning on the absolute values $(|X_i|) = (y_i)$, the conditional probability density function of the signs $(\sigma_i)$ is that of a ferromagnetic Ising model with parameters $\beta_{ij} = -b_{ij}y_iy_j \ge 0$ and no external field. It is well known that the (Fortuin-Kasteleyn) random cluster model yields a color representation for the Ising model after we identify $-1$ with $0$. Since an average of color processes is a color process, we are done. With nonzero thresholds, this argument would lead to an Ising model with a varying external field. The marginals of this (conditioned) process are not in general equal, which precludes it from being a color process, and even if the marginals were equal, there is no known color representation in this case in general.
We end this subsection by pointing out that there are fully supported Gaussian vectors whose threshold zero processes are color processes but whose inverse covariance matrix is not a Stieltjes matrix.
To see this, let $a \in (0,1)$ and $\varepsilon \in (0,1)$. One can choose a symmetric matrix $A = A(a,\varepsilon)$ with unit diagonal and nonnegative entries such that $A^{-1}(2,3) > 0$ for any $\varepsilon > 0$; hence $A$ is not an inverse Stieltjes matrix for any $\varepsilon > 0$. Moreover, if $0 < \varepsilon < 1 - a^2$, then $A$ is symmetric, entrywise nonnegative and positive definite. Finally, the fact that the threshold zero process is a color process follows from Proposition 2.12 in [12], which states that for $n = 3$, any $\{0,1\}$-symmetric process with nonnegative pairwise correlations is a color process.
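Here is a concrete numerical illustration of the same phenomenon, using our own choice of matrix (not necessarily the one referred to above): a $3\times 3$ covariance matrix with nonnegative entries which is positive definite but whose inverse has a positive off-diagonal entry.

```python
import numpy as np

# A covariance matrix with nonnegative correlations whose inverse has a
# positive off-diagonal entry, so it is not an inverse Stieltjes matrix.
a = 0.5
A = np.array([
    [1.0, a,   a  ],
    [a,   1.0, 0.0],
    [a,   0.0, 1.0],
])

eigenvalues = np.linalg.eigvalsh(A)
B = np.linalg.inv(A)
# By hand: B[1, 2] = a^2 / (1 - 2 a^2) > 0 whenever 0 < a < 1/sqrt(2).
print(eigenvalues, B[1, 2])
```

Since $n = 3$ and the correlations are nonnegative, the corresponding threshold zero process is nevertheless a color process by the result quoted above.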

Examples of Stieltjes matrices: discrete Gaussian free fields and embeddings into tree-indexed Gaussian Markov chains
In this subsection, we provide two (partially overlapping) examples of fully supported Gaussian vectors whose inverse covariance matrices are Stieltjes.

Discrete Gaussian free fields
An extremely important class of Gaussian processes are the so-called discrete Gaussian free fields, which we now define. Let $n \in \mathbb{N}$. Given $n$, let $(c(\{x,y\}))_{\{x,y\}\subseteq[n],\, x\ne y}$ and $(\kappa(x))_{x\in[n]}$ be two sets of nonnegative real numbers, called conductances and killing rates respectively. Let $Z = (Z_t)_{t\in[0,\zeta]}$ be a continuous time sub-Markovian jump process on $[n]$, with transition rate between $x \in [n]$ and $y \in [n]$ given by $c(x,y)$, and $\kappa(x)$ being the transition rate from $x \in [n]$ to a cemetery point outside of $[n]$. When this latter transition occurs, we think of the process as being killed, and denote this random time by $\zeta$. Let $(G(x,y))_{x,y\in[n]}$ be the Green's function for $Z$, i.e. the expected amount of time that $Z$, started at $x$, spends at $y$ before being killed.
We will restrict consideration to the case when G is finite for all x and y. It is well known that G is a symmetric and positive definite matrix. The mean zero multivariate Gaussian vector X whose covariance matrix is G is called the discrete Gaussian free field (DGFF) associated to {c(x, y)} and {κ(x)}.
Letting $C$ denote the matrix with entries $C(x,x) = \kappa(x) + \sum_{y \ne x} c(x,y)$ and $C(x,y) = -c(x,y)$ for $x \ne y$, it is well known and easy to check that $C = G^{-1}$. Consequently, the inverse covariance matrix of a DGFF $X \sim N(0, G)$ is a Stieltjes matrix, yielding the following corollary of Theorem 3.2.
Corollary 3.4. The threshold zero process corresponding to a DGFF is a color process.
We point out that if $X \sim N(0,G)$ with $G$ invertible and if $B := G^{-1}$ is a Stieltjes matrix with positive row sums, then $X$ is a DGFF. To see this, one simply lets $c(x,y) := -b(x,y)$ for $x \ne y$ and $\kappa(x) := \sum_y b(x,y)$, and then checks that $B$ is the matrix $C$ given above using these rates. Hence $G = C^{-1}$ as desired.
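To make the correspondence concrete, the following sketch (our own illustration, for a path on three vertices with hypothetical rates) builds $C$ from conductances and killing rates, checks that $C$ is a Stieltjes matrix with nonnegative row sums, and that the Green's function $G = C^{-1}$ is a symmetric positive definite matrix with positive entries.

```python
import numpy as np

# Path on vertices 0-1-2: conductances on the two edges, killing at the ends.
n = 3
c = {(0, 1): 1.0, (1, 2): 1.0}   # conductances (hypothetical values)
kappa = [1.0, 0.0, 1.0]          # killing rates (hypothetical values)

# C(x, x) = kappa(x) + sum of incident conductances; C(x, y) = -c(x, y).
C = np.zeros((n, n))
for (x, y), cxy in c.items():
    C[x, y] = C[y, x] = -cxy
for x in range(n):
    C[x, x] = kappa[x] + sum(cxy for e, cxy in c.items() if x in e)

G = np.linalg.inv(C)  # covariance matrix of the associated DGFF
print(C)
print(G)
```

The row sums of $C$ recover the killing rates, and every entry of $G$ is positive, as expected for a Green's function.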
Does the set of inverse Stieltjes matrices give us a larger class than the set of DGFFs? The answer is yes and no. If we look at the process itself, the answer is yes: there are Stieltjes matrices in which the sum of the elements in the first row is negative, and such a matrix cannot correspond to a DGFF. However, if we look at the corresponding threshold zero processes, one obtains the same collection, and therefore Theorem 3.2 is equivalent to the result for DGFFs. To see the previous statement, let $X$ be a Gaussian vector whose inverse covariance matrix $B = G^{-1}$ is a Stieltjes matrix. It is known (see e.g. K35 in [9]) that there exists a positive diagonal matrix $D$ so that $BD$ has strictly positive row sums. It follows that $DBD$ has strictly positive row sums, and it is easy to check that $DBD$ is still a Stieltjes matrix. By the earlier discussion, the Gaussian vector $Y$ with covariance matrix $(DBD)^{-1}$ is a DGFF. On the other hand, $Y = D^{-1}X$, and hence the sign processes are the same.

Embeddings into tree-indexed Gaussian Markov chains
We now consider a certain class of tree-indexed Markov chains as follows. Let $(T,\rho)$ be a finite or infinite tree with a designated root $\rho$. Fix a state space $S$ and assume that for each edge $e$, we have a Markov chain $M_e$ on $S$. Assume that there is a probability measure $\mu$ on $S$ which is a stationary distribution for each $M_e$. This yields a measure on $S^T$ by choosing the state at $\rho$ according to $\mu$ and then independently running the Markov chains going outward from $\rho$. Note that the final process is independent of $\rho$ if and only if $\mu$ is reversible with respect to each $M_e$. Now let $S = \mathbb{R}$ and let $M_e$ be given by $s \mapsto b_e s + (1-b_e^2)^{1/2}W$, where $b_e \in (0,1)$ and $W$ is a standard normal random variable. Clearly the standard Gaussian distribution is reversible with respect to each of these chains. The final tree-indexed process is clearly a mean zero, variance one Gaussian process whose covariance between two vertices is the product of the $b_e$ values along the path between the two vertices. If the tree is finite with covariance matrix $A$, it is straightforward to show that the inverse covariance matrix has entries $A^{-1}(u,v) = -b_e/(1-b_e^2)$ if $e = \{u,v\}$ is an edge, $A^{-1}(u,v) = 0$ if $u \ne v$ are not adjacent, and $A^{-1}(v,v) = 1 + \sum_{e \ni v} b_e^2/(1-b_e^2)$. Therefore, since $A$ is an inverse Stieltjes matrix and since every principal submatrix of an inverse Stieltjes matrix is known to be an inverse Stieltjes matrix (see e.g. [7]), we obtain the following corollary.
Corollary 3.5. If $X$ is embeddable into a tree-indexed Gaussian Markov chain (meaning that there is a tree-indexed Gaussian Markov chain and a subset $U$ of the vertices of the tree such that the process restricted to $U$ corresponds to $X$), then the covariance matrix for $X$ is an inverse Stieltjes matrix and hence the corresponding threshold zero process is a color process.
Of course, if we just wanted to know it is a color process, one does not need the result from [7], since the covariance matrix for the full tree is, as we saw, an inverse Stieltjes matrix, and the restriction of a color process to a subset of its index set is of course a color process.
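For a path, the inverse covariance matrix can be checked directly: the covariance $A(u,v)$ is the product of the $b_e$'s along the path from $u$ to $v$, and the precision matrix is tridiagonal with off-diagonal entries $-b_e/(1-b_e^2)$. The sketch below is our own illustration, with arbitrary $b_e$ values:

```python
import numpy as np

# Path 0-1-2-3 with edge parameters b_e; covariance A(u, v) is the
# product of the b_e's along the path from u to v.
b = [0.8, 0.5, 0.6]
n = len(b) + 1

A = np.ones((n, n))
for u in range(n):
    for v in range(u + 1, n):
        A[u, v] = A[v, u] = np.prod(b[u:v])

# Expected precision matrix: tridiagonal, off-diagonal -b_e/(1-b_e^2),
# diagonal 1 + sum over incident edges of b_e^2/(1-b_e^2).
P = np.zeros((n, n))
for e, be in enumerate(b):
    P[e, e + 1] = P[e + 1, e] = -be / (1 - be ** 2)
    P[e, e] += be ** 2 / (1 - be ** 2)
    P[e + 1, e + 1] += be ** 2 / (1 - be ** 2)
P += np.eye(n)

print(np.round(np.linalg.inv(A), 6))
```

All off-diagonal entries of $A^{-1}$ are non-positive, so $A$ is an inverse Stieltjes matrix, in line with the corollary.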

Relationship between the two classes
Although certainly not disjoint, we now show that of the two sets of processes in the previous subsubsections, neither class contains the other when $n \ge 4$, while they coincide when $n \le 3$.

1. There is a tree-indexed Gaussian Markov chain with four vertices which is not a DGFF.
2. There is a four-dimensional DGFF whose marginals have variance one which cannot be embedded into any tree-indexed Gaussian Markov chain.
3. Let $A$ be a covariance matrix for a mean zero, variance one Gaussian vector $X$ with $n \le 3$. Then the following are equivalent: $A$ is an inverse Stieltjes matrix, $X$ corresponds to a DGFF, and $X$ is embeddable into a tree-indexed Gaussian Markov chain.
1. It is immediate from the discussion of the DGFF and the form of the inverse covariance matrix in the previous subsubsection that a tree-indexed Gaussian Markov chain is a DGFF if and only if for all $i$, $\sum_{e \ni i} b_e/(1+b_e) \le 1$, with strict inequality for some $i$; in this case, the DGFF would have conductances given by the $b_e$'s and killing rates $\kappa(i)$ given by the expression above. It easily follows that for any tree-indexed Gaussian Markov chain which has a vertex $v$ with three edges emanating from it, all of which have $b_e$-value larger than $1/2$, the chain cannot be a DGFF, since then $\sum_{e \ni v} b_e/(1+b_e) > 3 \cdot \frac{1/3}{1} = 1$. This clearly can be achieved with a tree on 4 vertices.
2. Consider a four-dimensional mean zero Gaussian vector $X_A$ with covariance matrix $A$ given above, whose inverse is a Stieltjes matrix with strictly positive row sums; hence $X_A$ corresponds to a DGFF. We will now show that $X_A$ is not embeddable into a tree-indexed Gaussian Markov chain. Assume on the contrary that there exists a tree-indexed Gaussian Markov chain and vertices $v_1, v_2, v_3, v_4$ such that the distribution at these 4 points corresponds to $X_A$.
Given any three points x, y, z in a tree, there exists a unique meeting point w with the property that there are edge-disjoint paths from each of x, y and z to w. It is clear that if the covariance between z and x is the same as the covariance between z and y, then w must be on the midpoint between x and y in the sense that the covariance between w and x is the same as the covariance between w and y.
It follows that the meeting point $m$ for $v_1$, $v_2$ and $v_3$ is the same as the meeting point for $v_2$, $v_3$ and $v_4$. Moreover, the correlations between $v_1, v_2, v_3, v_4$ and $m$ are then determined, and these lead to a contradiction. This proves 2.
3. We only do the case $n = 3$. We first show that if a variance one Gaussian vector has a covariance matrix which is inverse Stieltjes, then it can be embedded into a tree-indexed Gaussian Markov chain on 4 vertices.
Consider a three-dimensional mean zero, variance one Gaussian vector with covariances $a_{12}$, $a_{13}$ and $a_{23}$, each in $(0,1]$. (The case that some covariance is zero is easily handled separately.) Consider a tree on four vertices consisting of a root, labelled 4, with three edges coming out to the vertices labelled 1, 2 and 3. For $i \in \{1,2,3\}$, let $b_i \in (0,1]$ be assigned to the edge between $i$ and 4, and consider the associated Gaussian Markov chain. If we can solve the equations $b_ib_j = a_{ij}$ for $\{i,j\} \subseteq \{1,2,3\}$, then the distribution at the three leaves will correspond to our original Gaussian vector, demonstrating the embeddability. One checks that the unique solution is given by $b_1 = \sqrt{a_{12}a_{13}/a_{23}}$, $b_2 = \sqrt{a_{12}a_{23}/a_{13}}$ and $b_3 = \sqrt{a_{13}a_{23}/a_{12}}$. To ensure these lie in $(0,1]$, we need that $a_{12}a_{13} \le a_{23}$, $a_{12}a_{23} \le a_{13}$ and $a_{13}a_{23} \le a_{12}$. However, this is the so-called path product property, known to hold for all inverse Stieltjes matrices (see e.g. [4]).
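The system $b_ib_j = a_{ij}$ and its solution are easy to check numerically; here is our own illustration, with arbitrary admissible covariances satisfying the path product property:

```python
import math

# Covariances of a 3-dimensional Gaussian vector (hypothetical values
# satisfying the path product property, e.g. a12 * a13 <= a23, etc.).
a12, a13, a23 = 0.5, 0.4, 0.6

# Unique solution of b_i * b_j = a_ij.
b1 = math.sqrt(a12 * a13 / a23)
b2 = math.sqrt(a12 * a23 / a13)
b3 = math.sqrt(a13 * a23 / a12)

print(b1, b2, b3)
```

The products $b_1b_2$, $b_1b_3$ and $b_2b_3$ recover the three covariances, and all three $b_i$'s lie in $(0,1]$.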
For the other two implications, we have seen in Subsubsection 3.2.1 that a DGFF necessarily has an inverse covariance matrix which is a Stieltjes matrix. Consequently, the desired conclusion follows if we can show that when $n = 3$, any Gaussian vector which is embeddable into a tree-indexed Gaussian Markov chain is a DGFF. To this end, suppose that $X$ is embeddable into a tree-indexed Markov chain at vertices labelled by $v_1$, $v_2$ and $v_3$ respectively. Let $m$ be the meeting point of $v_1$, $v_2$ and $v_3$, and further, for $i \in \{1,2,3\}$, let $b_i$ be the product of the $b_e$'s associated to the edges on the shortest path between $m$ and $v_i$ in the tree. Note that since our variables are always distinct, at most one of the $b_i$'s is equal to 1. Then the covariance matrix of $X$ has entries $a_{ij} = b_ib_j$ for $i \ne j$, and it is easy to verify that the inverse covariance matrix is a Stieltjes matrix. To see that $X$ is in fact a DGFF, note that it is easy to check that, since at most one $b_i$ is 1, all the row sums of this inverse covariance matrix are strictly positive, as desired.
We end this subsection by discussing a simple Gaussian vector and showing that different points of view can lead to very different color representations. To this end, consider the fully symmetric multivariate normal $X := (X_1, X_2, \ldots, X_n)$ with covariance matrix $A = (a_{ij})$ where $a_{ij} = a \in (0,1)$ for $i \ne j$ and $a_{ii} = 1$ for all $i$. (While not needed, it is immediate to check that this is a DGFF on the complete graph with conductances $c(i,j) = \frac{a}{(1+(n-1)a)(1-a)}$ for all $i \ne j$ and constant killing rates $\kappa(i) = \frac{1}{1+(n-1)a}$.) It is easy to check that $A^{-1}(i,j) = \frac{1+(n-2)a}{(1+(n-1)a)(1-a)}$ if $i = j$ and $A^{-1}(i,j) = \frac{-a}{(1+(n-1)a)(1-a)}$ otherwise.
Since this is a Stieltjes matrix, $X^0$ is a color process by Theorem 3.2 and moreover, by the proof, the resulting color representation has full support on all partitions. (The fact that this particular example is a color process is also covered by Section 3.5 in [12] using a different method.) Now suppose we add a variable $X_0$ with $a_{00} = 1$ and $a_{i0} = \sqrt{a}$ for all $i \in \{1, 2, \ldots, n\}$. One can check that this defines a Gaussian vector $(X_0, X_1, X_2, \ldots, X_n)$, and it is easy to check that this is a tree-indexed Gaussian Markov chain where the tree is a vertex with $n$ edges coming out, each with $b_e = \sqrt{a}$. If we let $A_0$ be the covariance matrix of $Y := (X_0, X_1, X_2, \ldots, X_n)$, then its inverse is also a Stieltjes matrix (which also follows from Corollary 3.5), and hence $Y^0$ has a color representation by Theorem 3.2. (This is a DGFF for some values of $a$ and $n$ and not for others, but this will not concern us.) If we take the resulting color representation of $Y^0$ and restrict to $\{1, 2, \ldots, n\}$, it is clear that we have support only on partitions with at most one non-singleton cluster. In particular, this implies that when $n = 4$, these color representations will assign different probabilities to the partition $(12, 34)$, and hence the representations are distinct.
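The formulas for the equicorrelated covariance matrix are easily verified numerically; the sketch below (our own illustration, with arbitrary $n$ and $a$) checks the inverse and its row sums.

```python
import numpy as np

n, a = 5, 0.3
A = (1 - a) * np.eye(n) + a * np.ones((n, n))  # a_ii = 1, a_ij = a

B = np.linalg.inv(A)
d = 1 + (n - 1) * a
expected_offdiag = -a / ((1 - a) * d)
expected_diag = (1 + (n - 2) * a) / ((1 - a) * d)
row_sums = B.sum(axis=1)  # each should equal the killing rate 1 / d

print(B[0, 0], B[0, 1], row_sums[0])
```

The off-diagonal entries of the inverse are negative (a Stieltjes matrix) and the row sums are strictly positive, as used above.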

An alternative embedding proof for tree-indexed Gaussian Markov chains which extends to the stable case
The purpose of this section is twofold: first to give an alternative proof of the fact, contained in Corollary 3.5, that tree-indexed Gaussian Markov chains are color processes and then to use a variant of this alternative method to obtain a result in the context of stable random variables.

The Gaussian case
Alternative proof of Corollary 3.5. We give this proof only for a path where the correlations between successive variables are the same value $a$. The extension to the tree case and varying correlations is analogous. To show that $X := (X_1, X_2, \ldots, X_n)$ has a color representation for any $n \ge 1$, we want to construct, on some probability space, a random partition $\pi$ of $[n]$ and random variables $Y = (Y_1, Y_2, \ldots, Y_n)$ so that (i) $X$ and $Y$ have the same distribution (which implies that their corresponding sign processes have the same distribution) and (ii) $(Y^0, \pi)$ is a color process (for $p = 1/2$) with its color representation.
To do this, let $(Z_t)$ be the so-called Ornstein-Uhlenbeck (OU) process defined by $Z_t := e^{-t}W_{e^{2t}}$, where $(W_t)_{t\ge 0}$ is a standard Brownian motion. It is well known and immediate to check that $Z_t \sim N(0,1)$ for any $t \in \mathbb{R}$ and that $\mathrm{Cov}(Z_s, Z_t) = e^{-|s-t|}$ for any $s,t \in \mathbb{R}$. Now, given $n$, consider the random vector $Y$ given by $(Z_{\log(1/a)}, Z_{2\log(1/a)}, \ldots, Z_{n\log(1/a)})$ and consider the random partition $\pi$ of $\{1, 2, \ldots, n\}$ given by $i \sim j$ if $Z_t$ does not hit zero between times $i\log(1/a)$ and $j\log(1/a)$. It is immediate from the Markovian structure of both vectors and the covariances in the OU process that (i) holds. Next, (ii) is clear using the reflection principle (which uses the strong Markov property) and the fact that the hitting time of 0 is a stopping time.
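The coupling can be simulated directly: at the times $t_i = i\log(1/a)$ the OU covariance $e^{-|s-t|}$ becomes $a^{|i-j|}$, so the exact transition between sampled times is Gaussian AR(1). A small Monte Carlo sketch (our own illustration):

```python
import numpy as np

# At times t_i = i * log(1/a), the OU covariance e^{-|s-t|} equals
# a^{|i-j|}, so the exact transition from one sampled time to the next is
# Z' = a * Z + sqrt(1 - a^2) * N(0, 1).
rng = np.random.default_rng(1)
a, n, n_paths = 0.6, 4, 300_000

Z = rng.standard_normal((n_paths, n))
Y = np.empty_like(Z)
Y[:, 0] = Z[:, 0]
for i in range(1, n):
    Y[:, i] = a * Y[:, i - 1] + np.sqrt(1 - a ** 2) * Z[:, i]

corr_13 = np.corrcoef(Y[:, 0], Y[:, 2])[0, 1]  # should be close to a^2
print(corr_13)
```

The empirical correlation at lag 2 agrees with $a^2$, matching the path-product covariance of the chain.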
Remark 4.1. This argument (also) does not work for any threshold other than zero. For it to work, one would need that for h > 0 and any time t ≥ 0, the probability that an OU process started at h is larger than h at time t is equal to the unconditioned probability. This however does not hold.
Remark 4.2. In [5], the author studies a construction for discrete Gaussian free fields similar to the one above. More precisely, the author shows that one can obtain a color representation for a DGFF $X$ as follows. Given $X$, for each pair of adjacent vertices he adds a Brownian bridge with length determined by their coupling constant. Two vertices are then put in the same partition element if the corresponding Brownian bridge does not hit zero. Since DGFFs have no stable analogue, this does not generalize to any class of stable distributions.

The stable case
We now obtain our first result for stable vectors. Given $\alpha \in (0,2)$ and $a \in (0,1)$, let $U$ have distribution $S_\alpha(1,0,0)$ and consider the Markov chain on $\mathbb{R}$ given by $s \mapsto as + (1-a^\alpha)^{1/\alpha}U$. It is straightforward to check that $U$ is a stationary distribution for this Markov chain. Hence, given a tree $T$ and a designated root, we obtain a tree-indexed $\alpha$-stable Markov chain on $T$. Interestingly, unlike the Gaussian case, this process depends on the chosen root, as this Markov chain is not reversible. In particular, if $(X_0, X_1)$ are two consecutive times for this Markov chain started in stationarity, then $(X_0, X_1)$ and $(X_1, X_0)$ have different distributions; one can see this by looking at the two spectral measures. Proposition 4.3. Fix $\alpha \in (0,2)$, $a \in (0,1)$, a tree $T$ with designated root $\rho$ and consider the corresponding tree-indexed $\alpha$-stable Markov chain $X$ on $T$. Then $X^0$ is a color process.
Proof. We give the proof only for a path and with ρ being the start of the path. The extension to the tree case is analogous. As in the previous proof, we want to construct, on some probability space, a random partition π of [n] and random variables Y = (Y 1 , Y 2 , . . . , Y n ) so that (i) (X 1 , . . . , X n ) and Y have the same distribution, and (ii) (Y 0 , π) is a color process (for p = 1/2) with its color representation.
Let $Y_1$ have distribution $S_\alpha(1,0,0)$ and define $Y_i$ for $i \in \{2, \ldots, n\}$ inductively by $Y_i := aY_{i-1} + (1-a^\alpha)^{1/\alpha}U_i$, where $U_2, \ldots, U_n$ are i.i.d. with distribution $S_\alpha(1,0,0)$, independent of $Y_1$. It is clear from the above discussion that (i) holds. Now we extend this process to all times $t \in [1,n]$, interpolating on each interval $(i, i+1)$ in such a way that the resulting process is left-continuous and has jumps exactly at the integers. Note also that this process never jumps over the x-axis.
Next, consider the random partition $\pi$ of $\{1, 2, \ldots, n\}$ given by $i \sim j$ if $Y_t$ does not hit zero between times $i$ and $j$. Again using the reflection principle, properties of Brownian motion and the fact that $(Y_t)$ never jumps over the x-axis, it is clear that (ii) holds.
We apply this to a particular fully symmetric stable $n$-dimensional vector. To this end, let $S_0, S_1, \ldots, S_n$ be i.i.d., each having distribution $S_\alpha(1,0,0)$, and for $i = 1, 2, \ldots, n$ let $X_i := aS_0 + (1-a^\alpha)^{1/\alpha}S_i$. We claim that $(X_1^0, X_2^0, \ldots, X_n^0)$ is a color process. To see this, consider Proposition 4.3 with a homogeneous $n$-ary tree and $\alpha$ and $a$ being as above. By that proposition, the threshold zero process for the corresponding tree-indexed Markov chain is a color process.
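The stationarity used above rests on the stable scaling property: if $U, U'$ are i.i.d. $S_\alpha(1,0,0)$, then $aU + (1-a^\alpha)^{1/\alpha}U'$ is again $S_\alpha(1,0,0)$. This can be checked via the characteristic function $E[e^{itX}] = e^{-|t|^\alpha}$, sampling with the Chambers-Mallows-Stuck method (our own illustration):

```python
import numpy as np

def sym_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for the symmetric alpha-stable
    distribution S_alpha(1, 0, 0) (valid for alpha != 1)."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(2)
alpha, a, t, m = 1.5, 0.7, 1.0, 400_000

U1 = sym_stable(alpha, m, rng)
U2 = sym_stable(alpha, m, rng)
X = a * U1 + (1 - a ** alpha) ** (1 / alpha) * U2

# Both should estimate E[cos(t X)] = exp(-|t|^alpha).
cf_U = np.mean(np.cos(t * U1))
cf_X = np.mean(np.cos(t * X))
print(cf_U, cf_X, np.exp(-abs(t) ** alpha))
```

Since $\cos$ is bounded, the Monte Carlo averages converge despite the heavy tails, and both estimates agree with $e^{-|t|^\alpha}$.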

The geometric picture of a Gaussian vector
In this section we will switch to a more geometric perspective and view a mean zero Gaussian vector of length $n$ as the values of a certain random function at a set of $n$ points in $\mathbb{R}^k$ for some $k$. More precisely, let $k \ge 1$, $x_1, \ldots, x_n \in \mathbb{R}^k$, and let $W \sim N(0, I_k)$ be a standard normal random vector in $\mathbb{R}^k$. If we now let $X_i := x_i \cdot W$, then $X$ is a Gaussian vector with mean zero and covariances $\mathrm{Cov}(X_i, X_j) = x_i \cdot x_j$. Note that $X_i$ having variance one corresponds to $x_i$ being on the unit sphere $S^{k-1}$ in $\mathbb{R}^k$.
The above representation can always be achieved with $k = n$. To see this, let $X := (X_i)_{1\le i\le n}$ be a Gaussian vector with mean zero and covariance matrix $A$. Then for $W \sim N(0, I_n)$ we have that $X \stackrel{d}{=} \sqrt{A}\,W$, where $\sqrt{A}$ is the nonnegative square root of $A$. Now for $i = 1, \ldots, n$, let $x_i := \sqrt{A}(i,\cdot) \in \mathbb{R}^n$ be the $i$th row of $\sqrt{A}$. Then for any $i,j \in [n]$ we have that $x_i \cdot x_j = a_{ij}$, as desired. Such a representation can be achieved in $\mathbb{R}^k$ if and only if $X$ lives on a $k$-dimensional subspace of $\mathbb{R}^n$. We say that $X$ has dimension $k$ if $k$ is the smallest integer for which one has this representation. When we have $x_1, \ldots, x_n \in \mathbb{R}^k$ as above, we will always assume, without loss of generality, that $x_1, \ldots, x_n$ span $\mathbb{R}^k$, so that the dimension of $X$ is $k$. Now given a standard Gaussian vector $X := (X_i)_{1\le i\le n}$ (recall this means the marginals have mean zero and variance one) and $h \in \mathbb{R}$, let $(X^h_i)_{1\le i\le n}$ be, as before, the threshold process defined by $X^h_i := I(X_i > h)$. It will be useful to have a simple way to generate $(X^h_i)_{1\le i\le n}$, which can be done as follows. Assume that $X$ is $k$-dimensional with variances all being one. We take $n$ points $x_1, x_2, \ldots, x_n$ on $S^{k-1}$ corresponding to $(X_i)_{1\le i\le n}$ as described above. Let $Z \sim N(0, I_k)$. It is well known that when $Z$ is written in polar coordinates $(r, \theta)$ with $r \ge 0$ and $\theta \in S^{k-1}$, then $r$ and $\theta$ are independent, with $\theta$ uniform on $S^{k-1}$ and $r$ having the distribution of the square root of a chi-squared distribution with $k$ degrees of freedom. We then have that $X^h_i = 1$ if and only if $x_i \cdot Z > h$. Note that $\{x : x \cdot Z = h\}$ is a random hyperplane $H_h$ in $\mathbb{R}^k$ perpendicular to $\theta(Z)$, and so $X^h$ is equal to one for points on $S^{k-1}$ which lie on one side of $H_h$ and zero for points lying on the other side. Note that when $h = 0$, the hyperplane goes through the origin and it is the points on the same side as $\theta(Z)$ that get value one; in particular, when $h = 0$, the value of $X^h_i$ only depends on $\theta(Z)$ and not on $r(Z)$.
However, when h > 0, the hyperplane H h can go through any point of the one-sided infinite line from the origin going through θ(Z). In particular, H h might not intersect S k−1 at all; this would correspond exactly to r(Z) < h.
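The representation $X_i = x_i\cdot W$ with $x_i$ the rows of $\sqrt{A}$ is easy to verify numerically; here is our own illustration, with an arbitrary positive definite $A$:

```python
import numpy as np

A = np.array([
    [1.0, 0.5, 0.2],
    [0.5, 1.0, 0.3],
    [0.2, 0.3, 1.0],
])

# Nonnegative symmetric square root of A via its spectral decomposition.
w, V = np.linalg.eigh(A)
sqrtA = V @ np.diag(np.sqrt(w)) @ V.T

# Rows of sqrt(A) serve as the points x_1, ..., x_n: x_i . x_j = a_ij,
# and X = sqrt(A) W has covariance A for W ~ N(0, I_n).
gram = sqrtA @ sqrtA.T
print(np.round(gram, 8))
```

Since $\sqrt{A}$ is symmetric, the Gram matrix of its rows is exactly $A$, confirming the construction.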

Gaussian vectors canonically indexed by the circle
Proposition 5.1. Consider $n$ points $x_1, \ldots, x_n$ on $S^1$ satisfying $x_i \cdot x_j \ge 0$ for all $i,j$; this is equivalent to the correlations $a_{ij}$ of the corresponding Gaussian process $X$ being nonnegative. Then $X^0$ is a color process.
Proof. Using the nonnegative correlations, it is easy to check that the $n$ points $\{x_1, x_2, \ldots, x_n\} \subseteq S^1$ must lie on an arc of length at most $\pi/2$. Since the distribution of a Gaussian process is invariant under rotations, we may assume that the $n$ points lie on the arc $0 \le \theta \le \pi/2$. Hence we can assume that $x_j = e^{i\theta_j}$ with $0 \le \theta_1 < \theta_2 < \ldots < \theta_n \le \pi/2$.
We will couple $X^0$ with a color process together with its color representation in such a way that $X^0$ and the color process match exactly. We first show how one uniform point $U$ on $S^1$ generates a color process together with its color representation. Let $I_k := (\theta_{k-1}, \theta_k)$ for $k \in \{1, \ldots, n+1\}$, where $\theta_0 := 0$ and $\theta_{n+1} := \pi/2$, noting that the first and last arcs might be trivial. Letting $I^\theta_k$ be $I_k$ rotated counterclockwise by $\theta$, we note that the arcs $I^\theta_k$, $k \in \{1, \ldots, n+1\}$, $\theta \in \{0, \pi/2, \pi, 3\pi/2\}$, cover the circle. If $U \in I^\theta_k$, we partition $\{x_1, x_2, \ldots, x_n\}$ into the two sets $J_1 := \{x_1, \ldots, x_{k-1}\}$ and $J_2 := \{x_k, \ldots, x_n\}$, with the obvious caveat when $k \in \{1, n+1\}$. Next we color $J_1$ and $J_2$ as follows.
If $U$ is in $[0, \pi/2]$, we color each cluster 1; if $U$ is in $[\pi/2, \pi]$, we color $J_1$ 0 and $J_2$ 1; if $U$ is in $[\pi, 3\pi/2]$, we color each cluster 0; and if $U$ is in $[3\pi/2, 2\pi]$, we color $J_1$ 1 and $J_2$ 0. This clearly yields a color process (with $p = 1/2$) together with its color representation. Finally, observe that this color process is exactly $X^0$ if we use $U$ for $\theta(Z)$.
Remark 5.2. It is clear from the construction that the probability that the color process is constant is at least $1/2$ (corresponding to when $U$ falls in $[0, \pi/2]$ or $[\pi, 3\pi/2]$), and so Theorem 2.2(i) could also be used here. In addition, the color representation given here corresponds to that given in Theorem 2.2(i). The above description will however be useful when dealing with the case $h \ne 0$, as in Proposition 5.5.
Remark 5.3. For any color process $(Y_i)$ with $p = 1/2$, for any $i$ and $j$ it is clear that $P(Y_i \ne Y_j) = \frac{1}{2}P(i \not\sim j)$ and that $P(i \sim j) = 2P(Y_i = Y_j) - 1$. In the case of Proposition 5.1, it is clear that $P(i \not\sim j) = \frac{2|\theta_j - \theta_i|}{\pi}$ and hence that $P(Y_i \ne Y_j) = \frac{|\theta_j - \theta_i|}{\pi}$. Since $|\theta_j - \theta_i| = \arccos a_{ij}$, it follows that $P(Y_i \ne Y_j) = \frac{\arccos a_{ij}}{\pi}$. This is of course one of many ways to derive this last expression, which is known as Sheppard's formula (see [11]). This discussion also leads to the formula $P(i \sim j) = 1 - \frac{2\arccos a_{ij}}{\pi}$. The proof of the following elementary lemma, based on inclusion-exclusion, is left to the reader.
In particular, using (20), if $X$ corresponds to threshold zero for a mean zero Gaussian vector, the above can be expressed explicitly in terms of the covariances. Proposition 5.5. Consider $n$ points $x_1, \ldots, x_n$ on $S^1$ satisfying $x_i \cdot x_j \ge 0$ for all $i,j$. Then $X^h$ does not have a color representation for any $h \ne 0$ and $n \ge 3$.
Proof. It suffices to prove this for $h > 0$ and $n = 3$. Since $h > 0$, it is clear from the construction of $X^h$ described above that $(0,1,0)$ has positive probability but that $(1,0,1)$ has probability zero. However, it is immediate that no color process can have this property.
Remark 5.6. By taking the three points very close together, we see that there is no analogue of Theorem 2.2(i) when we don't have $\{0,1\}$-symmetry, even in the threshold Gaussian case (i.e., when $h \ne 0$).

A general obstruction for having a color representation for h > 0
The following is precisely a higher dimensional analogue of Proposition 5.5. The latter is the special case n = 3 together with the fact that any three points on the circle are in general position.
Theorem 5.7. The standard Gaussian process $X$ associated to $n$ points $x_1, \ldots, x_n \in S^{n-2}$ in general position (equivalently, not contained in an $(n-2)$-dimensional hyperplane) is such that $X^h$ is not a color process for any $h > 0$. More generally, if $X := (X_1, X_2, \ldots, X_n)$ is a random vector such that
• $(X_1, X_2, \ldots, X_{n-1})$ is fully supported on $\mathbb{R}^{n-1}$,
• there is $(a_1, a_2, \ldots, a_n) \in \mathbb{R}^n \setminus \{0\}$ such that $\sum_{i=1}^n a_iX_i = 0$ a.s.,
then $X^h$ is not a color process for any $h > 0$.
Remark 5.8. Any n-dimensional standard Gaussian vector which is not fully dimensional can be represented by points on S n−2 . When the n points are not in general position, which can only happen if n ≥ 4, in which case the above result is not applicable, we will see in Corollary 7.3 that nonetheless X h is not a color process for large h. Perhaps the simplest example of a four-dimensional Gaussian vector which is not fully dimensional but does not correspond to points on S 2 in general position appears in Figure 3. In the next subsection, we will see in Theorem 5.9 that this case will lead us to an important example for which we will have a phase transition.
Proof of Theorem 5.7. We first observe that the second statement implies the first. One can order the $n$ points $x_1, \dots, x_n \in S^{n-2}$ in general position so that the first $n-1$ points are linearly independent. This implies that the corresponding Gaussian vector $X = (X_1, \dots, X_n)$ satisfies the first condition. Next, since $x_1, \dots, x_n$ are linearly dependent (as they sit inside $\mathbb{R}^{n-1}$), there exists $(a_1, a_2, \dots, a_n) \in \mathbb{R}^n \setminus \{0\}$ such that $\sum_{i=1}^n a_i x_i = 0$, and hence $\sum_{i=1}^n a_i X_i = 0$ a.s.

For the second statement, note first that we can assume that $|a_i| > 0$ for $i = 1, 2, \dots, n$, since we can remove the $X_i$'s for which $a_i = 0$. If $a_j > 0$ for all $j$ (with a similar argument if $a_j < 0$ for all $j$), then for all $h > 0$, $\nu_{1^n}(h) = 0$, in which case there clearly cannot be any color representation. We hence assume that there are both positive and negative values among the $a_j$'s. Furthermore, since $\sum_{i=1}^n a_i X_i = 0$ and $(X_1, X_2, \dots, X_{n-1})$ is fully supported, for any $i$, if we define $I_i = \{1, 2, \dots, n\} \setminus \{i\}$, then the vector $(X_j)_{j \in I_i}$ is fully supported. This implies in particular that we, possibly after reordering the random variables and changing all the signs, can assume that $\sum_{j<n \,:\, a_j<0} |a_j| - \sum_{j<n \,:\, a_j>0} a_j > a_n$ and that $a_n > 0$.
Now fix $h > 0$, define the binary string $\rho$ by $\rho(i) = I(a_i < 0)$, and let $E$ be the event that for all $j < n$: $X_j > h$ if $a_j < 0$, and $X_j \le h$ if $a_j > 0$.
Since $(X_1, X_2, \dots, X_{n-1})$ is fully supported, the event $E$ has strictly positive probability. On $E$, we have
$$X_n = -\sum_{j<n} \frac{a_j X_j}{a_n} = \sum_{j<n \,:\, a_j<0} \frac{|a_j| X_j}{a_n} - \sum_{j<n \,:\, a_j>0} \frac{a_j X_j}{a_n} \ge h \cdot \frac{\sum_{j<n \,:\, a_j<0} |a_j| - \sum_{j<n \,:\, a_j>0} a_j}{a_n} > h,$$
which in particular implies that $\nu_\rho = 0$ (since $\rho(n) = 0$ would require $X_n \le h$).
On the other hand, since $(X_1, X_2, \dots, X_{n-1})$ is fully supported, the event
$$\{\alpha h < X_j \le h \text{ for all } j < n \text{ with } a_j < 0\} \cap \{h < X_j < \beta h \text{ for all } j < n \text{ with } a_j > 0\}$$
has strictly positive probability for any $\alpha \in (0,1)$ and $\beta \in (1, \infty)$. On this event we have that
$$X_n \ge h \cdot \frac{\alpha \sum_{j<n \,:\, a_j<0} |a_j| - \beta \sum_{j<n \,:\, a_j>0} a_j}{a_n}.$$
Since $\sum_{j<n \,:\, a_j<0} |a_j| - \sum_{j<n \,:\, a_j>0} a_j > a_n$, it follows that $X_n > h$ if $\alpha$ and $\beta$ are both sufficiently close to one. In particular, this implies that $\nu_{-\rho} > 0$. Since $\nu_\rho = 0$ but $\nu_{-\rho} > 0$, it follows that $X^h$ cannot have a color representation.
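The mechanism of the proof can be illustrated by simulation on a concrete degenerate example (our choice, not the paper's): $X_3 = (X_1 + X_2)/\sqrt{2}$, i.e., $-X_1 - X_2 + \sqrt{2}\,X_3 = 0$, for which $\rho = (1,1,0)$ and the hypothesis $|a_1| + |a_2| = 2 > \sqrt{2} = a_3$ holds:

```python
import math, random

random.seed(0)

h = 0.5
n_samples = 200_000
count_110 = count_001 = 0

for _ in range(n_samples):
    x1 = random.gauss(0.0, 1.0)
    x2 = random.gauss(0.0, 1.0)
    x3 = (x1 + x2) / math.sqrt(2)   # degenerate coordinate: -x1 - x2 + sqrt(2)*x3 = 0
    pattern = (x1 > h, x2 > h, x3 > h)
    if pattern == (True, True, False):
        count_110 += 1
    elif pattern == (False, False, True):
        count_001 += 1

# nu_rho = P(pattern 110) is zero: x1 > h and x2 > h force x3 > sqrt(2)*h > h.
assert count_110 == 0
# nu_{-rho} = P(pattern 001) is positive: x1, x2 slightly below h give x1 + x2 > sqrt(2)*h.
assert count_001 > 0
```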

A four-dimensional Gaussian exhibiting a non-trivial phase transition
In this subsection we study an example, corresponding to four points on $S^2$, for which the existence of a color representation for positive $h$ is not ruled out by Theorem 5.7. To this end, let $\theta \in (0, \pi/2]$ and, for $i = 1, 2, 3, 4$, let
$$x_i := (\sin\theta \cos(i\pi/2),\, \sin\theta \sin(i\pi/2),\, \cos\theta).$$
Geometrically, this corresponds to four points forming a square on a 2-sphere, all at the same latitude, and it follows easily that $x_i \cdot x_j = \cos^2\theta$ when $|i-j|$ is odd and $x_i \cdot x_j = \cos 2\theta$ when $|i-j| = 2$. Note that the corresponding covariance matrix $A$ has nonnegative entries if and only if $\theta \le \pi/4$. The following theorem implies Theorem 1.3.
Since this is a color representation by assumption, $q_\sigma \ge 0$ for all $\sigma$, which is equivalent to (29). This proves the necessity in the first part of the lemma. To see that we also have sufficiency, let $q = (q_{123}, q_{12,3}, q_{13,2}, q_{1,23}, q_{1,2,3})$ be a color representation of $(X^{\theta,h}_1, X^{\theta,h}_2, X^{\theta,h}_3)$ which satisfies the inequalities in (29). Define $q_\sigma$ for $\sigma \in B_4$ by (30) and (31), and extend to all partitions by making the representation invariant under the dihedral group. Since (29) holds, $q_\sigma \ge 0$ for all $\sigma \in P_4$. Also, one checks that these weights add to one and that the projection onto $\{1, 2, 3\}$ is the $q$ above. Using the fact that $\nu_{010\cdot} = \nu_{0100}$, one can check that the probability of any configuration is determined by the three-dimensional marginals. From here, one verifies that this yields a color representation of $X^{\theta,h}$, as desired.
Proof of Theorem 5.9. To see that (i) holds, let $h = 0$. We will apply Lemma 5.10. By (11), the process $(X^{\theta,0}_1, X^{\theta,0}_2, X^{\theta,0}_3)$ has a signed color representation with one free parameter $t \in \mathbb{R}$, and this gives a color representation for exactly those $t$ for which $q_\sigma \ge 0$ for all $\sigma \in B_3$. Using (20) and (24), one easily verifies that in the Gaussian setting the resulting system of inequalities can equivalently be written in terms of the angles $\theta_{ij}$; rearranging, the weights are all nonnegative if and only if $t$ lies in the interval (32). In our specific example, we have that $\theta_{12} = \theta_{23} = \arccos \cos^2\theta$ and $\theta_{13} = 2\theta \ge \theta_{12}$, and hence (32) simplifies to
$$0 \vee \left( \frac{2\arccos\cos^2\theta + 2\theta}{\pi} - 1 \right) \le t \le \frac{2\arccos\cos^2\theta - 2\theta}{\pi}.$$

The desired conclusion now follows.
To see that (ii) holds, note first that by Theorem 8.1 and a computation, the value of the free parameter $t$ corresponding to the limit $h \to 0$ is given by
$$t^* = 1 - \frac{1}{\pi} \arccos\left( \frac{2\sin^4\theta}{(1+\cos^2\theta)^2} - 1 \right).$$
Using the proof of (i), it follows that it suffices to show that
$$\frac{2\arccos\cos^2\theta - \pi/2}{\pi} < 1 - \frac{1}{\pi}\arccos\left( \frac{2\sin^4\theta}{(1+\cos^2\theta)^2} - 1 \right) < \frac{2(2\theta - \arccos\cos^2\theta)}{\pi}$$
for all sufficiently small $\theta$. To this end, note first that at $\theta = 0$ the first expression is equal to $-1/2$ while the second and third expressions are both equal to zero, and hence the first inequality is strict for all sufficiently small $\theta$. To compare the last two expressions, one verifies that the derivatives of these two expressions at $\theta = 0$ are given by $0$ and $4 - 2\sqrt{2}$ respectively, and hence (ii) is established. Finally, (iii) follows from Corollary 7.3.
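The chain of inequalities above is easy to probe numerically; the following small sketch (ours, not part of the paper) verifies it for a few small values of $\theta$:

```python
import math

def e1(t):  # (2*arccos(cos^2 t) - pi/2) / pi
    return (2 * math.acos(math.cos(t) ** 2) - math.pi / 2) / math.pi

def e2(t):  # the limiting free parameter t* as h -> 0
    x = 2 * math.sin(t) ** 4 / (1 + math.cos(t) ** 2) ** 2 - 1
    return 1 - math.acos(x) / math.pi

def e3(t):  # 2*(2t - arccos(cos^2 t)) / pi
    return 2 * (2 * t - math.acos(math.cos(t) ** 2)) / math.pi

# The strict chain e1 < e2 < e3 holds for small positive theta.
for theta in (0.02, 0.05, 0.1, 0.2):
    assert e1(theta) < e2(theta) < e3(theta)
```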

A four-dimensional Gaussian with nonnegative correlations whose 0 threshold has no color representation

In this subsection, we study a particular example which will in particular yield a proof of Theorem 1.5; see (ii) and (iii) below.
Theorem 5.11. Let $(X_1, X_2, \dots, X_{n-1})$ be a fully symmetric multivariate mean zero variance one Gaussian random vector with pairwise correlation $a \in [0, 1)$, and let
$$X_n := \frac{X_1 + X_2 + \dots + X_{n-1}}{\sqrt{(n-1)(1 + (n-2)a)}},$$
the normalization ensuring that $X_n$ has mean zero and variance one. In addition, it is immediate to check that all pairwise correlations are nonnegative. If $X^a := (X_1, X_2, \dots, X_n)$, then the following hold.
(i) When $n = 3$, $X^{a,0}$ is a color process.
(ii) When $n \ge 4$ and $a$ is sufficiently close to zero (or equal to zero), $X^{a,0}$ is not a color process.
(iii) For $n \ge 4$, there exists a fully supported multivariate mean zero variance one Gaussian random vector $X$ with nonnegative correlations for which $X^0$ is not a color process.
(iv) When $n \ge 4$ and $a$ is sufficiently close to one, $X^{a,0}$ is a color process.
(v) For any $n \ge 3$, $a \in [0, 1)$ and $h > 0$, $X^{a,h}$ is not a color process.
Proof. (i). The claim for n = 3 follows immediately from Proposition 5.1 or Proposition 2.12 in [12].
(ii). We first consider $n \ge 4$ and $a = 0$ and obtain the result in this case. If $X^0$ is a color process, then the color representation must give weight $1/(n-1)$ to each of the $n-1$ partitions consisting of all singletons except for $n$ sitting in a block of size 2. This is because (1) since $X_1, X_2, \dots, X_{n-1}$ are independent, no two of $1, 2, \dots, n-1$ can ever be in the same cluster, and (2) if $n$ were in its own cluster with positive probability, then we would have $\nu_{0^{n-1}1} > 0$, which contradicts the fact that $X_1, X_2, \dots, X_{n-1}$ all being negative while $X_n$ is positive is impossible. The conclusion is that if $X^0$ is a color process, then the probability that $1$ and $n$ are in the same block must be $1/(n-1)$; that is,
$$\frac{2}{\pi} \arcsin \frac{1}{\sqrt{n-1}} = \frac{1}{n-1}. \qquad (33)$$
This is true for $n = 3$ (as it must be), but we show it is false for all $n \ge 4$. Rearranging, this is equivalent to an equation $f(x) = g(x)$, whose relevant crossings occur at $x \approx 0.338247$ and $x \approx 0.941057$, together with the facts that $f'(0) = 0 < 1 = g'(0)$ and $f'(1) = \pi < \infty = g'(1)$. This easily implies that $\{x : f(x) > g(x)\}$ is of the form $(b, 1)$. Hence we need only check that (33) fails for $n = 4$ with the left side being larger; this is immediate to check. Finally, to obtain the result for small $a$ depending on $n$, one just uses the fact that the set of color processes is closed.
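The necessary condition above is elementary to check numerically: the forced pair-merge probability is $\frac{2}{\pi}\arcsin a_{1n}$ (the standard sign-covariance formula for a bivariate Gaussian) with $a_{1n} = 1/\sqrt{n-1}$, and it must equal $1/(n-1)$. A small check (ours):

```python
import math

def left(n):
    # Pair-merge probability forced at threshold zero:
    # (2/pi) * arcsin(Corr(X_1, X_n)) with Corr = 1/sqrt(n-1) when a = 0.
    return (2 / math.pi) * math.asin(1 / math.sqrt(n - 1))

def right(n):
    # The weight each of the n-1 two-point partitions would need.
    return 1 / (n - 1)

# n = 3: equality, as it must be ((2/pi) * arcsin(1/sqrt(2)) = 1/2).
assert abs(left(3) - right(3)) < 1e-12
# n >= 4: the condition fails, with the left side being larger.
for n in range(4, 30):
    assert left(n) > right(n)
```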
(iii). Fix $n \ge 4$, take $a = 0$ and replace $X_n$ by $X_n^\varepsilon := \varepsilon Z + (1 - \varepsilon^2)^{1/2} X_n$, where $Z$ is another standard Gaussian independent of everything else. Then for every $\varepsilon > 0$, the resulting vector $X^\varepsilon$ is fully supported with nonnegative correlations. However, for small $\varepsilon$, $(X^\varepsilon)^0$ cannot be a color process, since the color processes are closed and the limit as $\varepsilon \to 0$ is not a color process by (ii).
For (iv), note that by Theorem 2.2(i), a sufficient condition for being a color process is that $\nu_{0^n} \ge 1/4$. In our case, we clearly have that for any $n$, $\nu_{0^n} \to 1/2$ as $a \to 1$, and hence the desired conclusion follows. Finally, (v) follows immediately from the second statement of Theorem 5.7.
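For (iv), the quantity $\nu_{0^n}$ can be computed numerically by conditioning on the common factor in the equicorrelated representation $X_i = \sqrt{a}\,Z + \sqrt{1-a}\,W_i$. The following sketch (our own check, not from the paper) illustrates that for $n = 4$, $\nu_{0^n}$ exceeds $1/4$ already at $a = 0.9$ and approaches $1/2$:

```python
import math

def phi(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def orthant_prob(n, a, grid=20000, zmax=10.0):
    # P(X_1 <= 0, ..., X_n <= 0) for equicorrelated standard Gaussians,
    # via X_i = sqrt(a) Z + sqrt(1-a) W_i and conditioning on Z.
    c = math.sqrt(a / (1 - a))
    dz = 2 * zmax / grid
    total = 0.0
    for i in range(grid):
        z = -zmax + (i + 0.5) * dz
        total += phi(z) * Phi(-c * z) ** n * dz
    return total

# Sanity check: the (near-)independent case gives 2^{-n} (here n = 4).
assert abs(orthant_prob(4, 1e-9) - 0.5 ** 4) < 1e-3
# For a close to 1 the probability approaches 1/2, and in particular exceeds 1/4.
assert orthant_prob(4, 0.9) > 0.25
assert orthant_prob(4, 0.99) > 0.4
```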

An extension to the stable case and two interesting integrals
In this section, we explain to what extent the results in the last subsection of the previous section can be carried over to the stable case. We assume now that $X_1, X_2, \dots, X_{n-1}$ are i.i.d., each with distribution $S_\alpha(1, 0, 0)$ for some $\alpha \in (0, 2)$, and we let $X_n := (X_1 + X_2 + \dots + X_{n-1})/(n-1)^{1/\alpha}$ and $X := (X_1, X_2, \dots, X_n)$. Proposition 2.12 in [12] implies, as before, that when $n = 3$, $X^0$ is a color process (the $\{0,1\}$-symmetry is obvious, and the nonnegative correlations are an easy consequence of Harris' inequality).
Concerning whether $X^h$ can be a color process for some $n \ge 3$ and $h > 0$, Theorem 5.7 implies that it cannot, except perhaps when $\alpha = 1$.
We now look at the case $n \ge 4$ and $h = 0$. The same argument as in the previous case allows us to conclude that if $X^0$ is a color process, then the color representation must give weight $1/(n-1)$ to each of the $n-1$ partitions consisting of all singletons except for $n$ sitting in a block of size 2. Next, (18) easily yields that the probability that $1$ and $n$ are in the same block is $E[\operatorname{sgn}(X_1)\operatorname{sgn}(X_n)]$, and hence it must be the case that $E[\operatorname{sgn}(X_1)\operatorname{sgn}(X_n)] = 1/(n-1)$.
Restricting now to $n = 4$, using Mathematica, we plot this integral as a function of $\alpha$ in Figure 4, and it appears to be strictly increasing. (We will see later that it is constant if $n = 3$!) If we knew this to be the case, we could conclude that there is at most one value of $\alpha$ for which $X^0$ is a color process. Exactly as in the previous subsection, using the fact that the set of color processes is closed, once we know that there is at least one $\alpha$ for which it is not a color process, we can easily construct fully supported symmetric stable distributions with $n = 4$, for which the necessary pairwise nonnegative correlations of the signs hold, whose zero thresholds are not color processes.
In [12], a very special case of Corollary 3.4 was obtained, namely the case where we have exchangeability (i.e., full permutation invariance). As described below, that analysis, combined with the argument in the proof of Theorem 5.11(ii) and a result of I. Molchanov, yields a proof of Theorem 1.6 from Section 1.
Proof of Theorem 1.6. The independence in $\alpha$ of the first integral and its value are established as follows. Let $S_1, S_2, S_3$ be i.i.d., each having distribution $S_\alpha(1,0,0)$, and let $X_1 := (S_1 + S_2)/2^{1/\alpha}$ and $X_2 := (S_2 + S_3)/2^{1/\alpha}$. Considering $E[\operatorname{sgn}(X_1)\operatorname{sgn}(X_2)]$, it follows easily from the discussion in Section 3.5 in [12] that this expectation is $1/3$ for each $\alpha$. On the other hand, Corollary 6.12 in [8] implies, after some work, that this expectation is, for a given $\alpha$, the first integral above times $2/\pi^2$. Hence this integral is independent of $\alpha$, with value $\pi^2/6$.
The independence in $\alpha$ of the second integral and its value are established as follows. Let $S_1, S_2$ be i.i.d., each having distribution $S_\alpha(1,0,0)$, and let $X := (S_1 + S_2)/2^{1/\alpha}$. Consider $(S_1^0, S_2^0, X^0)$. It is easy to see that these random variables have pairwise nonnegative correlations (e.g., using Harris' inequality) and that the triple is $\{0,1\}$-symmetric. It follows from Proposition 2.12 in [12] that this is a color process. It is clear, using e.g. the argument in the proof of Theorem 5.11(ii), that the random partition must satisfy $q_{13,2} = q_{23,1} = 1/2$. Next, (18) easily yields $q_{13,2} = E[\operatorname{sgn}(S_1)\operatorname{sgn}(X)]$. On the other hand, Corollary 6.12 in [8] implies, after some work, that this expectation is, for a given $\alpha$, the second integral above times $2/\pi^2$. Hence the second integral is independent of $\alpha$, with value $\pi^2/4$.

Remark 6.1. (i) Without this theory, one can show directly, although it is nontrivial, that the first integral is independent of $\alpha$ and equal to $\pi^2/6$ (as shown by Jack D'Aurizio, see https://math.stackexchange.com/questions/2698982/why-does-this-integral-not-depend-on-the-parameter).
(ii) Given the analysis in [12], we knew that any formula for $E[\operatorname{sgn}(X_1)\operatorname{sgn}(X_2)]$ would have to be independent of $\alpha$, in particular the formula given in Corollary 6.12 in [8], which is the above integral. However, we would have guessed that the independence in $\alpha$ of such a formula would manifest itself in a more transparent way; surprisingly, this was not the case.
(iii) Extending the analysis of the first integral, letting $X_i := (S_i + S_{i+1})/2^{1/\alpha}$ for $i = 1, \dots, n$, one can consider $E[\operatorname{sgn}(X_1 X_2 \cdots X_n)]$, the analogue of $E[\operatorname{sgn}(X_1)\operatorname{sgn}(X_2)]$. Using Section 3.5 in [12], one can show that this expectation is $1/(n+1)$ for $n$ even and zero for $n$ odd. One can then combine this with Corollary 6.12 in [8] to obtain an infinite number of higher-dimensional integrals which also, surprisingly, do not depend upon $\alpha$.
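The two $\alpha$-independent values used in the proof above ($1/3$ and $1/2$) are easy to probe by simulation in the Cauchy case $\alpha = 1$, where $2^{1/\alpha} = 2$ and standard Cauchy samples are elementary to generate. A Monte Carlo sketch (ours, not from the paper):

```python
import math, random

random.seed(1)

def cauchy():
    # Standard Cauchy = S_1(1,0,0): tangent of a uniform angle.
    return math.tan(math.pi * (random.random() - 0.5))

def sgn(x):
    return 1.0 if x > 0 else -1.0

N = 400_000
acc_first = acc_second = 0.0
for _ in range(N):
    s1, s2, s3 = cauchy(), cauchy(), cauchy()
    x1 = (s1 + s2) / 2.0   # 2^{1/alpha} = 2 for alpha = 1
    x2 = (s2 + s3) / 2.0
    acc_first += sgn(x1) * sgn(x2)    # should average to 1/3
    acc_second += sgn(s1) * sgn(x1)   # should average to 1/2

assert abs(acc_first / N - 1 / 3) < 0.01
assert abs(acc_second / N - 1 / 2) < 0.01
```

(With $N = 400{,}000$ samples the standard error is about $0.0016$, so the tolerance $0.01$ is comfortable.)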

Results for large thresholds and the discrete Gaussian free field
In the first subsection of this section, we show that non-fully supported Gaussian vectors do not have color representations for large h. On the other hand, in the second subsection, we give the proof of Theorem 1.7 that discrete Gaussian free fields (DGFF) have color representations for large h.

An obstruction for large h
We first deal with the case n = 2, where we have the following easy result.
Proposition 7.1. Let $(X_1, X_2)$ be a standard Gaussian vector with $\operatorname{Cov}(X_1, X_2) = a \in [0, 1)$. Then $X^h$ has a (unique) color representation $(q_{12}(h), q_{1,2}(h))$ for every $h \in \mathbb{R}$, and $\lim_{h\to\infty} q_{12}(h) = 0$.

This result essentially follows from Theorem 2.1 in [2] (see also Lemma 7.10 here), but we include a proof sketch.
Proof of Proposition 7.1. Note first that since $n = 2$, the nonnegative correlation immediately implies that $X^h$ has a color representation for all $h \in \mathbb{R}$, and hence we need only show that $\lim_{h\to\infty} q_{12}(h) = 0$. Since $\nu_{11}(h) = q_{12}(h)\,\nu_1(h) + (1 - q_{12}(h))\,\nu_1(h)^2$, it can be easily checked that
$$q_{12}(h) = \frac{\nu_{11}(h) - \nu_1(h)^2}{\nu_1(h)(1 - \nu_1(h))},$$
and so it suffices to show that $\nu_{11}(h)/\nu_1(h) \to 0$ as $h \to \infty$; this however is straightforward.
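For $n = 2$ the color representation is unique, and the decay above can be illustrated numerically by computing $\nu_{11}(h)$ through one-dimensional integration. A small sketch (ours, with correlation $a = 0.5$ as an arbitrary example):

```python
import math

def Phi_bar(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def nu11(h, a, grid=40000, span=12.0):
    # P(X1 > h, X2 > h) for a standard bivariate Gaussian with correlation a,
    # integrating P(X2 > h | X1 = x) against the density of X1 over (h, h + span).
    s = math.sqrt(1 - a * a)
    dx = span / grid
    total = 0.0
    for i in range(grid):
        x = h + (i + 0.5) * dx
        total += phi(x) * Phi_bar((h - a * x) / s) * dx
    return total

def q12(h, a):
    # Unique n = 2 color representation: nu_11 = q_12 p + (1 - q_12) p^2, p = nu_1(h).
    p = Phi_bar(h)
    return (nu11(h, a) - p * p) / (p * (1 - p))

a = 0.5
vals = [q12(h, a) for h in (0.0, 2.0, 4.0)]
assert abs(vals[0] - (2 / math.pi) * math.asin(a)) < 1e-3  # classical h = 0 value
assert vals[2] < vals[1] < vals[0]                          # decreasing along these h
assert vals[2] < 0.1                                        # q_12(h) -> 0
```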
The previous result immediately implies the following.

Corollary 7.2. Let $(X_1, \dots, X_n)$ be a standard Gaussian vector with $\operatorname{Cov}(X_i, X_j) \in [0, 1)$ for all $i < j$. If $X^h$ has a color representation $(q_\sigma(h))$ for all sufficiently large $h$, then $\lim_{h\to\infty} q_{1,2,\dots,n}(h) = 1$.
Interestingly, this gives the following negative result when $X$ is not fully supported.
Corollary 7.3. Let (X 1 , X 2 , . . . , X n ) be a standard Gaussian vector with Cov(X i , X j ) ∈ [0, 1) for all i < j. If X is not fully supported, then for all sufficiently large h, X h is not a color process.
Proof. Since $X$ is not fully supported, there must exist a linear relation among the variables. As a result, there must exist $\rho \in \{0,1\}^n$ so that for all $h > 0$, $\nu_\rho(h) = 0$. Hence, if there is a color representation $(q_\sigma(h))$ for some $h$, it must satisfy $q_{1,2,\dots,n}(h) = 0$, since the all-singleton partition assigns positive probability to every configuration. The desired conclusion now follows from Corollary 7.2.

Discrete Gaussian free fields and large thresholds
In this section, our main goal is to prove Theorem 1.7. Note that all our random vectors in this section will be fully supported, which, in view of Corollary 7.3, is in any case necessary.
We first note the following corollaries of Theorem 1.7.
Corollary 7.4. Let a ∈ (0, 1) and let X := (X 1 , X 2 , . . . , X n ) be a standard Gaussian vector with Cov(X i , X j ) = a for all i < j. Then X h is a color process for all sufficiently large h.
Proof. Let $A$ be the covariance matrix of $X$. Then one verifies that for $i, j \in [n]$ we have
$$(A^{-1})_{ij} = \frac{(1 + (n-2)a)\,I(i = j) - a\,I(i \neq j)}{(1-a)(1 + (n-1)a)}.$$
Consequently, $A$ is an inverse Stieltjes matrix. Moreover, for all $j \in [n]$ we have that
$$(\mathbf{1}^T A^{-1})_j = \frac{1 + (n-2)a - (n-1)a}{(1-a)(1 + (n-1)a)} = \frac{1}{1 + (n-1)a} > 0,$$
and hence $\mathbf{1}^T A^{-1} > 0$. Applying Theorem 1.7, the desired conclusion follows.
Corollary 7.5. Let $a \in (0,1)$ and let $X := (X_1, X_2, \dots, X_n)$ be a standard Gaussian vector with $\operatorname{Cov}(X_i, X_j) = a^{|i-j|}$ for all $i, j \in [n]$; this yields a Markov chain. Then $X^h$ is a color process for all sufficiently large $h$.
Proof. Let $A$ be the covariance matrix of $X$. Then one verifies that for $i, j \in [n]$ we have
$$(A^{-1})_{ij} = \frac{1}{1-a^2} \cdot \begin{cases} 1 & \text{if } i = j \in \{1, n\}, \\ 1 + a^2 & \text{if } i = j \notin \{1, n\}, \\ -a & \text{if } |i - j| = 1, \\ 0 & \text{otherwise.} \end{cases}$$
Consequently, $A$ is an inverse Stieltjes matrix. Moreover, $(\mathbf{1}^T A^{-1})_j = \frac{1-a}{1+a}$ if $j \notin \{1, n\}$ and $(\mathbf{1}^T A^{-1})_j = \frac{1}{1+a}$ if $j \in \{1, n\}$, and hence $\mathbf{1}^T A^{-1} > 0$. Applying Theorem 1.7, the desired conclusion follows.
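Both computations can be sanity-checked numerically: for each family one inverts the covariance matrix and verifies the inverse Stieltjes property (nonpositive off-diagonal entries of $A^{-1}$) together with positivity of $\mathbf{1}^T A^{-1}$. A pure-Python sketch (ours):

```python
def invert(M):
    # Gauss-Jordan inversion with partial pivoting; sufficient for small matrices.
    n = len(M)
    A = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [v / p for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0.0:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]
    return [row[n:] for row in A]

def check(A):
    # Inverse Stieltjes: off-diagonal entries of A^{-1} nonpositive;
    # weak Savage: all column sums of A^{-1} positive.
    n = len(A)
    B = invert(A)
    off_diag_ok = all(B[i][j] <= 1e-12 for i in range(n) for j in range(n) if i != j)
    col_sums_ok = all(sum(B[i][j] for i in range(n)) > 0 for j in range(n))
    return off_diag_ok and col_sums_ok

n, a = 6, 0.3
constant = [[1.0 if i == j else a for j in range(n)] for i in range(n)]   # Corollary 7.4
markov = [[a ** abs(i - j) for j in range(n)] for i in range(n)]          # Corollary 7.5
assert check(constant)
assert check(markov)
```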
We now state and prove a few lemmas that will be needed in the proof of Theorem 1.7. The first of these gives sufficient conditions for $X^h$ to be a color process for large $h$, in terms of the decay of the tails of $\nu(1_S)$ for sets $S$.

Lemma 7.6. Let $(\nu_p)_{p\in(0,1)}$ be a family of probability measures on $\{0,1\}^n$. Assume that $\nu_p$ has marginals $p\delta_1 + (1-p)\delta_0$, that for all $S \subseteq [n]$ with $|S| \ge 2$ and all $k \in S$, as $p \to 0$, we have that $\nu_p(1_S) = o(\nu_p(1_{S\setminus\{k\}}))$, and that
$$\lim_{p\to 0} \frac{1}{p} \sum_{S \subseteq [n] \,:\, |S| \ge 2} \nu_p(1_S) = 0.$$
Then $X^p \sim \nu_p$ is a color process for all sufficiently small $p > 0$.
Proof. We will show that given the assumptions of the lemma, for p > 0 sufficiently small there is a color representation (q σ ) = (q σ (p)) of X p ∼ ν p which is such that q σ = 0 for all σ ∈ B n with more than one non-singleton partition element.
To this end, fix $p \in (0, 1/2)$. We now refer to the proof of Theorem 2.4. By Step 1 in that proof, a color representation $(q_\sigma(p))$ with the desired properties exists if and only if the unique solution $(q_{\sigma_S}(p))_{|S| \neq 1}$ to (37) is nonnegative. In the proof of Step 1 of Theorem 2.4, we computed the entries of $BA$ for $S, T \subseteq [n]$ with $|T| \neq 1$. Note that since $p \in (0, 1/2)$, if $S \subseteq T$ and $|S| \neq 1$, we have that $(BA)(S, T) > 0$. Since, by Step 1 in the proof of Theorem 2.4, the rank of $A$ is exactly $2^n - n$, it follows that if we think of $\nu_p$ as a column vector, then (37) is equivalent to the corresponding system with $\nu_p$ replaced by $DB\nu_p$ (with $T$ here meaning transpose, and $e_S$ denoting the vector $(I(S' = S))_{S' \subseteq [n]}$). Now note that $DB\nu_p(1_\emptyset) = \nu_p(1_\emptyset)$, and that if $S \subseteq [n]$ has size $|S| \ge 2$, then the denominator is asymptotic to $p$ while, by (36), the numerator is asymptotic to $\nu_p(1_S)$. It follows that $DB\nu_p(1_S) \sim p^{-I(|S|\ge 2)}\nu_p(1_S)$ for any $S \subseteq [n]$ with $|S| \neq 1$. If we apply $C$ to the vector $(p^{-I(|S|\ge 2)}\nu_p(1_S))_{S\subseteq[n],\, |S| \neq 1}$, a computation shows that the resulting vector has entries which, by assumption, are positive for $p$ close to zero. It follows that $q_\sigma(p) \ge 0$ for all sufficiently small $p > 0$ and all $\sigma \in B_n$. This concludes the proof.
Lemma 7.7. Let X := (X 1 , X 2 , . . . , X n ) be a standard Gaussian vector with strictly positive, positive definite covariance matrix A. Assume further that A is an inverse Stieltjes matrix and that 1 T A −1 ≥ 0. Then for each S ⊆ [n], the covariance matrix A S of X S := (X i ) i∈S is a strictly positive, positive definite inverse Stieltjes matrix with 1 T A −1 S ≥ 0.
Remark 7.8. The main part of the proof of this lemma consists of showing that if the weak Savage condition holds for a matrix $A$ which is an inverse Stieltjes matrix, then the weak Savage condition also holds for any principal submatrix. Without the additional assumption that $A$ is an inverse Stieltjes matrix, this is not true: one can exhibit a positive definite matrix $A$ for which the Savage condition holds but fails for the principal submatrix corresponding to the first three rows and columns.
Remark 7.9. Lemma 7.7 essentially proves that if X is a DGFF, then for any S ⊆ [n], X S := (X i ) i∈S is also a DGFF.
Proof of Lemma 7.7. By induction, it suffices to show that the conclusion of the lemma holds for $S$ of the form $[n]\setminus\{k\}$ for some $k \in [n]$. To this end, fix $k \in [n]$. Clearly, $A_{[n]\setminus\{k\}}$ is a positive and positive definite matrix. By a lemma on page 328 in [7], $A_{[n]\setminus\{k\}}$ is also an inverse Stieltjes matrix. Next, let $B := A^{-1} = (b_{ij})$. By the block inverse (Schur complement) formula, $(A_{[n]\setminus\{k\}}^{-1})_{ij} = b_{ij} - b_{ik}b_{kj}/b_{kk}$, and hence for $j \neq k$,
$$\mathbf{1}^T A_{[n]\setminus\{k\}}^{-1}(j) = \mathbf{1}^T A^{-1}(j) - \frac{b_{jk}}{b_{kk}}\, \mathbf{1}^T A^{-1}(k).$$
Since $b_{jk} \le 0$, $b_{kk} > 0$ and $\mathbf{1}^T A^{-1}(k) \ge 0$, we obtain the inequality $\mathbf{1}^T A_{[n]\setminus\{k\}}^{-1}(j) \ge \mathbf{1}^T A^{-1}(j) \ge 0$. Since this holds for all $j \neq k$, the desired conclusion follows.
The following lemma, which collects special cases of Theorems 2.1 and 2.2 in [2] and Theorem 3.1 in [3], will be needed here and also in the proofs of some lemmas used in the proof of Theorem 1.8.

Lemma 7.10. Let $X$ be a fully supported $n$-dimensional standard Gaussian vector with positive definite covariance matrix $A = (a_{ij})$, and let $\alpha := \mathbf{1}^T A^{-1}$. If $\alpha$ has no zero component, then as $h \to \infty$ the asymptotics of the relevant orthant probabilities are as given in the cited results.

We note that if $n = 3$, then, assuming $\alpha(1) \le \alpha(2) \le \alpha(3)$, it is immediate to check that $\alpha(2)$ and $\alpha(3)$ are strictly positive, while $\alpha(1)$ can be negative, zero or positive.
Lemma 7.11. Let $X := (X_1, X_2, \dots, X_n)$ be a standard Gaussian vector with positive, positive definite covariance matrix $A$ which is an inverse Stieltjes matrix and satisfies $\mathbf{1}^T A^{-1} \ge 0$. Then for any $S \subseteq [n]$ with $|S| \ge 2$ and $k \in S$, as $h \to \infty$, we have $\nu_h(1_S) = o(\nu_h(1_{S\setminus\{k\}}))$.

Proof. Let $S \subseteq [n]$ and define $X_S := (X_i)_{i\in S}$. Let $A_S$ be the covariance matrix of $X_S$. By Lemma 7.7, the matrix $A_S$ is a strictly positive, positive definite inverse Stieltjes matrix which satisfies $\mathbf{1}^T A_S^{-1} \ge 0$. To simplify notation, let $(a^{(S)}_{ij}) := A_S$ and $(b^{(S)}_{ij}) := A_S^{-1}$. The rest of the proof of this lemma will be divided into several steps.

Step 1. Fix $S \subseteq [n]$ with $|S| \ge 2$ and $k \in S$. In this step, we will prove the inequality (41). To this end, note first that $(b^{(S)}_{ij}) = A_S^{-1}$, and that, since $X$ is a standard Gaussian vector, $a^{(S)}_{ii} = 1$ for all $i$. Combining these observations yields a bound, with equality if and only if $\mathbf{1}^T A_S^{-1}(k) = 0$. In view of (43), (41) follows.
Step 2. In this step, we will prove that the inequalities (44) hold for all $S \subseteq [n]$ with $|S| \ge 2$ and $k \in S$, with the first inequality being strict if and only if $\mathbf{1}^T A_S^{-1}(k) > 0$. To this end, note first that since $A$ is positive definite, so are $A_S$ and $A_{S\setminus\{k\}}$. Arguing as before, recalling that $b^{(S)}_{kk} > 0$ and using the conclusion of Step 1, the desired conclusion follows.
Step 3. For $S \subseteq [n]$ with $|S| \ge 2$, define $J_S := \{j \in S : \mathbf{1}^T A_S^{-1}(j) = 0\}$. Note that since $A_S$ is positive definite, we have that $\mathbf{1}^T A_S^{-1} \mathbf{1} > 0$, and hence $J_S \neq S$. In this step, we claim that properties (i)–(v) hold for any sets $S' \subseteq S \subseteq [n]$.
To see this, note first that by (39), for any set $S \subseteq [n]$ and any distinct $i, j \in S$, we have (46). From this, (i) immediately follows. For (ii), one first checks that if $S$ has two elements, then $J_S = \emptyset$. For larger $S$, we argue by induction. Take $i \in J_S$. By induction, $|(S\setminus\{i\})\setminus J_{S\setminus\{i\}}| \ge 2$, which by (i) implies $|(S\setminus\{i\})\setminus(J_S\setminus\{i\})| \ge 2$, which yields the result for $S$.
Next, by Lemma 7.7, $A_S$ is an inverse Stieltjes matrix which satisfies $\mathbf{1}^T A_S^{-1} \ge 0$. In particular, this implies that $b^{(S)}_{ji} \le 0$ and $\mathbf{1}^T A_S^{-1}(i) \ge 0$, and hence it follows from (46) that (iii) holds. Next, (iv) follows easily from (iii).
We will now show that (v) holds. To simplify notation, let $Z_S := \{T \subseteq [n]\setminus S : T \subseteq J_{S\cup T}\}$. It suffices to show that if $T_1, T_2 \in Z_S$ and $i \in T_1$, then (a) $T_1\setminus\{i\} \in Z_S$ and (b) $T_2 \cup \{i\} \in Z_S$. To this end, fix $S \subseteq [n]$, $T_1, T_2 \in Z_S$ and $i \in T_1$. Since $T_1 \in Z_S$, we have that $T_1 \subseteq [n]\setminus S$ and $T_1 \subseteq J_{S\cup T_1}$, or equivalently, that $\mathbf{1}^T A^{-1}_{S\cup T_1}(j) = 0$ for all $j \in T_1$. Since this in particular implies that $\mathbf{1}^T A^{-1}_{S\cup T_1}(i) = 0$, using (46) it follows that $\mathbf{1}^T A^{-1}_{(S\cup T_1)\setminus\{i\}}(j) = 0$ for all $j \in T_1\setminus\{i\}$. Since $T_1\setminus\{i\} \subseteq [n]\setminus S$, it follows that $T_1\setminus\{i\} \in Z_S$, and hence (a) holds. Next, since $i \in T_1 \subseteq [n]\setminus S$ and $T_2 \subseteq [n]\setminus S$, we clearly have that $T_2 \cup \{i\} \subseteq [n]\setminus S$, and hence to prove that (b) holds it remains exactly to show that $\mathbf{1}^T A^{-1}_{S\cup T_2\cup\{i\}}(j) = 0$ for all $j \in T_2 \cup \{i\}$. For this, fix $j \in T_2\cup\{i\}$ and note that by (a), $\{j\} \in Z_S$, and hence $\mathbf{1}^T A^{-1}_{S\cup\{j\}}(j) = 0$. However, since $\{j\} \subseteq T_2 \cup \{i\}$, by repeated application of (47) it follows that $\mathbf{1}^T A^{-1}_{S\cup T_2\cup\{i\}}(j) = 0$, implying in particular that $T_2 \cup \{i\} \in Z_S$. This concludes the proof of (b).
Step 4. In this step, we will show that (49) holds for any $S \subseteq [n]$ with $|S| \ge 2$ and $k \in S$, as $h \to \infty$. To this end, fix $S \subseteq [n]$ and let $J_S$ be as in Step 3. By Step 2, the relevant inequalities hold for any $k \in S\setminus J_S$; since they trivially hold for $k \in J_S$, they in fact hold for all $k \in S$. Now fix $k \in S$. By Step 3 (iv), we have that $\mathbf{1}^T A^{-1}_{S\setminus J_S} > 0$ and $\mathbf{1}^T A^{-1}_{S\setminus(J_S\cup\{k\})} > 0$, and hence by applying the first part of Lemma 7.10 and using (48), we obtain the asymptotics of the corresponding probabilities as $h \to \infty$. Applying the second part of Lemma 7.10 several times together with Step 3 (iii), and combining these observations, the desired conclusion holds.
Step 5. In this step, we show that (50) holds for each $S \subseteq [n]$ with $|S| \ge 2$, as $h \to \infty$. To this end, fix $S \subseteq [n]$. By an inclusion–exclusion argument, we can express $\nu_h(1_S)$ in terms of the probabilities associated to the sets $S \cup T$ for $T \subseteq [n]\setminus S$. For each $T \subseteq [n]\setminus S$, let $J_{S\cup T}$ be as in Step 3 and apply (49) to $S \cup T$.

Now note that by (45) and Step 3 (iii), we have that (44) and induction imply equality if and only if $T \subseteq J_{S\cup T}$. Since by Step 3 (iv) we have that $\mathbf{1}^T A^{-1}_{(S\cup T)\setminus J_{S\cup T}} > 0$, combining these observations and applying Lemma 7.10 yields the asymptotic behavior of each term.

By Step 3 (v), the set $\{T \subseteq [n]\setminus S : T \subseteq J_{S\cup T}\}$ is the power set of some set $S_0$. Using this, (50) follows.
Since Step 4 and Step 5 together give the conclusions of the lemma, this concludes the proof.
Remark 7.12. If we assumed Savage instead of weak Savage, the proof could be somewhat shortened.
We are now ready to give the proof of Theorem 1.7.
Proof of Theorem 1.7. The covariance matrix of a DGFF is a block matrix with each block satisfying the assumptions of Lemma 7.11. Hence, restricting to a box, the first condition of Lemma 7.6 holds for all $S$ within this box with $|S| \ge 2$ and all $k \in S$. The second condition in Lemma 7.6 trivially holds, and hence, applying this lemma, we conclude that for large $h$, the threshold Gaussian corresponding to this fixed box is a color process. Since the full process is independent over the different boxes, we easily obtain the desired result for the full process.
8 General results for small and large thresholds for n = 3 in the Gaussian case

8.1 h small

Theorem 8.1. Let $X$ be a three-dimensional random vector with a continuous and strictly positive probability density function. Assume further that $X \overset{D}{=} -X$ and that $X$ has equal marginal distributions. Then the limits $\lim_{h\to 0} q_\sigma(h)$ exist and can be expressed in terms of the derivatives $\nu_\rho'(0)$ and the quantity $\nu_0(0)$ (note that $\nu_0(0)$ is the one-dimensional marginal density at zero). As a consequence, if $X$ is a three-dimensional standard Gaussian vector with covariance matrix $A = (a_{ij})$ and $\theta_{ij} := \arccos a_{ij}$, then these limits can be computed explicitly in terms of the angles $\theta_{ij}$.

Proof. Since $X$ has equal marginals, it follows from Theorem 2.4 that $X^h$ has a unique signed color representation for each $h > 0$, and (10) gives an explicit expression for it.
Since $\nu_\rho$ is differentiable at zero, the corresponding limit as $h \to 0$ exists. Similarly, again using (10), one computes $\lim_{h\to 0} q_{12,3}(h)$. If we can show that
$$\nu_{00\cdot}'(0) = \nu_0(0), \qquad (51)$$
the first part of the theorem will follow using symmetry and the fact that $\sum_\sigma q_\sigma = 1$. To see that (51) holds, let $f$ be the probability density function of $(X_1, X_2)$, and note that $\nu_0(x)$ is the marginal density of both $X_1$ and $X_2$. Then for any $h_1, h_2 \in \mathbb{R}$ we have that
$$\frac{\partial}{\partial h_2} P(X_1 \le h_1, X_2 \le h_2) = P(X_1 \le h_1 \mid X_2 = h_2)\,\nu_0(h_2).$$
Differentiating with respect to $h_1$ in the same way and then setting $h_1 = h_2 = 0$, it follows that
$$\nu_{00\cdot}'(0) = P(X_1 \le 0 \mid X_2 = 0)\,\nu_0(0) + P(X_2 \le 0 \mid X_1 = 0)\,\nu_0(0).$$
By symmetry, the two conditional probabilities are each equal to $1/2$, and hence $\nu_{00\cdot}'(0) = \nu_0(0)$, as desired. The other equalities follow by analogous arguments.
For the second part of the theorem, note first that by an analogous argument as above, one obtains general expressions for the derivatives $\nu_\rho'(0)$. Using basic facts about Gaussian vectors, one has that $(X_2, X_3) \mid X_1 = 0$ is a Gaussian vector with correlation
$$\frac{a_{23} - a_{12}a_{13}}{\sqrt{(1 - a_{12}^2)(1 - a_{13}^2)}}.$$
Using that, since $A$ is positive definite, $a_{12} \le a_{13}a_{23} + \sqrt{(1 - a_{13}^2)(1 - a_{23}^2)}$, it follows that we indeed have that $\alpha + \beta \ge 0$. Moreover, with some work, one verifies the claimed identity, which in particular yields the stated formula. Combining this with (24) and the first part of the theorem, the desired conclusion follows.
We now apply Theorem 8.1 to a few examples.
Proof. Note first that by using Theorem 8.1, after a computation, we obtain the limits $\lim_{h\to 0} q_\sigma(h)$. It suffices to show that these limits are positive. Since $\arccos x \in (0, \pi)$ for all $x \in (-1, 1)$ and $\arccos x$ is strictly decreasing in $x$, the first of these is strictly positive; by rearranging, one easily sees this to be true whenever $a \in (0, 1)$. Next, since $\pi - \arccos x = \arccos(-x)$ for all $x \in (0, 1)$, it follows that the second limit is strictly positive, which again clearly holds for all $a \in (0, 1)$. To see that $X^h$ has a color representation for all sufficiently small $h > 0$, it thus only remains to show that $\lim_{h\to 0} q_{123}(h) > 0$. To this end, first note that this is equivalent to
$$3 \arccos a + \arccos \frac{a(a^2 - 6a - 3)}{(1 + a)^3} < 2\pi.$$
It is easy to verify that we get equality when $a = 0$, and hence it is enough to show that the left-hand side is strictly decreasing in $a$. If we differentiate the left-hand side once, we obtain, after a detailed computation, an expression which is clearly negative for all $a \in (0, 1)$. From this the desired conclusion follows.
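The inequality above, including the equality at $a = 0$, is easy to verify numerically (our check, not part of the paper):

```python
import math

def lhs(a):
    # Left-hand side of: 3*arccos(a) + arccos(a(a^2 - 6a - 3)/(1+a)^3) < 2*pi.
    return 3 * math.acos(a) + math.acos(a * (a * a - 6 * a - 3) / (1 + a) ** 3)

# Equality at a = 0 ...
assert abs(lhs(0.0) - 2 * math.pi) < 1e-12
# ... and a strictly decreasing left-hand side (hence strict inequality) on (0, 1).
prev = lhs(0.0)
for k in range(1, 100):
    a = k / 100
    cur = lhs(a)
    assert cur < prev < 2 * math.pi + 1e-12
    prev = cur
```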
Remark 8.4. With $X = (X_1, X_2, X_3)$ defined as above, $X$ is a Markov chain.
Proof of Corollary 8.3. Note first that by using Theorem 8.1, after a computation, we obtain the limits $\lim_{h\to 0} q_\sigma(h)$. It suffices to show that these limits are positive. By using the fact that $\pi - \arccos x = \arccos(-x)$ for all $x \in (-1, 1)$ and the fact that arccosine is a strictly decreasing function, one easily verifies that the first, second and fourth of these are strictly positive for all $a \in (0, 1)$. To see that the third limit is strictly positive for $a \in (0, 1)$, we differentiate this limit with respect to $a$; the resulting expression can be equal to zero if and only if
$$a\sqrt{1 + a^2} + \sqrt{1 - a^2} = 1 + a^2.$$

8.2 h large
Before proving Theorem 1.8, we start off by giving some interesting applications of it.
Corollary 8.5. For each case below, there is at least one Gaussian vector X with non-negative correlations which satisfies it.
(i) X h has a color representation for all sufficiently large h and for all sufficiently small h > 0.
(ii) X h has no color representation for any sufficiently large h nor for any sufficiently small h > 0.
(iii) X h has a color representation for all sufficiently large h but not for any sufficiently small h > 0.
(iv) X h has a color representation for all sufficiently small h but not for any sufficiently large h.
In particular, the property of X h being a color process for a fixed X is not monotone in h (in either direction) for h > 0.
Proof. (i) Of course one can take an i.i.d. process here. A more interesting example is as follows. Let $X$ be a three-dimensional standard Gaussian vector with $\operatorname{Cov}(X_1, X_2) = \operatorname{Cov}(X_1, X_3) = \operatorname{Cov}(X_2, X_3) = a \in (0, 1)$. By combining Corollary 8.2 and Theorem 1.8(i), it follows that $X^h$ has a color representation both for all sufficiently small $h > 0$ and for all sufficiently large $h$.
(ii) Let $X$ be a three-dimensional standard Gaussian vector with $\operatorname{Cov}(X_1, X_2) = 0.05$ and $\operatorname{Cov}(X_1, X_3) = \operatorname{Cov}(X_2, X_3) = 0.6825$. One can verify that this corresponds to a positive definite covariance matrix. Using Theorem 8.1, one verifies that $\lim_{h\to 0} q_{12,3}(h) \approx -0.05$, and hence $X^h$ does not have a color representation for any sufficiently small $h$. Using Theorem 1.8, it follows that $X^h$ does not have a color representation for large $h$ either.
(iii) Let $X$ be a three-dimensional standard Gaussian vector with $\operatorname{Cov}(X_1, X_2) = 0.1$ and $\operatorname{Cov}(X_1, X_3) = \operatorname{Cov}(X_2, X_3) = 0.5$. One can verify that this corresponds to a positive definite covariance matrix. Now by Theorem 8.1, $\lim_{h\to 0} q_{12,3}(h) \approx -0.016$, and hence $X^h$ does not have a color representation for any sufficiently small $h > 0$. Next, since the Savage condition (2) holds, $X^h$ has a color representation for all sufficiently large $h$.
(iv) This follows immediately from Theorem 5.9.
Example 8.6. It is illuminating to look at the subset of the set of three-dimensional standard Gaussians for which at least two of the covariances are equal. So, we let $X_{a,b} = (X_1, X_2, X_3)$ be a standard Gaussian vector with covariance matrix
$$A = \begin{pmatrix} 1 & a & a \\ a & 1 & b \\ a & b & 1 \end{pmatrix}$$
for some $a, b \in (0, 1)$. One can verify that $A$ is positive definite exactly when $2a^2 < 1 + b$. Applying Theorem 1.8, one can check that $X^h_{a,b}$ is a color process for all sufficiently large $h$ if and only if either $2a - 1 \le b$ or $(2a - 1)^2 < b$ (note that both of these inequalities imply that $2a^2 < 1 + b$). Cases (i) and (ii) correspond to the first inequality holding, and Case (iii) corresponds to the first inequality failing and the second inequality holding. For a fixed $h$, the set of parameters which yield a color process for threshold $h$ is a closed set. However, the set of parameters which yield a color process for all sufficiently large $h$ is not a closed set; for example, $a = 0.1$ and $b = \varepsilon$ belongs to this set for every $\varepsilon > 0$ but not for $\varepsilon = 0$.
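The parenthetical claim that either inequality implies positive definiteness is a one-line algebraic fact ($2a^2 - 1 < 2a - 1$ and $2a^2 - 1 \le (2a - 1)^2$ for $a \in (0,1)$), and can also be checked on a grid (our check):

```python
# Grid check: each of the two conditions (2a - 1 <= b or (2a - 1)^2 < b)
# implies positive definiteness of A, i.e., 2a^2 < 1 + b.
for i in range(1, 100):
    for j in range(1, 100):
        a, b = i / 100, j / 100
        if 2 * a - 1 <= b or (2 * a - 1) ** 2 < b:
            assert 2 * a * a < 1 + b
```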
In Figure 5, we first draw the regions corresponding to the various cases in Theorem 1.8, together with the region corresponding to having a positive definite covariance matrix. In the second picture, we superimpose the region corresponding to all choices of $a$ and $b$ for which $X^h_{a,b}$ has a color representation for all $h$ sufficiently close to zero. Interestingly, this figure suggests that if $X^h_{a,b}$ is a color process for $h$ close to zero, then $X^h_{a,b}$ is also a color process for $h$ sufficiently large. Moreover, the region corresponding to the set of $a$ and $b$ for which $X^h_{a,b}$ has a color representation for $h$ close to zero intersects both the regions corresponding to Cases (i) and (iii).
Lemma 8.8. Let $X$ be a fully supported 3-dimensional standard Gaussian vector with covariance matrix $A=(a_{ij})$, and suppose that $a_{ij}\in[0,1)$ for all $i<j$.

Proof. For $i<j$, let $A_{ij}$ be the covariance matrix of $(X_i,X_j)$. Then $\mathbf{1}^TA_{ij}^{-1}=\big((1+a_{ij})^{-1},(1+a_{ij})^{-1}\big)>0$, and hence Lemma 7.10 applies. In particular, the desired conclusion follows if we can show that the relevant maximum is suitably bounded; however, this is immediate since $a_{ij}\in[0,1)$ for all $i<j$.

Lemma 8.9. Let $X$ be a fully supported 3-dimensional standard Gaussian vector with covariance matrix $A=(a_{ij})$. If $\mathbf{1}^TA^{-1}>0$ and at most one of the covariances $a_{ij}$ is equal to zero, then the two inequalities in (54) hold.

Proof. We first show that the second of the two inequalities in (54) holds. Since $\mathbf{1}^TA^{-1}>0$ by assumption, the second inequality in (54) follows. Next, to show that the first of the two inequalities in (54) holds, we will show that (56) holds for all $i<j$; if this holds, then (53) and (55) immediately imply the desired conclusion. To this end, using (1), one first verifies (57); similarly, (56) can be shown to be equivalent to (58). If $a_{ij}=0$ for exactly one of the covariances, then one easily verifies that (58) holds when (57) holds. Now instead assume that $a_{ij}>0$ for all $i<j$. If we think of $a_{12}>0$ as being fixed, then (58) holds for all $a_{13}$ and $a_{23}$ in the interior of an ellipse $E$. One verifies that the boundary of $E$ passes through the origin and the points $(0,1-a_{12}^2)$, $(1-a_{12}^2,0)$, $(a_{12},1)$ and $(1,a_{12})$. Since we are assuming the Savage condition (2), any pair $(a_{13},a_{23})$ under consideration necessarily lies in the region $R$ given by $1+2\min(a_{12},x,y)>a_{12}+x+y$, $x,y>0$.
Hence we need only show that $R\subseteq E$ (see Figure 6). To see this containment, note that $R$ is a polygon with vertices given by $(0,0)$, $(0,1-a_{12})$, $(1-a_{12},0)$, $(1,a_{12})$ and $(a_{12},1)$. We already know that the first, fourth and fifth of these vertices lie on the boundary of $E$, while one easily checks that the other two lie inside $E$. Since $E$ is convex and $R$ is a polygon, it follows that $R\subseteq E$.

We are now ready to give the proof of Theorem 1.8. We remark that in the proof, Case 1 and Case 2 can alternatively be proven, using the lemmas in this section, by appealing to Lemma 7.6.
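The two-by-two inverse computation at the start of the proof of Lemma 8.8 can be checked symbolically; a quick sketch with sympy (here $a$ stands for the covariance $a_{ij}$):

```python
import sympy as sp

a = sp.symbols('a', real=True)

# Covariance matrix of a standard Gaussian pair (X_i, X_j) with covariance a.
A_ij = sp.Matrix([[1, a], [a, 1]])

# 1^T A_ij^{-1} should equal ((1 + a)^{-1}, (1 + a)^{-1}).
row = sp.ones(1, 2) * A_ij.inv()
ok = all(sp.simplify(entry - 1 / (1 + a)) == 0 for entry in row)
print(ok)  # → True
```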
Proof of Theorem 1.8. Let $(q_\sigma)_{\sigma\in\mathcal B_3}$ be the unique solution to (5) guaranteed to exist by Theorem 2.4. By (10), using inclusion-exclusion, we obtain two identities valid for any $h>0$. These imply that there is a color representation for large $h$ if and only if (59), (60) and (61) all hold for all large $h$. We will check when (59), (60) and (61) hold for large $h$ by comparing the decay rates of the various tails. Before we do this, note that by (1), one has that $(\mathbf{1}^TA^{-1})_{(1)}\le 0$ exactly when $1+a_{23}\le a_{12}+a_{13}$; if this holds, then clearly $a_{23}=\min_{i<j}(a_{ij})$, and hence the corresponding statement for $\nu_{\cdot 11}$ follows. Without loss of generality, we assume that $0\le a_{23}\le a_{13}\le a_{12}$ and that $a_{12}>0$, since the case $a_{12}=a_{13}=a_{23}=0$ is trivial. Note that by (1), this assumption implies that the largest two terms in the corresponding expansion are positive. We now claim that (59) holds for all sufficiently large $h$, without making any additional assumptions on $A$. To see this, note that Lemma 8.7 provides the required tail estimate, and hence (59) holds for all large $h$. We now divide into three cases.
Case 4. Assume now that $a_{23}=0$, i.e. that $X_2$ and $X_3$ are independent. Note that if $a_{13}=a_{23}=0$, then there is a color representation by Proposition 7.1, and hence we can assume that $a_{13}>0$. Since $X_2$ and $X_3$ are independent, if $X^h$ has a color representation $(q_\sigma(h))$ for some $h$, it must satisfy $q_{1,23}(h)=q_{123}(h)=0$. Using the general formula for these expressions, and using that $\nu_{\cdot 11}(h)=\nu_1(h)^2$ by assumption, we see that these two equations are both equivalent to (63). We will show that (63) does not hold for any large $h$. To this end, note first that if $\mathbf{1}^TA^{-1}>0$ and $a_{12},a_{13}>0$, then Lemma 8.9 implies that (63) cannot hold, so there can be no color representation for any large $h$ in this case. Next, if $(\mathbf{1}^TA^{-1})_{(1)}=0$, then using Lemma 7.10 together with Lemma 8.7 and the assumption that $a_{12},a_{13}<1$, it again follows that (63) cannot hold, so there can be no color representation for any large $h$ in this case either. Finally, if $(\mathbf{1}^TA^{-1})_{(1)}<0$, then we can use Case 3: observing that if $a_{23}=0$, then $\det A>0$ implies that $a_{12}^2+a_{13}^2<1$, we conclude in particular that there can be no color representation.
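The sign criterion for the first coordinate of $\mathbf{1}^TA^{-1}$ used in this proof can be verified symbolically; a sketch with sympy (since $\det A>0$ for a fully supported vector, only the adjugate part matters):

```python
import sympy as sp

a12, a13, a23 = sp.symbols('a12 a13 a23', real=True)

# Covariance matrix of a 3-dimensional standard Gaussian vector.
A = sp.Matrix([[1, a12, a13], [a12, 1, a23], [a13, a23, 1]])

# First coordinate of 1^T A^{-1}, cleared of the positive determinant:
# (1^T A^{-1})_1 = (1^T adj(A))_1 / det(A).
first = (sp.ones(1, 3) * A.adjugate())[0]

# It factors as (1 - a23)(1 + a23 - a12 - a13), so for a23 < 1 it is
# nonpositive exactly when 1 + a23 <= a12 + a13.
diff = sp.expand(first - (1 - a23) * (1 + a23 - a12 - a13))
print(diff == 0)  # → True
```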
When $h=0$ and $a=2^{-1/\alpha}$, a direct computation shows that we get equality in (64). Now note that the left hand side of (64) is strictly decreasing in $a$. This implies that when $h=0$, we get nonnegative correlations if and only if $a\le 2^{-1/\alpha}$, establishing the equivalence of (i) and (ii). We now show that (ii) implies (iii). To see this, note first that since the left hand side of (64) is strictly decreasing in $a$, it suffices to show that (64) holds for all $h\ge 0$ when $a=2^{-1/\alpha}$. To this end, note first that in this case, we have the following.

Now observe that
Putting these observations together, we obtain that we get nonnegative correlations if and only if the resulting inequality holds; rearranging, we see that this is equivalent to the condition stated in (iii). This establishes (iii).

$h$ large and a phase transition in the stability exponent
In this subsection we will look at what happens when $X$ is a symmetric multivariate stable random vector with index $\alpha<2$ and marginals $S_\alpha(1,0,0)$, and the threshold $h>0$ is large. The fact that stable distributions have fat tails for $\alpha<2$ will result in behavior that is radically different from the Gaussian case. We will obtain various results, perhaps the most interesting being a phase transition in $\alpha$ at $\alpha=1/2$; this is Theorem 1.12.
Our main tool when working in this setting will be the following theorem.
Theorem 9.1. Let $\alpha\in(0,2)$, let $X$ be a symmetric $n$-dimensional $\alpha$-stable random vector with marginals $S_\alpha(1,0,0)$ and spectral measure $\Lambda$, and let $\rho\in\{0,1\}^n$. Then for any $k\ge1$, the limit of $\nu_\rho(h)/\nu_1(h)^k$ as $h\to\infty$ exists and is given by (65).

Remark 9.2. The case $k=1$ and $\rho=1_n$ of this theorem is essentially a special case of Theorem 4.4.1 in [10] (see their equation (4.4.2)).

Remark 9.3. If $\Lambda$ is in addition finitely supported, a change of variables shows that the expression in (65) can be rewritten as a finite sum over the atoms of $\Lambda$.

Remark 9.4. Theorem 9.1 can easily be extended quite a lot. First, the same statement but with different thresholds $h_i=b_ih$, $b_i>0$, for each $i$ follows analogously if we replace $\mathbf 1$ with $(b_i)_{i=1,2,\ldots,n}$ but keep $\nu_1(h)$. Furthermore, we can drop the assumption that the $b_i$ are positive by applying the theorem to the random vector $(\operatorname{sgn}(b_i)X_i)$. Finally, it in fact already follows from the proof that we do not need the assumption on the marginals if we replace the term $\nu_1(h)$ in the limit with $P(Y\ge h)$ for $Y\sim S_\alpha(1,0,0)$.
Proof sketch of Theorem 1.11. We show that the assumptions of Lemma 7.6 hold. First, Theorem 9.1 implies that (36) holds. Next, a computation using Theorem 9.1 shows that the last condition in Lemma 7.6 holds whenever (3) holds.
We will now apply Theorem 9.1 to a simple example.
Remark 9.6. The random vector $X$ defined in this corollary is a stable Markov chain. We have already seen a Gaussian analogue of this result.
Proof of Corollary 9.5. Clearly $(X_1,X_2,X_3)$ is a three-dimensional symmetric $\alpha$-stable random vector whose marginals are $S_\alpha(1,0,0)$. If we let $A$ be the matrix given above, it follows that for each $x\in\operatorname{supp}(\Lambda)$, exactly one of $\pm(2\Lambda(x))^{1/\alpha}x$ is a column of $A$. Moreover, each column of $A$ corresponds to a pair of points in the support of $\Lambda$ in this way. To simplify notation, for $x\in\operatorname{supp}(\Lambda)$ we write $\bar x:=(2\Lambda(x))^{1/\alpha}x$. Using Theorem 9.1 and Remark 9.3 with $n=3$ and $k=1$, one easily verifies (66) and the analogous asymptotics for the other patterns. Combining this with (10), we obtain that $X^h$ has a color representation for all sufficiently large $h$ if $q_{13,2}(h)$ is nonnegative for large $h$. By (10), $q_{13,2}(h)$ is given by a ratio whose denominator is strictly positive for all $h>0$, and we know from (66) that $\nu_{010}(h)=(1-a^\alpha)^2\nu_1(h)+o(\nu_1(h))$. Hence it is sufficient to show that the numerator is nonnegative for all large $h$. To see this, we again apply Theorem 9.1 and Remark 9.3, which yields the desired conclusion.
We can now prove Theorem 1.12, which is a stable version of the example in the proof of (i) of Corollary 8.5.
To check when we are in these two cases, note first that by making the change of variables $w=x^{-1}$, we see that the integral on the left hand side takes a more convenient form. This integral is easily verified to be infinite when $\alpha\ge1$, strictly positive for all $\alpha\in(0,1)$, and equal to 1 if $\alpha=1/2$. Furthermore, if $\alpha\in(0,1)$, recognizing it as a Beta integral, we see that it is equal to an explicit ratio of Gamma functions, where the last equality follows by using the Legendre duplication formula (see [1], 6.1.18, p. 256). We claim that this expression is strictly increasing in $\alpha$.
If we can show this, the conclusion of the theorem will follow, since the expression takes the value 1 at $\alpha=1/2$. To see this, recall first that $\Gamma'(\alpha)=\Gamma(\alpha)\psi(\alpha)$, where $\psi$ is the so-called digamma function. It follows that the derivative of the expression above is a sum of two terms. Since the first term is equal to our original integral, it is clearly strictly larger than zero. Moreover, an integral representation of $\psi$ given in [1] (see 6.3.21, p. 259) implies that $\psi(x)$ is strictly increasing in $x$ for $x>0$. It follows that the second term is strictly larger than $2\log 2+\psi(1/2)-\psi(1)$.
Using the values of the digamma function at $1/2$ and $1$ (see [1], 6.3.2 and 6.3.3, p. 258), this last expression equals 0. This finishes the proof.
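The final identity, $2\log 2+\psi(1/2)-\psi(1)=0$, follows from the classical values $\psi(1)=-\gamma$ and $\psi(1/2)=-\gamma-2\log 2$; a quick numerical check using only the standard library (approximating $\psi$ by a central difference of $\log\Gamma$):

```python
import math

def digamma(x, h=1e-5):
    # psi(x) is the derivative of log Gamma; a central difference of
    # math.lgamma is accurate to roughly 1e-9 here.
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

# psi(1) = -gamma and psi(1/2) = -gamma - 2 log 2, so this vanishes.
value = 2 * math.log(2) + digamma(0.5) - digamma(1.0)
print(abs(value) < 1e-6)  # → True
```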
We next give the proof of Theorem 1.13.
Hence it follows that (72) is bounded from above by an expression involving only the constants $C_{\beta/(\alpha(k-1))}$ and $C_{\beta/\alpha}$. To see this, we first change the order of summation as follows. First, we sum over all possible choices of $i_1$. Then we sum over the number $G$ of terms in the product, which ranges between $0$ and $k-2$. Finally, we also sum over the possible choices of the differences $i_j-i_{j-1}$ in the product, which range from $m$ to infinity. To sum over all possible sequences $m\le i_1\le\ldots\le i_{k-1}$, we find an upper bound on the number of ways to choose the differences $i_j-i_{j-1}$ which are smaller than $m$, and also on the number of ways to choose which of the differences are larger than or equal to $m$. The former of these quantities is clearly bounded from above by $m^{k-2}$, and the latter is equal to $\binom{k-2}{G}<2^{k-2}$.
We now state the following lemma, which will be used in the proof of Theorem 9.1. For a proof of this lemma we refer the reader to [10].

Lemma 9.9 (Theorem 3.10.1 in [10]). Let $\Lambda$ be a symmetric spectral measure on $S^{n-1}$. Furthermore, let $C_\alpha$ be defined by $P(Y\ge h)\sim C_\alpha h^{-\alpha}/2$ for $Y\sim S_\alpha(1,0,0)$, let $(\Gamma_i)_{i\ge1}$ be the arrival times of a rate one Poisson process, and let $(W_i)_{i\ge1}$ be i.i.d., each with distribution $\bar\Lambda:=\Lambda/\Lambda(S^{n-1})$ (the normalized spectral measure), independent of the Poisson process. Then the corresponding series converges almost surely to a random vector with distribution $S_\alpha(\Lambda)$.
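The series representation in Lemma 9.9 can be illustrated by simulation; a minimal sketch for $n=1$ with $\Lambda$ uniform on $\{-1,1\}$, so that the limit is a symmetric $\alpha$-stable variable. The overall normalizing constant in front of the series is omitted here (it only affects the scale), so the sketch illustrates the structure of the representation rather than exact constants:

```python
import random

def lepage_sample(alpha, n_terms=1000, rng=random):
    """Truncated LePage-type series: sum of Gamma_i^(-1/alpha) * W_i, where
    the Gamma_i are arrival times of a rate-one Poisson process and the W_i
    are i.i.d. uniform on {-1, +1} (the normalized symmetric spectral
    measure on S^0). The normalizing constant is omitted."""
    total, gamma_i = 0.0, 0.0
    for _ in range(n_terms):
        gamma_i += rng.expovariate(1.0)  # next arrival of the Poisson process
        w_i = rng.choice((-1.0, 1.0))    # spectral direction
        total += gamma_i ** (-1.0 / alpha) * w_i
    return total

random.seed(0)
samples = sorted(lepage_sample(1.5) for _ in range(2000))
median = samples[len(samples) // 2]
# The limit law is symmetric, so the empirical median should be near zero.
print(abs(median) < 0.5)
```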
We now give a proof of Theorem 9.1 using Lemmas 9.7 and 9.9.
10 Ubiquitousness of zero-threshold stables and another phase transition in the stability exponent

In this section we prove Theorems 1.15 and 1.16.
Proof of Theorem 1.15. We create a probability measure $\Lambda$ on $S^{n-1}$ with at most $2^n$ point masses as follows. By identifying $0$ and $-1$, we have a natural bijection between $\{0,1\}^n$ and the subset $J=\{-1/\sqrt n,1/\sqrt n\}^n$ of $S^{n-1}$. Using this identification, place a point mass at $x\in J$ of weight $\mu(x)$. Let $X_\alpha$ denote the $n$-dimensional stable vector with index $\alpha$ and spectral measure $\Lambda$. It suffices to show that $X^0_\alpha$ converges in distribution to $\mu$ as $\alpha\to0$. For $\alpha<1$, we use a representation of $X_\alpha$ from p. 69 in [10] in terms of nonnegative variables $S^{(\alpha)}_y$ and associated events $E_{y,\alpha}$. Since the $S^{(\alpha)}_y$'s are nonnegative, the events $\{E_{y,\alpha}\}_{y\in J}$ are disjoint for any $\alpha>0$. If we define
$$Y_{y,\alpha}\sim S_\alpha\Big(1,\ \frac{2\Lambda(y)}{\Lambda(y)+\sum_{y'\in J,\,y'\ne y}\Lambda(y')}-1,\ 0\Big),$$
then it follows from Property 1.2.1 on p. 10 of [10] that $P(E_{y,\alpha})=P(Y_{y,\alpha}>0)$.
(Here one needs to note that the representation for $\beta$ in Equation 2.2.30 is not the same as ours, but the ratio of the two approaches 1 as $\alpha\to0$.) It follows that $\lim_{\alpha\to0}P(Y_{y,\alpha}>0)=\Lambda(y)$, and hence in particular that $\lim_{\alpha\to0}\sum_{y\in J}P(E_{y,\alpha})=\lim_{\alpha\to0}\sum_{y\in J}P(Y_{y,\alpha}>0)=1$.
It is clear that $E_{y,\alpha}\subseteq\{X^0_\alpha=y\}$ for each $y\in J$, and the result follows. The last two statements are clear.
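The construction of $\Lambda$ in the proof above can be sketched concretely: each $y\in\{0,1\}^n$ is sent to a corner of the cube $\{-1/\sqrt n,1/\sqrt n\}^n$ (with $0\leftrightarrow-1/\sqrt n$ and $1\leftrightarrow 1/\sqrt n$) carrying mass $\mu(y)$. A minimal sketch, where the measure $\mu$ is an arbitrary illustrative choice, not taken from the paper:

```python
import math

def spectral_measure(mu, n):
    """Given a probability measure mu on {0,1}^n (a dict mapping tuples to
    weights), return the point masses of Lambda on S^{n-1}: each y is sent
    to the unit vector with coordinates +-1/sqrt(n)."""
    scale = 1.0 / math.sqrt(n)
    return {
        tuple(scale if bit == 1 else -scale for bit in y): w
        for y, w in mu.items()
    }

# Illustrative measure on {0,1}^3: uniform on two configurations.
mu = {(0, 0, 0): 0.5, (1, 0, 1): 0.5}
lam = spectral_measure(mu, 3)

# Every support point lies on the unit sphere, and the masses sum to one.
for x in lam:
    assert abs(sum(c * c for c in x) - 1.0) < 1e-12
assert abs(sum(lam.values()) - 1.0) < 1e-12
```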
We finally give the proof of Theorem 1.16.