Stochastic domination for the Ising and fuzzy Potts models

We discuss various aspects of stochastic domination for the Ising model and the fuzzy Potts model. We begin by considering the Ising model on the homogeneous tree of degree $d$, $\mathbb{T}^d$. For given interaction parameters $J_1, J_2>0$ and external field $h_1\in\mathbb{R}$, we compute the smallest external field $\tilde{h}$ such that the plus measure with parameters $J_2$ and $h$ dominates the plus measure with parameters $J_1$ and $h_1$ for all $h\geq\tilde{h}$. Moreover, we discuss continuity of $\tilde{h}$ with respect to the three parameters $J_1$, $J_2$, $h_1$, and also how the plus measures are stochastically ordered in the interaction parameter for a fixed external field. Next, we consider the fuzzy Potts model and prove that on $\mathbb{Z}^d$ the fuzzy Potts measures dominate the same set of product measures, while on $\mathbb{T}^d$, for certain parameter values, the free and minus fuzzy Potts measures dominate different product measures. For the Ising model, Liggett and Steif proved that on $\mathbb{Z}^d$ the plus measures dominate the same set of product measures, while on $\mathbb{T}^2$ that statement fails completely except when there is a unique phase.


Introduction and main results
The concept of stochastic domination has played an important role in probability theory over the last couple of decades, for example in interacting particle systems and statistical mechanics. In [13], various results were proved concerning stochastic domination for the Ising model with no external field on Z^d and on the homogeneous binary tree T^2 (i.e. the unique infinite tree where each site has 3 neighbors). As an example, the following distinction between Z^d and T^2 was shown: on Z^d, the plus and minus states dominate the same set of product measures, while on T^2 that statement fails completely except in the case when we have a unique phase. In this paper we study stochastic domination for the Ising model in the case of nonzero external field and also for the so-called fuzzy Potts model.
Let V be a finite or countable set and equip the space {−1, 1}^V with the following natural partial order: for η, η′ ∈ {−1, 1}^V, we write η ≤ η′ if η(x) ≤ η′(x) for all x ∈ V. Moreover, whenever we need a topology on {−1, 1}^V we will use the product topology. We say that a function f : {−1, 1}^V → R is increasing if f(η) ≤ f(η′) whenever η ≤ η′. We will use the following usual definition of stochastic domination: for probability measures µ_1, µ_2 on {−1, 1}^V, we say that µ_2 dominates µ_1, written µ_1 ≤ µ_2, if ∫ f dµ_1 ≤ ∫ f dµ_2 for every continuous increasing function f. It is well known that a necessary and sufficient condition for two probability measures µ_1, µ_2 to satisfy µ_1 ≤ µ_2 is that there exists a coupling measure ν on {−1, 1}^V × {−1, 1}^V with first and second marginals equal to µ_1 and µ_2 respectively and ν((η, ξ) : η ≤ ξ) = 1. (For a proof, see for example [12, pp. 72–74].) Given any set S ⊆ R and a family of probability measures {µ_s}_{s∈S} indexed by S, we will say that the map S ∋ s ↦ µ_s is increasing if µ_{s_1} ≤ µ_{s_2} whenever s_1 < s_2.
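The coupling characterization of domination can be made concrete on a single coordinate. Below is a minimal sketch (our own illustration, not from the paper) of the standard monotone coupling showing that the product measure with density p_1 is dominated by the one with density p_2 when p_1 ≤ p_2: a single uniform variable drives both spins, so the coupled pair is ordered by construction.

```python
import random

def coupled_pair(p1, p2, rng):
    """Monotone coupling of two {-1,+1}-valued spins with plus-densities
    p1 <= p2: one uniform U drives both, so eta <= xi always holds."""
    u = rng.random()
    eta = 1 if u < p1 else -1
    xi = 1 if u < p2 else -1
    return eta, xi

rng = random.Random(0)
p1, p2 = 0.3, 0.7
samples = [coupled_pair(p1, p2, rng) for _ in range(10000)]

# The coupling is monotone: eta <= xi in every sample.
assert all(eta <= xi for eta, xi in samples)

# Marginal means approximate 2*p - 1 for each coordinate.
mean_eta = sum(e for e, _ in samples) / len(samples)
mean_xi = sum(x for _, x in samples) / len(samples)
```

Repeating this coordinate-by-coordinate with independent uniforms gives a coupling witnessing γ_{p_1} ≤ γ_{p_2} on any countable V.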
1.1. The Ising model. The ferromagnetic Ising model is a well-studied object in both physics and probability theory. For a given infinite, locally finite (i.e. each vertex has a finite number of neighbors), connected graph G = (V, E), it is defined from the nearest-neighbor potential: for a finite U ⊆ V and a boundary condition η ∈ {−1, 1}^{∂U}, the finite volume Gibbs measure µ^{U,η}_{J,h} assigns to each σ ∈ {−1, 1}^U the probability

µ^{U,η}_{J,h}(σ) = (1/Z^{U,η}_{J,h}) exp( J Σ_{⟨x,y⟩∈E: x,y∈U} σ(x)σ(y) + J Σ_{⟨x,y⟩∈E: x∈U, y∈∂U} σ(x)η(y) + h Σ_{x∈U} σ(x) ),

where Z^{U,η}_{J,h} is a normalizing constant and ∂U = { x ∈ V \ U : ∃y ∈ U such that ⟨x, y⟩ ∈ E }.
For given J > 0 and h ∈ R, we will denote the set of Gibbs measures with parameters J and h by G(J, h), and we say that a phase transition occurs if |G(J, h)| > 1, i.e. if there exists more than one Gibbs state. (From the general theory described in [2] or [12], G(J, h) is always nonempty.) At this stage one can ask: for fixed h ∈ R, is the existence of multiple Gibbs states increasing in J? When h = 0, the so-called random-cluster representation of the Ising model yields a positive answer to this question (see [5] for the case G = Z^d and [7] for more general G). However, when h ≠ 0 there are graphs where the above monotonicity property no longer holds; see [15] for an example of a relatively simple such graph.
Furthermore, still for fixed J > 0 and h ∈ R, standard monotonicity arguments can be used to show that there exist two particular Gibbs states µ^{J,+}_h and µ^{J,−}_h, called the plus and the minus state, which are extreme with respect to the stochastic ordering in the sense that µ^{J,−}_h ≤ µ ≤ µ^{J,+}_h for any other µ ∈ G(J, h).
To simplify the notation, we will write µ^{J,+} for µ^{J,+}_0 and µ^{J,−} for µ^{J,−}_0. (Of course, most of the objects defined so far also depend heavily on the graph G, but we suppress that in the notation.) In [13] the authors studied, among other things, stochastic domination between the plus measures {µ^{J,+}}_{J>0} in the case when G = T^2. For example, they showed that the map (0, ∞) ∋ J ↦ µ^{J,+} is increasing when J > J_c, and proved the existence of, and computed, the smallest J > J_c such that µ^{J,+} dominates µ^{J′,+} for all 0 < J′ ≤ J_c. (On Z^d, the fact that µ^{J_1,+} and µ^{J_2,+} are not stochastically ordered when J_1 ≠ J_2 implies that such a J does not even exist in that case.) Our first result deals with the following question: given J_1, J_2 > 0 and h_1 ∈ R, can we find the smallest external field h̃ = h̃(J_1, J_2, h_1) with the property that µ^{J_2,+}_h dominates µ^{J_1,+}_{h_1} for all h ≥ h̃? To clarify the question a bit more, note that an easy application of Holley's theorem (see [3]) tells us that for fixed J > 0, the map R ∋ h ↦ µ^{J,+}_h is increasing. Hence, for given J_1, J_2 and h_1 as above, the set {h ∈ R : µ^{J_2,+}_h ≥ µ^{J_1,+}_{h_1}} is an infinite interval, and we want to find its left endpoint (possibly −∞ or +∞ at this stage). For a general graph not much can be said, but when G is of bounded degree we have the easy bounds on h̃ stated in Proposition 1.1.

For the Ising model, we will now consider the case when G = T^d, the homogeneous d-ary tree, defined as the unique infinite tree where each site has exactly d + 1 ≥ 3 neighbors. The parameter d is fixed in all that we will do, and so we suppress it in the notation. For this particular graph it is well known that for given h ∈ R, the existence of multiple Gibbs states is increasing in J, and so as a consequence there exists a critical value J_c(h) ∈ [0, ∞] such that for J < J_c(h) we have a unique Gibbs state, whereas for J > J_c(h) there is more than one Gibbs state. In fact, much more can be shown in this case.
As an example, it is possible to derive an explicit expression for the phase transition region; in particular one sees that J_c(h) ∈ (0, ∞) for all h ∈ R. See [2] for more details. (Here and in the sequel, := will mean definition.) To state our results for the Ising model on T^d, we need to recall some more facts, all of which can be found in [2, pp. 247–255]. To begin, we just state what we need very briefly; later on we will give some more details. Given J > 0 and h ∈ R, there is a one-to-one correspondence t ↦ µ_t between the real solutions of a certain equation (see (2.3) and the function φ_J in (2.2) below) and the completely homogeneous Markov chains in G(J, h) (to be defined in Section 2). Let t_±(J, h) denote the real numbers which correspond to the plus and minus measures respectively. (It is easy to see that the plus and minus states are completely homogeneous Markov chains; see Section 2.) We will write t_±(J) instead of t_±(J, 0). Furthermore, let h*(J) denote the critical external field on T^d, and denote by t*(J) the t ≥ 0 where the function t ↦ d φ_J(t) − t attains its unique maximum. In [2], explicit expressions for both h* and t* are derived; in particular one sees that both h* and t* are continuous functions of J, and by computing derivatives one can show that they are strictly increasing for J > J_c.
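For readers who want to experiment with the quantities t_±(J, h), here is a small numerical sketch. It assumes the explicit form φ_J(t) = tanh⁻¹(tanh(J) tanh(t)) recalled in Section 2 and the fixed point equation t = h + d φ_J(t); the iteration scheme and the starting point are our own choices, not taken from [2].

```python
import math

def phi(J, t):
    # phi_J(t) = artanh(tanh(J) * tanh(t)); odd, increasing, bounded by J
    return math.atanh(math.tanh(J) * math.tanh(t))

def t_plus(J, h, d, n_iter=200):
    """Largest solution of t = h + d*phi_J(t), found by iterating the
    increasing map t -> h + d*phi_J(t) from a point above all solutions
    (phi_J <= J gives the a priori upper bound |h| + d*J)."""
    t = abs(h) + d * J + 1.0
    for _ in range(n_iter):
        t = h + d * phi(J, t)
    return t

d = 2                       # binary tree T^2: each site has d + 1 = 3 neighbors
J_c = math.atanh(1.0 / d)   # tanh(J_c) = 1/d on T^d (see [2])
t0 = t_plus(J_c - 0.1, 0.0, d)   # subcritical, h = 0: unique solution t = 0
t1 = t_plus(J_c + 0.3, 0.0, d)   # supercritical, h = 0: t_+ > 0
```

At h = 0 the iteration finds t0 ≈ 0 below J_c and a strictly positive t1 above it, matching the phase transition picture described above.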
Then the following holds:

Remarks.
(Of course, ψ(J_2, t) and θ(J_2, t) are just (1.2) and (1.3) with t in place of τ_±.) It is easy to check that for fixed J_2 > 0, the maps t ↦ ψ(J_2, t) and t ↦ θ(J_2, t) are continuous. A picture of these functions when J_2 = 2, d = 4 can be seen in Figure 1.2. (iii) It is not hard to see by direct computation that f_+ satisfies the bounds in Proposition 1.1; we will indicate how this can be done after the proof of Theorem 1.2. (iv) We will see in the proof that in the first case the left endpoint of the interval belongs to it, while in the second case it does not.
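As a numerical illustration of the map t ↦ ψ(J_2, t) pictured in Figure 1.2, the sketch below assumes ψ(J_2, t) = t − d φ_{J_2}(t), i.e. the external field for which t solves the fixed point equation t = h + d φ_{J_2}(t), with φ_J(t) = tanh⁻¹(tanh(J) tanh(t)) as in (2.2); the grid search for t*(J_2) is our crude stand-in for the explicit formula in [2].

```python
import math

def phi(J, t):
    # phi_J(t) = artanh(tanh(J) * tanh(t)), as in (2.2)
    return math.atanh(math.tanh(J) * math.tanh(t))

def psi(J2, t, d):
    # the field h for which t solves t = h + d*phi_{J2}(t)
    return t - d * phi(J2, t)

d, J2 = 4, 2.0   # the parameter values of Figure 1.2
ts = [i / 1000.0 for i in range(0, 5001)]            # grid on [0, 5]
vals = [psi(J2, t, d) for t in ts]

# On t >= 0, psi first decreases and then increases; its argmin is t*(J2),
# the point where d*phi_{J2}(t) - t attains its maximum.
t_star = ts[min(range(len(ts)), key=lambda i: vals[i])]
```

For these parameters the grid argmin lands near t* ≈ 2.55, and the decrease-then-increase shape of ψ is visible directly in `vals`.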
Our next proposition deals with continuity properties of f_± and g_± with respect to the parameters J_1, J_2 and h_1. We will only discuss the function f_+; the other ones can be treated in a similar fashion.
is continuous except possibly at −h*(J_1), depending on J_1 and J_2 as described in the statement. We conclude this section with a result about how the measures {µ^{J,+}_h}_{J>0} are ordered with respect to J for fixed h ∈ R.

1.2. The fuzzy Potts model. Next, we consider the so-called fuzzy Potts model. To define the model, we first need to define the perhaps more familiar Potts model. Let G = (V, E) be an infinite locally finite graph and suppose that q ≥ 3 is an integer. Let U be a finite subset of V and consider the finite graph H with vertex set U and edge set consisting of those edges ⟨x, y⟩ ∈ E with x, y ∈ U. In this way, we say that the graph H is induced by U. The finite volume Gibbs measure for the q-state Potts model at inverse temperature J ≥ 0 with free boundary condition is defined to be the probability measure π^H_{q,J} on {1, 2, . . . , q}^U which to each element σ assigns probability

π^H_{q,J}(σ) = (1/Z^H_{q,J}) exp( 2J Σ_{⟨x,y⟩∈E: x,y∈U} I_{σ(x)=σ(y)} ),

where Z^H_{q,J} is a normalizing constant. Now, suppose r ∈ {1, . . . , q − 1}, pick a π^H_{q,J}-distributed object X and map each Potts spin to a fuzzy spin:

Y(x) = −1 if X(x) ∈ {1, . . . , r},  Y(x) = +1 if X(x) ∈ {r + 1, . . . , q}.   (1.4)

We write ν^H_{q,J,r} for the resulting probability measure on {−1, 1}^U and call it the finite volume fuzzy Potts measure on H with free boundary condition and parameters q, J and r.
We also need to consider the case when we have a boundary condition. For finite U ⊆ V, consider the graph H induced by the vertex set U ∪ ∂U and let η ∈ {1, . . . , q}^{V\U}. The finite volume Gibbs measure for the q-state Potts model at inverse temperature J ≥ 0 with boundary condition η is defined to be the probability measure π^{H,η}_{q,J} on {1, . . . , q}^U which to each element σ assigns probability

π^{H,η}_{q,J}(σ) = (1/Z^{H,η}_{q,J}) exp( 2J [ Σ_{⟨x,y⟩∈E: x,y∈U} I_{σ(x)=σ(y)} + Σ_{⟨x,y⟩∈E: x∈U, y∈∂U} I_{σ(x)=η(y)} ] ),

where Z^{H,η}_{q,J} is a normalizing constant. In the case when η ≡ i for some i ∈ {1, . . . , q}, we replace η by i in the notation.
Furthermore, we introduce the notion of an infinite volume Gibbs measure for the Potts model. A probability measure µ on {1, . . . , q}^V is said to be an infinite volume Gibbs measure for the q-state Potts model on G at inverse temperature J ≥ 0 if it admits conditional probabilities satisfying, for all finite U ⊆ V and µ-a.e. η,

µ(σ on U | η on V \ U) = π^{H,η}_{q,J}(σ),

where H is the graph induced by U ∪ ∂U. Let {V_n}_{n≥1} be a sequence of finite subsets of V such that V_n ⊆ V_{n+1} for all n and V = ∪_{n≥1} V_n, and for each n denote by G_n the graph induced by V_n ∪ ∂V_n. Furthermore, for each i ∈ {1, . . . , q}, extend π^{G_n,i}_{q,J} (and use the same notation for the extension) to a probability measure on {1, . . . , q}^V by assigning, with probability one, the spin value i outside V_n. It is well known (and independent of the sequence {V_n}) that for each spin i ∈ {1, . . . , q} there exists an infinite volume Gibbs measure π^{G,i}_{q,J} which is the weak limit as n → ∞ of the corresponding measures π^{G_n,i}_{q,J}. Moreover, there exists another infinite volume Gibbs measure, denoted π^{G,0}_{q,J}, which is the limit of π^{G_n}_{q,J} in the sense that the probabilities of cylinder sets converge. The existence of the above limits, as well as the independence of the choice of the sequence {V_n} when constructing them, follows from the work of Aizenman et al. [1].
Given the infinite volume Gibbs measures {π^{G,i}_{q,J}}_{i∈{0,...,q}}, we define the corresponding infinite volume fuzzy Potts measures {ν^{G,i}_{q,J,r}}_{i∈{0,...,q}} using (1.4). In words, the fuzzy Potts model can be thought of as arising from the ordinary q-state Potts model by looking through a pair of glasses that prevents us from distinguishing some of the spin values. From this point of view, the fuzzy Potts model is one of the most basic examples of a so-called hidden Markov field [11]. For earlier work on the fuzzy Potts model, see for example [8, 9, 10, 14, 6].
Given a finite or countable set V and p ∈ [0, 1], let γ_p denote the product measure on {−1, 1}^V with γ_p(η : η(x) = 1) = p for all x ∈ V. In [13] the authors proved the following results for the Ising model. (The second result was originally proved for d = 2 only, but it trivially extends to all d ≥ 2.)

Proposition 1.5 (Liggett, Steif). Consider the Ising model on Z^d with J > 0 and h = 0. Then µ^{J,+} and µ^{J,−} dominate the same set of product measures.

Proposition 1.6 (Liggett, Steif). Let d ≥ 2 be a given integer and consider the Ising model on T^d with parameters J > 0 and h = 0. Moreover, let µ^{J,f} denote the Gibbs state obtained by using free boundary conditions. If µ^{J,+} ≠ µ^{J,−}, then there exist 0 < p′ < p such that µ^{J,+} dominates γ_p but µ^{J,f} does not dominate γ_p, and µ^{J,f} dominates γ_{p′} but µ^{J,−} does not dominate γ_{p′}.
In words, on Z^d the plus and minus states dominate the same set of product measures, while on T^d that is not the case except when we have a unique phase.
To state our next results, we take a closer look at the construction of the infinite volume fuzzy Potts measures when G = Z^d or G = T^d. In those cases it follows from symmetry that ν^{G,i}_{q,J,r} = ν^{G,j}_{q,J,r} if i, j ∈ {1, . . . , r} or i, j ∈ {r + 1, . . . , q}, i.e. when the Potts spins i, j map to the same fuzzy spin. For that reason, we let ν^{G,−}_{q,J,r} := ν^{G,1}_{q,J,r} and ν^{G,+}_{q,J,r} := ν^{G,q}_{q,J,r} when G = Z^d or T^d. (Of course, we stick to our earlier notation ν^{G,0}_{q,J,r}.) Our first result is a generalization of Proposition 1.5 to the fuzzy Potts model.
In the same way as for the Ising model, we believe that Proposition 1.7 fails completely on T^d except when we have a unique phase in the Potts model. Our last result is in that direction.

Proposition 1.8. Let q ≥ 3, r ∈ {1, . . . , q − 1} and J > 0 with e^{2J} ≥ q − 2. If π^{T^d,1}_{q,J} ≠ π^{T^d,0}_{q,J}, then there exists 0 < p < 1 such that ν^{T^d,0}_{q,J,r} dominates γ_p but ν^{T^d,−}_{q,J,r} does not dominate γ_p.

Proofs
We start by recalling some facts from [2] concerning the notion of completely homogeneous Markov chains on T^d. Denote the vertex set and the edge set of T^d by V(T^d) and E(T^d) respectively. Given a directed edge ⟨x, y⟩ ∈ E(T^d), the "past" of ⟨x, y⟩ consists of the sites that are closer to x than to y. A Markov chain µ is called completely homogeneous with transition matrix P if, for all ⟨x, y⟩ ∈ E(T^d) and u ∈ {−1, 1}, the conditional distribution under µ of the spin at y, given that the spin at x is u and given the configuration on the past of ⟨x, y⟩, is P(u, ·). Observe that such a P is necessarily a stochastic matrix; if it is in addition irreducible, denote its stationary distribution by ν. In that situation, we get for each finite connected set C and each fixed z ∈ C

µ(η ≡ ξ on C) = ν(ξ(z)) Π_{⟨x,y⟩∈D} P(ξ(x), ξ(y)),

where D is the set of directed edges ⟨x, y⟩ with x, y ∈ C and x closer to z than y. In particular, it follows that every completely homogeneous Markov chain which arises from an irreducible stochastic matrix is invariant under all graph automorphisms.

Next, we give a short summary from [2] of the Ising model on T^d. For J > 0, define

φ_J(t) := tanh⁻¹( tanh(J) tanh(t) ).   (2.2)

The function φ_J is trivially seen to be odd. Moreover, φ_J is concave on [0, ∞), increasing and bounded. (In fact, φ_J(t) → J as t → ∞.) Furthermore, there is a one-to-one correspondence t ↦ µ_t between the completely homogeneous Markov chains in G(J, h) and the numbers t ∈ R satisfying the equation

t = h + d φ_J(t).   (2.3)

In addition, the transition matrix P_t of µ_t is given by

P_t(u, v) = e^{(Ju+t)v} / (2 cosh(Ju + t)),  u, v ∈ {−1, 1}.

2.1. Proof of Proposition 1.1. For the lower bound, we argue by contradiction as follows. Assume that h̃ is strictly smaller than the claimed lower bound and pick h_0 with h̃ < h_0 such that (2.5) holds. The right inequality of (2.5) makes it possible to pick 0 < p_1 < p_2 < 1 such that the corresponding product-measure comparisons hold. Using these inequalities together with Proposition 4.16 in [3], we can conclude that µ^{J_2,+}_{h_0} does not dominate µ^{J_1,+}_{h_1}, since p_1 < p_2. On the other hand, h_0 > h̃, which by definition of h̃ implies that µ^{J_2,+}_{h_0} ≥ µ^{J_1,+}_{h_1}. Hence we get a contradiction, and the proof is complete. □

2.2. Proof of Theorem 1.2. We will make use of the following lemma from [13] concerning stochastic domination for completely homogeneous Markov chains on T^d.
e^{t_+(J_2,h)+J_2} / (2 cosh(t_+(J_2,h) + J_2)) ≥ e^{t_±(J_1,h_1)+J_1} / (2 cosh(t_±(J_1,h_1) + J_1)).
Since the map R ∋ x ↦ e^x/(2 cosh(x)) is strictly increasing, this is equivalent to (2.6), and so we want to compute the smallest h ∈ R such that (2.6) holds. Note that since the map h ↦ t_+(J_2, h) is strictly increasing and t_+(J_2, h) → ±∞ as h → ±∞, such an h ∈ R always exists. If τ_± ≥ t*(J_2) or τ_± < t_−(J_2, −h*(J_2)), then the equation h + d φ_{J_2}(τ_±) = τ_± is equivalent to t_+(J_2, h) = τ_±, and so in that case the smallest h ∈ R such that (2.6) holds equals τ_± − d φ_{J_2}(τ_±). If τ_± ≤ −t*(J_2) or τ_± > t_+(J_2, h*(J_2)), then we can proceed exactly as in the first case above. If −t*(J_2) < τ_± ≤ t_+(J_2, h*(J_2)), then t_−(J_2, h) < τ_± whenever h ≤ h*(J_2) and t_−(J_2, h) > τ_± whenever h > h*(J_2); in that case the smallest admissible h is the one given in (1.3), and the proof is complete. □

We will now indicate how to compute the bounds in Proposition 1.1 in the special case when G = T^d. Looking at the formula for f_+ and using the definition of h*, we can make f_+ explicit. Substituting τ_+ and using the bounds −J ≤ φ_J(t) ≤ J for all t ∈ R, we get the upper bound in Proposition 1.1 with N = d + 1. For the lower bound, first note that τ_+ = h_1 + d φ_{J_1}(τ_+); using the same bounds on φ_J once more, the lower bound follows at once.
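The first case of the computation can be checked numerically. The sketch below (our own illustration, with arbitrary parameter values) takes J_2 subcritical, so that h ↦ t_+(J_2, h) is a bijection and the candidate field ψ(J_2, τ_+) = τ_+ − d φ_{J_2}(τ_+) is exactly the smallest h with t_+(J_2, h) = τ_+; φ_J and the fixed point iteration are as in (2.2) and (2.3).

```python
import math

def phi(J, t):
    # phi_J(t) = artanh(tanh(J) * tanh(t)), as in (2.2)
    return math.atanh(math.tanh(J) * math.tanh(t))

def t_plus(J, h, d, n_iter=300):
    # largest solution of t = h + d*phi_J(t), iterating from above
    t = abs(h) + d * J + 1.0
    for _ in range(n_iter):
        t = h + d * phi(J, t)
    return t

d = 2
J1, h1 = 1.0, 0.5          # parameters of the dominated plus measure
J2 = 0.4                   # subcritical: d * tanh(J2) < 1, unique Gibbs state
tau = t_plus(J1, h1, d)    # tau_+ = t_+(J1, h1)

# Candidate smallest field: psi(J2, tau) = tau - d*phi_{J2}(tau).
h_tilde = tau - d * phi(J2, tau)

# In the subcritical case the fixed point equation for (J2, h_tilde) has a
# unique solution, so t_+(J2, h_tilde) recovers tau exactly.
assert abs(t_plus(J2, h_tilde, d) - tau) < 1e-9
```

With these values the computed field h̃ exceeds h_1, consistent with the fact that the weaker interaction J_2 needs extra external field to dominate.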

2.3. Proof of Proposition 1.3. Before proving anything, we recall (see Remark (ii) after Theorem 1.2) that we can write f_+(J_1, J_2, h_1) = ψ(J_2, t_+(J_1, h_1)), and that the map t ↦ ψ(J_2, t) is continuous (see Figure 1.2 for a picture). In the rest of the proof we will use this fact without further notice. For example, it immediately gives that continuity of f_+ in h_1 reduces to continuity of h_1 ↦ t_+(J_1, h_1).

Proof of Proposition 1.3. We will only prove parts a) and c); the proof of part b) follows the same type of argument as the proof of part a).
To prove part a), we start by arguing that for given J_1 > 0 the map h_1 ↦ t_+(J_1, h_1) is right-continuous at every point h_1 ∈ R. To see this, take a sequence of reals {h_n} with h_n ↓ h_1 as n → ∞ and note that, since the map h_1 ↦ t_+(J_1, h_1) is increasing, the sequence {t_+(J_1, h_n)} converges to a limit t̃ with t̃ ≥ t_+(J_1, h_1). Moreover, by taking limits in the fixed point equation we see that t̃ satisfies (2.8), and since t_+(J_1, h_1) is the largest number satisfying (2.8), we get t̃ = t_+(J_1, h_1).
Next, assume h_1 ≠ −h*(J_1) and h_n ↑ h_1 as n → ∞. As before, the limit of {t_+(J_1, h_n)} exists; denote it by T. The number T will again satisfy (2.8). By considering the different cases described in Figure 2.3, we easily conclude that T = t_+(J_1, h_1). Hence the function h_1 ↦ t_+(J_1, h_1) is continuous for all h_1 ≠ −h*(J_1), and so h_1 ↦ f_+(J_1, J_2, h_1) is also continuous for all h_1 ≠ −h*(J_1). Now assume h_1 = −h*(J_1). By considering sequences h_n ↓ −h*(J_1) and h_n ↑ −h*(J_1) we can, similarly to the above, identify the one-sided limits, and since ψ(J_2, t_+(J_2, −h*(J_2))) = ψ(J_2, t_−(J_2, −h*(J_2))), the continuity is clear also in that case. If J_1 > J_c and 0 < J_2 ≤ J_c, the conclusion follows in a similar way.

To prove part c), we take a closer look at the map (J_2, t) ↦ ψ(J_2, t). From the continuity of t ↦ ψ(J_2, t) for fixed J_2 and the facts that J_2 ↦ t*(J_2), J_2 ↦ t_−(J_2, −h*(J_2)), J_2 ↦ −h*(J_2) and (J_2, t) ↦ t − d φ_{J_2}(t) are all continuous, we get that ψ is (jointly) continuous, and so the result follows. □

2.4. Proof of Proposition 1.4. To prove the statement, we will show that the derivative inequality (2.9) holds if a) h ≥ 0 and J ≥ J_c, or b) h < 0 and h*(J) > −h. By integrating (2.9), the statement follows. The proof of (2.9) is an easy modification of the proof of Lemma 5.2 in [13]; since it is quite short, we give it in full, even though it is more or less the same as the proof in [13]. Write φ(J, t) for φ_J(t) and use subscripts to denote partial derivatives. By differentiating the relation t_+(J, h) = h + d φ(J, t_+(J, h)) with respect to J and solving, we get an expression for the derivative of t_+ with respect to J.
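The identity φ_1(J, t) + φ_2(J, t) = tanh(J + t), used in this proof, is easy to confirm numerically. The sketch below (our own check, not part of the proof) compares central finite differences of φ against the closed form, assuming φ(J, t) = tanh⁻¹(tanh(J) tanh(t)) as in (2.2).

```python
import math

def phi(J, t):
    # phi(J, t) = artanh(tanh(J) * tanh(t)), as in (2.2)
    return math.atanh(math.tanh(J) * math.tanh(t))

def partials(J, t, eps=1e-6):
    """Central finite differences for the partial derivatives
    phi_1 = d(phi)/dJ and phi_2 = d(phi)/dt."""
    p1 = (phi(J + eps, t) - phi(J - eps, t)) / (2 * eps)
    p2 = (phi(J, t + eps) - phi(J, t - eps)) / (2 * eps)
    return p1, p2

for J, t in [(0.5, 0.3), (1.0, -0.7), (2.0, 1.5)]:
    p1, p2 = partials(J, t)
    # identity used in the proof: phi_1(J, t) + phi_2(J, t) = tanh(J + t)
    assert abs(p1 + p2 - math.tanh(J + t)) < 1e-6
```

Since tanh is increasing, the identity immediately shows that φ_1 + φ_2 is increasing in both variables, exactly as claimed below.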
To get the left-hand side greater than or equal to one, we need (2.10) and (2.11). The first inequality is immediate, since in cases a) and b) above the function t ↦ h + d φ(J, t) crosses the line t ↦ t from above to below. For (2.11), note that φ(J, t) = tanh⁻¹(tanh(J) tanh(t)), and so φ_1(J, t) + φ_2(J, t) = tanh(J + t), which yields that φ_1 + φ_2 is increasing in both variables. Moreover, since tanh(J_c) = 1/d (see [2]), we get φ_1(J_c, 0) + φ_2(J_c, 0) = 1/d, and so (2.11) follows. To complete the proof, observe that in cases a) and b) we have J ≥ J_c and t_+(J, h) ≥ 0. □

2.5. Proof of Proposition 1.7. In the proof we will use the following results from [13] concerning domination of product measures. Here, as usual, a measure µ is said to have positive correlations if µ(fg) ≥ µ(f)µ(g) for all continuous increasing functions f and g.

Remarks.
(i) In particular, Theorem 2.3 gives us that if two translation invariant, downward FKG measures have the same limsup appearing in the criterion above, then they dominate the same set of product measures. (ii) In [13] there was a third condition in Theorem 2.3 which we will not use, and so we simply omit it. Before we state the next lemma, we need to recall the following definition.
We say that µ satisfies the FKG lattice condition if for all η, ξ ∈ {−1, 1}^V

µ(η ∨ ξ) µ(η ∧ ξ) ≥ µ(η) µ(ξ),

where ∨ and ∧ denote pointwise maximum and minimum. Given a measure µ on {−1, 1}^{Z^d}, we will denote its projection on {−1, 1}^T, for finite T ⊆ Z^d, by µ_T.
Lemma 2.4. The measures ν^{Z^d,±}_{q,J,r} are FKG in the sense that ν^{Z^d,±}_{T,q,J,r} satisfies the FKG lattice condition for each finite T ⊆ Z^d.
Proof. For n ≥ 2, let Λ_n = {−n, . . . , n}^d and denote the finite volume Potts measures on {−1, 1}^{Λ_n} with boundary conditions η ≡ 1 and η ≡ q by π^{n,1}_{q,J} and π^{n,q}_{q,J}. Furthermore, let ν^{n,−}_{q,J,r} and ν^{n,+}_{q,J,r} denote the corresponding fuzzy Potts measures. Given the convergence in the Potts model, it is clear that ν^{n,±}_{T,q,J,r} converges weakly to ν^{Z^d,±}_{T,q,J,r} as n → ∞ for each finite T ⊆ Z^d. Since the FKG lattice condition is closed under taking projections (see [4, p. 28]) and weak limits, we are done if we can show that ν^{n,±}_{q,J,r} satisfies the FKG lattice condition for each n ≥ 2. In [6] it is proved that for an arbitrary finite graph G = (V, E) the finite volume fuzzy Potts measure ν with free boundary condition and parameters q, J, r is monotone, in the sense that

ν(σ(x) = 1 | σ ≡ η′ on V \ {x}) ≥ ν(σ(x) = 1 | σ ≡ η on V \ {x})   (2.13)

for all x ∈ V and η, η′ ∈ {−1, 1}^{V\{x}} with η ≤ η′. We claim that it is possible to modify the argument given there to prove that ν^{n,±}_{q,J,r} is monotone for each n ≥ 2. (Recall from [4] the fact that if V is finite and µ is a probability measure on {−1, 1}^V that assigns positive probability to each element, then monotonicity is equivalent to the FKG lattice condition.) The proof of (2.13) is quite involved. However, the changes needed to prove our claim are quite straightforward, and so we will only give an outline of how that can be done. Furthermore, we will only consider the minus case; the plus case is similar.
By considering a sequence η = η_1 ≤ η_2 ≤ · · · ≤ η_m = η′ where η_i and η_{i+1} differ only at a single vertex, it is easy to see that it is enough to prove that for all x, y ∈ Λ_n and η ∈ {−1, 1}^{Λ_n\{x,y}} we have (2.14). Fix n ≥ 2, x, y and η as above. We will first consider the case when x and y are not neighbors; at the end we will see how to modify the argument to work when x, y are neighbors as well. Define V_− = {z ∈ Λ_n \ {x, y} : η(z) = −1} and V_+ = {z ∈ Λ_n \ {x, y} : η(z) = 1}. Furthermore, denote by E_n the set of edges ⟨u, v⟩ with either u, v ∈ Λ_n or u ∈ Λ_n, v ∈ ∂Λ_n, let P denote the relevant probability measure on {1, . . . , q}^{Λ_n} × {0, 1}^{E_n}, and let P′ and P′′ be the probability measures obtained from P by conditioning on A ∩ C and A ∩ B ∩ C respectively. (P′ is usually referred to as the Edwards–Sokal coupling; see [3].) It is well known (and easy to check) that the spin marginal of P′ is π^{n,1}_{q,J} and that the edge marginal is the so-called random-cluster measure, defined as the probability measure on {0, 1}^{E_n} which to each ξ ∈ {0, 1}^{E_n} assigns probability proportional to the usual random-cluster weight, where k(ξ) is the number of connected components of ξ not reaching ∂Λ_n. In a similar way it is possible (by counting) to compute the spin and edge marginals of P′′: the spin marginal π′′ is simply π^{n,1}_{q,J} conditioned on B, and the edge marginal φ′′ assigns to a configuration ξ ∈ {0, 1}^{E_n} a probability proportional to a modified weight, where k_0(ξ) is the number of clusters intersecting V_− but not reaching ∂Λ_n, k_1(ξ) is the number of clusters intersecting V_+, k_x(ξ) (resp. k_y(ξ)) is 1 if x (resp. y) is in a singleton connected component and 0 otherwise, and D is the event that no connected component of ξ intersects both V_− and V_+. Observe that (2.14) is the same as

π′′(X(x) ∈ {r + 1, . . . , q}, X(y) ∈ {r + 1, . . . , q}) ≥ π′′(X(x) ∈ {r + 1, . . . , q}) π′′(X(y) ∈ {r + 1, . . . , q}).
(2.15)

An important feature of the coupling P′′ is that it gives a way to obtain a spin configuration X ∈ {1, . . . , q}^{Λ_n} distributed as π′′: (1) Pick an edge configuration ξ according to φ′′.
(2) Assign X = 1 to the connected components of ξ that intersect ∂Λ_n and denote the union of those components by C̃. (3) Assign spin values to the remaining connected components of ξ, with special care taken when a component C is a singleton vertex x or y.
By defining functions f_x, f_y : {0, 1}^{E_n} → [0, 1], where C_x denotes the connected component of ξ containing x (f_y defined analogously), we see as in [6] that (2.15) follows if (2.16) holds. The significance of f_x and f_y is that f_x(ξ) is the conditional probability that X(x) ∈ {r + 1, . . . , q} given ξ, and similarly for f_y, and that the events X(x) ∈ {r + 1, . . . , q} and X(y) ∈ {r + 1, . . . , q} are conditionally independent given ξ. With all this setup done, it is a simple task to see that to prove (2.16) we can proceed exactly as in [6, pp. 1154–1155].
To take care of the case when x and y are neighbors, observe that everything we have done so far also works for the graph with one edge deleted, i.e. the graph with vertex set Λ_n and edge set E_n \ {⟨x, y⟩}. Hence we can get (2.15) for that graph. However, the observation in [6, p. 1156] gives us (2.15) even in the case when we reinsert the edge ⟨x, y⟩. □
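To see the Edwards–Sokal mechanism in a computable setting, here is a toy enumeration (our own, with free boundary and none of the conditioning on the events A, B, C used above) verifying that the edge marginal of the coupling carries the random-cluster weights p^{o(ξ)} (1 − p)^{|E|−o(ξ)} q^{k(ξ)} with p = 1 − e^{−2J}.

```python
import math
from itertools import product

# Toy free-boundary example: the triangle graph
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]
q, J = 3, 0.8
p = 1 - math.exp(-2 * J)

def n_components(open_edges):
    """Number of connected components of (V, open edges), via union-find."""
    parent = list(range(len(V)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for (x, y), o in zip(E, open_edges):
        if o:
            parent[find(x)] = find(y)
    return len({find(v) for v in V})

def es_weight(sigma, xi):
    """Joint Edwards-Sokal weight: closed edges contribute (1 - p); open
    edges contribute p but are only allowed between agreeing spins."""
    w = 1.0
    for (x, y), o in zip(E, xi):
        if o:
            if sigma[x] != sigma[y]:
                return 0.0
            w *= p
        else:
            w *= 1 - p
    return w

spins = list(product(range(q), repeat=len(V)))
bonds = list(product((0, 1), repeat=len(E)))

# Edge marginal of the coupling vs. the random-cluster weights.
rc = {xi: sum(es_weight(s, xi) for s in spins) for xi in bonds}
for xi in bonds:
    o = sum(xi)
    expect = p ** o * (1 - p) ** (len(E) - o) * q ** n_components(xi)
    assert abs(rc[xi] - expect) < 1e-12
```

Summing the joint weight over spins forces each open cluster to be monochromatic, which is exactly where the factor q^{k(ξ)} comes from; the boundary-conditioned couplings P′ and P′′ above refine this same computation.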
Proof of Proposition 1.7. Let k, l ∈ {0, −, +} be given and let A_n = [1, n]^d, n ≥ 2. We are done if there exists 0 < c < 1 (independent of k, l and n) such that

ν^{Z^d,k}_{q,J,r}(η ≡ −1 on A_n) ≥ c^{|∂A_n|} ν^{Z^d,l}_{q,J,r}(η ≡ −1 on A_n)

for all n. As for the Ising model, it is well known that the infinite volume Potts measures satisfy the so-called uniform nonnull property (sometimes called the uniform finite energy property), which means that for some c > 0, the conditional probability of having a given spin at a given site, given everything else, is at least c. (See for example [8] for a more precise definition.) For arbitrary σ ∈ {1, . . . , q}^{∂A_n} we obtain the chain of inequalities (2.17). Since ν^{Z^d,l}_{q,J,r}(η ≡ −1 on [1, n]^d) can be written as a convex combination of the terms on the far right-hand side of (2.17), the result follows at once. □

2.6. Proof of Proposition 1.8. Let ρ denote the root of T^d and let V_n be the set of all sites in T^d with distance at most n from ρ. If x is on the unique self-avoiding path from ρ to y, we say that y is a descendant of x. Given x ∈ T^d, let S_x denote the set of all descendants of x (including x). Moreover, let T_x denote the subtree of T^d whose vertex set is S_x and whose edge set consists of all edges ⟨u, v⟩ ∈ E(T^d) with u, v ∈ S_x. In the proof of Proposition 1.8, we will use the following result (Proposition 2.5) from [13].

Proposition 2.5. Let P = {P(u, v)}_{u,v∈{−1,1}} be a transition matrix for an irreducible 2-state Markov chain with P(−1, 1) ≤ P(1, 1), and let µ be the distribution of the corresponding completely homogeneous Markov chain on T^d. Then the following are equivalent:

Proof of Proposition 1.8. Fix J > 0, q ≥ 3 and r ∈ {1, . . . , q − 1} with e^{2J} ≥ q − 2.
In [9], it is proved that ν^{T^d,0}_{q,J,r} is a completely homogeneous Markov chain on T^d for all values of the parameters, with transition matrix

P = 1/(e^{2J} + q − 1) ( e^{2J} + r − 1      q − r
                          r                  e^{2J} + q − r − 1 ),

where rows and columns are indexed by (−1, +1) in that order.
Hence, from Proposition 2.5 we get that ν^{T^d,0}_{q,J,r} ≥ γ_p if and only if

p ≤ (q − r)/(e^{2J} + q − 1).   (2.18)
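The threshold (2.18) is easy to evaluate. The sketch below (parameter values are our own, arbitrary choices) builds the transition matrix displayed above and checks the hypotheses of Proposition 2.5: stochasticity and P(−1, 1) ≤ P(1, 1), the latter holding for any J ≥ 0 since e^{2J} ≥ 1.

```python
import math

def fuzzy_potts_transition(q, J, r):
    """Transition matrix of the free fuzzy Potts chain on T^d as displayed
    above, rows/columns ordered (-1, +1)."""
    z = math.exp(2 * J) + q - 1
    return [[(math.exp(2 * J) + r - 1) / z, (q - r) / z],
            [r / z, (math.exp(2 * J) + q - r - 1) / z]]

q, J, r = 5, 1.0, 2
P = fuzzy_potts_transition(q, J, r)

# P is a stochastic matrix ...
assert abs(sum(P[0]) - 1) < 1e-12 and abs(sum(P[1]) - 1) < 1e-12
# ... with P(-1, 1) <= P(1, 1), as required by Proposition 2.5.
assert P[0][1] <= P[1][1]

# Largest product-measure density dominated by the free measure, cf. (2.18).
p_max = (q - r) / (math.exp(2 * J) + q - 1)
```

Note that e^{2J} = e^2 ≈ 7.39 ≥ q − 2 = 3 here, so these parameters also satisfy the hypothesis fixed at the start of the proof.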
Moreover, an easy calculation gives

c e^{2J}/(c e^{2J} + q − 1) + e^{2J}/(c + e^{2J} + q − 2) ≥ (c e^{2J} + q − 2)/(c e^{2J} + q − 1).

It is now clear that for p as in (2.21), ν^{T^d,0}_{q,J,r} dominates γ_p but ν^{T^d,−}_{q,J,r} does not dominate γ_p. □

Remark. By deriving the transition matrix for π^{T^d,q}_{q,J}, it is probably possible to prove that there exists p ∈ (0, 1) such that ν^{T^d,0}_{q,J,r} dominates γ_p but ν^{T^d,+}_{q,J,r} does not dominate γ_p.

Conjectures
We end with the following conjectures concerning the fuzzy Potts model. The corresponding statements for the Ising model are proved in [13].
Conjecture 3.1. Let q ≥ 3, r ∈ {1, . . . , q − 1} and consider the fuzzy Potts model on Z^d. If J_1, J_2 > 0 with J_1 ≠ J_2, then ν^{Z^d,+}_{q,J_1,r} and ν^{Z^d,+}_{q,J_2,r} are not stochastically ordered.

Conjecture 3.2. Consider the fuzzy Potts model on T^d. If the underlying Gibbs measures for the Potts model satisfy π^{T^d,1}_{q,J} ≠ π^{T^d,0}_{q,J}, then the sets in (3.1) are all different from each other.