Spatial modelling for mixed-state observations

In several application fields, such as daily pluviometry data modelling or motion analysis from image sequences, observations contain two components of different nature. A first part consists of discrete values accounting for some symbolic information, while a second part records a continuous (real-valued) measurement. We call such observations "mixed-state observations". This paper introduces spatial models suited for the analysis of this kind of data. We consider multi-parameter auto-models whose local conditional distributions belong to a mixed-state exponential family. Specific examples with exponential distributions are detailed, and we present experimental results for modelling motion measurements from video sequences.


Introduction
In many applications, it is frequent to get observations with two components of different nature: the first component is made up of discrete values and the second component records a continuous measurement. For example, a pluviometry time series at a given site records many zeros for dry days, followed by positive and continuous records for wet periods [2, 1]. Similar phenomena occur in speech recordings, where the signal permanently alternates between absence and presence. Other examples arise in motion analysis from image sequences [5], or in epidemiological data analysis, where the disease at a given location can be absent or spread out. We call such measurements mixed-state observations. This raises the question of finding accurate models for these types of data.
To deal with data of mixed nature, most existing approaches rely on a hierarchical construction. One introduces a hidden variable to distinguish discrete observations from continuous ones; equivalently, the discrete values are interpreted as resulting from some unobserved censoring variable [2]. Typically, a Bayesian approach is then used for statistical inference.
Our approach is different. We propose a direct modelling by considering random variables which can take discrete values as well as continuous ones. Although the idea seems perfectly natural, we are not aware of any statistical models relying on such a direct approach for mixed-state data.
The main motivation of the paper is the search for spatial models for observations {X_s} such that each X_s is a mixed-state random variable. In the spatial context, the discrete components cannot simply be neglected, because these symbolic values, as well as their spatial correlations, convey important point-wise and contextual information. To this end, we introduce a new class of auto-models for such mixed-state data. Their construction relies on an adaptation to the present context of a general class of Markov random field models, namely the multi-parameter auto-models that we recently introduced in [9]. Roughly speaking, an auto-model, as introduced originally in [4], is a Markov field on a finite set of sites for which the interactions between sites are pairwise only, and each local conditional distribution belongs to some exponential family. The multi-parameter auto-models of [9] extend the classical one-parameter auto-models of [4] and several known spatial models previously proposed in [6, 10, 11].
The plan of the paper is the following. We first present mixed-state random variables in a simple context where the observation is made up of 0 and values in (0, ∞). The distribution of such a mixed-state random variable has two main features: it reflects the dual character of the observation, and it belongs to an exponential family. In §3, we give the general definition of mixed-state variables. We recall in §4 results on the general multi-parameter auto-models of [9], which constitute the building blocks of our construction of auto-models for mixed-state observations presented in §5. We set out in §6 a detailed study of mixed-state auto-models where neighbouring sites are spatially cooperative. This property contrasts with many classical auto-models introduced in [4], which lead to a spatially competitive behaviour that is clearly inadequate in many practical situations. We conclude the paper with an analysis of motion measurements from video sequences, using a mixed positive Gaussian auto-model.

Simple random variables with mixed states
Before defining general mixed-state variables, let us begin with the simplest situation, where the state space is E = {0} ∪ (0, ∞). Of course E = [0, ∞), but the split formula has the merit of insisting on the null value, which plays a particular role in the construction. A mixed-state random variable X on E is defined as follows: with probability γ ∈ [0, 1] we set X = 0, and with probability 1 − γ, X follows a continuous distribution on (0, ∞) with probability density function g. Formally, we equip E with its Borel field E and introduce a reference measure of mixture type, m(dx) = δ_0(dx) + λ(dx), where δ_0 is the Dirac measure at 0 and λ the Lebesgue measure. Throughout the paper, we denote by 1_A the indicator function of a set A. For the particular case of {0}, we use the simpler notation δ(x) = 1_{0}(x) and δ*(x) = 1 − δ(x).
The above mixed-state variable X then has, with respect to m, the probability density function f(x) = γ δ(x) + (1 − γ) g(x) δ*(x). (2.2) Clearly, such mixed-state random variables (or distributions) can provide accurate modelling for the marginal empirical distributions discussed in §1.
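As an illustrative sketch (not part of the paper), the definition above translates directly into a sampler and a density with respect to the mixture measure m = δ_0 + Lebesgue; here we take for the continuous component g an exponential density, anticipating Example 1 below, and all parameter values are assumptions for the demonstration.

```python
import math
import random

def sample_mixed_state(gamma, lam, rng=random):
    """Draw from a mixed-state variable on {0} ∪ (0, ∞): the value 0 with
    probability gamma, otherwise an Exponential(lam) draw."""
    if rng.random() < gamma:
        return 0.0
    return rng.expovariate(lam)

def mixed_density(x, gamma, lam):
    """Density w.r.t. the mixture measure m = δ_0 + Lebesgue:
    f(x) = γ·δ(x) + (1 − γ)·g(x)·δ*(x), with g exponential here."""
    if x == 0.0:
        return gamma
    return (1.0 - gamma) * lam * math.exp(-lam * x)

random.seed(0)
draws = [sample_mixed_state(0.3, 2.0) for _ in range(100_000)]
zero_frac = sum(d == 0.0 for d in draws) / len(draws)
```

The empirical frequency of the atom {0} then approaches γ, while the positive draws follow g.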
For the upcoming construction of spatial models, we are interested in mixed-state random variables of a particular type, namely those whose continuous component g belongs to an ℓ-dimensional exponential family, for some sufficient statistic T and measurable positive functions H and L (⟨·, ·⟩ denoting the scalar product in R^ℓ). Interestingly enough, the mixed-state distribution can also be put in the form of an exponential family. Indeed, this holds with H′(θ) = γ, L′(x) = exp{δ*(x) log L(x)}, and the natural parameter and sufficient statistic defined accordingly. Note that with the standard convention 0 log 0 = 0, these formulae remain valid in the extreme situations γ ∈ {0, 1}, which correspond to a purely continuous and a purely discrete distribution, respectively. Therefore, the mixed-state distribution f_θ belongs to an exponential family of dimension ℓ + 1. Moreover, the original parameters ξ and γ can be recovered from θ through the formulae (2.4)-(2.5).

Let us consider some examples.
Example 1. Mixed-state exponential distribution: this simple distribution is obtained with g_λ(x) = λe^{−λx}, where λ > 0. Then ξ = H(ξ) = λ and T(x) = −x. The parametric dimension of the resulting mixed-state distribution equals two.

Example 3. Positive mixed-state Gaussian distribution: here the continuous component of X is the distribution of the modulus of a zero-mean Gaussian variable with variance σ². The natural parameter θ is obtained as above.

General random variables with mixed states
To cover situations involving several atomic values, the previous simple model needs to be extended. Let F = {e_1, …, e_M} be a finite set of M elements and G a Borel subset of a Euclidean space R^p. Let q = (q_1, …, q_M) be a probability distribution on F and g a probability density function on G (with respect to the Lebesgue measure).
We define a general mixed-state random variable X as follows: • with probability γ ∈ [0, 1], X takes values in F with distribution q; • with probability 1 − γ, X takes values in G according to the density function g.
Although the nature of the discrete state space F could be arbitrary (possibly qualitative), we embed F in R^p to ease the development of a likelihood-based estimation theory. In other words, we set the state space of X to be E = F ∪ G. We can therefore supply E with its Borel field and a reference measure of mixture type, m(dx) = Σ_{e∈F} δ_e(dx) + λ(dx), (3.1) where δ_e denotes the Dirac measure at a point e ∈ R^p and λ the Lebesgue measure on E. Then the mixed-state variable X has the following density function with respect to m: f(x) = γ Σ_{i=1}^{M} q_i 1_{e_i}(x) + (1 − γ) g(x) 1_G(x). (3.2)
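The two-stage definition above can be sketched as a sampler; this is an illustration only, and the particular atoms, weights and uniform continuous part are assumptions chosen for the demonstration.

```python
import random

def sample_general_mixed(gamma, atoms, q, sample_g, rng=random):
    """General mixed-state draw: with probability gamma pick an atom
    e_i ∈ F with weights q; otherwise draw from the continuous part g."""
    if rng.random() < gamma:
        return rng.choices(atoms, weights=q, k=1)[0]
    return sample_g(rng)

# Illustrative choice: F = {0, 1} embedded in R, G = (0, 1) with a
# uniform continuous part (all values here are assumptions for the demo).
random.seed(1)
atoms, q, gamma = [0.0, 1.0], [0.25, 0.75], 0.4
draws = [sample_general_mixed(gamma, atoms, q, lambda r: r.uniform(0.0, 1.0))
         for _ in range(200_000)]
frac0 = sum(d == 0.0 for d in draws) / len(draws)
frac1 = sum(d == 1.0 for d in draws) / len(draws)
```

The empirical mass of atom e_i approaches γ·q_i, the discrete part of the density (3.2).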

Exponential family case
We focus now on mixed-state distributions which belong to some exponential family. To avoid trivial situations, we assume that: • the discrete distribution q is everywhere positive: q_i > 0 (i = 1, …, M);
• the density g of the continuous component belongs to an ℓ-dimensional exponential family, as in (2.3).
We first write the discrete distribution q in exponential family form through the logistic transformation k_i = log(q_i/q_M) (i = 1, …, M), noticing that by definition k_M = 0. Combining this expression with (2.3) and (3.2), we obtain an exponential family form with H′(θ) = log(γ q_M). In other words, f_θ belongs to an exponential family of dimension ℓ + M, with natural parameter and sufficient statistics given by (3.3)-(3.4). Note that θ_{M+1} and T(x)1_G(x) are ℓ-dimensional vectors, and by definition B(e_M) = 0 and L′(e_M) = 1. Furthermore, the original parameters ξ, q and γ are recovered through the corresponding inverse formulae. It is worth noticing that in the case E = {0} ∪ (0, ∞), the formulae (3.3)-(3.4) reduce to equations (2.4)-(2.5) of the previous section.
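The logistic transformation and its inverse can be checked numerically; this is a small sketch of the parametrisation of the discrete part only, with an arbitrary illustrative law q.

```python
import math

def logits_from_q(q):
    """Logistic transformation of a positive law q on {e_1, …, e_M}:
    k_i = log(q_i / q_M), so that k_M = 0 by construction."""
    return [math.log(qi / q[-1]) for qi in q]

def q_from_logits(k):
    """Inverse map: q_i = exp(k_i) / Σ_j exp(k_j) (softmax, with k_M = 0)."""
    z = sum(math.exp(ki) for ki in k)
    return [math.exp(ki) / z for ki in k]

q = [0.1, 0.2, 0.3, 0.4]
k = logits_from_q(q)
q_back = q_from_logits(k)
```

The round trip recovers q exactly (up to floating-point error), mirroring the recovery of the original parameters from the natural ones.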

Example of a mixed-state and censored exponential variable
Let Z be an exponential random variable with parameter λ, censored at a known location K > 0: Z has density λe^{−λx} on (0, K) and mass e^{−λK} at K. We define the following mixed-state variable X: with probability α, X takes the value 0; with probability 1 − α, X has the distribution of Z. Therefore, X has masses {α, (1 − α)e^{−λK}} on the atoms {0, K}, and continuous density (1 − α)λe^{−λx} on (0, K). Equivalently, X can be viewed as a general mixed-state variable with state space E = {0, K} ∪ (0, K) = [0, K]. Following (3.3) and (3.4), the distribution of X belongs to a 3-dimensional exponential family.
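A quick numerical sketch (with illustrative parameter values) confirms that the two atomic masses and the continuous part of this censored variable sum to one:

```python
import math

def censored_exp_masses(alpha, lam, K):
    """Masses and continuous mass of the mixed censored exponential variable:
    P(X = 0) = alpha, P(X = K) = (1 − alpha)·e^{−lam·K},
    and density (1 − alpha)·lam·e^{−lam·x} on (0, K)."""
    p0 = alpha
    pK = (1.0 - alpha) * math.exp(-lam * K)
    cont = (1.0 - alpha) * (1.0 - math.exp(-lam * K))  # ∫_0^K (1−α)λe^{−λx} dx
    return p0, pK, cont

p0, pK, cont = censored_exp_masses(0.2, 1.5, 2.0)
total = p0 + pK + cont
```

Here α = 0.2, λ = 1.5 and K = 2 are arbitrary choices; the total mass is 1 for any admissible values.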

Results on multi-parameter auto-models
The construction of spatial models for mixed-state observations relies on the general theory of multi-parameter auto-models developed in [9].We quote below its main results which are relevant for the present purpose.
Let us set some notation. Let S = {1, …, n} be a finite set of sites equipped with a symmetric graph G without loops. We denote by {i, j} a pair of neighbouring sites (in particular, i ≠ j). For any subset A of S, let x_A = (x_i, i ∈ A) and x^A = (x_j, j ∈ S \ A). The neighbourhood of a site i is ∂i = {j ∈ S : {i, j} is an edge of G}. We shall write x^i = x^{i}. The variates x_i take their values in a measurable state space (E, E, m). Most of the time, E will be a subset of R^p. The configuration space Ω = E^S is equipped with the σ-algebra E^{⊗S} and the product measure ν := m^{⊗S}. A random field is specified by a probability distribution µ on Ω, and we assume the positivity condition, that is, µ has an everywhere positive density P with respect to ν. Consequently, we write P(x) = Z^{−1} exp Q(x), where Z is a normalisation constant. By the Hammersley-Clifford theorem, the energy function Q(x) is a sum of potentials {G_A ; A ∈ C} indexed by a set C of cliques. Let us fix a reference configuration, or "ground state", τ = (τ_i) ∈ Ω, yielding the potential normalisation: G_A(x_A) = 0 whenever x_i = τ_i for some i ∈ A.

The class of multi-parameter auto-models defined in [9] extends the classical one-parameter auto-models of J. Besag's seminal paper [4]. Their construction relies on the following assumptions.

Assumption 1. The dependence between the sites is pairwise only.

Assumption 2. For an integer k ≥ 1 and all i ∈ S, the conditional distribution of X_i given X^i = x^i lies in a k-parameter exponential family.

The following result of [9] determines the necessary form of the local natural parameters {θ_i(·)} ensuring the compatibility of the family of conditional distributions.

Theorem 1 (Hardouin and Yao [9]). Assume that Assumptions 1-3 are satisfied with the above normalisation. Then, necessarily, the functions θ_i take the form θ_i(x^i) = α_i + Σ_{j∈∂i} β_ij B_j(x_j), where {α_i : i ∈ S} is a family of k-dimensional vectors and {β_ij} a family of k × k matrices with β_ij = β_ji^T. Moreover, the potentials are given accordingly.

A model satisfying the assumptions of this theorem is called a multi-parameter auto-model. For a concrete construction of such a model, one follows a two-step method: first, specify the family of cliques (Assumption 1) and the family of conditional distributions (Assumption 2); second, find the admissible set of parameters {α_i, β_ij} ensuring the integrability condition (4.5), that is, ∫_Ω exp Q(x) ν(dx) < ∞. We refer the reader to [9] and the references therein for more details on this family of auto-models.
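The form of the local natural parameter given by Theorem 1, θ_i(x^i) = α_i + Σ_{j∈∂i} β_ij B(x_j), can be sketched as a small routine; the two-site example, the dimension k = 2 and the statistic B(x) = (x, x²) are all hypothetical choices for illustration.

```python
def local_natural_parameter(i, x, alpha, beta, B, neighbours):
    """Sketch of the Theorem 1 form of the local natural parameter:
    θ_i(x^i) = α_i + Σ_{j ∈ ∂i} β_ij · B(x_j), with α_i a k-vector and
    each β_ij a k×k matrix (stored here as a list of rows)."""
    k = len(alpha[i])
    theta = list(alpha[i])
    for j in neighbours[i]:
        Bx = B(x[j])
        for r in range(k):
            theta[r] += sum(beta[(i, j)][r][c] * Bx[c] for c in range(k))
    return theta

# Hypothetical two-site illustration with k = 2 and statistic B(x) = (x, x²).
alpha = {0: [1.0, 0.0], 1: [0.0, 1.0]}
beta = {(0, 1): [[0.5, 0.0], [0.0, 0.5]], (1, 0): [[0.5, 0.0], [0.0, 0.5]]}
neighbours = {0: [1], 1: [0]}
theta0 = local_natural_parameter(0, [2.0, 3.0], alpha, beta,
                                 lambda v: (v, v * v), neighbours)
```

With x_1 = 3 this gives θ_0 = (1 + 0.5·3, 0 + 0.5·9) = (2.5, 4.5), showing how each neighbour shifts the local natural parameter linearly through B.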
Another important question about the model is that of spatial symmetry.The general formulation given above does not impose any symmetry, and hence it can be useful for modelling random fields on arbitrary or oriented graphs.However, in the case of a spatially symmetric random field, all potentials G ij (x i , x j ) are necessarily symmetric functions; equivalently, all the matrices β ij are symmetric.

The construction
Following the general theory of multi-parameter auto-models quoted above, we now construct auto-models for general mixed-state variables X = {X_i, i ∈ S} on a finite set S. The state space for each variable X_i is E = F ∪ G, as defined in §3. We let the configuration space Ω = E^S = (F ∪ G)^S be supplied with the product measure ν = m^{⊗S}, where m is defined by (3.1).
We assume that Assumptions 1-3 are satisfied, where in Assumption 2 the family of conditional distributions f_i(x_i | ·) belongs to the family of mixed-state distributions given in (3.3). Note that by definition, B_i(e_M) = 0 and L′(e_M) = 1. Therefore, the state e_M serves as a reference state for the coordinates X_i, and the reference configuration becomes τ = (e_M, …, e_M) for the application of Theorem 1.
Following Theorem 1, there exist a family of (ℓ + M)-dimensional vectors {α_i : i ∈ S} and a family of (ℓ + M) × (ℓ + M) matrices {β_ij} defining the local natural parameters and the potentials in (5.1) and (5.2). Let us note that two variables X_i and X_j are (spatially) conditionally independent, given {X_k, k ≠ i, j}, if (and only if) β_ij = 0. When β_ij ≠ 0, we say that the sites i and j are neighbours; thus the neighbourhood ∂i of i is the set {j : β_ij ≠ 0}. Moreover, we can substitute x_∂i for x^i in the previous equations (5.1) and (5.2).
These auto-models for mixed-state variables are completely and well defined once we choose admissible parameters {α_i, β_ij} ensuring the integrability condition (4.5).

Spatial cooperation behaviour
In many practical situations, we need to investigate the properties of the local interactions of the system. Indeed, we want to know whether the field is spatially cooperative or spatially competitive (or neither). A standard definition of spatial cooperation (respectively, competition) is that at each site i, the conditional expectation E[X_i | x^i] increases (respectively, decreases) with each neighbouring value x_j, j ≠ i. For mixed-state auto-models valued in E^S = (F ∪ G)^S, we must adapt these definitions. For each i we consider the function R_i(x^i) = E[X_i 1_G(X_i) | x^i] and study its variations in each coordinate x_j of x^i, where x_j ∈ G and j ∈ ∂i. We then define spatial cooperation (or competition) as in the classical definition, by substituting R_i for the conditional expectation E[X_i | x^i]. Let us note that this definition coincides with the classical one in the case E = G. In the particular case where E = {0} ∪ (0, ∞), with F = {0} and G = (0, ∞), for any mixed-state random variable X on E as defined in §2 with density function (2.2), we have E[X 1_{(0,∞)}(X)] = E[X], since the atom at 0 contributes nothing to the mean. We then conclude that for mixed-state variables in E^S = ({0} ∪ (0, ∞))^S, the generalised definition of spatial cooperation (competition) above meets the classical one. This is no longer the case when an atomic value differs from zero.
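The identity E[X·1_{(0,∞)}(X)] = E[X] for a variable with an atom at zero can be checked by a quick Monte Carlo sketch; the mixed exponential variable and its parameter values are illustrative assumptions.

```python
import random

# Monte Carlo check that, on E = {0} ∪ (0, ∞), the atom at 0 contributes
# nothing to the mean: E[X·1_{(0,∞)}(X)] = E[X] = (1 − γ)/λ for a mixed
# exponential variable (γ = 0.3 and λ = 2 are illustrative values).
random.seed(2)
gamma, lam = 0.3, 2.0
draws = [0.0 if random.random() < gamma else random.expovariate(lam)
         for _ in range(200_000)]
mc_mean = sum(draws) / len(draws)
exact = (1.0 - gamma) / lam
```

The empirical mean agrees with (1 − γ)/λ, so the generalised cooperation criterion based on R_i reduces to the classical one here.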

A translation invariant and symmetric mixed-state auto-model with the four nearest neighbours system
Let us consider the four nearest neighbours system on a two-dimensional lattice S = [1, M] × [1, N]: each site i ∈ S has four neighbours denoted by {i_e = i + (0, 1), i_w = i − (0, 1), i_n = i − (1, 0), i_s = i + (1, 0)}, with obvious adjustments near the boundary. We assume translation invariance, in the sense that the parameters are functions of the displacement between sites; we assume spatial symmetry, which implies that the matrices β_ij = β_ji are symmetric; and we allow possible anisotropy between the horizontal and vertical directions. Under all these conditions and from the result above, there exist an (ℓ + M)-dimensional vector α and two (ℓ + M) × (ℓ + M) symmetric matrices {β^(1), β^(2)} such that for all i, α_i = α, and for all {i, j}, β_ij = 0 unless i and j are neighbours, in which case β_{i,i_e} = β_{i,i_w} = β^(1) and β_{i,i_n} = β_{i,i_s} = β^(2).
Moreover, translation invariance implies that the local conditional density function f_i, hence the functionals θ_i, B_i, H′_i and L′_i in (5.1), are independent of i. The potentials are given by the corresponding specialisation of (5.2), vanishing otherwise, and the natural parameter of f_i follows from (5.1).

Parameter estimation
It is well known that maximum likelihood estimation requires intensive computational approximations for Markov random fields. An efficient remedy relies on the pseudo-likelihood estimator introduced by [4]. Theoretical results for this estimator in the general framework of Markov random fields can be found in, e.g., [8]. In the case of multi-parameter auto-models, [9] provides conditions under which this estimator is consistent; we refer the reader to that paper, where the theory is developed in detail. In particular, it applies to the present class of mixed-state auto-models. In §7, we will use this pseudo-likelihood estimator for modelling motion measurements from video sequences.
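The pseudo-likelihood idea, replacing the intractable joint likelihood by the product of local conditional densities, can be sketched generically; the binary auto-logistic conditional below and the tiny four-site chain are hypothetical illustrations, not the mixed-state model of this paper.

```python
import math

def neg_pseudo_loglik(field, neighbours, cond_logdens, params):
    """Negative pseudo-log-likelihood: −Σ_i log f_i(x_i | x_∂i; params).
    `cond_logdens(xi, nbr_values, params)` is the local conditional model."""
    return -sum(cond_logdens(field[i], [field[j] for j in neighbours[i]], params)
                for i in range(len(field)))

def autologistic_logdens(xi, nbrs, params):
    """Illustrative binary (auto-logistic) conditional: logit = a + b·Σ_j x_j."""
    a, b = params
    eta = a + b * sum(nbrs)
    return xi * eta - math.log1p(math.exp(eta))

# Tiny 4-site chain; in practice one minimises this objective over (a, b).
field = [1, 0, 1, 0]
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
val = neg_pseudo_loglik(field, neighbours, autologistic_logdens, (0.0, 0.0))
```

At (a, b) = (0, 0) every local conditional is uniform on {0, 1}, so the objective equals 4·log 2; a numerical optimiser over the parameters then yields the pseudo-likelihood estimate.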

Mixed exponential auto-models
In this section, we focus on auto-models with mixed exponential conditional distributions. The relative simplicity of the model allows a complete study of its various properties without getting bogged down in the parameters. Moreover, the exponential distribution itself is commonly used for modelling, e.g., pluviometry records. From Besag's seminal paper [4], we know that several classical auto-models imply spatial competition between neighbouring sites. This is particularly the case for the auto-exponential scheme. We will see that this fact appears again for mixed-state auto-models with exponential conditionals. To overcome this limitation, we propose two alternatives, by means of data truncation or data censoring.

Mixed state auto-models with exponential conditionals
We consider the mixed state space E = {0} ∪ (0, ∞), and we assume that the conditional distributions f_i(x_i | ·) belong to the family of mixed-state exponential distributions defined in Example 1 of §2. Here, the reference state is 0, and the family of sufficient statistics B(x) obviously verifies Assumption 3. Therefore, following the previous general result for mixed-state auto-models, there exist families of vectors {α_i} and of interaction matrices {β_ij}, with entries denoted {d_ij, e_ij, f_ij}, defining the energy function. We note that the model can be spatially asymmetric if d_ij ≠ f_ij. This is particularly interesting for our mixed-state auto-models, where δ*(x_i)x_j and x_i δ*(x_j) may be interpreted as different situations. We can also think of models on oriented graphs.
The local natural parameters and the reciprocal correspondence follow from Theorem 1. It remains to ensure that the model is well defined, that is, to guarantee the integrability condition (4.5). Necessarily, we must have γ_i ∈ [0, 1] and λ_i > 0 for all i. Since the x_j's belong to [0, ∞), this leads to the following conditions: (A) (i) for all {i, j}, e_ij ≤ 0; (ii) for all i and any subset A of ∂i, b_i + Σ_{j∈A} f_ij > 0. Fortunately, these necessary conditions also ensure the integrability condition (4.5).
Proposition 1.Under Conditions (A), the auto-model with mixed exponential conditionals is well-defined.
Proof. The configuration space Ω can be decomposed according to the subset A of sites taking continuous values. With m(dx) = δ_0(dx) + λ(dx), Condition (4.5) holds if and only if exp Q is integrable on each piece Ω_A of this decomposition. As e_ij ≤ 0, the integrand can be bounded, for some constant C > 0 and x ∈ Ω_A. Let |A| = card(A). By Conditions (A), b_i + Σ_{j∈A, {i,j}} f_ij > 0, and we finally obtain a finite bound on each piece. The proof is complete.
Let us notice that in the context of n "ordinary" variables, [3] claims that Condition (A) is both necessary and sufficient for n ≥ 2. This is true for n = 2, but the condition is no longer necessary when n ≥ 3.
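Checking Conditions (A) for a given set of parameters is a finite computation; here is a small sketch, where the equivalence in (ii) with the "worst subset" form b_i + Σ_{j∈∂i} min(0, f_ij) > 0 follows because the binding subset A collects the negative f_ij's. The two-site parameter values are hypothetical.

```python
def conditions_A_hold(b, e, f, neighbours):
    """Check Conditions (A) of the mixed exponential auto-model:
    (i)  e_ij <= 0 for every ordered pair of neighbours (i, j);
    (ii) b_i + sum_{j in A} f_ij > 0 for every i and every A ⊂ ∂i,
         equivalently b_i + sum_{j in ∂i} min(0, f_ij) > 0."""
    if any(v > 0 for v in e.values()):
        return False
    return all(b[i] + sum(min(0.0, f[(i, j)]) for j in neighbours[i]) > 0
               for i in neighbours)

# Hypothetical two-site example.
nbrs = {0: [1], 1: [0]}
ok = conditions_A_hold({0: 1.0, 1: 1.0},
                       {(0, 1): -0.2, (1, 0): -0.2},
                       {(0, 1): -0.5, (1, 0): 0.3}, nbrs)
bad = conditions_A_hold({0: 1.0, 1: 1.0},
                        {(0, 1): 0.1, (1, 0): -0.2},
                        {(0, 1): -0.5, (1, 0): 0.3}, nbrs)
```

The first parameter set is admissible; the second violates (i) since one e_ij is positive.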
Let us examine the local interactions between neighbouring sites. Considering the generalised definition of spatial cooperation (respectively, competition) for mixed-state auto-models given in §5.2, we look at the variations of the conditional expectation E[X_i 1_{(0,∞)}(X_i) | x_∂i] with respect to the neighbouring values.

Under Conditions (A), in particular e_ij ≤ 0, we see that the parameter θ_{2,i}(x_∂i) defined in (6.2) is an increasing function of the neighbouring values x_j > 0. Since this conditional expectation decreases as θ_{2,i} increases, we conclude that the model cannot be spatially cooperative, although the precise dependence of the other parameter θ_{1,i}(x_∂i) on the x_j's will vary according to the values of the {d_ij}'s. Similarly to the auto-exponential scheme of [4], this locally non-cooperative behaviour seems inappropriate in many application fields.
To overcome this drawback, there are two commonly used approaches, namely data truncation and data censoring.We adapt below these two methods for mixed exponential auto-models.

Cooperative mixed exponential auto-models by truncation
First, let us define a mixed truncated exponential variable X. The state space is E = {0} ∪ (0, K], where K is a given (arbitrary) positive constant. The continuous component of X on (0, K] follows a truncated exponential distribution with probability density function g_λ(x) = λe^{−λx}/(1 − e^{−λK}). The probability density function of X with respect to m then follows as in (2.2).
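The truncated exponential component can be sketched numerically, with an inverse-CDF sampler and a normalisation check; the parameter values λ = 1.5 and K = 2 are illustrative.

```python
import math

def trunc_exp_density(x, lam, K):
    """Density of the exponential distribution truncated to (0, K]:
    g_λ(x) = λ e^{−λx} / (1 − e^{−λK})."""
    return lam * math.exp(-lam * x) / (1.0 - math.exp(-lam * K))

def trunc_exp_sample(u, lam, K):
    """Inverse-CDF draw from the truncated exponential, for u ∈ (0, 1)."""
    return -math.log(1.0 - u * (1.0 - math.exp(-lam * K))) / lam

# Normalisation check with a simple midpoint rule on (0, K].
lam, K, n = 1.5, 2.0, 100_000
h = K / n
mass = sum(trunc_exp_density((i + 0.5) * h, lam, K) for i in range(n)) * h
```

The numerical mass is 1, and the sampler always lands in (0, K], as required for the truncated component.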
Let us consider a mixed-state auto-model for X = {X_i, i ∈ S} whose conditional distributions lie in the family of mixed truncated exponential distributions above. Here the reference state is 0, and the family of sufficient statistics B verifies Assumption 3. By Theorem 1, there exist a family of vectors {α_i} and a family of matrices {β_ij} specifying the energy function. Because of the truncation, exp Q is always integrable.
The natural parameters of the local conditional distributions follow. As for the conditions on the parameters, we keep the requirement θ_{i,2}(x_∂i) > 0 (which implies γ_i(x_∂i) ∈ (0, 1)). This is clearly satisfied under the following assumption.

Assumption 4. For all i ∈ S, b_i + Σ_{j∈∂i} min(0, f_ij, f_ij − e_ij K) > 0.
To understand whether the system is spatially cooperative or not, let us first examine, for the mixed truncated exponential variable X, the variation of E[X 1_{(0,K]}(X)] with respect to its parameters {θ_1, θ_2}. If we denote by Z a random variable following a truncated exponential distribution with density g_λ, a simple calculation leads to E[Z] = 1/λ − K/(e^{λK} − 1), which decreases from K/2 to 0 as λ rises from 0 to ∞. On the other hand, 1 − γ is decreasing with respect to θ_2 and increasing with θ_1. Finally, E[X 1_{(0,K]}(X)] = (1 − γ)E[Z] is decreasing in θ_2 and increasing in θ_1.
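The closed form E[Z] = 1/λ − K/(e^{λK} − 1) for the truncated exponential mean, and its limiting behaviour K/2 as λ → 0, can be verified numerically (parameter values are illustrative):

```python
import math

def ez_truncexp(lam, K):
    """Mean of the exponential distribution truncated to (0, K]:
    E[Z] = 1/λ − K/(e^{λK} − 1)."""
    return 1.0 / lam - K / math.expm1(lam * K)

# Check against a midpoint-rule integral of x·g_λ(x) on (0, K], with
# g_λ(x) = λ e^{−λx} / (1 − e^{−λK}).
lam, K, n = 2.0, 3.0, 200_000
h = K / n
norm = 1.0 - math.exp(-lam * K)
num = sum((i + 0.5) * h * lam * math.exp(-lam * (i + 0.5) * h)
          for i in range(n)) * h / norm
```

Using expm1 keeps the formula numerically stable near λ = 0, where E[Z] approaches K/2.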
Gathering this result together with (6.3), we deduce the variation of E[X_i 1_{(0,K]}(X_i) | x_∂i] with respect to the neighbouring values. Therefore, this family of auto-models can exhibit spatial cooperation as well as spatial competition.
Let us give an application to the translation invariant and symmetric scheme with the four nearest neighbours system, as introduced in §5.3, with the additional assumption of isotropy. The parameters then reduce to α = (a, b)^T and a single interaction matrix with entries (d, e, f). In this case, Assumptions 4 and 5 take a simpler form. Note that in the case of spatial cooperation (Assumptions 4 and 5), the positive parameters e_ij in θ_{i,2}(x_∂i), Eq. (6.3), measure the strength of the spatial cooperation: the larger the values of these parameters, the stronger the spatial cooperation realised by the model. However, Assumptions 4 and 5 imply 0 ≤ e_ij < h_ij/K for some positive constants h_ij. Therefore, if the truncation level K is large, the achievable spatial cooperation becomes limited.

Cooperative mixed exponential auto-models by censoring
As previously, let K be a fixed positive constant. We consider the mixed censored exponential variable X defined in §3.2, and recall the expression of the corresponding probability density function.
The reference state is K, and we notice that the components of θ are dependent.
Conversely, we have λ = θ_3 and α = e^{θ_1 − θ_3 K}/(1 + e^{θ_1 − θ_3 K}). Let us now consider a mixed-state auto-model for X = {X_i, i ∈ S} whose conditional distributions belong to the family of mixed censored exponential distributions above. For the sake of simplicity, we assume spatial symmetry. Applying Theorem 1 again, there exist a family of 3-dimensional vectors α_i = (r_i, a_i, b_i)^T and 3 × 3 symmetric matrices {β_ij} defining the energy function and the natural parameters of the local conditional distributions. Under suitable sign conditions on the interaction parameters, the model is attractive. To be convinced, let us look at the translation invariant and symmetric scheme with the four nearest neighbours system of §5.3, assuming also spatial isotropy; there is then a unique interaction matrix, and the admissibility and cooperation assumptions take a correspondingly simple form.

Let {I_i(t)} be an image sequence, where i = (i_1, i_2) ∈ S denotes the pixel location and t = 1, …, T the time instant in the sequence. Roughly speaking, a motion map at time t, X(t) = {X_i(t)} = {‖v_i(t)‖}, is defined as the norm of the underlying motion field {v_i(t)}, which is estimated by a "regularised" minimisation of the sum of squares Σ_i [I_{i+v_i(t)}(t + 1) − I_i(t)]². Usually, some local smoothing procedures are needed to obtain a more robust motion map, and we refer to [7] for the details of these computations.
We consider here video sequences of natural scenes. Figure 1 displays three sample images from each of two sequences, involving a moving escalator and trees under wind, respectively. The corresponding motion maps {X_i(t)} are displayed in Figure 2, and sample histograms from these motion maps are presented in Figure 3. These histograms present a composite picture: an important peak appears at the origin, accounting for regions where no motion is present, while a continuous component encompasses the actual motion magnitudes in the images.

A mixed-state auto-model with positive Gaussian distributions
We follow the general construction of mixed-state auto-models of §5. First, we consider a positive mixed-state Gaussian variable X, as defined in Example 3 of §2; its density function with respect to m is given in (7.1). To construct auto-models for the motion map observations {X_i(t)}, we assume that the family of conditional distributions f_i(x_i | x^i) belongs to the family of mixed positive Gaussian distributions given in (7.1). By Theorem 1, there exist a family of vectors α_i = (a_i, b_i)^T ∈ R² and a family of 2 × 2 matrices {β_ij} satisfying β_ij = β_ji^T, and the associated energy function is given by (7.4). To analyse the motion measurements, we consider the specification of §5.3, namely a translation invariant and spatially symmetric auto-model with the four nearest neighbours system and possible anisotropy between the horizontal and vertical directions. The parametrisation then reduces to one vector α = (a, b)^T and two 2 × 2 matrices β^(1) and β^(2) such that α_i = α for all i, and β_ij = 0 unless i and j are neighbours, in which case β_ij equals β^(1) or β^(2) according to the direction. Moreover, the present context calls for spatial cooperation, and we further constrain the parameters d_k and e_k, k = 1, 2, to be zero. The resulting auto-model has four parameters φ = (a, b, c_1, c_2) and is well defined (admissible) under the unique condition b > 0. We use the pseudo-likelihood method to estimate these parameters.
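The half-Gaussian continuous component of the positive mixed-state Gaussian variable (the law of |N(0, σ²)|) can be sketched and checked numerically; σ = 1 and the mass γ = 0.4 below are illustrative values, not fitted parameters.

```python
import math

def half_gaussian_density(x, sigma):
    """Density of |N(0, σ²)| on (0, ∞): the continuous component of the
    positive mixed-state Gaussian variable."""
    s2 = sigma * sigma
    return math.sqrt(2.0 / (math.pi * s2)) * math.exp(-x * x / (2.0 * s2))

def mixed_positive_gaussian_density(x, gamma, sigma):
    """Density w.r.t. m = δ_0 + Lebesgue: mass γ at 0, weighted
    half-Gaussian on (0, ∞)."""
    if x == 0.0:
        return gamma
    return (1.0 - gamma) * half_gaussian_density(x, sigma)

# Normalisation check of the continuous part (midpoint rule on (0, 10σ];
# the tail mass beyond 10σ is negligible).
sigma, n, R = 1.0, 200_000, 10.0
h = R / n
mass = sum(half_gaussian_density((i + 0.5) * h, sigma) for i in range(n)) * h
```

The half-Gaussian integrates to 1, so the mixed density places mass γ at the origin, matching the peak-plus-continuum shape of the motion histograms.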
Let us mention that in the context of image segmentation, Salzenstein and Pieczynski [12] have previously proposed a fuzzy image segmentation model where the fuzzy labels are a particular instance of mixed-state variables with values in [0, 1].

Experiments
The experiments are conducted in order to evaluate whether the model above can correctly account for a fundamental characteristic of a homogeneous texture, namely spatial isotropy or spatial anisotropy. For the present four-nearest-neighbours Gaussian mixed-state model, spatial isotropy occurs if (and only if) c_1 = c_2.
We fit this model to several motion maps like those displayed in Figure 2. First, we consider motions from trees (bottom row of the figure). A typical set of parameter estimates is φ̂ = (â, b̂, ĉ_1, ĉ_2) = (−5.805, 3.044, 3.057, 2.954). The estimates ĉ_1 and ĉ_2 are almost identical with regard to the standard deviations of these estimates computed at other time instants of the same tree sequence. Therefore, the expected spatial isotropy of these motions is well reflected by the model.
Next, we consider the motion maps from the moving escalator (top row of Figure 2). Since the motion is vertical, we clearly have anisotropy. A typical set of parameter estimates is φ̂ = (â, b̂, ĉ_1, ĉ_2) = (−6.512, 0.320, 2.192, 3.598). The difference between ĉ_1 and ĉ_2 appears to be significant, and the mixed-state model is thus able to reflect the spatial anisotropy of the considered motion. More experiments on motion analysis can be found in [5].

Example 2. Mixed-state Gamma distribution: this situation generalises Example 1 by substituting a Gamma distribution Γ(a, b), a, b > 0, for the exponential distribution. Here we have ξ = (b, a − 1), H(ξ) = Γ(a)^{−1} b^a and T(x) = (−x, ln x). The resulting mixed-state distribution belongs to an exponential family of dimension three. (Example 3, the positive mixed-state Gaussian distribution, is given in §2.)
with respect to the neighbouring values x_j, j ∈ ∂i, which are positive. Let us introduce the following assumptions.

Assumption 5. For all i, j ∈ S, d_ij ≤ 0 and e_ij ≥ 0.

Assumption 6. For all i, j ∈ S, d_ij ≥ 0 and e_ij ≤ 0.

We have thus proved the following.

Proposition 2. Assume Assumption 4 holds. Then: (i) the auto-model with mixed truncated exponential conditionals is well defined; (ii) the model is spatially cooperative under Assumption 5; (iii) the model is spatially competitive under Assumption 6.

Assumptions 4 and 5 reduce to the conditions d ≤ 0, e ≥ 0, b + 4(d − eK) > 0, which make the model spatially cooperative. Similarly, the model is spatially competitive under Assumptions 4 and 6, which reduce to d ≥ 0, e ≤ 0, b > 0.

7. An application to motion analysis from video image sequences

7.1. Motion measurements from video sequences

Motion computation and analysis are of central importance in image analysis.

Figure 1. Sample images from two videos. Top row: a moving escalator; bottom row: trees.

Figure 2. Sample motion measures {X_i(t)} from the videos of Figure 1. Top row: a moving escalator; bottom row: a tree (white = 0; black = maximum value).

Figure 3. Sample histograms of the motion measures {X_i(t)} of Figure 2.