A Markov process for an infinite interacting particle system in the continuum

An infinite system of point particles placed in $\mathds{R}^d$ is studied. Its constituents perform random jumps with mutual repulsion described by a translation-invariant jump kernel and interaction potential, respectively. The pure states of the system are locally finite subsets of $\mathds{R}^d$, which can also be interpreted as locally finite Radon measures. The set of all such measures $\Gamma$ is equipped with the vague topology and the corresponding Borel $\sigma$-field. For a special class $\mathcal{P}_{\rm exp}$ of (sub-Poissonian) probability measures on $\Gamma$, we prove the existence of a unique family $\{P_{t,\mu}: t\geq 0, \ \mu \in \mathcal{P}_{\rm exp}\}$ of probability measures on the space of cadlag paths with values in $\Gamma$ that solves a restricted initial-value martingale problem for the mentioned system. Thereby, a Markov process with cadlag paths is specified which describes the stochastic dynamics of this particle system.


Introduction
Measure-valued Markov processes [11] have become popular both in their own right, as challenging objects of probability theory, and due to their applications in mathematical physics, biology, ecology, etc. Among such applications one can distinguish those describing the stochastic evolution of large (infinite) systems of point particles dwelling in continuous habitats, e.g., R^d. In this case, the state space of the system is the set of all locally finite configurations of particles, which can also be interpreted as counting Radon measures. In the case of finite particle systems, the construction of the corresponding Markov processes is by now quite standard. For infinite systems, however, the list of results reduces mostly to those describing free (noninteracting) systems [22] or birth-and-death dynamics with generators obeying essential restrictions [16,17,23,30]. In this context, one can also mention models with interactions of Curie-Weiss (mean-field) type (e.g., [25]), where one starts with a system of N particles interacting with a strength proportional to 1/N, and then passes to the limit N → +∞. In this paper, we prove the existence and uniqueness of a Markov process with cadlag paths for an infinite system of point particles performing random jumps in R^d with a translation-invariant jump kernel and mutual repulsion, which appears to be the first result of this kind in the literature.
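Although the construction below works directly in infinite volume, the dynamics is easy to visualize for finitely many particles. The following sketch is purely illustrative: the box size, the Gaussian jump kernel a and the soft-core repulsion φ are all invented choices, and the thinning device used here is not the construction of the paper. Since the jump kernel is normalized and φ ≥ 0, each particle attempts a jump at rate one; a target y is drawn from a(x − ·) and accepted with probability exp(−Σ_{z∈γ∖x} φ(y − z)).

```python
import numpy as np

rng = np.random.default_rng(1)

L_BOX, N, D = 10.0, 50, 2                  # box side, particle number, dimension
SIGMA_A, A0, SIGMA_PHI = 0.5, 2.0, 0.3     # invented kernel/potential parameters

def phi(r2):
    """Soft-core repulsion potential phi(|y - z|)^2 -> value; bounded (no hard core)."""
    return A0 * np.exp(-r2 / SIGMA_PHI**2)

def energy(y, others):
    """E_phi(y, gamma \\ x) = sum over z of phi(y - z), with periodic distances."""
    d = np.abs(others - y)
    d = np.minimum(d, L_BOX - d)           # periodic boundary conditions
    return phi((d**2).sum(axis=1)).sum()

def kawasaki_step(gamma, rng):
    """One jump attempt: pick a particle, propose y ~ a(x - .), thin by exp(-E)."""
    i = rng.integers(len(gamma))
    x = gamma[i]
    y = (x + SIGMA_A * rng.standard_normal(D)) % L_BOX   # Gaussian jump kernel a
    others = np.delete(gamma, i, axis=0)
    if rng.random() < np.exp(-energy(y, others)):        # repulsion-based thinning
        gamma[i] = y                                     # jump x -> y accepted
    return gamma

gamma = rng.uniform(0, L_BOX, size=(N, D))
for _ in range(1000):
    gamma = kawasaki_step(gamma, rng)
print(len(gamma))   # the jump dynamics conserves the particle number
```

Acceptance with probability exp(−E) is legitimate here precisely because φ is bounded and nonnegative, which mirrors the exclusion of hard-core repulsion assumed later in Sect. 3.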
The state space of the considered model is the set

Γ = {γ ⊂ R^d : |γ ∩ Λ| < ∞ for each compact Λ ⊂ R^d}, (1.1)

where γ is a configuration and | · | stands for cardinality. An account of mathematical properties of Γ can be found in [10,20]; see also Sect. 2 below, where we outline those of them which are relevant to the present study. The space (1.1) is equipped with the vague (weak-hash) topology, that is, the weakest topology that makes continuous all the maps γ → Σ_{x∈γ} g(x), g ∈ C_cs(R^d), where C_cs(R^d) denotes the set of all compactly supported continuous functions g : R^d → R. The vague topology is metrizable in such a way that the corresponding metric space is complete and separable. Then the states of the considered system are probability measures on Γ, the set of which is denoted by P. The point states γ are associated to the Dirac measures δ_γ. The evolution of the system which we consider is described by the (backward) Kolmogorov equation

(d/dt) F_t = LF_t, F_t|_{t=0} = F_0, (1.2)

where F_t : Γ → R, t ≥ 0, and

(LF)(γ) = Σ_{x∈γ} ∫_{R^d} a(x − y) exp( − Σ_{z∈γ∖x} φ(z − y) ) [F(γ∖x ∪ y) − F(γ)] dy. (1.3)

Originally, models of this kind were introduced and (heuristically) studied in physics [18], where they are known under the common name of Kawasaki models. In the rigorous setting, the model described by (1.2), (1.3) was studied in [3] (see also [5] for a preliminary investigation). In [3], the following result was obtained. For a special class of states P_exp ⊂ P and each µ_0 ∈ P_exp, a map [0, +∞) ∋ t → µ_t ∈ P_exp was constructed that can be interpreted as the evolution of states related to (1.2). In the present work, we construct a Markov process with cadlag paths in such a way that µ_t is its law at time t. Let us outline some of the aspects of this construction. As we show here, for a sufficiently large set of functions F : Γ → R, the map [0, +∞) ∋ t → µ_t ∈ P_exp constructed in [3] is the unique solution of the Fokker-Planck equation

µ_t(F) = µ_s(F) + ∫_s^t µ_u(LF) du, (1.4)

holding for all 0 ≤ s < t < ∞; see [6] for the general theory of the Fokker-Planck equation. The peculiarity of P_exp is that the Dirac measures δ_γ, γ ∈ Γ, do not belong to this set.
Therefore, one cannot directly construct a transition function (and hence the corresponding Markov process) just by setting µ_0 = δ_γ. In view of this, in constructing the process we take a version of the martingale approach suggested in [31], see also [11, Sect. 5.1], [13, Chapter 4]. The main aspects of our construction can be outlined as follows. When dealing with measures µ ∈ P_exp, it is natural to use a subset Γ* ⊂ Γ such that µ(Γ*) = 1 for all µ ∈ P_exp. This set Γ* is equipped with a metric which turns it into a Polish space, continuously embedded in Γ. Then the measures of interest are redefined as measures on Γ*. Let D_{R_+}(Γ*) stand for the space of all cadlag maps R_+ ∋ t → γ_t ∈ Γ*, equipped with the usual Skorohod topology, see [13, page 118]. Let also ̟_t : D_{R_+}(Γ*) → Γ*, t ≥ 0, be the evaluation map, i.e., ̟_t(γ) = γ_t. For A ∈ B(Γ*), set (1.5) The principal result of this work (Theorem 3.6) can be characterized as follows. We prove that there exists a family of probability measures, {P_{s,µ} : s ≥ 0, µ ∈ P_exp}, on D_{R_+}(Γ*) which is the unique solution of the restricted initial-value martingale problem corresponding to (1.3). For such measures, their one-dimensional marginals µ_t = P_{s,µ} ∘ ̟_t^{−1}, t ≥ s, belong to P_exp and satisfy the corresponding version of the Fokker-Planck equation (1.4). In view of this, one may say that these path measures are obtained as 'superpositions' of the measures constructed in [3]. This might be interpreted as an infinite-dimensional analog of the theory developed in [14,33]. Here, however, the techniques are essentially different from those used in the latter works.
In [4], a model was studied in which point particles of two types perform random jumps over R^d. Their common dynamics is described by the corresponding analog of the Kolmogorov operator (1.3), in which particles of different types repel each other, whereas those of the same type do not interact. This kind of interaction is typical for the classical Widom-Rowlinson model (see [8] and the literature quoted therein), for which the states of thermal equilibrium can be multiple [8,24]. The latter fact has an essential impact on the stochastic dynamics of such models, cf. [19], which further stimulates the efforts in this direction. The results of [4] are closely analogous to those of [3], which means that, after proper modification, the approach developed in the present work can also be applied to the model of [4]; we plan to realize this in a subsequent paper.
The rest of the paper is organized as follows. In Sect. 2, we introduce all necessary facts and notions, among which are sub-Poissonian measures and the above-mentioned set Γ* ⊂ Γ. Here we also introduce and study two classes of functions F : Γ* → R, which play a crucial role in defining the Kolmogorov operator L introduced in (1.3). In Sect. 3, we impose standard assumptions on a and φ and then make precise the domain of L, i.e., the class of functions F : Γ* → R for which the Fokker-Planck equation is solved. Thereafter, in Theorem 3.6 we formulate the result: the statement that the restricted initial value martingale problem for our model has precisely one solution. Then we outline our strategy of proving this statement. In Sect. 4, we present and employ the results of [3], where the evolution of states t → µ_t ∈ P_exp was constructed. In Sect. 5, we prove that the restricted initial value martingale problem for our model has at most one solution. This is done by proving that the Fokker-Planck equation (1.4) has a unique solution in the class of sub-Poissonian measures. Since the one-dimensional marginals of the constructed path measures should solve (1.4), this yields a tool for proving the desired uniqueness. In Sects. 6 and 7, we prove the existence of the path measures in question by employing auxiliary models (Sect. 6), for which one can construct the processes directly (by means of transition functions), and then by proving (Sect. 7) that these models approximate the main model. Their Markov property is then obtained similarly as in [11, Sect. 5.1, pages 78, 79].

Preliminaries
Throughout this work we use the following notations: Λ -a compact subset of R d ; R + = [0, +∞); N -the set of natural numbers, N 0 = N ∪ {0}; C cs (R d ) -the set of all compactly supported continuous functions g : R d → R, B r (y) = {x ∈ R d : |x − y| ≤ r}, B r = B r (0), r > 0 and y ∈ R d .
2.1. The configuration spaces. There exists a metric, υ#, on Γ that makes it a Polish space, see [10, Sects. A25, A26] and the recent work [26]. It is known that:
• The metric space (Γ, υ#) is complete and separable.
• The metric topology of (Γ, υ#) is the weakest topology that makes continuous all the maps γ → Σ_{x∈γ} g(x), g ∈ C_cs(R^d).
For n ∈ N_0, let Γ^(n) denote the set of all γ ∈ Γ with |γ| = n. Obviously, each Γ^(n), and hence the set of finite configurations Γ_0 = ∪_{n∈N_0} Γ^(n), belongs to B(Γ). The topology induced on Γ_0 by the vague topology of Γ coincides with the weak topology determined by bounded continuous functions f ∈ C_b(R^d). Then the corresponding Borel σ-field B(Γ_0) is a sub-field of B(Γ). It is possible to show that a function G : Γ_0 → R is measurable if and only if there exists a family of symmetric Borel functions G^(n) : (R^d)^n → R, n ∈ N, such that G(η) = G^(n)(x_1, . . . , x_n) for each η = {x_1, . . . , x_n} ∈ Γ^(n). (2.1) In this case, we also write G^(0) = G(∅).
Definition 2.1. A measurable function, G : Γ 0 → R, is said to have bounded support if there exist N ∈ N and a compact Λ such that: (a) G (n) ≡ 0 for all n > N ; (b) G(η) = 0 whenever η is not a subset of Λ. By B bs we will denote the set of all bounded functions with bounded support. For G ∈ B bs , N G and Λ G will denote the least N and Λ as in (a) and (b), respectively. We also set C G = sup η∈Γ 0 |G(η)|.
The Lebesgue-Poisson measure λ is defined on Γ_0 by the integrals

∫_{Γ_0} G(η) λ(dη) = G(∅) + Σ_{n≥1} (1/n!) ∫_{(R^d)^n} G^(n)(x_1, . . . , x_n) dx_1 · · · dx_n, (2.2)

so that each G ∈ B_bs is λ-integrable and |∫_{Γ_0} G dλ| ≤ C_G Σ_{n=0}^{N_G} |Λ_G|^n / n!, with C_G, Λ_G and N_G as in Definition 2.1.
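A basic identity behind computations with λ is λ(e(θ; ·)) = exp(⟨θ⟩), where e(θ; η) = Π_{x∈η} θ(x) and ⟨θ⟩ = ∫ θ(x) dx: by the definition of λ, the n-point term contributes ⟨θ⟩^n/n!. A numerical sketch (the concrete θ and the dimension d = 1 are invented for illustration) checks the truncated series against the exponential:

```python
import math
import numpy as np

# theta: an invented compactly supported nonnegative function on R (d = 1)
theta = lambda x: np.where(np.abs(x) <= 1.0, 0.3 * (1.0 - x**2), 0.0)

# <theta> = integral of theta over R, here via a fine Riemann sum
xs = np.linspace(-1.0, 1.0, 20001)
mean_theta = float(np.sum(theta(xs)) * (xs[1] - xs[0]))

# lambda(e(theta; .)) = sum over n of <theta>^n / n!; the n = 0 term (the empty
# configuration) equals 1, and the series sums to exp(<theta>)
series = sum(mean_theta**n / math.factorial(n) for n in range(30))
print(abs(series - math.exp(mean_theta)) < 1e-12)   # True
```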

Sub-Poissonian measures.
In this subsection, we introduce a set of probability measures on (Γ, B(Γ)), which plays a key role in this research. Set Θ_0 := {θ ∈ C_cs(R^d) : θ(x) ∈ (−1, 0]}. Then, for each µ ∈ P and θ ∈ Θ_0, the function F_θ(γ) = Π_{x∈γ} (1 + θ(x)) is continuous and bounded. Definition 2.3. By P_exp we denote the set of all those µ ∈ P for each of which µ(θ) := µ(F_θ) can be continued to a real entire function of exponential type of θ ∈ L^1(R^d).
Proposition 2.4. A given µ ∈ P belongs to P_exp if and only if, for each n ∈ N, the map θ → µ((Ke_n)(θ; ·)) can be defined as a continuous monomial of order n on L^1(R^d). In particular, this means that it satisfies

n! |µ((Ke_n)(θ; ·))| ≤ κ^n ‖θ‖^n_{L^1(R^d)}. (2.10)

The least κ > 0 for which this estimate holds is then the type of µ.
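In Proposition 2.4, (Ke_n)(θ; γ) is the K-transform of the monomial e_n, i.e., the sum of θ(x_1)⋯θ(x_n) over the n-point subsets of γ. Summing over n recovers the functional F_θ(γ) = Π_{x∈γ}(1 + θ(x)) from Definition 2.3; for a finite configuration this elementary identity can be checked by brute force (the values of θ on γ below are invented):

```python
from itertools import combinations
from math import prod, isclose

# an invented finite configuration, represented by the values theta(x), x in gamma
theta_vals = [0.5, -0.25, 0.1, 0.8]

def Ke_n(vals, n):
    """(K e_n)(theta; gamma): sum over n-point subsets of prod theta(x_i)."""
    return sum(prod(c) for c in combinations(vals, n))

# summing the K-transforms of the monomials e_n reproduces F_theta(gamma)
lhs = sum(Ke_n(theta_vals, n) for n in range(len(theta_vals) + 1))
rhs = prod(1.0 + t for t in theta_vals)   # the functional F_theta(gamma)
print(isclose(lhs, rhs))   # True
```

The n = 0 term is the empty product, equal to one, matching G^(0) = G(∅) above.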
For the homogeneous Poisson measure π_κ with density κ > 0, the correlation functions are k^(n)_{π_κ}(x_1, . . . , x_n) = κ^n, n ∈ N_0, and hence (2.10) holds with this κ, which means that π_κ ∈ P_exp and κ is its type.
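The equality k^(1)_{π_κ} = κ can be probed by simulation: it is Campbell's formula E_{π_κ}[Σ_{x∈γ} θ(x)] = κ ∫ θ(x) dx. A Monte Carlo sketch on the window Λ = [0,1]^2, with invented κ and test function θ:

```python
import numpy as np

rng = np.random.default_rng(0)
kappa = 3.0                              # intensity of the homogeneous Poisson measure
theta = lambda p: p[:, 0] * p[:, 1]      # an invented test function on [0,1]^2

# sample configurations gamma ~ pi_kappa restricted to Lambda = [0,1]^2
total, n_samples = 0.0, 100000
for _ in range(n_samples):
    n = rng.poisson(kappa)               # |gamma ∩ Lambda| ~ Poisson(kappa * |Lambda|)
    pts = rng.random((n, 2))             # given n, the points are i.i.d. uniform on Lambda
    total += theta(pts).sum()

estimate = total / n_samples
exact = kappa * 0.25                     # kappa * integral of x*y over [0,1]^2
print(abs(estimate - exact) / exact < 0.02)   # True for this seed and sample size
```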
Remark 2.5. Let G in (2.9) be such that G(η) ≥ 0 for all η ∈ Γ 0 . Then by (2.7) it follows that µ(KG) ≤ π κ (KG), where κ is the type of µ. In view of this, the elements of P exp are called sub-Poissonian measures.
(2.18) This crucial property of the elements of P exp will allow us to consider only configurations belonging to Γ * . In particular, this means that we will use the following sub-field of B(Γ): (2.19)

Functions and measures on Γ*
Let C_b(R^d) (resp. B_b(R^d)) stand for the set of all bounded continuous (resp. bounded measurable) functions g : R^d → R. For ψ defined in (2.13), we set whenever µ ∈ P_exp and κ is its type. For γ ∈ Γ*, the measure ν_γ := Σ_{x∈γ} ψ(x) δ_x on R^d is finite since ν_γ(R^d) = Ψ_0(γ), see (2.13). The set N* of finite positive measures on R^d can be metrized in the following way. Consider and then define and also υ(ν, ν′) = sup Proposition 2.6. [12, Theorem 18] The following three types of the convergence of a sequence {ν_n} ⊂ N* to a positive finite measure ν on R^d are equivalent: That is, υ metrizes the usual weak convergence of the elements of N*. Set (2.26) and then define, cf. (2.23) and (2.25), Proposition 2.7. The metric space (Γ*, υ*) is complete and separable. Its metric topology is the weakest topology that makes continuous all the maps Γ* ∋ γ → Σ_{x∈γ} θ(x), θ ∈ Θ_ψ.
Proof. The stated continuity follows by the fact that nonnegative θ ∈ C cs (R d ) belong also to Θ ψ , and by Propositions 2.6 and 2.7. The equality of the σ-fields then follows by Kuratowski's theorem, see [29, Theorem 3.9, page 21].
For θ ∈ Θ_ψ, we set, see (2.20) and (2.21), Note that V ⊂ C_b(R^d) is closed with respect to the pointwise addition and its elements are bounded away from zero. The former follows by the fact that θ + θ′ + θθ′ belongs to Θ_ψ for each θ, θ′ ∈ Θ_ψ. Next, define For τ = 0 and θ(x) ≡ 0, we also set F^0_0(γ) ≡ 1 and include this function in the set just defined. Following [13, page 111], we introduce the following notion.
Definition 2.10. A sequence of bounded measurable functions F_n : Γ* → R, n ∈ N, is said to boundedly and pointwise (bp-) converge to a given F : Γ* → R if sup_{n∈N} sup_{γ∈Γ*} |F_n(γ)| < ∞ and F_n(γ) → F(γ) for each γ ∈ Γ*. The bp-closure of a set H of such functions is then the smallest set that contains H and is closed under the bp-convergence. In a similar way, one defines also the bp-convergence of sequences of functions g : R^d → R.
It is well known that C_b(R^d) contains a countable family of nonnegative functions, {g_i}_{i∈N}, which is convergence determining (i.e., µ_n(g_i) → µ(g_i) as n → +∞ for all i ∈ N implies the weak convergence of {µ_n} to µ) and such that its linear span is bp-dense in B_b(R^d), see [13]. One may take such a family containing the constant function g(x) ≡ 1 and closed with respect to the pointwise addition. Moreover, one may assume that each g_i is bounded away from zero. (2.32) If this is not the case for a given g_i, in place of it one may take g̃_i(x) = g_i(x) + ς_i with some ς_i > 0. The new set, {g̃_i}, has both mentioned properties and also satisfies (2.32). Then assuming the latter we conclude that To see this, for a given g_i, take τ_i ≥ sup_x g_i(x) and then set Clearly, θ_i(x) ≥ 0. Since ψ_n(x) ≤ ψ(x), n ∈ N, we have that θ_i(x) ≤ e^{τ_i} ψ(x), and hence {θ_i}_{i∈N} ⊂ Θ_ψ, see (2.20). At the same time, v^{θ_i}_{τ_i} = g_i and c_{θ_i} = sup_x (τ_i − g_i(x)) < τ_i in view of (2.32). By (2.34) and (2.33), for all i ∈ N, it follows that Proposition 2.11. The set F defined in (2.31) is closed with respect to the pointwise multiplication. Moreover, it has the following properties: (i) It is separating: µ_1(F) = µ_2(F), holding for all F ∈ F, implies µ_1 = µ_2 for all µ_1, µ_2 ∈ P(Γ*).

The Result
3.1. The domain of L. First we make precise the conditions imposed on the model defined by L given in (1.3). The positive measurable functions a and φ are supposed to satisfy the following: The conditions in (3.1) are the same as in [3]. We impose them to be able to use the results of that work here. Note that the assumed boundedness of φ excludes a hard-core repulsion. The condition in (3.2) was not used in [3].
As mentioned in Introduction, we are going to construct the process as a solution of a restricted initial value martingale problem. In this case, the domain of the operator introduced in (1.3) plays a key role, cf. [11, page 79]. Along with the set defined in (2.31), we define where F θ 1 ,...,θm τ is the function introduced in (2.36).
Definition 3.1. By D(L) we denote the linear span of the set F ∪ F.
By (2.31) and Proposition 2.12 one concludes that D(L) consists of bounded continuous functions. By iterating the latter we get For θ ∈ Θ_ψ and a as in (3.1), we set where the continuity follows by the dominated convergence theorem and the latter equality in (3.5). Moreover, by (2.22) we have where we have used (3.1), (3.2) and the fact that |y|^l ψ(y) ≤ 1, holding for all y and 0 ≤ l ≤ d + 1. Therefore, Since θ_j ∈ Θ^+_ψ, we then get by the latter that also θ_1 ∈ Θ^+_ψ, where θ_1 = a ∗ θ + θ, see (3.7). Then proceeding as in (3.6) we get Thereafter, by (3.4), (3.5), (3.7), and (1.3) we obtain Then the boundedness of LF^{θ_1,...,θ_m}_τ follows by Proposition 2.12.
Then by (3.7), (3.8), (3.9) and (3.10) we arrive at Then the boundedness in question follows similarly as in Proposition 2.12. The next statement summarizes the properties of D(L).
Proposition 3.2. The set of functions introduced in Definition 3.1 has the following properties: (iv) for each F ∈ F, see (3.3), and µ ∈ P_exp, the measure Fµ/µ(F) belongs to P_exp.
Proof. Claim (i) has been just proved. Claims (ii) and (iii) follow by Proposition 2.11 and the fact F ⊂ D(L). It remains to prove (iv). Take F ∈ F, µ ∈ P exp and denote µ F = F µ/µ(F ). Certainly, µ(F ) > 0 and µ F ∈ P(Γ * ). Thus, to prove µ F ∈ P exp we have to show that µ F (θ) can be continued to an exponential type entire function of θ ∈ L 1 (R d ), see Definition 2.3. Take θ ∈ Θ 0 and consider , is an exponential type entire function of θ ′′ ∈ L 1 (R d ), and hence can be written as in (2.6) with k where the second and third terms are in L 1 (R d ), and the coefficient at θ is continuous and bounded. Then we plug θ ′′ in this form into (2.6) and obtain that µ(F θ F θ ′ τ ) can also be written in the form of (2.6) -i.e., as a series in θ ⊗n -with the coefficients belonging to the corresponding L ∞ (R d ) and satisfying (2.7) with some κ. This yields the proof.
3.2. Formulating the result. As mentioned in Introduction, following [11, Chapter 5] we are going to obtain the process by solving a restricted initial value martingale problem. Recall that D_{R_+}(Γ*) stands for the space of all cadlag maps [0, +∞) =: R_+ ∋ t → γ_t ∈ Γ*, and the evaluation maps ̟_t are defined in (1.5). In a similar way, one defines also the spaces D_{[s,+∞)}(Γ*), s > 0. For s, t ≥ 0, s < t, by F^0_{s,t} we denote the σ-field of subsets of D_{R_+}(Γ*) generated by the family That is, F_{s,+∞} is the smallest σ-field which contains all F_{s,s+n}. Given s ≥ 0 and µ ∈ P_exp, in the definition below, which is an adaptation of the definition in [11, Section 5.1, pages 78, 79], we deal with probability measures P_{s,µ} on (D_{[s,+∞)}(Γ*), F_{s,+∞}).

Definition 3.3.
A family of probability measures {P_{s,µ} : s ≥ 0, µ ∈ P_exp} is said to be a solution of the restricted initial value martingale problem for our model if, for all s ≥ 0 and µ ∈ P_exp, the following holds: is such that (3.14) The restricted initial value martingale problem is said to be well-posed if, for each s ≥ 0 and µ ∈ P_exp, there exists a unique P_{s,µ} satisfying all the conditions mentioned above.
Here by saying "for our model" we mean, along with the Kolmogorov operator L given in (1.3), also its domain D(L) (Definition 3.1) and the class P_exp ⊂ P(Γ*) defined by the property (2.6), see also Proposition 2.4. Note that H defined in (3.13) is P_{s,µ}-integrable, which follows by claim (i) of Proposition 3.2. Note also that the functions G in (3.13) can be taken in the form with all possible choices of m ∈ N, F_1, . . . , F_m ∈ F (see Proposition 2.11), and s ≤ s_1 < s_2 < · · · < s_m ≤ t_1, see [13, eq. (3.4), page 174].
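With G ≡ 1, the martingale property (3.13) reduces to Dynkin's formula: the expectation of F along the process changes by the time integral of the expectation of LF. For a finite-state Markov chain the same identity is an elementary matrix fact and can be checked directly; the chain Q, the observable F and the initial law below are invented, and of course the operator (1.3) acts on configurations rather than on a finite set:

```python
import numpy as np

def expm(M, terms=60):
    """Matrix exponential via a truncated Taylor series (fine for small norms)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# generator of an invented 3-state continuous-time Markov chain (rows sum to 0)
Q = np.array([[-1.0, 0.7, 0.3],
              [ 0.4, -0.9, 0.5],
              [ 0.2, 0.8, -1.0]])
F = np.array([1.0, -2.0, 0.5])      # a bounded observable, playing the role of F
mu0 = np.array([0.5, 0.3, 0.2])     # an initial law, playing the role of mu

t = 1.7
mu_t = mu0 @ expm(t * Q)            # one-dimensional marginal at time t

# Dynkin's formula: mu_t(F) - mu_0(F) = integral over [0, t] of mu_u(QF) du
us = np.linspace(0.0, t, 401)
vals = np.array([mu0 @ expm(u * Q) @ (Q @ F) for u in us])
rhs = float(np.sum((vals[1:] + vals[:-1]) / 2.0) * (us[1] - us[0]))  # trapezoid rule
lhs = float(mu_t @ F - mu0 @ F)
print(abs(lhs - rhs) < 1e-3)   # True
```

This is exactly the finite-dimensional shadow of the Fokker-Planck equation (1.4), with µ_t = µ_0 e^{tQ} in place of the measures on Γ*.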
Definition 3.4. Let s ≥ 0 and a map [s, +∞) ∋ t → µ_t ∈ P(Γ*) with µ_s = µ be given. Such a map is said to be a solution of the Fokker-Planck equation for our model if, for each F ∈ D(L) and any t_2 > t_1 ≥ s, the following holds

µ_{t_2}(F) = µ_{t_1}(F) + ∫_{t_1}^{t_2} µ_u(LF) du. (3.16)

Remark 3.5. By taking G ≡ 1 in (3.13) one comes to the following conclusion. Let {P_{s,µ} : s ≥ 0, µ ∈ P_exp} be a solution as in Definition 3.3. Then, for each s and µ ∈ P_exp, the map [s, +∞) ∋ t → P_{s,µ} ∘ ̟_t^{−1} is a solution of the Fokker-Planck equation in the sense of Definition 3.4. The principal result of this work, Theorem 3.6, states that the restricted initial value martingale problem of Definition 3.3 is well-posed and that the process specified by the measures P_{s,µ} is Markov. The latter means that, for all t > s and B ∈ F_{t,+∞}, the following holds: P_{s,µ}(B|F_{s,t}) = P_{s,µ}(B|F_t), P_{s,µ}-almost surely.
The proof of this statement will be done in the following two steps. First we prove that the restricted initial value martingale problem as in Definition 3.3 has at most one solution. Thereafter, we construct a solution by 'superposing' the collection of measures constructed in [3].
3.3. Strategy of the proof and some comments. Our approach is essentially based on the Fokker-Planck equation (1.4), (3.16), for which a solution, t → µ_t ∈ P_exp, µ_0 ∈ P_exp, was constructed in [3]. In Sect. 6, we introduce approximating models by modifying the jump kernel in such a way that allows one to solve the Fokker-Planck equation directly by constructing stochastic semigroups in a Banach space of signed measures, with the possibility to take Dirac measures δ_γ, γ ∈ Γ*, as the initial conditions. This in turn allows for introducing finite-dimensional marginals of the presumed law of the processes corresponding to these approximating models by means of the transition functions obtained in that way. Then we prove that these marginals satisfy a Chentsov-like condition (see [13, Theorem 3.8.8, page 139]), the same for all approximating models. This yields the existence of cadlag versions of the approximating processes and is used in Sect. 7 to prove that their distributions have accumulation points, the possible distributions of the process in question. Then we prove that such accumulation points solve the martingale problem in the sense of Definition 3.3. To prove uniqueness we again use the Fokker-Planck equation and the construction made in [3]. At this stage, realized in Sect. 5, we show that this equation has a unique solution, which implies that the mentioned accumulation points have coinciding one-dimensional marginals. A classical result (see [13, claim (a) of Theorem 4.4.2, page 184]) is that one would have uniqueness if the one-dimensional marginals were equal for all initial µ ∈ P(Γ*). Since we have such an equality only for µ from a subset of P(Γ*), we turn to the restricted version of the martingale problem [11, Chapter 5]. A crucial element of this version is Lemma 5.1, which states that a solution of the Fokker-Planck equation with µ_0 ∈ P_exp is also in P_exp, and its type satisfies κ_t ≤ κ_T for t ≤ T, where κ_T depends on T and κ_0 only.
The proof of Lemma 5.1 is the most technical element of this part, based on a number of combinatorial results (see also Appendix). By means of Lemma 5.1 we then prove (Theorem 5.3) that (1.4) with µ_0 ∈ P_exp has a unique solution coinciding with the map t → µ_t constructed in [3]. This finally yields the uniqueness of the solution.

The Evolution of States on Γ*

As mentioned above, in the proof of Theorem 3.6 we essentially use the construction of the family of measures {µ_t}_{t≥0} performed in [3]. Thus, we begin by describing this family in a way adapted to the present context.

4.1. Spaces of functions on Γ_0. By (2.16) it follows that each measurable F satisfying |F(γ)| ≤ C exp(βΨ_0(γ)) for some positive β and C is absolutely µ-integrable for each µ ∈ P_exp. This obviously relates to F = KG with G ∈ B_bs, see Remark 2.2 and (2.9). For a and φ as in (3.1) and G ∈ B_bs, let us consider In (4.1), the sums are finite and the integral is convergent in view of the integrability of the jump kernel a. It turns out that holding for all G ∈ B_bs, see [15, Corollary 4.3 and eq. (4.7)]. By (2.9) this yields which by (2.7) points to the possibility of extending L from B_bs to integrable functions. For a given ϑ ∈ R, let G_ϑ stand for the weighted L^1-space equipped with the norm In fact, we have a descending scale {G_ϑ : ϑ ∈ R} such that To estimate the last line in the latter formula we use the inequality xe^{−αx} ≤ 1/(eα), holding for positive x and α, and the fact that B_bs ⊂ G_{ϑ′} for each ϑ′ > ϑ. Thereafter, we obtain Below, by means of this estimate, we extend L to operators acting in the scale {G_ϑ}_{ϑ∈R}, cf. (4.4). Along with G_ϑ we introduce the following Banach spaces. For symmetric k^(n) ∈ L^∞((R^d)^n), n ∈ N, let k be defined by the k^(n) as in (2.1), including also some constant k(∅) = k^(0). Such k constitute a real linear space and can be considered as essentially bounded functions k : Γ_0 → R. Note that the correlation functions k_µ, cf.
(2.7), are such functions. Then, for ϑ ∈ R, we define The linear space K_ϑ equipped with this norm is the Banach space in question. Clearly, cf. (4.4), Note that K_ϑ is the topological dual to G_ϑ, as the value of k on G is given by the formula Let us now define L^∆ by the duality condition, cf. (4.2), ⟨L^∆ k, G⟩ = ⟨k, LG⟩.
Proceeding similarly as in obtaining (4.5), for all ϑ ∈ R and ϑ ′ > ϑ, we get where we have taken into account that a = 1, see (3.1).
In a similar way, one shows that the Cauchy problem in (4.11) has a unique classical solution in G_ϑ, on the time interval [0, T(ϑ′, ϑ)), given by the formula By construction, these solutions of (4.12) and (4.11) satisfy ⟨k_t, G_0⟩ = ⟨k_0, G_t⟩, t < T(ϑ′, ϑ). Note that some of the members of {k_t} can also take negative values. By [20, Theorems 6.1 and 6.2 and Remark 6.3] one proves the following statement.
Proposition 4.1. Let a measurable function, k : Γ 0 → R, have the following properties: Then k is the correlation function for a unique µ ∈ P exp .
Recall that the least κ as in item (c) above is the type of µ of which k is then the correlation function. Set P ϑ exp = {µ ∈ P exp : µ is of type ≤ e ϑ }.
(4.25) Let K ⋆ be the set of all k : Γ 0 → R that possess the properties listed in Proposition 4.1. In [3,Theorem 3.3], it was shown that k t as in (4.21) belongs to K ⋆ whenever k 0 is the correlation function of a certain µ ∈ P exp . In the context of the present study, the relevant results of [3] can be formulated as follows.

The Uniqueness
In this section, we prove that the restricted initial value martingale problem has at most one solution. To this end we use the properties of D(L) stated in Proposition 3.2. In view of Remark 3.5, see also Lemma 5.5 below, the proof of the uniqueness in question amounts to proving that, for each µ ∈ P_exp, the Fokker-Planck equation (3.16) has at most one solution µ_t ∈ P_exp satisfying µ_0 = µ. The main tool for this is control of the type of µ_t, based on the concrete form of the elements of D(L), see Definition 3.1.

Solving the Fokker-Planck equation.
We begin by pointing out that in Definition 3.4 we do not assume that µ t ∈ P exp for t > 0.
Lemma 5.1. Let [0, +∞) ∋ t → µ_t ∈ P(Γ*) be a solution of (3.16) with all F belonging to the linear span of F and a given µ_0 ∈ P^{ϑ_0}_exp. Then, for each T > 0, there exists ϑ_T ∈ R such that, for all t ∈ [0, T], µ_t ∈ P^{ϑ_t}_exp with some ϑ_t < ϑ_T. Note that here we assume that only the initial state µ_0 belongs to P_exp. Also, we assume that µ_t solves (3.16) with F belonging only to a subset of D(L). It turns out that this is enough to solve it for all F ∈ D(L), and even more. Set where K is defined in (2.3) and G is supposed to be such that |G|_ϑ is finite for all ϑ, see (4.3). Let us show that D(L) ⊂ F. Since K is linear, this will follow from the fact that Clearly, θ_τ ∈ L^1(R^d) for each τ ≥ 0 and θ ∈ Θ_ψ, cf. Definition 3.1. Then G^θ_τ = e(θ_τ; ·) ∈ G_ϑ for any ϑ ∈ R, which yields F ⊂ F.
In the case of F given in (2.36), (2.37), we write Both Lemmas 5.1 and 5.2 are proved below. Now assuming that their claims hold true, we prove the next statement -one of the two basic tools of proving Theorem 3.6.
holding for all t_2 > t_1 ≥ 0 and T > t_2. We multiply both parts of the latter equality by an arbitrary G ∈ ∩_{ϑ∈R} G_ϑ, also corresponding to F ∈ F, and then integrate with respect to λ. By claim (b) of Proposition 4.2 this integration and that over [t_1, t_2] can be interchanged, which implies where we have used (4.2), (4.7) and the fact that G ∈ ∩_{ϑ∈R} G_ϑ. This yields (3.16). By Lemma 5.2 and (5.2) we then get that µ_t corresponding to k_t is a solution. Assume now that there exists another solution, say {μ̃_t}_{t≥0} ⊂ P(Γ*), such that μ̃_0 = µ_0. By Lemma 5.1 we have that μ̃_t ∈ P^{θ̃_t}_exp and θ̃_t ∈ (ϑ_0, θ̃_T) for some θ̃_T and all t ≤ T. This means that the corresponding correlation functions, k̃_t, t ≤ T, belong to K_{θ̃_t}. Then the vector q_u = L^∆_{θ̃_T} k̃_u = L^∆_{θ̃_T θ̃_u} k̃_u, see (4.17), lies in K_{θ̃_T}, and hence in K_{θ̃_T+ε} for each ε > 0, see (4.6). Then, for a fixed ε, by (4.13) and (2.7) we have with C(T, ε) = 1/(e T(θ̃_T + ε, θ̃_T)), see (4.14). Let us prove that the following holds A priori, the equality in (5.11) holds only for G corresponding to F ∈ D(L), which includes G = G^{θ_1,...,θ_m}_τ, see (5.4). For τ ∈ (0, 1], by (5.7) and (5.10) we then have Now we write (5.11) for G = G^{θ_1,...,θ_m}_τ and pass to the limit τ → 0+. By the dominated convergence theorem and (5.6) we then obtain that holds for all m ∈ N and θ_1, . . . , θ_m ∈ Θ^+_ψ, see (2.20). For a fixed m ∈ N, the set of functions (x_1, . . . , x_m) → θ_1(x_1) · · · θ_m(x_m) with θ_1, . . . , θ_m ∈ Θ^+_ψ is closed with respect to the pointwise multiplication and separates the points of (R^d)^m. Such functions vanish at infinity and are everywhere positive. Then by the corresponding version of the Stone-Weierstrass theorem [7] the linear span of this set is dense (in the supremum norm) in the algebra C_0((R^d)^m) of all continuous functions that vanish at infinity. At the same time, its subset C_cs((R^d)^m) also has this property.
This allows us to extend the equality in (5.12) to the following holding for all G^(m) ∈ L^1((R^d)^m). Then the passage from this equality to that in (5.11) follows by the fact that G belongs to each G_ϑ, ϑ ∈ R. By (4.7) the equality in (5.11) yields in which L_{θ̃_t ϑ_0} G =: G_1 ∈ ∩_{ϑ∈R} G_ϑ. In view of (5.11), we can repeat (5.13) with G_1 instead of G, and then repeat this procedure again by employing the same arguments. After repeating it n times we arrive at Assume now that θ̃_T > ϑ_0 + T, see Proposition 4.2, which is clearly possible by (4.6). Then we write down the same formula, in the same spaces, for k_t considered in (5.8), i.e., described in Proposition 4.2. This yields Now we take ϑ = θ̃_t + δ(θ̃_t), see (4.15). Then by (4.13), (4.16) and (4.15) we obtain from the latter (5.14) Note that here τ(θ̃_t) ≥ τ(θ̃_T). Then, for t < τ(θ̃_T), the right-hand side of (5.14) can be made as small as one wants by taking n large enough. Since G ∈ G_ϑ is arbitrary, this yields k̃_t = k_t for all such t. The latter implies μ̃_t = µ_t, see Proposition 4.2. The continuation to bigger values of t is made by repeating the same procedure. The proof that these continuations cover the whole of R_+ can be done similarly as in the proof of [3, Theorem 3.3].
Proof of Lemma 5.2. By Lemma 5.1 a solution µ t is in P ϑ T exp for t ≤ T . Let k t be its correlation function, which satisfies the equality in (5.11) with G = G θ 1 ,...,θm τ . As we have shown in the proof of Theorem 5.3 it satisfies this equality for all G such that F = KG ∈ F, see (5.1). Then we apply (5.9) with this F and get the proof.
Remark 5.4. It follows by (5.11) that the claim of Lemma 5.2 holds true for all F = KG with G ∈ ∩ ϑ∈R G ϑ , also for unbounded ones.

Further properties of the solutions.
In this subsection, we prepare the proof of Lemma 5.1. Our ultimate goal here is to estimate the integrals of the solutions of (3.16) taken with the functions F^θ_m(γ) = which can be obtained from the functions defined in (2.36) by setting θ_1 = · · · = θ_m = θ and τ = 0. Note that F^θ_m is unbounded, but integrable for each µ ∈ P_exp, cf. Proposition 2.4. Moreover, for µ ∈ P_exp, By estimating µ_t(F^θ_m) and then applying Proposition 2.4 we will prove the mentioned lemma. To simplify notation, by Φ^m_τ we denote a particular case of the function defined in (2.36), corresponding to the choice θ_1 = · · · = θ_m = θ ∈ Θ^+_ψ with c̃_θ = 1, see (2.22). Namely, for θ ∈ Θ^+_ψ, we set, cf. also (5.15), and consider such functions with τ ∈ (0, 1]. Note that the function defined in (3.12) is a particular case of Φ^m_τ(γ) corresponding to the choice θ = ψ. Then by (3.11) we obtain Here θ_1 = a ∗ θ + θ, see (3.7), and Note that Φ^m_{τ,1} is a linear combination of the elements of F, see (3.3). Hence, any solution of (3.16) should satisfy it also with this function. Let us then estimate LΦ^m_{τ,1}. Proceeding as in (3.4) we obtain Now, to estimate LΦ^m_{τ,1}, we perform the same calculations as in passing to the second line of the right-hand side of (3.11), see (3.9), (3.10). In addition, the third term in the right-hand side of the latter is estimated by employing θ(x) ≤ ψ(x), cf. (2.22), and θ_1(x) ≤ c_a ψ(x), cf. (3.7). This yields see also (3.12). Thereafter, we obtain =: Φ^m_{τ,2}(γ). Here and below we denote θ_0 = θ, and is obtained according to (5.19), and Note that by (3.7) we have θ_k(x) ≤ c^k_a ψ(x) (recall that c̃_θ = 1). To proceed further we introduce the following notation. For m ∈ N and n ∈ N_0, by C_{m,n} we denote the set of all sequences c = {c_k}_{k∈N_0} ⊂ N_0 such that the following holds: c_0 + c_1 + · · · + c_k + · · · = m, c_1 + 2c_2 + · · · + kc_k + · · · = n.
We also prove an estimate that can be deduced in the same way as the one in (5.20). In the first line of (5.27) we also take into account (5.26). The initial condition w 1 (m, 1) = 1 can easily be derived from (5.18). Then iterating back to n = 1 in the first line of (5.27) yields w 1 (m, n) = (m + 1) n − m n . It turns out that the complete solution of (5.27) has the following simple form, cf. (5.28): w k (m, n) = ∆ k m n , where ∆ is the forward difference operator (taken in m) - a standard combinatorial object. Note that the right-hand side of (5.28) makes sense for all k ∈ N 0 : w 0 (m, n) = m n , and w k (m, n) = 0 for all k > n. In view of (5.23) and Proposition 2.12, all the terms of the linear combination in the first line of (5.25) are continuous bounded functions of γ. Hence, so is Φ m τ,n . However, its bound may depend on n, and our aim now is to control this dependence. For ρ > 0, we set Υ m τ,ρ accordingly. To get an upper bound for Υ m τ,ρ we estimate each θ q in the first line of (5.25) as θ q ≤ c q a ψ, q ≥ 0, see (5.21), which by (5.23) and (5.22) yields the corresponding bound, where we have taken into account that q 1 + · · · + q m = c 1 + 2c 2 + · · · + kc k + · · · = n. In view of (5.26), this leads to the following estimate, in which we have used the fact that ∆ k m n = 0 for k > n, see (5.28). To proceed further we use Proposition 2.12 and (3.12), which in turn yields in the last line of (5.30) an estimate based on (5.18). Since Φ m τ,1 is a linear combination of the elements of F, we can repeat (5.33) with this function and obtain a bound which can then be used in (5.33). In view of (5.24), we can repeat this procedure the due number of times and thereby get the final estimate, where we have used (5.32) and the fact that µ t is a probability measure. For t < ρ ε , the last summand in the right-hand side of (5.34) vanishes as n → +∞. Note that V 0 (c; ·) is an unbounded function, which, however, is µ 0 -integrable. Let κ 0 be the type of µ 0 .
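The closed form w k (m, n) = ∆ k m n can be checked numerically; the sketch below (helper names ours) takes forward differences in the variable m and verifies the boundary cases quoted above:

```python
def fwd_diff(f, k):
    """k-th forward difference in m: (Δf)(m) = f(m + 1) - f(m), iterated k times."""
    for _ in range(k):
        f = (lambda g: (lambda m: g(m + 1) - g(m)))(f)
    return f

def w(k, m, n):
    """Candidate solution w_k(m, n) = (Δ^k x^n)|_{x=m} of the recursion (5.27)."""
    return fwd_diff(lambda x: x ** n, k)(m)
```

In particular w 0 (m, n) = m n , w 1 (m, n) = (m + 1) n − m n , and w k (m, n) = 0 for k > n, since m ↦ m n is a polynomial of degree n.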
As in Remark 2.5, we then have µ 0 (V τ (c; ·)) ≤ π κ 0 (V 0 (c; ·)) = κ 0 m ⟨θ q 1 ⟩ · · · ⟨θ q m ⟩ = 2 n (κ 0 ⟨θ⟩) m , see (5.21) and (3.1). Here we have taken into account that q 1 + · · · + q m = n. By (3.12) we have a similar bound; then, similarly as in (5.36), we obtain the corresponding estimate. We use (5.36) and (5.37) in (5.25) and then in (5.35) and arrive at the following estimate, where we have applied the same approach as in obtaining (5.30) and the fact that τ ≤ 1. Since, for each γ ∈ Γ * and an arbitrary sequence τ n → 0, { F 0 τ n (γ)} n∈N is a nondecreasing sequence, by (5.17) and Beppo Levi's monotone convergence theorem we then get from the latter the following bound,
holding for all m ∈ N and t < ρ ε , see (5.31). Since θ ∈ Θ + ψ , we have ⟨θ⟩ = ∥θ∥ L 1 (R d ) , and the latter estimate can be rewritten in the corresponding form, cf. (2.10). The set of functions Θ + ψ defined in (2.20) is closed with respect to pointwise multiplication and separates the points of R d . Such functions vanish at infinity and are everywhere positive. Then by the aforementioned version of the Stone-Weierstrass theorem [7] the linear span of this set is dense (in the supremum norm) in the algebra C 0 (R d ) of all continuous functions that vanish at infinity. At the same time, the maps ⟨µ t , θ ⊗m ⟩, m ∈ N, can be extended to homogeneous continuous monomials on L 1 (R d ). By Proposition 2.4 this yields the proof of the considered statement for t < ρ ε . Since ρ ε is independent of κ 0 , the continuation to all t > 0 can be made by repeating the same arguments.
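The version of the Stone-Weierstrass theorem invoked in the proof above is standard; for the reader's convenience it can be stated as follows:

```latex
\textbf{Theorem (Stone--Weierstrass, $C_0$ version).}
Let $X$ be a locally compact Hausdorff space and let
$\mathcal{A} \subset C_0(X)$ be a subalgebra that
(i) separates the points of $X$ and
(ii) vanishes at no point of $X$, i.e., for each $x \in X$
there is $f \in \mathcal{A}$ with $f(x) \neq 0$.
Then $\mathcal{A}$ is dense in $(C_0(X), \|\cdot\|_\infty)$.
```

Here X = R d ; the everywhere-positivity of the elements of Θ + ψ guarantees (ii), and closedness under pointwise multiplication makes their linear span a subalgebra.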

5.4. Proof of the uniqueness. By employing Lemmas 5.1 and 5.2, see also Remark 3.5, we prove the following statement.
Lemma 5.5. Let P 1 s,µ and P 2 s,µ be two solutions of the restricted initial value martingale problem whose one-dimensional marginals coincide for all t ≥ s, s ≥ 0 and µ ∈ P exp . Then P 1 s,µ = P 2 s,µ for all s and µ.
Proof. By Kolmogorov's extension theorem it is enough to prove that all finite-dimensional marginals of both path measures coincide. In view of claim (i) of Proposition 2.11, to this end we have to show that (5.39) holds, in which the functions F t i , i = 1, . . . , n, see (3.3), ought to be taken with all possible θ i ∈ Θ ψ , τ i > c θ i and t i satisfying s ≤ t 1 ≤ · · · ≤ t n . Assume that (5.39) holds with a given n; let us prove its validity for n + 1. Since F t i (γ) > 0, see (2.30), we may normalize by these functions and then define two path measures Q i on (D [t n ,+∞) , F t n ,+∞ ). Since both P i satisfy (3.14), both Q i satisfy its analog for all u 2 > u 1 ≥ t n , see Remark 3.5. By the inductive assumption and claim (iv) of Proposition 3.2 it follows that µ 1 t n = µ 2 t n =: µ ∈ P exp . By Lemma 5.1 we then conclude that µ i t ∈ P exp , i = 1, 2, for all t > t n . That is, both Q i satisfy all three conditions of Definition 3.3 and thus are solutions of the restricted initial value martingale problem. Hence, µ 1 t = µ 2 t by the assumption of the lemma. In particular, (5.39) holds for n + 1, which completes the proof.
Proof. By Remark 3.5 both P (i) s,µ ◦ ̟ t −1 , t ≥ s, solve (3.16), which by Theorem 5.3 yields the coincidence of their one-dimensional marginals, holding for all t ≥ s and µ ∈ P exp . Then the proof follows by Lemma 5.5.

6. The Existence: Approximating Models
The aim of this and the subsequent sections is to prove the following statement, which is the second cornerstone in the proof of Theorem 3.6. The basic idea is to approximate the model by auxiliary models described by L α , α ∈ [0, 1], with L 0 coinciding with L defined in (1.3). For α ∈ (0, 1], the solution {P α s,µ : s ≥ 0, µ ∈ P exp } of the corresponding restricted initial value martingale problem for L α will be constructed in a direct way. Then the proof of Theorem 6.1 will be done by showing the weak convergence P α s,µ ⇒ P s,µ as α → 0, and then by proving that {P s,µ : s ≥ 0, µ ∈ P exp } is the solution in question. In the current section, we introduce the auxiliary models and study their relations with the basic model. The construction of the path measures P α s,µ will be performed in the subsequent section.

6.1. The approximating models. Recall that the functions ψ, Ψ 0 and Ψ 1 were introduced in (2.13). Along with them we will use the kernels a α . Note that a 0 (x, y) = a(x − y) and a α (x, y) = a α (y, x) for α ∈ (0, 1]. Now let L α be defined as in (1.3) with a replaced by a α . Then, keeping in mind (2.8), (4.2) and (4.7), we define L ∆,α by the corresponding expression. One observes that L ∆,0 coincides with the operator introduced in (4.8). For α ∈ (0, 1], L ∆,α is then obtained by replacing in (4.8) a(x − y) by a α (x, y) ≤ a(x − y). Hence, L ∆,α clearly satisfies (4.13) and similar estimates. Then by repeating the construction realized in subsection 4.2 we obtain the family of bounded operators {Q α ϑ ′ ϑ (t) : t ∈ [0, T (ϑ ′ , ϑ))} (resp. {H α ϑϑ ′ (t) : t ∈ [0, T (ϑ ′ , ϑ))}), ϑ ′ > ϑ, acting from K ϑ to K ϑ ′ (resp. from G ϑ ′ to G ϑ ). By employing these families we then define the corresponding vectors with k 0 ∈ K ϑ and G 0 ∈ G ϑ ′ . Note that, for α = 0, these vectors coincide with those introduced in (4.21) and (4.22), respectively, and thus they satisfy (4.23) for all α ∈ [0, 1].
Moreover, as in Proposition 4.2, for each ϑ 0 ∈ R and µ ∈ P ϑ 0 exp , by (6.4) with k 0 = k µ we obtain a family {µ α t : t ≥ 0, µ α 0 = µ} ⊂ P exp , µ α t ∈ P ϑ t exp , such that µ α t (F θ ) = ⟨k α t , e(θ, ·)⟩, θ ∈ L 1 (R d ). (6.5) Next, by repeating the construction used in the proof of Theorem 5.3 one obtains that the map t → µ α t is the unique solution of the corresponding equation, holding for all F : Γ * → R which can be written as F = KG with G ∈ ∩ ϑ∈R G ϑ , see Remark 5.4.
Here and below, D(L) is as in Definition 3.1.
6.2. The weak convergence. Our aim now is to prove that the families {µ α t : t ≥ 0, µ α 0 = µ} ⊂ P exp , α ∈ [0, 1], constructed above have the following property.

Lemma 6.2. For each t > 0, it follows that µ α t ⇒ µ t as α → 0, where we mean the weak convergence of measures on the Polish space Γ * .
We begin by proving the convergence of the corresponding correlation functions.

Lemma 6.3. For each t > 0, one finds θ̄ t > ϑ t such that the convergence stated in (6.6) holds.

Proof. We recall that k t satisfies (4.26) with L ∆ ϑ T corresponding to α = 0. Note that the domains of L ∆,α ϑ are the same for all α ∈ [0, 1]. Assume now that the convergence stated in (6.6) holds for a given t ≥ 0. Note that k 0 = k α 0 = k µ 0 ; hence, this assumption is valid at least for t = 0. Let us prove that there exists s 0 > 0 - possibly dependent on t - such that this convergence holds for all t + s, s ≤ s 0 . Keeping in mind that Q α and k α t satisfy the corresponding analogs of (4.20) and (4.26), respectively, we write (6.7), where θ̄ t = ϑ t + δ(ϑ t ) and ϑ t = ϑ 0 + t. Note that the left-hand side of (6.7) is considered as a vector in K θ̄ t . Both Q θ̄ t ϑ t (s) and Q α θ̄ t ϑ t (s) are defined only for s < τ (ϑ t ), see (4.15). At the same time, for each ϑ ′ > ϑ, Q ϑ ′ ϑ (0) = Q α ϑ ′ ϑ (0) = I ϑ ′ ϑ , where the latter is the embedding operator, see (4.6). Keeping this and (4.20) in mind we rewrite (6.7) as in (6.8), where L ∆,α is given in (4.8) with a(x − y) replaced by ã α (x, y) = a(x − y)(1 − ψ α (x)), see (2.24). The choice of s and ϑ 1 , ϑ 2 should be made in such a way that the series as in (4.18) converge for the corresponding operators. Set ϑ 1 = ϑ t + δ(ϑ t )/2. We use this in (4.14) and obtain the estimate (6.9). Then, for some ǫ ∈ (0, 1), we set s 0 = ǫτ (ϑ t )/2 = ǫT (θ̄ t , ϑ 1 ). (6.10) Since the map ϑ → T (θ̄ t , ϑ) is continuous, one can find ϑ 2 ∈ (ϑ 1 , ϑ t ) such that s 0 < T (θ̄ t , ϑ 2 ), cf. (6.10), which together with (6.9) yields that all three operators Q θ̄ t ϑ 1 (s − u), Q θ̄ t ϑ 2 (s − u) and Q α ϑ 1 ϑ t (u) in (6.8) are defined for all s ≤ s 0 and u ∈ [0, s]. Now we take G ∈ G θ̄ t and set G s = H ϑ 2 θ̄ t (s)G, s ≤ s 0 . Then G s ∈ G ϑ 2 ⊂ G ϑ t , which yields by (6.8) the representation (6.11). Thus, we have to prove that Y α (s) → 0 as α → 0.
Since L ∆,α consists of two terms, see (4.8), it is convenient for us to write Y α (s) = Y (1) α (s) + Y (2) α (s). To estimate both terms we take into account that e(τ y ; η) ≤ 1 and that k α t+u (η) ≤ exp(ϑ t+u |η|) ≤ exp(ϑ 1 |η|), see claim (a) of Proposition 4.2. By these estimates we obtain from (6.12) and (6.13) the bound Y (i) α (s) ≤ ∫ R d h (i) α (y) g (i) s (y) dy, i = 1, 2, (6.14) where h (i) α and g (i) s are given in (6.15). Let us show that g (1) s is integrable for all s ≤ s 0 . To this end we use the fact that G s−u ∈ G ϑ 2 for all s ≤ s 0 and u ≤ s. Then its norm can be estimated as in (6.16), which is finite by our choice of ϑ 2 and s 0 . Now let us turn to (6.15). First of all, we note that h (1) α (y) ≤ 1, see (3.1). The function r → 1 − ψ̄ α (r) is increasing. Then, for a certain r > 0, we obtain the estimate (6.17), where the second term of the last line was obtained by Markov's inequality and (3.2) together with the estimate 1 − ψ̄ α (r) ≤ 1. Now we set in (6.17) r = α −1/(d+2) and conclude that, for each y, h (1) α (y) → 0 as α → 0. Then by Lebesgue's dominated convergence theorem, (6.16) and (6.14) we conclude that Y (1) α (s) → 0 as α → 0, holding for all s ≤ s 0 . Now we turn to (6.13), by which we get h (2) α (y) = ∫ R d ψ̄ α (y) a(x − y) dx = ψ̄ α (y) = 1 − ψ α (y), and g (2) s (y) = g (1) s (y). Hence, also Y (2) α (s) → 0 as α → 0, holding for all s ≤ s 0 , which by (6.11) yields the proof of (6.6) for t + s with s ≤ s 0 whenever it holds for t. To complete the proof let us consider the sequences defined in (6.18), cf. (6.10). Since k α 0 = k 0 = k µ , the argument above yields the stated convergence for t ≤ sup l t l = lim l t l . Thus, our aim is to show that t l → +∞ as l → +∞. Assume that sup l t l = t * < ∞. By the first line in (6.18) we have t l = s 01 + · · · + s 0l , and hence s 0l → 0 in this case. Passing in the second line of (6.18) to the limit l → +∞ (τ is continuous), we get that t * should satisfy τ (ϑ t * ) = τ (ϑ 0 + t * ) = 0, which is impossible since τ (ϑ) > 0 for all ϑ ∈ R. This completes the proof with the stated θ̄ t .
Proof of Lemma 6.2. By Lemma 6.3 and (2.9) it follows that µ α t (F ) → µ t (F ) as α → 0, holding for all F ∈ F, see (5.1). Then the proof follows by the fact that F ⊂ F, see (5.2), and claim (ii) of Proposition 2.11.

Below we use the following fact, which can be considered as a complement to Lemma 6.2.

Lemma 6.4. Assume that a sequence {ν n } n∈N ⊂ P ϑ exp , ϑ ∈ R, cf. (4.25), satisfies ν n ⇒ ν as n → +∞ for some ν ∈ P(Γ * ). Then ν ∈ P ϑ exp . Furthermore, for each G ∈ ∩ ϑ G ϑ , it follows that ⟨k ν n , G⟩ → ⟨k ν , G⟩, n → +∞. (6.19)

Proof. By assumption, ν n (F ) → ν(F ) for each F ∈ F, see (3.3) and Proposition 2.12. By (2.36), (5.17) and (5.16), for given m ∈ N, θ ∈ Θ + ψ and τ ∈ (0, 1], we then get the corresponding convergence. Then the proof of ν ∈ P ϑ exp follows by the monotone convergence theorem and Proposition 2.4. The validity of (6.19) for G such that KG ∈ F follows by the fact just mentioned, i.e., just because ν has a correlation function. The extension of (6.19) to all G ∈ ∩ ϑ G ϑ is then made by the same arguments as in the proof of (5.11).

7. The Existence: Approximating Processes
In this section, we prove Theorem 6.1 by constructing path measures for the models described by L α , α ∈ (0, 1], introduced in the preceding section. This will be done in a direct way by means of the corresponding Markov transition functions.
7.1. The Markov transition functions. The transition functions in question will be obtained by applying a semigroup to Dirac measures, where δ γ is the Dirac measure with atom at γ ∈ Γ * and S α = {S α (t)} t≥0 is a stochastic semigroup of linear operators related to the Kolmogorov operator L α . Hence, we begin by constructing S α .
7.1.1. Stochastic semigroups. A more detailed presentation of the notions and facts introduced here can be found in [1, 2, 32]. Let E be an ordered real Banach space, and E + be a generating cone of its positive elements. Set E +,1 = {x ∈ E + : ∥x∥ E = 1} and assume that the norm is additive on E + , i.e., ∥x + y∥ E = ∥x∥ E + ∥y∥ E whenever x, y ∈ E + . In such spaces, there exists a positive linear functional ϕ E such that ϕ E (x) = ∥x∥ E for x ∈ E + , see (7.2). A C 0 -semigroup S = {S(t)} t≥0 of bounded linear operators on E is said to be stochastic (resp. substochastic) if ∥S(t)x∥ E = 1 (resp. ∥S(t)x∥ E ≤ 1) for all t > 0 and x ∈ E +,1 . Let D ⊂ E be a dense linear subspace, D + = D ∩ E + , and let (A, D), (B, D) be linear operators in E. A paramount question of the theory of stochastic semigroups is under which conditions the closure (resp. an extension) of (A + B, D) is the generator of a stochastic semigroup. Classical works on this subject trace back to Feller, Kato, Miyadera, etc.; see [1, 32]. In the present work, we will use a result of [32], which we present now in a form adapted to the context. To proceed we need to further specify the properties of the space E.
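A finite-dimensional toy illustration of these notions (entirely ours, not part of the model): take E = R 2 with the ℓ 1 norm, which is additive on the positive cone, and let the generator be the Q-matrix of a two-state Markov chain. The induced semigroup is stochastic in the above sense; a minimal sketch using the closed-form matrix exponential:

```python
import math

def two_state_semigroup(t, a, b):
    """P(t) = exp(tQ) for the two-state Markov generator Q = [[-a, a], [b, -b]]
    with a, b > 0.  Each row of P(t) is a probability vector, so the map
    x -> x P(t) preserves the l1 norm of nonnegative vectors: the semigroup
    is stochastic."""
    s = a + b
    e = math.exp(-s * t)
    return [[(b + a * e) / s, (a - a * e) / s],
            [(b - b * e) / s, (a + b * e) / s]]
```

Rows of P(t) stay nonnegative and sum to 1 for every t ≥ 0, and P(t)P(s) = P(t + s), which is exactly the stochastic C 0 -semigroup property in this two-dimensional example.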
Assumption 7.1. There exists a linear subspace E ⊂ E which has the following properties: (ii) there exists a norm ∥ · ∥ E on E that makes it a Banach space.
(iii) E + := E ∩ E + is a generating cone in E; ∥ · ∥ E is additive on E + and hence there exists a linear functional ϕ E on E such that ∥x∥ E = ϕ E (x) whenever x ∈ E + , cf. (7.2). (iv) The cone E + is dense in E + . (iv) there exist c > 0 and ε > 0 such that the corresponding estimate holds. Then the closure of (A + B, D) in E is the generator of a stochastic semigroup.

Proof. Obviously, for each n ∈ N and β > 0, the inclusion M β ⊂ M n holds. Then it is enough to prove the validity of (7.5) for M β . Let us first prove the inclusion M β ⊂ M * . For a given µ ∈ M β , let {µ n } n∈N ⊂ M β be a sequence such that ∥µ − µ n ∥ → 0. Fix n and let Γ = P ∪ N be the Hahn decomposition for µ − µ n , i.e., µ(A) ≥ µ n (A) for each A ⊂ P, and µ(A) ≤ µ n (A) for each A ⊂ N. Then the corresponding estimate holds, where we have taken into account that |µ n |(Γ c * ) = 0. The assumed convergence µ n → µ then yields that µ ∈ M * . To prove the opposite inclusion we take an arbitrary µ ∈ M * and write its Jordan decomposition µ = µ + − µ − . For a given n ∈ N, let I n be the indicator of the set Γ * ,n defined in (2.17). Then both µ ± n := I n µ ± are in M β . At the same time, by (2.17) the sequence of functions J n (γ) := 1 − I n (γ) converges to zero pointwise on Γ * . Since µ ∈ M * , the corresponding integrals vanish in the limit. By the triangle inequality we then obtain that ∥µ − µ n ∥ → 0, where µ n := µ + n − µ − n ∈ M β . By the very definition of the spaces M n , M β and M * , we conclude that they have generating cones of positive elements consisting of those µ that take nonnegative values only.

Proof. The first part of the statement follows directly from (7.5). The second part is obtained by the construction used in (7.6).
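The Hahn-Jordan mechanics used above can be made concrete on a finite state space (a toy sketch with our own helper names): a signed measure splits as µ = µ + − µ − with disjoint supports, and the total variation norm is additive on the positive cone, so ∥µ∥ = ∥µ + ∥ + ∥µ − ∥:

```python
def jordan(mu):
    """Jordan decomposition of a signed measure on a finite set (dict: point ->
    signed mass).  The positive part lives on the Hahn set P = {mu > 0},
    the negative part on N = {mu < 0}."""
    mu_plus = {x: v for x, v in mu.items() if v > 0}
    mu_minus = {x: -v for x, v in mu.items() if v < 0}
    return mu_plus, mu_minus

def tv_norm(mu):
    """Total variation norm; additive on the cone of nonnegative measures."""
    return sum(abs(v) for v in mu.values())
```

The disjointness of the supports is what makes the norm split exactly, mirroring the additivity of ∥ · ∥ on the positive cone assumed above.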
Our aim is to define its 'predual' L †,α , acting according to the rule µ(L α F ) = (L †,α µ)(F ), (7.9) for appropriate µ ∈ P(Γ * ) and F : Γ * → R, and then to use it to define the corresponding operators acting in the spaces of measures just introduced. Obviously, we can restrict ourselves to the elements of M * . By (6.3) and (6.2) we thus obtain it in the form (7.10), where A is the multiplication operator by the function −Φ α defined in (7.7). In view of (7.8), the domain of A is to be D = {µ ∈ M * : Φ α µ ∈ M * } = M 1 . (7.11) To define B we introduce the measure kernel (7.12), with γ ∈ Γ * and A ∈ B(Γ * ). By (7.7) we then obtain the representations (7.13) and (7.14). Moreover, for µ ∈ M + 1 , by (7.13) and (7.14) we obtain the estimate (7.15). Hence, we can take M 1 as the domain of B and then define L †,α by (7.10) with domain D = M 1 , see (7.11).
In the sequel, we will use one more property of B. By (7.12), (7.14) and (7.4) we get the corresponding identity. By (2.13) a further estimate follows; we apply this and (7.8) in (7.17) and obtain (7.18). This yields the following extension of (7.15), holding for all n ∈ N. Since ∥Aµ∥ n ≤ α −1 ∥µ∥ n+1 , by (7.19) we also get that L †,α : M n+1 → M n for all n ∈ N 0 , which can be used to define the powers of L †,α : (L †,α ) m : M n+m → M n , n ∈ N 0 , m ∈ N. (7.20) Here - and in the sequel in similar expressions - M 0 (corresponding to M n with n = 0) is understood as M * . Let us now define a bounded linear operator L †,α β ′ β : M β → M β ′ , β ′ < β, the action of which is the same as that of the unbounded operator L †,α = A + B defined in (7.10) and (7.14). For a given µ ∈ M * , let µ = µ + − µ − be its Jordan decomposition. Then the estimate (7.21) holds for all µ ∈ M β . Here we have used the additivity of the norms on the positive cone as well as the positivity of B and −A. By (7.8) and the evident inequality xe −κx ≤ 1/eκ, holding for all positive x and κ, we obtain a bound in terms of exp(βΨ 0 (γ)). (7.22) By (7.22), for µ ∈ M + β , we then get (7.23). Next, similarly as in (7.17), a further estimate follows.
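The elementary inequality invoked here follows by one-variable calculus; for completeness:

```latex
f(x) = x e^{-\kappa x}, \qquad
f'(x) = e^{-\kappa x}\,(1 - \kappa x) = 0 \iff x = \tfrac{1}{\kappa},
\qquad
\max_{x > 0} f(x) = f\!\left(\tfrac{1}{\kappa}\right)
= \tfrac{1}{\kappa}\, e^{-1} = \tfrac{1}{e\kappa}.
```

Since f increases on (0, 1/κ] and decreases afterwards, the bound xe −κx ≤ 1/eκ holds for all positive x and κ.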
We combine this estimate with (7.23) and (7.21) to obtain the stated boundedness of L †,α β ′ β .
In a similar way, for each n ∈ N, we also obtain a similar bound, cf. (4.13). By (7.20), for each n ∈ N and µ ∈ M β , we have that (L †,α ) n µ ∈ M β ′ , β ′ < β, and the corresponding estimate holds.

Lemma 7.6. For each α ∈ (0, 1], the closure of (L †,α , M 1 ) in M * is the generator of a stochastic semigroup S α = {S α (t)} t≥0 in M * such that S α (t) : M n → M n for each n ∈ N. The restrictions S α (t)| M n constitute a C 0 -semigroup on M n . Moreover, for each β > 0 and β ′ ∈ (0, β), S α (t) : M + β → M + β ′ for t < T α (β, β ′ ), see (7.24).

Proof. The construction of the semigroup in question will be made, in particular, by showing that all the conditions of Proposition 7.2 are met. We thus begin by checking whether each of the spaces M n and M β enjoys the properties listed in Assumption 7.1. By Lemma 7.4 the density assumed in (i) is guaranteed. Each of these spaces is a Banach space with the corresponding norm, which was already mentioned in the course of their introduction. The properties assumed in (iii) are evident, whereas (iv) follows by Corollary 7.5. Thus, we can start checking the validity of the conditions imposed in Proposition 7.2. Recall that both A and B are (densely) defined on the domain D = M 1 , see (7.11) and Lemma 7.4, and A is the multiplication operator by the function (−Φ α ). Hence, condition (i) of Proposition 7.2 is satisfied. Moreover, A generates the semigroup S consisting of the operators (S(t)µ)(dγ) = exp (−tΦ α (γ)) µ(dγ), (7.26) which obviously holds for all µ ∈ M * . To check whether S is strongly continuous in M * , for a given µ ∈ M * and ε > 0, we have to find δ > 0 such that ∥µ − S(t)µ∥ < ε for all t < δ. Since M * is the ∥ · ∥-closure of M 1 (by Lemma 7.4), for the chosen µ one finds µ ′ ∈ M 1 such that ∥µ − µ ′ ∥ < ε/3. Then by (7.26) and (7.27) we get ∥µ − S(t)µ∥ ≤ ∥µ ′ − S(t)µ ′ ∥ + 2ε/3 ≤ t ∥Aµ ′ ∥ + 2ε/3 ≤ (t/α) ∥µ ′ ∥ 1 + 2ε/3, which completes the proof for M * . Clearly, S(t) : M + n → M + n , and the domain of the trace of A in M n is D n = M n+1 .
Then the proof that S(t)| M n is strongly continuous in M n can be performed similarly as in (7.28). Thus, condition (ii) of Proposition 7.2 is met. In view of (7.19), to complete the proof of (iii) we have to show that ϕ((A + B)µ) = 0 whenever µ ∈ M + 1 , which is obviously the case by (7.16). Then it remains to show that, for a fixed n ∈ N,

∫ Γ * Ψ n 1 (γ) (L †,α µ)(dγ) ≤ c ∫ Γ * Ψ n 1 (γ) µ(dγ) − ε ∫ Γ * Φ α (γ) µ(dγ), (7.29)

holds for each µ ∈ M + n+1 and some positive c and ε, possibly dependent on n. In view of the following estimate, cf. (7.8), it is enough to show (7.29) with ε = 0 and sufficiently big c. By (7.9) this amounts to showing that L α Ψ n 1 (γ) ≤ cΨ n 1 (γ), γ ∈ Γ * .

(Chentsov) Assume that there exist C α > 0 and δ > 0 such that, for each triple t 1 , t 2 , t 3 , the following holds: W α (t 1 , t 2 , t 3 ) ≤ C α |t 3 − t 1 | 2 , t 3 − t 1 < δ.
are in P ϑ exp with ϑ independent of n and u ∈ [s m , t 2 ] (see (7.52)). We also let ν u (A) = C −1 P s,µ (G · (1 A ◦ ̟ u )), u ∈ [s m , t 2 ], A ∈ B(Γ * ), with C = P s,µ (G). Then ν n,u ⇒ ν u for all u ∈ [s m , t 2 ]. By Lemma 6.4 this yields ν u ∈ P ϑ exp , and hence the corresponding correlation functions satisfy k α n u , k u ∈ K ϑ for all u ∈ [s m , t 2 ] and n ∈ N, see (2.6). To prove P s,µ (H) = 0 we rewrite it, cf. (3.13), as

P s,µ (F t 2 G) − P s,µ (F t 1 G) − ∫ t 1 t 2 P s,µ (K u G) du = 0. (7.53)

Note that the corresponding c ′ runs over the whole C m,n+1 when c runs through C m,n . Then these three lines, denoted S n+1 , take the required form, where we have taken into account that ∑ j jc j = n + 1, see (5.22). This completes the proof of (5.24) and (5.25). It then remains to prove (5.26). For n = 1, C m,1 is a singleton consisting of c = (m − 1, 1, 0, . . . ), which yields the claim in this case. Now we set in the second line of (7.59) V τ (c ′ ; γ) ≡ 1 and calculate S n+1 with this V τ , which is equal to the first three lines of (7.57). That is,

∑ c ′ ∈C m,n+1 C m,n+1 (c ′ ) = ∑ c∈C m,n C m,n (c) (c 0 + c 1 + · · · + c n ) = m ∑ c∈C m,n C m,n (c),

where we once again have used the first equality in (5.22). Now (5.26) is obtained from the latter by induction on n.