Trait-dependent branching particle systems with competition and multiple offspring

In this work we model the dynamics of a population that evolves as a continuous-time branching process with a trait structure and ecological interactions in the form of mutations and competition between individuals. We generalize existing microscopic models by allowing individuals to have multiple offspring at a reproduction event. Furthermore, we allow the reproduction law to depend both on the trait type of the parent and on the mutant trait type. We look for tractable large population approximations. More precisely, under natural assumptions on the branching and mutation mechanisms, we establish a superprocess limit as the solution of a well-posed martingale problem. Standard approaches do not apply in our case due to the lack of the branching property, which is a consequence of the dependency created by the competition between individuals. To show uniqueness we therefore develop a generalization of Dawson's Girsanov Theorem that may be of independent interest.


Introduction
The study of interactions between organisms and their environment which influence their reproductive success and contribute to genotype and phenotype variation is one of the main questions in evolutionary ecology and population genetics. In this paper, we are interested in modelling the dynamics of populations by emphasizing the ecological interactions, namely the competition between individuals for limited resources, where each individual is characterized by a quantitative trait which remains constant during the individual's life and which is passed on to offspring unless a mutation occurs. Motivated by the work of Bolker-Pacala [5] and Dieckmann-Law [13], several models have been rigorously developed in this context. Firstly, Fournier and Méléard [20] considered spatial seed models. Secondly, Champagnat et al. [6], Jourdain et al. [25], and Méléard and Viet Chi [32] studied phenotypic trait structured populations when the mutation kernel behaves essentially as a Gaussian law or belongs to the domain of attraction of a stable law. Finally, Méléard and Viet Chi [31] also considered structured populations whose dynamics depend on the past. In these works, the population is essentially modelled by a continuous time pure birth-death process with mutation. The birth and death rates of this Markov process may depend on each individual's trait and the interactions between them. While traits are normally hereditarily transmitted from a parent to its offspring, a mutation may occur with a (small) probability. In this case, the offspring makes an instantaneous mutation step at birth to a new trait value. This mutation step is driven by a mutation kernel (a probability kernel) that depends only on the parent trait. The authors then pass from the microscopic description of the population on the level of individuals to a macroscopic description on the level of population mass distribution in the trait space.
It is important to point out that whereas most organisms rely on binary reproduction for propagation, many others use alternative mechanisms which include multiple offspring in order to reproduce and remain competitive; see for example [1], [7] and [39]. Thus, in this work, we are interested in generalizing this microscopic model in that we allow individuals to have multiple offspring at a reproduction event. More precisely, we consider a general offspring distribution where the number of children produced by each individual depends on its trait as well as on the new trait that appears in case a mutation occurs. We have a number of scenarios in mind in which such a dependence may occur. One such scenario is modelling so-called "jackpot" events, introduced in a seminal paper of Luria and Delbrück [30], in which particular mutants rapidly create a sizeable mutant subpopulation; in the original famous Luria and Delbrück experiment because they are more resistant to detrimental effects of the environment.
In our model, this mutant subpopulation is created instantaneously and we refer to them simply as mutant offspring. Let us consider a particular scenario, namely the evolution of different strains of virus populations or other microparasites with fast adaptation, in order to motivate a dependence of the offspring distribution on the parent as well as on the mutant strain.
Virus populations evolve as subpopulations within hosts that in turn infect other hosts and thus create new evolving subpopulations. Within each host, subpopulations of different strains of the virus will generally be present (due to the initial infection but in particular due to mutation during the infection) and their evolution is affected by the immune system, which reacts to the presence of particular strains that it has already recognized. This leads to an increased death rate of a prevalent subpopulation, which we model by a competition term, effectively a death rate that depends on the size and proximity (in type space) of the entire virus population. A mutant type (sufficiently different from the parent type, say) may on the other hand quickly establish a sizeable subpopulation, whose size could also depend on the intrinsic fitness of its trait type, before it is targeted by the immune system ("immune escape"). Admittedly, a dependence on the size and proximity of the entire virus population (as in the competition term), which is shaping the current immune system and its response, could be even more desirable. But we view the dependence on the parent type as a first step in this direction, which is in particular realistic if a mutation to an epitope site results in a completely new phenotype of the virus' antibody-binding sites. (We note that our general type space and set-up may also be used to explicitly model the interplay between frequent epitope mutations and relatively rare non-epitope mutations affecting the fitness, see Strelkowa and Lässig [44] for a discussion.) On the level of the hosts a similar dynamic is at play. The infection of a new host (with a particular virus type) is affected by the (local) availability of hosts that are yet uninfected by a particular strain.
Thus again new mutants may have an initial advantage that could depend on a strain's intrinsic fitness as well as on how different it is from previous strains. We refer to Remark 4 (c) for a choice of mutation dependent offspring distributions that could be suitable in modelling the situation described above.
We note that there is an extensive research literature on analysing the spread of different virus strains (and their genealogies), see for example [44,12,34,35] and references therein. Mathematically rigorous results for models with fixed total population size and particular type spaces can be found in recent work by Schweinsberg [42,43], see Dawson and Greven [11] for a general treatment of such models.
Apart from an interpretation of the type space as a space of genetic or phenotypic traits we could also go back to the interpretation of the type space as a spatial location of individuals (or a combination of the two). In a spatial setting Fournier and Méléard [20] interpreted individuals as plants and the production of new individuals in the type space as a result of seed dispersal (with immediate maturation). But unlike in the model of [20], seeds are not always dispersed individually but may be dispersed in groups, in particular when the seeds within fruit are consumed by animals and carried over larger distances, see [8,41] for some recent biological literature highlighting the importance of these dispersal mechanisms. How many of these seeds establish themselves at their new location may depend on the parent location, which can influence how many viable seeds were produced, as well as on the new location, which may be more or less favorable. Finally, we point out that our model can easily be adjusted to also include mutation of individuals during their lifetime. In this case, the "birth" of one (or multiple) individuals at a new location in the type space would happen at the same time as the "death" of the individual at the original location. (If we think again of a geographical space then this would be migration.) Details are left to the interested reader.
As in previous work, the main goal of our work is to look for macroscopic approximations, namely for tractable large population approximations of the individual-based models when the size of the population tends to infinity, combined with frequent mutation and accelerated birth and death. The latter is known as allometric demographies or allometric effects (larger populations made up of smaller individuals who reproduce and die faster); see for example [6, Section 4.2] and references therein for background. Basically, this leads to systems in which organisms have short lives and reproduce fast while their colonies or populations grow or decline on a slow timescale. We proceed with tightness-uniqueness arguments inspired by the classical theory of superprocesses [10] and [28] without interaction. Clearly, difficulties arise due to the lack of the branching property, which is a consequence of the dependency created by the competition between individuals. Nevertheless, following ideas of Fournier and Méléard [20] and Champagnat et al. [6], we introduce a new infinite dimensional martingale problem. In the limit, we obtain a measure-valued process defined as the solution of this nonlinear martingale problem. The proof of uniqueness of such a martingale problem requires substantial work. We develop a new Girsanov type theorem which allows us to get rid of the non-linearities caused by the competition. This Girsanov theorem may be viewed as a generalization of Dawson's Girsanov Theorem [9] and may also be of independent interest. The effect of multiple branching makes the analysis more complicated due to the loss of some moments. Therefore, we adopt the localization procedure introduced by Stroock [45] and generalized by He [21] to the measure-valued context. It is important to point out that the nonlinear superprocess obtained in the limit generalizes, for instance, the work of [15], [19] or [37] by incorporating interaction.
On the other hand, the general reproduction law of the approximating population system yields a limiting process with a general branching mechanism, which extends the models proposed by Fournier and Méléard [20], Champagnat et al. [6], Jourdain et al. [25] and Etheridge [16] to study spatially interactive structured populations. Let us remark that our model allows the description of massive reproduction events which translate into discontinuities of the limiting process. This can be seen as a first step towards analyzing superprocesses with interactions that possess a jump structure. The plan of the rest of this paper is as follows. Section 2 is devoted to the introduction of the individual-based model we are interested in. Here, we also prove some useful properties of the model. The main convergence result based on a large population limit is stated in Section 3. In Section 4, we prove tightness of the laws of the particle processes and we identify the limiting values as solutions of a nonlinear martingale problem. The uniqueness of such a martingale problem is addressed in Sections 5 and 6.

The individual-based model
In this section, we formally introduce our interacting particle Markov process for Darwinian evolution in an asexual population with non-constant population size in which each individual is characterized by hereditary types. Our model's construction starts with a microscopic description of a population in which the adaptive traits influence the birth rate, the mutation process, the death rate, and how the individuals interact with each other and their external environment. More precisely, we assume that the phenotype of each individual is described by a quantitative trait. Throughout the paper, we will assume that the trait space X is a Polish space that is locally compact.
In the following we shall consistently refer to x ∈ X as either a "trait" or a "type". We consider a parameter K ∈ N that scales the resources or area available. It is called the "system size" by Metz et al. [33]. It will become apparent later that this parameter is linked to the size of the population: large K means a large population (provided that the initial condition is proportional to K). We have the following definition of the stochastic interacting individual system, in which individuals behave independently apart from the competitive interactions: 1. Birth and mutation: An individual of trait type x ∈ X gives birth at rate b K (x) ∈ R + . The number of offspring born at each birth time is controlled by a Markov kernel π K on X 2 × N, i.e. by a family of offspring distributions indexed by X × X such that (x, h) → π K (x, h, ·) is measurable and Σ_{k≥1} π K (x, h, k) = 1 for all x, h ∈ X . More precisely, each individual of type x gives birth independently to k clonal individuals with probability π K (x, x, k)(1 − p(x)), where p(x) ∈ [0, 1] is the mutation probability of an individual with trait x ∈ X . Otherwise, it produces k individuals of type h with probability p(x)π K (x, h, k)m K (x, dh), where m K (x, ·) is a probability measure on X called the mutation kernel or mutation step law. Note here that the new type h depends only on x, while the number of individuals produced depends on both x and h.
2. Natural death: An individual of type x ∈ X dies naturally at rate d K (x) ∈ R + .

3. Competition: We let c K (x, y) ∈ R + be the competition kernel which models the competition pressure felt by an individual with trait x ∈ X from an individual with type y ∈ X . We then add extra death due to competition. Specifically, each individual of type y attaches independent exponential clocks of parameter c K (x, y) to each individual of type x. The death of an individual of type x then occurs as soon as a clock pointed at this individual rings.
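For concreteness, the dynamics just described can be simulated by a standard Gillespie scheme. The following sketch makes illustrative choices that are not part of the model specification: constant rates b and d, mutation probability p, the mean-field competition kernel c K (x, y) = c/K, a toy offspring law on {1, 2} and Gaussian mutation steps on X = R.

```python
import random

def simulate(K=50, T=1.0, b=2.0, d=1.0, p=0.1, c=1.0, seed=0):
    """Gillespie-style simulation of the trait-structured birth/death/
    competition dynamics.  All concrete choices are illustrative:
    constant rates b, d, mutation probability p, mean-field competition
    c_K(x, y) = c / K, offspring numbers in {1, 2} and Gaussian
    mutation steps on the trait space X = R."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(K)]          # initial traits
    t = 0.0
    while pop:
        n = len(pop)
        birth_rate = b * n
        death_rate = d * n + (c / K) * n * (n - 1)  # natural + competition
        total = birth_rate + death_rate
        t += rng.expovariate(total)                 # next event time
        if t >= T:
            break
        i = rng.randrange(n)                        # affected individual
        if rng.random() < birth_rate / total:       # reproduction event
            parent = pop[i]
            # with probability p all offspring carry a new mutant trait h
            h = parent + rng.gauss(0.0, 0.1) if rng.random() < p else parent
            k = 1 + rng.getrandbits(1)              # toy offspring number pi_K
            pop.extend([h] * k)                     # all k offspring share trait h
        else:
            pop.pop(i)                              # natural or competitive death
    # the rescaled state is nu_t^K = K^{-1} * sum_i delta_{x_i}
    return pop, len(pop) / K

pop, mass = simulate()
```

Note that, as in the model, all k offspring of a mutation event share the same mutant trait h.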
Let M(X ) denote the set of finite Borel measures on X equipped with the weak topology, and define M K (X ) := { (1/K) Σ_{i=1}^n δ_{x i} : n ≥ 0, x 1 , . . . , x n ∈ X }, where δ x is the Dirac measure at x. For any µ ∈ M(X ) and any measurable function f on X , we set ⟨µ, f⟩ := ∫_X f (x) µ(dx). At any time t ≥ 0, we let N t be the finite number of individuals alive, each of which is assigned a trait type in X . Let us denote by x 1 , . . . , x Nt the trait types of these individuals. The state of the population at time t ≥ 0, rescaled by K, can be described by the finite point measure ν K t on X defined by ν K t := (1/K) Σ_{i=1}^{N t} δ_{x i}. We let 1 A be the indicator function of a set A ⊂ X . For simplicity, we denote by 1 := 1 X the indicator function on the whole space. We observe that ⟨ν K t , 1⟩ = N t K −1 . For any x ∈ X , the positive number ⟨ν K t , 1 {x} ⟩ is called the density of the trait x at time t. In the next section, we are going to construct under suitable assumptions an M K (X )-valued Markov process with infinitesimal generator L K , defined in (2) for a convergence determining subspace of bounded measurable functions f from M K (X ) to R and for all µ K ∈ M K (X ). The construction is inspired by [6] and [20], who consider the case of binary reproduction and an offspring distribution that is independent of the trait type. In this more general setting, to the best of our knowledge, this has not been shown before. Therefore, we present the proof in order to make this work self-contained.
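With these notations, the birth, mutation and death rates of Section 2 assemble into a generator of the following form (our sketch, written for bounded measurable f and µ ∈ M K (X ); cf. the analogous generators in [6] and [20]):

```latex
L^K f(\mu)
  = K\!\int_{\mathcal X} b_K(x)\,(1-p(x))\sum_{k\ge 1}\pi_K(x,x,k)
      \Big[f\Big(\mu+\tfrac{k}{K}\delta_x\Big)-f(\mu)\Big]\,\mu(dx)\\
  \quad+\; K\!\int_{\mathcal X} b_K(x)\,p(x)\int_{\mathcal X}\sum_{k\ge 1}\pi_K(x,h,k)
      \Big[f\Big(\mu+\tfrac{k}{K}\delta_h\Big)-f(\mu)\Big]\,m_K(x,dh)\,\mu(dx)\\
  \quad+\; K\!\int_{\mathcal X}\Big(d_K(x)+K\!\int_{\mathcal X} c_K(x,y)\,\mu(dy)\Big)
      \Big[f\Big(\mu-\tfrac{1}{K}\delta_x\Big)-f(\mu)\Big]\,\mu(dx).
```

The three terms correspond to clonal births of k offspring, mutant births of k offspring at the mutant trait h, and deaths (natural plus competitive), each weighted by the number Kµ(dx) of individuals involved.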
Remark 1. In our model we assume that in case of mutation all offspring will have the same mutant trait. One could consider more general dynamics in which at a mutation event each new offspring could mutate into a different trait independently of its siblings. This would clearly make the model more realistic, but also mathematically more involved. Thus, we leave it as an open problem. On the other hand, we recall that we are primarily interested in studying the (potentially fast) rise in numbers of individuals of new traits, and that this is what the proposed model is trying to capture.
In the present paper we use the following notation. Given a topological space V , let B(V ) denote the Borel σ-algebra on V . Let W be another topological space with its respective σ-algebra B(W ). Then we denote by B(V, W ) the set of bounded measurable functions from V to W . Let T > 0 and let D([0, T ], V ) (resp. D([0, ∞), V )) denote the space of càdlàg paths from [0, T ] (resp. from [0, ∞)) to V furnished with the Skorokhod topology. For a metric space V let P(V ) be the family of Borel probability measures on V equipped with the Prohorov metric. Let B(V, R) be furnished with the supremum norm (i.e. for f ∈ B(V, R), we write ‖f‖ ∞ = sup x∈V |f (x)|) and let B(V, R + ) denote the subset of B(V, R) of positive elements. We use C b (V, R) (resp. C b (V, R + )) to denote the set of bounded continuous functions from V to R (resp. from V to R + ). For any integer n ≥ 1, let C n b (V, R) denote the set of functions with bounded continuous derivatives up to the n-th order. We write C 0 (V, R) for the space of continuous functions from V to R which vanish at infinity. LetX = X ∪ {∂} be the one-point compactification of X , withX = X whenever X is compact, and let C n ∂ (X , R) (resp. C n ∂ (X × X , R)) denote the set of functions which together with their derivatives up to the n-th order can be extended continuously toX (resp.X ×X ). We use the superscript "+" to denote the subsets of non-negative elements bounded away from zero, e.g., C ∂ (X , R + ) + .

Poissonian construction
We provide a path-wise description of the stochastic process (ν K t , t ≥ 0). For this we will use the following: Assumption 1. Assumptions on the parameters of the model: (i) The birth and natural death rates belong to B(X , R + ). So, there exist 0 < b, d < +∞ (that may depend on K) such that b K (·) ≤ b and d K (·) ≤ d.
(iii) The mutation kernel m K (x, dh) is absolutely continuous with respect to a σ-finite probability measure m̄ on X with density m K (x, h).
We need the following notation: Notation 1. For µ = (1/K) Σ_{i=1}^n δ_{x i} ∈ M K (X ), set H(µ) := (x θ(1) , . . . , x θ(n) ), where x θ(1) ≼ · · · ≼ x θ(n) for some arbitrary (but fixed) order ≼ on X .
The function H allows us to label the individuals in a population described by a measure in M K (X ) in an arbitrary way (here depending on their types). The vector that is given by H will be useful later on when we want to attach Poisson processes to all individuals and want them to interact (at the jump times of these Poisson processes) with the rest of the population according to their trait type.
Observe that for (ν t , t ≥ 0) ∈ C K and t > 0, we can define the left limit ν t− := lim s↑t ν s . We now introduce some Poisson point processes that we need. We will write λ for the Lebesgue measure on R + and n for the counting measure on N. Definition 1. Let (Ω, F , P) be a (sufficiently large) probability space. On this space, we consider the following four independent random elements: 1. Initial distribution: Let ν K 0 be an M K (X )-valued random variable.
2. Clonal birth: Let N c be a Poisson point measure on R + × N × N × R + , with intensity measure λ ⊗ n ⊗ n ⊗ λ.

3. Mutation: Let N m be a Poisson point measure on R + × N × X × N × R + , with intensity measure λ ⊗ n ⊗ m̄ ⊗ n ⊗ λ.

4. Natural death and competition: Let N d be a Poisson point measure on R + × N × R + , with intensity measure λ ⊗ n ⊗ λ.
Let us denote by (F t ) t≥0 the canonical filtration generated by these processes.
Finally, we define the population process in terms of the previous Poisson measures.
Definition 2. Assume that the biological parameters satisfy Assumption 1. An (F t ) t≥0 -adapted stochastic process ν K = (ν K t , t ≥ 0) that belongs a.s. to C K will be called the population process if a.s., for all t ≥ 0, it satisfies the path-wise representation driven by ν K 0 , N c , N m and N d above. Let us now show that the stochastic process ν K = (ν K t , t ≥ 0) from Definition 2 follows the Markovian dynamics we are interested in, i.e. that it has infinitesimal generator given by (2).

Proposition 1. Assume Assumption 1 and consider
for all T > 0. Then ν K is a Markov process. Its infinitesimal generator L K is given in (2), and it is defined for all functions f ∈ B(M K (X ), R) such that for u ∈ [0, 1] and µ ∈ M K (X ) where C is a positive constant that does not depend on µ. In particular, the law of ν K does not depend on the chosen order (see Notation 1).
Proof. The fact that ν K = (ν K t , t ≥ 0) is a Markov process follows from its definition by classical results from the theory of Markov processes. Let us now prove that the infinitesimal generator of the process ν K has the desired form. Consider a function f as in the statement. We notice that a.s.
Taking expectations, we obtain that Recalling Notation 1 and that we are integrating with respect to the Lebesgue measure, we have that Assumption 1 as well as conditions (3) and (4) lead to (2) by differentiating the previous expression. Moreover, it should now be clear that the law of ν K does not depend on the chosen order.
Remark 2. We point out that the function f ∈ B(M(X ), R) given by f (ν) = exp(−⟨ν, φ⟩), for φ ∈ B(X , R + ), satisfies conditions (3) and (4) for some C i > 0. We have used the inequality |e x − 1| ≤ |x|e |x| , for x ∈ R, in order to obtain the second line. We also note that this class of functions is convergence determining.
We now show existence and some moments properties for the population process.
Assumption 2. We consider the following moment conditions: (i) The offspring distribution π K has finite mean. (ii) The measure ν K 0 has finite mean. We now show that under Assumptions 1 and 2 the stochastic process ν K = (ν K t , t ≥ 0) from Definition 2 is well-defined. Observe that the total jump rate of ν K t is bounded by a polynomial in the total mass at time t ≥ 0 by Assumption 1. Therefore, the process is well-defined on the interval [0, τ n ], where for n ≥ 1, τ n := inf{t ≥ 0 : ⟨ν K t , 1⟩ ≥ n}. (6) Moreover, the process is well-defined for all times if we can exclude explosion of the total mass. The goal is thus to show that τ n → ∞ almost surely as n → ∞.
Theorem 1. Suppose that Assumptions 1 and 2 are fulfilled. Then the following hold: (a) The stochastic process ν K = (ν K t , t ≥ 0) from Definition 2 is well-defined and it is not explosive.
(b) Moreover, assume that for some q ∈ N the corresponding q-th moment conditions on π K and ν K 0 hold. Then for any 0 < T < +∞, the moment bound (7) holds. Proof. Claim (a) is a consequence of point (b). Indeed, we can build the solution ν K = (ν K t , t ≥ 0) step by step using Definition 2. We only have to check that the sequence of (effective or fictitious) jump instants (T n , n ≥ 0) goes a.s. to infinity as n → ∞ (i.e. there is no explosion in finite time), and this follows from (b) with q = 1 due to the uniform (in X ) boundedness of the rates by Assumption 1.
We now prove (b). Recall τ n from (6). A simple computation using Assumption 1 shows that dropping the non-positive death terms in Definition 2 yields an upper bound on the total mass. By taking expectations and recalling Assumption 1, we obtain the corresponding estimate. Next, we recall the convex inequality (a + b) q ≤ C q (a q + b q ), for a, b ≥ 0 and some positive constant C q depending only on q. We thus obtain a bound where C q,K is a positive constant depending only on q and K. The Gronwall Lemma allows us to conclude that for any T < ∞, there exists a constant C q,T (not depending on n) such that (8) holds. Finally, we only need to show that τ n tends a.s. to infinity in order to finish the proof. Indeed, if not, we could find a T 0 < ∞ such that P(τ n ≤ T 0 ) stays bounded away from zero for all n, which contradicts our last inequality. Therefore, we may let n → ∞ in (8) thanks to Fatou's Lemma and get (7).
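Schematically, the moment estimate in part (b) rests on a bound of the following form (a sketch; the constants come from Assumption 1 and the convex inequality (a + b)^q ≤ 2^{q−1}(a^q + b^q)):

```latex
\mathbb{E}\Big[\sup_{s\le t\wedge\tau_n}\langle \nu^K_s,\mathbf{1}\rangle^{q}\Big]
  \;\le\; C_{q,K}\Big(1+\mathbb{E}\big[\langle \nu^K_0,\mathbf{1}\rangle^{q}\big]
  +\int_0^t \mathbb{E}\Big[\sup_{u\le s\wedge\tau_n}\langle \nu^K_u,\mathbf{1}\rangle^{q}\Big]\,ds\Big),
```

so that Gronwall's lemma gives a bound C q,T uniform in n, and Fatou's lemma (letting n → ∞) removes the localization.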

Martingale properties
We finally give some martingale properties of the process ν K = (ν K t , t ≥ 0), which are the key point of our approach. Recall that π K = (π K (x, h) = (π K (x, h, k), k ≥ 1), x, h ∈ X ) is the offspring distribution associated to the model described in Section 2. Let g K (x, h, ·) be the associated probability generating function for x, h ∈ X , that is, g K (x, h, z) := Σ_{k≥1} π K (x, h, k) z k , |z| ≤ 1. (9) We also consider the mean value of the offspring distribution π K , κ K (x, h) := Σ_{k≥1} k π K (x, h, k). (10) Theorem 2. Suppose that Assumptions 1 and 2 are fulfilled.
This is a consequence of the assumption on f and Proposition 1. Point (b) is a consequence of (a) applied with f (ν) = exp(−⟨ν, φ⟩) for ν ∈ M K (X ).
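For a concrete offspring law π K , the generating function (9) and the mean (10) can be evaluated directly. A small sketch with a hypothetical geometric offspring distribution (purely illustrative, not a law used in the paper):

```python
def pgf(pi, z):
    """Generating function g(z) = sum_{k >= 1} pi[k] * z**k, as in (9)."""
    return sum(p * z ** k for k, p in pi.items())

def mean_offspring(pi):
    """Mean kappa = g'(1) = sum_{k >= 1} k * pi[k], as in (10)."""
    return sum(k * p for k, p in pi.items())

# Hypothetical offspring law: geometric on {1, 2, ...} with parameter q,
# pi(k) = (1 - q) * q**(k - 1), truncated at k = 59 for the numerical sketch.
q = 0.5
pi = {k: (1 - q) * q ** (k - 1) for k in range(1, 60)}

assert abs(pgf(pi, 1.0) - 1.0) < 1e-12       # g(1) = 1: probabilities sum to one
assert abs(mean_offspring(pi) - 2.0) < 1e-9  # geometric mean 1/(1 - q) = 2
```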

The superprocess limit
In this section, we investigate the limit as the system size K increases to +∞ of the interactive particle system described in Section 2, which leads to a random measure-valued process. In an obvious way, we regard the previous interactive particle system as a process with state space M K (X ) ⊂ M(X ). We denote by g K (x, h, ·) and κ K (x, h) the probability generating function and the mean of the offspring distribution π K (x, h), for x, h ∈X , defined as in (9) and (10), respectively. We consider the following hypotheses.
Assumption 3. The biological parameters satisfy: (i) The competition kernel satisfies K c K (x, y) → c(x, y), as K → ∞, for all x, y ∈X , with c ∈ C ∂ (X × X , R + ).
(ii) We have that the mean offspring is uniformly bounded. (vii) For x ∈X , the mutation kernel m K (x, dh) is absolutely continuous with respect to a σ-finite probability measure m̄ onX with density m K (x, h).
(viii) There are bounded generators A 1 and A 2 of Feller semi-groups. The motivation behind Assumption 3 (iv) and (v) comes from the theory of superprocesses; more precisely, from the approximation of branching particle systems that leads to measure-valued branching processes with a local branching mechanism; see for example [10, Section 4.4] or [28, Proposition 4.3].

Remark 3.
A classical choice of the competition function in Assumption 3 (i) is c ≡ 1, which corresponds to density dependence involving the total population size, known as the "mean field case" or the "logistic case". In particular, choosing b K and d K proportional to K yields the case studied by Champagnat et al. [6]. This model has also been studied in [25], [31] and [32] in a similar setting. Finally, let us mention that Champagnat et al. [6] also studied the case of a single-offspring distribution where the natural birth and death rates are proportional to K η , for some η ∈ (0, 1). In this scenario, they showed that the limit process is deterministic and described by a partial differential equation. This follows from the fact that the variance vanishes in the limit.
(b) α-stable offspring distribution. This type of offspring distribution has been used in order to obtain convergence of branching particle systems to the so-called (β, d, α)-superprocess (see for example [10, Section 4.5]). In this case, in order to obtain a nontrivial limit we must choose b K and d K proportional to K α . Clearly, the variance of the offspring distribution is infinite and therefore the limiting process can no longer have finite second moments.
Just as obtained in case (a) by Champagnat, et al. [6] we expect that the limit process is deterministic and described by a partial differential equation if the offspring distribution is α-stable and b K , d K are proportional to K η ′ , for some η ′ ∈ (0, α).
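A classical reproduction law in this stable regime, which we assume here purely for illustration (cf. [10, Section 4.5], where offspring number 0, i.e. death at reproduction, is allowed), has generating function g(z) = z + (1 − z)^{1+β}/(1 + β) with β ∈ (0, 1]. The sketch below expands the probabilities p k and checks numerically that they are non-negative, sum to one and have mean one, while the heavy tail p k ~ C k^{−(2+β)} makes the variance infinite.

```python
def stable_offspring_probs(beta, kmax):
    """Coefficients p_k of the assumed generating function
    g(z) = z + (1 - z)**(1 + beta) / (1 + beta):
    expanding (1 - z)**(1 + beta) = sum_k binom(1 + beta, k) * (-z)**k gives
    p_0 = 1/(1 + beta), p_1 = 0 and p_k = (-1)**k binom(1 + beta, k)/(1 + beta)."""
    probs = {0: 1.0 / (1.0 + beta)}
    coef = 1.0  # running generalized binomial coefficient binom(1 + beta, k)
    for k in range(1, kmax + 1):
        coef *= (1.0 + beta - (k - 1)) / k
        p = (-1.0) ** k * coef / (1.0 + beta)
        probs[k] = p + (1.0 if k == 1 else 0.0)  # the extra z contributes at k = 1
    return probs

beta = 0.5
probs = stable_offspring_probs(beta, 20000)
total_mass = sum(probs.values())
mean_off = sum(k * p for k, p in probs.items())
assert min(probs.values()) >= -1e-12   # all coefficients are probabilities
assert abs(total_mass - 1.0) < 1e-3    # truncated mass is close to one
assert abs(mean_off - 1.0) < 2e-2      # critical case: mean offspring number one
```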
(c) Let Λ ∈ B(X ×X , R + ) and consider a reproduction law with generating function g K (x, h, z) = z exp(−Λ(x, h)(1 − z)), for x, h ∈X and |z| ≤ 1. By considering b K (x) = Kσ(x) + b(x) and d K (x) = Kσ(x), for x ∈X and where b, σ ∈ C ∂ (X , R + ), one can check that Assumption 3 (v) is fulfilled. This generating function corresponds to a random variable X x,h + 1, where X x,h is a Poisson random variable of parameter Λ(x, h). From Taylor's Theorem, Assumption 3 (ii) and (iv), we deduce an expansion, for each a ≥ 0, valid for x ∈X and z ∈ [0, a] (the small o term is uniform onX × [0, a]). We conclude that (Kψ K (x, z/K)) K converges, uniformly onX × [0, a] for each a ≥ 0, if and only if κ K → κ ∈ C ∂ (X , R + ), as K → ∞, uniformly onX .
In this case, we may expect to obtain a deterministic limiting process described by a partial differential equation as [6,Theorem 4.2] or [20,Theorem 5.3].
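The generating-function identity behind example (c) is elementary to verify: if X has a Poisson distribution with parameter Λ, then X + 1 has generating function E[z^{X+1}] = z e^{−Λ(1−z)}. A quick numerical confirmation (Λ and z are arbitrary illustrative values):

```python
from math import exp, factorial

def pgf_poisson_plus_one(lam, z, jmax=80):
    """E[z**(X + 1)] for X ~ Poisson(lam), computed term by term:
    sum_{j >= 0} e**(-lam) * lam**j / j! * z**(j + 1)."""
    return sum(exp(-lam) * lam ** j / factorial(j) * z ** (j + 1)
               for j in range(jmax))

lam, z = 1.7, 0.6                        # illustrative values
closed_form = z * exp(-lam * (1.0 - z))  # g(z) = z * e^{-lam (1 - z)}
assert abs(pgf_poisson_plus_one(lam, z) - closed_form) < 1e-12
# the mean offspring number is g'(1) = 1 + lam
```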
It is important to point out that the previous examples of offspring distributions satisfy Assumption 3 (ii) and (iii).
Let us now state our main theorem. For 0 < T < +∞ and µ K ∈ M K (X ), let us write Q K = L(ν K ) for the law of the process ν K = (ν K t , t ∈ [0, T ]) such that Q K (ν K 0 = µ K ). We denote by E K the expectation with respect to Q K . We make a slight abuse of notation and denote by 1 = 1X the indicator function on the whole spaceX , unless we specify otherwise.
Theorem 3. Suppose that Assumption 3 is fulfilled. Assume also that there exists µ ∈ M(X ) (possibly random) such that ν K 0 → µ in law for the weak topology on M(X ) and that a suitable uniform moment condition on ν K 0 holds. Then, for each 0 < T < +∞: (a) the sequence of laws (Q K ) K is tight. (b) Let Q µ be a limit point of (Q K ) K . Then the measure-valued process ν ∈ D([0, T ], M(X )), with law Q µ such that Q µ (ν 0 = µ), satisfies the following conditions: 2. The measure-valued process ν ∈ D([0, T ], M(X )), or equivalently its law Q µ , solves the following martingale problem: for any admissible test function φ, the corresponding process is a martingale which decomposes as M c (φ) + M d (φ), where M c (φ) is a continuous martingale with known increasing process and M d (φ) is a purely discontinuous martingale.
Let us provide some specific examples of Assumption 3 (ix), in order to show that a large class of dynamics can be included. (a) Take κ K (x, h) = ρ(h), for x, h ∈X , where ρ ∈ C ∂ (X , R + ) (see Remark 4). Clearly, we have A 2 ≡ 0 and r ≡ 0. We further assume that ρ ≡ 1 for simplicity.
(b) Consider the case where X = R l , l ≥ 1, and for x ∈X , the mutation kernel m K (x, dh) is the law of a random variable with mean (x, . . . , x) ∈X and covariance matrix Σ(x)/ε K = (Σ ij (x)/ε K , 1 ≤ i, j ≤ l), where ε K > 0, ε K → +∞ and b K (x)/ε K converges (uniformly on X ) as K → ∞. Moreover, assume that the function Σ is bounded and that the third moment of m K (x, dh) is suitably controlled, for x ∈ R l and θ 2 (x) positive and bounded.
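The mechanism behind such Gaussian mutation limits is the small-variance Taylor expansion ∫ (φ(x + h) − φ(x)) N(0, var)(dh) ≈ (var/2) φ''(x), which after multiplication by the diverging birth rate produces a diffusion generator. A numerical sketch of this expansion in dimension one (the test function and all parameters are illustrative):

```python
from math import cos, exp, pi, sqrt

def kernel_action(phi, d2phi, x, var, ngrid=4001, width=8.0):
    """Riemann-sum evaluation of I = int (phi(x + h) - phi(x)) N(0, var)(dh),
    compared with the small-variance expansion I ~ (var / 2) * phi''(x)."""
    s = sqrt(var)
    step = 2.0 * width * s / (ngrid - 1)
    total = 0.0
    for i in range(ngrid):
        h = -width * s + i * step
        dens = exp(-h * h / (2.0 * var)) / sqrt(2.0 * pi * var)
        total += (phi(x + h) - phi(x)) * dens * step
    return total, 0.5 * var * d2phi(x)

# illustrative test function phi = cos, so that phi'' = -cos
num, approx = kernel_action(cos, lambda y: -cos(y), x=0.3, var=1e-4)
assert abs(num - approx) <= 1e-3 * abs(approx)
```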
(c) Consider the case where X = R and, for x ∈X , the mutation kernel m K (x, dh) is the law of a Pareto random variable with index β ∈ (1, 2) divided by K η/β , for η ∈ (0, 1]. Then it has been proved by Jourdain et al. [25] that for φ ∈ C 2 ∂ (R, R), the rescaled mutation operator converges to D β , the fractional Laplacian of index β. Thus, Assumption 3 (vii) is satisfied with A 1 = D β as long as we take the birth rate b K such that b K /K η converges (uniformly onX ) as K → ∞. Remark 6. We take κ K as in Remark 4. The mutation kernel m K (x, dh) is the Gaussian distribution (conditioned to be in [x 1 , x 2 ]) with mean x ∈ X and variance θ K as in Remark 5 (a). In addition, b K (x)λ K θ K → λ ∈ R + (uniformly on X ) as K → ∞. Then an elementary computation shows the corresponding convergence. Remark 7. We take κ K (x, h) =Λ(h), for x, h ∈X , withΛ ∈ B(X , R + ) (see Remark 4). Let X = R and the mutation kernel m K (x, dh) be as in Remark 5 (b). If in additionΛ ∈ C 2 ∂ (R, R + ), then the corresponding convergence holds for φ ∈ C 2 ∂ (R, R). We now state the following result on uniqueness of the limiting process of Theorem 3.
Theorem 4. Suppose that Assumption 3 is fulfilled and that σ ∈ C ∂ (X , R + ) + . For a non random µ ∈ M(X ), let Q µ be a limit point of the tight sequence (Q K ) K in Theorem 3 and ν ∈ D([0, T ], M(X )) a measure-valued process with law Q µ . Then there is a unique solution to the martingale problem (M).
Finally, we provide a criterion to check that no mass escapes.
Theorem 5. For a non random µ ∈ M(X ), let Q µ be the unique solution to the martingale problem (M). Then, Q µ is actually the law of a measure-valued process in D([0, T ], M(X )).

Remark 8.
We give an example that satisfies the conditions of Theorem 5. We consider the framework of Remark 5 (b) where X = R l , l ≥ 1, in which A 1 is given explicitly. One can easily check that a suitable sequence satisfies (φ n ) n≥1 ⊂ C 2 ∂ (X , R + ) + ∩ D(A 1 ) ∩ D(A 2 ) and that the conditions of Theorem 5 are fulfilled.
Remark 9. We notice the following: (a) Following the language of the theory of superprocesses (see for example [10], [19] and [28]), we could refer to the solution of the martingale problem (M) as the (A 1 + A 2 , ψ − pr, c)-superprocess with competition, i.e. it is a superprocess with spatial motion governed by the infinitesimal generator A 1 + A 2 , reproduction mechanism or branching mechanism ψ(x, z) − p(x)r(x)z, with x ∈ X and z ≥ 0, and with competition c.
(b) In the non-spatial setting, Lambert [26] introduced general branching processes with logistic growth, abbreviated LB-processes. These processes may be viewed as continuous-state branching processes (CSBP's) with a general branching mechanism and negative interaction between each pair of individuals in the population. Therefore, the solution to the martingale problem (M) appears as a generalization of the LB-process to model spatially structured populations.

(c) The solution of the martingale problem (M) generalizes the models proposed by Fournier and Méléard [20],
Champagnat et al. [6], and Jourdain et al. [25]. They can be recovered by taking A_2 ≡ 0, r ≡ 0, ψ(x, z) = b(x)z + σ(x)z^2 for x ∈ X and z ≥ 0, and A_1 the Laplacian or the fractional Laplacian. This model can also be seen as an extension of the one of Etheridge [16], obtained by taking A_2 ≡ 0, r ≡ 0, ψ(x, z) = bz + σz^2 for x ∈ X, z ≥ 0 with b, σ constants, A_1 the Laplacian, and c(x, y) = h(|x − y|) for x, y ∈ X with a nonnegative decreasing function h on R_+ satisfying ∫_0^∞ h(r) r^{d−1} dr < ∞.
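When the interaction terms vanish (A_2 ≡ 0, r ≡ 0, c ≡ 0), these classical superprocesses are characterized by a log-Laplace duality; sign conventions vary between references, but schematically it reads as follows. This is exactly the tool that the competition term makes unavailable here:

```latex
\mathbb{E}_{\mu}\bigl[e^{-\langle X_t,\varphi\rangle}\bigr]
 \;=\; e^{-\langle \mu,\, V_t\varphi\rangle},
\qquad
\partial_t V_t\varphi \;=\; A_1 V_t\varphi \;-\; \psi(\cdot, V_t\varphi),
\quad V_0\varphi=\varphi .
```

Uniqueness for the interaction-free martingale problem follows from uniqueness for this semilinear PDE; with competition, the Laplace functional is no longer multiplicative in µ, which is why a Girsanov argument is needed instead.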
(d) It is important to point out that we were not able to show uniqueness for the martingale problem (M) in full generality. More precisely, the case where the diffusion part of the branching mechanism ψ, i.e. σ in Assumption 3 (vi), is not bounded away from zero is not covered by Theorem 4. We could not obtain a useful Girsanov type theorem in this case (see Section 5.2) to get rid of the nonlinearities, which prevents us from using Laplace-transform techniques as in the classical theory of superprocesses. This appears to be a genuinely hard problem.
Section 4 is devoted to the proof of Theorem 3. We first establish in Section 4.1 the tightness of the sequence (ν_K)_K, i.e., Theorem 3 (a). In Section 4.2, we identify its limit points ν and show that they satisfy the properties of Theorem 3 (b). In Section 5, we prove Theorem 4 on uniqueness of the limiting process and the convergence of the sequence (ν_K)_K. Finally, in Section 6 we show that there is no escape of mass for the limiting process, i.e. Theorem 5.

Tightness
For 0 < T < +∞ and µ_K ∈ M(X), recall that Q_K = L(ν_K) is the law of the process ν_K = (ν^K_t, t ∈ [0, T]) such that Q_K(ν^K_0 = µ_K). We denote by E_K the expectation with respect to Q_K. First, we obtain the following moment estimate.
Proof. We introduce for each n ≥ 1 the stopping time τ_n = inf{t ≥ 0 : ⟨ν^K_t, 1⟩ ≥ n}. From Definition 2 we obtain the corresponding decomposition for t ≥ 0. By rearranging the terms on the right-hand side, we get an estimate valid for t ≥ 0. Assumption 3 (iii) and (vi) then yield a bound with some positive constant C (not depending on K and n). Gronwall's Lemma and Condition (12) allow us to conclude that there exists a constant C_T, not depending on K and n, such that the claimed bound holds. Finally, the claims follow by noticing that τ_n tends to infinity a.s. (see for example the end of the proof of Theorem 1).
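The Gronwall step invoked in the proof is the standard integral form: if a locally bounded nonnegative f satisfies the inequality on the left, the exponential bound on the right follows.

```latex
f(t) \;\le\; a \;+\; C\int_0^t f(s)\,ds
\quad\text{for all } t\in[0,T]
\;\;\Longrightarrow\;\;
f(t) \;\le\; a\,e^{Ct}\quad\text{on } [0,T].
```

Here one would apply it (the exact choice depends on the elided display) to a quantity such as f(t) = E_K[sup_{s ≤ t∧τ_n} ⟨ν^K_s, 1⟩], with a controlled by the initial condition via (12); since C does not depend on K or n, the resulting constant C_T is uniform as claimed.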

Proof of Proposition 2. Notice that M(X) is a Polish space by [17, Theorem 3.1.7], which implies that
we may work with the family of functions E defined below. Observe that E separates points on M(X) (this follows from Dynkin's π-λ Theorem; see for example [3, Theorems 1.3.2 and 1.3.3]) and that it is closed under addition. On the other hand, Lemma 1 implies (a); therefore it only remains to show (b). We consider f ∈ E, where µ ∈ M(X), λ_i ∈ R_+, θ_i ∈ R and φ_i ∈ C_∂(X, R_+), for i = 1, . . . , n. It has been shown in Remark 2 that f ∈ E satisfies condition (4) in Proposition 1. We now use Assumption 3: we apply conditions (iv) and (v) to the fourth to sixth lines, (viii) to the seventh line, and (i) to the last line together with (5). Writing f(ν^K) = Σ_{i=1}^n θ_i e^{−λ_i⟨ν^K, φ_i⟩}, this last line can be bounded by an expression with a nonnegative constant C (not depending on K and whose value changes from line to line). Thus, we obtain the desired estimate for a positive constant C (that does not depend on K). Lemma 1 then yields the required control, and [38, Theorem I.51] implies that for each f ∈ E, the process M^K(f) = (M^K_t(f), t ∈ [0, T]) given by
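For concreteness, the separating family used here consists of linear combinations of exponentials of integrals against test functions (matching the form of f ∈ E described above):

```latex
F(\mu) \;=\; \sum_{i=1}^{n}\theta_i\,
      e^{-\lambda_i \langle \mu,\, \varphi_i\rangle},
\qquad
\lambda_i \in \mathbb{R}_+,\;\; \theta_i \in \mathbb{R},\;\;
\varphi_i \in C_{\partial}(\mathcal{X},\mathbb{R}_+).
```

Since the map µ ↦ ⟨µ, φ⟩, with φ ranging over a measure-determining class, characterizes µ, such a family separates points on M(X), and it is clearly closed under addition.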

Identifying the limit
Recall that Q_K = L(ν_K) denotes the law of the process ν_K such that Q_K(ν^K_0 = µ_K), and denote by Q_µ a limit point of the tight sequence (Q_K)_K. Recall also that D([0, T], M(X)) is a separable space; see for example [4]. By Skorokhod's representation theorem (see [17, p. 102]), we may assume that the càdlàg processes (ν^K_t, t ∈ [0, T]) and (ν_t, t ∈ [0, T]) with distributions Q_K and Q_µ, respectively, are defined on the same probability space and that the sequence converges almost surely [17, Lemma 3.7.7]. It follows from [17, Proposition 3.5.2] that for each t ∈ D(ν) we have lim_{K→∞} ν^K_t = ν_t almost surely. In this section, we show that the limit point Q_µ of the sequence (Q_K)_K satisfies the properties stated in Theorem 3 (b).
Proof of the moment bound. The first moment bound of the limiting process (ν_t, t ∈ [0, T]) follows from condition (12) together with Fatou's Lemma and Lemma 1 (we implicitly use that the complement of D(ν) is at most countable [17, Lemma 3.7.7], together with right continuity).
Proof of the martingale property. Let φ ∈ D(M); then the process in (26) is a martingale. On the other hand, we use Taylor's Theorem together with Assumption 3 (ii) and (iv) on the offspring distribution to expand the relevant term for x, h ∈ X. Ignoring the terms of order o(1/K), the expression for the martingale E^K(φ) takes the stated form for t ∈ [0, T]. Assumption 3 (applying conditions (iv) and (v) to the second line, (viii) to the third to fifth lines, and (i) as well as (4) to the sixth and last lines) now shows that if a subsequence (ν_{K_n})_n converges to some ν, then E^{K_n}_t(φ) converges weakly to E_t(ν, φ). This suggests that E_t(ν, φ) should be a martingale for any limit point ν of the sequence (ν_K)_K.
To justify this conclusion, we need to show that the martingale property is preserved under passage to the limit. It is enough to check that for each l ∈ N, (s_j)_{j=1}^l ⊂ D(ν), s, t ∈ D(ν) with 0 ≤ s_1 ≤ · · · ≤ s_l < s < t ≤ T, and continuous bounded maps h_1, h_2, . . . , h_l on M(X), the limiting martingale relation holds. It follows from Theorem 2 (b) that the prelimit relation holds, and we notice that Assumption 3 gives the required domination. Recall that φ is bounded away from zero, so that ⟨ν^K_u, 1⟩ ≤ C⟨ν^K_u, φ⟩ for some constant C > 0. Using this, we obtain a bound with a positive constant C_1 that does not depend on K. By the calculation (18) above and a little extra effort (similar computations have been done in (17)), one can check the analogous bound with a positive constant C_2 that does not depend on K. Now, for a subsequence (ν_{K_n})_n that converges to ν, the Dominated Convergence Theorem (using also that all the h_j are bounded) yields (20). Next, we show that M(φ) defined in (M) is a martingale with the desired decomposition. Our proof follows ideas similar to those of [10, Theorem 6.1.3] or [28, Theorem 7.13]. We show that Z = (Z_t(φ), t ∈ [0, T]) given by Z_t(φ) := e^{−⟨ν_t, φ⟩} is a special semimartingale, i.e., it has a representation whose finite-variation part is predictable and of locally integrable variation; see, e.g., [29, p. 85]. In the following we write ψ(φ) := ψ(x, φ(x)) for x ∈ X, with ψ defined as in Assumption 3. We now consider the corresponding auxiliary processes. Using this and integration by parts, together with the fact that Y(φ) is a process of locally bounded variation, we obtain that

is a process of locally bounded variation and
so, again by integration by parts, Z is a special semimartingale (one can follow an estimation procedure similar to that in (21) in order to check that the locally bounded variation term of Z has locally integrable variation).
On the other hand, using Itô's formula [38, Theorem II.32] we conclude that ⟨ν_·, φ⟩ = − log Z_·(φ) is also a semimartingale. Let M_±(X) denote the space of signed Borel measures on X endowed with the weak topology, and let N(ds, dµ) be the jump measure of ν, where ∆ν_s = ν_s − ν_{s−} ∈ M•_±(X). Let N̂(ds, dµ) denote the predictable compensator of N(ds, dµ) and let Ñ(ds, dµ) denote the compensated random measure; see [29, p. 172]. It follows that the corresponding integral against Ñ is a purely discontinuous local martingale; see [23, p. 85]. We apply Itô's formula [38, Theorem II.32] to exp(−⟨ν_·, φ⟩), with ⟨ν_·, φ⟩ given by (23), and we obtain a local martingale. Note that the jump terms are bounded by some constant C ≥ 0. According to Theorem I.4.47 of [23], Σ_{s≤t} ⟨∆ν_s, φ⟩^2 < ∞. Thus the second term in (24) has finite variation over each finite interval [0, T]. Since Z is a special semimartingale, Proposition I.4.23 of [23] implies that the associated finite-variation part is of locally integrable variation; thus it is locally integrable. According to Proposition II.1.28 of [23], the compensated sum of jumps is a purely discontinuous local martingale, and therefore the remaining term is a local martingale. The uniqueness of the canonical decomposition of special semimartingales (see, e.g., [29, p. 85]) allows us to identify the predictable components of locally integrable variation in the two decompositions (22) and (25). It is then not difficult to deduce that U_t(λφ) = λU_t(φ) and C_t(λφ) = λ^2 C_t(φ), for λ ∈ R_+. Replacing φ by λφ in (26), we can conclude (in the semimartingale representation of ⟨ν_t, λφ⟩) that the jump measure of the process ν has compensator given by (16). In particular, this implies that the jumps of ν are almost surely in M(X).
Finally, from the identity (23), we observe that M(φ) = M c (φ) + M d (φ). Therefore, it is enough to show that M c (φ) and M d (φ) are actually martingales to conclude that M(φ) defined in (M) is a martingale. Following the argument in Section 2.3 of [27] we obtain the martingale property of M d (φ).
due to Assumption 3 (v) for φ ∈ D(M), where N̂ is given by (16). Hence, Proposition II.1.28 and Theorem II.1.33 in [23] show that M^{d,1}(φ) is a martingale and M^{d,2}(φ) is a square-integrable martingale with the stated quadratic variation process, which implies that M^d(φ) is a martingale. On the other hand, recall that the continuous local martingale M^c(φ) possesses an increasing process C(φ) given by (27), which is integrable by the moment property in Theorem 3 (b). Hence Corollary 1.25 in [40] implies that M^c(φ) is a square-integrable martingale. This concludes the proof of Theorem 3 (b).

Proof of Theorem 4
In this subsection, we prove that uniqueness holds for solutions of the martingale problem (M). Our approach is based on a Girsanov type transform and on the localization method introduced by Stroock [45], adapted to the measure-valued context by He [21]. More precisely, we first introduce in Section 5.1 the "killed" martingale problem associated with the martingale problem (M); it may be seen as the martingale problem (M) in which the randomness coming from the big jumps is eliminated. Secondly, in Section 5.2 we develop a Girsanov type theorem for the "killed" martingale problem in order to get rid of the nonlinearities (caused by the competition), which allows us to deduce uniqueness for the "killed" martingale problem. Finally, we use a localization argument to show that uniqueness for the "killed" martingale problem implies uniqueness for the martingale problem (M).
In this section, we always assume that Assumptions 1 and 3 are fulfilled.
It is important to mention that Girsanov type transforms were first applied in the measure-valued diffusion setting by Dawson [9, Section 5] (see also [18, Theorem 2.3]). However, in our case Dawson's Girsanov Theorem is not applicable, since the measure-valued process possesses jumps. Thus, we must extend Dawson's result to our setting.

The killed martingale problem
In this section, we introduce the killed martingale problem.

Dawson's Girsanov type Theorem
We next develop a Girsanov type theorem in the spirit of Dawson. Recall that for each 0 < T < +∞, the process ν′, with law Q′_µ, denotes a solution of the martingale problem (M′). Informally, we want to find a measure Q_µ under which, for all φ ∈ D(M), the drift is modified appropriately. To achieve this we use the fact that the continuous part M^{c′}(φ) = (M^{c′}_t(φ), t ∈ [0, T]) of the martingale M′(φ) can be expressed as an integral with respect to an orthogonal martingale measure (see [9, Section 7.1]). As in Walsh [47, Chapter 2], we write it as a stochastic integral against W(ds, dx), an orthogonal continuous martingale measure with covariance determined by R, where R is defined by R(µ, dx, dy) = 2σ(x)δ_x(y)µ(dy), for µ ∈ M(X).
We consider the continuous local martingale L = (L_t, t ∈ [0, T]) given by the stochastic integral below, with integrand a(ν′_s, x). Recall here that the function σ is bounded away from zero and that the competition kernel c(x, y) is bounded, so that a(ν′_s, x) ≤ C⟨ν′_s, 1⟩ uniformly over x ∈ X. Then the linear stochastic equation z_t = 1 + ∫_0^t z_s dL_s has a unique nonnegative solution (see for example [14]), known as the Doléans-Dade exponential. It is well known that z is a nonnegative local martingale (see [14]), and therefore a supermartingale; the point is to show that it is a true martingale. The martingale property plays an important role in many applications. In particular, z_T usually plays the role of the Radon-Nikodym derivative of one probability measure with respect to another, and thus it will allow us to generalize Dawson's Girsanov Theorem [9] to our setting.
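For a continuous local martingale L, the Doléans-Dade exponential has the explicit form below (no jump correction appears because L is continuous):

```latex
z_t \;=\; \mathcal{E}(L)_t \;=\; \exp\Bigl(L_t - \tfrac{1}{2}\langle L\rangle_t\Bigr),
\qquad
dz_t \;=\; z_t\,dL_t,\;\; z_0 = 1 .
```

By Novikov's criterion, z is a true martingale whenever E[exp(½⟨L⟩_T)] < ∞; such an exponential moment is not directly available here, which is why the argument below proceeds instead via the stopping times τ_n.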
We write E′_µ and E_µ for the expectation with respect to Q′_µ and Q_µ, respectively. Our aim is to show that E′_µ[z_T] = 1, for which it is enough to prove that Q_µ(τ_n ≤ T) → 0 as n → ∞. By [38, Theorem III.39, p. 134], under Q_µ the process (M_{t∧τ_n}(φ), t ∈ [0, T]) is a martingale. By the same argument as in the proof of Lemma 2, we have the following.
Claim. M_{·∧τ_n}(φ) can, under Q_µ, be further decomposed into a continuous and a purely discontinuous martingale, M_{·∧τ_n}(φ) = M^c_{·∧τ_n}(φ) + M^d_{·∧τ_n}(φ), which are, up to the stopping time τ_n, characterized as in (c) of Theorem 6. That is, M^c_{·∧τ_n}(φ) has the corresponding increasing process.
Recall that we want to show that Q_µ(τ_n ≤ T) → 0 as n → ∞. For this we take m = 1 in the inequality (30) and integrate. The Burkholder-Davis-Gundy inequality applied to the Q_µ-martingale (M_{t∧τ_n}(1), t ∈ [0, T]) gives a constant C′ > 0 (the value of C′ changing from line to line) such that the corresponding bound holds. On the other hand, it is not difficult to see that an analogous estimate holds under Q_µ; Assumption 3 then yields a further bound. Combining these observations, (31) and (32) together with Assumption 3 imply an estimate with a positive constant C̃ = C̃(C′, C_1, T, ⟨µ, 1⟩). The Markov inequality together with the estimate (33) then implies that Q_µ(τ_n ≤ T) → 0 as n → ∞, and so we deduce that E′_µ[z_T] = 1. Therefore, z is a martingale, as required.
Proof of Claim. We first check that the random measure N̂_n is a Q_µ-compensator of the optional random measure N_n. Let θ_n be a stopping time such that θ_n ≤ τ_n for n ≥ 1, and let B ⊂ M(X) \ {0} be a measurable set. Then the compensator identity follows from well-known results on square-integrable martingales (recall Lemma 2), together with the fact that the integral against Ñ′(ds, dη) stopped at τ_n has bounded variation while z_{·∧τ_n} does not.
The following proposition shows uniqueness for the killed martingale problem.
Definition 3. For µ ∈ M(X) and 0 < T < +∞, we say that a stochastic process ν ∈ D([0, T], M(X)), or equivalently its law P_µ, solves the (L, D(L), µ)-martingale problem if P_µ(ν_0 = µ) = 1 and the associated process is a P_µ-martingale, for all F in an appropriate domain D(L) ⊂ B(M(X), R).

Remark 10.
Recall that X̄ is the one-point compactification of X. Thus M(X̄) equipped with the weak topology is also compact. On the other hand, recall from the proof of tightness (Proposition 2) that E separates points on M(X̄) (this follows from Dynkin's π-λ Theorem). Moreover, E has the non-vanishing property, i.e., for every µ ∈ M(X̄) there exists F^{φ,λ,θ}_n ∈ E such that F^{φ,λ,θ}_n(µ) ≠ 0. Therefore the Stone-Weierstrass Theorem (see for example [2, Appendix A7, Theorem 5, p. 393]) implies that E is dense in C_0(M(X̄), R).
We now make the link between the martingale problems (M), (M′) and Definition 3. Proof. The result is a consequence of Theorem 3 (b) and its proof, together with an application of Itô's formula.
By Proposition 3, we assume throughout the rest of this section that for µ ∈ M(X) there is a unique solution to the martingale problem (M′), the killed martingale problem.
Let ω = (ω_t, t ∈ [0, T]) denote the coordinate process of D([0, T], M(X)) and let Q′ denote the unique solution of the killed martingale problem. For 0 ≤ s < T < +∞ and µ ∈ M(X), let Q′_{s,µ} = Q′(·|ω_s = µ). Hence Q′_{s,µ} is also the unique solution of the killed martingale problem started from the value µ at time s. We set (as in (19)) the process I^n = (I^n_t, t ∈ [0, T]), for n ≥ 0 an integer, which is a local martingale under Q_µ, where λ_i ∈ R_+, θ_i ∈ R and φ_i ∈ D(M), for i = 1, . . . , n.
Set τ(ω) = τ_1(ω) ∧ τ_2(ω). The following lemma gives another martingale characterization of ω^l. Proof. The result follows from integration by parts and the same argument as in the proof of Theorem 7 in [15]. Here we use that, up to time τ(ω), ⟨ω_t, 1⟩ is bounded almost surely.
The next two theorems correspond to [21, Theorem 2.4 and Theorem 2.5]. The first result shows that the solution of the martingale problem (M) is determined by the martingale problem (M′) before it has a jump of size larger than l, for 1 < l < +∞.
Proof. The statement is obtained along exactly the same lines as the proof of [21,Theorem 2.4].
We have thus seen that uniqueness of the killed martingale problem implies uniqueness for the solution Q_µ of the martingale problem (M) on F^l_{τ(ω)−}. Our next step is to show that uniqueness of the killed martingale problem implies uniqueness of Q_µ on F_{τ(ω)}. The next theorem shows that when a jump of size larger than l, 1 < l < +∞, happens, the jump size is uniquely determined by F^l_{τ(ω)−}. We denote by E_µ the expectation with respect to Q_µ. Theorem 8. For 1 < l < +∞, let M^l(X) = {µ ∈ M(X) : ⟨µ, 1⟩ ≥ l}. There is an F^l_{τ(ω)−}-measurable function τ′ : Ω → [0, T] such that, for E ∈ B(M^l(X)), a conditional-expectation formula expressed in terms of the integrals ∫ 1_E(uδ_x)Π(x, du)ω^l_{s∧τ(ω)}(dx)ds and ∫ 1_E(vδ_y)Π(y, dv)ω^l_{s∧τ(ω)}(dy)dt holds for any solution Q_µ of the martingale problem (M). In particular, given F^l_{τ(ω)−}, the distribution of the random measure N up to time τ(ω) is uniquely determined.
Proof. The formula for the conditional expectation follows from [21, Theorem 2.5], using Theorem 7 and Lemma 3 to show that the requirements of [45, Theorem 3.2] are satisfied. Since the distribution of the random measure N up to time τ(ω) is characterized by its intensity, the result follows.
Notice that for each n ≥ 1, β n is bounded by nl. By Lemma 4 and Theorem 8, we can prove by induction that Q µ is uniquely determined on F βn for all n ≥ 1. Therefore, it is enough to check that Q µ (β n ≤ T ) → 0 as n → ∞ for each T > 0, which follows along exactly the same lines as in [21,Theorem 2.6].
Finally, the previous result together with Proposition 3 concludes the proof of Theorem 4.

Proof of Theorem 5
In this section, we check that no mass escapes for the unique solution to the martingale problem (M). The proof follows exactly as in [21, Theorem 3.1 and Theorem 3.2]. Specifically, one first shows