Long time behaviour of continuous-state nonlinear branching processes with catastrophes

Motivated by the study of a parasite infection in a cell line, we introduce a general class of Markov processes for the modelling of population dynamics. The population process evolves as a diffusion with positive jumps whose rate is a function of the population size. It also undergoes catastrophic events which kill a fraction of the population, at a rate depending on the population state. We study the long time behaviour of this class of processes.


Introduction
We introduce a general class of non-negative continuous-time and space Markov processes, including diffusive terms, as well as negative and positive jumps. They can be seen as a generalization of a class of continuous-state nonlinear branching processes introduced recently in [18], which did not allow for negative jumps. Our motivation comes from the study of a parasite infection in a cell population (see the companion paper [20]). The processes studied in the current work may indeed be interpreted as the dynamics of the quantity of parasites in a cell line. Catastrophic events correspond to cell divisions during which a cell splits its parasites between its two daughter cells, according to a probability kernel κ(dθ) on (0, 1). The quantity of parasites in a cell line is thus multiplied by θ ∈ (0, 1), which can also be interpreted as the death of a fraction (1 − θ) of the parasites in the cell line. First, we investigate the possibility for the processes to be absorbed or to explode in finite time. In each case, we give conditions under which the event has null or positive probability, and even provide conditions under which it happens almost surely. Building on these results, we then explore the long time behaviour of the process. We give criteria for the processes to converge to a positive random variable, to 0 or to ∞. Moreover, in the case of almost sure extinction, we give bounds on the exponential decay of the survival probability.
The class of processes under study belongs to a class of processes recently introduced as strong solutions of Stochastic Differential Equations (SDEs) in [21], and only a few processes of this class have been studied until now. This framework makes it possible to take into account interactions between individuals as well as the effects of the environment. The addition of interactions between individuals in continuous-time branching processes has recently attracted a lot of interest. For instance, Feller diffusions and Continuous-State Branching Processes (CSBPs) with logistic competition have been studied in [14,16] and [5], respectively, Feller diffusions with some nonlinear birth rates have been studied in [24], and polynomial interactions have been considered in [17]. Li and coauthors [18] have recently introduced a general class of continuous-state nonlinear branching processes, and have investigated extinction, explosion and coming down from infinity for this class. However, in all these models, only positive jumps are allowed. They result from large birth events, and were first introduced by constructing the continuous-state process as the limit of a sequence of discrete branching processes (see for instance [15,3]). In parallel, models where the interactions between individuals result from the fact that the whole population is subject to the variations of the same environment have been intensively studied recently, in particular in the framework of CSBPs in random environment. This class of models, initially introduced by Keiding and Kurtz [13,12] in the case of Feller diffusions in a Brownian environment, has been generalised and studied by many authors during the last decade [7,1,23,22,9,21,19,2]. In this setting, negative jumps may occur, being for instance the result of environmental catastrophes killing each individual with the same probability [1]. However, in CSBPs in random environment, the environment is independent of the population state. In particular, the rate of catastrophes does not depend on the population size. We relax this assumption in the current work.
In the next section, we define the processes of interest and give sufficient conditions for their existence and uniqueness as the solution of an SDE. Sections 3, 4 and 5 are dedicated to the possibility of absorption and explosion of the process in finite time. In Section 6 we study the long time behaviour of the process. The proofs are derived in Section 7.
In the sequel, we work on a filtered probability space (Ω, F, (F_t)_{t≥0}, P). N := {0, 1, 2, ...} will denote the set of non-negative integers, R_+ := [0, ∞) the non-negative half-line and R_+^* := (0, ∞). We will denote by C_b^2(R_+) the set of twice continuously differentiable bounded functions on R_+. Finally, for any stochastic process X on R_+, we will write E_x[f(X_t)] := E[f(X_t) | X_0 = x].

Definition of the population process
We consider continuous-time and continuous-state Markov processes that are solutions to the following SDE (2.1), where X_0 is non-negative, g is a real function on R_+, σ, p and r are non-negative functions on R_+, B is a standard Brownian motion, Q is a compensated Poisson point measure with intensity ds ⊗ dx ⊗ π(dz), π is a positive measure on R_+, and N is a Poisson point measure with intensity ds ⊗ dx ⊗ κ(dθ), where κ is a probability measure on [0, 1]. We assume that N, Q and B are mutually independent. Under some mild conditions, the SDE (2.1) has a unique pathwise strong solution, and we will work under these conditions in the sequel.
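Concretely, given the coefficients and intensities just described, an SDE of this class can be written in the following form (a sketch consistent with the stated ingredients, not necessarily the exact display (2.1); Q is already compensated):

```latex
X_t = X_0 + \int_0^t g(X_s)\,\mathrm{d}s + \int_0^t \sigma(X_s)\,\mathrm{d}B_s
    + \int_0^t \int_0^{p(X_{s^-})} \int_{\mathbb{R}_+} z \, Q(\mathrm{d}s,\mathrm{d}x,\mathrm{d}z)
    + \int_0^t \int_0^{r(X_{s^-})} \int_0^1 (\theta - 1)\, X_{s^-}\, N(\mathrm{d}s,\mathrm{d}x,\mathrm{d}\theta).
```

Here the auxiliary coordinate x thins the jumps, so that positive jumps of size z occur at rate p(X_{s^-}) π(dz) and catastrophes, multiplying the state by θ, occur at rate r(X_{s^-}) κ(dθ).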
Assumption A.
- The function g is continuous on R_+, g(0) = 0, and for any n ∈ N there exists a finite constant B_n such that |g(y) − g(x)| ≤ B_n (y − x) for any 0 ≤ x ≤ y ≤ n.
- The function p is non-decreasing on R_+.
The form of this assumption comes from the conditions of [21, Proposition 1] that we will apply to get the next result. The condition g(0) = p(0) = σ(0) = 0 ensures that the process stays non-negative and that 0 is an absorbing state. Notice that the second point makes sense from a biological point of view. The value of p(x) corresponds to the rate of large reproductive events when the population is of size x. It thus means that more individuals produce more offspring.
Proposition 2.1. Suppose that Assumption A holds. Then, Equation (2.1) has a pathwise unique non-negative strong solution absorbed at 0 and ∞. It is a Markov process with infinitesimal generator G, satisfying for all f ∈ C 2 b (R + ), Here, we adopt the formalism of [21] for the definition of a solution to the SDE (2.1): a [0, ∞]-valued process X = (X t , t ≥ 0) is a solution if it satisfies (2.1) up to the time τ n := inf{t ≥ 0, X t ≥ n} for all n ≥ 1, and X t = ∞ for all t ≥ τ := lim n→∞ τ n .
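For such dynamics the generator acts, at least formally, as follows (a sketch under the drift/diffusion/jump structure described above; the paper's display may differ in presentation): for f ∈ C_b^2(R_+),

```latex
\mathcal{G}f(x) = g(x)\,f'(x) + \frac{\sigma^2(x)}{2}\,f''(x)
  + p(x)\int_{\mathbb{R}_+} \big( f(x+z) - f(x) - z\,f'(x) \big)\,\pi(\mathrm{d}z)
  + r(x)\int_0^1 \big( f(\theta x) - f(x) \big)\,\kappa(\mathrm{d}\theta).
```

The first two terms come from the drift and the Brownian part, the third from the compensated positive jumps, and the last from the catastrophes multiplying the state by θ.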
We can now study the long time behaviour of the process X solution to (2.1).
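Before turning to the theory, a toy simulation helps fix intuition. The sketch below is purely illustrative and uses hypothetical coefficients (linear growth g(x) = 2x, Feller-type noise σ(x) = √x, constant catastrophe rate r ≡ 3 with a uniform sharing kernel κ); the positive-jump part driven by Q is omitted:

```python
import math
import random

def simulate(x0, T=10.0, dt=1e-3, seed=0):
    """Euler-type scheme for dX = g(X)dt + sigma(X)dB, with catastrophes:
    at rate r(X), the state is multiplied by Theta ~ kappa = Uniform(0, 1).
    Hypothetical coefficients; the positive-jump measure Q is omitted."""
    g = lambda x: 2.0 * x                     # growth (drift)
    sigma = lambda x: math.sqrt(max(x, 0.0))  # Feller-type noise
    r = lambda x: 3.0                         # constant catastrophe rate
    rng = random.Random(seed)
    x, t, path = x0, 0.0, [x0]
    while t < T and x > 0.0:
        x += g(x) * dt + sigma(x) * rng.gauss(0.0, math.sqrt(dt))
        x = max(x, 0.0)        # the true process is non-negative; clip scheme error
        if rng.random() < r(x) * dt:
            x *= rng.random()  # catastrophe: keep a fraction Theta of the population
        t += dt
        path.append(x)
    return path

path = simulate(1.0)
print(len(path), min(path))
```

For the uniform kernel, E[ln Θ] = −1, so the quantity g + rE[ln Θ] = 2 − 3 is negative in this toy model; by the criterion recalled in Section 6 for the linear case, such trajectories tend to die out.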
Absorption of the process

The study of the absorption of the process X relies on the construction of a sequence of martingales. Let us define the first passage times in (3.1) and (3.2), the set A, and the family of functions G_a given for a ∈ A ∪ (0, 1) and x > 0 by (3.3), where I_a is defined in (3.4). The behaviour of G_a around 0 characterizes the likelihood of the process X to reach 0. It depends on every term except the negative jump term close to 0. This is because the division rate r is bounded for small values of the process (r(0) < ∞ and r is continuous), and thus the process cannot reach 0 through an accumulation of negative jumps. We call the two conditions below (SN0) (Small Noise around 0) and (LN0) (Large Noise around 0).
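The first passage times of (3.1) and (3.2) are presumably of the standard form (an assumption, consistent with the stopping times τ_n used in Section 2):

```latex
\tau^-(x) := \inf\{\, t \ge 0 : X_t \le x \,\}, \qquad
\tau^+(x) := \inf\{\, t \ge 0 : X_t \ge x \,\},
```

with the convention inf ∅ = +∞.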
(SN0) There exist a ∈ A and a non-negative function f on R_+ such that

Remark 3.1. Condition (LN0) can be generalized. Indeed, a careful reading of the proof shows that a sufficient condition is the existence of a positive non-increasing function f on R_+ such that i) there exist a < 1 and x_0 > 0 such that for all x ≤ x_0, ii) there exist ε < 1 and δ > 0 such that Under a first moment assumption on the positive jumps, Conditions (SN0) and (LN0) may be simplified.
and (3.6) is equivalent to See Section 7.2 for the proof of this result.
We can now state results on the absorption of the process in terms of those two conditions.

Theorem 3.3. Suppose that Assumption A holds and let X be the pathwise unique solution to (2.1).

This result extends [18, Theorem 2.3], where (ln(x^{-1}))^r with r < 1 and (ln(x^{-1}))^r with r > 1 were considered instead of the right-hand sides of (3.5) and (3.6). Theorem 3.3 shows that the behaviour of the noise around zero determines the fate of the process in terms of absorption: if it is large enough compared to the growth rate of the parasites around zero, then the probability of being absorbed in finite time is positive. Notice that under our conditions, the process cannot be absorbed because of negative jumps.

Explosion of the process
We now focus on the possibility for the process to explode in finite time. For the modelling of a parasite infection, this would correspond to an explosion of the quantity of parasites in a lineage or in the cell population. Unable to overcome the infection, the latter would thus be likely to die (see the companion paper [20]). In this case, the behaviour of G_a at infinity determines the likelihood of the process to reach infinity in finite time. Unlike for the absorption behaviour, the law and frequency of negative jumps may impact the probability of explosion of the process. We call the two conditions below (SN∞) (Small Noise for large values) and (LN∞) (Large Noise for large values).

(SN∞) There exist a < 1 and a non-negative function f on R_+ such that

(LN∞) There exist a ∈ A, η > 0 and x_0 > 0 such that for all x ≥ x_0

The next result describes the possible behaviours of the process in terms of explosion. The proof again uses ideas from [18, Theorem 2.8]. However, several adaptations are again needed, as negative jumps may occur in our process. Moreover, we completed the proof of [18, Theorem 2.8], as one argument seemed to be missing to conclude. Finally, as in Theorem 3.3, we give tighter bounds than in [18, Theorem 2.8].
It is interesting to notice that the explosion of the process depends on all the components of the population dynamics. A larger Malthusian growth rate g makes explosion more likely, whereas the fluctuations of the Brownian part and of the large reproductive events make it less likely. An interesting consequence of this result is that the presence of catastrophic events may prevent the explosion of the population. In particular, if the process represents the quantity of parasites in a cell line and the catastrophes correspond to the sharing of the parasites between the two daughter cells at division, the cell population may avoid the explosion of the quantity of parasites by increasing its division rate or by modifying the law of the sharing of the parasites between the two daughter cells.

Simpler conditions for absorption or explosion of the process
When the fluctuations of the process X are not too strong (see Proposition 5.1 for details), the conditions for absorption and explosion of the process X take simpler expressions. In particular, they do not rely on the existence of a positive real number a satisfying some conditions. As we will see, they also allow us to make links with previous results on CSBPs in random environment.
From now on, we will always assume that the following conditions hold: We introduce a new function H (linked to the family (G_a, a ≠ 1)) whose behaviour at 0 (resp. at infinity) is linked to the absorption (resp. explosion) behaviour of the process X: and I_a is defined in (3.4). We refer the reader to Appendix A for the derivation of the limit. Note that I is well-defined under the classical moment assumption ∫_{R_+} (z ∧ z²) π(dz) < ∞. Using Theorems 3.3 and 4.1, we can prove the following result.
We thus see that for the phenomena of absorption and explosion, the trade-off between the growth of the parasites and the division of the parasites between the two daughter cells is fully described by the behaviour of the function H at zero and infinity. In fact, as we will see in the next section, the long time behaviour of the infection in a cell line is also governed by the behaviour of this function. However, to conclude on the behaviour of the process in finite time, we also need the variance of the noise and the rate of positive jumps to be small enough around 0 or infinity. Note that if g(x) ≡ gx, r(x) ≡ r and p(x) ≡ 0, we have H(x) = g + rE[ln Θ] − σ(x)² x^{−2}/2, and we retrieve the key quantity g + rE[ln Θ] found in [4]. Unfortunately, the condition on σ for the case of absorption is too strong to be satisfied by the standard noise of Feller diffusions, σ(x)² = σ²x; however, the weaker assumption (LN0) is satisfied in this case. But for the finite time behaviour of the process, rather than the sign of g + rE[ln Θ], it is the strength of the fluctuations that matters.
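To make the key quantity concrete, E[ln Θ] can be computed numerically for a given sharing kernel. The sketch below uses a hypothetical Beta(2, 2) kernel (density 6θ(1 − θ)), for which the exact value is ψ(2) − ψ(4) = −5/6:

```python
import math

def expected_log_theta(n=100_000):
    """E[ln Theta] for Theta ~ Beta(2, 2), by midpoint quadrature.
    The midpoint rule avoids evaluating log at the endpoint 0."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * h
        total += 6.0 * theta * (1.0 - theta) * math.log(theta) * h
    return total

e_log = expected_log_theta()  # close to -5/6 = -0.8333...
g, r = 1.0, 2.0               # hypothetical growth and division rates
criterion = g + r * e_log     # negative: catastrophes dominate the growth
print(round(e_log, 4), criterion < 0)
```

Here g and r are illustrative values; with this kernel the criterion 1 + 2(−5/6) is negative, so the catastrophes dominate the linear growth.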
More generally, when X is a CSBP in random environment, there exists a Lévy process K such that (X_t e^{−K_t}, t ≥ 0) is a non-negative local martingale, hence a non-negative supermartingale, which converges to a non-degenerate random variable (see [1,23,19,9,2] for instance). The expectation of K_1 thus gives information on the long time behaviour of the process X in this case. Our function H is in fact an extension of this expectation to the non-linear case. This link will be made more explicit in the next section.

Long time behaviour of the process
The long time behaviour of the process X depends on the interplay between g, which tends to increase (resp. decrease) it when positive (resp. negative), r, which decreases it, and the fragmentation kernel κ, which has a less intuitive effect. It is also impacted by the random fluctuations of the large birth events. We consider the following possibilities for the relative strengths of g and r (Local Slow/Fast Growth (LSG, LFG) and Global Slow/Fast/Very Fast Growth (GSG, GFG, GVFG)):

(GFG) There exist r > 0 and η > 0 such that r(x) ≥ r, ∀ x ≥ 0, and

(GVFG) There exist r > 0 and η ≥ 0 such that r(x) ≥ r, ∀ x ≥ 0, and

Remark 6.1. Let us make some remarks on these conditions.
• Condition (LSG) is satisfied in particular if there exist η, x_0 > 0 such that . This follows from the fact that if A is non-empty, we can find a ∈ A such that the following inequality holds (see the proof on page 27):
• In particular, the process X cannot reach 0 under Assumption (GVFG).
The next result states in particular that under Condition (LSG), the division mechanism and the random fluctuations overcome the growth of X. In this case, the process X converges to a finite random variable, which may be 0 if X can be absorbed. Theorem 6.2. Suppose that Assumption A holds.
• If Conditions (SN0) and (SN∞) hold and (LSG) or (LFG) is satisfied, then, for all x ≥ 0, the process (X_t, t ≥ 0) converges in law as t tends to infinity to X_∞ satisfying Moreover, the distribution of X_∞ is the unique stationary distribution of the process X and, for every bounded and measurable function f, almost surely, The second point of this result generalizes [4, Proposition 1.1] to the case of more general parasite dynamics. Indeed, in [4], the authors considered the case g(x) = gx, σ(x)² = σ²x and p(x) ≡ 0 for some g, σ > 0. Note that in this case, if r is constant, (LN0) and (SN∞) always hold and (LSG) reduces to g + rE[ln Θ] < 0, which is the condition stated in [4, Proposition 1.1 i)]. If r is a non-increasing or non-decreasing function, we also retrieve the same conditions as in [4, Proposition 1.1 ii) and iii)]. The function H describes the strengths of the different mechanisms, so that Conditions (LSG) and (LFG) determine the fate of the infection, depending on which mechanism overcomes the others at critical parasite concentrations (small or large).
In the last case, we additionally prove in the next corollary that, with positive probability, X grows (at least) exponentially. Moreover, when the diffusion term is large enough (σ(x) at least of order √x, which corresponds to the Feller diffusion), we are able to provide a bound on the absorption rate in the first two cases.
iii) Under the assumptions of point iii) of Proposition 6.3, there exists a stochastic process (K t , t ≥ 0), larger than a Lévy process with drift η, and a non-decreasing function ρ such that ρ(t) ≥ rt and where W is a finite non-negative random variable satisfying P(W > 0) > 0.
Absorption rates of CSBPs in random environment have been intensively studied during the last decade [7,1,23,19,2]. In these references, g(x) = gx, σ²(x) = σ²x for some σ ≥ 0, p(x) = x, and r(x) ≡ r is independent of X, whereas these assumptions are relaxed in our case (notice however that we make moment assumptions on the jump measures). Corollary 6.4 thus provides bounds on the survival probability for a new class of processes.
Let us finally describe the long time behaviour of the process X under Condition (GVFG). Proposition 6.5. Suppose that Assumption A is satisfied.
This result describes quantitatively how much the growth of the process has to overcome its fluctuations to drift to infinity.
The rest of the paper is dedicated to the proofs.

Proofs
Using recent results on SDEs with jumps, we first prove that the class of processes we are interested in may be realized as unique pathwise solutions to SDEs.

Proofs of Section 2.
Proof of Proposition 2.1. The proof is a direct application of Proposition 1 in [21]. First, according to their conditions (i) to (iv) on page 60, our parameters are admissible. Second, we need to check that conditions (a), (b) and (c) in [21] are fulfilled.
In our case, condition (a) reads as follows: for any n ∈ N, there exists A_n < ∞ such that for any 0 ≤ x ≤ n, The function r is continuous, and thus bounded on [0, n]. As a consequence, condition (a) holds.
To verify condition (b), it is enough to check that for any n ∈ N there exists B_n < ∞ such that for 0 ≤ x ≤ y ≤ n, Indeed, the function r_n : z → B_n φ(z) on R_+ is concave and non-decreasing and satisfies But recall that a function that is locally Lipschitz is Lipschitz on every compact interval. Hence, r is Lipschitz on [0, n], and condition (b) holds under Assumption A. Finally, let us focus on condition (c). First, as p is non-decreasing, the function x → x + z1_{u≤p(x)} is non-decreasing for all (z, u) ∈ R_+². Second, the following inequality must be satisfied: for any n ∈ N there exists D_n < ∞ such that for 0 ≤ x, y ≤ n, The first term fulfills the condition as σ is Hölder continuous with index 1/2. The second term is equal to and we conclude using again that p is Lipschitz on [0, n]. Hence, condition (c) is satisfied. We can thus conclude that Proposition 1 in [21] applies, which in particular justifies that X admits the infinitesimal generator given in Proposition 2.1.

Proofs of Section 3.

We first prove the simplification of Conditions (SN0) and (LN0) stated in Section 3. For the first part, we use that p is locally Lipschitz on R_+ and p(0) = 0 (Assumption A), which implies that p is Lipschitz on [0, 1] and that x → p(x)/x is bounded in the vicinity of 0. For the second part, first note that for all x > 0 and z ∈ [x, ∞), Then, Therefore, if ∫_{R_+} z π(dz) < ∞, the part of G_a corresponding to the positive jumps does not affect the boundedness of G_a in the vicinity of 0.
We now prove Theorem 3.3. As mentioned previously, the proof uses ideas of the proof of [18, Theorem 2.3]. However, as we extend this theorem, several steps of the proof have to be modified. For the sake of readability we provide the whole proof, including parts which were done similarly in [18]. The proof relies on a martingale, whose construction is detailed in the next lemma. Recall the definitions of τ ± in Equations (3.1) and (3.2).
where G_a has been defined in (3.3) and (M_t, t ≥ 0) is a local martingale. For the last equality, we used formula (A.1) for I_a. Next, using integration by parts, we get that (Z_{t∧T}, t ≥ 0) is a local martingale. Similarly to [18], we have, using Assumption A, so that from [25, Theorem 51 p. 38], (Z_{t∧T}, t ≥ 0) is a martingale.
Proof of Theorem 3.3. We first focus on point i). Let n ∈ N be such that n ≥ 2, and let 0 < ε < b < 1 and a ∈ A be such that (3.5) holds, so that (Z_{t∧T_n}, t ≥ 0) is an F_t-martingale. As in [18], using Fatou's lemma, we have We distinguish three cases.
then there exists a sequence (α n , n ∈ N) converging to 0 as n goes to ∞ and such that ε n ≤ α n ≤ b, and In the three cases, we obtain according to (SN0), As a > 1, we have X 1−a Then, we get from (7.1) and (7.2), We thus obtain By the Borel-Cantelli Lemma, we have where i.o. stands for infinitely often. As a consequence we get that, P ε -a.s., for n large enough. If there are infinitely many n so that then we have τ − (0) = ∞. If (7.4) holds for at most finitely many n, then by (7.3), we have τ − (ε n ) > τ + (b) for all n large enough. We conclude that for all 0 < ε < b, We will now use a coupling to show that P ε (τ − (0) < ∞) = 0. Let for N ∈ N, which is finite as r is a continuous function. Let X be the unique strong solution to where the Brownian motion B and the Poisson random measures Q and N are the same as in (2.1). We will use four properties of this equation.
a) It has a unique strong solution according to Proposition 2.1. b) If X (1) and X (2) are two solutions with X for any positive t. c) If X is a solution with X 0 = X 0 , then X t ≤ X t for any t smaller than τ − (0) ∧ τ + (N ). d) Equation (7.5) holds for both X and X. Our aim now is to prove that P ε τ − (0) < ∞ = 0, (7.6) where the τ 's are defined as the τ 's in (3.1) and (3.2) but for the process X. Using the coupling described in point c), it will imply that P ε τ + (N ) ≤ τ − (0) = 1, and letting N tend to infinity, we will get P ε τ − (0) = ∞ = 1.
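Points b) and c) can be illustrated numerically: driving the original drift and a drift truncated at a level N with the same Gaussian increments preserves the ordering of the two Euler schemes at every step (a toy sketch with a hypothetical non-decreasing drift g(x) = 2x, additive noise and N = 1; it is not the SDE (7.5) itself):

```python
import math
import random

def coupled_paths(x0=0.5, N=1.0, dt=1e-3, steps=5000, sigma=0.3, seed=1):
    """Drive X (drift g) and X_bar (drift frozen above the level N) with the
    SAME Brownian increments; as g is non-decreasing, X_bar <= X pathwise."""
    g = lambda x: 2.0 * x             # hypothetical non-decreasing drift
    g_trunc = lambda x: g(min(x, N))  # truncated drift, <= g since g increases
    rng = random.Random(seed)
    x, x_bar = x0, x0
    xs, xbars = [x], [x_bar]
    for _ in range(steps):
        db = rng.gauss(0.0, math.sqrt(dt))  # shared noise increment
        x = max(x + g(x) * dt + sigma * db, 0.0)
        x_bar = max(x_bar + g_trunc(x_bar) * dt + sigma * db, 0.0)
        xs.append(x)
        xbars.append(x_bar)
    return xs, xbars

xs, xbars = coupled_paths()
print(all(lo <= hi for lo, hi in zip(xbars, xs)))  # prints True
```

The same-noise coupling mirrors the mechanism used in the proof: the truncated process stays below the original one, which is what allows hitting-time estimates for the truncated process to be transferred to X.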
Before proceeding to the proof of (7.6), let us notice that from coupling b) we have: Now the strategy to prove (7.6) will be to show that for any λ > 0 For any 0 < θ ≤ 1, (7.5) yields where the last inequality comes from the Markov property combined with (7.7). Moreover, using again the Markov property, we have The process can cross the level ε either because of the diffusion or because of a negative jump. In both cases, X τ − (ε) ≥ εΘ almost surely, where we recall that Θ is a random variable distributed according to κ and independent of the process before time τ − (ε). Then, using again (7.7), We thus get we conclude that A(λ, ε) = 0, which ends the proof of point i).
We thus have This ends the proof of point ii).
Moreover, conditionally on {τ^+(2y) > t_{x_0}}, the probability that during the time t_{x_0} the process makes at least N_κ negative jumps, with a jump size in (0, ν], is larger than: This entails, using (7.17), (7.18), which ends the proof of iii).

Proofs of Section 4.
We now focus on the explosion behaviour of the process.
Proof of Theorem 4.1. As we gave all the details of the proof of Theorem 3.3, we will here only provide the elements of the proof which differ from the proof of [18, Theorem 2.8].
We take a small enough b −1 and ε satisfying 0 < b < ε −1 . We begin the proof of point i) similarly as in [18,Theorem 2.8], except that we take and obtain in the same way using the Borel Cantelli lemma that The authors of [18] then claim that they can conclude the proof as the proof of point i) of their Theorem 2.3. However, in the latter case, they only need the strong Markov property to obtain that for any λ > 0, E ε [e −λτ − (0) ; τ − (0) < ∞] = 0 and consequently, P ε (τ − (0) < ∞) = 0, as their process does not have negative jumps. In the current case, to obtain that E ε −1 [e −λτ + (∞) ; τ + (∞) < ∞] = 0, we (and they) have to take into account the fact that there are positive jumps and that X τ + (ε −1 ) may be strictly bigger than ε −1 . Let us first notice that for any ε −1 ≤ y ≤ 2ε −1 , the same reasoning as the one to obtain (7.19) leads to P y (τ + (∞) = ∞ or τ − (b) < τ + (∞) < ∞) = 1. (7.20) Let us thus fix λ > 0 and introduce the following real number: For any ε < 1, y ≤ ε −1 , we have by the Markov inequality Using Equation (7.20) and the strong Markov property, we thus get, for any ε −1 ≤ y ≤ 2ε −1 : Using again the strong Markov property, we get for all where the last inequality is obtained by considering the event {ε −1 ≤ X τ + (ε −1 ) ≤ 2ε −1 } and its complement. Finally, combining the last two inequalities, we obtain But there exists C(b) < 1 such that for 2b ≤ y, and thus τ − (b) would converge to 0 when the initial condition of the process goes to ∞ which would contradict our assumptions on the regularity of the negative jumps. Hence, as for ε small enough, 2b ≤ ε −1 , we obtain for such an ε We thus deduce that lim y→∞ E y e −λτ + (∞) ; τ + (∞) < ∞ = 0. Now, let us take x, µ > 0. Then, there exists N 0 such that for any N ≥ N 0 , Hence, and thus for all The proof of point ii) is the same as the proof of point ii) of [18, Theorem 2.8], except that we modify the function t(·) as we did for the proof of point ii) of Theorem 3.3.
We now prove point iii). Assume that for any positive x, p(x) + σ(x) > 0. Let x_1 > 0 be such that P_y(τ^+(∞) < ∞) > 0 for all y ≥ x_1, and let y < x_1. If p(y) > 0, there exists η_1 > 0 such that p stays positive on [y − η_1, y + η_1], as it is a continuous function according to Assumption A. Hence, for η_2 > 0 small enough, starting from y, we can show as in the proof of Theorem 3.3 iii) that the probability that the process exceeds x_1 thanks to a positive jump is positive, and using the Markov property, we obtain P_y(τ^+(∞) < ∞) > 0. Now assume that p(y) = 0 but σ(y) > 0. As σ is continuous, if σ(z) > 0 for z ∈ [y, x_1], then P_y(X_s ≥ x_1) > 0 for all s > 0 thanks to the diffusion, and we end the proof by applying the Markov property again. Else, if σ is only positive on an interval of the form [y, x_2) with y < x_2 < x_1, then by continuity of p and σ given by Assumption A, p(x_2) > 0 and we are back to the first case. We thus have proven that P_y(τ^+(∞) < ∞) > 0 for all y ≥ 0, as soon as p + σ > 0 on R_+^*.

7.4. Proofs of Section 5.
Proof of Proposition 5.1. Let ε > 0. We first focus on absorption. According to the assumptions of point i), there exists x 0 > 0 such that for all x < x 0 , Let us prove that there exists a > 1 such that (SN0) is satisfied i.e. that there exists a > 1 and a positive function f such that To study the last term of (7.21), let us define for all x, z ≥ 0, From the definition of I a and I in (3.4) and (5.2), respectively, we have Moreover, for every x, z > 0, where the last equality is obtained using integration by part in the second integral. Then, computing the integral, we get Then, according to Taylor-Lagrange's formula, there exists y ∈ (1 − a, 0) such that and we obtain for someŷ ∈ (1 − a, y) according to Taylor-Lagrange's formula. Then, for a > 1, and using that R + z 2 π(dz) < ∞, we obtain And, using again Taylor-Lagrange's formula, for any x > 0 there existsã(x) ∈ (1, a) such that Now using the previous computations, we obtain that there exists also x 1 < x 0 and a 0 > 1 such that for all x < x 1 and 1 < a < a 0 , Finally, combining the last inequalities with (7.21), we obtain for all x < x 1 and a ∈ (1, a 0 ), and thus Condition (SN0) holds and we may apply Theorem 3.3.
The proof of the second point is similar except that we have to adapt the bounds to the case a < 1. By assumption, there are η > 0 and x 0 > 0 such that for all x < x 0 , We prove that there exist a < 1, η ′ > 0 and x 1 > 0 such that for all x < x 1 , H a (x) ≤ − ln(x −1 )(ln ln(x −1 )) 1+η ′ by bounding the difference between H a and H on [0, x 0 ]. Note that for a < 1, similarly to (7.22), there exist y ∈ (0, 1 − a) andŷ ∈ (y, 1 − a) such that so that we can conclude as before.
We now turn to the proof of results on the explosion of the process. According to the assumptions of point i), there exists x 0 > 0 such that for all x > x 0 , We now prove that there exists a < 1 such that (SN∞) is satisfied i.e. that there exist a < 1 and a positive function f such that We have For the first term of (7.25), let us consider for θ ∈ (0, 1) the functions f θ : y ∈ R → θ y and g θ : y ∈ R → 1 − θ y y . Then Using Taylor's formula twice, we get the existence of 0 ≤ λ(θ, y), µ(θ, y) ≤ 1 such that We deduce that the function is non-increasing. Moreover, As Condition (5.1) holds, we deduce by monotone convergence that For the last term of (7.25), we can use again (7.23), which is satisfied in this case according to (7.24). Now combining the previous computations, we obtain that there exists x 1 > x 0 and a 0 ∈ (0, 1) such that for all x > x 1 and a 0 < a < 1, Finally, combining the last inequalities with (7.25), we obtain for all x > x 1 and a ∈ (a 0 , 1), and thus Condition (SN∞) holds and we may apply Theorem 4.1.
The proof for the case a > 1 is similar.
7.5. Proofs of Section 6.

We now turn to the proof of Theorem 6.2. Let t_0 > 0 be fixed. First, we prove that if the division mechanism of the cells and the random fluctuations are stronger than the growth of the parasites in the sense of (LSG) for some x_0 > 0, then the stopping times T_i(x_0) are finite a.s. for all i ≥ 0, where T_0 = 0 and for all i ≥ 1,

Proof of Lemma 7.2. Let us consider τ = τ^−(x_0) ∧ τ^+(x_1), where x_1 ≥ x_0 and the τ^±'s have been defined in (3.1). According to the strong Markov property, we only have to prove that E_x(τ^−(x_0)) < ∞ for all x ≥ 0. By Itô's formula, we have for all t ≥ 0 where (M_{s∧τ}, s ≥ 0) is a martingale with null expectation. Then, using Condition (LSG), we obtain As a consequence, for all t ≥ 0, ln(X_{t∧τ}) ≥ ln(Θ x_0) almost surely. Then, taking the expectation in (7.27), using the last inequality and letting t tend to infinity yield, for all x > 0, According to Theorem 4.1, Condition (SN∞) yields that for all x > 0, P_x(τ^+(∞) < ∞) = 0, so that lim inf_{x_1→∞} τ^−(x_0) ∧ τ^+(x_1) = τ^−(x_0), and we conclude by Fatou's Lemma, which ends the proof.
Let t_1 > 0. Similarly, if the growth of the parasites is stronger than the division mechanism of the cells and the random fluctuations in the sense of (LFG) for some x_1 > 0, then the stopping times T_i(x_1) are finite a.s. for all i ≥ 0, where T_0 = 0 and for all i ≥ 1,

Lemma 7.3. Under Assumption A and Condition (LFG),
Proof of Lemma 7.3. Without loss of generality, we assume that x 1 > 1. Following the same lines as in the proof of Lemma 7.2, we obtain As in the proof of Lemma 7.2, considering both cases of X exceeding x 1 thanks to a jump or not, we obtain for all t ≥ 0 Then, taking the expectation in (7.28), using the last inequality and letting t tend to infinity yield for all x > 0 According to Theorem 3.3, Condition (SN0) yields that for all x > 0, P x (τ − (0) < ∞) = 0, so that lim inf x 0 →0 τ − (x 0 ) ∧ τ + (x 1 ) = τ + (x 1 ), and we conclude by Fatou's Lemma as before.
Proof of Theorem 6.2. We only detail the proofs of Equations (6.3) and (6.4). Let us first prove (6.3). To do this, we first show that there exist y_0, t_0 and α > 0 such that (7.29) holds. Let us fix a < 1 such that (LN0) is satisfied and δ < (3 − 2a)^{-1}. On page 16, we have proved that there exist two non-negative functions t and p on R_+ such that for all ε < e^{-1/(1−δ)} such that (LN0) is satisfied for x ≤ ε, and z ∈ (ε^{1+δ}, ε^{1−δ}), where t(·) has been defined in (7.11). By a classical functional study, we can check that the function t is non-decreasing and the function p is non-increasing. Let us take ε > 0 such that (7.30) is satisfied and z ≤ ε^{1−δ}. Then, there exists ε_1 ≤ ε such that ε_1^{1+δ} < z < ε_1^{1−δ}. Now, by monotonicity, we get: Equation (7.29) is thus proven if we take y_0 = ε^{1−δ}, t_0 = t(ε) and α = p(ε). Next, we need to show that there exist t_1, α > 0 such that where x_0 > 0 is such that (LSG) is satisfied. We obtain this property by following the proof of (7.18).
Recall the definition of T i (x 0 ) in (7.26). By the strong Markov property and (7.29), we get for all x ≥ 0 and all i ≥ 0, Applying Lemma 7.2 and the strong Markov property, we deduce that for any x ≥ 0, This concludes the proof of the second point.
Finally, we prove (6.4). The proof is very similar to the one of (6.3) and we will not give all the details. Following the proof of [18,Theorem 2.8] with the only difference that we choose the function t as defined in (7.11), we obtain the existence of a > 1 and of a small positive δ such that for every small enough ε, We end the proof as in the case of absorption.
We now prove Proposition 6.3 and Corollary 6.4 which concern the absorption of the process.
Proof of Proposition 6.3. Let us introduce the time change ρ(t) := ∫_0^t r(X_s) ds, t ≥ 0. According to Theorem 1.4 in Section 6 of [8], there is a version of X satisfying (7.31) for a process Y that is a solution of the martingale problem with associated generator, and is a weak solution to (7.32), where we chose on purpose the same Poisson point measures as in the definition of X in (2.1). In fact, as (7.32) admits a unique strong solution (see the proof of Proposition 2.1), Y is even pathwise unique. Now let us introduce the processes (K_t, t ≥ 0) and (Z_t, t ≥ 0), the latter given by Z_t := Y_t e^{−K_t}. Then an application of Itô's formula with jumps shows that (Z_t, t ≥ 0) is a non-negative local martingale. In particular it is a non-negative supermartingale and there exists a finite random variable W such that lim_{t→+∞} Y_t e^{−K_t} = W, a.s. (7.33) Under the assumptions of point i), K is smaller than a Lévy process with drift −η. As a consequence, e^{−K_t} goes to +∞, and we deduce from (7.33) that Y goes to 0. As by assumption ∫_0^t r(X_s) ds ≥ rt, we deduce from the time change (7.31) that X goes to 0.
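In summary, the argument for point i) can be sketched as follows (using the notation above; the identification X_t = Y_{ρ(t)} is our reading of the time change (7.31)):

```latex
% Z_t = Y_t e^{-K_t} is a non-negative local martingale, hence a
% supermartingale, and converges a.s. to a finite random variable W.
% If K_t \le L_t with (L_t) a Levy process of drift -\eta < 0, then
% K_t \to -\infty and therefore
Y_t = Z_t \, e^{K_t} \xrightarrow[t \to \infty]{} 0 \quad \text{a.s.}
% Since \rho(t) = \int_0^t r(X_s)\,\mathrm{d}s \ge r t \to \infty,
% the time change X_t = Y_{\rho(t)} gives X_t \to 0 as well.
```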
We turn to the proof of ii) and consider the associated assumptions. In this case, K is smaller than an oscillating Lévy process, and we have lim inf t→∞ K t = −∞. This implies lim inf t→∞ Y t = 0. Again, we deduce from the time change (7.31) that lim inf t→∞ X t = 0.
Let us now prove iii) as well as point iii) of Corollary 6.4. We use arguments similar to the ones needed to prove [1, Corollary 2]. As we are in a more general setting, we need to adapt several of these arguments. Most adaptations are obtained by couplings with well-chosen processes.
We denote by M a finite bound of the function x ↦ (σ^2(x) + p(x))/(x r(x)). The first step consists in showing that P(W > 0 | K) > 0. To this aim, we look for a function ṽ_t(s, λ, K, Y), differentiable with respect to the variable s, such that F(s, Z_s) is a martingale conditionally on K = (K_s, s ≥ 0), where F(s, x) := exp{−x ṽ_t(s, λ, K, Y)}.
Taking λ = 1 and letting t go to infinity, we get E_y[e^{−W} | K] ≤ e^{−y v_∞(0,1,K)} < 1, where the last inequality comes from [1] (see the proof of Corollary 2 on page 7). This allows us to conclude that P(W > 0 | K) > 0. (7.37) Under the assumptions of point iii), K is larger than a Lévy process with drift η and, as a consequence, e^{−K_t} goes to 0. From (7.33) and the previous inequality, we deduce the claim of point iii).

Proof of Corollary 6.4. Let us begin with point iii). We showed in the proof of Proposition 6.3 (see (7.33)) that under the assumptions of point iii), lim_{t→+∞} Y_t e^{−K_t} = W, a.s., (7.38) where K is larger than a Lévy process with drift η. Moreover, as r(X_s) ≥ r for any s ≥ 0, we have ρ(t) := ∫_0^t r(X_s) ds ≥ rt for any t ≥ 0. Finally, (7.37) allows us to conclude the proof of (6.5).
Let us now focus on points i) and ii). The idea of the proof is to compare the survival probability of X with the survival probability of a Feller diffusion with jumps, whose asymptotic behaviour has been studied in [1].

But the survival probability of X can in turn be controlled through the time change. A direct application of [1, Theorem 7] with F(x) = 1 − e^{−y(ax)^{−1}} gives the long time behaviour of the right-hand side of the following inequality. Finally,

P_y(X_t > 0) = P_y( Y_{∫_0^t r(X_s) ds} > 0 ) ≤ P_y( Y_{rt} > 0 ) ≤ 1 − E[ exp{ −y ( a ∫_0^{rt} e^{−K_s} ds )^{−1} } ].
We end this proof section with the study of the conditions under which the process X, or its limit superior, drifts to infinity.
Using that ln(u) ≤ u − 1 for any u > 0, with equality only at u = 1, we obtain the required estimate. By continuity, we deduce that there exists a ∈ A such that the desired inequality holds. This ends the proof.
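For completeness, the elementary logarithm inequality invoked here follows from a one-line convexity argument:

```latex
% For all u > 0:  \ln(u) \le u - 1, with equality iff u = 1.
% Indeed, f(u) := u - 1 - \ln(u) satisfies f(1) = 0 and
f'(u) = 1 - \frac{1}{u}
  \begin{cases} < 0, & u \in (0,1), \\ > 0, & u \in (1,\infty), \end{cases}
% so f attains its strict global minimum 0 at u = 1.
```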
Proof of Proposition 6.5. Recall that the time-changed process Y is a weak solution to (7.32), and let us introduce the process V via V_t := 1/Y_t, t ≥ 0, which is well defined as Y does not reach 0 under Assumption (GVFG) (see the third point of Remark 6.1). Applying Itô's formula with jumps, we obtain that V is a weak solution to an SDE of the same type. Now, if we introduce the processes K̃ and Z̃, with Z̃_t := V_t e^{−K̃_t} for any t ≥ 0, we obtain, applying Itô's formula with jumps again, that Z̃ is a non-negative local martingale, and thus a supermartingale. It converges to a non-degenerate and non-negative random variable W̃. We conclude the proof as in the proofs of points i) and ii) of Proposition 6.3.
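The end of the argument mirrors points i) and ii) of Proposition 6.3 and can be sketched as follows (notation as above; the drift assumptions on K̃ are taken to be the analogues of those on K):

```latex
% \tilde Z_t = V_t e^{-\tilde K_t} is a non-negative supermartingale and
% converges a.s. to a finite random variable \tilde W.
% If \tilde K_t \to -\infty (Levy upper bound with negative drift), then
V_t = \tilde Z_t \, e^{\tilde K_t} \to 0,
\qquad\text{hence}\qquad
Y_t = \frac{1}{V_t} \to +\infty \quad \text{a.s.};
% if \tilde K oscillates, \liminf_t V_t = 0 and \limsup_t Y_t = +\infty.
% The time change transfers these conclusions from Y to X.
```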