On the relationship between subordinate killed and killed subordinate processes

We study the precise relationship between the subordinate killed and killed subordinate processes in the case of an underlying Hunt process, and show that, under minimal conditions, the former is a subprocess of the latter obtained by killing at a terminal time. Moreover, we show that the killed subordinate process can be obtained by resurrecting the subordinate killed one at most countably many times.


Introduction
Let $X$ be a strong Markov process on a state space $E$. In this paper we are interested in two types of probabilistic transformations of $X$. The first one is subordination of $X$ via an independent subordinator $T$, giving a Markov process $Y = (Y_t : t \ge 0)$ on $E$ defined by $Y_t = X(T_t)$. The other transformation is killing $X$ upon exiting an open subset $D$ of $E$. The resulting process $X^D$ is defined by $X^D_t = X_t$ for $t < \tau_D = \inf\{t > 0 : X_t \notin D\}$, and $X^D_t = \partial$ (the cemetery) otherwise. Now one can kill $Y$ upon exiting $D$, giving the process $Y^D$, and also subordinate $X^D$ by the same subordinator $T$, giving the process that we will denote by $Z^D$. Both processes are Markov with the same state space $D$. The process $Y^D$ is called the killed subordinate process (first subordination, then killing), while $Z^D$ is called the subordinate killed process (first killing, then subordination). It is an interesting problem to investigate the precise relationship between these two processes. This question can be traced back to [4] in the case when $X$ is a Brownian motion and $T$ a stable subordinator. In this context it was addressed in [10], where a pathwise approach was used to show that the semigroup of $Z^D$ (subordinate killed) is subordinate to the semigroup of $Y^D$ (killed subordinate). Recently, by use of Dirichlet form techniques, He and Ying gave in [5] an answer in the general setting of symmetric Borel right processes on a Lusin space $E$; again, the semigroup of $Z^D$ is subordinate to the semigroup of $Y^D$. The general theory then implies that $Z^D$ can be obtained by killing $Y^D$ via a multiplicative functional. The goal of this paper is to give (in our opinion) a complete description of the relationship between $Z^D$ and $Y^D$ in the context of a Hunt process $X$ (not necessarily symmetric) on a locally compact second countable Hausdorff space $E$.
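As a concrete illustration of subordination (not part of the paper's argument): if $X$ is a one-dimensional Brownian motion and $T$ an independent $1/2$-stable subordinator with Laplace exponent $\phi(\lambda) = \sqrt{\lambda}$, then $Y_t = X(T_t)$ is a Cauchy process of scale $t/\sqrt{2}$. A minimal Monte Carlo sketch in Python (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# T_1 of the 1/2-stable subordinator (E[exp(-lam*T_t)] = exp(-t*sqrt(lam)))
# has the Levy distribution with scale 1/2:  T_1 = 1 / (2 * Z**2), Z ~ N(0,1).
Z = rng.standard_normal(n)
T1 = 0.5 / Z**2

# Given T_1, X(T_1) is centered normal with variance T_1.
Y1 = np.sqrt(T1) * rng.standard_normal(n)

# Y_1 is Cauchy with scale 1/sqrt(2), so P(|Y_1| <= 1/sqrt(2)) = 1/2.
frac = np.mean(np.abs(Y1) <= 1 / np.sqrt(2))
print(frac)   # close to 0.5
```

The representation of $T_1$ is the classical first-passage-time identity for Brownian motion; with the fixed seed the printed fraction is close to $1/2$.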
By defining $X$ and the subordinator $T$ on appropriate path spaces, and considering all relevant processes on the product of these path spaces, we show that $Z^D$ is obtained by killing $Y^D$ at an identifiable terminal time with respect to a filtration making $Y$ a strong Markov process. Note that killing at a terminal time is a special case of killing by a multiplicative functional, but it is clearly more transparent. Moreover, we go a step further and show that the process $Y^D$ can be recovered from $Z^D$ by resurrecting the latter at most countably many times. This follows easily from our setting, in which both $Z^D$ and $Y^D$ are described explicitly in terms of the underlying Hunt process $X$ and the subordinator $T$. We also compute the resurrection kernel (given, implicitly, in [5]). Having the resurrection kernel, one can now start from any process with the same distribution as $Z^D$ and use Meyer's resurrection procedure described in [7] to construct a process with the distribution of $Y^D$. The paper is organized as follows: in the next section we describe our setting precisely. In Section 3 we give a description of the relationship between the subordinate killed and killed subordinate processes. In Section 4 the resurrection kernel is computed. In the last section, as an application, we give sufficient conditions for $Y$ not to be on the boundary $\partial D$ at its exit time from $D$.

Setting and notation
Let $E$ be a locally compact second countable Hausdorff space and let $\mathcal{E}$ be the corresponding Borel $\sigma$-algebra. Further, let $\Omega_1$ be the set of all functions $\omega_1 : [0,\infty) \to E$ which are right continuous and have left limits. For each $t \ge 0$, let $X_t : \Omega_1 \to E$ be defined by $X_t(\omega_1) = \omega_1(t)$, and let the shift operator $\vartheta^1_t : \Omega_1 \to \Omega_1$ be defined by $(\vartheta^1_t \omega_1)(s) = \omega_1(t+s)$. Let $\mathcal{F}^0 = (\mathcal{F}^0_t : t \ge 0)$ be the natural filtration generated by the process $X = (X_t : t \ge 0)$, and let $\mathcal{F}^0_{t+} = \cap_{s>t} \mathcal{F}^0_s$. Also, let $\mathcal{F} = \sigma(X_t : t \ge 0)$. We assume that $(P^x_1 : x \in E)$ is a family of probability measures on $(\Omega_1, \mathcal{F})$ such that $(X_t, P^x_1)$ is a strong Markov process. Let $(\mathcal{F}_t : t \ge 0)$ be the usual augmentation of the natural filtration $\mathcal{F}^0$. From now on we assume that $X = (\Omega_1, \mathcal{F}, \mathcal{F}_t, X_t, \vartheta^1_t, P^x_1)$ is a Hunt process with state space $(E, \mathcal{E})$.

Let $\Omega_2$ be the set of all functions $\omega_2 : [0,\infty) \to [0,\infty)$ which are right continuous and have left limits. For each $t \ge 0$, let $T_t : \Omega_2 \to [0,\infty)$ be defined by $T_t(\omega_2) = \omega_2(t)$, and let the shift operator $\vartheta^2_t : \Omega_2 \to \Omega_2$ be defined by $(\vartheta^2_t \omega_2)(s) = \omega_2(t+s) - \omega_2(t)$. Let $\mathcal{G}^0 = (\mathcal{G}^0_t : t \ge 0)$ be the natural filtration generated by the process $T = (T_t : t \ge 0)$, and let $\mathcal{G}^0_{t+} = \cap_{s>t} \mathcal{G}^0_s$. Also, let $\mathcal{G} = \sigma(T_t : t \ge 0)$. We assume that $(P^y_2 : y \in [0,\infty))$ is a family of probability measures on $(\Omega_2, \mathcal{G})$ such that $(T_t, P^y_2)$ is an increasing Lévy process, i.e., a subordinator. In particular, we assume that under $P_2 := P^0_2$ the law of $T_t$ is given by its Laplace transform
$$E_2\big[e^{-\lambda T_t}\big] = e^{-t\phi(\lambda)}, \qquad \lambda > 0,$$
where
$$\phi(\lambda) = b\lambda + \int_{(0,\infty)} (1 - e^{-\lambda s})\,\Pi(ds).$$
Here $b \ge 0$ is the drift, and $\Pi$ the Lévy measure, of the subordinator $T$. Further, let $U(dy)$ denote the potential measure of $T$ under $P_2$:
$$U(A) = E_2 \int_0^\infty 1_{(T_t \in A)}\,dt, \qquad A \in \mathcal{B}([0,\infty)).$$
For $y > 0$, let $\sigma_y = \inf\{t > 0 : T_t > y\}$ be the first passage time of $T$ across the level $y$. Then $\sigma_y$ is a $(\mathcal{G}^0_{t+})$-stopping time and the following identity holds true for all $t > 0$ and $y > 0$:
$$\{\sigma_y \le t\} = \{T_t \ge y\}. \qquad (2.1)$$

Let $\Omega = \Omega_1 \times \Omega_2$ and, for any $x \in E$ and $y \in [0,\infty)$, let $P^{x,y} = P^x_1 \times P^y_2$ be the product probability measure on $\mathcal{H} = \mathcal{F} \times \mathcal{G}$. The probability $P^{x,0}$ will be denoted by $P^x$. The elements of $\Omega$ are denoted by $\omega = (\omega_1, \omega_2)$. For each $t \ge 0$ we define the shift operator $\theta_t : \Omega \to \Omega$ by
$$\theta_t(\omega_1, \omega_2) = \big(\vartheta^1_{T_t(\omega_2)}\omega_1,\ \vartheta^2_t \omega_2\big). \qquad (2.2)$$
We will occasionally write $\theta_t(\omega) = (\theta^1_t(\omega), \theta^2_t(\omega))$. Note that for $s, t \ge 0$ we have $\theta_{t+s} = \theta_s \circ \theta_t$. Following [3] we introduce the following
filtration: for $t \ge 0$, let $\mathcal{H}_t$ be the $\sigma$-algebra constructed in [3], and let $\mathcal{H}_{t+} = \cap_{s>t} \mathcal{H}_s$.

Remark 2.1. Suppose that $S$ is a function defined on, say, $\Omega_1$. By abusing notation we will regard $S$ as being defined on $\Omega$ by $S(\omega_1, \omega_2) = S(\omega_1)$. We use the same convention if $S$ is a function defined on $\Omega_2$.
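The first passage identity $\{\sigma_y \le t\} = \{T_t \ge y\}$ (referred to as (2.1) in the text) can be checked pathwise on a discretized subordinator. The compound-Poisson-plus-drift model below is a sketch with illustrative parameters, not the paper's general $T$:

```python
import numpy as np

rng = np.random.default_rng(1)

# A discretized subordinator: drift b plus compound Poisson jumps
# (rate 2, Exp(1) jump sizes); all parameters are illustrative.
b, dt, nsteps = 0.1, 0.001, 20_000
jumps = (rng.random(nsteps) < 2 * dt) * rng.exponential(1.0, nsteps)
T = np.cumsum(b * dt + jumps)              # T[k] approximates T at t_grid[k]
t_grid = dt * np.arange(1, nsteps + 1)

def sigma(y):
    """First passage time of the discretized path across level y."""
    idx = np.argmax(T > y)                 # first index with T > y
    return t_grid[idx] if T[idx] > y else np.inf

# Check the identity {sigma_y <= t} = {T_t >= y} along the grid.
ok = all((sigma(y) <= t_grid[k]) == (T[k] >= y)
         for y in (0.5, 1.0, 3.0)
         for k in range(0, nsteps, 997))
print(ok)
```

On a grid the two events coincide as long as $T$ never hits the level $y$ exactly, which happens with probability zero here.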
The following results, which we will refer to as Proposition 2.2, are proved in [3]. In the next result we prove that the subordinate process $Y$ is quasi-left-continuous.

Proposition 2.3. Let $(S_n : n \ge 1)$ be an increasing sequence of $(\mathcal{H}_{t+})$-stopping times, and let $S = \lim_{n\to\infty} S_n$. Then $\lim_{n\to\infty} Y_{S_n} = Y_S$, $P^x$-a.s. on $\{S < \infty\}$, for every $x \in E$.
Proof. Without loss of generality we assume that $S < \infty$, $P^x$-a.s. for every $x \in E$. Let $A = \{\omega = (\omega_1, \omega_2) : \lim_n T_{S_n(\omega_1,\omega_2)}(\omega_2) = T_{S(\omega_1,\omega_2)}(\omega_2)\}$, and let $A^{\omega_1}$ be the $\omega_1$-section of $A$. For each fixed $\omega_1$, it follows from Proposition 2.2(ii)(a) that $S_n(\omega_1, \cdot)$ is a $(\mathcal{G}^0_{t+})$-stopping time; hence, by the quasi-left-continuity of the subordinator $T$, we have that $P_2(A^{\omega_1}) = 1$. Thus by Fubini's theorem we have that $P^x(A) = 1$ for every $x \in E$. Let $B = \{\omega : \lim_n X_{T_{S_n}(\omega)}(\omega_1) = X_{T_S(\omega)}(\omega_1)\}$. By the quasi-left-continuity of the process $X$ we obtain that $P^x_1(B^{\omega_2}) = 1$ for $P_2$-a.e. $\omega_2$. Again by using Fubini's theorem, it follows that $P^x(B) = 1$ for every $x \in E$. Therefore, $P^x(A \cap B) = 1$, and on $A \cap B$ we have $\lim_n Y_{S_n} = Y_S$.

Lemma 2.4. Let $S$ be an $(\mathcal{F}^0_{t+})$-stopping time. Then $\sigma_S$ is an $(\mathcal{H}_{t+})$-stopping time.

Proof. By (2.1) we have that $\{\sigma_S \le t\} = \{T_t \ge S\}$. The claim now follows from Proposition 2.2(i).
Let $D$ be an open subset of $E$, and let $\tau_D = \inf\{t > 0 : X_t \notin D\}$ be the first exit time of $X$ from $D$. We assume for simplicity that $P^x_1(\tau_D < \infty) = 1$ for all $x \in E$. By the previous lemma it follows that $\sigma_{\tau_D}$ is an $(\mathcal{H}_{t+})$-stopping time. In the next lemma we prove that it is also a terminal time with respect to $(\mathcal{H}_{t+})$.
Subordinate killed and killed subordinate processes

Let $\tau^Y_D = \inf\{t > 0 : Y_t \notin D\}$ be the first exit time of the subordinate process $Y$ from $D$. Then $\tau^Y_D$ is an $(\mathcal{H}_{t+})$-stopping time. We note that even when $P^x_1(\tau_D < \infty) = 1$ for all $x \in D$, it may happen that $\tau^Y_D = \infty$, $P^x$-a.s. for every $x \in D$. Indeed, let $X$ be a one-dimensional Brownian motion, $D = (-\infty, 0) \cup (0, \infty)$, and let $T$ be an $\alpha/2$-stable subordinator with $0 < \alpha < 1$. The subordinate process $Y$ is an $\alpha$-stable process in $\mathbb{R}$. Since $0 < \alpha < 1$, points are polar for $Y$; in particular, the hitting time of zero, which is precisely equal to $\tau^Y_D$, is infinite. The process $Y$ killed upon exiting $D$ is defined by $Y^D_t = Y_t$ for $t < \tau^Y_D$ and $Y^D_t = \partial$ otherwise, where $\partial$ is a cemetery point. We call $Y^D$ the killed subordinate process. Note that $Y^D$ is a strong Markov process with respect to the filtration $(\mathcal{H}_{t+})$.
The other process that we are going to consider is obtained by killing $Y$ at the terminal time $\sigma_{\tau_D}$. Define $Z^D_t = Y_t$ for $t < \sigma_{\tau_D}$ (equivalently, for $T_t < \tau_D$) and $Z^D_t = \partial$ otherwise, where the equivalence is a consequence of (2.1). Since $\sigma_{\tau_D}$ is a terminal time, it follows (similarly as in the proof of Theorem 12.23(i) of [9], p. 71) that $Z^D$ is also a strong Markov process with respect to the filtration $(\mathcal{H}_{t+})$. We can also introduce the process $X^D$ as the process $X$ killed upon exiting $D$. Clearly, if $T_t < \tau_D$, then $X^D_{T_t} = X_{T_t}$. This shows that $Z^D$ is in fact obtained by first killing $X$ as it exits $D$, and then subordinating the killed process with the subordinator $T$. Therefore we call $Z^D$ the subordinate killed process. Note that if $t < \sigma_{\tau_D}$, then $T_t < \tau_D$, and therefore $Y_t = X_{T_t} \in D$. This shows that $\sigma_{\tau_D} \le \tau^Y_D$. As a consequence, we see that $Z^D$ can be obtained by killing $Y^D$ at the terminal time $\sigma_{\tau_D}$: for any nonnegative Borel function $f$ on $D$ and any $t \ge 0$,
$$E^x\big[f(Z^D_t)\big] = E^x\big[f(Y^D_t);\, t < \sigma_{\tau_D}\big].$$
Define $S_1 = \sigma_{\tau_D}$ and, for $n \ge 1$, $S_{n+1} = S_n + S_1 \circ \theta_{S_n}$; then $(S_n : n \ge 1)$ is an increasing sequence of $(\mathcal{H}_{t+})$-stopping times. The limit $S = \lim_{n\to\infty} S_n$ is an $(\mathcal{H}_{t+})$-stopping time. Clearly, $S \le \tau^Y_D$. The next proposition shows that these stopping times are in fact equal $P^x$-a.s. for every $x \in D$.

Proposition 3.2. It holds that $\tau^Y_D = S$, $P^x$-a.s. for every $x \in D$.

Proof. Let $A = \{\omega = (\omega_1, \omega_2) : \lim_n T_{S_n(\omega_1,\omega_2)}(\omega_2) = T_{S(\omega_1,\omega_2)}(\omega_2)\}$. It was shown in Proposition 2.3 that $P^x(A) = 1$ for all $x \in E$. Therefore, there exists $\widetilde\Omega_2 \subset \Omega_2$ with $P_2(\widetilde\Omega_2) = 1$ and such that $P^x_1(A^{\omega_2}) = 1$ for every $\omega_2 \in \widetilde\Omega_2$. Fix $\omega_2 \in \widetilde\Omega_2$. If $S(\omega_1, \omega_2) = \tau^Y_D(\omega_1, \omega_2)$ there is nothing to prove. Therefore we assume that $S(\omega_1, \omega_2) < \tau^Y_D(\omega_1, \omega_2)$. By Proposition 2.2, $T_{S(\cdot,\omega_2)}(\omega_2)$ and $T_{S_n(\cdot,\omega_2)}(\omega_2)$, $n \in \mathbb{N}$, are $(\mathcal{F}^0_{t+})$-stopping times. For $n \ge 1$, define $\tau_{n+1}$ as the first exit time of $X$ from $D$ after time $T_{S_n(\cdot,\omega_2)}(\omega_2)$; then $\tau_{n+1}$ is an $(\mathcal{F}^0_{t+})$-stopping time. Moreover, $\lim_n T_{S_n(\cdot,\omega_2)}(\omega_2) = T_{S(\cdot,\omega_2)}(\omega_2)$ on $A^{\omega_2}$, and by the quasi-left-continuity of $X$ we conclude that $S(\cdot,\omega_2) = \tau^Y_D(\cdot,\omega_2)$, $P^x_1$-a.s. Integrating the last equality with respect to $P_2$ gives that $\tau^Y_D = S$, $P^x$-a.s. for every $x \in D$.
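The inequality $\sigma_{\tau_D} \le \tau^Y_D$ can be watched pathwise in a toy discrete model: a simple random walk $X$, an integer-valued nondecreasing time change $T$, and $D = (-10, 10)$. Everything below (walk, geometric increments, horizon) is illustrative and not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(2)

def one_path(nsteps=4000):
    # X: simple random walk from 0; D = (-10, 10); T: integer-valued
    # nondecreasing time change (geometric increments).  All illustrative.
    X = np.concatenate([[0], np.cumsum(rng.choice([-1, 1], nsteps))])
    T = np.cumsum(rng.geometric(0.3, nsteps))
    T = T[T < len(X)]                       # stay inside the simulated horizon
    tau_D = np.argmax(np.abs(X) >= 10)      # first exit index of X from D
    if np.abs(X[tau_D]) < 10:               # X never left D on this horizon
        return None
    Y = X[T]                                # subordinate process Y_t = X(T_t)
    exits = np.flatnonzero(np.abs(Y) >= 10)
    tau_Y = exits[0] if exits.size else np.inf   # exit time of Y from D
    sigma = np.argmax(T >= tau_D)           # first time the time change
    if T[sigma] < tau_D:                    # passes tau_D
        return None
    return sigma, tau_Y

results = [r for r in (one_path() for _ in range(200)) if r is not None]
ok = all(s <= ty for s, ty in results)
print(len(results), ok)
```

Strict inequality occurs on paths where $X$ re-enters $D$ after $\tau_D$ before the time change samples it outside; those are exactly the paths on which $Z^D$ dies while $Y^D$ survives.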

Resurrection kernel
Proposition 3.2 clearly shows that the process $Y^D$ can be obtained from $Z^D$ by resurrecting the latter at most countably many times. Our next goal is to compute the resurrection kernel. For $t \ge 0$ and $x \in E$, let $P_t(x, dy)$ denote the transition kernel of $X$; to be more precise, $P_t(x, dy) = P^x_1(X_t \in dy) = P^x(X_t \in dy)$. Similarly, for $x \in D$, let $P^D_t(x, dy)$ denote the transition kernel of the killed process $X^D$. Throughout this paper we will assume the following:

(A1) $X$ admits a Lévy system of the form $(J_X, dt)$, where $J_X(x, dy)$ is a kernel on $(E, \mathcal{E})$.

The assumption (A1) is not very restrictive; for example, all Lévy processes satisfy it. Under (A1), one can easily check that the killed process $X^D$ has a Lévy system of the form $(J_{X^D}, dt)$, where $J_{X^D}(x, dy)$ is the restriction of $J_X$ to $(D, \mathcal{B}(D))$; that is, for $x \in D$ and a Borel subset $B \subset D$, $J_{X^D}(x, B) = J_X(x, B)$. By slightly abusing notation, we will denote $J_{X^D}$ simply by $J_X$. The subordinate process $Y$ admits a Lévy system of the form $(J_Y, dt)$, where
$$J_Y(x, dy) = \int_{(0,\infty)} P_t(x, dy)\,\Pi(dt) + b\,J_X(x, dy) \qquad (4.1)$$
(see [8] for a proof, and also [3], p. 74, for the case $b = 0$). Similarly, the subordinate killed process $Z^D$ admits a Lévy system of the form $(J_{Z^D}, dt)$, where
$$J_{Z^D}(x, dy) = \int_{(0,\infty)} P^D_t(x, dy)\,\Pi(dt) + b\,J_X(x, dy), \qquad x, y \in D. \qquad (4.2)$$
Next, $A_1$ can be written as a sum of terms in which the next-to-last line of the computation follows from (4.4), and the last line from the fact that $U$ has no atoms and $X_{s-} = X_s$, $P^x_1$-a.e. for every fixed $s \ge 0$. Equalities (4.7), (4.8) and (4.9) then yield that $A_1$ is equal to the first two lines on the right-hand side of (4.6).
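For $b = 0$, the formula $J_Y(x, dy) = \int_{(0,\infty)} P_t(x, dy)\,\Pi(dt)$ can be sanity-checked numerically in the classical example of Brownian motion subordinated by the $1/2$-stable subordinator, whose Lévy measure is $\Pi(dt) = (2\sqrt{\pi})^{-1} t^{-3/2}\,dt$: the resulting jump density should be $1/(\sqrt{2}\pi u^2)$, the Lévy density of a Cauchy process of scale $1/\sqrt{2}$. A quadrature sketch on a log-spaced grid (illustrative, not from the paper):

```python
import numpy as np

# Heat kernel of 1-d Brownian motion and the Levy-measure density of the
# 1/2-stable subordinator with Laplace exponent phi(lambda) = sqrt(lambda).
def p(t, u):
    return np.exp(-u**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def pi_dens(t):
    return t**-1.5 / (2 * np.sqrt(np.pi))

u = 1.3                                    # jump size, illustrative
t = np.logspace(-6, 8, 400_001)            # quadrature grid in t
f = p(t, u) * pi_dens(t)
J = np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2   # trapezoidal rule

target = 1 / (np.sqrt(2) * np.pi * u**2)   # Levy density of Cauchy(1/sqrt(2))
print(J, target)                           # the two numbers should agree
```

The agreement follows from $\int_0^\infty t^{-2} e^{-a/t}\,dt = 1/a$ with $a = u^2/2$.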
In order to compute $A_2$ we proceed analogously; the last line of that computation follows from (4.5) and the fact that $X_{\tau_D} \notin D$.
For $y \in D$, let $q(y, dz) := J_Y(y, dz) - J_{Z^D}(y, dz)$. We call $q$ the resurrection kernel. Note that for Borel sets $B \subset D$ and $C \subset D$, the formula (4.6) can be rewritten in terms of $q$. By use of (4.1) and (4.2) one can write the resurrection kernel as
$$q(y, dz) = \int_{(0,\infty)} \big(P_t(y, dz) - P^D_t(y, dz)\big)\,\Pi(dt).$$
This is the form in which the resurrection kernel appears in [5].

The next result describes the behavior of $Y$ and $T$ at the time $\sigma_{\tau_D}$.

(i) For every $x \in D$, $P^x(T_{\sigma_{\tau_D}} = \tau_D) = P^x(T_{\sigma_{\tau_D}-} = \tau_D) = 0$ and $P^x(Y_{\sigma_{\tau_D}-} \in D) = 1$.

(ii) If $b > 0$, then for every Borel set $C \subset E$,
$$\int_C \big(J_Y(y, dz) - b J_X(y, dz)\big) - \int_{C \cap D} \big(J_{Z^D}(y, dz) - b J_X(y, dz)\big) = \int_{C \cap D^c} \big(J_Y(y, dz) - b J_X(y, dz)\big) + \int_{C \cap D} \big(J_Y(y, dz) - J_{Z^D}(y, dz)\big). \qquad (4.9)$$

Proof. (i) That $P^x(T_{\sigma_{\tau_D}} = \tau_D) = P^x(T_{\sigma_{\tau_D}-} = \tau_D) = 0$ is an immediate consequence of the first passage formulae stated before Theorem 4.1 and the fact that the potential measure $U$ has no atoms. In order to show that $P^x(Y_{\sigma_{\tau_D}-} \in D) = 1$ we use (4.7). (ii) The identity (4.9) follows directly from (4.1) and (4.2).

Behavior of Y at the exit time from D

Proposition 5.1. Suppose that (A2) is valid. Assume that $X$ has continuous paths, that $P_t(x, \partial D) = 0$ for all $x \in D$ and all $t > 0$, and that there exists a constant $c \in (0,1)$ such that
$$P^x_1(X_t \in D) \le c \quad \text{for every } x \in \partial D \text{ and every } t > 0. \qquad (5.2)$$
Assume further that the subordinator $T$ has no drift. Then $P^x(Y_{\tau^Y_D} \in \partial D) = 0$ for every $x \in D$.

Proof. By the assumptions, $X_{\tau_D} \in \partial D$ and $N_t(x, 1_D) = P^x_1(X_t \in D) \le c$ for all $x \in \partial D$ and all $t \ge 0$. By (5.1), this implies that for each fixed $\omega_2$, $P^x_1(Y_{\sigma_{\tau_D}} \in D \mid \mathcal{F}^0_{\tau_D+}) \le c$, $x \in D$, and therefore
$$P^x(Y_{\sigma_{\tau_D}} \in D) \le c, \qquad x \in D. \qquad (5.3)$$
Recall the notations $S_1 = \sigma_{\tau_D}$ and, for $n \ge 1$, $S_{n+1} = S_n + S_1 \circ \theta_{S_n}$. By the strong Markov property of $Y$, it follows from (5.3) that
$$P^x(Y_{S_n} \in D) \le c^n, \qquad n \ge 1.$$
Let $N := \inf\{n \ge 1 : S_n = \tau^Y_D\}$, with the usual convention $\inf \emptyset = \infty$. It follows from the last displayed formula that $P^x(N = \infty) = 0$ for every $x \in D$. Hence, there are only finitely many $S_n$ which are less than $\tau^Y_D$. From Corollary 4.4, $P^x(Y_{\sigma_{\tau_D}} \in \partial D) = P^x(Y_{S_1} \in \partial D) = 0$, and therefore by iteration $P^x(Y_{S_n} \in \partial D) = 0$ for all $n \in \mathbb{N}$ and all $x \in D$. Since $\tau^Y_D = S_n$ for some $n \in \mathbb{N}$, the claim of the proposition follows.

If $X$ is a Brownian motion in $\mathbb{R}^d$, then it was shown in [10] that (5.2) holds provided $D$ is a bounded domain satisfying the exterior cone condition. The next result should be compared to Lemma 1 from [12].

Proposition 5.2. Suppose that (A2) is valid. Assume that $X$ has continuous paths, that $P_t(x, \partial D) = 0$ for all $x \in D$ and all $t > 0$, that $b = 0$, and that $\sup_{x \in D} P^x(Y_{\sigma_{\tau_D}} \in D) < 1$. Then $P^x(Y_{\tau^Y_D} \in \partial D) = 0$ for every $x \in D$.

Proof. Again note that by Corollary 4.4, $P^x(Y_{\sigma_{\tau_D}} \in \partial D) = 0$. Therefore, if $Y_{\tau^Y_D} \in \partial D$, then $\sigma_{\tau_D} < \tau^Y_D$, and hence $Y_{\sigma_{\tau_D}} \in D$. Let $\gamma := \sup_{x \in D} P^x(Y_{\tau^Y_D} \in \partial D)$. By the strong Markov property of $Y$ at $\sigma_{\tau_D}$ and the assumptions we have
$$P^x(Y_{\tau^Y_D} \in \partial D) = P^x(Y_{\tau^Y_D} \in \partial D,\ Y_{\sigma_{\tau_D}} \in D) = P^x\big(P^{Y_{\sigma_{\tau_D}}}(Y_{\tau^Y_D} \in \partial D),\ Y_{\sigma_{\tau_D}} \in D\big) \le \gamma\, P^x(Y_{\sigma_{\tau_D}} \in D).$$
By taking the supremum over $x \in D$, it follows that $\gamma \le \gamma \sup_{x \in D} P^x(Y_{\sigma_{\tau_D}} \in D)$. Therefore, $\gamma = 0$.
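The counting argument in the proof of Proposition 5.1 can be illustrated by a toy simulation (purely illustrative, not the paper's model): if every resurrection independently lands in $D$ with probability at most $c < 1$, then the number of resurrections $N$ is dominated by a geometric random variable, so $P^x(N = \infty) = 0$.

```python
import numpy as np

rng = np.random.default_rng(3)
c = 0.6            # the bound from (5.2): re-entering D happens w.p. <= c
n_paths = 100_000

# Dominating chain: keep resurrecting while an independent c-coin succeeds;
# N plays the role of inf{n >= 1 : S_n = tau^Y_D} in this toy model.
N = rng.geometric(1 - c, n_paths)          # support {1, 2, ...}

# P(N > n) = c**n: surviving n resurrections decays geometrically,
# mirroring the bound P^x(Y_{S_n} in D) <= c**n in the proof.
emp = [(N > n).mean() for n in range(1, 6)]
print(emp, [c**n for n in range(1, 6)])
```

The empirical tail probabilities match the geometric bound, which is exactly why only finitely many resurrections occur before the process leaves $D$.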