Concatenation and Pasting of Right Processes

A universal method for the concatenation of a sequence of Markov right processes is established. It is then applied to the continued pasting of two Markov right processes, which can be used for pathwise constructions of locally defined processes like Brownian motions on compact intervals.


The objective
The concatenation of a sequence of (strong) Markov processes (X^n, n ∈ N) on state spaces (E_n, n ∈ N) forms a stochastic process X on the union ⋃_{n∈N} E_n as follows: Started in E_n, the process X behaves like X^n until this process dies, is then revived as X^{n+1} at a point in E_{n+1} chosen by a probability measure that takes the Markovian information of X^n up to its death into account, then behaves like X^{n+1} until it dies, and so on.
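As a toy illustration of this mechanism (ours, not from the paper: a minimal discrete-time sketch with made-up dynamics, where killed random walks stand in for the Markov processes and a deterministic shift stands in for the transfer law), the concatenation of two killed chains can be simulated as follows:

```python
import random

def run_until_death(step, x0, rng):
    """Simulate a killed chain from x0; step returns the next state or None
    (death). Returns the path up to (not including) the cemetery state."""
    path = [x0]
    while True:
        nxt = step(path[-1], rng)
        if nxt is None:          # lifetime zeta reached
            return path
        path.append(nxt)

def concatenate(step1, step2, transfer, x0, rng):
    """Concatenation of two killed chains: run X^1 until it dies, choose the
    revival point in E_2 via the transfer mechanism, then run X^2."""
    p1 = run_until_death(step1, x0, rng)
    y0 = transfer(p1[-1], rng)   # revival point chosen from the pre-death state
    p2 = run_until_death(step2, y0, rng)
    return p1, p2

# hypothetical toy dynamics, for illustration only
def step1(x, rng):
    # random walk on E_1 = {0,1,2}, killed with probability 0.3 in each step
    return None if rng.random() < 0.3 else (x + rng.choice([-1, 1])) % 3

def step2(y, rng):
    # random walk on E_2 = {10,11,12}, killed with probability 0.5
    return None if rng.random() < 0.5 else 10 + (y - 10 + rng.choice([-1, 1])) % 3

def transfer(x, rng):
    # stand-in for the transfer kernel K_1(x, .): shift the exit point into E_2
    return 10 + x

rng = random.Random(0)
p1, p2 = concatenate(step1, step2, transfer, 0, rng)
```

The two state spaces are kept disjoint on purpose, mirroring the disjointness assumption of the construction below.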
In earlier works on Markov processes and their applications, the theory of this technique, in contrast to other well-known modes of transformation like killing or time substitution, has hardly been developed beyond special cases, despite the fact that it is not at all trivial to show that the resulting process X inherits the (strong) Markov property of the subprocesses. This gap in the literature is quite surprising, since in many applications it is natural to construct processes from local solutions and paste them together: this ranges from direct constructions of Markov chains and branching processes [9], the extension of Markov processes beyond their lifetime by instant revivals [13], and the introduction of isolated jump discontinuities into diffusion processes, to the pathwise construction of stochastic processes via local solution techniques, such as the construction of Brownian motions on intervals [10,11] or on metric graphs [12,6,17].
In this paper, we establish the technique of concatenation of countably many processes in the general context of right processes [16]. This class of strong Markov processes encompasses a majority of the classical types of Markov processes, such as Feller, Hunt, standard, and, in a certain sense [7], even Ray processes. Our main result guarantees that the process constructed by the concatenation of a sequence of right processes on disjoint state spaces via transfer kernels is again a right process, in particular maintaining the strong Markov property of its subprocesses. This generalizes [16] from two to countably many processes, and extends the corresponding results of [13], where the concatenation of a sequence of identical processes is considered.^1 We then weaken the assumption of disjointness of the state spaces to the concatenation of alternating copies of two right processes by imposing some consistency conditions on both partial processes. This method can be used to glue together two Markov processes on not necessarily disjoint state spaces, extending a result of [14], or to form instant revival processes in the sense of [8,13]. We thus provide a unified way to extend or join an extensive class of Markov processes.

The context: Markov right processes and strong Markov property
We understand a Markov process X on a Radon space E (equipped with a σ-algebra E) to be defined in the canonical sense of the standard works of Dynkin [4], Blumenthal–Getoor [1] and Sharpe [16], that is, as a sextuple

    X = (Ω, G, (G_t, t ≥ 0), (X_t, t ≥ 0), (Θ_t, t ≥ 0), (P_x, x ∈ E))

with the following properties: (X_t, t ≥ 0) is a right continuous, E-valued stochastic process on the measurable space (Ω, G), adapted to the filtration (G_t, t ≥ 0) and equipped with shift operators (Θ_t, t ≥ 0) on Ω. (P_x, x ∈ E) is a family of probability measures satisfying X_0 = x P_x-a.s. for all x ∈ E (normality of the process), such that for all t ≥ 0, B ∈ E, the map x ↦ P_x(X_t ∈ B) is measurable and the Markov property holds:

    ∀x ∈ E, s, t ≥ 0, f ∈ bE:   E_x[f(X_{s+t}) | G_s] = E_{X_s}[f(X_t)].
^1 With the technique of [13, Section 3], their result can be extended to the concatenation of right processes on finitely many disjoint spaces.

Figure 1: Concatenation of two processes X^1 and X^2 on E_1, E_2, resulting in the process X, which, if started in E_1, behaves like X^1 until R = ζ^1, is then revived at some point in E_2 (chosen by a transfer kernel K_1), from where it runs like X^2.

We base our results on one of the most general classes of Markov processes, namely the class of right processes. Right processes are Markov processes which satisfy the following condition of right continuity in the topology of excessive functions: For α ≥ 0, the class S^α of α-excessive functions is the set of all non-negative, measurable functions f which satisfy e^{−αt} T_t f ↑ f pointwise as t ↓ 0, with (T_t, t ≥ 0) being the semigroup associated to X, that is, T_t f(x) = E_x[f(X_t)]. A Markov process X, equipped with an augmented and right continuous filtration, is then called a right process if it satisfies hypothesis (HD2): for every α > 0 and every f ∈ S^α, the map t ↦ f(X_t) is a.s. right continuous. It is well known (see [16, Theorem 7.4]) that in order to establish (HD2), it is sufficient to check the right continuity of the process on the α-potentials (U_α, α > 0) of bounded, uniformly continuous functions on E. Furthermore, (HD2) implies the
strong Markov property of the process [loc. cit.], that is, for every (G_t, t ≥ 0)-stopping time τ, with F being the universal completion of σ(X_s, s ≥ 0):

    ∀x ∈ E, t ≥ 0, f ∈ bE:   E_x[f(X_{τ+t}) 1_{τ<∞} | G_τ] = E_{X_τ}[f(X_t)] 1_{τ<∞}.

The strong Markov property is often crucial for the examination of stochastic processes; in particular, it allows one to decompose the resolvent of a strong Markov process X at stopping times τ via Dynkin's formula [4, Section 5.1]:

    U_α f(x) = E_x[∫_0^τ e^{−αt} f(X_t) dt] + E_x[e^{−ατ} U_α f(X_τ)],  α > 0, f ∈ bE.   (1.1)

We impose the usual hypotheses (cf. [16, 11, A1]): E is the universal completion of the Borel σ-algebra on E, the underlying filtration (G_t, t ≥ 0) is augmented and right continuous, and there exists an isolated, absorbing cemetery state ∆ ∈ E such that, with the lifetime of the process ζ := inf{t ≥ 0 : X_t = ∆}, we have X_t = ∆ for all t ≥ ζ. Furthermore, there is a dead path [∆] ∈ Ω with ζ([∆]) = 0, and we stipulate that f(∆) = 0 for any measurable function f, which, in conjunction with X_∞ := ∆, Θ_∞ := [∆], allows us to drop the restricting functions 1_{τ<∞} in the above formulas of the strong Markov property.

Concatenation of processes: construction approach and main result
Let (X^n, n ∈ N) be a sequence of right processes on disjoint spaces (E_n, n ∈ N). For the pathwise definition of a concatenating process X on Ω := ∏_{n∈N} Ω_n, we set, for ω := (ω_n, n ∈ N) ∈ Ω, t ≥ 0,

    X_t(ω) := X^n_{t−(ζ^1(ω_1)+⋯+ζ^{n−1}(ω_{n−1}))}(ω_n),  if ζ^1(ω_1)+⋯+ζ^{n−1}(ω_{n−1}) ≤ t < ζ^1(ω_1)+⋯+ζ^n(ω_n),

and X_t(ω) := ∆ if t ≥ Σ_{n∈N} ζ^n(ω_n).

EJP 26 (2021), paper 50.

In order to define initial measures (P_x, x ∈ E) for the process X, we need to specify a transfer mechanism between the subprocesses (X^n, n ∈ N), more precisely: a law on how the process X^{n+1} initiates in E_{n+1} after X^n has died. This mechanism can depend on all information up to the lifetime ζ^n of the subprocess X^n, but it should admit a memoryless property in order to ensure the Markov property of the resulting process X. The main principle which allows us to salvage the Markov property is the following invariance under time shifts:

Definition 1.1. For a right process X on E and a terminal time T for X, the left germ field F[T−] for X at T consists of all F_{T−}-measurable random variables H which satisfy

    H ∘ Θ_t = H on {t < T},  for all t ≥ 0.

Here, terminal times are the well-known concept for memoryless stopping times: a stopping time T is a terminal time for X if

    T = t + T ∘ Θ_t on {t < T},  for all t ≥ 0.

The prime examples of terminal times are the first entrance times. Most notably, the lifetime ζ of a right process is always a terminal time. As ∆ is absorbing, we even have a stronger version of shift invariance of ζ for any random time R:

    ζ = R + ζ ∘ Θ_R on {R ≤ ζ}.   (1.2)

The revival information is then encoded in kernels which are memoryless with respect to the lifetimes of the partial processes: a transfer kernel K from a right process X to (X', E') is a probability kernel K(ω, dy) from (Ω, F[ζ−]) to (E', E'), that is, K(·, B) is F[ζ−]-measurable for every B ∈ E', and K(ω, ·) is a probability measure on (E', E') for every ω ∈ Ω. With the help of transfer kernels K_n from X^n to (X^{n+1}, E_{n+1}), the paths of the concatenated process are chosen, for any x ∈ E_n, n ∈ N, by the initial measure

    P_x(dω_1, …, dω_{n−1}, dω_n, dω_{n+1}, …) = δ_{[∆_1]}(dω_1) ⋯ δ_{[∆_{n−1}]}(dω_{n−1}) P^n_x(dω_n) K_n(ω_n, dx_{n+1}) P^{n+1}_{x_{n+1}}(dω_{n+1}) ⋯,

with δ_{[∆_i]} being the Dirac measure at the dead path [∆_i] ∈ Ω_i, ensuring that X starts P_x-a.s. in E_n.
Our main result on the concatenation of countably many right processes, which extends the concatenation of two processes given in [16, Section 14], is as follows:

Theorem 1.4. Let (X^n, n ∈ N) be a sequence of right processes on disjoint spaces (E_n, n ∈ N), such that the topological union E := ⋃_{n∈N} E_n is a Radon space, and let a transfer kernel K_n from X^n to (X^{n+1}, E_{n+1}) be given for each n ∈ N. Then the concatenation X of the processes (X^n, n ∈ N) via the transfer kernels (K_n, n ∈ N) is a right process on E. With R_n := inf{t ≥ 0 : X_t ∈ E_{n+1}} denoting the revival times of X, the distribution of the revival point X_{R_n} is governed by the transfer kernel K_n.

A standard method of constructing transfer kernels is by imposing conditional distributions k_1(x, ·) for the transfer point (that is, the "revival point" of X^2) given the "exit point" X^1_{ζ^1−} = x of X^1 (cf. [16, p. 78]):

Example 1.5. Let X^1, X^2 be right processes on E_1, E_2 respectively, such that X^1_{ζ^1−} exists a.s. in E_1, and let k_1 be a Markov kernel from (E_1, E_1) to (E_2, E_2). Then K_1(ω, dy) := k_1(X^1_{ζ^1−}(ω), dy) defines a transfer kernel from X^1 to (X^2, E_2).
Figure 2: Consistency condition for pasting together two processes X^{−1}, X^{+1} on a common state space: The process behavior must be independent of the chosen starting process. The left-hand picture shows the path behavior if the concatenated process is started as X^{−1} (black), which is then revived after its death at ζ^{−1} as X^{+1} (red), then revived as X^{−1} at ζ^{+1} (blue), etc. The concatenated process must show the same behavior if started as X^{+1}, as illustrated in the right-hand picture.

Pasting of two processes: construction approach and main result
It is possible to weaken the assumption of disjoint subspaces (E_n, n ∈ N) in order to apply the technique described above to paste together two right processes. However, we then need to impose additional conditions on the subprocesses: they need to coincide on the shared state space, and their entry and exit distributions for this subset must be equal irrespective of the mode of entry or exit (that is, by either subprocess behavior or revival), see Figure 2.
We define alternating copies of these processes and transfer kernels on disjoint state spaces by setting, for each n ∈ N, X^n to be the copy of X^{(−1)^n} transplanted onto {n} × E^{(−1)^n}, with K^n induced accordingly by K^{(−1)^n}. Then X^n is a right process on E_n := {n} × E^{(−1)^n}, E_n = {n} ⊗ E^{(−1)^n}, and K^n is a transfer kernel from X^n to (X^{n+1}, E_{n+1}). Let X be the concatenation of (X^n, n ∈ N) via the transfer kernels (K^n, n ∈ N). By Theorem 1.4, it is a right process on ⋃_{n∈N} E_n = N × (E^{−1} ∪ E^{+1}), equipped with the universally measurable sets. Set E := E^{−1} ∪ E^{+1}, and let π be the canonical projection onto the second coordinate. The consistency conditions which ensure that the pasted process π(X) is a right process on E are as follows:

Theorem 1.6. Let X^{−1}, X^{+1} be right processes on spaces E^{−1}, E^{+1} respectively, and let X be the concatenation of (X^n, n ∈ N) via (K^n, n ∈ N), as defined above. If the subprocesses X^{−1}, X^{+1}, together with the kernels K^{−1}, K^{+1}, behave consistently on the common part of the state space, in the sense described above and illustrated in Figure 2, then π(X) is a right process on E.
The reader may observe that the second condition of the above theorem is not present in [14], as Nagasawa only considers continuous processes with instant revivals at the exit points of the subprocesses.
If we only consider one process X^0 on E and one transfer kernel K_0 from X^0 to (X^0, E), and set X^{−1} = X^{+1} = X^0, K^{−1} = K^{+1} = K_0, no additional conditions are required for the pasted process π(X) to be a right process. We then obtain the following result for the instant revival process (in the sense of [8,13]), constructed from copies of X^0 with the revival kernel K_0:

Theorem 1.7. In the context of Theorem 1.6, if X^{−1} = X^{+1}, K^{−1} = K^{+1}, then π(X) is a right process on E.
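The instant revival mechanism can again be illustrated by a toy simulation (ours, with hypothetical discrete-time dynamics standing in for X^0 and a made-up revival law standing in for K_0): each time the current copy dies, a fresh copy starts at a point drawn from the revival kernel.

```python
import random

def step(x, rng):
    # hypothetical toy dynamics: random walk on {0,1,2}, killed w.p. 0.4
    return None if rng.random() < 0.4 else (x + rng.choice([-1, 1])) % 3

def revival_kernel(x, rng):
    # hypothetical revival law K_0(x, .): restart uniformly on the state space
    return rng.choice([0, 1, 2])

def instant_revival(step, kernel, x0, n_revivals, rng):
    """Concatenate n_revivals copies of one killed chain: whenever the current
    copy dies, a new copy starts at a point drawn from the revival kernel."""
    path, x = [x0], x0
    for _ in range(n_revivals):
        while (nxt := step(x, rng)) is not None:
            path.append(nxt)
            x = nxt
        x = kernel(x, rng)   # revival: memoryless restart
        path.append(x)
    return path

rng = random.Random(1)
path = instant_revival(step, revival_kernel, 0, n_revivals=5, rng=rng)
```

Since all copies live on the same space, this is exactly the situation where the counting coordinate of the construction above is projected away.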

Concatenation of right processes
In this section, let (X n , n ∈ N) be a sequence of right processes on disjoint state spaces (E n , n ∈ N), and for each n ∈ N, let a transfer kernel K n from X n to (X n+1 , E n+1 ) be given. The objective is to give a rigorous construction of the concatenation and to prove Theorem 1.4, which will be done incrementally by lifting the concatenation of finitely many processes to the countable case.

Concatenation of two processes
Carrying out the specification given in section 1.3 for the case of two processes, we define the concatenated process X of X^1 and X^2 via the transfer kernel K := K_1 on the sample space Ω := Ω_1 × Ω_2 by

    X_t(ω_1, ω_2) := X^1_t(ω_1) if t < ζ^1(ω_1),  and  X_t(ω_1, ω_2) := X^2_{t−ζ^1(ω_1)}(ω_2) if t ≥ ζ^1(ω_1),

as well as introduce a family of operators (Θ_t, t ≥ 0) on Ω, defined by

    Θ_t(ω_1, ω_2) := (Θ^1_t ω_1, ω_2) if t < ζ^1(ω_1),  and  Θ_t(ω_1, ω_2) := ([∆_1], Θ^2_{t−ζ^1(ω_1)} ω_2) if t ≥ ζ^1(ω_1).

We use the transfer kernel K to concatenate the processes X^1 and X^2 probabilistically by giving a transition between the distributions (P^1_x, x ∈ E_1) and (P^2_y, y ∈ E_2), namely, for x ∈ E_1,

    P_x(dω_1, dω_2) := P^1_x(dω_1) K(ω_1, dy) P^2_y(dω_2).

The main result for the concatenation X of two processes X^1 and X^2 via the transfer kernel K is as follows:

Theorem 2.1 ([16, Theorem (14.8)]). The concatenation X of X^1 and X^2 via the transfer kernel K is a right process on E_1 ∪ E_2.

This theorem is proved in detail in [16, Theorem (14.8)] by an examination of the resolvent and of the excessive functions of the resulting concatenated process X. We give a short sketch:
Using Dynkin's formula (1.1) for decomposing the resolvent (U_α, α > 0) of X at the revival time R (which a.s. coincides with the terminal time ζ^1 of X^1), one obtains, for x ∈ E_1 and f ∈ bE,

    U_α f(x) = U^1_α f_1(x) + E^1_x(e^{−αζ^1} K U^2_α f_2),

with f_n := f 1_{E_n}, n ∈ {1, 2}. An extensive analysis of the above components, utilizing the strong Markov property of X^1 and X^2 as well as the properties of the transfer kernel K, then shows the Laplace-transformed equivalent of the Markov property for X. But U^2_α f_2 is α-excessive for X^2, and both U^1_α f_1 and, by the shift properties of the transfer kernel K, the function x ↦ E^1_x(e^{−αζ^1} K U^2_α f_2) are α-excessive for X^1. As X^1 and X^2 satisfy (HD2), it is immediate from the above decomposition that t ↦ U_α f(X_t) is a.s. right continuous, which yields (HD2) for X.
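The resolvent decomposition just sketched can be checked numerically in a finite toy model (ours, not the paper's setting: a hypothetical discrete-time analogue in which substochastic matrices play the role of the killed semigroups, the resolvent is U_α f = Σ_{j≥0} e^{−αj} Q^j f, and all numerical data is made up):

```python
import math

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def resolvent(M, f, q, steps=300):
    # U_a f = sum_{j>=0} e^{-a j} M^j f, truncated (the tail is geometrically small)
    out, v = [0.0] * len(f), list(f)
    for j in range(steps):
        for i in range(len(f)):
            out[i] += (q ** j) * v[i]
        v = matvec(M, v)
    return out

a = 0.7                           # alpha
q = math.exp(-a)

Q1 = [[0.3, 0.4], [0.2, 0.3]]     # substochastic kernel of X^1 on E_1 = {0,1}
Q2 = [[0.5, 0.1], [0.3, 0.4]]     # substochastic kernel of X^2 on E_2 = {2,3}
d1 = [1 - sum(row) for row in Q1] # death probabilities of X^1
k  = [[0.6, 0.4], [0.1, 0.9]]     # transfer law k(x, .) on E_2, as in Example 1.5
f1, f2 = [1.0, 2.0], [3.0, 1.0]   # f restricted to E_1 and to E_2

# Concatenated chain on E_1 u E_2: dying at x in E_1 means jumping into E_2
# according to k(x, .); the set E_2 is never left again.
Q = [[Q1[i][0], Q1[i][1], d1[i] * k[i][0], d1[i] * k[i][1]] for i in range(2)] \
  + [[0.0, 0.0] + Q2[i] for i in range(2)]

lhs = resolvent(Q, f1 + f2, q)[:2]          # U_a f, evaluated on E_1

# Decomposition U_a f = U^1_a f_1 + E^1_x[e^{-a zeta^1} (k U^2_a f_2)(X^1_{zeta^1-})]:
# here E^1_x[e^{-a zeta^1} g(revival pt)] = sum_j q^{j+1} (Q1^j diag(d1) k g)(x).
U2f2 = resolvent(Q2, f2, q)
w = [q * d1[i] * sum(k[i][z] * U2f2[z] for z in range(2)) for i in range(2)]
rhs = [u + e for u, e in zip(resolvent(Q1, f1, q), resolvent(Q1, w, q))]
```

In this discrete-time caricature the two sides agree exactly (up to the negligible truncation of the series), mirroring the decomposition of U_α at the revival time R.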

Concatenation of finitely many processes
Next, we consider, for fixed m ∈ N, the concatenation of the right processes X^1, …, X^m via the transfer kernels K_1, …, K_{m−1}: For every n ∈ {1, …, m}, set E^{(n)} := ⋃_{j=1}^n E_j as the topological union of the spaces (E_j, j ∈ {1, …, n}), as well as E := E^{(m)}. Directly extending the construction of section 2.1, we define the concatenated process X on the sample space Ω := Ω_1 × ⋯ × Ω_m by setting, for ω = (ω_1, …, ω_m) ∈ Ω, t ≥ 0,

    X_t(ω) := X^n_{t−(ζ^1(ω_1)+⋯+ζ^{n−1}(ω_{n−1}))}(ω_n),  if ζ^1(ω_1)+⋯+ζ^{n−1}(ω_{n−1}) ≤ t < ζ^1(ω_1)+⋯+ζ^n(ω_n), n ∈ {1, …, m},

and X_t(ω) := ∆ otherwise. Furthermore, we introduce a family of operators (Θ_t, t ≥ 0) on Ω, defined analogously to section 2.1 by shifting the currently running coordinate and replacing the exhausted coordinates by the dead paths. The formal proof that (Θ_t, t ≥ 0) is indeed a family of shift operators for (X_t, t ≥ 0) is a straightforward computation with the help of the shift property (1.2) of the lifetime. As in the construction for two processes in section 2.1 above, we use the transfer kernels (K_n, n ∈ {1, …, m−1}) to concatenate the separate measures (P^n_x, x ∈ E_n), n ∈ {1, …, m}, of the partial processes (X^n, n ∈ {1, …, m}). For every x ∈ E, we define the measure P_x on F by setting, for x ∈ E_n, H ∈ bF,

    P_x(H) := ∫ δ_{[∆_1]}(dω_1) ⋯ δ_{[∆_{n−1}]}(dω_{n−1}) P^n_x(dω_n) K_n(ω_n, dx_{n+1}) P^{n+1}_{x_{n+1}}(dω_{n+1}) ⋯ K_{m−1}(ω_{m−1}, dx_m) P^m_{x_m}(dω_m) H(ω).

The n-th revival time of X reads R_n := inf{t ≥ 0 : X_t ∈ E_{n+1}}, which is a terminal time, as X is right continuous by construction and every subspace E_{n+1} is isolated in E.
The extension of Theorem 2.1 to the finite concatenation X of X^1, …, X^m via the transfer kernels K_1, …, K_{m−1} then reads as follows:

Theorem 2.2. The concatenation X of the right processes X^1, …, X^m via the transfer kernels K_1, …, K_{m−1} is a right process on E = E^{(m)}.

We will prove this theorem iteratively, that is, by assuming that the concatenation X^{(n)} of the processes X^1, …, X^n via the transfer kernels K_1, …, K_{n−1} is already a right process for any fixed n ∈ {1, …, m−1}, and then applying Sharpe's result (Theorem 2.1) in order to concatenate X^{(n)} with X^{n+1} via the transfer kernel K_n. Before doing this, we need to lift the transfer kernels K_n from X^n (to (X^{n+1}, E_{n+1})) to transfer kernels from X^{(n)} (to (X^{n+1}, E_{n+1})). We begin with a general result on stopping times:

Lemma 2.3. Let X be a right continuous strong Markov process, and let S, T be stopping times over the natural filtration (F_t, t ≥ 0) such that S + T ∘ Θ_S = T. Then Θ_S^{−1}(F_{(T∘Θ_S)−}) ⊆ F_{T−}.

Proof (sketch). F_{(T∘Θ_S)−} is generated by countable unions of sets of the form A ∩ {t < T ∘ Θ_S} with A ∈ F_t, and Θ_S^{−1}(A ∩ {t < T ∘ Θ_S}) = Θ_S^{−1}(A) ∩ {t + S < T}. As Θ_S^{−1}(A) ∈ F_{t+S}, we see that, by the definition of F_{t+S}, the inner term satisfies the defining property of F_{T−}. So every set of the countable union above is an element of F_{T−}.
The lifted kernel is K^{(n)} := K_n ∘ π_n, with π_n: Ω^{(n)} → Ω_n being the canonical projection onto the last coordinate; we claim that K^{(n)} is a transfer kernel from X^{(n)} to (X^{n+1}, E_{n+1}).

Proof. Obviously, K_n ∘ π_n is a probability measure in the second argument, because K_n is a Markov kernel. In order to show the F^{(n)}[ζ^{(n)}−]-measurability of K_n ∘ π_n(·, dy), we start by observing that π_n^{−1}(F^n_{ζ^n−}) ⊆ F^{(n)}_{ζ^{(n)}−}. This can be seen by the following argument: The σ-algebra F^n_{ζ^n−} is generated by the random variables f(X^n_t) 1_{t<ζ^n}, f ∈ bE_n, and these functions, extended to Ω^{(n)}, are F^{(n)}_{ζ^{(n)}−}-measurable. It remains to prove that the shift invariance also lifts from K_n to K_n ∘ π_n: Fix t ≥ 0 and let N^n be a null set in F^n such that, for all ω_n ∉ N^n, the shift invariance K_n(Θ^n_t ω_n, ·) = K_n(ω_n, ·) holds on {t < ζ^n}. But then N^{(n)} := (π_n)^{−1}(N^n) is a null set in F^{(n)}, and for all ω = (ω_1, …, ω_n) ∉ N^{(n)} (thus, ω_n ∉ N^n), we have for t < ζ^{(n)}(ω):

    K_n ∘ π_n(Θ^{(n)}_t ω, dy) = K_n(π_n(Θ^{(n)}_t ω), dy) = K_n(ω_n, dy) = K_n ∘ π_n(ω, dy),

where we used the shift invariance of K_n for the middle identity (π_n ∘ Θ^{(n)}_t equals either π_n or Θ^n_s ∘ π_n for an appropriate s < ζ^n(ω_n)).
We are ready to prove the extension of Theorem 2.1 to finitely many processes: Proof of Theorem 2.2. The case m = 2 is already proved, see Theorem 2.1.
Assume now that, for some m ∈ N, the process X^{(m)} resulting from the concatenation of X^1, …, X^m via the transfer kernels K_1, …, K_{m−1} is a right process and satisfies the revival formula (2.2) for all n ∈ {1, …, m−1}, x ∈ E^{(n)}, f ∈ bE_{n+1}, with R^{(n)} := inf{t ≥ 0 : X^{(m)}_t ∈ E_{n+1}}. Let X^{(m+1)} be the concatenation of X^{(m)} and X^{m+1} via the transfer kernel K^{(m)} := K_m ∘ π_m. By the pathwise definitions at the beginning of sections 2.1 and 2.2, X^{(m+1)} is equal to the process X arising from the concatenation of X^1, …, X^m, X^{m+1} via the transfer kernels K_1, …, K_{m−1}, K_m. In particular, the initial measures P^{(m+1)}_x, P_x of X^{(m+1)}, X respectively, coincide for all x ∈ E^{(m+1)}. Now Theorem 2.1 states that X = X^{(m+1)} is a right process and that, with the revival time R_m = inf{t ≥ 0 : X_t ∈ E_{m+1}} =: R^{(m)}, it satisfies the revival formula at R^{(m)}, with π^{(m)}: Ω → Ω_1 × ⋯ × Ω_m. Assumption (2.2) for X^{(m)} concludes the proof, as we then obtain the revival formula at R_n for n ∈ {1, …, m−1} as well. Here, the equality of both conditional expectations is seen as follows: Because R_n = R^{(n)} ∘ π^{(m)} and X_t = X^{(m)}_t ∘ π^{(m)} hold for all t < R^{(m)}, we have X_{R_n} = X^{(m)}_{R_n} ∘ π^{(m)}. The σ-algebras F_{R_n−} and F^{(m)}_{R^{(n)}−} are generated by the multiplicatively closed classes of functions

    J := f_1(X_{t_1}) ⋯ f_k(X_{t_k}) 1_{t_k < R_n},   J^{(m)} := f_1(X^{(m)}_{t_1}) ⋯ f_k(X^{(m)}_{t_k}) 1_{t_k < R^{(n)}},   0 ≤ t_1 < ⋯ < t_k, f_1, …, f_k ∈ bE,

and it is immediate that J = J^{(m)} ∘ π^{(m)}. Therefore, the integrals of both functions are the same (over their respective spaces), which yields the claimed equality.

Concatenation of countably many processes
We are ready to turn to the concatenation of the processes (X^n, n ∈ N) via the transfer kernels (K_n, n ∈ N): We assume the topological union E = ⋃_{n∈N} E_n of the disjoint spaces (E_n, n ∈ N) to be a Radon space. For instance, this is the case if the spaces E_n, n ∈ N, are Lusin, see [15, Corollary to Lemma II.5]. Adjoin a point ∆ ∉ E as a new, isolated point and form E_∆ := E ∪ {∆}.
Set F := ⊗_{n∈N} F^n, and introduce the measures (P_x, x ∈ E) on (Ω, F) by prescribing a transition between the subprocesses' distributions (P^n_x, x ∈ E_n), n ∈ N, via the transfer kernels (K_n, n ∈ N). To this end, we define the measures (P_x, x ∈ E) as projective limits of the following prescriptions: For any m ∈ N, H ∈ b(F^1 ⊗ ⋯ ⊗ F^m) and x ∈ E_n with n ≤ m,

    P_x(H) := ∫ δ_{[∆_1]}(dω_1) ⋯ δ_{[∆_{n−1}]}(dω_{n−1}) P^n_x(dω_n) K_n(ω_n, dx_{n+1}) P^{n+1}_{x_{n+1}}(dω_{n+1}) ⋯ K_{m−1}(ω_{m−1}, dx_m) P^m_{x_m}(dω_m) H(ω_1, …, ω_m).
A straightforward calculation shows that the above prescriptions are consistent and therefore, by the Kolmogorov existence theorem, yield probability measures on (Ω, F).
We are going to prepare the main method for the proof that X is a right process. A stability result for right processes, which will be made rigorous in Lemma 2.5 below, states the following: Assume we are given a stochastic process X and an increasing sequence of terminal times (R_n, n ∈ N). If the process X killed at R_n is a right process for every n ∈ N, then X killed at R := lim_n R_n is a right process as well. This result is directly applicable in our context, because, for every n ∈ N, the concatenated process X killed at the n-th revival time R_n is just the finite concatenation of X^1, …, X^n via K_1, …, K_{n−1}, which is a right process by the results of section 2.2. Thus, X killed at lim_n R_n = Σ_n ζ^n (which equals X by construction) is proved to be a right process.
Lemma 2.5. Let (X_t, t ≥ 0) be a right continuous stochastic process with values in a Radon space E, defined on a measurable space (Ω, F) carrying a family of probability measures (P_x, x ∈ E), let (R_n, n ∈ N) be an increasing sequence of random times with R := sup_{n∈N} R_n, and let (E^{R,n}, n ∈ N) be an increasing sequence of Radon spaces. Define the processes (X^{R,n}_t, t ≥ 0), n ∈ N, and (X^R_t, t ≥ 0) on Ω by

    X^{R,n}_t := X_t 1_{t<R_n} + ∆ 1_{t≥R_n},   X^R_t := X_t 1_{t<R} + ∆ 1_{t≥R}.

Then, with (F^R_t, t ≥ 0) being the natural filtration of X^R and (Θ^R_t, t ≥ 0) being an arbitrary family of shift operators for X, the process X^R := (Ω, F, (F^R_t)_{t≥0}, (X^R_t)_{t≥0}, (Θ^R_t)_{t≥0}, (P_x)_{x∈E}) is a right process on E, if the following conditions are fulfilled:
(i) (R_n, n ∈ N) is a sequence of stopping times over (F^R_t, t ≥ 0);
(ii) (E^{R,n}, n ∈ N) increases to E, that is, ⋃_{n∈N} E^{R,n} = E;
(iii) for each n ∈ N, there exist a filtration (F^{R,n}_t, t ≥ 0) on (Ω, F) and a family of operators (Θ^{R,n}_t, t ≥ 0) on Ω, such that X^{R,n} := (Ω, F, (F^{R,n}_t)_{t≥0}, (X^{R,n}_t)_{t≥0}, (Θ^{R,n}_t)_{t≥0}, (P_x)_{x∈E^{R,n}}) is a right process on E^{R,n};
(iv) for each n ∈ N, R_n is a terminal time for the process X^{R,n}, satisfying R_n > 0 P_x-a.s. for all x ∈ E^{R,n}.
Proof. The process X^R is normal, because for any x ∈ E, with n ∈ N such that x ∈ E^{R,n}, the normality of X^{R,n} together with R_n > 0 P_x-a.s. gives P_x(X^R_0 = x) = P_x(X^{R,n}_0 = x) = 1. Turning to the Markov property of X^R, let s, t ≥ 0 and f ∈ bE. For any k ∈ N, 0 = t_0 < t_1 < t_2 < ⋯ < t_k ≤ t, g_0, g_1, …, g_k ∈ bE, set

    J^R := g_0(X^R_{t_0}) g_1(X^R_{t_1}) ⋯ g_k(X^R_{t_k}),   J^{R,n} := g_0(X^{R,n}_{t_0}) g_1(X^{R,n}_{t_1}) ⋯ g_k(X^{R,n}_{t_k}),  n ∈ N.
As the set of functions of the type J^R forms a multiplicatively closed generator of bF^R_t, and as E_{X^R_t} f(X^R_s) is measurable with respect to the natural filtration (F^R_t, t ≥ 0), it suffices to show that

    E_x(f(X^R_{s+t}) J^R) = E_x(E_{X^R_t}(f(X^R_s)) J^R).

We start by observing that {s+t < R} = ⋃_n {s+t < R_n} and X^R_{s+t} = X^{R,n}_{s+t} on {s+t < R_n}, so Lebesgue's dominated convergence theorem yields

    E_x(f(X^R_{s+t}) J^R; s+t < R) = lim_n E_x(f(X^{R,n}_{s+t}) J^{R,n}; s+t < R_n).

By next employing both the terminal time property and the stopping time property of R_n with respect to X^{R,n}, we are able to apply the Markov property of X^{R,n}, which yields

    E_x(E_{X^{R,n}_t}(f(X^{R,n}_s); s < R_n) J^{R,n}; t < R_n),

and by carrying out the above steps in reverse order, we conclude the asserted identity. It remains to verify that t ↦ f(X^R_t) is a.s. right continuous for all α-excessive functions f. To this end, let S^α(X^{R,n}), S^α(X^R), α > 0, be the sets of all α-excessive functions, T^n_t, T^R_t, t ≥ 0, be the transition operators, and U^n_α, U^R_α, α > 0, be the α-potential operators of the processes X^{R,n}, X^R respectively, that is,

    T^n_t g = E_·(g(X^{R,n}_t)),  T^R_t g = E_·(g(X^R_t)),  U^n_α g = ∫_0^∞ e^{−αt} T^n_t g dt,  U^R_α g = ∫_0^∞ e^{−αt} T^R_t g dt.

Given f ∈ S^α(X^R), choose bounded functions (h_m, m ∈ N) whose α-potentials increase to f, that is, U^R_α h_m ↑ f. Of course, U^R_α h_m is in S^α(X^R) (see, e.g., [2, Proposition 2.2]). However, we are going to prove now that this potential, as a function restricted to E^{R,n}, is also in S^α(X^{R,n}). As X^{R,n} is a subprocess of X^R, we have, for all non-negative g,

    T^n_t g = E_·(g(X^R_t); t < R_n).
The Markov property of X^R and the stopping time property of R_n with respect to X^R imply that this expression is dominated by U^R_α h_m. Therefore, we have e^{−αt} T^n_t U^R_α h_m ≤ U^R_α h_m for all t ≥ 0, and because R_n > 0 holds P_x-a.s. for all x ∈ E^{R,n}, Levi's monotone convergence theorem yields that U^R_α h_m, restricted to E^{R,n}, lies in S^α(X^{R,n}). We are now able to conclude that X^R satisfies (HD2): We have just seen that, for any f ∈ S^α(X^R), the restriction of f to E^{R,n} is α-excessive for X^{R,n} for all n ∈ N, so, as X^{R,n} is a right process, the map t ↦ f(X^{R,n}_t) is a.s. right continuous for each n ∈ N. With X^R_t = X^{R,n}_t on {t < R_n}, lim_n R_n = R and f(∆) = 0, we immediately get that t ↦ f(X^R_t) is a.s. right continuous.
Let X be the concatenation of the right processes (X^n, n ∈ N) via the transfer kernels (K_n, n ∈ N), as constructed above, and let (R_n, n ∈ N) be the revival times of X. As announced, we are going to apply Lemma 2.5 with X^{R,n} being the subprocesses of X killed at the revival times R_n, that is, we consider, for all ω = (ω_1, ω_2, …) ∈ Ω, t ≥ 0,

    X^{R,n}_t(ω) := X_t(ω) 1_{t<R_n(ω)} + ∆ 1_{t≥R_n(ω)}.   (2.3)

We first need to show that the subprocesses X^{R,n}, n ∈ N, fulfill the requirements of Lemma 2.5. In particular, they are right processes:

Lemma 2.6. For every n ∈ N, the process X^{R,n}, with (F^{R,n}_t, t ≥ 0) being its natural filtration, is a right process on the state space E^{(n)} = E_1 ∪ ⋯ ∪ E_n.

Proof. Let X^{(n)}, with measures (P^{(n)}_x)_{x∈E^{(n)}}, be the concatenation of X^1, …, X^n via the transfer kernels K_1, …, K_{n−1}. Then X^{(n)} is a right process on E^{(n)} by Theorem 2.2.
Consider the canonical projection π^{(n)}: Ω → Ω^{(n)}. By checking the decomposition (2.3) and the definition of X^{(n)} in section 2.2, it is evident that X^{R,n}_t = X^{(n)}_t ∘ π^{(n)} for all t ≥ 0. The definitions of the measures P_x, P^{(n)}_x for the countable and finite concatenations yield that P_x ∘ (π^{(n)})^{−1} = P^{(n)}_x for all x ∈ E^{(n)}. Thus, X^{R,n} and X^{(n)} have the same finite dimensional distributions (with respect to their corresponding measures P and P^{(n)}). This easily transfers the normality and Markov property from X^{(n)} to X^{R,n}. Turning to (HD2) for X^{R,n}, we observe that the α-excessive functions of X^{(n)} and X^{R,n} coincide, as the transition operators of both processes agree.

Figure 3: Construction of the pasting of two subprocesses X^{−1}, X^{+1} on E^{−1}, E^{+1}, via concatenation of alternating subprocess copies on (2N−1) × E^{−1}, 2N × E^{+1} respectively, and subsequent projection onto E^{−1} ∪ E^{+1}.

Application to pasting
As described in section 1.4, we achieve the pasting of two right processes X −1 , X +1 on non-disjoint spaces E −1 , E +1 by introducing a counting coordinate, defining copies of the two processes on the disjoint spaces {n} × E (−1) n , n ∈ N, concatenating these processes to a process X on N × (E −1 ∪ E +1 ), and then discarding the first coordinate by projecting to π(X), see Figure 3. We now need to ensure that π(X) is a right process.

Mapping of the state space
In general, the state space transformation ψ(X) of a (strong/right) Markov process X on a state space E to a new state space Ê via a surjective mapping ψ: E → Ê does not yield a (strong/right) Markov process. Heuristically speaking, the original process X needs to "behave identically" on points of E that are mapped together by ψ. A classical consistency condition which salvages the Markov property of ψ(X) is found, e.g., in [4, Theorem 10.13]; it reads: for all x, x′ ∈ E with ψ(x) = ψ(x′), t ≥ 0, and B ∈ Ê,

    P_x(ψ(X_t) ∈ B) = P_{x′}(ψ(X_t) ∈ B).

In the context of right processes the result is almost the same, flavored only by some measurability conditions. It is found in [16, Theorem (13.5)]:

Theorem 3.1 ([16, Theorem (13.5)]). Let X, with measures (P_x, x ∈ E), be a right process on a Radon space E with semigroup (T_t, t ≥ 0) and resolvent (U_α, α > 0). Let (Ê, Ê) be a Radon space and ψ: E → Ê be a mapping satisfying suitable measurability conditions together with the fundamental condition

    (iii) for all t ≥ 0 and f ∈ bÊ from a suitable generating class, there exists f_t ∈ bÊ with T_t(f ∘ ψ) = f_t ∘ ψ.

Define the transformed process Y_t := ψ(X_t), t ≥ 0, on Ω̂ := {ω ∈ Ω : t ↦ ψ(X_t(ω)) is right continuous in Ê}, equipped with shift operators Θ̂_t := Θ_t, t ≥ 0, on Ω̂, and σ-algebras F̂^0, (F̂^0_t, t ≥ 0) generated by Y, and choose measures P̂_y, y ∈ Ê, by

    P̂_y := P_x on F̂^0, for x ∈ E with ψ(x) = y ∈ Ê.   (3.1)

Furthermore, let F̂, (F̂_t, t ≥ 0) be the usual completion and augmentation of F̂^0, (F̂^0_t, t ≥ 0) respectively, relative to the family (P̂_y, y ∈ Ê). Then Y is a right process on Ê.
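A classical illustration of this consistency condition (an example of ours, not from the text): take Brownian motion B on E = ℝ and ψ(x) = |x|, Ê = [0, ∞). By the symmetry of Brownian motion,

```latex
P_x\bigl(|B_t| \in A\bigr) = P_{-x}\bigl(|B_t| \in A\bigr),
\qquad t \ge 0,\ A \in \mathcal{B}\bigl([0,\infty)\bigr),
```

so the points x and −x, which ψ maps together, are indistinguishable through ψ, and ψ(B) = |B| is again a (strong) Markov process, the reflecting Brownian motion on [0, ∞).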
As usual, property (iii) can be extended to all functions f ∈ bÊ by using the monotone class theorem and standard completion arguments (see [16, Remarks (13.6)]). Because of this property, the definition of the measures P̂_y on F̂^0 in (3.1) is independent of the representatives chosen for y = ψ(x), x ∈ E: For any f ∈ bÊ, t ≥ 0, we have

    Ê_y(f(Y_t)) = E_x(f ∘ ψ(X_t)) = T_t(f ∘ ψ)(x) = f_t(ψ(x)) = f_t(y),

which does not depend on the choice of x ∈ ψ^{−1}({y}). Typically, the fundamental condition (iii) must be verified manually. There is a Laplace-transformed version of this condition, which sometimes is easier to control, and which is more suitable in our context:

Theorem 3.2. In the setting of Theorem 3.1, condition (iii) may be replaced by the following one: For all α > 0 and f ∈ bÊ, there exists f_α ∈ bÊ such that U_α(f ∘ ψ) = f_α ∘ ψ.

Proof of Theorem 1.6. π is clearly surjective. It is measurable, as the preimage of any B ∈ E reads π^{−1}(B) = ⋃_{n∈N} {n} × (B ∩ E^{(−1)^n}). The right process X is right continuous and the projection π is continuous, so π(X) is right continuous as well. By Theorem 3.2, it therefore suffices to prove that for all α > 0, f ∈ bE, there exists f_α ∈ bE such that U_α(f ∘ π) = f_α ∘ π holds true. As the process X is constructed of alternating copies, we look at cycles of two revivals, that is, we examine, for (n, x) ∈ E, the contributions to U_α(f ∘ π)(n, x) accumulated over successive pairs of revivals. For m = 0, we decompose the partial resolvent at the revival time R_n and obtain, by employing the terminal time property of R_{n+1}, the strong Markov property of X at R_n, and the revival formula of Theorem 1.4, a representation in terms of the original processes X^{−1}, X^{+1} and the kernels K^{−1}, K^{+1}. For general m ∈ N_0, we show inductively that the contribution of the m-th cycle equals

    g^{(−1)^n}_m(x)   (3.2)

with g^{−1}_m ∈ bE^{−1}, g^{+1}_m ∈ bE^{+1} being independent of n ∈ N. The case m = 0 is already done. Assuming that (3.2) is proved for some m ∈ N_0, the claim for m+1 follows by the same course of action as above, together with the definitions of the transfer kernels K^n: conditioning at the first two revival times produces the factors 1_{ζ^{(−1)^n}<∞} e^{−αζ^{(−1)^n}} and the kernels K^{(−1)^n}, K^{(−1)^{n+1}}, after which the induction hypothesis applies.
    U_α(f ∘ π)(n, x) = E_{(n,x)}(∫_0^{τ_{−1}∧τ_{+1}} e^{−αt} f ∘ π(X_t) dt) + E_{(n,x)}(e^{−α(τ_{−1}∧τ_{+1})} U_α(f ∘ π)(X_{τ_{−1}∧τ_{+1}})).

Here, τ_{−1} ∧ τ_{+1} is the exit time of the process X from E^{−1} ∩ E^{+1}. The above formula will turn out to be independent of n if the process's behavior on E^{−1} ∩ E^{+1} and its exit/entry behavior into E\(E^{−1} ∩ E^{+1}) (represented by e^{−α(τ_{−1}∧τ_{+1})} and X_{τ_{−1}∧τ_{+1}}) are independent of n. It has already been shown that this is the case for all odd-numbered n, and for all even-numbered n. It remains to compare the odd-numbered and even-numbered starting processes, that is, the behavior of the original processes X^{−1} and X^{+1} together with the transfer kernels K^{−1} and K^{+1}: For odd-numbered n_o ∈ 2N−1, the starting process is X^{(−1)^{n_o}} = X^{−1}, living on E^{−1}, so the process π(X) started at (n_o, x) only enters E^{+1}\E^{−1} when the first subprocess dies. Therefore, τ_{−1} ∧ τ_{+1} = τ_{−1} ∧ R_{n_o} holds true in this case, and using Dynkin's formula (1.1) again, we obtain the corresponding decomposition at τ_{−1} ∧ R_{n_o}, where R_{n_o} ≤ τ_{−1} can be replaced by R_{n_o} < τ_{−1}, as equality only occurs if R_{n_o} = ∞.