Probability computation for high-dimensional semilinear SDEs driven by isotropic $\alpha$-stable processes via mild Kolmogorov equations

Semilinear, $N$-dimensional stochastic differential equations (SDEs) driven by additive L\'evy noise are investigated. Specifically, given $\alpha\in\left(\frac{1}{2},1\right)$, the interest is in SDEs driven by $2\alpha$-stable, rotation-invariant processes obtained by subordination of a Brownian motion. An original connection between the time-dependent Markov transition semigroup associated with their solutions and Kolmogorov backward equations in mild integral form is established via regularization-by-noise techniques. This link is the starting point for an iterative method which makes it possible to approximate probabilities related to the SDEs with a single batch of Monte Carlo simulations as several parameters change, bringing a compelling computational advantage over the standard Monte Carlo approach. The method also applies to the numerical computation of solutions to high-dimensional integro-differential Kolmogorov backward equations. The scheme, and in particular the first-order approximation it provides, is then applied to two nonlinear vector fields and shown to offer satisfactory results in dimension $N=100$.


Introduction
In this paper, we are concerned with the study of quantities related to the $N$-dimensional, semilinear stochastic differential equation (SDE) (1), with a specific interest in the case of high $N$. Here, given $\alpha \in \left(\frac{1}{2},1\right)$, $L$ is an $\alpha$-stable subordinator (i.e., an increasing Lévy process) independent of $(\beta_n)_{n=1,\dots,N}$, which in turn are independent Brownian motions; we write $W = (\beta_1, \dots, \beta_N)$. All these processes are defined on a common complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$, which we endow with the minimal augmented filtration generated by the subordinated Brownian motion $W_L$. Moreover, $T > 0$ is a finite time horizon and $s \in [0,T]$ is the initial time. As for $A, Q \in \mathbb{R}^{N \times N}$, they are diagonal matrices, with $A$ negative-definite and $Q$ positive-definite. For our numerical experiments we will consider $Q = \sigma^2 \mathrm{Id}$, where $\mathrm{Id} \in \mathbb{R}^{N \times N}$ is the identity matrix, so that $\sigma > 0$ is a parameter describing the strength of the noise. Finally, the nonlinear bounded vector field $B_0 \colon [0,T] \times \mathbb{R}^N \to \mathbb{R}^N$ is subject to suitable regularity conditions, specified in the sequel, which guarantee, among other things, the existence of a pathwise unique solution of (1); it will be denoted by $X^{s,x} = (X^{s,x}_t)_{t \in [s,T]}$. Connected to the SDE in (1), we have the Kolmogorov backward equation (2), where $\phi \colon \mathbb{R}^N \to \mathbb{R}$, $D = \{z \in \mathbb{R}^N : |z| \le 1\}$ is the closed unit ball and $t \in [0,T]$ is fixed. Here $\nu(dz)$ is the Lévy measure of $W_L$ and, up to a positive multiplicative constant, is of the form $\nu(dz) = |z|^{-(N+2\alpha)}\,dz$ (see, e.g., [20, Theorem 30.1]). The link between the equations in (1) and (2) is provided by Theorem 7 (ii) below (see also the book [15] for related results), where it is shown that the time-dependent Markov transition semigroup $\mathbb{E}[\phi(X^{s,x}_t)]$ associated with (1) satisfies (2) in the closed interval $[0,t]$ for every $\phi \in C^3_b(\mathbb{R}^N)$. Moreover, we are able to extend the validity of this connection in $[0,t)$ to every function $\phi \in B_b(\mathbb{R}^N)$ through an original procedure based on regularization by noise and a mild, integral formulation of (2) (see Remark 1).
In the present work, we are precisely interested in these expected values, with particular attention to the case $\phi(x) = 1_{\{|x| > R\}}$ (for some threshold $R > 0$), where one has $\mathbb{E}[\phi(X^{s,x}_t)] = \mathbb{P}(|X^{s,x}_t| > R)$. Hence we want to describe a method which makes it possible to compute probabilities related to the solution of the SDE (1). Trying to estimate them by numerically solving the integro-differential equation (2) is a typical example of the curse of dimensionality (CoD), and since we intend to deal with a high dimension (in the simulations we take $N = 100$), this is an unfeasible way to proceed. The canonical approach to our problem is the Monte Carlo method: several paths of $X^{s,x}$ are simulated by the Euler-Maruyama scheme with a fine time step, and the final points of these trajectories are then averaged to approximate the desired expected values by virtue of the strong law of large numbers. However, if we were to follow this scheme (which is known to be free of the CoD), we would have to start the procedure over every time we change the starting point $x$, the starting time $s$, the noise strength $\sigma$ or even the nonlinearity $B_0$, a practice that is very common in a wide range of applications, including weather forecasts and calibration of financial models (see [1] and references therein). In order to overcome this setback, we aim to extend to our framework the ideas developed in the papers [9, 10] for the Gaussian case, namely we search for an iterative scheme which relies on a single bulk of Monte Carlo simulations independent of the aforementioned parameters. Specifically, to approximate the value of the iterates $v^n_s(t,x)$, $n \in \mathbb{N} \cup \{0\}$, we just need to simulate once and for all, using the Euler-Maruyama scheme, a large number of sample paths of the subordinator $L$ and of the stochastic convolution $Z^0_t = \int_0^t e^{(t-r)A}\, dW_{L_r}$, $t \in [0,T]$, which is the unique (up to indistinguishability) solution of the linear SDE $dZ^0_t = A Z^0_t\, dt + dW_{L_t}$, $Z^0_0 = 0$.
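For concreteness, the single-batch simulation just described can be sketched in a few lines of Python. The sketch samples increments of the $\alpha$-stable subordinator $L$ via Kanter's representation of one-sided stable laws and then runs an Euler-Maruyama step for the SDE; the drift `b0`, the diagonal entries `a` and all parameter values are illustrative placeholders, not the paper's actual experimental setup.

```python
import numpy as np

def stable_subordinator_increments(alpha, dt, size, rng):
    """Increments of a strictly alpha-stable subordinator over steps of length
    dt, sampled via Kanter's representation of one-sided stable laws
    (Laplace transform exp(-lambda**alpha))."""
    theta = rng.uniform(0.0, np.pi, size)
    w = rng.exponential(1.0, size)
    s = (np.sin(alpha * theta) / np.sin(theta) ** (1.0 / alpha)
         * (np.sin((1.0 - alpha) * theta) / w) ** ((1.0 - alpha) / alpha))
    return dt ** (1.0 / alpha) * s

def euler_maruyama(x0, a, b0, sigma, alpha, T, n_steps, rng):
    """One path of dX = (A X + B0(t, X)) dt + sigma dW_{L_t}, A = diag(a),
    using that, conditionally on L, W_{L_t} - W_{L_s} ~ sqrt(L_t - L_s) N(0, Id)."""
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    dL = stable_subordinator_increments(alpha, dt, n_steps, rng)
    for k in range(n_steps):
        t = k * dt
        noise = sigma * np.sqrt(dL[k]) * rng.standard_normal(x.size)
        x = x + (a * x + b0(t, x)) * dt + noise
    return x

rng = np.random.default_rng(0)
N = 4
a = -np.arange(1, N + 1) ** 2.0          # diagonal of A (negative-definite)
b0 = lambda t, x: np.tanh(x)             # a bounded placeholder nonlinearity
x_T = euler_maruyama(np.ones(N), a, b0, sigma=0.5, alpha=0.75, T=1.0,
                     n_steps=200, rng=rng)
```

In the scheme of the paper, the subordinator increments and the Gaussian draws above would be generated once and stored, so that every later query reuses the same batch.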
The main novelty of the approach that we propose lies in the structure of the noise $W_L$, which is a $2\alpha$-stable, rotation-invariant Lévy process (cf. [20, Example 30.6]). In particular, the introduction of $L$ considerably complicates the framework compared to the Brownian one treated in [9, 10]. This fact leads us to develop an original procedure, essentially based on conditioning with respect to the $\sigma$-algebra generated by the subordinator, to obtain an expression for the iterates which is suitable for applications. Moreover, the theoretical foundation of the iterative method analyzed in this work, Theorem 3, is of remarkable interest in its own right. Indeed, it establishes a connection between the time-dependent Markov transition semigroup associated with (1) and a mild, integral formulation of (2) (see Equation (11)) that, to the best of our knowledge, is new when it comes to isotropic Lévy processes.
The paper is structured as follows. Section 2 describes the setting and recalls the main concepts that will be widely used in the rest of the paper. In addition, it introduces the integral formulation of the Kolmogorov equation (2) and shows its well-posedness. Next, in Section 3 (see Theorem 3) we provide the probabilistic interpretation of (2) in mild form, along with other interesting regularization-by-noise results for SDEs driven by subordinated Wiener processes. In Section 4 we define the iterative scheme and prove its convergence to the expected values that we are trying to approximate. Next, Section 5 is concerned with the computation of the first iterate $v^1_s(t,x)$; it is divided into two subsections, referring to the deterministic and random time-shifts, respectively. Its results are used in Section 6 as the base case for the induction argument that allows us to calculate $v^n_s(t,x)$ (see Theorem 17). The last part (Section 7) is devoted to numerical experiments in dimension $N = 100$ for two choices of the nonlinear vector field $B_0$, with particular attention to the improvements provided by the first iteration over the linear approximation corresponding to the Ornstein-Uhlenbeck (hereafter OU) processes. Finally, Appendix A contains the proof of Lemma 4.

Preliminaries and Kolmogorov backward equation in mild form
Fix $N \in \mathbb{N}$ and a complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Consider $N$ independent Brownian motions $(\beta_n)_{n=1,\dots,N}$: we write $W = (\beta_1, \dots, \beta_N)$. Moreover, for $\alpha \in (0,1)$ we take a strictly $\alpha$-stable subordinator $L = (L_t)_{t \ge 0}$ independent of $(\beta_n)_n$, and denote by $\mathcal{F}^L$ the augmented $\sigma$-algebra it generates, i.e., $\mathcal{F}^L = \sigma(\mathcal{F}^L_0 \cup \mathcal{N})$, where $\mathcal{F}^L_0$ is the natural $\sigma$-algebra generated by $L$ and $\mathcal{N}$ is the family of $\mathbb{P}$-negligible sets. In other words, $L$ is an increasing Lévy process (cf. [20, Example 24.12]). Let us introduce the diagonal matrices $A, Q \in \mathbb{R}^{N \times N}$, and denote the natural filtration of $W_L$. Given $T > 0$ and a continuous function $f$, the process $Z^{s,x}$ is the OU process starting from $x$ at time $s$, i.e., the unique solution of the linear SDE (4). We denote by $R = (R_{s,t})$, $0 \le s \le t \le T$, the time-dependent Markov transition semigroup associated with this family of processes, acting on the space $B_b(\mathbb{R}^N)$ of real-valued, Borel measurable and bounded functions defined on $\mathbb{R}^N$. The Chapman-Kolmogorov equations hold for $R$, and for every $0 \le s < t \le T$ we define the associated operators. Moreover, $R_{s,t}\phi \in C^1_b(\mathbb{R}^N)$ and a gradient estimate holds with some constant $c_\alpha > 0$. In the sequel, for every $x \in \mathbb{R}^N$ and $t \in (0,T]$ we are going to need the continuity of $R_{\cdot,t}$. In order to prove this property, we first note that a variation of constants formula applied to (4) shows that the process $(Z^{s,x}_t)_{s \in [0,t]}$ is stochastically continuous (in the variable $s$). As a consequence, if $\phi \in C_b(\mathbb{R}^N)$, then we can easily deduce the continuity of $R_{\cdot,t}\phi(x)$ on $[0,t]$ by applying the continuous mapping and Vitali convergence theorems to (5). In the general case $\phi \in B_b(\mathbb{R}^N)$, one can use the same argument combined with the regularizing property of $R$ and (6) to obtain the continuity of $R_{\cdot,t}\phi(x)$ on $[0,t)$, as desired. Finally, observe that there exists a constant for which the bound (8) holds; we refer to [5, Remark 5] for a similar computation. Let us assume $\alpha \in \left(\frac{1}{2},1\right)$: in this way, denoting $\gamma = 1/(2\alpha)$, we have $\gamma \in (0,1)$, and the bound in (8) entails the estimate (10). For a given measurable and bounded vector field $B \colon [0,T] \times \mathbb{R}^N \to \mathbb{R}^N$, we are concerned with the analysis of the Kolmogorov backward equation in mild, integral form (11), where $t \in (0,T]$ and $\phi \in B_b(\mathbb{R}^N)$. We denote $\|B\|_{0,T} = \sup_{0 \le t \le T} \|B(t,\cdot)\|_\infty$. In order to study (11), for every $0 < t_1 < t_2 \le T$ we consider the Banach space $\Lambda^\gamma_1(t_1, t_2]$. When $t_1 = 0$, we are careful to remove the left endpoint of the interval $[t_1, t_2]$ in the previous definitions, so that we will be working with the space $\Lambda^\gamma_1(0, t_2]$. The following lemma proves the well-posedness of (11). We refer to [8, Theorem 9.24] for an analogous result concerning the Kolmogorov forward equation in mild form associated with OU processes in infinite dimension corresponding to Brownian motions.
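The displayed formula for (11) does not survive in the text above; for orientation, in the Gaussian setting of [9, 10] the mild, integral form of the backward equation can be written as follows. This is a reconstruction under the assumption that the drift enters through a first-order (gradient) term, which is what the singular estimate (10) and the contraction argument of Lemma 1 suggest; the exact formula in the paper may differ in notation.

```latex
u^{\phi}_{s}(t,x) \;=\; R_{s,t}\,\phi(x) \;+\; \int_{s}^{t} R_{s,r}\bigl( \langle B(r,\cdot),\, \nabla u^{\phi}_{r}(t,\cdot) \rangle \bigr)(x)\, dr,
\qquad 0 \le s < t \le T,\quad x \in \mathbb{R}^{N}.
```

In this form, the fixed-point map $\Gamma_1$ of Lemma 1 is the right-hand side restricted to a small time interval, and the integrability of the singularity $(t-r)^{-\gamma}$ with $\gamma = 1/(2\alpha) < 1$ is exactly what the assumption $\alpha \in (1/2,1)$ guarantees.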
Proof. Let us fix $\phi \in B_b(\mathbb{R}^N)$, $t \in (0,T]$, $s \in (0,t]$ and introduce the map $\Gamma_1 \colon \Lambda^\gamma_1(0,s] \to \Lambda^\gamma_1(0,s]$ given by (12) for every $V \in \Lambda^\gamma_1(0,s]$. Notice that such a map is well defined and takes values in $\Lambda^\gamma_1(0,s]$, thanks to the properties of $R$ discussed above, the dominated convergence theorem and the computations in (13), which are based on (10). Here $C = C(\alpha, A, Q) > 0$ is the same constant as in (10), and the last inequality is obtained using the bound in (14), where for the second equality we perform the substitution $u = 2t - s - r$. Estimates similar to those in (13) then lead to (15). This shows that, for $s$ sufficiently small, the map $\Gamma_1$ is a contraction on $\Lambda^\gamma_1(0,s]$: we denote by $V_1$ its unique fixed point, from which we define the solution on the first subinterval. At this point, we can repeat the same procedure to construct the solution of (11) in the interval $[t-2s, t-s]$, because the relation among constants in (15), which is necessary to get a contraction, does not depend on the initial condition. Specifically, we take the analogous map on $\Lambda^\gamma_1(t-2s, t-s]$; computations analogous to those in the previous step show that it is a contraction, and its unique fixed point is denoted by $V_2$. By the Chapman-Kolmogorov equations and Fubini's theorem, we realize that $u^\phi(t,\cdot)$ is the unique local solution of (11) in the strip considered so far. In the sequel, we simply denote it by $u^\phi(t,\cdot)$.
This argument, by steps of length $s$, can be repeated iteratively to cover the whole interval $[0,t]$ and obtain the unique global solution $u^\phi(t,\cdot)$ of (11) such that $u^\phi_{t-}(t,\cdot) \in \Lambda^\gamma_1(0,t]$. Thus, the proof is complete. If $\phi \in C^1_b(\mathbb{R}^N)$, then recalling (9) one can directly write $\nabla R_{s,t}\phi(x) = \mathbb{E}[\nabla\phi(Z^{s,x}_t)]\, e^{(t-s)A}$. Next, considering that $\|e^{(t-s)A}\| \le 1$, $0 \le s \le t \le T$, an application of (7)-(10) yields an estimate in which $C = C(\alpha, A, Q) > 0$ is the same constant as in (10). This argument can be iterated to claim that analogous higher-order bounds hold for a given integer $n \ge 2$ and regular enough $\phi$. The previous consideration allows us to extend Lemma 1. To this purpose, for an integer $n \ge 2$ and $0 < t_1 < t_2 \le T$ we introduce the Banach space $\Lambda^\gamma_n(t_1, t_2]$. As we have done before, when $t_1 = 0$ we remove the left endpoint of the interval, where $\gamma = 1/(2\alpha)$.
Proof. Take an integer $n \ge 2$; the argument parallels the one in the proof of Lemma 1, so here we only show that, for a given $\phi \in C^{n-1}_b(\mathbb{R}^N)$ and $s \in (0,t]$ sufficiently small, the map $\Gamma_1$ in (12) is well defined and a contraction. First, we note that (17) holds for every $V \in \Lambda^\gamma_n(0,s]$ and every admissible multi-index $j$. Secondly, invoking the estimates in (14) and (17), for every $0 < \bar{s} \le s$ we obtain a bound with the same constant as in (10). It then follows that $\Gamma_1 V \in \Lambda^\gamma_n(0,s]$, with an estimate which reduces to (15) when $n = 1$ and proves the contraction property of $\Gamma_1$ for $s$ small enough.

The time-dependent Markov transition semigroup
Let $\alpha \in (0,1)$ and introduce a vector field $B_0$ as above; the process $X^{s,x}$ is defined to be the unique (up to indistinguishability) solution of the semilinear stochastic differential equation (18). We denote by $P = (P_{s,t})$, $0 \le s \le t \le T$, the corresponding time-dependent Markov transition semigroup, given by $P_{s,t}\phi(x) = \mathbb{E}[\phi(X^{s,x}_t)]$. The connection between the SDE in (18) and the Kolmogorov backward equation in mild integral form (11) is provided by the following fundamental result.
where $\gamma = 1/(2\alpha)$. The purpose of this section is to develop a self-contained procedure, specific to our framework, which allows us to prove Theorem 3 via important preliminary results. In the case of time-independent nonlinearities and $f \equiv 0$ (hence for Kolmogorov forward equations in mild form), Theorem 3 is known for noises different from our $W_L$. As regards independent $\alpha$-stable Lévy processes in finite dimension, it has been established in [19, Lemma 5.12] (its proof relies on the theory of one-parameter semigroups, so it cannot be adapted to our framework). As for Brownian motions in infinite dimension, we refer to [8, Theorem 9.27].
Let $\alpha \in (0,1)$ and recall that the subordinated Brownian motion $W_L$ is an isotropic (i.e., rotation-invariant), $2\alpha$-stable, $\mathbb{R}^N$-valued Lévy process with compensator $\nu(dz) \simeq |z|^{-(N+2\alpha)}\,dz$ and no continuous martingale part (see [20, Theorem 30.1]). Here $\simeq$ denotes equality up to a positive multiplicative constant. By [18, Theorem 3.1] (see also [17]) there is a sharp stochastic flow $X^{s,x}_t$ generated by the SDE (18) which is jointly measurable in $(s,t,x,\omega)$ and, $\mathbb{P}$-a.s., simultaneously continuous in $x$ and càdlàg in $s$ and $t$. More specifically, there exists an almost-sure event $\bar{\Omega}$ such that the following facts hold for every $\omega \in \bar{\Omega}$ and $x \in \mathbb{R}^N$: from now on, we work with such a stochastic flow $X^{s,x}_t$. The next result shows that, under additional regularity requirements on $B_0$, it is differentiable with respect to $x$. Analogous claims concerning differentiability of stochastic flows can be found in the literature in, e.g., [6, Theorem 8.18] for the Brownian case and [15, Theorem 3.4.2] for the jump case, although the latter requires regularity assumptions on the coefficients which are not fulfilled in our framework. The proof, which carries out a path-by-path argument thanks to the already mentioned properties guaranteed by [18], is postponed to Appendix A. Lemma 4. Let $\alpha \in (0,1)$, let $n \ge 2$ be an integer, and suppose the corresponding regularity holds; then there exists a constant $C > 0$, depending only on $A, B_0, T, n$ and $N$, such that the stated bound holds. The previous claim implies the following result regarding persistence of regularity.
Proof. We start off by proving Point (i). Fix $0 \le s \le T$ and $x \in \mathbb{R}^N$; from (18), Gronwall's lemma, [16, Theorem 3.2] and the continuity in probability of the Lévy process, we obtain moment bounds for every $p \in (1, 2\alpha)$ together with the required properties of the process $X^{s,x}$. By the Vitali and dominated convergence theorems we then have the desired convergence, where we denote $\|DB_0\|_{T,\infty} = \sup_{0 \le t \le T} \|DB_0(t,\cdot)\|_\infty$. As for $II_n$, note that $A(r)\phi$ is continuous on $\mathbb{R}^N$ and satisfies, for every $y \in \mathbb{R}^N$, the bound in (21). Therefore, by the continuous mapping theorem and the Vitali convergence theorem we obtain $II_n \to 0$ as $n \to \infty$, proving Point (i).
We now move on to Point (ii), where it is sufficient to require $\alpha \in (0,1)$. Fix $0 \le r \le t \le T$ and observe that the claimed identity holds. More specifically, in the previous computation we are allowed to differentiate under the integral sign because of the bounds prescribed by the hypotheses. We are now in a position to prove the following crucial result concerning Kolmogorov equations (cf. [15, Theorem 4.5.1] for an analogous claim in a different setting). Theorem 7. Take $\alpha \in \left(\frac{1}{2},1\right)$.
, then the function $t \mapsto P_{s,t}\phi(x)$ is continuously differentiable on $[s,T]$ and satisfies the Kolmogorov forward equation (23), and satisfies the Kolmogorov backward equation (24). Proof. Recall that by [20, Theorem 14.7 (iii)] the process $W_L$ is centered when $\alpha \in \left(\frac{1}{2},1\right)$. As a consequence, denoting by $N$ the Poisson random measure associated with its jumps and by $\tilde{N}$ the compensated measure, from (18) an application of the Itô formula yields a decomposition with jump integrals against $N(dr,dz)$ and $\tilde{N}(dr,dz)$, which holds $\mathbb{P}$-a.s. for every $t \in [s,T]$. Taking expectations in the previous equation and using Fubini's theorem, we obtain an identity which in turn implies (23) by Lemma 6 (i).
We now focus on Point (ii). Take $0 \le t \le T$ and $x \in \mathbb{R}^N$; arguing as in [15, Proposition 3.8.2], we see that $X^{\cdot,x}_t$ follows the backward dynamics ($\mathbb{P}$-a.s.). Hence, invoking the backward Itô formula (see, e.g., [15, Theorem 2.7.1]), we deduce an identity which holds $\mathbb{P}$-a.s. for every admissible $\phi$. Taking expectations in the previous equation and using Fubini's theorem (remember Lemma 4), we obtain (25). Since by hypothesis we are working with sufficiently regular $\phi$, by Lemma 6 (ii) we can differentiate in $x$ the expression in (25), showing the continuity of the mapping $r \mapsto \nabla P_{r,t}\phi(x)$ on $[0,t]$. This, together with (21), the fact that (25) also provides the continuity of the mapping $r \mapsto P_{r,t}\phi(x)$ on $[0,t]$, and a dominated convergence argument based on Corollary 5, ensures the continuity of the function $r \mapsto A(r) P_{r,t}\phi(x)$ on the same interval. Therefore, differentiating (25) with respect to $s$ we infer (24). The proof is now complete.
Another step that we need to prove Theorem 3 consists in a regularization result for the time-dependent Markov transition semigroup $P_{s,t}$ (see Lemma 10) which, to the best of our knowledge, has not been established in the literature for this type of noise. We start by recalling the Bismut-Elworthy-Li-type formula presented in [22, Theorem 1.1] (see also [21] for a related work treating multiplicative Lévy noise); this formula is adapted to our framework, where we have to account for an initial time $s$ not necessarily equal to $0$.
Furthermore, there exists a constant $C_\alpha > 0$ such that the corresponding gradient estimate holds. We are able to extend the previous claim to functions $\phi \in C_b(\mathbb{R}^N)$ by an approximation procedure, effectively making Theorem 8 a regularization-by-noise result. We need the estimate (28), which derives from [4, Eq. (14)] and holds for some $c = c(\alpha, p) > 0$ and every $p > 0$. Then, for every $\phi \in C_b(\mathbb{R}^N)$ and $0 \le s < t \le T$, the function $P_{s,t}\phi$ is differentiable at every $x \in \mathbb{R}^N$ in every direction $h \in \mathbb{R}^N$, and the expression in (26) holds.
Note that for every $\phi \in C_b(\mathbb{R}^N)$ the expression on the right-hand side of (26) is continuous in $x$ for every $h \in \mathbb{R}^N$. Indeed, let us fix $x \in \mathbb{R}^N$ and consider $(x_n)_n \subset \mathbb{R}^N$ such that $x_n \to x$ as $n \to \infty$. Then, using the same techniques as in the previous proof (cf. (29)), together with Lemma 4 and a dominated convergence argument, we get the claim (for some $p, q > 1$ determined by a generalized Hölder inequality). Recalling (4), we introduce the family of integro-differential operators $(A(s))_{0 \le s \le T}$, defined for every $x \in \mathbb{R}^N$. Let us take $0 \le s < t$, $x \in \mathbb{R}^N$, and observe that by the definition in (21) and Corollary 5 there exists a constant $C > 0$ such that the stated bound holds. We study the mapping $[s,t] \ni r \mapsto R_{s,r}(P_{r,t}\phi)(x)$: using (25) and (30), it is easy to see that it is continuous on its domain, by Theorem 7 (ii) coupled with the Vitali and dominated convergence theorems. It is also differentiable, with derivative given by (31). Indeed, take $r \in [s,t]$ and a generic sequence $(r_n)_n \subset [s,t] \setminus \{r\}$ such that $r_n \to r$ as $n \to \infty$; then we split the difference $R_{s,r_n}(P_{r_n,t}\phi)(x) - R_{s,r}(P_{r,t}\phi)(x)$ into $I_n$ and $II_n$. We immediately notice that $II_n \to R_{s,r}(A(r)P_{r,t}\phi)(x)$ as $n \to \infty$ by Theorem 7 (i) and Corollary 5. As for $I_n$, we split it again into $III_n$ and $IV_n$. By a dominated convergence argument based on (22), (25), Corollary 5 and Theorem 7 (ii), we have $III_n \to -R_{s,r}(A(r)P_{r,t}\phi)(x)$ as $n \to \infty$. Finally, we focus on $IV_n$, which we estimate by (25). Notice that the random variables inside the expected value in the previous inequality converge to $0$ in probability as $n \to \infty$ by (30). This convergence also holds in the $L^1$ sense, thanks to the estimates in (22) and the Vitali convergence theorem. Thus $IV_n \to 0$ as $n \to \infty$, which completely proves (31).
Observe that $\partial_r R_{s,r}(P_{r,t}\phi)(x)$ is continuous on $[s,t]$ by the Vitali and dominated convergence theorems, the mean value theorem, Corollary 5 and the continuity of the mapping $r \mapsto \nabla P_{r,t}\phi(x)$ on $[s,t]$ (see (25) and the subsequent sentence). Therefore, we can integrate it with respect to $r$ over the interval $[s,t]$ and infer (32), which coincides with (11). Next, we take $\phi \in C_b(\mathbb{R}^N)$ and consider an approximating sequence. Since by (27) and Lemma 10 a uniform bound holds (for some constant $C_\alpha > 0$), by dominated convergence it is immediate to obtain the validity of (32) for $\phi$ as well. Finally, we tackle the case $\phi \in B_b(\mathbb{R}^N)$. To begin with, we consider $\phi$ to be the indicator function of an open set. Then, by Urysohn's lemma there exists a sequence $(\phi_n)_n \subset C_b(\mathbb{R}^N)$ such that $0 \le \phi_n \le \phi$ and $\phi_n \to \phi$ pointwise as $n \to \infty$. By construction and dominated convergence we obtain (33). Now we focus on the integral term in (32). Let us fix $y, h \in \mathbb{R}^N$, $r \in (s,t)$ and $u \in (r,t)$. Then, exploiting the Chapman-Kolmogorov equations and (26), we write (34) for $n \in \mathbb{N}$. Since, by the same argument as in (33), $P_{u,t}\phi_n \to P_{u,t}\phi$ pointwise in $\mathbb{R}^N$ as $n \to \infty$, and a uniform bound holds (see, e.g., (29)), we can pass to the limit in (34) by dominated convergence. Observe that the second-to-last equality in the resulting equation is due to (26) and Lemma 10. As a consequence, for every $r \in (s,t)$ we infer the desired identity, where we use once again the dominated convergence theorem, thanks to the bound obtained from (27) and Lemma 10. Moreover, this inequality also allows us to pass the limit under the integral sign, so that we end up with (35). Combining (33)-(35), we conclude that (32) holds for $\phi$, i.e., for every indicator function of an open set. Note that the passages of the previous step do not require the continuity of the approximating functions $(\phi_n)_n$, as long as they are equibounded, satisfy (32) and converge pointwise to $\phi$. Therefore, we can state that (32) holds for every $\phi \in B_b(\mathbb{R}^N)$ by the functional monotone class theorem (see, e.g., [3, Theorem 2.12.9]).
We notice that, from (32), the continuity of $P_{\cdot,t}\phi(x)$, $x \in \mathbb{R}^N$, on the interval $[0,t)$ can be argued by dominated convergence (see (41) below for an analogous computation). Furthermore, the measurability of $P_{s,t}\phi(x)$ with respect to $(s,x)$ is a consequence of the measurability of the stochastic flow $X^{s,x}_t(\omega)$ and Tonelli's theorem. These facts, together with Lemma 10 and the gradient estimate in (27), entail that $P_{t-,t}\phi(\cdot) \in \Lambda^\gamma_1(0,t]$, $\gamma = 1/(2\alpha)$. Recalling Theorem 1, the proof is complete. Remark 1. Suppose that the requirements of Theorem 3 are satisfied. Given $0 \le s < t \le T$ and $\phi \in B_b(\mathbb{R}^N)$, we consider $r \in (s,t)$ and set $\bar{\phi} = P_{r,t}\phi$. By Theorem 3 and the Chapman-Kolmogorov equations, $u^{\bar{\phi}}_s(r,x)$ is the unique solution of (11) such that $u^{\bar{\phi}}_{r-}(r,\cdot) \in \Lambda^\gamma_1(0,r]$, $\gamma = 1/(2\alpha)$. Observing that $\bar{\phi} \in C^1_b(\mathbb{R}^N)$ by Lemma 10, we invoke Corollary 2 to conclude that $P_{s,t}\phi \in C^2_b(\mathbb{R}^N)$. An iteration of this argument shows that $P_{s,t}\phi \in C^4_b(\mathbb{R}^N)$. In particular, the Kolmogorov backward equation (24) holds in the interval $[0,t)$ for every $\phi \in B_b(\mathbb{R}^N)$.

The iteration scheme
We work under assumptions ensuring that Theorem 3 holds. The proof of Theorem 1 (see, in particular, (12)-(16)) suggests approximating the unique solution $u^{u_0}_s(t,x)$ ($= P_{s,t}u_0(x)$) of (11) such that $u^{u_0}_{t-}(t,\cdot) \in \Lambda^\gamma_1(0,t]$, $\gamma = 1/(2\alpha)$, with the iterates $u^n_s(t,x)$. Here we recall that $B = B_0 - f$. If we define $v^0_s(t,x) = u^0_s(t,x)$ and $v^{n+1}_s(t,x) = u^{n+1}_s(t,x) - u^n_s(t,x)$, $n \in \mathbb{N} \cup \{0\}$, then these new functions satisfy the iteration scheme (36). In the Brownian case, (36) has been investigated in [10]. In order to study the convergence of $\sum_{n=0}^\infty v^n_s(t,x)$ to $u^{u_0}_s(t,x)$ (in a sense that will be clarified later on), we need the next preliminary result, in which $s_0 = 0$ and $s_{n+1} = t - s$.
We notice that the constant $C$ in (37)-(38) is the same as the one appearing in the gradient estimate (10).
Proof. We proceed by induction to prove that, for every $u, s \in [0,t)$ and $n \in \mathbb{N} \cup \{0\}$, the bound (39) holds, where $C = C(\alpha, A, Q) > 0$ is the same constant as in (10). In (39), $s_0 = u$ and $s_{n+1} = t$. The estimates in (37)-(38) are an immediate consequence of (39), upon shifting the domain of integration and applying Tonelli's theorem.
For $n = 0$, the claim follows from the smoothing effect of the time-dependent Markov semigroup $R$. To fix the ideas, consider the case $n = 1$. Since $k^0_{u,t} \in C_b(\mathbb{R}^N)$ for every $0 \le u < t$, the conclusion follows from the dominated convergence theorem, (36) and (40), and by (10)-(40) we get the corresponding bound. Suppose now that our statement holds at step $n \in \mathbb{N}$. Then, by the same argument as before and (39), we obtain the estimate at step $n+1$, where in the last inequality we apply the inductive hypothesis and take $s_0 = u$, $s_{n+2} = t$. Thus, the claim is completely proved.
Another important property of the functions $v^n_\cdot(t,x)$, $x \in \mathbb{R}^N$, is their continuity on the interval $[0,t)$. In the case $n = 0$, this follows from the property of $R$ discussed in Section 2; for a generic $n \in \mathbb{N}$, it can be argued by (39) and dominated convergence. Thanks to the estimates in (37)-(38), the convergence of the iteration scheme (36) is proved in the same way as in the Brownian case with no time-shift; see [9, Section 2.4]. Overall, the following result holds.
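To illustrate the computational point of the scheme, the sketch below evaluates the zeroth iterate for many different starting points and thresholds while reusing one common batch of noise samples. The array `z0` is an illustrative placeholder standing in for draws of the stochastic convolution $Z^0_t$ (which in the paper is simulated once via the subordinator and Euler-Maruyama); the shift $f$ is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_samples, t = 100, 50_000, 1.0
a = -np.arange(1, N + 1) ** 2.0            # diagonal of A

# One batch of samples standing in for the stochastic convolution Z^0_t.
# In the paper this batch is produced once by simulating L and W_L.
z0 = 0.3 * rng.standard_normal((n_samples, N))

def v0(x, R, sigma, s=0.0):
    """Zeroth (Ornstein-Uhlenbeck) approximation of P(|X^{s,x}_t| > R),
    reusing the same batch z0 for every query (x, R, sigma)."""
    mean = np.exp((t - s) * a) * x         # e^{(t-s)A} x for diagonal A
    endpoints = mean + sigma * z0          # shift and scale the common batch
    return float(np.mean(np.linalg.norm(endpoints, axis=1) > R))

# One batch answers many queries without any re-simulation:
probs = [v0(np.full(N, c), R=3.0, sigma=1.0) for c in (0.0, 0.5, 1.0)]
```

The scaling by `sigma` outside the batch is legitimate here because the noise enters the linear dynamics additively, so changing $\sigma$ only rescales $Z^0_t$; this is precisely what makes the single-batch reuse across parameters possible.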

The first term of the iteration scheme
Let $\alpha \in \left(\frac{1}{2},1\right)$. The goal of this section is to study the first term $v^1_s(t,x) = \int_s^t R_{s,u} k^0_{u,t}(x)\, du$ of (36) for every $0 \le s < t \le T$ and $x \in \mathbb{R}^N$. In particular, starting from the definition in (42), we want to find an alternative, explicit expression (see Lemma 14) for (43). In order to do this, we propose an approach which first analyzes a deterministic time-shift and then recovers the subordinated Brownian motion case by conditioning with respect to $\mathcal{F}^L$. The results of this part represent the base case for the induction argument that we will develop to compute the general term $v^{n+1}_s(t,x)$, $n \ge 1$ (see Section 6).

Deterministic time-shift
Denote by $S$ the set of real-valued, strictly increasing càdlàg functions defined on $\mathbb{R}_+$ and starting at $0$. Take $\ell \in S$ and note that $W_\ell = (W_{\ell_t})_{t \ge 0}$ is a càdlàg martingale with respect to the minimal augmented filtration it generates. For every $x \in \mathbb{R}^N$ and $0 \le s < T$, the OU process $(Z^\ell_t(s,x))_{t \in [s,T]}$ is the unique càdlàg solution of the corresponding linear SDE. It can be expressed with a variation of constants formula. For every $0 \le s < t \le T$, define $I^\ell_{s,t} = \int_s^t e^{2(t-r)A} Q\, d\ell_r \in \mathbb{R}^{N \times N}$. It is possible to argue as in [5, Equation (12)] to deduce that $Z^\ell_t(s,x) \sim N(e^{(t-s)A}x + F_{s,t},\, I^\ell_{s,t})$.
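Since $A$ and $Q$ are diagonal, the covariance $I^\ell_{s,t}$ is a coordinatewise Riemann-Stieltjes integral that can be approximated from any discretization of the increasing shift $\ell$. A small numerical sketch (the function name and the left-point rule are our choices, not the paper's):

```python
import numpy as np

def covariance_diagonal(a, q, t, grid, ell_values):
    """Diagonal of I_{s,t} = \\int_s^t e^{2(t-r)A} Q dl(r) for A = diag(a),
    Q = diag(q), approximated by a left-point Riemann-Stieltjes sum over the
    partition `grid` of [s, t], with the shift l sampled in `ell_values`."""
    r = np.asarray(grid)[:-1]                         # left endpoints
    d_ell = np.diff(ell_values)                       # increments of l (>= 0)
    weights = np.exp(2.0 * np.outer(t - r, a))        # e^{2 a_i (t - r_k)}
    return q * (weights * d_ell[:, None]).sum(axis=0)

a = np.array([-1.0, -4.0])                  # first two modes of the k^2 spectrum
q = np.array([0.25, 0.25])                  # Q = sigma^2 Id with sigma = 0.5
grid = np.linspace(0.0, 1.0, 2001)
approx = covariance_diagonal(a, q, t=1.0, grid=grid, ell_values=grid)  # l(r) = r
exact = q * (1.0 - np.exp(2.0 * a)) / (-2.0 * a)     # closed form for l(r) = r
```

With a piecewise-constant increasing $\ell$ (for instance a simulated subordinator path), the same sum applies verbatim, which is how the Gaussian conditional law given $\mathcal{F}^L$ can be evaluated in practice.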
Note that, for every $0 \le s < u < t \le T$, $Z^\ell_t(s,x) = e^{(t-u)A} Z^\ell_u(s,x) + F_{u,t} + \int_u^t e^{(t-r)A} \sqrt{Q}\, dW_{\ell_r}$, $\mathbb{P}$-a.s.; therefore $(Z^\ell(s,x))_{x \in \mathbb{R}^N}$ is a family of Markov processes as $s$ varies in $[0,T)$, with transition probability kernels $\mu_{u,t}$. In the sequel, we denote by $\varphi_{u,t}(y,\cdot)$ the density of $\mu_{u,t}(y,\cdot)$. Straightforward changes to [5, Theorem 4] ensure that, for any $0 \le s < t \le T$, the corresponding function is continuous and bounded. With all these preliminaries in mind, we fix $\ell_0 \in S$, $0 \le u < t \le T$ and define, by analogy with (42), the function $k^{\ell_0}_{u,t}$, which is continuous and bounded as well. The next claim provides us with an analogue of (43) in this framework.
Lemma 13. Consider $0 \le s < t \le T$. Then for every $x \in \mathbb{R}^N$, $u \in (s,t)$ and $\ell_0, \ell_1 \in S$, the identity (47) holds $\mathbb{P}$-a.s. Proof. Fix $x \in \mathbb{R}^N$, $0 \le s < u < t \le T$ and $\ell_0, \ell_1 \in S$; by (45) we obtain (48). Going back to (48), we conclude by performing the substitution suggested by the previous calculations.
Remark 2. The function $k^{\ell_0}_{u,t}$, $\ell_0 \in S$, $0 \le u < t \le T$, does not depend on the probability space on which the underlying OU processes $Z^{\ell_0}_t(u,x)$, $x \in \mathbb{R}^N$, are defined.

Random time-shift
Here we investigate the subordinated Brownian motion case (see Lemma 14) after some further preparation. In what follows, we denote by $\Omega_k$, $k \in \mathbb{N} \cup \{0\}$, copies of the probability space $\Omega$. Let $\mathcal{W}$ be the space of continuous functions from $\mathbb{R}_+$ to $\mathbb{R}^N$ vanishing at $0$, endowed with the Borel $\sigma$-algebra $\mathcal{B}(\mathcal{W})$ associated with the topology of locally uniform convergence. The pushforward probability measure generated by $W(\cdot) \colon (\Omega, \mathcal{F}, \mathbb{P}) \to (\mathcal{W}, \mathcal{B}(\mathcal{W}))$ is denoted by $\mathbb{P}_W$ and makes the canonical process $x = (x_t)_{t \ge 0}$ a Brownian motion. We work with the usual completion of this probability space: $x$ is still a Brownian motion with respect to its minimal augmented filtration (cf. [14, Theorem 7.9]). The completeness of the space $(\Omega, \mathcal{F}, \mathbb{P})$ implies the measurability of $W(\cdot)$ into the completed space and the fact that $\mathbb{P}_W$ is still the pushforward probability measure generated by $W(\cdot)$. Since $W(\cdot)$ is independent of $\mathcal{F}^L$, a regular conditional distribution of $W(\cdot)$ given $\mathcal{F}^L$ is $\mathbb{P}_W(A)$, $A \in \mathcal{B}(\mathcal{W})$. Moreover, coherently with Subsection 5.1, we introduce the corresponding notation, and denote the expectation of a random variable defined on $\Omega_k$ [resp., $\mathcal{W}$]. We are now in a position to prove the next claim, which is the analogue of [10, Corollary 2.2]. Lemma 14. For every $x \in \mathbb{R}^N$ and $0 \le s < t \le T$, the identity (49) holds. Proof. Fix $0 \le s < t \le T$; combining the definition in (42) and the expression in (7), we get, by the law of total expectation, the identity (50) for every $u \in (s,t)$. The discussion preceding this lemma, together with the usual rules of change of probability space (see, e.g., [11, §X-2]) and the substitution formula in [5, Lemma 5], lets us apply the disintegration formula for the conditional expectation to get (51) from (48)-(50) and Remark 2. Since we aim to compute (43), for a generic $x \in \mathbb{R}^N$ we focus on the expression in (52), with the last equality obtained by the same argument as in (51). At this point, we combine (51) and (52), using Fubini's theorem. Recalling that (47) in Lemma 13 provides us with an expression for $k^{\ell_0}_{u,t}(Z^{\ell_1}_u(s,x))$, we can use the law of total expectation and reason backwards with the conditioning on $\mathcal{F}^L$ to conclude. Integrating the resulting expression over the interval $(s,t)$ with respect to $u$, we obtain (49), completing the proof.
The general term of the iteration scheme

Let $\alpha \in \left(\frac{1}{2},1\right)$. We want to analyze the general term $v^{n+1}_s(t,x) = \int_s^t R_{s,u} k^n_{u,t}(x)\, du$, $0 \le s < t \le T$, of the iteration (36) for an integer $n \ge 1$. We therefore search for an explicit expression of (53).

Deterministic time-shift
We continue the construction carried out in Subsection 5.1. Specifically, fix an integer $n \ge 1$ and $t \in (0,T]$; for every $i = 1, \dots, n$, every $(i+1)$-tuple $(s_{n-i+1}, s_{n-i+2}, \dots, s_{n+1})$ such that $0 \le s_{n-i+1} < s_{n-i+2} < \dots < s_{n+1} < t$ and every $\ell_0, \dots, \ell_i \in S$, we define the nested functions (see (46)). Note that, by the continuity and boundedness of $B$, an induction argument shows that all these functions are well defined and belong to $C_b(\mathbb{R}^N)$. Moreover, as in Remark 2, we observe that their value does not depend on the probability space on which the underlying OU processes are constructed. By (45) we have the corresponding representation, where $s_{n+2} = t$.
In the previous expression, we interpret the empty sum as $0$: we adopt this convention hereafter.
Proof. Fix $0 \le s < t \le T$ and an integer $n \ge 1$. We proceed by induction on $i$, observing that the base case $i = 0$ has been proven in (47), with $s_n = s$ and $s_{n+1} = u$.

Random time-shift
We argue by conditioning with respect to $\mathcal{F}^L$, as in Subsection 5.2. First, we present a result which generalizes (51) in the proof of Lemma 14.
Lemma 16. Consider 0 ≤ s < t ≤ T and an integer n ≥ 1. Then, for every i = 0, ..., n, s_1 ∈ (s, t) and y ∈ R^N,

In this expression, we ignore the time integrals when i = 0.
Proof. Take an integer n ≥ 1 and proceed by induction on i. For i = 0, there are no time integrals in (60), which then reduces to (51) with s_1 = u. Suppose that the statement holds true for i = m − 1, for some m = 1, ..., n: we want to prove its validity also for i = m. To do so, fix y ∈ R^N and s_1 ∈ (s, t); recalling the definition of k^m_{s_1,t} in (36), by Lemma 11 we can apply (7) to get
By the inductive hypothesis, we substitute the expression for k^{m−1}_{s_2,t}, s_2 ∈ (s_1, t), into the previous equality to obtain (ignoring the inner time integral when m = 1)

which we rewrite by Fubini's theorem (whose application is guaranteed by [9, Lemma 2.12], upon carrying out computations similar to those in the proof of [5, Theorem 6]; see also [2, Proposition 3.2]) as follows:

This provides us with (60) once we plug in the expression of k^{ℓ_0,...,ℓ_m}_{s_1,...,s_{m+1},t}(y) from (55). Thus, the proof is complete.

Numerical simulations
In this section we report on the results obtained by implementing the iterative scheme described above for two choices of the nonlinear vector field B_0. We interpret the SDE in (18) as a finite-dimensional approximation of the reaction-diffusion SPDE

dX(t, ξ) = (ΔX(t, ξ) + B_0(t, X(t, ξ))) dt + σ dW_{L_t},  t ≥ s,   X(s, ξ) = x(ξ),  ξ ∈ T^1,

where T^1 = R^1/Z^1 is the one-dimensional torus (we refer to [5, Example 1] for an accurate description of this framework). Hence we consider λ_k = |k|^2, k = 1, ..., N, and we take Q = σ^2 Id, where σ > 0 is a parameter describing the strength of the noise.

Before moving to the application of the model, we have to determine the time-shift function f ∈ C([0, T]; R^N) appearing in the OU process Z^{s,x}, x ∈ R^N (see (4)). Since we are dealing with a rotation-invariant noise and α ∈ (1/2, 1), we have E[W_{L_t}] = 0, t ≥ 0. As a consequence, the choice of f can be motivated as in [10, Introduction] for the Brownian case. In brief, we consider

where x(·): [0, T] → R^N is the unique solution of the integral equation

and x(t) = x, t ∈ [0, s]. Of course, x(·) is computed numerically. Note that (63) is the deterministic counterpart of the semilinear SDE (18), and that the expected value function of the OU process coincides with x(·) on the interval [s, T] by the choice of f. The intuition is that, at least when the noise is weak, the trajectories of the semilinear solutions are "close" to x(·), allowing the 0-th iterate to perform better than it would with f ≡ 0.
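The deterministic counterpart (63) can be integrated, for instance, with an exponential Euler step, which treats the stiff linear part A = −diag(1², ..., N²) exactly (a fully explicit Euler step would need dt < 2/N² to be stable, prohibitive for N = 100). A minimal sketch; the nonlinearity and grid sizes are illustrative choices, not the paper's exact setup:

```python
import numpy as np

def deterministic_flow(B0, x0, s, T, N, n_steps=1000):
    """Exponential Euler for the deterministic counterpart of (18):
        x'(t) = A x(t) + B0(t, x(t)),  x(s) = x0,  A = -diag(1**2, ..., N**2).
    The stiff linear part is integrated exactly; only B0 is frozen on each step."""
    lam = np.arange(1, N + 1, dtype=float) ** 2
    dt = (T - s) / n_steps
    decay = np.exp(-lam * dt)
    phi = (1.0 - decay) / lam          # int_0^dt exp(-lam * r) dr
    ts = s + dt * np.arange(n_steps + 1)
    xs = np.empty((n_steps + 1, N))
    xs[0] = x0
    for i in range(n_steps):
        xs[i + 1] = decay * xs[i] + phi * B0(ts[i], xs[i])
    return ts, xs

# illustrative bounded nonlinearity, reminiscent of the sine case (form assumed)
ts, xs = deterministic_flow(lambda t, x: np.sin(x), np.ones(100), 0.0, 1.0, 100)
```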
Figure 1 clearly displays this idea in the case of the (bounded) cubic nonlinearity treated below (see (64)). Furthermore, in the sequel we monitor the effect of the time-shift on the first order approximation provided by our scheme. All the simulations are carried out using the High Performance

The maps S, S^+ are smooth approximations of the maximum function and replace the infinity norm in (64), so that B_0 ∈ C^3_b(R^N; R^N), coherently with our theoretical framework. Therefore B_0 is to be interpreted as a cubic nonlinearity with a cutoff for large values of |x|_∞. For our experiments, we consider b_0 = 2, ȳ = 2e and a = 10^4. In Tables 3-4 we report the outcomes of simulations with and without f, respectively, when σ = 0.7, t = 1 and α varies in (1/2, 1). In particular, Table 3 shows that, with the time-shift, the first iterate always remarkably outperforms the linear approximation. On the contrary, when f ≡ 0 (Table 4), v^1_0(1, e) deteriorates the OU estimate, and we are forced to implement the second iterate to get an accuracy similar to the one provided by the time-shift (compare the columns 1_r, Table 3, and 2_r, Table 4). Of course, the trade-off in introducing v^2_0(1, e) is a substantial increase in computational time.

Finally, in Figure 3 we investigate the trajectories of P_{0,·} u_0(e) and of the first order approximation on the time interval [0, 1], as well as the corresponding absolute relative errors. Here we fix α = 0.6 and consider two noise strengths: σ = 0.1 and σ = 1.3. As already observed in the sine case, the advantages of introducing the first iterate are rather evident. Overall, we conclude that v^1_0(·, e) proves to be a versatile and computationally cheap method to improve on the performance of the linear approximation.
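The exact form of S, S^+ is not reproduced here, but a standard smooth approximation of the maximum with a sharpness parameter a is the log-sum-exp; together with a tanh-saturated cubic it illustrates, only schematically and not as the authors' construction, what "a cubic nonlinearity with a cutoff" can look like:

```python
import numpy as np

def smooth_max(y, a=1e4):
    """Log-sum-exp approximation of max_i y_i, exact as a -> infinity; the
    error is at most log(len(y)) / a. Illustrative stand-in for the smooth
    maximum S of the paper, whose exact form may differ."""
    y = np.asarray(y, dtype=float)
    m = y.max()  # shift for numerical stability
    return m + np.log(np.sum(np.exp(a * (y - m)))) / a

def bounded_cubic(x, b0=2.0, ybar=2.0):
    """Hypothetical bounded cubic: each coordinate is saturated at ybar with
    tanh before cubing, so the output is bounded by b0 * ybar**3. This only
    mimics a cubic with a cutoff; it is not the authors' S, S+ construction."""
    return -b0 * (ybar * np.tanh(np.asarray(x, dtype=float) / ybar)) ** 3

val = smooth_max(np.array([0.1, 2.0, -1.0]))
```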

Notation:
Let d, m, n ∈ N. In this paper, elements of R^d are column vectors. For any u, v ∈ R^d, we denote by |u| the Euclidean norm and by ⟨u, v⟩ = u^⊤ v the standard scalar product. For a matrix A ∈ R^{d×m}, |A| = sup_{x ∈ R^m: |x|=1} |Ax| is the operator norm. Given a vector field B: R^d → R^{m×n}, the uniform norm is ‖B‖_∞ = sup_{x ∈ R^d} |B(x)|. In particular, if n = 1 then the Jacobian matrix is denoted by DB ∈ R^{m×d}, and D_h B = DBh, h ∈ R^d; if also m = 1 (so that B is a scalar function) then the gradient ∇B is a row vector and D^2 B ∈ R^{d×d} represents the Hessian matrix. For an integer k ∈ N ∪ {0}, the space C^k_b(R^d; R^{m×n}) consists of the continuous vector fields B which are bounded and continuously differentiable up to order k with bounded derivatives. Taking h = 1, ..., k and B

Therefore, P_{s,t}φ ∈ C^1_b(R^N) for every φ ∈ C_b(R^N). At this point, the next result is a straightforward consequence of the Chapman-Kolmogorov equations, the mean value theorem and [7, Lemma 7.1.5].

Lemma 10. Let α ∈ (1/2, 1) and B_0 ∈ C^{0,1}_b([0, T] × R^N; R^N). Then, for every φ ∈ B_b(R^N) and 0 ≤ s < t ≤ T, one has P_{s,t}φ ∈ C^1_b(R^N), and the gradient estimate in (27) holds true. Finally, we are in a position to prove Theorem 3.
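As a quick numerical aside, the operator norm |A| defined above coincides with the largest singular value of A, which NumPy exposes as the spectral norm; a brute-force check of |A| = sup_{|x|=1} |Ax| over random unit vectors (matrix and sample sizes are arbitrary):

```python
import numpy as np

# The operator norm |A| = sup_{|x|=1} |Ax| equals the largest singular value
# of A; NumPy exposes it as the spectral norm np.linalg.norm(A, 2).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

op_norm = np.linalg.norm(A, 2)
sv_max = np.linalg.svd(A, compute_uv=False)[0]

# brute-force lower bound: evaluate |Ax| over many random unit vectors x
xs = rng.standard_normal((3, 100_000))
xs /= np.linalg.norm(xs, axis=0)
sup_est = np.linalg.norm(A @ xs, axis=0).max()
```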

Figure 2: Behavior in time of the first order approximation in the sine case with time-shift. In each row, the left panel shows the evolution of the probabilities and the right panel the corresponding errors. The top row refers to σ = 0.1, the bottom row to σ = 1.3; α = 0.6 throughout.

Figure 3: Behavior in time of the first order approximation in the bounded cubic case with time-shift. In each row, the left panel shows the evolution of the probabilities and the right panel the corresponding errors. The top row refers to σ = 0.1, the bottom row to σ = 1.3; α = 0.6 throughout.
To shorten the notation, in what follows we set n_i = n − i. Once again, motivated by (53), we want to find an explicit formula for the term k

Lemma 15. Consider 0 ≤ s < t ≤ T and an integer n ≥ 1. Then, for every x ∈ R^N, i = 0, ..., n, every (i+2)-tuple (s_{n_i}, s_{n_i+1}, ..., s_{n+1}) such that s ≤ s_{n_i} < s_{n_i+1} < ··· < s_{n+1} < t and every ℓ_0, ..., ℓ_{i+1} ∈ S, one has

Given the previous expression (with s_{n_i} = s and i = n), we just plug it into (61), apply the law of total expectation and reason backwards with the conditioning on F_L to deduce the next result (cf. [10, Theorem 2.3]).

Theorem 17. For every x ∈ R^N and 0 ≤ s < t ≤ T one has

Table 1: First order approximation in the sine case with time-shift; noise strength σ = 1.

Table 3: First order approximation in the bounded cubic case with time-shift; noise strength σ = 0.7.