Overcoming the curse of dimensionality in the approximative pricing of financial derivatives with default risks

Parabolic partial differential equations (PDEs) are widely used in the mathematical modeling of natural phenomena and man-made complex systems. In particular, parabolic PDEs are a fundamental tool for determining fair prices of financial derivatives in the financial industry. The PDEs appearing in financial engineering applications are often nonlinear and high-dimensional since the dimension typically corresponds to the number of considered financial assets. A major issue is that most approximation methods for nonlinear PDEs in the literature suffer from the so-called curse of dimensionality in the sense that the computational effort needed to compute an approximation with a prescribed accuracy grows exponentially in the dimension of the PDE or in the reciprocal of the prescribed approximation accuracy, and for nearly all approximation methods it has not been proven that they do not suffer from the curse of dimensionality. Recently, a new class of approximation schemes for semilinear parabolic PDEs, termed full history recursive multilevel Picard (MLP) algorithms, was introduced, and it was proven that MLP algorithms do overcome the curse of dimensionality for semilinear heat equations. In this paper we extend those findings to a more general class of semilinear PDEs, including as special cases the semilinear Black-Scholes equations used for the pricing of financial derivatives with default risks. More specifically, we introduce an MLP algorithm for the approximation of solutions of semilinear Black-Scholes equations and prove that the computational effort of our method grows at most polynomially both in the dimension and in the reciprocal of the prescribed approximation accuracy. This is, to the best of our knowledge, the first result showing that the approximation of solutions of semilinear Black-Scholes equations is a polynomially tractable approximation problem.

The PDEs appearing in financial engineering applications are often high-dimensional since the dimension corresponds to the number of financial assets (such as stocks, commodities, interest rates, or exchange rates) in the involved hedging portfolio. A major issue is that most approximation methods suffer from the so-called curse of dimensionality (see Bellman [5]) in the sense that the computational effort needed to compute an approximation with a prescribed accuracy ε > 0 grows exponentially in the dimension d ∈ N of the PDE or in the reciprocal 1/ε of the prescribed approximation accuracy (cf., e.g., E et al. [36, Section 4] for a discussion of the curse of dimensionality in the PDE approximation literature), and for nearly all approximation methods it has not been proven that they do not suffer from the curse of dimensionality. Recently, a new class of approximation schemes for semilinear parabolic PDEs, termed full history recursive multilevel Picard (MLP) algorithms, was introduced in E et al. [35, 36], and it was proven, under restrictive assumptions on the regularity of the solution of the PDE, that these algorithms overcome the curse of dimensionality for semilinear heat equations. Building on this work, [59] proposed an adaptation of the original MLP scheme in [35, 36] for semilinear heat equations. Under the assumption that the nonlinearity in the PDE is globally Lipschitz continuous, [59, Theorem 1.1] proves that the proposed scheme does indeed overcome the curse of dimensionality in the sense that the computational effort needed to compute an approximation with a prescribed accuracy ε > 0 grows at most polynomially in both the dimension d ∈ N of the PDE and the reciprocal 1/ε of the prescribed approximation accuracy.
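To make the contrast between these two cost regimes concrete, the following Python sketch compares a tensor-product grid cost of the form (1/ε)^d, which exhibits the curse of dimensionality, with a polynomially tractable cost bound of the form c · d^2 · ε^(-3). The exponents and constants here are hypothetical placeholders chosen for illustration only; they are not the constants of the cited results.

```python
# Illustrative cost counts: exponential vs. polynomial growth in the
# dimension d. The concrete exponents are hypothetical placeholders,
# chosen only to visualize the "curse of dimensionality".

def grid_cost(d, eps):
    """Tensor-product grid with (1/eps) points per coordinate."""
    return (1.0 / eps) ** d

def polynomial_cost(d, eps, c=1.0):
    """A cost bound of the form c * d^2 * eps^(-3) (placeholder exponents)."""
    return c * d**2 * eps ** (-3)

eps = 0.1
for d in (1, 5, 10, 100):
    # The grid cost explodes in d, the polynomial bound does not.
    print(d, grid_cost(d, eps), polynomial_cost(d, eps))
```

Already for d = 10 the grid requires 10^10 points at accuracy ε = 0.1, while the polynomial bound stays moderate even for d = 100.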
In this paper we generalize the MLP algorithm of [59]. The main result of this article, Theorem 3.20 below, proves that the MLP algorithm proposed in this paper overcomes the curse of dimensionality for a more general class of semilinear PDEs which includes as special cases the important examples of semilinear Black-Scholes equations used for the pricing of financial derivatives with default risks. In particular, we show for the first time that the solution of a semilinear Black-Scholes PDE with a globally Lipschitz continuous nonlinearity can be approximated with a computational effort which grows at most polynomially in both the dimension and the reciprocal of the prescribed approximation accuracy. Put differently, we show that the approximation of solutions of such semilinear Black-Scholes equations is a polynomially tractable approximation problem (cf., e.g., Novak & Wozniakowski [81]). To illustrate the main result of this paper, Theorem 3.20 below, we present in the following theorem, Theorem 1.1 below, a special case of Theorem 3.20. Theorem 1.1 demonstrates that the MLP algorithm proposed in this article overcomes the curse of dimensionality in the approximation of solutions of certain semilinear Black-Scholes equations.
Theorem 1.1. Let T ∈ (0, ∞), p, P, q ∈ [0, ∞), α, β ∈ R, Θ = ∪ ∞ n=1 Z n , let f : R → R be a Lipschitz continuous function, let ξ d ∈ R d , d ∈ N, and g d ∈ C 2 (R d , R), d ∈ N, satisfy that sup d∈N,x∈R d , d ∈ N, be polynomially growing functions which satisfy for all d ∈ N, t ∈ (0, T ), x = (x 1 , x 2 , . . .
, x d ) ∈ R d that u d (T, x) = g d (x) and let (Ω, F , P) be a probability space, let R θ : Ω → [0, 1], θ ∈ Θ, be independent U [0,1] -distributed random variables, let R θ = (R θ t ) t∈[0,T ] : [0, T ] × Ω → [0, T ], θ ∈ Θ, be the stochastic processes which satisfy for all t ∈ [0, T ], θ ∈ Θ that R θ t = t + (T − t)R θ , let W d,θ = (W d,θ,i ) i∈{1,2,...,d} : [0, T ] × Ω → R d , θ ∈ Θ, d ∈ N, be independent standard Brownian motions, assume that (W d,θ ) d∈N,θ∈Θ and (R θ ) θ∈Θ are independent, for every d ∈ N, θ ∈ Θ, t ∈ [0, T ], s ∈ [t, T ], x = (x 1 , x 2 , . . . , x d ) ∈ R d let X d,θ,x t,s = (X d,θ,x,i t,s ) i∈{1,2,...,d} : Ω → R d be the function which satisfies for all i ∈ {1, 2, . . . , d} that let V d,θ M,n : [0, T ] × R d × Ω → R, M, n ∈ Z, θ ∈ Θ, d ∈ N, be functions which satisfy for all d, M, n ∈ N, θ ∈ Θ, t ∈ [0, T ], x ∈ R d that V d,θ and for every d, n, M ∈ N, t ∈ [0, T ], x ∈ R d let C d,M,n ∈ N 0 be the number of realizations of standard normal random variables which are used to compute one realization of V d,0 M,n (t, x) (see (336) below for a precise definition). Then there exist functions N = (N d,ε ) d∈N,ε∈(0,1] : N × (0, 1] → N and C = (C δ ) δ∈(0,∞) : (0, ∞) → (0, ∞) such that for all d ∈ N, ε ∈ (0, 1], δ ∈ (0, ∞) it holds that C d,N d,ε ,N d,ε ≤ C δ d 1+(P+qp)(2+δ) ε −(2+δ) and (E[|u_d(0, ξ_d) − V^{d,0}_{N_{d,ε},N_{d,ε}}(0, ξ_d)|^2])^{1/2} ≤ ε.
Theorem 1.1 is an immediate consequence of Theorem 4.4 below. Theorem 4.4 in turn is a consequence of Theorem 3.20 below, the main result of this paper. We now provide some explanations for Theorem 1.1. In Theorem 1.1 we present a stochastic approximation scheme (cf. (V d,0 M,n ) M,n,d∈N in Theorem 1.1 above) which is able to approximate in the strong L 2 -sense the initial value u d (0, ξ d ) of the solution of an uncorrelated semilinear Black-Scholes equation (cf. (1) in Theorem 1.1 above) with a computational effort which grows at most polynomially in both the dimension d ∈ N and the reciprocal 1/ε of the prescribed approximation accuracy ε > 0.
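To give a concrete, non-authoritative impression of how a stochastic approximation scheme of the type (V d,0 M,n ) M,n,d∈N can be organized, the following Python sketch implements a one-dimensional MLP-style recursion for a toy semilinear Black-Scholes equation: exact simulation of geometric Brownian motion, uniform time sampling on [t, T], and multilevel Monte Carlo corrections with fresh independent samples in every summand. All parameter choices, and the precise form of the recursion, are illustrative assumptions and not the scheme of Theorem 1.1 itself.

```python
import math
import random

# Hedged sketch of a full history recursive multilevel Picard (MLP)
# recursion for a one-dimensional semilinear Black-Scholes equation
#   du/dt + alpha*x*u_x + (beta^2/2)*x^2*u_xx + f(u) = 0,  u(T, x) = g(x).
# Fresh independent samples are drawn in every summand, and the time
# variable is sampled uniformly on [t, T]; constants are illustrative.

def sde_sample(t, s, x, alpha, beta, rng):
    """Exact simulation of geometric Brownian motion from time t to s."""
    z = rng.gauss(0.0, 1.0)
    return x * math.exp((alpha - 0.5 * beta**2) * (s - t)
                        + beta * math.sqrt(s - t) * z)

def mlp(M, n, t, x, T, alpha, beta, f, g, rng):
    """One realization of the level-n MLP approximation V_{M,n}(t, x)."""
    if n == 0:
        return 0.0
    # Monte Carlo approximation of E[g(X_{t,T}^x)] with M^n samples.
    v = sum(g(sde_sample(t, T, x, alpha, beta, rng))
            for _ in range(M**n)) / M**n
    # Multilevel corrections of the nonlinear part.
    for l in range(n):
        mc = 0.0
        for _ in range(M**(n - l)):
            r = t + (T - t) * rng.random()      # time sample R ~ U[t, T]
            y = sde_sample(t, r, x, alpha, beta, rng)
            mc += f(mlp(M, l, r, y, T, alpha, beta, f, g, rng))
            if l > 0:
                mc -= f(mlp(M, l - 1, r, y, T, alpha, beta, f, g, rng))
        v += (T - t) * mc / M**(n - l)
    return v

rng = random.Random(0)
# Degenerate sanity check with beta = 0 and a constant nonlinearity, where
# the exact solution is u(0, x) = g(x * exp(alpha*T)) + 0.3*T.
approx = mlp(M=2, n=2, t=0.0, x=1.0, T=1.0, alpha=0.05, beta=0.0,
             f=lambda v: 0.3, g=lambda x: x, rng=rng)
print(approx)  # close to exp(0.05) + 0.3
```

In the degenerate test case the multilevel differences cancel exactly, so the sketch reproduces the exact value up to floating-point error, which makes it easy to sanity-check before experimenting with a genuine nonlinearity.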
The time horizon T ∈ (0, ∞), the drift parameter α ∈ R, the diffusion parameter β ∈ R, as well as the Lipschitz continuous nonlinearity f : R → R of the semilinear Black-Scholes equations in Theorem 1.1 above (cf. (1) in Theorem 1.1 above) are fixed over all dimensions (cf. Theorem 4.3 for a more general result with dimension-dependent drift and diffusion coefficients and dimension-dependent nonlinearities which may additionally depend on the time and the space variable). The approximation points (ξ d ) d∈N and the terminal conditions (g d ) d∈N of the PDE (1) in Theorem 1.1 above are both allowed to grow in a certain polynomial fashion determined by the constants p, P, q ∈ [0, ∞). The idea for the full history recursive multilevel Picard scheme (cf. (V d,θ M,n ) M,d∈N,n∈N 0 ,θ∈Θ in Theorem 1.1 above) is based on a reformulation of the semilinear PDE in (1) as a stochastic fixed point equation. For this we consider the independent solution fields (X d,θ ) d∈N,θ∈Θ of the stochastic differential equation (SDE) associated to the PDE in (1) and for every t ∈ [0, T ] we consider independent U [t,T ] -distributed random variables (R θ t ) θ∈Θ . As a consequence of the Feynman-Kac formula we obtain that (u d ) d∈N are the unique at most polynomially growing functions which satisfy for all d ∈ N, θ ∈ Θ, t ∈ [0, T ], x ∈ R d that u_d(t, x) = E[g_d(X^{d,θ,x}_{t,T})] + (T − t) E[f(u_d(R^θ_t, X^{d,θ,x}_{t,R^θ_t}))].

On a distributional flow property for stochastic differential equations (SDEs)

In our analysis of the proposed MLP algorithm in Section 3 below, we will make use of random fields which satisfy a certain flow-type condition (see (154) in Setting 3.1 below). The main intent of this section is to establish that solution processes of SDEs enjoy, under suitable conditions (see Lemma 2.19 below for details), this flow-type property. To rigorously prove this result we need a series of elementary and well-known results, presented in Subsections 2.1-2.7 below, many of which will be reused in Section 3.
This and (10) establish that for all n ∈ {0, 1, 2, . . . , N} it holds that The fact that for all x ∈ R it holds that (1 + x) ≤ exp(x) therefore ensures that for all n ∈ {0, 1, 2, . . . , N} it holds that The proof of Lemma 2.1 is thus completed.
Then it holds for all n ∈ N 0 ∩ [0, N] that be the Frobenius norm on R d×m , and let µ :
Proof of Lemma 2.3. Throughout this proof let σ i,j : Note that the chain rule, the fact that the function R d ∋ x → 1 + ‖x‖ 2 ∈ (0, ∞) is infinitely often differentiable, and the fact that for every p ∈ [2, ∞) the function (0, ∞) ∋ s → s p/2 ∈ (0, ∞) is infinitely often differentiable establish item (i). It thus remains to prove item (ii). For this, observe that the chain rule ensures that for all and This implies that for all t ∈ [0, T ], x = (x 1 , . . . , In addition, note that the Cauchy-Schwarz inequality assures that for all t ∈ [0, T ], This, (18), (23), and Young's inequality (with p = p/2, q = p/(p−2) = (p/2)/(p/2 − 1) for p ∈ (2, ∞) in the usual notation of Young's inequality) hence prove that for all t ∈ [0, T ], x ∈ R d , p ∈ (2, ∞) it holds that Moreover, note that (25) ensures that for all t ∈ [0, T ], x ∈ R d it holds that Combining this and (26) establishes item (ii). The proof of Lemma 2.3 is thus completed.
let (Ω, F , P, (F t ) t∈[0,T ] ) be a filtered probability space which satisfies the usual conditions, let W : [0, T ] × Ω → R m be a standard (Ω, F , P, (F t ) t∈[0,T ] )-Brownian motion, and let X : [0, T ] × Ω → R d be an (F t ) t∈[0,T ] -adapted stochastic process with continuous sample paths which satisfies that for all t ∈ [0, T ] it holds P-a.s. that Then it holds for all t ∈ [0, T ] that
Proof of Lemma 2.4. Throughout this proof assume w.l.o.g. that T > 0 and let V : Note that the fact that Observe that items (II)-(IV) and (28) show that for all t ∈ [0, T ], x ∈ R d it holds that Combining this with Itô's formula demonstrates that for all t ∈ [0, T ] it holds that Therefore, we obtain that for all t ∈ [0, T ] it holds that The proof of Lemma 2.4 is thus completed.
let (Ω, F , P, (F t ) t∈[0,T ] ) be a filtered probability space which satisfies the usual conditions, let W : [0, T ] × Ω → R m be a standard (Ω, F , P, (F t ) t∈[0,T ] )-Brownian motion, and let X : [0, T ] × Ω → R d be an (F t ) t∈[0,T ] -adapted stochastic process with continuous sample paths which satisfies that for all t ∈ [0, T ] it holds P-a.s. that Then it holds for all t ∈ [0, T ] that
Proof of Lemma 2.5. Throughout this proof assume w.l.o.g. that ρ 1 > 0 (cf. Lemma 2.4) and that T > 0 and let V : Note that the fact that V ∈ C 2 (R d , (0, ∞)) ensures that for all t ∈ [0, T ], x ∈ R d it holds that Observe that items (II)-(IV) and (35) assure that for all t ∈ [0, T ], x ∈ R d it holds that Combining this with Itô's formula demonstrates that for all t ∈ [0, T ] it holds that Therefore, we obtain that for all t ∈ [0, T ] it holds that The fact that for all a ∈ R it holds that e a − 1 ≤ ae a hence ensures that for all t ∈ [0, T ] it holds that The proof of Lemma 2.5 is thus completed.
let (Ω, F , P, (F t ) t∈[0,T ] ) be a filtered probability space which satisfies the usual conditions, let W : [0, T ] × Ω → R m be a standard (Ω, F , P, (F t ) t∈[0,T ] )-Brownian motion, and let X : [0, T ] × Ω → R d be an (F t ) t∈[0,T ] -adapted stochastic process with continuous sample paths which satisfies that for all t ∈ [0, T ] it holds P-a.s. that Then it holds for all p ∈ [0, ∞), t ∈ [0, T ] that
Proof of Lemma 2.6. Throughout this proof let (ρ and let V p : R d → (0, ∞), p ∈ [2, ∞), be the functions which satisfy for all p ∈ [2, ∞), x ∈ R d that Observe that Lemma 2.3 and (43) assure that for all t This, Jensen's inequality, and the fact that for all p ∈ [0, 2] it holds that 3^{p/2} ≤ p + 1 assure that Combining this with (49) implies (45). The proof of Lemma 2.6 is thus completed.

Temporal regularity properties for solutions of SDEs
Then it holds that adapted stochastic processes with continuous sample paths which satisfies that E[‖X 0 ‖ 2 ] < ∞ and which satisfies that for all t ∈ [0, T ] it holds P-a.s. that Then it holds that
Proof of Lemma 2.8. Throughout this proof let |||·||| : and let μ̃ : R d+1 → R d+1 and σ̃ : R d+1 → R (d+1)×m be the functions which satisfy for all y = (y 1 , y 2 , . . . , y d+1 ) ∈ R d+1 that μ̃(y) = (1, µ(min{max{y 1 , 0}, T }, (y 2 , . . . , y d+1 ))) ∈ R d+1 and (56) Observe that the hypothesis that µ and σ are globally Lipschitz continuous functions and the fact that R ∋ y → min{max{y, 0}, T } ∈ R is a globally Lipschitz continuous function assure that μ̃ and σ̃ are globally Lipschitz continuous functions. Moreover, note that it holds for all t ∈ [0, T ], This and (53) assure that for all t ∈ [0, T ] it holds P-a.s. that The fact that μ̃ and σ̃ are globally Lipschitz continuous functions and Lemma 2.7 (with d = d + 1, m = m, T = T , µ = μ̃, σ = σ̃, X = Y in the notation of Lemma 2.7) hence prove that Hence, we obtain that The proof of Lemma 2.8 is thus completed.
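The time-augmentation device used in this proof, rewriting a time-inhomogeneous SDE as an autonomous one for Y = (t, X) with a clamped time coordinate, can be illustrated with a short Euler sketch. The drift, diffusion, and parameters below are illustrative choices and not objects from the lemma.

```python
import math, random

# Sketch of the time augmentation from the proof of Lemma 2.8: the
# time-inhomogeneous SDE dX = mu(t, X) dt + sigma(t, X) dW is rewritten
# as an autonomous SDE for Y = (t, X), with the time coordinate clamped
# to [0, T]. All concrete functions here are illustrative.

T = 1.0

def mu(t, x):            # illustrative time-dependent drift
    return math.sin(t) - 0.5 * x

def sigma(t, x):         # illustrative time-dependent diffusion
    return 0.1 * (1.0 + t)

def clamp(t):
    return min(max(t, 0.0), T)

def mu_aug(y):           # autonomous drift for Y = (time, state)
    t, x = y
    return (1.0, mu(clamp(t), x))

def sigma_aug(y):        # autonomous diffusion: time carries no noise
    t, x = y
    return (0.0, sigma(clamp(t), x))

def euler(y0, n_steps, rng):
    h = T / n_steps
    y = y0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(h))
        a, b = mu_aug(y), sigma_aug(y)
        y = (y[0] + a[0] * h + b[0] * dw, y[1] + a[1] * h + b[1] * dw)
    return y

rng = random.Random(1)
t_final, x_final = euler((0.0, 1.0), 100, rng)
print(t_final)  # the augmented time coordinate ends at (approximately) T
```

Because the augmented time coordinate has drift 1 and no noise, it simply tracks physical time, so the autonomous formulation reproduces the original dynamics.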
The following very elementary and well-known result will be helpful in the proof of Lemma 2.10 below and will be repeatedly used throughout this paper.
Proof of Lemma 2.9. Note that Hölder's inequality demonstrates that The proof of Lemma 2.9 is thus completed.
Lemma 2.10 (Explicit temporal regularity for solutions of SDEs with deterministic initial values).
) be a filtered probability space which satisfies the usual conditions, let W : and let X : -adapted stochastic processes with continuous sample paths which satisfies that for all t ∈ [0, T ] it holds P-a.s. that Then it holds that
Proof of Lemma 2.10. Throughout this proof let ·, · : R d × R d → R be the Euclidean scalar product on R d and let C ∈ (0, ∞) be given by Note that (64) and the triangle inequality assure that for all t ∈ [0, T ], x ∈ R d it holds that This assures that for all t ∈ [0, T ], x ∈ R d it holds that In addition, note that (69) implies that for all t ∈ [0, T ], x ∈ R d it holds that Moreover, observe that (65), Lemma 2.9, Tonelli's theorem, and Itô's isometry demonstrate that The triangle inequality, (68), and (69) therefore ensure that for all t Furthermore, note that (70), (71), (65), and Lemma 2.6 (with d = d, m = m, T = T , C 1 = C, C 2 = C, ξ = ξ, µ = µ, σ = σ, X = X in the notation of Lemma 2.6) assure that for all t ∈ [0, T ] it holds that This, (73), the fact that C ≥ 1, the fact that for all x ∈ [0, ∞) it holds that max{x, 1 + x} ≤ e x , and the fact that for all x, y ∈ [0, ∞) it holds that This implies (66). The proof of Lemma 2.10 is thus completed.

Strong error estimates for Euler-Maruyama approximations
-adapted stochastic processes with continuous sample paths which satisfies that E[‖X 0 ‖ 2 ] < ∞ and which satisfies that for all t ∈ [0, T ] it holds P-a.s. that and let X : {0, 1, . . . , N}×Ω → R d be the stochastic process which satisfies for all n ∈ {1, 2, . . . , N} that Then it holds that
Proof of Proposition 2.11. Throughout this proof assume w.l.o.g. that t 0 < t 1 < t 2 < · · · < t N .
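The strong error controlled in this subsection can be observed numerically. The following sketch (with illustrative parameters) compares an Euler-Maruyama discretization of geometric Brownian motion with the exact solution driven by the same Brownian increments and reports a Monte Carlo estimate of the strong L2-error:

```python
import math, random

# Hedged numerical illustration of the strong Euler-Maruyama error for
# geometric Brownian motion dX = alpha*X dt + beta*X dW, where the exact
# solution X_T = x0*exp((alpha - beta^2/2)*T + beta*W_T) is available
# for comparison. All parameters are illustrative.

def strong_error(n_steps, n_paths, alpha=0.05, beta=0.2, T=1.0, x0=1.0,
                 seed=0):
    rng = random.Random(seed)
    h = T / n_steps
    err2 = 0.0
    for _ in range(n_paths):
        x, w = x0, 0.0
        for _ in range(n_steps):
            dw = rng.gauss(0.0, math.sqrt(h))
            x += alpha * x * h + beta * x * dw   # Euler-Maruyama step
            w += dw                              # accumulate the driving path
        exact = x0 * math.exp((alpha - 0.5 * beta**2) * T + beta * w)
        err2 += (x - exact) ** 2
    return math.sqrt(err2 / n_paths)   # estimate of E[|X_T^h - X_T|^2]^(1/2)

coarse = strong_error(n_steps=8, n_paths=2000)
fine = strong_error(n_steps=64, n_paths=2000)
print(coarse, fine)  # the error shrinks as the step size decreases
```

Refining the time grid visibly reduces the strong error, in line with the error estimates of this subsection.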

On identically distributed random variables
The next elementary and well-known result, Lemma 2.13 below, provides a sufficient condition for two random variables to have the same distribution.
Lemma 2.13. Let (Ω, F , P) be a probability space, let (E, d) be a metric space, let X, Y : Ω → E be random variables which satisfy for all globally bounded and Lipschitz continuous functions φ : E → R that E[φ(X)] = E[φ(Y )]. Then it holds that X and Y are identically distributed random variables.
Proof of Lemma 2.13. Throughout this proof for every n ∈ N let h n : [0, ∞) → [0, 1] be the function which satisfies for all r ∈ [0, ∞) that for every closed and non-empty set and for every n ∈ N and every closed and non-empty set A ⊆ E let f A,n : E → [0, 1] be the function which satisfies for all e ∈ E that Note that the triangle inequality assures that for all closed and non-empty sets A ⊆ E and all The fact that for all closed and non-empty sets A ⊆ E and all e ∈ E, ε ∈ (0, ∞) there exists a ∈ A such that d(e, a) ≤ D A (e) + ε hence assures that for all closed and non-empty sets A ⊆ E and all e 1 , e 2 ∈ E it holds that Moreover, note that for all n ∈ N, r 1 , r 2 ∈ [0, ∞) with r 1 ≤ r 2 it holds that Combining this with (105) establishes that for all closed and non-empty sets A ⊆ E and all n ∈ N, e 1 , e 2 ∈ E it holds that This demonstrates that for every closed and non-empty set A ⊆ E and every n ∈ N it holds that f A,n : E → [0, 1] is a globally bounded and Lipschitz continuous function. Next observe that the fact that for all closed and non-empty sets A ⊆ E and all e ∈ A it holds that D A (e) = 0 assures that for all closed and non-empty sets A ⊆ E and all n ∈ N, e ∈ A it holds that Moreover, note that the fact that for all closed and non-empty sets A ⊆ E and all e ∈ E \ A there exists n ∈ N such that D A (e) > 1/n and the fact that for all n ∈ N it holds that h n is a non-increasing function assure that for all closed and non-empty sets A ⊆ E and all e ∈ E \ A there exists n ∈ N such that for all m ∈ {n, n + 1, . . .} it holds that Combining this and (108) establishes that for all closed and non-empty sets A ⊆ E and all e ∈ E it holds that lim The theorem of dominated convergence, the fact that for all closed and non-empty sets A ⊆ E and all n ∈ N it holds that f A,n : E → [0, 1] is a globally bounded and Lipschitz continuous function, and (100) therefore imply that for all closed and non-empty sets A ⊆ E it holds that The proof of Lemma 2.13 is thus completed.
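The construction in this proof, approximating the indicator of a closed set A by the globally bounded and Lipschitz continuous functions f A,n built from the distance to A, can be made concrete in one dimension. The cutoff h_n(r) = max{0, 1 − nr} and the discrete distribution below are illustrative choices.

```python
# Illustration of the approximation used in the proof of Lemma 2.13: for
# a closed set A, the bounded Lipschitz functions f_{A,n}(e) = h_n(dist(e, A))
# decrease pointwise to the indicator of A as n grows. Here A = [0, 1] on
# the real line and h_n(r) = max(0, 1 - n*r), an illustrative cutoff.

def dist_to_interval(x, a=0.0, b=1.0):
    """Distance from x to the closed interval [a, b]."""
    return max(a - x, 0.0, x - b)

def f_A_n(x, n):
    """Bounded (by 1) and n-Lipschitz approximation of 1_{[0,1]}(x)."""
    return max(0.0, 1.0 - n * dist_to_interval(x))

# A discrete random variable X uniform on {-0.5, 0.5, 2.0}:
points = [-0.5, 0.5, 2.0]
for n in (1, 2, 10, 100):
    expectation = sum(f_A_n(x, n) for x in points) / len(points)
    print(n, expectation)
# For large n the expectation converges to P(X in [0,1]) = 1/3.
```

Since expectations of all such bounded Lipschitz functions determine the probabilities of closed sets in the limit, they determine the whole distribution, which is exactly the mechanism of the proof.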

On random evaluations of random fields
This subsection collects elementary and well-known results about random variables originating from evaluations of random fields at random indices.
Observe that the hypothesis that X : Ω → S is an F /S-measurable function assures that Ω ∋ ω → (X(ω), ω) ∈ S × Ω is an F /(S ⊗ F )-measurable function. Combining this with the fact that U : S × Ω → E is an (S ⊗ F )/E-measurable function demonstrates that Ω ∋ ω → U(X(ω), ω) ∈ E is an F /E-measurable function. The proof of Lemma 2.14 is thus completed.
A proof for the next two elementary and well-known results (see Lemma 2.15 and Lemma 2.16 below) can, e.g., be found in [59, Lemma 2.3 and Lemma 2.4].
Lemma 2.15. Let (Ω, F , P) be a probability space, let (S, δ) be a separable metric space, let U = (U(s)) s∈S : S × Ω → [0, ∞) be a continuous random field, let X : Ω → S be a random variable, and assume that U and X are independent. Then it holds that E[U(X)] = ∫ S E[U(s)] (P ◦ X −1 )(ds).
Lemma 2.16. Let (Ω, F , P) be a probability space, let (S, δ) be a separable metric space, let U = (U(s)) s∈S : S × Ω → R be a continuous random field, let X : Ω → S be a random variable, assume that U and X are independent, and assume that
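The content of results of this type can be checked exactly on a finite probability space: when the field U and the index X are independent, the expectation of the evaluation U(X) disintegrates into an average of E[U(s)] against the law of X. The field values and distributions below are illustrative.

```python
from itertools import product
from fractions import Fraction

# Exact finite-space illustration of evaluating an independent random
# field at a random index: with U and X independent, the expectation of
# the composition U(X) disintegrates as E[U(X)] = sum_s P(X=s) * E[U(s)].
# The concrete field and distributions are illustrative.

S = ["a", "b"]                                # index set
U_vals = {                                    # U(s, omega_U): two equally
    "a": [Fraction(1), Fraction(3)],          # likely field realizations
    "b": [Fraction(2), Fraction(6)],
}
X_dist = {"a": Fraction(1, 4), "b": Fraction(3, 4)}   # law of X

# Left-hand side: expectation over the product space (independence).
lhs = sum(
    X_dist[s] * Fraction(1, 2) * U_vals[s][j]
    for s, j in product(S, range(2))
)

# Right-hand side: average E[U(s)] against the law of X.
rhs = sum(X_dist[s] * sum(U_vals[s]) / 2 for s in S)

print(lhs, rhs)  # both equal 7/2
```

Exact rational arithmetic makes the two sides agree identically, mirroring the measure-theoretic identity rather than a Monte Carlo approximation of it.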

Brownian motions and right-continuous filtrations
The next result, Lemma 2.17 below, states that a Brownian motion with respect to a filtration is also a Brownian motion with respect to the smallest right-continuous filtration containing the original filtration (cf. (117) below). Then it holds that W is a standard (Ω, F , P,
Proof of Lemma 2.17. Throughout this proof let · : for every closed and non-empty set and for every n ∈ N and every closed and non-empty set A ⊆ R d let f A,n : R d → [0, 1] be the function which satisfies for all x ∈ R d that Observe that the fact that W has continuous sample paths, the fact that for all t ∈ [0, T ), s ∈ (t, T ], k ∈ N it holds that W s − W min{t+1/k,s} and H t are independent, Klenke [66, Theorem 5.4], and the theorem of dominated convergence assure that for all t ∈ [0, T ), s ∈ (t, T ], B ∈ H t and all globally bounded and continuous functions g : Next note that the fact that for all closed and non-empty sets A ⊆ R d and all x ∈ R d it holds that D A (x) = 0 ⇔ x ∈ A assures that for all closed and non-empty sets A ⊆ R d and all Moreover, note that the fact that for every n ∈ N it holds that h n : [0, ∞) → [0, 1] is a continuous function and the fact that for every closed and non-empty set is a continuous function assure that for every n ∈ N and every closed and non-empty set is a continuous function. Combining this, (121), (122), and the theorem of dominated convergence shows that for all t ∈ [0, T ), s ∈ (t, T ], B ∈ H t and all closed and non-empty sets A ⊆ R d it holds that This proves that for all t Combining this with the hypothesis that W is a Brownian motion, and the fact that W : ) be a filtered probability space which satisfies the usual conditions, let W : and let ξ :
Proof of Lemma 2.18. Throughout this proof assume w.l.o.g. that s > t, let (u N,r n ) n∈{0,1,2,...,N },N ∈N,r∈(t,s] ⊆ [t, s] satisfy for all N ∈ N, n ∈ {0, 1, 2, . . . , N}, r ∈ (t, s] that u N,r n = t + n(r−t)/N , for every N ∈ N, r ∈ (t, s] let X N,r = (X N,r n (x)) n∈{0,1,2,...,N },x∈R d : {0, 1, 2, . . .
, N} × R d × Ω → R d be the continuous random field which satisfies for all n ∈ {1, 2, . . . , N}, x ∈ R d that X N,r 0 (x) = x and Note that (124) s] in the notation of Lemma 2.10) assure that for all x ∈ R d , N ∈ N, r ∈ (t, s] it holds that · 1 + (1 + x ) exp 10 max{ µ(t, 0) , |||σ(t, 0)|||, L, 1} + LT 2 (T + 1)(L + 1) This ensures that for all r ∈ [t, s], x ∈ R d it holds that lim sup N →∞ E[ X r (x)−X N,r N (x) 2 ] = 0. This and the fact that for all r ∈ [t, s], x ∈ R d , N ∈ N it holds that X N,r Combining this with the fact that ξ : Ω → R d is an F t /B(R d )-measurable function and the fact that W : [0, T ] × Ω → R m is a standard (Ω, F , P, (F r ) r∈[0,T ] )-Brownian motion demonstrates for all r ∈ [t, s], N ∈ N it holds that (X r (x) − X N,r N (x)) x∈R d and ξ are independent. Lemma 2.15 and (128) hence assure that for all N ∈ N, r ∈ (t, s] it holds that Moreover, observe that (126)  ..,N } , (X n ) n∈{0,1,...,N } = (X N,r n (ξ)) n∈{0,1,...,N } for N ∈ N, r ∈ (t, s] in the notation of Corollary 2.12) demonstrates that for all N ∈ N, r ∈ (t, s] it holds that The triangle inequality and (129) hence show that for all r ∈ (t, s] it holds that Combining this with the fact that (X r (ξ)) r∈[t,s] and (Y r ) r∈[t,s] are continuous random fields demonstrates that This and (130) prove that for all r ∈ [t, s] it holds P-a.s. that The proof of Lemma 2.18 is thus completed.
Then it holds for all r, Furthermore, note that the hypothesis that µ and σ are globally Lipschitz continuous, (136), (138), (139), (140), and Corollary 2.12 demonstrate that there exists a real number C ∈ (0, ∞) which satisfies that for all N ∈ N it holds that This implies that Moreover, observe that the hypothesis that µ and σ are globally Lipschitz continuous implies that Lemma 2.6 therefore demonstrates that s,h (X 2 t,s (x))) h∈[s,r] , (r n ) n∈{0,1,...,N } = (v N n ) n∈{0,1,...,N } , (X n ) n∈{0,1,...,N } = (Z N n ) n∈{0,1,...,N } for N ∈ N in the notation of Corollary 2.12) hence demonstrate that there exists a real number K ∈ (0, ∞) which satisfies that for all N ∈ N it holds that This and (144) imply that lim sup Furthermore, observe that (138)-(141) assure that for all N ∈ N it holds that X N 2N and Z N N have the same distribution. This, (145), and (150) imply that for all globally bounded and Lipschitz continuous functions g : R d → R it holds that E g(X 1 s,r (X 2 t,s (x))) = lim Lemma 2.13 hence assures that X 1 s,r (X 2 t,s (x)) and X 1 t,r (x) are identically distributed. Combining this with (143) completes the proof of Lemma 2.19.
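For geometric Brownian motion the flow-type property of Lemma 2.19 can be verified in closed form, since log X_{t,r}(x) is Gaussian with explicitly known mean and variance, and composing independent solution fields over [t, s] and [s, r] adds independent Gaussian increments. The parameters below are illustrative.

```python
import math

# Exact check of the distributional flow property for geometric Brownian
# motion: composing independent solution fields over [t, s] and [s, r]
# yields the law of the solution over [t, r]. For GBM,
#   log X_{t,r}(x) ~ Normal(log x + (alpha - beta^2/2)(r - t), beta^2 (r - t)),
# so it suffices to compare the parameters of the lognormal laws.
# All parameters are illustrative.

alpha, beta, x = 0.05, 0.2, 1.3
t, s, r = 0.0, 0.4, 1.0

def lognormal_params(mean_log, t0, t1):
    """Mean and variance of log X_{t0,t1} started at a given log-position."""
    return (mean_log + (alpha - 0.5 * beta**2) * (t1 - t0),
            beta**2 * (t1 - t0))

# Composition: first evolve from t to s, then (independently) from s to r.
m1, v1 = lognormal_params(math.log(x), t, s)
m2, v2 = lognormal_params(m1, s, r)
composed = (m2, v1 + v2)        # independent Gaussian increments add

# Direct evolution from t to r.
direct = lognormal_params(math.log(x), t, r)

print(composed, direct)  # parameters agree up to rounding, so the laws agree
```

Because a lognormal law is determined by the mean and variance of its logarithm, equality of these two parameter pairs is exactly the identical-distribution statement of the lemma in this special case.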

Full history recursive multilevel Picard (MLP) approximation algorithms
In this section we present the proposed MLP scheme and perform a rigorous complexity analysis.

A priori bounds for solutions of stochastic fixed point equations
Hence, we obtain that Moreover, observe that (157), (158), and the hypothesis that for all t ∈ [0, T ] it holds that ǫ(t) ≤ α + β T t ǫ(r) dr assure that for all t ∈ [0, T ] it holds that Combining this and (159) Hence, we obtain that for all t ∈ [0, T ] it holds that This establishes items (i)-(ii). The proof of Lemma 3.2 is thus completed.
and assume that (cf. item (iv) in Lemma 3.6). Note that (155) and the triangle inequality ensure that for all t ∈ [0, T ] it holds that Jensen's inequality hence assures that for all t ∈ [0, T ] it holds that Furthermore, observe that (164), the fact that X 0 and X 1 are independent and continuous random fields, (154), and Lemma 2.15 demonstrate that for all t ∈ [0, T ] it holds that In addition, note that Minkowski's integral inequality (cf., e.g., Jentzen & Kloeden [61, Proposition 8 in Appendix A.1]), (164), the fact that X 0 and X 1 are independent and continuous random fields,

(154), and Lemma 2.15 imply that for all t ∈ [0, T ] it holds that Moreover, observe that (152) ensures that for all t This, (168), and the triangle inequality imply that for all t ∈ [0, T ] it holds that Furthermore, note that Lemma 2.9 assures that for all t ∈ [0, T ] it holds that ∫_t^T E[|f(r, X^0_{0,r}(ξ), 0)|] dr ≤ (T − t)^{1/2} ( ∫_t^T E[|f(r, X^0_{0,r}(ξ), 0)|^2] dr )^{1/2}. Combining this with (163), (166), (167), and (170) implies that for all t ∈ [0, T ] it holds that The hypothesis that

Properties of MLP approximations
In this subsection we establish in Lemma 3.6 below some elementary properties of the MLP approximations (cf. (156) in Setting 3.1 above) introduced in Setting 3.1 above. For this we need two elementary and well-known results on identically distributed random variables (see Lemma 3.4 and Lemma 3.5 below).
Lemma 3.4. Let d, N ∈ N, let (Ω, F , P) be a probability space, let X k : Ω → R d , k ∈ {1, 2, . . . , N}, be independent random variables, let Y k : Ω → R d , k ∈ {1, 2, . . . , N}, be independent random variables, and assume for every k ∈ {1, 2, . . . , N} that X k and Y k are identically distributed. Then it holds that ∑ N k=1 X k : Ω → R d and ∑ N k=1 Y k : Ω → R d are identically distributed random variables.
Proof of Lemma 3.4. Throughout this proof let X, Y : Ω → R N d be the random variables which satisfy that X = (X 1 , . . . , X N ) and Y = (Y 1 , . . . , Y N ). Observe that the hypothesis that (X k ) k∈{1,2,...,N } are independent, the hypothesis that (Y k ) k∈{1,2,...,N } are independent, and the hypothesis that for every k ∈ {1, 2, . . . , N} it holds that X k and Y k are identically distributed random variables assure that for all (B k ) k∈{1,2,...,N } ⊆ B(R d ) it holds that Hence, we obtain that for all B ∈ B(R d ) it holds that This shows that ∑ N k=1 X k : Ω → R d and ∑ N k=1 Y k : Ω → R d are identically distributed random variables. The proof of Lemma 3.4 is thus completed.
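Lemma 3.4 can be illustrated exactly on finite probability spaces: two pairs of independent variables, realized on different sample spaces but with identical marginal laws, have identically distributed sums. The concrete spaces and maps below are illustrative.

```python
from fractions import Fraction
from collections import defaultdict

# Exact illustration of Lemma 3.4 on finite probability spaces: the law
# of a sum of independent variables depends only on the marginal laws,
# not on the underlying sample space. The concrete spaces are illustrative.

def law_of_sum(omega, prob, components):
    """Push a finite space forward through omega -> sum of components."""
    out = defaultdict(Fraction)
    for w in omega:
        out[sum(c(w) for c in components)] += prob(w)
    return dict(out)

# (X_1, X_2) on Omega = {0,1} x {0,1,2} with the product uniform law:
omega_X = [(i, j) for i in range(2) for j in range(3)]
law_X = law_of_sum(omega_X, lambda w: Fraction(1, 6),
                   [lambda w: w[0],                 # X_1 ~ Unif{0,1}
                    lambda w: [0, 0, 2][w[1]]])     # X_2: P(0)=2/3, P(2)=1/3

# (Y_1, Y_2) on a different space Omega' = {0,...,11}, same marginals,
# still independent:
omega_Y = list(range(12))
law_Y = law_of_sum(omega_Y, lambda w: Fraction(1, 12),
                   [lambda w: w % 2,                # Y_1 ~ Unif{0,1}
                    lambda w: 2 * (w // 8)])        # Y_2: P(0)=2/3, P(2)=1/3
print(law_X == law_Y)  # True: the sums are identically distributed
```

Both sums carry the law P(0) = 1/3, P(1) = 1/3, P(2) = 1/6, P(3) = 1/6, even though the two pairs live on sample spaces of different sizes.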
Lemma 3.5. Let (Ω, F , P) be a probability space, let (S, δ) be a separable metric space, let (E, d) be a metric space, let U, V : S × Ω → E be continuous random fields, let X, Y : Ω → S be random variables, assume that U and X are independent, assume that V and Y are independent, assume for all s ∈ S that U(s) and V (s) are identically distributed, and assume that X and Y are identically distributed. Then it holds that U(X) = (U(X(ω), ω)) ω∈Ω : Ω → E and V (Y ) = (V (Y (ω), ω)) ω∈Ω : Ω → E are identically distributed random variables.
Proof of Lemma 3.5. First, note that Grohs et al. [3, Lemma 2.4], the fact that U and V are continuous random fields, and Lemma 2.14 ensure that U(X) and V (Y ) are random variables. Next observe that the hypothesis that U and X are independent, the hypothesis that V and Y are independent, the hypothesis that for all s ∈ S it holds that U(s) and V (s) are identically distributed, the hypothesis that X and Y are identically distributed, and Lemma 2.16 demonstrate that for all globally bounded and Lipschitz continuous functions g : E → R it holds that Combining this with Lemma 2.13 assures that U(X) and V (Y ) are identically distributed. The proof of Lemma 3.5 is thus completed.
are identically distributed random variables. Items (iii)-(iv), (156), and Lemma 3.4 therefore ensure that for all t ∈ [0, T ], x ∈ R d it holds that V θ M,n (t, x) : Ω → R, θ ∈ Θ, are identically distributed random variables. Induction thus establishes item (v). The proof of Lemma 3.6 is thus completed.
For the proofs of the statements in this subsection we need some elementary and well-known results (see Lemma 3.7, Lemma 3.10, and Lemma 3.14) which we state and prove where they are used.

Expectations of MLP approximations
Proof of Lemma 3.7. Throughout this proof assume w.l.o.g. that t < T . Observe that (153) implies that R θ t is U [t,T ] -distributed. Combining this with the fact that U 1 is continuous, the fact that U 1 and R θ t are independent, and Lemma 2.15 assures that In addition, note that the fact that R θ t is U [t,T ] -distributed, the fact that U 2 is continuous, the fact that U 2 and R θ t are independent, the hypothesis that Combining this with (184) establishes (183). The proof of Lemma 3.7 is thus completed.

Lemma 3.8 (Expectations of MLP approximations). Assume Setting 3.1 and assume for all
and
Proof of Lemma 3.8. Throughout this proof let M ∈ N, x ∈ R d . Observe that Lemma 3.7, items (i)-(ii) in Lemma 3.6, and the fact that for all n ∈ N it holds that V 0 M,n , X 0 , and R 0 are independent demonstrate that for all n ∈ N 0 , t ∈ [0, T ] it holds that Next we claim that for all n ∈ N 0 , t ∈ [0, T ], s ∈ [t, T ] it holds that We now prove (189) by induction on n ∈ N 0 . For the base case n = 0 observe that the hypothesis that V 0 M,0 = 0 and the hypothesis that for all t ∈ [0, T ] it holds that This establishes (189) in the case n = 0. For the induction step N 0 ∋ (n − 1) → n ∈ N let n ∈ N and assume that for all k ∈ N 0 ∩ [0, n), t ∈ [0, T ], s ∈ [t, T ] it holds that Note that (156) and the triangle inequality ensure that for all t ∈ [0, T ], s ∈ [t, T ] it holds that Furthermore, observe that (154), (155), and item (iv) in Lemma 3.6 assure that for all m ∈ Z, Moreover, note that Lemma 3.7, the hypothesis that (X θ ) θ∈Θ are independent, the hypothesis that (R θ ) θ∈Θ are independent, the hypothesis that (X θ ) θ∈Θ and (R θ ) θ∈Θ are independent, items (i)-(ii) & (iv)-(v) in Lemma 3.6, (154), and Lemma 2.15 demonstrate that for all i, j, l, Combining this with (191), (192), and (193) establishes that for all t ∈ [0, T ], s ∈ [t, T ] it holds that Hence, we obtain that for all t ∈ [0, T ] it holds that The hypothesis that for all t ∈ [0, T ] it holds that ∫_t^T E[|f(r, X^0_{t,r}(x), 0)|] dr < ∞ and the fact that Induction thus proves (189). Combining (188) and (189) establishes item (i).
Next observe that (156), (189), items (i)-(ii) & (iv)-(v) in Lemma 3.6, the hypothesis that (X θ ) θ∈Θ are independent, the hypothesis that (R θ ) θ∈Θ are independent, the hypothesis that (X θ ) θ∈Θ and (R θ ) θ∈Θ are independent, and Lemma 3.5 ensure that for all n ∈ N, t ∈ [0, T ] it holds that Lemma 3.7, items (i)-(ii) in Lemma 3.6, the fact that for all n ∈ N 0 it holds that V 0 M,n , X 0 , and R 0 are independent, (189), and Fubini's theorem therefore imply that for all n ∈ N, t ∈ [0, T ] it holds that This establishes item (ii). The proof of Lemma 3.8 is thus completed.

Biases of MLP approximations
Lemma 3.9 (Biases of MLP approximations). Assume Setting 3.1 and assume for all t ∈ [0, T ],
Proof of Lemma 3.9. Note that Lemma 3.8, the hypothesis that for all t ∈ [0, T ], x ∈ R d it holds that ∫_t^T E[|f(r, X^0_{t,r}(x), 0)|] dr < ∞, (152), (155), and Tonelli's theorem demonstrate that for all Lemma 2.9 and Jensen's inequality hence show that for all M, n ∈ N, t ∈ [0, T ], x ∈ R d it holds that The proof of Lemma 3.9 is thus completed.
Proof of Lemma 3.11. Throughout this proof let M, n ∈ N, t ∈ [0, T ], x ∈ R d . Observe that Lemma 3.10, item (i) in Lemma 3.8, the fact that for all θ ∈ Θ it holds that E[|g(X 0 t,T (x))|] < ∞, item (iii) in Lemma 3.6, and (156) imply that Moreover, note that item (iv) in Lemma 3.6 and the fact that for all Z ∈ L 1 (P, R) it holds that In addition, note that items (i)-(ii) & (iv)-(v) in Lemma 3.6, the hypothesis that (X θ ) θ∈Θ are independent, the hypothesis that (R θ ) θ∈Θ are independent, the hypothesis that (X θ ) θ∈Θ and (R θ ) θ∈Θ are independent, the fact that for all Z ∈ L 1 (P, R) it holds that Var(Z) ≤ E[|Z| 2 ], and Lemma 3.5 show that for all k ∈ N 0 ∩ [0, n) it holds that
Proof of Corollary 3.13. Throughout this proof let M, n ∈ N, and The proof of Corollary 3.13 is thus completed.
Proof of Lemma 3.14. Observe that Tonelli's theorem assures that The proof of Lemma 3.14 is thus completed.

Complexity analysis for MLP approximation algorithms
In this subsection we consider the computational effort of the MLP scheme (cf. (156) in Setting 3.1 above) introduced in Setting 3.1 and combine it with the L 2 -error estimate in Corollary 3.16 to obtain a complexity analysis for the MLP scheme in Proposition 3.18 below. In Lemma 3.17 we think of C M,n , M, n ∈ N, as the number of realizations of one-dimensional random variables needed to simulate one realization of V θ M,n (t, x) for any θ ∈ Θ, t ∈ [0, T ], x ∈ R d . The recursive inequality (246) in Lemma 3.17 is based on (156) and on the assumption that the number of realizations of one-dimensional random variables needed to simulate X θ t,r (x) for any θ ∈ Θ, t ∈ [0, T ], r ∈ [t, T ], x ∈ R d is bounded by αd.

(246)
Then it holds for all M, n ∈ N that C M,n ≤ αd (5M) n .
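The conclusion C M,n ≤ αd(5M)^n can be sanity-checked numerically against a cost accounting of the shape described above: each summand requires one uniform time sample and one simulated SDE path (αd standard normals) plus realizations of lower-level approximations. The recursion below is an illustrative reconstruction of such an accounting, not the precise inequality (246).

```python
from functools import lru_cache

# Hedged sketch of a cost count of the kind behind Lemma 3.17. The exact
# recursion is (246) in the text; the accounting below (alpha*d normals
# per SDE path, one uniform sample per summand, recursive calls to the
# lower levels) is an illustrative reconstruction.

def cost(M, n, alpha_d):
    """Plausible recursive cost count for one realization of V_{M,n}."""
    @lru_cache(maxsize=None)
    def c(k):
        if k <= 0:
            return 0
        total = M**k * alpha_d                      # paths for the g-term
        for l in range(k):
            # each of the M^(k-l) summands needs one time sample, one SDE
            # path, and realizations of V_{M,l} (and V_{M,l-1} if l > 0)
            per = alpha_d + 1 + c(l) + (alpha_d + c(l - 1) if l > 0 else 0)
            total += M**(k - l) * per
        return total
    return c(n)

for M in (1, 2, 4):
    for n in (1, 2, 3, 4):
        for alpha_d in (1, 10, 100):
            assert cost(M, n, alpha_d) <= alpha_d * (5 * M)**n
print("bound alpha*d*(5M)^n holds for all tested parameters")
```

Even though the raw count grows rapidly in n, it stays far below the stated bound for all tested parameter combinations, which is consistent with the lemma's conclusion.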

MLP approximations for semilinear partial differential equations (PDEs)
Thanks to an equivalence between semilinear Kolmogorov PDEs and stochastic fixed point equations, we can carry over the complexity analysis of Subsection 3.5 for the approximation of solutions of stochastic fixed point equations to our proposed MLP scheme for the approximation of solutions of semilinear Kolmogorov PDEs (cf. (275) in Subsection 3.6.1 below), resulting in Proposition 3.19. Considering this complexity analysis over variable dimensions shows that our proposed MLP algorithm overcomes the curse of dimensionality in the approximation of solutions of certain semilinear Kolmogorov PDEs (see Theorem 3.20 in Subsection 3.6.2 below, the main result of this paper, for details).