A Generalized Itô's Formula in Two-Dimensions and Stochastic Lebesgue-Stieltjes Integrals

In this paper, a generalized Itô's formula for continuous functions of two-dimensional continuous semimartingales is proved. The formula uses the local time of each coordinate process of the semimartingale, the left space first derivatives $\nabla_1^- f$, $\nabla_2^- f$, the second-order derivative $\nabla_1^-\nabla_2^- f$, and stochastic Lebesgue-Stieltjes integrals of two parameters. The second-order derivative $\nabla_1^-\nabla_2^- f$ is only assumed to be of locally bounded variation in certain variables. Integration by parts formulae are established for the integrals of local times. The two-parameter integral is defined as a natural generalization of both the Itô integral and the Lebesgue-Stieltjes integral through a type of Itô isometry formula.


Introduction
The classical Itô's formula for twice differentiable functions has been extended to less smooth functions by many mathematicians. Progress has been made mainly in one dimension, beginning with Tanaka's pioneering work [30] for $|X_t|$, to which the local time was beautifully linked. Further extensions were made to a time independent convex function $f(x)$ in [21] and [32] as the following Tanaka-Meyer formula:
$$f(X_t) = f(X_0) + \int_0^t f'_-(X_s)\,dX_s + \frac{1}{2}\int_{-\infty}^{\infty} L_t(x)\,d f'_-(x), \qquad (1)$$
where the left derivative $f'_-$ exists and is increasing due to the convexity assumption. This can be generalized easily to include the case when $f'_-$ is of bounded variation, where the integral $\int_{-\infty}^{\infty} L_t(x)\,d f'_-(x)$ is a Lebesgue-Stieltjes integral. The extension to the time dependent case was given in [7]. Recently we proved in [9] that $L_t(x)$ is of finite $p$-variation (in the classical sense of Young and Lyons) for any $p > 2$. This new result leads to the construction of $\int_{-\infty}^{\infty} L_t(x)\,d f'_-(x)$ as a Young integral, so the Tanaka-Meyer formula still holds when $f'_-$ is of finite $q$-variation for a constant $1 \le q < 2$. Moreover, in [10] we extended the above to the case when $2 \le q < 3$ using Lyons' rough path integration theory.
The purpose of this paper is to extend formula (1) to two dimensions. This is a nontrivial extension, as the local time in two dimensions does not exist. But formally, by using the occupation times formula (see (4)), the property that $\int_0^\infty 1_{\mathbb{R}\setminus\{a\}}(X_1(s,\omega))\,d_sL_1(s,a,\omega) = 0$ a.s., and the "formal integration by parts formula", we observe that, for a smooth function $f$, a two-dimensional analogue of (1) holds. Here the last step needs to be justified, and the final integral needs to be properly defined. It is worth noting that the right hand side does not include any second order derivative of $f$ explicitly. Here $\nabla_1 f(a, X_2(s))$ is a semimartingale for any fixed $a$, following the Tanaka-Meyer formula.
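The displayed formal computation did not survive extraction. A sketch of what it plausibly contains, based on the ingredients just listed (the occupation times formula, the vanishing of $d_sL_1(s,a)$ off the level $a$, and formal integration by parts), is the following; this is an editorial reconstruction, not a verbatim restoration of the original display:

```latex
\frac{1}{2}\int_0^t \Delta_1 f\big(X_1(s),X_2(s)\big)\,d\langle X_1\rangle_s
  \;=\; \frac{1}{2}\int_{-\infty}^{\infty}\!\!\int_0^t \Delta_1 f\big(a,X_2(s)\big)\,d_sL_1(s,a)\,da
  \;=\; -\frac{1}{2}\int_0^t\!\!\int_{-\infty}^{\infty} L_1(s,a)\,d_{s,a}\nabla_1 f\big(a,X_2(s)\big).
```

Arguing symmetrically in the $x_2$ direction, the classical Itô formula for smooth $f$ can then be rewritten with only first-order derivatives of $f$ and two-parameter integrals of the local times on the right hand side.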
We study this kind of integral, $\int_{-\infty}^{+\infty}\int_0^t g(s,a)\,d_{s,a}h(s,a)$, in Section 2. Here $h(s,x)$ is a continuous martingale with cross variation $\langle h(\cdot,a), h(\cdot,b)\rangle_s$ of locally bounded variation in $(s,a,b)$, and $E\int_0^t\int_{\mathbb{R}^2}|g(s,a)g(s,b)|\,|d_{a,b,s}\langle h(\cdot,a), h(\cdot,b)\rangle_s| < \infty$. The integral is different from both the Lebesgue-Stieltjes integral and Itô's stochastic integral. But it is a natural extension to the two-parameter stochastic case and is therefore called a stochastic Lebesgue-Stieltjes integral. To our knowledge, this integral is new. It differs from integration with respect to the Brownian sheet defined by Walsh ([31]) and from integration with respect to a Poisson random measure (see [15]). A generalized Itô's formula in two dimensions is proved in Section 3. Moreover, we also prove the integration by parts formula for the stochastic Lebesgue-Stieltjes integrals involving local times (Theorems 3.2 and 3.3). It is noted that Peskir recently gave a generalized Itô's formula in multi-dimensions using local times on surfaces, where the first order derivative might be discontinuous, under the condition that the second derivative has a limit from both sides of the surfaces ([24]). Our formula does not need the condition on the existence of limits of second order derivatives when $x$ goes to the surface. There are numerous examples for which the classical Itô's formula and Peskir's formula may not work immediately, but our formula can be used (see Examples 3.1 and 3.2).
Applications, e.g. in the study of the asymptotics of the solutions of heat equations with caustics in two dimensions, are not included in this paper. These results will be published in future work.
Other kinds of relevant results include work for absolutely continuous functions with the first derivative being locally bounded in [26], and for $W^{1,2}_{loc}$ functions of a Brownian motion in one dimension in [12] and in multi-dimensions in [13]. It was proved in [12] that $f(B_t) = f(B_0) + \int_0^t f'(B_s)\,dB_s + \frac{1}{2}[f'(B), B]_t$, where $[f'(B), B]_t$ is the covariation of the processes $f'(B)$ and $B$, and is equal to the difference of backward and forward integrals of $f'(B)$ with respect to $B$. See [29] for the case of a continuous semimartingale. The multi-dimensional case was considered in [13], [29] and [22]. An integral against the local time of the semimartingale $X_t$ also appears there. This work was extended further to define the local time space integral for a time dependent function $f(s,x)$ using forward and backward integrals for Brownian motion in [5], and for semimartingales other than Brownian motion in [6]. This integral was also defined in [27] as a stochastic integral with excursion fields, and in [14] through Itô's formula without assuming the reversibility of the semimartingale, which was required in [5]. Other relevant references include [11], where it was also proved, using backward and forward integrals ([19]), that if $X$ is a one-dimensional Brownian motion, then $f(X(t))$ is a semimartingale if and only if $f \in W^{1,2}_{loc}$ and its weak derivative is of bounded variation. But our results are new.

The definition of stochastic Lebesgue-Stieltjes integrals and the integration by parts formula
For a filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\ge 0}, P)$, denote by $\mathcal{M}^2$ the Hilbert space of all processes $X = (X_t)_{0\le t\le T}$ such that $(X_t)_{0\le t\le T}$ is an $(\mathcal{F}_t)_{0\le t\le T}$ right continuous square integrable martingale, with inner product $(X,Y) = E(X_TY_T)$. A three-variable function $f(s,x,y)$ is called left continuous iff it is left continuous in all three variables together, i.e. for any sequence $(s_n,x_n,y_n)$ increasing to $(s,x,y)$, $\lim_{n\to\infty} f(s_n,x_n,y_n) = f(s,x,y)$. Define $\mathcal{V}_1$ to be the class of $h(s,x,\omega)$ that are measurable and $\mathcal{F}_s$-adapted for any $x \in \mathbb{R}$, and $\mathcal{V}_2$ the class of $h \in \mathcal{V}_1$ such that $h$ is a continuous (in $s$) $\mathcal{M}^2$-martingale for each $x$, and the cross-variation $\langle h(\cdot,x), h(\cdot,y)\rangle_s$ is left continuous and of locally bounded variation in $(s,x,y)$.
We now recall some classical results (see [1] and [20]). A three-variable function $f(s,x,y)$ is called monotonically increasing if, whenever $(s_2,x_2,y_2) \ge (s_1,x_1,y_1)$,
$$f(s_2,x_2,y_2) - f(s_1,x_2,y_2) - f(s_2,x_1,y_2) - f(s_2,x_2,y_1) + f(s_1,x_1,y_2) + f(s_1,x_2,y_1) + f(s_2,x_1,y_1) - f(s_1,x_1,y_1) \;\ge\; 0.$$
For a left-continuous and monotonically increasing function $f(s,x,y)$, one can define a Lebesgue-Stieltjes measure by assigning the increment above to the box $[s_1,s_2)\times[x_1,x_2)\times[y_1,y_2)$. Note that, since $\langle h(\cdot,x), h(\cdot,y)\rangle_s$ is left continuous and of locally bounded variation in $(s,x,y)$, it can be decomposed into the difference of two increasing and left continuous functions $f_1(s,x,y)$ and $f_2(s,x,y)$ (see McShane [20] or Proposition 2.2 in Elworthy, Truman and Zhao [7], which also holds for multi-parameter functions). Each of $f_1$ and $f_2$ generates a measure, so for any measurable function $g(s,x,y)$ we can define
$$\int g\;d_{x,y,s}\langle h(\cdot,x), h(\cdot,y)\rangle_s := \int g\;d_{x,y,s}f_1(s,x,y) - \int g\;d_{x,y,s}f_2(s,x,y).$$
In particular, a signed product measure on the space $[0,T]\times\mathbb{R}^2$ can be defined in this way. Define $|d_{x,y,s}\langle h(\cdot,x), h(\cdot,y)\rangle_s| := d_{x,y,s}f_1(s,x,y) + d_{x,y,s}f_2(s,x,y)$.
Moreover, for $h \in \mathcal{V}_2$, define $\mathcal{V}_3(h)$ to be the class of $g \in \mathcal{V}_1$ such that there exists $N$ with $(-N,N)$ covering the compact support of $g(s,\cdot,\omega)$ for a.a. $\omega$ and $s \in [0,T]$, $g$ has a compact support in $x$ for a.a. $\omega$, and $E\int_0^T\int_{\mathbb{R}^2}|g(s,x)g(s,y)|\,|d_{x,y,s}\langle h(\cdot,x), h(\cdot,y)\rangle_s| < \infty$. Consider now a simple function in $\mathcal{V}_3(h)$,
$$g(s,x,\omega) = \sum_{j}\sum_{i=0}^{n} e_{j,i}(\omega)\,1_{(t_j,t_{j+1}]}(s)\,1_{(x_i,x_{i+1}]}(x),$$
where $\{t_m\}_{m=0}^{\infty}$ with $t_0 = 0$ and $\lim_{m\to\infty} t_m = T$, $-N = x_0 < x_1 < \cdots < x_n = N$, and each $e_{j,i}$ is $\mathcal{F}_{t_j}$-measurable and bounded; we always assume that, for any $s > 0$, $g(s,-N) = g(s,N) = 0$, i.e. $e_{j,0} = e_{j,n} = 0$. For $h \in \mathcal{V}_2$, define the integral as
$$I_t(g) := \sum_{j,i} e_{j,i}\Big(h(t_{j+1}\wedge t, x_{i+1}) - h(t_j\wedge t, x_{i+1}) - h(t_{j+1}\wedge t, x_i) + h(t_j\wedge t, x_i)\Big).$$
This integral is called the stochastic Lebesgue-Stieltjes integral of the simple function $g$. It is easy to see, for simple functions $g_1, g_2 \in \mathcal{V}_3(h)$, that $I_t(\alpha g_1 + \beta g_2) = \alpha I_t(g_1) + \beta I_t(g_2)$ for any $\alpha, \beta \in \mathbb{R}$. The following lemma plays a key role in extending the integral from simple functions to functions in $\mathcal{V}_3(h)$. It is the analogue of the Itô isometry formula in the case of the stochastic integral.
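The statement of the isometry lemma itself is missing above. Judging from how identity (6) is used in the sequel, it presumably asserts the following, for simple $g \in \mathcal{V}_3(h)$ (an inferred statement, not a verbatim quotation of the original lemma):

```latex
E\Big(\int_{-\infty}^{\infty}\!\!\int_0^t g(s,x)\,d_{s,x}h(s,x)\Big)^{2}
 \;=\; E\int_0^t\!\!\int_{\mathbb{R}^2} g(s,x)\,g(s,y)\;d_{x,y,s}\langle h(\cdot,x),\,h(\cdot,y)\rangle_s .
```

This is the exact analogue of the Itô isometry $E\big(\int_0^t g\,dW\big)^2 = E\int_0^t g^2\,ds$, with the signed product measure generated by the cross variation replacing Lebesgue measure.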
Proof: From the definition of $I_t$, it is easy to see that $I_t$ is a continuous martingale with respect to $(\mathcal{F}_t)_{0\le t\le T}$. As $h(s,x,\omega)$ is a continuous martingale in $\mathcal{M}^2$, using a standard conditional expectation argument to remove the cross product terms, we get:

So the desired result is proved. ⋄
The idea now is to use (6) to extend the definition of the integral from simple functions to functions in $\mathcal{V}_3(h)$, and finally in $\mathcal{V}_4(h)$, for any $h \in \mathcal{V}_2$. We achieve this goal in several steps. Lemma 2.2. Let $h \in \mathcal{V}_2$, and let $k(s,x,\omega)$ be bounded uniformly in $\omega$ and, for each $\omega$, continuous on its compact support. Then there exists a sequence of bounded simple functions $\varphi_{m,n} \in \mathcal{V}_3(h)$ approximating $k$ such that $\|I(\varphi_{m,n}) - I(\varphi_{m',n'})\| \to 0$ as $m, n, m', n' \to \infty$.
Proof: Define the smoothed functions $f_n$ from $k$. Then $f_n(s,x)$ is continuous in $(s,x)$ and, as $n \to \infty$, $f_n(s,x) \to k(s,x)$ a.s. So, for sufficiently large $n$, $f_n(s,x)$ also has compact support in $(-N,N)$ for all $s \in [0,T]$. The desired convergence follows from Lebesgue's dominated convergence theorem. ⋄ Lemma 2.4. Let $h \in \mathcal{V}_2$ and $g \in \mathcal{V}_3(h)$. Then there exist functions $k_n \in \mathcal{V}_3(h)$, bounded uniformly in $\omega$ for each $n$, such that the corresponding integrals converge as $n, n' \to \infty$.

Proof:
Define the truncations $g_N$ of $g$. Then $|g_N| \le |g|$ and $g_N \to g$ a.s. as $N \to \infty$. So, applying Lebesgue's dominated convergence theorem, we obtain the desired result. ⋄ Combining these lemmas, for $g \in \mathcal{V}_3(h)$ there exist bounded simple functions $\varphi_{m,n}$ whose integrals converge as $m, n, m', n' \to \infty$. For $\varphi_{m,n}$ and $\varphi_{m',n'}$, we can define stochastic Lebesgue-Stieltjes integrals $I_t(\varphi_{m,n})$ and $I_t(\varphi_{m',n'})$. From Lemma 2.1 and (5), it is easy to see that $\{I(\varphi_{m,n})\}$ is a Cauchy sequence in $\mathcal{M}^2$, whose norm is denoted by $\|\cdot\|$. So there exists a process $I(g) = \{I_t(g), 0 \le t \le T\}$ in $\mathcal{M}^2$, defined modulo indistinguishability, such that $\|I(\varphi_{m,n}) - I(g)\| \to 0$ as $m, n \to \infty$.
By the same argument as for the stochastic integral, one can easily prove that $I(g)$ is well-defined (independent of the choice of the simple functions) and that (6) holds for $I(g)$. We can now state the following definition.
Here $\mathcal{V}_4(h)$ consists of those $g$ for which such approximating simple functions exist, with the corresponding convergence as $m, n, m', n' \to \infty$. Note that $\varphi_{m,n}$ may be constructed by combining the three approximation procedures in Lemmas 2.4, 2.3 and 2.2. For $g \in \mathcal{V}_4(h)$, we can then define the integral in $\mathcal{M}^2$ as $I_t(g) := \lim_{m,n\to\infty} I_t(\varphi_{m,n})$. It is a continuous martingale with respect to $(\mathcal{F}_t)_{0\le t\le T}$ and, for each $0 \le t \le T$, the isometry (6) holds. The following results will be useful in the proof of our main theorem in the next section.
Moreover, the following identities hold for any $g \in \mathcal{V}_4(h)$ and $h \in \mathcal{V}_2$ with $g$ being $C^1$ in $x$. Proof: If $g$ is a simple function in $\mathcal{V}_3(h)$ as given in (3), then, noting that $e_{j,0} = e_{j,n} = 0$, we have a summation by parts identity. To prove (12), first notice the decomposition of the difference into two terms; second, by the intermediate value theorem again, and from the assumption that $\Delta g(s,x)$ is bounded uniformly in $s$, the second term can be estimated. So (12) is proved.
To prove (13), first consider $g \in \mathcal{V}_3(h)$ sufficiently smooth jointly in $(s,x)$. Then (12) and the integration by parts formula give one representation; on the other hand, by the integration by parts formula and Fubini's theorem, another follows. By (14), (15) and the integration by parts formula, (13) follows for sufficiently smooth $g$. But any bounded function $g \in \mathcal{V}_3(h)$ can be approximated by a sequence of smooth functions $g_n \in \mathcal{V}_3(h)$. The desired result for $g \in \mathcal{V}_3(h)$ follows from (11) by letting $n \to \infty$. From Lemmas 2.4 and 2.5, we obtain that (12) and (13) also hold for $g \in \mathcal{V}_4(h)$.
for each $t$ and $a \in \mathbb{R}$. Then it is well known that, for each fixed $a \in \mathbb{R}$, $L_i(t,a,\omega)$ is continuous and increasing in $t$, and right continuous with left limits (càdlàg) with respect to $a$ ([16], [26]). Therefore we can define a Lebesgue-Stieltjes integral $\int_0^\infty \varphi(s)\,d_sL_i(s,a,\omega)$ for each $a$, for any Borel-measurable function $\varphi$. Furthermore, if $\varphi$ is differentiable, then we have the corresponding integration by parts formula. Moreover, if $g(s,x_i,\omega)$ is measurable and bounded, the occupation times formula applies (e.g. see [16], [26]). If $g(\cdot,x)$ is absolutely continuous for each $x$ and $\frac{\partial}{\partial s}g(s,x)$ is locally bounded and measurable on $[0,t]\times\mathbb{R}$, then, using the integration by parts formula, we obtain the corresponding identity in terms of $\int_0^t\int_{-\infty}^{\infty}\cdots\,ds\,da$ a.s. On the other hand, the Tanaka formula holds. By a standard localizing argument, we may assume without loss of generality that there is a constant $N$ bounding the relevant processes. From the properties of local time (see Chapter 3 in [16]), for any $\gamma \ge 1$, a moment estimate holds in which the constant $C$ depends on $\gamma$ and on the bound $N$. From Kolmogorov's tightness criterion (see [17]), we know that the sequence $Y_n(a) := \frac{1}{n}M_1(t,a)$, $n = 1,2,\cdots$, is tight. Moreover, the corresponding estimate holds jointly for any $a_1, a_2, \cdots, a_k$. Therefore, in our localization argument, we can also assume that $L_1(t,a)$ and $L_2(t,a)$ are bounded uniformly in $a$.
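For the reader's convenience, the two classical facts invoked in this paragraph, whose displays were lost, take the following standard forms (stated up to the normalization convention for $L_i$; see [16], [26]):

```latex
% Occupation times formula: for bounded measurable g,
\int_0^t g\big(X_i(s)\big)\,d\langle X_i\rangle_s
  \;=\; \int_{-\infty}^{\infty} g(a)\,L_i(t,a)\,da \quad \text{a.s.}
% Tanaka's formula at level a:
|X_i(t)-a| \;=\; |X_i(0)-a|
  \;+\; \int_0^t \operatorname{sgn}\big(X_i(s)-a\big)\,dX_i(s) \;+\; L_i(t,a).
```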
We now assume the following conditions on $f: \mathbb{R}\times\mathbb{R}\to\mathbb{R}$: Condition (i): the function $f(\cdot,\cdot): \mathbb{R}\times\mathbb{R}\to\mathbb{R}$ is jointly continuous and absolutely continuous in $x_1$, $x_2$ respectively; Conditions (ii), (iii): $\nabla_i^- f$ is locally bounded, jointly left continuous, and of locally bounded variation in $x_i$ ($i = 1,2$); Condition (iv): $\nabla_1^-\nabla_2^- f$ is jointly left continuous, and of locally bounded variation in $x_1$, $x_2$ respectively and also in $(x_1,x_2)$.
From the assumption on $\nabla_1^- f$, we can use the Tanaka-Meyer formula to expand $\nabla_1^- f(a, X_2(t))$. Therefore $\nabla_1^- f(a, X_2(t))$ is a continuous semimartingale, which can be decomposed as $\nabla_1^- f(a, X_2(t)) = \nabla_1^- f(a, X_2(0)) + h(t,a) + v(t,a)$, where $h$ is a continuous local martingale and $v$ is a continuous process of locally bounded variation (in $t$). We need to prove $h \in \mathcal{V}_2$. To see this, let $P_i$ ($i = 1,2$) be partitions of $[-N,N]$ and $P_3$ a partition of $[0,t]$, such that $P = P_1 \times P_2 \times P_3$; the variation of the cross-variation over $P$ can then be estimated. Therefore, under the localization assumption, $v_1(s,a)$ is of bounded variation in $(s,a)$ for $s \in [0,t]$, $a \in [-N,N]$; indeed $Var_{s,a}\,v_1(s,a)$ is controlled because the relevant derivative is locally bounded and of bounded variation in $x_1$. Moreover, $Var_{s,a}\,v_2(s,a)$ can be estimated in the same way in the increasing case.
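The lost display in this step is presumably the Tanaka-Meyer expansion of $\nabla_1^- f(a,\cdot)$ along $X_2$. Writing $X_2(t) = X_2(0) + M_2(t) + V_2(t)$ for the semimartingale decomposition of $X_2$, a plausible reconstruction (an editorial sketch, with the operator order $\nabla_2^-\nabla_1^-$ assumed to agree with $\nabla_1^-\nabla_2^-$ under condition (iv)) is:

```latex
\nabla_1^- f\big(a, X_2(t)\big) \;=\; \nabla_1^- f\big(a, X_2(0)\big)
  \;+\; \int_0^t \nabla_2^-\nabla_1^- f\big(a, X_2(s)\big)\,dX_2(s)
  \;+\; \frac{1}{2}\int_{-\infty}^{\infty} L_2(t,x)\,d_x\,\nabla_2^-\nabla_1^- f(a,x),
```

so that the local martingale part would be $h(t,a) = \int_0^t \nabla_2^-\nabla_1^- f(a,X_2(s))\,dM_2(s)$, while $v(t,a)$ collects the remaining terms of locally bounded variation.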

In the general case, we can assert that $v_2(s,a)$ is also of bounded variation in $(s,a)$ by applying the above result to the difference of two increasing functions. So the integral against $d_{s,a}\nabla_1^- f(a, X_2(s))$ can be well defined, and a localization argument implies that $\nabla_1^- f(a, X_2(\cdot))$ is a semimartingale. Now we recall from [9] that the local time $L_1(s,a)$ can be decomposed into a part $\tilde L_1(s,a)$ that is jointly continuous in $s$, $a$, plus a jump part supported on $\{x_k^*\}$, the discontinuity points of $L_1(s,\cdot)$; see also [26]. Again we use the localization argument and assume that the support of the local time is included in $(-N,N)$. Let $g_1(s,a) := \nabla_1^- f(a, X_2(s))$. By a computation as in (4.5) in [9], for any partition $\{0 = t_0 < t_1 < \cdots < t_m = t,\ -N = a_0 < a_1 < \cdots < a_l = N\}$, the Riemann sums can be rearranged by summation by parts. Note that the first Riemann sum on the right hand side tends to the stochastic Lebesgue-Stieltjes integral of $g_1(s,a)$, and the second Riemann sum on the right hand side has the limit $\int_{-N}^{N}\tilde L_1(s,a)\,d_a g_1(s,a)$, when $\delta_t \to 0$, $\delta_x \to 0$. Therefore the left hand side converges as well when $\delta_t \to 0$, $\delta_x \to 0$. Denote the limit by $\int_0^t\int_{-\infty}^{\infty} g_1(s,a)\,d_{s,a}\tilde L_1(s,a)$ for almost all $\omega \in \Omega$. From Lemma 2.2 in [9], we know that $\tilde L_1(t,a)$ is of bounded variation in $(t,a)$ for almost every $\omega \in \Omega$, so $\int_{-N}^{N} g_1(s,a)\,d_a\tilde L_1(s,a)$ is a Lebesgue-Stieltjes integral. Therefore the integral can be well defined.
We will prove the following generalized Itô's formula in two-dimensional space.
Theorem 3.1. Under conditions (i)-(iv), for any continuous two-dimensional semimartingale $X(t) = (X_1(t), X_2(t))$, the generalized Itô formula holds almost surely. Proof: By a standard localization argument, we can assume that $X_1(t)$, $X_2(t)$, their quadratic variations $\langle X_1\rangle_t$, $\langle X_2\rangle_t$, $\langle X_1, X_2\rangle_t$ and the local times $L_1$, $L_2$ are bounded processes, and that $f$, $\nabla_i^- f$ ($i = 1,2$) are bounded. We divide the proof into several steps: (A) Let $\rho$ be a smooth function supported in $(0,2)$, normalized by a constant $c$ chosen such that $\int_0^2 \rho(x)\,dx = 1$. Take $\rho_n(x) = n\rho(nx)$ as mollifiers, and define $f_n$ as the corresponding double convolution of $f$. Then the $f_n(x_1,x_2)$ are smooth. Because of the absolute continuity assumption, we can differentiate under the integral in (13). Furthermore, using Lebesgue's dominated convergence theorem, one can prove the convergence of $f_n$ and its derivatives as $n \to \infty$, for each $(x_1,x_2) \in \mathbb{R}^2$.
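A standard choice of the mollifier consistent with the stated normalization $\int_0^2 \rho(x)\,dx = 1$ is the following bump supported in $(0,2)$, so that the convolution averages values to the left and hence approximates the left derivatives; the exact kernel display did not survive extraction, so this is an assumed reconstruction:

```latex
\rho(x) \;=\;
\begin{cases}
  c\,\exp\!\Big(\dfrac{1}{(x-1)^2 - 1}\Big), & 0 < x < 2,\\[4pt]
  0, & \text{otherwise,}
\end{cases}
\qquad
f_n(x_1,x_2) \;=\; \int_{\mathbb{R}^2} f(y_1,y_2)\,\rho_n(x_1-y_1)\,\rho_n(x_2-y_2)\,dy_1\,dy_2 .
```

With $\rho_n(x) = n\rho(nx)$ supported in $(0,2/n)$, each $f_n$ is smooth, and $\nabla_i f_n \to \nabla_i^- f$ at points of left continuity.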
(B) It turns out that, for any $g(t,x_1)$ continuous in $t$, $C^1$ in $x_1$ and with a compact support, using the integration by parts formula and Lebesgue's dominated convergence theorem, the identity (19) holds; when $\nabla^- f$ is of locally bounded variation in $x_1$ and $g(t,x_1)$ has a compact support in $x_1$ and is Riemann-Stieltjes integrable with respect to $\nabla^- f$, (20) follows. (C) If $g(s,x_1)$ is $C^2$ in $x_1$, $\Delta g(s,x_1)$ is bounded uniformly in $s$, $\frac{\partial}{\partial s}\nabla g(s,x_1)$ is continuous in $s$ and has a compact support in $x_1$, and $E\int_0^t\int_{\mathbb{R}^2}|g(s,x)g(s,y)|\,|d_{x,y,s}\langle h(\cdot,x), h(\cdot,y)\rangle_s| < \infty$, where $h \in \mathcal{V}_2$, then the corresponding identity follows by applying Lebesgue's dominated convergence theorem, Proposition 2.1 and the integration by parts formula. (D) In the following we will prove that (19) also holds for any continuous function $g(t,x_1)$ with a compact support in $x_1$; moreover, if $g \in \mathcal{V}_3$ and is continuous, then (20) also holds.
To see (19), first note that any continuous function with a compact support can be uniformly approximated by smooth functions with a compact support by the standard smoothing procedure. Note that there is a compact set $G \subset \mathbb{R}^1$ containing the relevant supports. It is easy to see from (19) and Lebesgue's dominated convergence theorem that the smooth approximations converge. Moreover, a further estimate holds.

So inequality (23) leads to the required bound. Now we use (21), (22) and (24) to control the lim sup; similarly we also have the symmetric estimate. So (19) holds for a continuous function $g$ with a compact support in $x_1$.
Now we prove that (20) also holds for a continuous function $g \in \mathcal{V}_3$. Define the approximating functions as before. Then there is a compact $G \subset \mathbb{R}^1$ containing the relevant supports, and it is trivial to see the first convergence. But from (20) applied to the smooth approximations, we can see the second; the last limit holds because of the estimate as $m \to \infty$, where we used (11) and (6). In fact, noting that $\nabla_1\nabla_2 f_n(a, X_2(s))$ is of bounded variation in $a$, we can use an argument similar to the one in the proof of (24) and (25) to prove (27).
(E) Now we apply the multi-dimensional Itô formula to the function $f_n(X(s))$; then, a.s.,
As $n \to \infty$, it is easy to see from Lebesgue's dominated convergence theorem and (14), (15), (16), (17) that the first order terms converge ($i = 1,2$). To see the convergence of $\frac{1}{2}\int_0^t \Delta_1 f_n(X(s))\,d\langle M_1\rangle_s$, first, from the integration by parts formula and (13), we have the corresponding representation. But the local time $L_1(s,a)$ can be decomposed as before, where $\tilde L_1(s,a)$ is jointly continuous in $s$, $a$, and $\{x_k^*\}$ are the discontinuity points of $L_1(s,a)$. From (D) and (10), we have the convergence as $n \to \infty$. On the other hand, from Lemma 2.2 in [9], we know that $\tilde L_1(s,a)$ is of bounded variation in $a$ for each $s$ and of bounded variation in $(s,a)$ for almost every $\omega$. And also, because $\nabla_1 f_n(a, X_2(s))$ is smooth in $a$, the integral $\int_{-\infty}^{\infty}\tilde L_1(s,a)\,d_a\nabla_1 f_n(a, X_2(s))$ is a Riemann-Stieltjes integral. Hence in (9), replacing $L_1(s,a)$ by $\tilde L_1(s,a)$ and $g_1(s,a)$ by $\nabla_1 f_n(a, X_2(s))$, we can still obtain an integration by parts formula. Note here that the integral $\int_0^t\int_{-\infty}^{\infty}\tilde L_1(s,a)\,d_{s,a}\nabla_1 f_n(a, X_2(s))$ is also a Riemann-Stieltjes integral, though it is stochastic. Therefore the convergence holds as $n \to \infty$ by Lebesgue's dominated convergence theorem. So, by (30) and (31), the second order term converges as $n \to \infty$. The term $\frac{1}{2}\int_0^t \Delta_2 f_n(X(s))\,d\langle M_2\rangle_s$ can be treated similarly. So we have proved the desired formula. ⋄ The following theorem gives a new representation of $f(X_t)$, which leads to integration by parts formulae for integrals of local times.
Theorem 3.2. Under conditions (i)-(iv), for any continuous two-dimensional semimartingale $X(t) = (X_1(t), X_2(t))$, the representation (32) holds almost surely. In particular, from (10), (11), we have the integration by parts formulae. From the assumption on $\nabla_1^- f$ and the definition of $f_n$, recalling (5) and Itô's formula, we have $\nabla_1 f_n(a, X_2(t)) = \nabla_1 f_n(a, X_2(0)) + h_n(t,a) + v_n(t,a)$, where $h_n$, $h$ are continuous local martingales and $v_n$, $v$ are continuous processes of locally bounded variation (in $t$). From previous computations, we know that $h_n, h \in \mathcal{V}_2$, i.e. $\langle (h_n - h)(\cdot,a), (h_n - h)(\cdot,b)\rangle_s$ is of bounded variation in $(s,a,b)$, and $v_n(s,a)$, $v(s,a)$ are of bounded variation in $(s,a)$. Let $(-N,N)$ cover the compact support of the local time $L_1(t,\cdot)$, with $N$ fixed for each $\omega$, and define $G(s,a,b)$ accordingly. We can show that $G(s,a,b)$ is of bounded variation in $(s,a,b)$. In fact, let $P = P_1 \times P_2 \times P_3$ be a partition, where $P_i$ is a partition of $[-N,N]$ ($i = 1,2$) and $P_3$ is a partition of $[0,t]$. Then, from (8) and standard computations, the variation of $G$ over $P$ can be bounded. Therefore $G$ can be decomposed as a difference of functions increasing in all three variables. But we can prove more results that will be used. Define $\tilde G_1(s,a,b)$ and $\tilde G_2(s,a,b)$ from the positive and negative variations of $G$. Then it is easy to see that $G(s,a,b)$ is the difference of $\tilde G_1(s,a,b)$ and $\tilde G_2(s,a,b)$, and $\tilde G_1$, $\tilde G_2$ are increasing in $(s,a,b)$. Moreover, by additivity of variation, one can see that, for $s_2 \ge s_1$, the increments compare appropriately; that is to say, $\tilde G_1(s,a,b)$ is increasing in $s$ for each $a$ and $b$. In the same way, we can show that $\tilde G_1(s,a,b)$ is increasing in $a$ for each $s$ and $b$, and in $b$ for each $s$ and $a$. Therefore $\tilde G_1(s,a,b)$ is increasing in $s$, $a$, $b$ respectively. Similarly, $\tilde G_2(s,a,b)$ is also increasing in $s$, $a$, $b$ respectively. Define $H_n(s,a,b)$ accordingly. But for any $s' \le s$, $a' \le a$, $b' \le b$, by the monotonicity of $G_1$ in each variable separately, the required comparison holds. From the assumption, we know that $H_n(s,a,b)$ is of bounded variation in $(s,a,b)$ and, as $n \to \infty$, $H_n \to 0$. We consider only the increasing part of $H_n$, still denoted by $H_n$.
As $H_n(s,a,b)$ is left continuous and increasing, it generates a Lebesgue-Stieltjes measure, denoted by $\mu_n$. It is easy to see that $\mu_n([s_1,s_2)\times[a_1,a_2)\times[b_1,b_2)) \to 0$ as $n \to \infty$, for any $[s_1,s_2)\times[a_1,a_2)\times[b_1,b_2)$ in the domain under consideration. Then $P_n \xrightarrow{W} P$. Therefore, by the equivalent condition of weak convergence (cf. Proposition 1.2.4 in [15]), for any closed set $E$, $\limsup_{n\to\infty} P_n(E) \le P(E)$. Now, without loss of generality, we assume $0 \le G_1(s,a,b) \le 1$. Using the method in the proof of Proposition 1.2.4 in [15], we have, for either $Q = P_n$ or $Q = P$, the corresponding bound. But $E_i := \{(s,a,b) : G_1(s,a,b) \ge \frac{i}{k}\}$ is closed, so $\limsup_{n\to\infty} P_n(E_i) \le P(E_i)$, $i = 0, 1, \cdots, k-1$. We can also easily prove the remaining convergence. Similarly we can deal with the terms with $\tilde L_2(s,a)$. So (32) is proved, and the integration by parts formulae follow easily. ⋄ The smoothing procedure in Theorem 3.1 can be used to prove that if $f: \mathbb{R}\times\mathbb{R}\to\mathbb{R}$ is $C^{1,1}$, and the left derivatives $\frac{\partial^{2-}}{\partial x_i\partial x_j}f(x_1,x_2)$ ($i,j = 1,2$) exist and are locally bounded and left continuous, then $f(X(t)) - f(X(0))$ admits the representation (33). This can be seen from the convergences in the proof of Theorem 3.1 and the corresponding fact for the derivatives of $f_n(x_1,x_2)$ under the stronger condition on $\frac{\partial^{2-}}{\partial x_i\partial x_j}f$. The next theorem is an easy consequence of the methods of the proofs of Theorem 3.1 and (33). Theorem 3.3. Let $f: \mathbb{R}\times\mathbb{R}\to\mathbb{R}$ satisfy condition (i) and $f(x_1,x_2) = f_h(x_1,x_2) + f_v(x_1,x_2)$. Assume $f_h$ is $C^{1,1}$ and the left derivatives $\frac{\partial^{2-}}{\partial x_i\partial x_j}f_h(x_1,x_2)$ ($i,j = 1,2$) exist and are left continuous