Wiener Process with Reflection in Non-Smooth Narrow Tubes

A Wiener process with instantaneous reflection in a narrow tube of width $\epsilon \ll 1$ around the $x$-axis is considered in this paper. The tube is assumed to be (asymptotically) non-smooth in the following sense. Let $V^{\epsilon}(x)$ be the volume of the cross-section of the tube. We assume that $V^{\epsilon}(x)/\epsilon$ converges in an appropriate sense to a non-smooth function as $\epsilon \to 0$. This limiting function may be composed of smooth functions, step functions, and the Dirac delta distribution. Under this assumption we prove that the $x$-component of the Wiener process converges weakly to a Markov process that behaves like a standard diffusion process away from the points of discontinuity and satisfies certain gluing conditions at the points of discontinuity.


Introduction
For each $x \in \mathbb{R}$ and $0 < \epsilon \ll 1$, let $D^\epsilon_x$ be a bounded interval in $\mathbb{R}$ that contains $0$. To be more specific, let $D^\epsilon_x = [-V^{l,\epsilon}(x), V^{u,\epsilon}(x)]$, where $V^{l,\epsilon}(x)$, $V^{u,\epsilon}(x)$ are sufficiently smooth, nonnegative functions, at least one of which is strictly positive. Consider the state space $D^\epsilon = \{(x,y) : x \in \mathbb{R},\ y \in D^\epsilon_x\} \subset \mathbb{R}^2$. Assume that the boundary $\partial D^\epsilon$ of $D^\epsilon$ is smooth enough and denote by $\gamma^\epsilon(x,y)$ the inward unit normal to $\partial D^\epsilon$. Assume that $\gamma^\epsilon(x,y)$ is not parallel to the $x$-axis.
Denote by $V^\epsilon(x) = V^{l,\epsilon}(x) + V^{u,\epsilon}(x)$ the length of the cross-section $D^\epsilon_x$ of the stripe. We assume that $D^\epsilon$ is a narrow stripe for $0 < \epsilon \ll 1$, i.e. $V^\epsilon(x) \downarrow 0$ as $\epsilon \downarrow 0$. In addition, we assume that $\frac{1}{\epsilon} V^\epsilon(x)$ converges in an appropriate sense to a non-smooth function $V(x)$ as $\epsilon \downarrow 0$. The limiting function may, for example, be composed of smooth functions, step functions, and the Dirac delta distribution. Next, we state the problem and rigorously introduce the assumptions on $V^\epsilon(x)$ and $V(x)$. At the end of this introduction we formulate the main result.
Consider the Wiener process $(X^\epsilon_t, Y^\epsilon_t)$ in $D^\epsilon$ with instantaneous normal reflection on the boundary of $D^\epsilon$. Its trajectories can be described by the stochastic differential equations
$$X^\epsilon_t = x + W^1_t + \int_0^t \gamma^\epsilon_1(X^\epsilon_s, Y^\epsilon_s)\, dL^\epsilon_s, \qquad Y^\epsilon_t = y + W^2_t + \int_0^t \gamma^\epsilon_2(X^\epsilon_s, Y^\epsilon_s)\, dL^\epsilon_s. \qquad (1)$$
Here $W^1_t$ and $W^2_t$ are independent Wiener processes in $\mathbb{R}$ and $(x,y)$ is a point inside $D^\epsilon$; $\gamma^\epsilon_1$ and $\gamma^\epsilon_2$ are the projections of the unit inward normal vector to $\partial D^\epsilon$ on the $x$- and $y$-axis respectively. Furthermore, $L^\epsilon_t$ is the local time for the process $(X^\epsilon_t, Y^\epsilon_t)$ on $\partial D^\epsilon$, i.e. it is a continuous, non-decreasing process that increases only when $(X^\epsilon_t, Y^\epsilon_t) \in \partial D^\epsilon$, so that the Lebesgue measure $\Lambda\{t > 0 : (X^\epsilon_t, Y^\epsilon_t) \in \partial D^\epsilon\} = 0$ (see e.g. [12]). Our goal is to study the weak convergence of the $x$-component of the solution to (1) as $\epsilon \downarrow 0$. The $y$-component clearly converges to $0$ as $\epsilon \downarrow 0$. The problem for narrow stripes with a smooth boundary was considered in [4] and [5]. There, the authors consider the case $\frac{1}{\epsilon} V^\epsilon(x) = V(x)$, where $V(x)$ is a smooth function. It is proven that $X^\epsilon_t$ converges to a standard diffusion process $X_t$ as $\epsilon \downarrow 0$. More precisely, it is shown that for any $T > 0$
$$\lim_{\epsilon \downarrow 0} \mathbb{E} \max_{0 \le t \le T} \left|X^\epsilon_t - X_t\right|^2 = 0, \qquad (2)$$
where $X_t$ is the solution of the stochastic differential equation
$$dX_t = dW_t + \frac{V_x(X_t)}{2 V(X_t)}\,dt, \qquad X_0 = x, \qquad (3)$$
and $V_x(x) = \frac{dV(x)}{dx}$.
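Although the arguments below are analytic, the reflected system (1) is straightforward to simulate numerically, which helps build intuition for the narrow-tube limit. The sketch below uses a hypothetical smooth profile ($V^{l,\epsilon} = V^{u,\epsilon} = \tfrac{\epsilon}{2}(2+\sin x)$, i.e. the smooth-boundary situation of [4], [5]) and approximates the normal reflection in (1) by mirror reflection in the $y$-coordinate, which is only adequate when the walls are nearly horizontal; it is an illustration, not part of any proof.

```python
import numpy as np

# hypothetical profile: V^{l,eps}(x) = V^{u,eps}(x) = (eps/2)(2 + sin x),
# so that V^eps(x)/eps = 2 + sin x is smooth (the setting of [4], [5])
def half_width(x, eps):
    return 0.5 * eps * (2.0 + np.sin(x))

def reflected_wiener(x0=0.0, y0=0.0, eps=0.05, T=1.0, n=20000, seed=0):
    """Euler scheme for (1); normal reflection at the two walls is
    approximated by folding y back into [-V^{l,eps}(x), V^{u,eps}(x)]."""
    rng = np.random.default_rng(seed)
    dt = T / n
    xs = np.empty(n + 1)
    ys = np.empty(n + 1)
    xs[0], ys[0] = x0, y0
    for k in range(n):
        dx, dy = rng.normal(0.0, np.sqrt(dt), size=2)
        x, y = xs[k] + dx, ys[k] + dy
        h = half_width(x, eps)
        # mirror reflection: fold y into [-h, h] via a period-4h triangle wave
        z = (y + h) % (4.0 * h)
        y = min(z, 4.0 * h - z) - h
        xs[k + 1], ys[k + 1] = x, y
    return xs, ys

xs, ys = reflected_wiener()
# the simulated trajectory never leaves the tube
inside = bool(np.all(np.abs(ys) <= half_width(xs, 0.05) + 1e-12))
```

As $\epsilon \downarrow 0$ the $y$-component is squeezed to $0$, while the $x$-component approaches the diffusion (3).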
In this paper we assume that $\frac{1}{\epsilon} V^\epsilon(x)$ converges, as $\epsilon \downarrow 0$, to a non-smooth function as described by (5)-(9) below. Owing to the non-smoothness of the limiting function, one cannot hope to obtain a limit in the mean square sense to a standard diffusion process as before. In particular, as we will see, the non-smoothness of the limiting function leads to the effect that the limiting diffusion may have points where the scale function is not differentiable (skew diffusion) and also points with positive speed measure (points with delay).
For any $\epsilon > 0$, we introduce the functions
$$u^\epsilon(x) = \int_0^x \frac{\epsilon}{V^\epsilon(y)}\,dy, \qquad v^\epsilon(x) = \int_0^x \frac{V^\epsilon(y)}{\epsilon}\,dy. \qquad (4)$$
Now we are in a position to describe the limiting behavior of $\frac{1}{\epsilon} V^\epsilon(x)$.

(i). We assume that $V^{l,\epsilon}, V^{u,\epsilon} \in C^3(\mathbb{R})$ for every fixed $\epsilon > 0$ and that $V^\epsilon(x) \downarrow 0$ as $\epsilon \downarrow 0$ (in particular $V^0(x) = 0$). Moreover, there exists a universal positive constant $\zeta$ such that
$$\frac{1}{\epsilon} V^\epsilon(x) > \zeta > 0 \quad \text{for every } x \in \mathbb{R} \text{ and every } \epsilon > 0. \qquad (5)$$

(ii). We assume that the functions
$$u(x) = \lim_{\epsilon \downarrow 0} u^\epsilon(x), \qquad v(x) = \lim_{\epsilon \downarrow 0} v^\epsilon(x) \qquad (6)$$
are well defined, and that the limiting function $u(x)$ is continuous and strictly increasing whereas the limiting function $v(x)$ is right continuous and strictly increasing. In general, the function $u(x)$ can have countably many points where it is not differentiable and the function $v(x)$ can have countably many points where it is not continuous or not differentiable. However, here we assume for brevity that the only non-smoothness point is $x = 0$. In other words, we assume that
$$\lim_{\epsilon \downarrow 0} \frac{1}{\epsilon} V^\epsilon(x) = V(x) \quad \text{for } x \in \mathbb{R} \setminus \{0\}, \qquad (7)$$
and that the function $V(x)$ is smooth for $x \in \mathbb{R} \setminus \{0\}$.
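To see how the functions in (4) detect non-smoothness of the limit, one can compute $u^\epsilon$ and $v^\epsilon$ numerically for a hypothetical profile $\frac{1}{\epsilon}V^\epsilon(x) = 1 + \frac{1}{2}(1+\tanh(x/\sqrt{\epsilon}))$, which converges to the step function $1 + \chi_{[0,\infty)}(x)$: the limit $u$ develops a corner at $0$ (the skew case) while $v$ stays continuous. The profile and the smoothing scale $\sqrt{\epsilon}$ are illustrative choices, not taken from the assumptions above.

```python
import numpy as np

def V_over_eps(x, eps):
    # hypothetical profile converging to the step function 1 + chi_{[0,inf)}(x)
    return 1.0 + 0.5 * (1.0 + np.tanh(x / np.sqrt(eps)))

def integral(f, a, b, n=200001):
    # simple trapezoid rule from a to b (b < a gives the signed integral)
    t = np.linspace(a, b, n)
    y = f(t)
    return float(np.sum(y[:-1] + y[1:]) * 0.5 * (t[1] - t[0]))

def u_eps(x, eps):  # u^eps(x) = int_0^x eps / V^eps(y) dy
    return integral(lambda t: 1.0 / V_over_eps(t, eps), 0.0, x)

def v_eps(x, eps):  # v^eps(x) = int_0^x V^eps(y) / eps dy
    return integral(lambda t: V_over_eps(t, eps), 0.0, x)

# as eps -> 0: u(1) -> 1/2 but u(-1) -> -1 (a corner of u at 0), v(1) -> 2
u_right, u_left, v_right = u_eps(1.0, 1e-6), u_eps(-1.0, 1e-6), v_eps(1.0, 1e-6)
```

The one-sided slopes $u'(0-) = 1 \neq u'(0+) = 1/2$ are exactly the kind of non-differentiability of the scale function that produces a skew diffusion in the limit.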
In addition, we assume that the first three derivatives of $V^{l,\epsilon}(x)$ and $V^{u,\epsilon}(x)$ (and consequently of $V^\epsilon(x)$ as well) behave nicely for $|x| > 0$ and for $\epsilon$ small. In particular, we assume that the derivative bounds (8) hold for any connected subset $K$ of $\mathbb{R}$ that stays away from an arbitrarily small neighborhood of $x = 0$ and for $\epsilon$ sufficiently small. After the proof of the main theorem (at the end of section 2), we mention the result for the case where there exists more than one non-smoothness point.
(iii). Let $g^\epsilon(x)$ be a smooth function and let us define the quantity $\xi^\epsilon$ in terms of it. We assume the growth condition
$$\xi^\epsilon \downarrow 0, \quad \text{as } \epsilon \downarrow 0. \qquad (9)$$

Remark 1.1. Condition (9), i.e. $\xi^\epsilon \downarrow 0$, basically says that the behavior of $V^{l,\epsilon}(x)$ and $V^{u,\epsilon}(x)$ in the neighborhood of $x = 0$ can be at worst as singular as described by $\xi^\epsilon(\cdot)$ for $\epsilon$ small. This condition will be used in the proof of Lemma 2.4 in section 4. Lemma 2.4 is essential for the proof of our main result. In particular, it provides us with the estimate of the expectation of the time it takes for the solution to (1) to leave the neighborhood of the point $0$, as $\epsilon \downarrow 0$. At the present moment we do not know whether this condition can be improved; this is a subject of further research.
In this paper we prove that, under assumptions (5)-(9), the $X^\epsilon_t$ component of the process $(X^\epsilon_t, Y^\epsilon_t)$ converges weakly, as $\epsilon \downarrow 0$, to a one-dimensional strong Markov process that is continuous with probability one. It behaves like a standard diffusion process away from $0$ and satisfies a gluing condition at the point of discontinuity $0$. More precisely, we prove the following theorem.

Theorem 1.2. Assume that (5)-(9) hold. Let $X$ be the solution to the martingale problem for the operator $A$ given by (10), with domain of definition given by (11) and (12). Then
$$X^\epsilon_\cdot \longrightarrow X_\cdot \quad \text{weakly in } C_{0T}, \text{ for any } T < \infty, \text{ as } \epsilon \downarrow 0,$$
where $C_{0T}$ is the space of continuous functions on $[0, T]$.
As proved by Feller [2], the martingale problem for $A$, (10), has a unique solution $X$. It is an asymmetric Markov process with delay at the point of discontinuity $0$. In particular, the asymmetry is due to the possibility of having $u'(0+) \neq u'(0-)$ (see Lemma 2.5), whereas the delay is due to the possibility of having $v(0+) \neq v(0-)$ (see Lemma 2.4).
For the convenience of the reader, we briefly recall Feller's characterization of one-dimensional Markov processes that are continuous with probability one (for more details see [2]; also [13]). All one-dimensional strong Markov processes that are continuous with probability one can be characterized (under some minimal regularity conditions) by a generalized second order differential operator $D_v D_u f$ with respect to two increasing functions $u(x)$ and $v(x)$, where $u(x)$ is continuous and $v(x)$ is right continuous. Here $D_u$ and $D_v$ are differentiation operators with respect to $u(x)$ and $v(x)$ respectively, and the left derivative of $f$ with respect to $u$ is defined by
$$D_u f(x-) = \lim_{h \downarrow 0} \frac{f(x) - f(x-h)}{u(x) - u(x-h)},$$
provided the limit exists.
The right derivative $D_u f(x+)$ is defined similarly. If $v$ is discontinuous at $y$, then
$$D_v D_u f(y) = \frac{D_u f(y+) - D_u f(y-)}{v(y+) - v(y-)}.$$
A more detailed description of these Markov processes can be found in [2] and [13].
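As a sketch of how a corner of $u$ and a jump of $v$ at $0$ produce the gluing condition $p_+ f_x(0+) - p_- f_x(0-) = \theta L f(0)$ that appears in the proof of Theorem 1.2 (the normalization of the constants $p_\pm$ and $\theta$ is left unspecified here), one can argue as follows:

```latex
% one-sided u-derivatives, when u'(0+) and u'(0-) exist:
\[
  D_u f(0\pm) = \frac{f_x(0\pm)}{u'(0\pm)},
\]
% the jump formula for D_v at a discontinuity point of v:
\[
  D_v D_u f(0) = \frac{D_u f(0+) - D_u f(0-)}{v(0+) - v(0-)}.
\]
% for f in the domain of A = D_v D_u, the value Af(0) must be finite, hence
\[
  \frac{f_x(0+)}{u'(0+)} - \frac{f_x(0-)}{u'(0-)} = \bigl(v(0+) - v(0-)\bigr)\, Af(0),
\]
% which, after normalizing the coefficients, has the form
%   p_+ f_x(0+) - p_- f_x(0-) = \theta\, Af(0).
```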
Remark 1.3. Notice that if the limit of $\frac{1}{\epsilon} V^\epsilon(x)$, as $\epsilon \downarrow 0$, is a smooth function, then the limiting process $X$ described by Theorem 1.2 coincides with (3).
We conclude the introduction with a useful example. Let us assume that $V^\epsilon(x)$ can be decomposed into three terms,
$$V^\epsilon(x) = V^\epsilon_1(x) + V^\epsilon_2(x) + V^\epsilon_3(x),$$
where the functions $V^\epsilon_i(x)$, for $i = 1, 2, 3$, satisfy the following conditions:

(i). There exists a strictly positive, smooth function $V_1(x) > 0$ such that
$$\frac{1}{\epsilon} V^\epsilon_1(x) \longrightarrow V_1(x), \quad \text{as } \epsilon \downarrow 0, \qquad (15)$$
uniformly in $x \in \mathbb{R}$.
(ii). There exists a nonnegative constant $\beta \geq 0$ such that
$$\frac{1}{\epsilon} V^\epsilon_2(x) \longrightarrow \beta\, \chi_{[0,\infty)}(x), \quad \text{as } \epsilon \downarrow 0, \qquad (16)$$
uniformly on every connected subset of $\mathbb{R}$ that is away from an arbitrarily small neighborhood of $0$, and weakly within a neighborhood of $0$. Here $\chi_A$ is the indicator function of the set $A$.
(iii). We assume that
$$\frac{1}{\epsilon} V^\epsilon_3(x) \longrightarrow \mu\, \delta_0(x), \quad \text{as } \epsilon \downarrow 0, \qquad (17)$$
in the weak sense. Here $\mu$ is a nonnegative constant and $\delta_0(x)$ is the Dirac delta distribution at $0$.
Let us define $\alpha = V_1(0)$. In this case the operator (11) and its domain of definition (12) for the limiting process $X$ take an explicit form. For instance, consider a small $\epsilon$-dependent positive number $0 < \delta = \delta(\epsilon) \ll 1$ and assume that $\frac{1}{\epsilon}V^\epsilon_1(x)$ is any smooth, strictly positive function and that $\delta$ is chosen such that $\epsilon/\delta^3 \downarrow 0$ as $\epsilon \downarrow 0$. Then it can be easily verified that (15)-(17) and (9) are satisfied. Moreover, in this case we have $\mu = \int_{-\infty}^{\infty} V_3(x)\,dx$.

In section 2 we prove our main result, assuming that we have all the needed estimates. After the proof of Theorem 1.2, we state the result for the case where $\lim_{\epsilon \downarrow 0} \frac{1}{\epsilon} V^\epsilon(x)$ has more than one point of discontinuity (Theorem 2.6). In section 3 we prove relative compactness of $X^\epsilon_t$ (this follows basically from [8]) and we consider what happens outside a small neighborhood of $x = 0$. In section 4 we estimate the expectation of the time it takes for the solution to (1) to leave a neighborhood of the point $0$; the derivation of this estimate uses assumption (9). In section 5 we: (a) prove that the behavior of the process after it reaches $x = 0$ does not depend on where it came from, and (b) calculate the limiting exit probabilities of $(X^\epsilon_t, Y^\epsilon_t)$, from the left and from the right, of a small neighborhood of $x = 0$. The derivation of these estimates has two main ingredients. The first is the characterization of all one-dimensional Markov processes that are continuous with probability one by generalized second order operators, introduced by Feller (see [2]; also [13]). The second is a result of Khasminskii on invariant measures [11].
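The role of the Dirac component $V^\epsilon_3$ can be checked numerically. Take the hypothetical profile $\frac{1}{\epsilon}V^\epsilon(x) = 1 + \frac{\mu}{\delta}\max(0,\, 1 - |x|/\delta)$ with $\delta = \epsilon^{1/4}$ (so that $\epsilon/\delta^3 = \epsilon^{1/4} \downarrow 0$): the bump carries total mass $\mu$, so $v^\epsilon$ picks up an extra contribution $\mu$ across a neighborhood of $0$ (delay), while $u^\epsilon$ is unaffected in the limit. The triangular mollifier and the exponent $1/4$ are illustrative choices, not taken from the paper.

```python
import numpy as np

def V_over_eps(x, eps, mu=0.5):
    # hypothetical mollified delta: triangular bump of mass mu and width delta,
    # added to the constant profile V_1 = 1 (and V_2 = 0)
    delta = eps ** 0.25          # then eps / delta^3 = eps^{1/4} -> 0
    bump = (mu / delta) * np.maximum(0.0, 1.0 - np.abs(x) / delta)
    return 1.0 + bump

def integral(f, a, b, n=400001):
    # simple trapezoid rule on a uniform grid
    t = np.linspace(a, b, n)
    y = f(t)
    return float(np.sum(y[:-1] + y[1:]) * 0.5 * (t[1] - t[0]))

eps = 1e-8
# v^eps(1) - v^eps(-1) -> 2 + mu: the bump contributes the extra mass mu
v_span = integral(lambda t: V_over_eps(t, eps), -1.0, 1.0)
# u^eps(1) - u^eps(-1) -> 2: the bump region has width 2*delta -> 0
u_span = integral(lambda t: 1.0 / V_over_eps(t, eps), -1.0, 1.0)
```

The persistent excess of $v^\epsilon$ over the smooth part is exactly the positive speed measure at $0$ that makes the limit a process with delay.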
Lastly, we would like to mention that one can similarly consider narrow tubes, i.e. $y \in D^\epsilon_x \subset \mathbb{R}^n$ for $n > 1$, and prove a result similar to Theorem 1.2.

Proof of the Main Theorem
Before proving Theorem 1.2 we introduce some notation and formulate the necessary lemmas. The lemmas are proved in sections 3 to 5.
In this and the following sections we denote by $C_0$ any unimportant constant that does not depend on any small parameter. The constant may change from place to place, but it will always be denoted by the same $C_0$.
For any $B \subset \mathbb{R}$, we define the Markov time $\tau(B) = \tau^\epsilon_{x,y}(B)$ to be the first exit time
$$\tau^\epsilon_{x,y}(B) = \inf\{t \geq 0 : X^{\epsilon,x,y}_t \notin B\}.$$
Moreover, for $\kappa > 0$, the term $\tau^\epsilon_{x,y}(\pm\kappa)$ will denote the Markov time $\tau^\epsilon_{x,y}((-\kappa, \kappa))$. In addition, $\mathbb{E}^\epsilon_{x,y}$ will denote the expected value associated with the probability measure $\mathbb{P}^\epsilon_{x,y}$ that is induced by the process $(X^{\epsilon,x,y}_t, Y^{\epsilon,x,y}_t)$. For the sake of notational convenience we also define the auxiliary operators used below. Most of the processes, Markov times and sets that will be mentioned below depend on $\epsilon$; for notational convenience, however, we shall not always incorporate this dependence into the notation, so the reader should be careful to distinguish between objects that do and do not depend on $\epsilon$.
Throughout this paper $0 < \kappa_0 < \kappa$ will be small positive constants. We may not always mention the relation between these parameters, but we will always assume it. Moreover, $\kappa_\eta$ will denote a small positive number that depends on another small positive number $\eta$.
Moreover, one can write down the normal vector $\gamma^\epsilon(x,y)$ explicitly.

Lemma 2.1. Assume that for any $(x,y) \in D^\epsilon$ the family of distributions $Q^\epsilon_{x,y}$, for all $\epsilon \in (0,1)$, is tight. Moreover, suppose that for any compact set $K \subseteq \mathbb{R}$, any function $f \in D(A)$ and every $\lambda > 0$, (22) holds. Then the measures $Q^\epsilon_{x,y}$ corresponding to $\mathbb{P}^\epsilon_{x,y}$ converge weakly to the probability measure $\mathbb{P}_x$ that is induced by $X_\cdot$, as $\epsilon \downarrow 0$.
Lemma 2.2. The family of measures $Q^\epsilon_{x,y}$, for small nonzero $\epsilon$, is tight.
Lemma 2.3. Then, for every $\lambda > 0$, the corresponding resolvent convergence holds.

Lemma 2.4. For every $\eta > 0$ there exists a $\kappa_\eta > 0$ such that for every $0 < \kappa < \kappa_\eta$ and for sufficiently small $\epsilon$ the exit-time estimate holds.

Lemma 2.5. For every $\eta > 0$ there exists a $\kappa_\eta > 0$ such that for every $0 < \kappa < \kappa_\eta$ there exists a positive $\kappa_0 = \kappa_0(\kappa)$ such that for sufficiently small $\epsilon$ the exit-probability estimate holds.

Proof of Theorem 1.2. We will make use of Lemma 2.1. The tightness required in Lemma 2.1 is the statement of Lemma 2.2. Thus it remains to prove that (22) holds. Let $\lambda > 0$, $(x,y) \in D^\epsilon$ and $f \in D(A)$ be fixed. In addition, let $\eta > 0$ be an arbitrary positive number.
It is Lemma 2.2 that makes such a choice possible. We assume that $x^* > |x|$.
To prove (22) it is enough to show that for every $\eta > 0$ there exists an $\epsilon_0 > 0$, independent of $(x,y)$, such that for every $0 < \epsilon < \epsilon_0$ the bound (26) holds. Choose $\epsilon$ small and let $0 < \kappa_0 < \kappa$ be small positive numbers. We consider two cycles of Markov times $\{\sigma_n\}$ and $\{\tau_n\}$. In figure 1 we see a trajectory of the process $Z^\epsilon_t = (X^\epsilon_t, Y^\epsilon_t)$ along with its associated Markov chains $\{Z^\epsilon_{\sigma_n}\}$ and $\{Z^\epsilon_{\tau_n}\}$. We will also write $z = (x,y)$ for the initial point.
We denote by $\chi_A$ the indicator function of the set $A$. The difference in (26) can be represented as the sum of contributions over the time intervals from $\sigma_n$ to $\tau_{n+1}$ and from $\tau_n$ to $\sigma_n$. The formally infinite sums are finite for every trajectory for which $\tau < \infty$.
Assuming that we can write the expectation of the infinite sums as the infinite sum of the expectations, the latter equality becomes (29). The aforementioned calculation can be carried out provided the corresponding series converges. Indeed, by the Markov property, and since clearly $\max_{|x|=\kappa,\, y \in D^\epsilon_x} \phi^\epsilon_1(x,y) < 1$ for $\kappa \in (\kappa_0, x^*)$, equality (29) is valid.
However, we need to know how the sums in (31) behave in terms of $\kappa$. To this end we apply Lemma 2.3 to the function $g$ that is the solution to (32). It follows from (32) (for more details see the related discussion in section 8.3 of [6], page 306) that there exists a positive constant $C_0$, independent of $\epsilon$, and a positive constant $\kappa'$ such that the bound (33) holds for every $\kappa < \kappa'$ and for all $\kappa_0 < \kappa$. By the strong Markov property with respect to the Markov times $\tau_n$ and $\sigma_n$, equality (29) becomes (34). Because of (33), equality (34) becomes (35). By Lemma 2.3 we get that $\max_{|x|=\kappa,\, y \in D^\epsilon_x} |\phi^\epsilon_3(x,y)|$ is arbitrarily small for sufficiently small $\epsilon$, so (36) holds. Therefore, it remains to consider the terms $|\phi^\epsilon_2(x,y)|$, where $(x,y)$ is the initial point, and $\frac{1}{\kappa} \max_{|x|=\kappa_0,\, y \in D^\epsilon_x} |\phi^\epsilon_2(x,y)|$.

Firstly, we consider the term $|\phi^\epsilon_2(x,y)|$, where $(x,y)$ is the initial point. Clearly, if $|x| > \kappa$, then Lemma 2.3 implies that $|\phi^\epsilon_2(x,y)|$ is arbitrarily small for sufficiently small $\epsilon$, so (37) holds. We consider now the case $|x| \leq \kappa$, where Lemma 2.3 does not apply. However, one can use the continuity of $f$ and Lemma 2.4, as the following calculations show. We have (38). Choose now a positive $\kappa'$ so that (39) holds for sufficiently small $\epsilon$ and for all $|x| \leq \kappa'$. Therefore, for $\kappa \leq \kappa'$ and for sufficiently small $\epsilon$, (40) holds.

Secondly, we consider the term $\frac{1}{\kappa} \max_{|x|=\kappa_0,\, y \in D^\epsilon_x} |\phi^\epsilon_2(x,y)|$. Here we need a sharper estimate because of the factor $\frac{1}{\kappa}$. We will prove that (41) holds for $(x,y) \in \{\pm\kappa_0\} \times D^\epsilon_{\pm\kappa_0}$ and for $\epsilon$ sufficiently small, using the decomposition (42). Since the one-sided derivatives of $f$ exist, we may choose a positive $\kappa_\eta$ such that (43) holds for all $w \in (0, \kappa_1)$. Furthermore, by Lemma 2.5 we can choose, for sufficiently small $\kappa_2 > 0$, a $\kappa_0(\kappa_2) \in (0, \kappa_2)$ such that (44) holds for sufficiently small $\epsilon$.
In addition, by Lemma 2.4 we can choose, for sufficiently small $\kappa_\eta > 0$, a $\kappa_3 \in (0, \kappa_\eta)$ such that (45) holds for sufficiently small $\epsilon$. For sufficiently small $\epsilon$ and for all $(x,y) \in \{\pm\kappa_0\} \times D^\epsilon_{\pm\kappa_0}$, we have (46). Because of (43) and the gluing condition $p_+ f_x(0+) - p_- f_x(0-) = \theta L f(0)$, the first summand on the right hand side of (46) satisfies (47). Moreover, for small enough $x \in \{\pm\kappa_0, \pm\kappa\}$ we also have (48). The latter together with (44) imply that for sufficiently small $\epsilon$ the second summand on the right hand side of (46) satisfies (49). A similar expression holds for the third summand on the right hand side of (46) as well. Therefore (47)-(49) and the fact that we take $\kappa_0$ to be much smaller than $\kappa$ imply that (50) holds for all $(x,y) \in \{\pm\kappa_0\} \times D^\epsilon_{\pm\kappa_0}$ and for $\epsilon$ sufficiently small. The second term on the right hand side of (42) can also be bounded by $\kappa \eta C_0$ for $\kappa$ and $\epsilon$ sufficiently small, as the following calculation shows. For $(x,y) \in \{\pm\kappa_0\} \times D^\epsilon_{\pm\kappa_0}$ we have (51). Therefore, Lemma 2.4 (in particular (45)) and the continuity of the function $Lf$ give us, for $\kappa$ and $\epsilon$ sufficiently small, (52). The third term on the right hand side of (42) is clearly bounded by $\kappa \eta C_0$ for $\epsilon$ sufficiently small, by Lemma 2.4. As far as the fourth term on the right hand side of (42) is concerned, one can use the continuity of $f$ together with Lemma 2.4.
The latter, (50), (52) and (42) finally give us (53). Of course, the constants $C_0$ that appear in the relations above are not the same, but for notational convenience they are all denoted by the same symbol $C_0$. So we finally get, by (53), (38), (39), (40) and (37), that (26) holds. This concludes the proof of Theorem 1.2.
In case $\lim_{\epsilon \downarrow 0} \frac{1}{\epsilon} V^\epsilon(x)$ has more than one point of discontinuity, one can similarly prove the following theorem. Here the limiting Markov process $X$ may be asymmetric at some point $x_1$, have delay at some other point $x_2$, or have both irregularities at another point $x_3$.

Theorem 2.6. Let us assume that $\frac{1}{\epsilon} V^\epsilon(x)$ has a finite number of discontinuities, as described by (5)-(9), at the points $x_i$, $i \in \{1, \cdots, m\}$. Let $X$ be the solution to the corresponding martingale problem. Then we have $X^\epsilon_\cdot \longrightarrow X_\cdot$ weakly in $C_{0T}$, for any $T < \infty$, as $\epsilon \downarrow 0$.

Proof of Lemmata 2.1, 2.2 and 2.3
Proof of Lemma 2.1. The proof is very similar to the proof of Lemma 8.3.1 in [6], so it will not be repeated here.
Proof of Lemma 2.2. The tool that is used to establish tightness of P ǫ is the martingale-problem approach of Stroock-Varadhan [15]. In particular we can apply Theorem 2.1 of [8]. The proof is almost identical to the part of the proof of Theorem 6.1 in [8] where pre-compactness is proven for the Wiener process with reflection in narrow-branching tubes.
Before proving Lemma 2.3 we introduce the following diffusion process. Let $\bar{X}^\epsilon_t$ be the one-dimensional process that is the solution to
$$d\bar{X}^\epsilon_t = dW_t + \frac{V^\epsilon_x(\bar{X}^\epsilon_t)}{2 V^\epsilon(\bar{X}^\epsilon_t)}\,dt, \qquad \bar{X}^\epsilon_0 = x, \qquad (55)$$
with differential operator
$$\bar{L}^\epsilon = \frac{1}{2}\frac{d^2}{dx^2} + \frac{V^\epsilon_x(x)}{2 V^\epsilon(x)}\frac{d}{dx}. \qquad (56)$$
A simple calculation shows that
$$2\bar{L}^\epsilon f = D_{v^\epsilon} D_{u^\epsilon} f,$$
where the functions $u^\epsilon(x)$ and $v^\epsilon(x)$ are defined by (4). This representation of $u^\epsilon(x)$ and $v^\epsilon(x)$ is unique up to multiplicative and additive constants: in fact, one can multiply one of these functions by some constant and divide the other function by the same constant, or add a constant to either of them.
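The relation between $\bar{L}^\epsilon$ and the functions $u^\epsilon$, $v^\epsilon$ of (4), namely $2\bar{L}^\epsilon f = D_{v^\epsilon} D_{u^\epsilon} f$ (since $(u^\epsilon)' = \epsilon/V^\epsilon$ and $(v^\epsilon)' = V^\epsilon/\epsilon$), can be verified numerically on a grid. The smooth profile and test function below are arbitrary illustrative choices.

```python
import numpy as np

# grid, hypothetical smooth profile V = V^eps/eps, and a test function f
x = np.linspace(0.2, 1.2, 20001)
V = 2.0 + np.sin(x)
f = np.sin(x)

fp = np.gradient(f, x)            # numerical f'
Duf = V * fp                      # D_u f = f' / u'   with u' = 1/V
DvDuf = np.gradient(Duf, x) / V   # D_v g = g' / v'   with v' = V

# analytically, 2 Lbar f = f'' + (V'/V) f', with f'' = -sin, f' = cos, V' = cos
two_Lbar_f = -np.sin(x) + (np.cos(x) / V) * np.cos(x)

# compare away from the grid boundary, where np.gradient is less accurate
err = float(np.max(np.abs(DvDuf[100:-100] - two_Lbar_f[100:-100])))
```

The agreement (up to finite-difference error) is what makes the generalized operator $D_v D_u$ the natural object for passing to the non-smooth limit.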
Using the results in [9] one can show (see Theorem 4.4 in [10]) that $\bar{X}^\epsilon_\cdot \longrightarrow X_\cdot$ weakly as $\epsilon \downarrow 0$, where $X$ is the limiting process with operator defined by (10).
Proof of Lemma 2.3. We prove the lemma just for $x \in [x_1, x_2]$; the proof for $x \in [-x_2, -x_1]$ is the same. We claim that it is sufficient to prove (59) as $\epsilon \downarrow 0$, where $\bar{L}^\epsilon$ is defined in (56). The left hand side of (59) is meaningful since $f$ is sufficiently smooth. We observe that (60) holds. Now we claim that (61) holds, where for any function $g$ we define $\|g\|_{[x_1,x_2]} = \sup_{x\in[x_1,x_2]} |g(x)|$. This follows directly from our assumptions on the function $V^\epsilon(x)$. Therefore, it is indeed enough to prove (59). By the Itô formula applied to the function $e^{-\lambda t} f(x)$, we immediately get that (59) is equivalent to (62) as $\epsilon \downarrow 0$. We can estimate the left hand side of (62) as in Lemma 2.1 of [5]; see (63). It is easy to see that $v^\epsilon$ is a solution to the P.D.E. (64), where $n^\epsilon(x,y) = \frac{\gamma^\epsilon_2(x,y)}{|\gamma^\epsilon_2(x,y)|}$ and $x \in \mathbb{R}$ is a parameter. If we apply the Itô formula to the function $e^{-\lambda t} v^\epsilon(x,y)$, we get that $e^{-\lambda t} v^\epsilon(X^\epsilon_t, Y^\epsilon_t)$ satisfies (65) with probability one. Since $v^\epsilon(x,y)$ satisfies (64), we have (66). For such $x$ we are far away from the point of discontinuity. Taking into account the latter and the definition of $v^\epsilon(x,y)$ by (63), we get that the first three terms on the right hand side of (66) are bounded by $\epsilon^2 C_0$ for $\epsilon$ small enough. So it remains to consider the last term, i.e. the integral with respect to the local time. First of all, it is easy to see that there exists a $C_0 > 0$ such that (67) holds. As far as the integral with respect to the local time on the right hand side of (67) is concerned, we claim that there exist an $\epsilon_0 > 0$ and a $C_0 > 0$ such that (68) holds for all $\epsilon < \epsilon_0$. Hence, taking into account (67) and (68), we get that the right hand side of (66) converges to zero, so the convergence (62) holds. It remains to prove the claim (68). This can be done as in Lemma 2.2 of [5]. In particular, one considers the auxiliary function (69). It is easy to see that $w^\epsilon$ is a solution to the P.D.E. (70).
Proof of Lemma 2.4

As can be derived from Theorem 2.5.1 of [3], the function $\phi^\epsilon(x,y) = \mathbb{E}^\epsilon_{x,y} \tau^\epsilon(\pm\kappa)$ is a solution to (71). Let $f^\epsilon(x,y) = \phi^\epsilon(x,y) - \bar{\phi}^\epsilon(x)$. Then $f^\epsilon$ satisfies (72). By applying the Itô formula to the function $f^\epsilon$ and recalling that $f^\epsilon$ satisfies (72), we get (73). We can estimate the right hand side of (73) similarly to the left hand side of (62) in Lemma 2.3 (see also Lemma 2.1 in [5]). Consider the auxiliary function (74). It is easy to see that $w^\epsilon$ is a solution to the P.D.E. (75),
where $n^\epsilon(x,y) = \frac{\gamma^\epsilon_2(x,y)}{|\gamma^\epsilon_2(x,y)|}$ and $x \in \mathbb{R}$ is a parameter. Then, if we apply the Itô formula to the function $w^\epsilon(x,y)$ and recall that $w^\epsilon$ satisfies (75), we get an upper bound for the right hand side of (73) that is the same as the right hand side of (66) with $\lambda = 0$, with $v^\epsilon$ replaced by $w^\epsilon$ and $\tau^\epsilon(x_1, x_2)$ replaced by $\tau^\epsilon(\pm\kappa)$. Namely, for $(x,y) = (x_0, y_0)$, we have (76). Now one can solve (71) explicitly and obtain (77). Using (77) and the form of $V^\epsilon(x)$ as described by (5)-(9), we get that the first two terms on the right hand side of (76) can be made arbitrarily small for $\epsilon$ sufficiently small. For $\epsilon$ small enough, the two integral terms on the right hand side of (76) are bounded by $C_0\, \xi^\epsilon\, \mathbb{E}^\epsilon_{x_0,y_0} \tau^\epsilon(\pm\kappa)$, where $\xi^\epsilon$ is defined in (9). The local time integral can be treated as in Lemma 2.2 of [5], so it will not be repeated here (see also the end of the proof of Lemma 2.3). In reference to the latter, we mention that the singularity at the point $x = 0$ complicates the situation a bit; however, assumption (9) allows one to follow the procedure mentioned and derive the aforementioned estimate for the local time integral. Hence, we have the upper bound (78) for $f^\epsilon(x_0, y_0)$. Moreover, it follows from (77) (see also [10]) that for $\eta > 0$ there exists a $\kappa_\eta > 0$ such that for every $0 < \kappa < \kappa_\eta$, for sufficiently small $\epsilon$ and for all $x$ with $|x| \leq \kappa$, (79) holds. Therefore, since $\xi^\epsilon \downarrow 0$ (by assumption (9)), (78) and (79) give us, for sufficiently small $\epsilon$, (80). The latter and (79) conclude the proof of the lemma.
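The explicit solvability of the one-dimensional exit-time problem can be illustrated through the scale/speed representation: writing $\bar{L}^\epsilon = \frac{d}{dm}\frac{d}{ds}$ with scale density $s' = 1/V$ and speed measure $m(dy) = 2V(y)\,dy$ (here $V = V^\epsilon/\epsilon$), the mean exit time is $\int G(x,y)\,m(dy)$, with the Green function $G$ built from $s$. The sketch below is an illustration of this classical formula, sanity-checked in the constant-profile case, where $\mathbb{E}_x \tau(-\kappa,\kappa) = \kappa^2 - x^2$; it is not taken from the paper.

```python
import numpy as np

def expected_exit_time(x, kappa, V, n=4000):
    """Mean exit time from (-kappa, kappa) for the diffusion with generator
    (1/2) f'' + (V'/(2V)) f', via the scale/speed (Feller) representation:
    scale density 1/V, speed measure m(dy) = 2 V(y) dy."""
    y = np.linspace(-kappa, kappa, n + 1)
    Vy = V(y)
    dy = y[1] - y[0]
    # scale function s with s(-kappa) = 0 (cumulative trapezoid of 1/V)
    s = np.concatenate([[0.0],
                        np.cumsum((1.0 / Vy[1:] + 1.0 / Vy[:-1]) * 0.5 * dy)])
    sx = float(np.interp(x, y, s))
    # Green function of the two-point boundary value problem
    G = np.where(y <= x, s * (s[-1] - sx), sx * (s[-1] - s)) / s[-1]
    return float(np.sum(G * 2.0 * Vy) * dy)   # quadrature of int G dm

# constant profile: the exact answer is kappa^2 - x^2
t_exit = expected_exit_time(0.3, 1.0, lambda z: np.ones_like(z))
```

For a profile that concentrates speed measure near $0$, the same formula produces the extra sojourn time near $0$ that Lemma 2.4 quantifies.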

Proof of Lemma 2.5
In order to prove Lemma 2.5 we will make use of a result regarding the invariant measures of the associated Markov chains (see Lemma 5.2) and a result regarding the strong Markov character of the limiting process (see Lemma 5.4 and the beginning of the proof of Lemma 2.5).
Of course, since the gluing conditions at $0$ are of local character, it is sufficient to consider not the whole domain $D^\epsilon$, but just the part of $D^\epsilon$ that lies in a neighborhood of $x = 0$. Thus, we consider the process in the subdomain $\Xi^\epsilon = \{(x,y) \in D^\epsilon : x \text{ in a neighborhood of } 0,\ y \in D^\epsilon_x\}$ that reflects normally on $\partial\Xi^\epsilon$.
Lemma 5.1. Let $0 < x_1 < x_2$, let $\psi$ be a function defined on $[x_1, x_2]$ and let $\phi$ be a function defined at $x_1$ and $x_2$. Then the corresponding representation holds. A similar result holds for $(x,y) \in [-x_2, -x_1] \times D^\epsilon_x$.

Proof. This lemma is similar to Lemma 8.4.6 of [6], so we briefly outline its proof. First one proves that $\mathbb{E}^\epsilon_{x,y} \tau(x_1, x_2)$ is bounded in $\epsilon$ for $\epsilon$ small enough and for all $(x,y) \in [x_1, x_2] \times D^\epsilon_x$. The latter and the proof of Lemma 2.3 show that in this case we can take $\lambda = 0$ in Lemma 2.3 and apply it to the function $g$ that solves the corresponding boundary value problem. This gives the desired result.
With this choice of $\Psi$ the left hand side of (84) becomes (86), where we recall that $v^\epsilon(x) = \int_0^x \frac{V^\epsilon(y)}{\epsilon}\,dy$ (see (4)). Moreover, the particular choice of $\Psi$ also implies that $\int_{\sigma_1}^{\tau_1} \Psi(X^\epsilon_t)\,dt = 0$. Then the right hand side of (84) becomes (87). Next, we express the right hand side of (87) through the $v$ and $u$ functions (defined by (6)) using Lemma 5.1. We start with the term $\mathbb{E}^\epsilon_{\kappa,y} \int_0^{\sigma_1} \Psi(X^\epsilon_t)\,dt$. Use Lemma 5.1 with $\phi = 0$ and $\psi(x) = \Psi(x)$. For sufficiently small $\epsilon$ we have (88), where the term $o(1) \downarrow 0$ as $\epsilon \downarrow 0$. The term corresponding to the starting point $-\kappa$ is treated similarly. Taking into account relations (84) and (86)-(89) and the particular choice of the function $\Psi$, we get, for sufficiently small $\epsilon$, (90). At each continuity point of $v(x)$ we have $v(x) = \lim_{\epsilon \downarrow 0} v^\epsilon(x)$, so the equality above is true for an arbitrary continuous function $\Psi$ if the stated identities hold. Thus, the proof of Lemma 5.2 is complete.
We show now how the second term on the right hand side of (96) can be made arbitrarily small. For $\epsilon$ sufficiently small and for $\kappa'_0$ much smaller than a small $\kappa$, it can be shown that the corresponding probability $\mathbb{P}^\epsilon_{\kappa_0,0}[\,\cdot\,]$ is small; the argument is almost identical to the previous one, with the obvious changes. We will not repeat the lengthy but straightforward calculations here. Hence, the second term on the right hand side of (96) can be made arbitrarily small.