Discounted optimal stopping for maxima in diffusion models with finite horizon

We present a solution to a discounted optimal stopping problem for the maximum of a geometric Brownian motion on a finite time interval. The method of proof is based on reducing the initial optimal stopping problem, whose continuation region is determined by an increasing continuous boundary surface, to a parabolic free-boundary problem. Using the change-of-variable formula with local time on surfaces, we show that the optimal boundary can be characterized as a unique solution of a nonlinear integral equation. The result can be interpreted as pricing an American fixed-strike lookback option in a diffusion model with finite time horizon.


Introduction
The main aim of this paper is to develop the method proposed by Peskir (18)-(19) and apply its extension to solving optimal stopping problems for maximum processes in diffusion models with finite time horizon. In order to demonstrate this extension in action, we consider the discounted optimal stopping problem (2.3) for the maximum associated with the geometric Brownian motion X defined in (2.1)-(2.2). This problem is related to option pricing theory in mathematical finance, where the process X describes the price of a risky asset (e.g., a stock) on a financial market. In that case the value (2.3) can be formally interpreted as a fair price of an American fixed-strike lookback option in the Black-Scholes model with finite horizon. In the infinite horizon case the problem (2.3) was solved by Pedersen (14) and Guo and Shepp (9).
Observe that when K = 0 and T = ∞ the problem (2.3) turns into the Russian option problem with infinite horizon, introduced and explicitly solved by Shepp and Shiryaev (23) by means of reducing the initial problem to an optimal stopping problem for a two-dimensional Markov process and solving the latter using the smooth-fit and normal-reflection conditions. It was further observed in (24) that the change-of-measure theorem allows one to reduce the Russian option problem to a one-dimensional optimal stopping problem, which explains the simplicity of the solution in (23). Building on the optimal stopping analysis of Shepp and Shiryaev (23)-(24), Duffie and Harrison (2) derived a rational economic value for the Russian option and then extended their arbitrage arguments to perpetual lookback options. More recently, Shepp, Shiryaev and Sulem (25) proposed a barrier version of the Russian option in which the decision about stopping must be taken before the price process reaches a 'dangerous' positive level. Peskir (19) presented a solution to the Russian option problem in the finite horizon case (see also (3) for a numerical algorithm for solving the corresponding free-boundary problem and (5) for a study of the asymptotic behavior of the optimal stopping boundary near expiration).
It is known that optimal stopping problems for Markov processes with finite horizon are inherently two-dimensional and thus analytically more difficult than those with infinite horizon. A standard approach to handling such a problem is to formulate a free-boundary problem for the (parabolic) operator associated with the (continuous) Markov process (see, e.g., (12), (8), (28), (10), (13), (20)). Since solutions to such free-boundary problems are rarely known explicitly, the question often reduces to proving the existence and uniqueness of a solution to the free-boundary problem, which then leads to the optimal stopping boundary and the value function of the optimal stopping problem. In some cases, the optimal stopping boundary has been characterized as a unique solution of a system of (at least) countably many nonlinear integral equations (see, e.g., (10, Theorem 4.3)). Peskir (18) rigorously proved that a single equation from such a system can be sufficient to characterize the optimal stopping boundary uniquely (see also (15), (19), (6)-(7), (21) for more complicated two-dimensional optimal stopping problems).
In contrast to the finite horizon Russian option problem (19), the problem (2.3) is necessarily three-dimensional, in the sense that it cannot be reduced to a two-dimensional optimal stopping problem. The main feature of the present paper is that we develop the method of proof proposed in (18)-(19) and apply its extension to derive a solution to a three-dimensional optimal stopping problem. The proposed extension of the method should correspondingly also work for other optimal stopping problems for maximum processes and in more general diffusion models with finite time horizon. The key argument in the proof will be the application of the change-of-variable formula with local time on surfaces, which was recently derived in (17).
The paper is organized as follows. In Section 2, for the initial problem (2.3) we construct an equivalent optimal stopping problem for a three-dimensional Markov process and show that the continuation region for the price process is determined by a continuous increasing boundary surface depending on the running maximum process. In order to find analytic expressions for the boundary, we formulate an equivalent parabolic free-boundary problem. In Section 3, we derive a nonlinear Volterra integral equation of the second kind, which also leads to the explicit formula for the value function in terms of the optimal stopping boundary. Using the change-of-variable formula from (17), we show that this equation is sufficient to determine the optimal boundary uniquely. The main result of the paper is stated in Theorem 3.1.

Preliminaries
In this section, we introduce the setting and notation of the three-dimensional optimal stopping problem, which is related to the American fixed-strike lookback option problem with finite time horizon, describe the structure of the continuation and stopping regions, and formulate the corresponding free-boundary problem. For this, we follow the scheme of arguments from (19) and (6)-(7).
2.1. For a precise formulation of the problem, let us consider a probability space (Ω, F, P) with a standard Brownian motion B = (B_t)_{0≤t≤T} started at zero. Suppose that there exists a process X = (X_t)_{0≤t≤T} given by:

X_t = x exp( (r − σ²/2) t + σ B_t )    (2.1)

and hence solving the stochastic differential equation:

dX_t = r X_t dt + σ X_t dB_t,  X_0 = x    (2.2)

where x > 0 is given and fixed. It can be assumed that the process X describes a stock price on a financial market, where r > 0 is the interest rate and σ > 0 is the volatility coefficient. The main purpose of the present paper is to derive a solution to the optimal stopping problem:

V = sup_{0≤τ≤T} E[ e^{−λτ} ( max_{0≤t≤τ} X_t − K )^+ ]    (2.3)

where the supremum is taken over all stopping times τ of the process X (whose natural filtration coincides with that of the Brownian motion B). The value (2.3) coincides with the arbitrage-free price of the fixed-strike lookback option of American type with the strike (exercise) price K > 0 and λ = r + ν ≥ r > 0 being the sum of the interest rate r > 0 and the discounting rate ν ≥ 0 (see, e.g., (11) or (27)).
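The dynamics (2.1)-(2.2) and the gain in (2.3) can be illustrated with a minimal Monte Carlo sketch. Stopping at the deterministic time τ = T (only one admissible stopping time) gives a lower bound for the supremum in (2.3); all numerical parameter values below are hypothetical and are not taken from the paper.

```python
import numpy as np

def simulate_discounted_payoff(x=1.0, K=1.0, r=0.05, sigma=0.2, nu=0.02,
                               T=1.0, n_steps=400, n_paths=10000, seed=0):
    """Monte Carlo estimate of E[e^{-lam*T} (max_{0<=t<=T} X_t - K)^+] for X as in (2.1)."""
    rng = np.random.default_rng(seed)
    lam = r + nu                              # discount rate lambda = r + nu >= r
    dt = T / n_steps
    # exact simulation of log X on the time grid (no Euler bias in the marginals)
    dB = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
    log_X = np.log(x) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * dB, axis=1)
    X = np.exp(log_X)
    # running maximum of the price path, including the starting point x
    M = np.maximum(x, np.maximum.accumulate(X, axis=1))
    payoff = np.exp(-lam * T) * np.maximum(M[:, -1] - K, 0.0)
    return payoff.mean()

lower_bound = simulate_discounted_payoff()
print(lower_bound)
```

The discrete-time maximum slightly underestimates the continuous-time maximum, so the estimate is conservative in two ways: through the grid and through the suboptimal choice τ = T.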

2.2.
In order to solve the problem (2.3), let us consider the extended optimal stopping problem for the Markov process (t, X_t, S_t)_{0≤t≤T} given by:

V(t, x, s) = sup_{0≤τ≤T−t} E_{t,x,s}[ e^{−λτ} G(S_{t+τ}) ]    (2.4)

where S = (S_t)_{0≤t≤T} is the maximum process associated with X, defined by:

S_{t+u} = s ∨ max_{0≤v≤u} X_{t+v}    (2.5)

and P_{t,x,s} is a probability measure under which the (two-dimensional) process (X_{t+u}, S_{t+u})_{0≤u≤T−t} defined in (2.1)-(2.2) and (2.5) starts at (x, s) ∈ E, the supremum in (2.4) is taken over all stopping times τ of (X_{t+u})_{0≤u≤T−t}, and we set G(s) = (s − K)^+ for s > 0. Here by E = {(x, s) ∈ R² | 0 < x ≤ s} we denote the state space of the Markov process (X_{t+u}, S_{t+u})_{0≤u≤T−t}. Since G is continuous on [K, ∞) and E_{t,x,s}[S_T] is finite, it is possible to apply a version of Theorem 3 in (26, page 127) for a finite time horizon and by statement (2) of that theorem conclude that an optimal stopping time exists in (2.4).
2.3. Let us first determine the structure of the optimal stopping time in the problem (2.4).
(i) Due to the specific form of the optimal stopping problem (2.4), by applying the same arguments as in (1, pages 237-238) and (16, Proposition 2.1) we may conclude that it is never optimal to stop when X_{t+u} = S_{t+u} for 0 ≤ u < T − t. (This fact will be reproved in Part (v) below.) It also follows directly from the structure of (2.4) that it is never optimal to stop when S_{t+u} < K, since the gain G(S_{t+u}) is then equal to zero. In other words, this shows that all points (t, x, s) from the set:

C_1 = {(t, x, s) ∈ [0, T) × E | s < K}    (2.6)

and from the diagonal {(t, x, s) ∈ [0, T) × E | x = s} belong to the continuation region:

C = {(t, x, s) ∈ [0, T) × E | V(t, x, s) > G(s)}.    (2.7)

(Below we will show that V is continuous, so that C is open.) (ii) Let us fix (t, x, s) ∈ C and let τ* = τ*(t, x, s) denote the optimal stopping time in (2.4). Then, taking some point (t, y, s) such that 0 < x < y ≤ s, by virtue of the structure of the optimal stopping problem (2.4) and (2.5) with (2.1), we get that (t, y, s) belongs to C as well. Moreover, we recall that in the case of infinite horizon the stopping time τ* = inf{t ≥ 0 | X_t ≤ g_*(S_t)} is optimal in the problem (2.3), where 0 < g_*(s) < s for s > K is uniquely determined from the equation (2.19) in (14), and thus we see that all points (t, x, s) for 0 ≤ t ≤ T and s > K with 0 < x ≤ g_*(s) belong to the stopping region. These arguments together with the comments in (1, Subsection 3.3) and (16, Subsection 3.3), as well as the fact that x → V(t, x, s) is convex on (0, s] for s > 0 (see Subsection 2.4 below), show that there exists a function g satisfying g_*(s) ≤ g(t, s) ≤ s for 0 ≤ t ≤ T with s > K such that the continuation region (2.7) is an open set consisting of (2.6) and of the set:

C_2 = {(t, x, s) ∈ [0, T) × E | s > K and g(t, s) < x ≤ s}    (2.9)

while the stopping region D is the closure of the set:

{(t, x, s) ∈ [0, T) × E | s > K and 0 < x < g(t, s)}    (2.10)

together with all points (T, x, s) for (x, s) ∈ E.
(iii) Since the problem (2.4) is time-homogeneous, in the sense that the function G does not depend on time, it follows that V(t, x, s) ≥ V(t′, x, s) for 0 ≤ t ≤ t′ ≤ T, i.e. t → V(t, x, s) is decreasing on [0, T]. From this we may conclude in (2.9)-(2.10) that t → g(t, s) is increasing on [0, T] for each s > K fixed.
(v) Let us denote by W(t, x, s) the value function of the optimal stopping problem related to the corresponding Russian option problem with finite horizon, where the optimal stopping time has the structure θ* = inf{0 ≤ u ≤ T − t | X_{t+u} ≤ S_{t+u}/b(t + u)} and the boundary b(t) ≥ 1 is characterized as a unique solution of the nonlinear integral equation (3.4) in (19). It is easily seen that under K = 0 the functions W(t, x, s) and s/b(t) coincide with V(t, x, s) and g(t, s) from (2.4) and (2.9)-(2.10), respectively. Suppose now that s/b(t) < g(t, s) at some point (t, s). Then, for any x ∈ (s/b(t), g(t, s)) given and fixed, we arrive at a contradiction, so that g(t, s) ≤ s/b(t) holds. (vi) Let us finally observe that the value function V from (2.4) and the boundary g from (2.9)-(2.10) also depend on T, and let us denote them here by V_T and g_T, respectively. Using the fact that T → V_T(t, x, s) is increasing, and letting T go to ∞ in the last expression, we get that g_*(s) ≤ g_T(t, s) ≤ s, where g_*(s) ≡ lim_{T→∞} g_T(t, s) for all t ≥ 0, and 0 < g_*(s) < s for s > K is uniquely determined from the equation (2.19) in (14).

2.4. Let us now show that the value function V from (2.4) is continuous on [0, T] × E.
For this it is enough to prove that the estimates (2.11)-(2.13) hold for each (t_0, x_0, s_0) ∈ [0, T] × E with some ε > 0 and δ > 0 small enough (they may depend on (x_0, s_0)). Since (2.11) follows from the fact that s → V(t, x, s) is convex on (0, ∞), we only need to establish (2.12) and (2.13).
Observe that from the structure of (2.4) and (2.5) with (2.1) it immediately follows that:

x → V(t, x, s) is increasing and convex on (0, s]    (2.14)

for each s > 0 and 0 ≤ t ≤ T fixed. Using the fact that sup(f) − sup(g) ≤ sup(f − g), we obtain the estimate (2.15) for 0 < x < y ≤ s and all 0 ≤ t ≤ T. Combining (2.15) with (2.14), we see that (2.12) follows.
It remains to establish (2.13). For this, let us fix arbitrary 0 ≤ t_1 < t_2 ≤ T and (x, s) ∈ E, and let τ_1 = τ*(t_1, x, s) denote the optimal stopping time for V(t_1, x, s). Observe further that the explicit expression (2.5) yields (2.17), and thus the strong Markov property together with the fact that t → V(t, x, s) is decreasing yields (2.18). Hence, from (2.16)-(2.18) we get the estimate (2.19), where the function L is defined by (2.20). Therefore, by virtue of the fact that L(t_2 − t_1) → 0 in (2.20) as t_2 − t_1 ↓ 0, we easily conclude that (2.13) holds. In particular, this shows that the instantaneous-stopping condition (2.39) is satisfied.
2.5. In order to prove that the smooth-fit condition (2.40) holds, or equivalently that x → V(t, x, s) is C¹ at g(t, s), let us fix a point (t, x, s) ∈ [0, T) × E with s > K lying on the boundary g, so that x = g(t, s). By virtue of the convexity of x → V(t, x, s) on (0, s] for s > 0 fixed, the right-hand derivative V_x^+(t, x, s) exists, and since it is also increasing, we have:

V_x^+(t, x, s) ≥ 0.    (2.21)

In order to prove the converse inequality, let us fix some ε > 0 such that x < x + ε < s and consider the stopping time τ_ε = τ*(t, x + ε, s) being optimal for V(t, x + ε, s). Note that τ_ε is the first exit time of the process (X^{x+ε}_{t+u})_{0≤u≤T−t} from the set C_2 in (2.9). Then (2.4) implies the estimate (2.22), where we write X^x_t and X^{x+ε}_t instead of X_t in order to indicate the dependence of the process X on the starting points x and x + ε, respectively. Since the boundary g is increasing in both variables, it follows that τ_ε → 0 (P-a.s.), so that max_{0≤u≤τ_ε} X_{t+u}/x → 1 (P-a.s.) as ε ↓ 0 for x < x + ε < s. Thus, letting ε ↓ 0 in (2.22), we get:

V_x^+(t, x, s) ≤ 0    (2.23)

by the bounded convergence theorem. This combined with (2.21) above proves that V_x^+(t, x, s) equals zero.

2.7.
We proceed by proving that the boundary g is continuous on [0, T] × (K, ∞) and that g(T, s) = s for all s > K. For this, we fix some (t, s) ∈ [0, T] × (K, ∞) and observe that for each sequence (t_n, s_n) converging to (t, s) we have ḡ(t, s) ≤ g(t, s), where ḡ(t, s) ≡ lim sup_n g(t_n, s_n). The latter inequality follows directly from the structure of the set D in (2.10) and the fact that (t_n, g(t_n, s_n), s_n) ∈ D for all n ∈ N, so that (t, ḡ(t, s), s) ∈ D, since D is closed.
Suppose that at some point (t*, s*) ∈ (0, T) × (K, ∞) the function g is not continuous, so that there is a sequence (t_n, s_n) converging to (t*, s*) such that ĝ(t*, s*) < g(t*, s*), where ĝ(t*, s*) ≡ lim_n g(t_n, s_n). Let us then fix a point (t_n, s_n) close to (t*, s*) and consider the half-open region R_n ⊂ C_2 being a curved trapezoid formed by the vertices (t_n, g(t_n, s_n), s_n), (t*, ĝ(t*, s*), s*), (t*, x′, s*) and (t_n, x′, s_n), with x′ fixed arbitrarily in the interval (ĝ(t*, s*), g(t*, s*)). Observe that the strong Markov property implies that the value function V from (2.4) is C^{1,2,1} on C_2, so that, by straightforward calculations, using (2.39) and (2.40) and taking into account the fact that G_xx = 0, we obtain the estimate (2.27) for all (t, x, s) ∈ R_n and each n ∈ N fixed. Since t → V(t, x, s) is decreasing, we have V_t(t, x, s) ≤ 0 for each (t, x, s) ∈ C_2. Finally, since the strong Markov property implies that the value function V from (2.4) solves the equation (2.38), using (2.28) and (2.36) as well as the fact that λ ≥ r, we obtain the estimate (2.29) for all (t, x, s) ∈ R_n and each n ∈ N fixed. Hence, by (2.27) we get a strictly positive lower bound as n → ∞. This implies that V(t*, x′, s*) > G(s*), which contradicts the fact that (t*, x′, s*) belongs to the stopping region D. Thus ĝ(t*, s*) = g(t*, s*), showing that g is continuous at (t*, s*) and hence on [0, T] × (K, ∞) as well. We also note that the same arguments with t* = T show that g(T−, s) = s for all s > K.
2.8. Summarizing the facts proved in Subsections 2.3-2.7 above, we may conclude that the following exit time is optimal in the extended problem (2.4):

τ* = inf{ 0 ≤ u ≤ T − t | X_{t+u} ≤ g(t + u, S_{t+u}) }.    (2.31)

Here g_*, satisfying 0 < g_*(s) < s for all s > K, is the optimal stopping boundary for the corresponding infinite horizon problem, uniquely determined from the first-order nonlinear differential equation (2.19) in (14), and b is the optimal stopping boundary for the finite horizon Russian option problem, uniquely characterized as a solution of the nonlinear integral equation (3.4) in (19). We also note that (2.34) follows from the right continuity of the boundary g at s = K, which can be proved by the arguments in Subsection 2.7 above together with the fact that the set C_1 defined in (2.6) belongs to the continuation region C given by (2.7).
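An exit time of the form (2.31) is straightforward to simulate once a boundary surface is given. The sketch below uses a hypothetical placeholder surface g(t, s) = max(K, s(0.8 + 0.2 t/T)), which is merely increasing in t with g(T, s) = s; it is NOT the solution of the integral equation derived in Section 3, and all parameter values are illustrative.

```python
import numpy as np

def stopped_value(x=1.0, s=1.0, K=0.9, r=0.05, sigma=0.2, nu=0.02,
                  T=1.0, n_steps=400, n_paths=10000, seed=1):
    """Mean discounted gain e^{-lam*tau}(S_tau - K)^+ at the first time X <= g(t, S)."""
    rng = np.random.default_rng(seed)
    lam = r + nu
    dt = T / n_steps
    dB = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
    X = x * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * dB, axis=1))
    S = np.maximum(s, np.maximum.accumulate(X, axis=1))   # running maximum process
    t_grid = dt * np.arange(1, n_steps + 1)
    # hypothetical boundary surface, increasing in t, with g(T, s) = s
    g = np.maximum(K, S * (0.8 + 0.2 * t_grid / T))
    below = X <= g
    below[:, -1] = True                                   # stop at T at the latest
    idx = below.argmax(axis=1)                            # first crossing index per path
    tau = t_grid[idx]
    rows = np.arange(n_paths)
    payoff = np.exp(-lam * tau) * np.maximum(S[rows, idx] - K, 0.0)
    return payoff.mean()

val = stopped_value()
print(val)
```

Any such admissible exit time yields a lower bound for the value (2.4); the optimal boundary of Theorem 3.1 maximizes this quantity over boundary surfaces.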
We now recall that (t, X_t, S_t)_{0≤t≤T} is a three-dimensional Markov process with the state space [0, T] × E, which can change (increase) in the third coordinate only after hitting the diagonal {(t, x, s) ∈ [0, T] × E | x = s}. Outside the diagonal the process (t, X_t, S_t)_{0≤t≤T} changes only in the first and second coordinates and may be identified with (t, X_t)_{0≤t≤T}.
Standard arguments then imply that the infinitesimal operator of (t, X_t, S_t)_{0≤t≤T} acts on a function F ∈ C^{1,2,1}([0, T] × E) according to the rule:

(L F)(t, x, s) = F_t(t, x, s) + r x F_x(t, x, s) + (σ² x²/2) F_xx(t, x, s)    (2.36)

for 0 < x < s, together with the normal-reflection condition:

F_s(t, x, s)|_{x=s−} = 0    (2.37)

for all (t, x, s) ∈ [0, T] × E (the latter can be shown by the same arguments as in (1, pages 238-239) or (16, pages 1619-1620)). In view of the facts proved above, we are thus naturally led to formulate the following free-boundary problem for the unknown value function V from (2.4) and the unknown boundary g from (2.9)-(2.10):

(L V)(t, x, s) = λ V(t, x, s)  for (t, x, s) ∈ C_2    (2.38)
V(t, x, s)|_{x=g(t,s)+} = s − K  (instantaneous stopping)    (2.39)
V_x(t, x, s)|_{x=g(t,s)+} = 0  (smooth fit)    (2.40)
V_s(t, x, s)|_{x=s−} = 0  (normal reflection)    (2.41)
V(t, x, s) > G(s)  for (t, x, s) ∈ C_2    (2.42)
V(t, x, s) = G(s)  for (t, x, s) ∈ D    (2.43)

where C_2 and D are given by (2.9) and (2.10), respectively. Note that the superharmonic characterization of the value function (see (4) and (26)) may be viewed as a complementary description of the free-boundary problem formulated above.

Figure 1. A computer drawing of the optimal stopping boundaries s → g_*(s), s → g(t_1, s) and s → g(t_2, s) for 0 < t_1 < t_2 < T.

Figure 2. A computer drawing of the optimal stopping boundaries t → g(t, s_1) and t → g(t, s_2) for K < s_1 < s_2.
2.9. Observe that the arguments above show that if we start at a point (t, x, s) ∈ C_1, then we can stop optimally only after the process (X_t, S_t)_{0≤t≤T} passes through the point (K, K). Thus, using the strong Markov property, we obtain the representation (2.44) for all (t, x, s) ∈ C_1, where the quantity in (2.45) is defined accordingly and V(t, K, K) = lim_{s↓K} V(t, K, s).
By means of standard arguments (see, e.g., (22, Chapter II, Proposition 3.7)) it can be shown that (2.44) admits the representation (2.46) for all (t, x, s) ∈ C_1. Therefore, it remains to find the function V in the region C_2 and to determine the optimal stopping boundary g.

Main result and proof
In this section, using the facts proved above, we formulate and prove the main result of the paper. For the proof we apply the change-of-variable formula from (17).
Theorem 3.1. In the problem (2.3) the optimal stopping time τ* is explicitly given by:

τ* = inf{ 0 ≤ t ≤ T | X_t ≤ g(t, max_{0≤u≤t} X_u) }    (3.1)

where the process X is defined in (2.1)-(2.2) with X_0 = x ≥ K, and the boundary g can be characterized as a unique solution of the nonlinear integral equation:

s − K = E_{t,g(t,s),s}[ e^{−λ(T−t)} (S_T − K) ] + λ E_{t,g(t,s),s}[ ∫_0^{T−t} e^{−λv} (S_{t+v} − K) I( X_{t+v} < g(t + v, S_{t+v}) ) dv ].    (3.2)

More explicitly, the two terms in the equation (3.2) read as follows:

E_{t,g(t,s),s}[ e^{−λ(T−t)} (S_T − K) ] = e^{−λ(T−t)} ∫∫ ( g(t, s) z ∨ s − K ) p(T − t, y, z) dy dz    (3.3)

λ E_{t,g(t,s),s}[ ∫_0^{T−t} e^{−λv} (S_{t+v} − K) I( X_{t+v} < g(t + v, S_{t+v}) ) dv ]
= λ ∫_0^{T−t} e^{−λv} ∫∫ ( g(t, s) z ∨ s − K ) I( g(t, s) y < g(t + v, g(t, s) z ∨ s) ) p(v, y, z) dy dz dv    (3.4)

for s > K and 0 ≤ v ≤ T − t with 0 ≤ t ≤ T. The transition density function p(v, y, z) of the process (X_t, S_t)_{t≥0} with X_0 = S_0 = 1 under P is given, for 0 < x ≤ s and s ≥ 1 with β = r/σ + σ/2, by the corresponding reflection-principle formula, and equals zero otherwise.
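The transition density p of the pair (X_t, S_t) rests on the classical reflection-principle density of a Brownian motion with drift and its running maximum (the pair (ln X_t, ln S_t)/σ is exactly such a pair; note that the drift of (ln X_t)/σ under P is r/σ − σ/2, and the theorem's β may be normalized differently). The sketch below checks that building block numerically with a hypothetical drift value, comparing the closed-form density against a Monte Carlo simulation.

```python
import numpy as np

beta, v = 0.35, 1.0                  # hypothetical drift and time horizon
y0, z0 = 0.5, 1.0                    # evaluation point for P(W_v <= y0, M_v <= z0)

def density(y, z):
    """Joint density of (W_v, max_{u<=v} W_u) for W_u = beta*u + B_u (reflection + Girsanov)."""
    return (np.sqrt(2.0 / (np.pi * v**3)) * (2 * z - y)
            * np.exp(-(2 * z - y)**2 / (2 * v) + beta * y - beta**2 * v / 2))

# closed-form probability by midpoint-rule integration over {y <= y0, max(0, y) <= z <= z0}
ny, nz = 600, 300
dy = (y0 + 6.0) / ny
prob_exact = 0.0
for i in range(ny):
    y = -6.0 + (i + 0.5) * dy
    z_lo = max(0.0, y)
    dz = (z0 - z_lo) / nz
    zs = z_lo + (np.arange(nz) + 0.5) * dz
    prob_exact += density(y, zs).sum() * dz * dy

# Monte Carlo check on a fine grid (the discrete maximum is slightly biased low)
rng = np.random.default_rng(2)
n_paths, n_steps = 20000, 1000
dt = v / n_steps
W = np.zeros(n_paths)
M = np.zeros(n_paths)
for _ in range(n_steps):
    W += beta * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    np.maximum(M, W, out=M)
prob_mc = np.mean((W <= y0) & (M <= z0))
print(prob_exact, prob_mc)
```

The two estimates agree up to Monte Carlo noise and the discretization bias of the simulated maximum.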
Proof. (i) The existence of the boundary g satisfying (2.32)-(2.35) such that τ* from (3.1) is optimal in (2.4) was proved in Subsections 2.3-2.8 above. Since the boundary g is continuous and monotone, by the change-of-variable formula from (17) it follows that the boundary g solves the equation (3.2). Let us thus assume that a function h satisfying (2.32)-(2.35) solves the equation (3.2), and let us show that this function h must then coincide with the optimal boundary g. For this, let us introduce the function:

V_h(t, x, s) = U_h(t, x, s) if x > h(t, s), and V_h(t, x, s) = G(s) if x ≤ h(t, s)    (3.6)

where the function U_h is defined by:

U_h(t, x, s) = E_{t,x,s}[ e^{−λ(T−t)} (S_T − K) ] + λ E_{t,x,s}[ ∫_0^{T−t} e^{−λv} (S_{t+v} − K) I( X_{t+v} < h(t + v, S_{t+v}) ) dv ]    (3.7)

for all (t, x, s) ∈ [0, T] × E with s > K. Note that (3.7) with s − K instead of U_h(t, x, s) on the left-hand side coincides with (3.2) when x = g(t, s) and h = g. Since h solves (3.2), this shows that V_h is continuous on ([0, T] × E) \ C_1, where C_1 is given by (2.6). We need to verify that V_h coincides with the value function V from (2.4) and that h equals g. For this, we show that the change-of-variable formula from (17) can be applied to V_h and h, and then present the rest of the verification arguments following the lines of (18)-(19) and (6)-(7) for completeness.
(ii) Using standard arguments based on the strong Markov property (or verifying directly), it follows that V_h, that is U_h, is C^{1,2,1} on C_h and that:

(L U_h)(t, x, s) = λ U_h(t, x, s)  for (t, x, s) ∈ C_h    (3.8)

where C_h is defined as C_2 in (2.9) with h instead of g. It is also clear that V_h, that is G, is C^{1,2,1} on D_h and that:

(L G)(s) − λ G(s) = −λ (s − K)  for (t, x, s) ∈ D_h    (3.9)

where D_h is defined as in (2.10) with h instead of g. Then from (3.8) and (3.9) it follows that LV_h is locally bounded on C_h and D_h. Moreover, since (U_h)_x is continuous on ([0, T] × E) \ C_1 (which is readily verified using the explicit expressions (3.3)-(3.4) above with x instead of g(t, s) and h instead of g) and so is h(t, s) by assumption, we see that (V_h)_x is continuous on the closure of C_h. Then from the arguments above it follows that, for each s > K given and fixed, on the union of the sets C_h and D_h the function (t, x) → V_h(t, x, s) can be represented as a sum of two functions, where the first one is nonnegative and the second one is continuous on the closures of these sets. Therefore, by the obvious continuity of t → (V_h)_x(t, h(t, s)−, s) on [0, T], the change-of-variable formula from Section 4 in (17) can be applied, and in this way we obtain the identity (3.12), in which (ℓ^h_u)_{0≤u≤T−t} is the local time of (X_{t+u})_{0≤u≤T−t} at the (surface) boundary h (which is increasing in both variables) given by (3.13), and (M^h_u)_{0≤u≤T−t} defined by:

M^h_u = ∫_0^u e^{−λv} (V_h)_x(t + v, X_{t+v}, S_{t+v}) I( X_{t+v} ≠ h(t + v, S_{t+v}) ) σ X_{t+v} dB_{t+v}

is a continuous martingale under P_{t,x,s}. We also note that in (3.12) the integral with respect to dS_{t+v} is equal to zero, since the increment ΔS_{t+v} outside the diagonal {(t, x, s) ∈ [0, T] × E | x = s} equals zero, while at the diagonal we have (2.41).
Setting u = T − t in (3.12), taking the P_{t,x,s}-expectation, and using that V_h satisfies (2.38) in C_h and (2.43) in D_h, where the set D_h is defined as in (2.10) with h in place of g, we get the identity (3.14), where (by the continuity of the integrand) the function F is given by (3.15) for all (t, x, s) ∈ [0, T] × E with s > K. Thus, from (3.14) and (3.6) we see that the identity (3.16) holds, where the function U_h is given by (3.7).
(iii) From (3.16) we see that if we are to prove that the identity (3.17) holds for each (t, s) ∈ [0, T] × (K, ∞) given and fixed, then it will follow that (3.18) holds. On the other hand, if we know that (3.18) holds, then using the general fact (3.19), obtained directly from the definition (3.6) above for all (t, s) ∈ [0, T] × (K, ∞), we see that (3.17) holds too. The equivalence of (3.17) and (3.18) suggests that, instead of dealing with the equation (3.16) in order to derive (3.17), we may rather concentrate on establishing (3.18) directly.
In order to derive (3.18), let us first note that, using standard arguments based on the strong Markov property (or verifying directly), it follows that U_h is C^{1,2,1} in D_h and that:

(L U_h)(t, x, s) − λ U_h(t, x, s) = −λ (s − K)  for (t, x, s) ∈ D_h.    (3.20)

It follows that (3.12) can be applied with U_h instead of V_h, and this yields:

e^{−λu} U_h(t + u, X_{t+u}, S_{t+u}) = U_h(t, x, s) − λ ∫_0^u e^{−λv} (S_{t+v} − K) I( X_{t+v} < h(t + v, S_{t+v}) ) dv + N^h_u    (3.21)

using (3.8) and (3.20), as well as the fact that Δ_x (U_h)_x(t + v, h(t + v, s), s) = 0 for all 0 ≤ v ≤ u, since (U_h)_x is continuous. In (3.21) we have:

N^h_u = ∫_0^u e^{−λv} (U_h)_x(t + v, X_{t+v}, S_{t+v}) I( X_{t+v} ≠ h(t + v, S_{t+v}) ) σ X_{t+v} dB_{t+v}

and (N^h_u)_{0≤u≤T−t} is a continuous martingale under P_{t,x,s}. Next, note that (3.12) applied to G instead of V_h yields:

e^{−λu} G(S_{t+u}) = G(s) − λ ∫_0^u e^{−λv} (S_{t+v} − K) dv + ∫_0^u e^{−λv} dS_{t+v}    (3.22)

for s > K, using (3.9) and the fact that the process S may increase only at the diagonal {(t, x, s) ∈ [0, T] × E | x = s}. For 0 < x ≤ h(t, s) with 0 < t < T and s > K, let us consider the stopping time:

σ_h = inf{ 0 ≤ u ≤ T − t | X_{t+u} ≥ h(t + u, S_{t+u}) }.    (3.23)

Then, using that U_h(t, h(t, s), s) = G(s) for all (t, s) ∈ [0, T] × (K, ∞), since h solves (3.2), and that U_h(T, x, s) = G(s) for all (x, s) ∈ E with s > K, we see that U_h(t + σ_h, X_{t+σ_h}, S_{t+σ_h}) = G(S_{t+σ_h}). Hence, from (3.21) and (3.22), using the optional sampling theorem, we find that U_h(t, x, s) = G(s), since X_{t+v} < h(t + v, S_{t+v}) < S_{t+v} for all 0 ≤ v < σ_h. This establishes (3.18), so that (3.17) also holds.
(iv) Let us consider the stopping time:

τ_h = inf{ 0 ≤ u ≤ T − t | X_{t+u} ≤ h(t + u, S_{t+u}) }.    (3.25)

Observe that, by virtue of (3.17), the identity (3.12) can be written as:

e^{−λu} V_h(t + u, X_{t+u}, S_{t+u}) = V_h(t, x, s) − λ ∫_0^u e^{−λv} (S_{t+v} − K) I( X_{t+v} < h(t + v, S_{t+v}) ) dv + M^h_u    (3.26)

with (M^h_u)_{0≤u≤T−t} being a martingale under P_{t,x,s}. Thus, inserting τ_h into (3.26) in place of u and taking the P_{t,x,s}-expectation, by means of the optional sampling theorem we get:

V_h(t, x, s) = E_{t,x,s}[ e^{−λτ_h} G(S_{t+τ_h}) ]    (3.27)

for all (t, x, s) ∈ [0, T] × E with s > K. Then, comparing (3.27) with (2.4), we see that:

V_h(t, x, s) ≤ V(t, x, s)    (3.28)

for all (t, x, s) ∈ [0, T] × E with s > K.
(v) Hence, by means of (3.28), we see that:

E_{t,x,s}[ ∫_0^{σ_g} e^{−λv} (S_{t+v} − K) I( X_{t+v} < h(t + v, S_{t+v}) ) dv ] = E_{t,x,s}[ ∫_0^{σ_g} e^{−λv} (S_{t+v} − K) dv ]

where σ_g is defined as in (3.23) with g in place of h, so that, the integrand being nonnegative, X_{t+v} < h(t + v, S_{t+v}) for all 0 ≤ v < σ_g. (vi) Finally, we show that h coincides with g. For this, let us assume that there exists some (t, s) ∈ (0, T) × (K, ∞) such that h(t, s) > g(t, s), and take an arbitrary x from (g(t, s), h(t, s)). Then, inserting τ* = τ*(t, x, s) from (2.31) into (3.26) and (3.29) in place of u and taking the P_{t,x,s}-expectation, by means of the optional sampling theorem and (3.28) we get:

E_{t,x,s}[ ∫_0^{τ*} e^{−λv} (S_{t+v} − K) I( X_{t+v} < h(t + v, S_{t+v}) ) dv ] ≤ 0    (3.36)

which is clearly impossible by the continuity of h and g. We may therefore conclude that V_h defined in (3.6) coincides with V from (2.4) and h is equal to g. This completes the proof of the theorem.
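In practice, a boundary equation such as (3.2) is solved numerically: one discretizes time, takes g(T, s) = s as the terminal value, and steps through the grid, at each step solving a scalar nonlinear equation in which the integral is approximated on the grid. As a self-contained illustration of the time-stepping idea only, the sketch below solves a toy Volterra equation of the second kind, f(t) = 1 + ∫_0^t f(u) du, with exact solution f(t) = e^t; it does not use the actual kernel (3.3)-(3.4).

```python
import numpy as np

# implicit trapezoidal time-stepping for f(t) = 1 + int_0^t f(u) du, exact solution e^t
n = 2000
T = 1.0
dt = T / n
t = dt * np.arange(n + 1)
f = np.empty(n + 1)
f[0] = 1.0                                            # the equation at t = 0 gives f(0) = 1
for k in range(1, n + 1):
    # trapezoidal rule for int_0^{t_k} f(u) du, split off the unknown f[k]/2 term
    integral_known = dt * (0.5 * f[0] + f[1:k].sum())
    # f_k = 1 + integral_known + (dt/2) f_k  =>  solve the implicit linear step
    f[k] = (1.0 + integral_known) / (1.0 - 0.5 * dt)
err = np.max(np.abs(f - np.exp(t)))
print(err)
```

For the actual equation (3.2) the scalar solve at each step is nonlinear (e.g., by bisection in g(t, s)) and must be repeated for each value of s on a grid, but the structure of the recursion is the same.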