Stopping spikes, continuation bays and other features of optimal stopping with finite-time horizon

We consider optimal stopping problems with finite-time horizon and state-dependent discounting. The underlying process is a one-dimensional linear diffusion and the gain function is time-homogeneous and the difference of two convex functions. Under mild technical assumptions of a local nature we prove fine regularity properties of the optimal stopping boundary, including its continuity and strict monotonicity. The latter had not previously been proven with probabilistic arguments. We also show that atoms in the signed measure associated with the second-order spatial derivative of the gain function induce geometric properties of the continuation/stopping set that cannot be observed with smoother gain functions (we call them \emph{continuation bays} and \emph{stopping spikes}). The value function is continuously differentiable in time without any requirement on the smoothness of the gain function.


Introduction
In this paper we analyse in depth some fine properties of optimal stopping problems with finite-time horizon and state-dependent discounting, when the underlying process is a time-homogeneous one-dimensional diffusion and the stopping payoff is also time-homogeneous. Under very mild (local) regularity conditions on the stopping payoff and the diffusion process we provide results concerning the smoothness of the value function (in time) and the geometry of the optimal stopping boundary.
Denoting by g the stopping payoff (or gain function) and by X the underlying process, we show that when g is just the difference of two convex functions the value function of the problem is continuously differentiable in time. Moreover, the geometry of the stopping set depends in a peculiar way on the interplay between the second-order weak derivative g''(dx) (interpreted as a signed measure) and the local time of X, via the so-called Lagrange formulation of the stopping problem, obtained as an application of the Itô-Tanaka-Meyer formula. Among other things we are able to identify sufficient conditions for the formation of continuation bays and stopping spikes, neither of which would occur in the case of a smoother gain function. Both phenomena appear as a result of the presence of atoms in the measure g''(dx): continuation bays are associated with positive atoms and stopping spikes are associated with negative ones. It is important to recognise that these features are far from being artificial and indeed occur in very natural optimal stopping problems, as illustrated in Examples 4.1 and 4.2 of Section 4, including the celebrated American put/call option problem.
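The mechanism behind stopping spikes can be illustrated with a crude numerical sketch (ours, not from the paper): take X a standard Brownian motion, r ≡ 0 and g(x) = x² − |x|, so that g''(dx) = 2dx − 2δ₀(dx) carries a negative atom at 0. A simple explicit finite-difference scheme for the obstacle problem then shows the single grid point x = 0 entering the stopping set while its neighbours still continue; all parameter and grid choices below are ours.

```python
import numpy as np

# Illustrative sketch (ours, not from the paper): explicit finite differences
# for v(t,x) = sup_{tau <= T-t} E[g(X_tau)], X a standard Brownian motion,
# r = 0, gain g(x) = x^2 - |x|.  Then g''(dx) = 2dx - 2*delta_0(dx), i.e. the
# associated measure has a negative atom at x = 0 and a "stopping spike" forms:
# the single point x = 0 is stopped while its neighbours still continue.
xmax, h = 4.0, 0.02                  # spatial grid (choices are ours)
dt = 0.5 * h**2                      # explicit scheme is stable since dt <= h^2
x = np.arange(-xmax, xmax + h / 2, h)
g = x**2 - np.abs(x)
v = g.copy()                         # terminal condition v(T, .) = g
for _ in range(int(round(0.05 / dt))):    # step back from t = T to t = T - 0.05
    lap = np.zeros_like(v)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / h**2
    v = np.maximum(g, v + 0.5 * dt * lap)  # obstacle step: max(payoff, heat step)
i0 = len(x) // 2                     # grid index of the atom at x = 0
# at t = T - 0.05: x = 0 is already stopped (v = g) while x = 0.3 continues
```

The same scheme with a positive atom (a convex kink) in an otherwise concave payoff would instead show a continuation bay around the atom.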
One key result of the paper concerns the strict monotonicity of time-dependent optimal stopping boundaries (Corollary 5.5). In the literature we can find a wealth of numerical illustrations of optimal boundaries t → b(t) that exhibit a smooth profile and strict (piecewise) monotonic behaviour (see, e.g., numerous examples in [38]). While the probabilistic study of continuity of the map t → b(t) has a relatively long history (classical tricks are presented in [38] and more recent results can be found in [9] and [37]), we are not aware of any rigorous probabilistic proof of the strict monotonicity. This question is addressed in Section 5 of the paper, where we give simple sufficient conditions for the strict monotonicity of the optimal boundary and provide a proof based on probabilistic methods and reflecting diffusions. The result complements analogous classical results from the PDE literature, which normally require more stringent conditions on the problem data (traditional references are [3] and [20], among many others). As an application we show that the optimal exercise boundary of the American put in the classical Black and Scholes model is indeed strictly increasing as a function of time (Example 5.1). To the best of our knowledge this is the only existing probabilistic proof that does not require any assumption on the smoothness of the boundary itself (PDE methods were used for example in [4] and [16] and we are also aware of a probabilistic proof in [44], which however requires C^1-regularity of the optimal boundary).
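As a quick numerical companion to the American put discussion (an illustration only, not the probabilistic proof given in the paper), one can approximate the exercise boundary on a CRR binomial lattice and observe its monotonic increase towards the strike; the parameter values below are our own choices.

```python
import numpy as np

def put_boundary(K=1.0, r=0.05, sigma=0.3, T=1.0, N=400):
    """CRR binomial sketch of the American put exercise boundary b(t):
    at each time step, the largest lattice price where exercise is optimal."""
    dt = T / N
    u = np.exp(sigma * np.sqrt(dt))
    q = (np.exp(r * dt) - 1.0 / u) / (u - 1.0 / u)  # risk-neutral up-probability
    disc = np.exp(-r * dt)
    V = np.maximum(K - K * u ** (2.0 * np.arange(N + 1) - N), 0.0)
    b = np.full(N, np.nan)
    for i in range(N - 1, -1, -1):                  # backward induction
        S = K * u ** (2.0 * np.arange(i + 1) - i)
        cont = disc * (q * V[1:i + 2] + (1 - q) * V[:i + 1])
        pay = np.maximum(K - S, 0.0)
        V = np.maximum(cont, pay)
        stop = pay >= cont + 1e-12                  # nodes where stopping is optimal
        if stop.any():
            b[i] = S[stop].max()
    return b

b = put_boundary()   # b increases in time and stays below the strike K = 1
```

On the lattice the exercise region can only grow as time to maturity shrinks, so the node-wise boundary is nondecreasing; the strictness proved in the paper is of course not visible at grid resolution.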
An important feature of our work, which sets it apart from the majority of papers in the field, is that we conduct a local study of the problem. That is, we provide our results under assumptions concerning the local behaviour of the underlying diffusion and of the gain function, rather than their global behaviour. This allows wide applicability of our methods to specific problems, and extensions beyond our set-up are possible on a case-by-case basis. As part of our methodology here we study the boundary of the stopping set as a function of the spatial variable, i.e., x → c(x), rather than as a function of time t → b(t). This choice is natural, due to the time-homogeneity of both the gain function and the diffusion process, but it is not very common in the literature. Such a parametrisation turns out to be very fruitful, as we are able to perform a detailed study which includes continuity and strict (piecewise) monotonicity of the map x → c(x), without requiring any convexity or monotonicity of the gain function (nor any structural assumption on the state-dependent discount rate). Then x → c(x) can be inverted locally to obtain a local representation of the optimal boundary as a function of time, t → b(t), which is continuous and strictly monotonic.
There is a broad class of problems that falls directly within the framework of our paper. Along with the already mentioned American put problem (see, e.g., [24]), we find numerous other applications from American option pricing (e.g., chooser options [5] and strangle options [14]), options embedded in insurance policies (see, e.g., [6] and references therein) and technical analysis (see [12]). An early contribution to optimal stopping theory fitting in our set-up is [29], which proposes a constructive procedure to identify the optimal boundary, based on PDE methods, under the requirement that the gain function be three times continuously differentiable. Stopping problems related to Röst's solution of the Skorokhod embedding problem are also covered by the present paper: [34] addresses the question from a PDE point of view, [10] from a probabilistic one and [11] obtains the optimal boundaries numerically ([8, Rem. 3.5] and [7, Rem. 17] also contain valuable insight on the connection between the Röst embedding and optimal stopping). Finally, our set-up covers special cases of several theoretical papers on optimal stopping and free boundary problems. Just to mention some early contributions from both the PDE and the probabilistic strand of the literature, we refer to [43] and [32], which consider a time-homogeneous gain function and underlying (multidimensional) arithmetic/geometric Brownian motions, [23], which also allows for time-inhomogeneous diffusions, and, finally, [25], which includes both a time-inhomogeneous gain function and underlying diffusion.
It is also worth drawing a parallel with the two closely related papers [31] and [30] (notice that [30] mentions [31] as a preprint). In [31], Lamberton and Zervos study infinite-time horizon optimal stopping problems for one-dimensional linear diffusions when the gain function is time-homogeneous and the difference of two convex functions. That paper deals mainly with the variational characterisation of the value function but it also addresses optimal boundaries in specific examples. Differently to the present paper, in [31] the state space is one-dimensional and optimal boundaries are points on the real line. We can think of our paper as an analogue of [31] but in the finite-time horizon setting. The methods used in [31] are those from the theory of one-dimensional diffusions and ordinary differential equations, which do not apply here because the free boundary problems associated to our optimal stopping problems are of parabolic type. The analysis from [31] is then extended by Lamberton in [30] to the finite-time horizon framework. The underlying process is a one-dimensional linear diffusion and the gain function is time-homogeneous but it is only bounded and Borel measurable as a function of the state-process. It is shown in [30] that the value function of the optimal stopping problem is continuous and it is the unique (bounded continuous) solution of a variational problem understood in the sense of distributions. Our work complements [30] by focussing on the study of the geometry of the optimal stopping set and on the regularity of its boundary.
The paper is organised as follows. In Section 2 we formulate the problem and recall useful facts on optimal stopping and one-dimensional linear diffusions. Then we show existence of an optimal boundary x → c(x) and prove its regularity in the sense of diffusions. Section 3 is devoted to proving that the time derivative of the value function is continuous in the whole space. Fine geometric properties of the continuation/stopping set are addressed in Section 4, whereas the continuity of the map x → c(x) (or equivalently the strict monotonicity of its inverse t → b(t)) is studied in Section 5. The paper is completed by a technical appendix.

Setting
2.1. The underlying process and the gain function. Let us consider a complete probability space (Ω, F, P) equipped with a Brownian motion B := (B_t)_{t≥0} and its filtration F := (F_t)_{t≥0}, which is augmented with P-null sets. Let X := (X_t)_{t≥0} be a linear diffusion on an open (possibly unbounded) interval I = (x̲, x̄) ⊆ R. We assume that X is determined as the unique strong solution of the stochastic differential equation (SDE)

dX_t = σ(X_t) dB_t,   X_0 = x ∈ I,   (2.1)

for a suitable σ : I → [0, ∞). We also require that X be a Feller process, hence strong Markov thanks to continuity of paths. We further assume that the diffusion is infinitely lived, in the sense that the endpoints of the interval I are natural in the terminology of [2, Chapter II] (in particular this means that x̲ and x̄ are not attainable by the process in finite time and the process cannot be started from those points).
To summarise we impose:

Assumption 2.1. The process X is strong Markov and it is the unique strong solution of (2.1). The endpoints x̲ and x̄ of I are natural.
To keep the exposition simple, we also make the next mild assumption on the diffusion coefficient.
Assumption 2.2. We have σ ∈ C(I) and σ strictly positive in I. In particular, for any compact K ⊂ I there exist constants 0 < σ̲_K ≤ σ̄_K < ∞ such that σ̲_K ≤ σ(x) ≤ σ̄_K for all x ∈ K.

Assumptions 2.1 and 2.2 are enforced throughout the paper. Sometimes we will use the notation X^x to keep track of the flow property of the process X or, alternatively, we will denote P_x( • ) := P( • | X_0 = x). Fix a time T ∈ (0, ∞) and continuous functions r : I → [0, ∞) and g : I → R such that, for any compact K ⊂ I,

sup_{x∈K} E_x[ sup_{0≤t≤T} e^{-∫_0^t r(X_s) ds} |g(X_t)| ] < ∞.   (2.2)

The problem we are interested in is the following finite-time horizon optimal stopping problem:

v(t, x) := sup_{0≤τ≤T−t} E_x[ e^{-∫_0^τ r(X_s) ds} g(X_τ) ],   (t, x) ∈ [0, T] × I,   (2.3)

where the supremum is taken over stopping times of the filtration F.
Throughout the paper the minimal regularity assumption on the function g is that g can be written as the difference of two convex functions.
Then its first (weak) derivative g' exists as a function of bounded variation (which can be taken to be either right- or left-continuous) and its second (weak) derivative g'' exists as a signed measure on I. For the sake of concreteness, and following [41, Chapter VI.1], we take g' as the left-derivative of g. So g' is left-continuous with right-limits on I. Finally, the measure g''(dx) is defined in the usual way via

g''([a, b)) = g'(b) − g'(a),   for a < b in I.

Let us now introduce the signed measure μ(dx) associated with g and r, and the corresponding discounted local time ℓ^z. Here L^z denotes the local time of the process X at a point z ∈ I, which is defined as

L^z_t := lim_{ε↓0} (1/(2ε)) ∫_0^t 1{|X_s − z| < ε} d⟨X⟩_s,

where ⟨X⟩_t = ∫_0^t σ²(X_s) ds is the quadratic variation of X. For the analysis that follows in the next sections, it is convenient to decompose the signed measure μ(dx) into its positive and negative parts (see also Section 4): μ(dx) = μ^+(dx) − μ^−(dx). Throughout the paper we denote by Ā the closure of a Borel set A ⊂ R.
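The occupation-density characterisation of the local time can be checked by simulation. The sketch below (illustrative, with our own choices of step size, window ε and sample size) estimates L⁰_T for a standard Brownian motion via (1/2ε)·Leb{s ≤ T : |B_s| < ε} and compares it with E[L⁰_T] = E|B_T| = √(2T/π), which follows from Tanaka's formula.

```python
import numpy as np

# Monte Carlo sketch (all numerical choices ours): the occupation-time
# approximation L^0_T ~ (1/2eps) * Leb{s <= T : |B_s| < eps} for a standard
# Brownian motion, compared with E[L^0_T] = E|B_T| = sqrt(2T/pi) (Tanaka).
rng = np.random.default_rng(0)
n_paths, n_steps, T, eps = 20000, 2000, 1.0, 0.05
dt = T / n_steps
B = np.zeros(n_paths)
occ = np.zeros(n_paths)                    # time spent in (-eps, eps)
for _ in range(n_steps):
    B += np.sqrt(dt) * rng.standard_normal(n_paths)
    occ += dt * (np.abs(B) < eps)
est = (occ / (2 * eps)).mean()             # estimate of E[L^0_T]
exact = np.sqrt(2 * T / np.pi)
```

The estimator carries a small downward bias of order ε, so only rough agreement with the exact value should be expected.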
In order to derive the next formula (2.6) we need a mild integrability condition (Assumption 2.3). Thanks to the regularity of g and Assumption 2.3 we can use the Itô-Tanaka-Meyer formula (see [41, Thm. VI.1.5]) to write the problem in the so-called Lagrange formulation (2.6). Since the endpoints of I are natural, the expression above follows from [31, Lemma 3.2]. For completeness we provide a full proof in the Appendix.
From standard theory on one-dimensional diffusions it is known that X admits a transition density with respect to the speed measure which is continuous in all its variables (see [2, Chapter II.1.4] and [22, p. 149]). In other words, there exists a continuous function

p : (0, ∞) × I × I → R_+   (2.7)

such that P_x(X_t ∈ A) = ∫_A p(t, x, y) m(dy) for all Borel sets A ⊆ I and any t > 0, where m(dy) denotes the speed measure of X. In our case, since σ( • ) > 0 on I, we have

m(dy) = 2 dy / (σ²(y) S'(y)),   (2.8)

where S'( • ) is the derivative of the scale function of the process (since X is in natural scale it simply holds S'(y) = 1 for y ∈ I). Then X admits a transition density with respect to the Lebesgue measure.
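For a quick sanity check of this normalisation (the snippet and its parameter values are ours), consider standard Brownian motion (σ ≡ 1, S' ≡ 1), for which the speed measure is m(dy) = 2 dy and the density with respect to m is therefore half the Gaussian kernel; integrating it against m must give total mass one.

```python
import numpy as np

# Sanity check (ours): for standard Brownian motion sigma = 1, S' = 1, the
# speed measure is m(dy) = 2 dy, so the density w.r.t. m is half the Gaussian
# kernel; it must integrate to one against m(dy).
t, x0 = 0.7, 0.3
y = np.linspace(x0 - 10.0, x0 + 10.0, 20001)
dy = y[1] - y[0]
p = 0.5 * np.exp(-(y - x0) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)
mass = np.sum(p * 2.0) * dy                # integral of p against m(dy) = 2 dy
```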
Before closing this section we make a couple of observations concerning the generality of our model.

Remark 2.4 (Gain function and discount rate).
The requirement that the discount rate r be non-negative can be easily relaxed to r : I → [−r 0 , ∞) for some constant r 0 ∈ [0, ∞).Further relaxations are also possible but at the cost of additional integrability requirements, and we leave such extensions aside for the sake of clarity of exposition.
If g ∈ C²(I), then the value function u of problem (2.6) takes a more familiar form, with the measure μ(dx) admitting a density. If we add a running profit function h : I → R, the original problem in (2.3) is modified accordingly, and the problem in (2.6) features a measure ν(dx), defined analogously to μ(dx) but accounting for h. Since all the key results of the paper are based on properties of the measure μ(dx), they immediately carry over to problems in which μ(dx) is replaced by ν(dx) defined above.
Remark 2.5 (The underlying process). It is important to notice that there is (almost) no loss of generality in assuming zero drift in the dynamics (2.1) of the process X. Indeed, let us consider instead a strong solution Y of the SDE

dY_t = α(Y_t) dt + β(Y_t) dW_t,   Y_0 = y ∈ Î,

on some interval Î ⊆ R, with drift and diffusion coefficients α and β that guarantee existence and uniqueness of the strong solution. Assume the endpoints of the interval Î are natural. Let us then consider the stopping problem with Borel measurable functions ĝ : Î → R and r̂ : Î → [0, ∞). Then we can reduce to the setting of (2.1) and (2.3) by a simple change of scale. That is, letting Ŝ be the scale function of Y, we have that X := Ŝ(Y) solves an SDE of the form (2.1), and the stopping problem takes the form of (2.3) with g(x) = (ĝ ∘ Ŝ⁻¹)(x) and r(x) = (r̂ ∘ Ŝ⁻¹)(x). Obviously here I = Ŝ(Î). This approach can be extended even further to consider SDEs with generalised drift (in the sense of, e.g., [46]), again by adopting the change of coordinates via the scale function. However, we insist on the requirement that the SDE admits a unique strong solution, because we use pathwise uniqueness in our arguments below (recall that weak existence and pathwise uniqueness imply strong existence; see, e.g., [27, Corollary 5.3.23]).
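The removal of the drift via the scale function can be illustrated on geometric Brownian motion, dY = μY dt + σY dW, whose scale function is (up to affine transformations) Ŝ(y) = y^(1−2μ/σ²). The sketch below (with our own parameter choices) checks numerically that Ŝ is harmonic for the generator of Y, i.e., αŜ' + (β²/2)Ŝ'' = 0, which is exactly why X = Ŝ(Y) is driftless.

```python
import numpy as np

# Sketch (parameters ours): geometric Brownian motion dY = mu*Y dt + sig*Y dW
# has scale function S(y) = y**gamma with gamma = 1 - 2*mu/sig**2 (up to
# affine transformations).  Harmonicity mu*y*S' + (sig**2/2)*y**2*S'' = 0 is
# exactly what removes the drift from X = S(Y).
mu, sig = 0.08, 0.3
gamma = 1.0 - 2.0 * mu / sig**2

def S(y):
    return y**gamma

def LS(y, h=1e-4):   # generator of Y applied to S, via central differences
    d1 = (S(y + h) - S(y - h)) / (2 * h)
    d2 = (S(y + h) - 2 * S(y) + S(y - h)) / h**2
    return mu * y * d1 + 0.5 * sig**2 * y**2 * d2

err = np.max(np.abs(LS(np.linspace(0.5, 3.0, 6))))   # should vanish: L S = 0
```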

2.2. Generalities on the value function and existence of a boundary. Since X admits a continuous transition density with respect to its speed measure, the two-dimensional process (t, X) enjoys the Feller property. The latter, combined with continuity of g and with (2.2), is known to be sufficient to obtain that

τ_* := inf{s ∈ [0, T − t] : (t + s, X_s) ∈ S}

is the minimal optimal stopping time, where

S := {(t, x) ∈ [0, T] × I : v(t, x) = g(x)}

is the so-called stopping set (notice in particular that {T} × I ⊆ S by definition). We will sometimes use the notation τ_*^{t,x} to emphasise the dependence of this stopping time on the initial position of the time-space process; that is, τ_*^{t,x} = inf{s ∈ [0, T − t] : (t + s, X_s^x) ∈ S}. Letting also the continuation set be denoted by

C := {(t, x) ∈ [0, T] × I : v(t, x) > g(x)},

we immediately see that C is open and S is closed (relative to [0, T] × I) thanks to lower semi-continuity of u = v − g. Given an interval J ⊆ I, it will sometimes be convenient to work with sets of the form

C_J := C ∩ ([0, T) × int(J))

and the associated boundary ∂C_J = ∂C ∩ ([0, T) × int(J)). We should always understand ∂C as ∂C_I.
Finally, it follows from standard theory ([42, Sec. 2.2]) that the discounted value process is a supermartingale and that, stopped at τ_*, it is a martingale. In our setting, the process X is time-homogeneous and the gain function g is independent of time. Therefore, it is immediate to verify that for any (t, x) ∈ (0, T] × I and h ∈ (0, t)

v(t − h, x) ≥ v(t, x),   (2.13)

i.e., the map t → v(t, x) is non-increasing. Such monotonicity of the value function identifies some sort of 'privileged' direction in the state space, in the sense of the following simple statement: if (t, x) ∈ S, then (s, x) ∈ S for all s ∈ [t, T]. Then we can uniquely determine the boundary of the continuation set by defining

c(x) := inf{t ∈ [0, T] : (t, x) ∈ S},   x ∈ I.

Since S is a closed set, we have that for any x ∈ I and any sequence t_n ↓ c(x) with (t_n, x) ∈ S it must be (c(x), x) ∈ S. In conclusion, we can summarise the above discussion in the next proposition.
Proposition 2.6. The stopping set can be expressed as

S = {(t, x) ∈ [0, T] × I : t ≥ c(x)},

where c : I → [0, T] is lower semi-continuous.
Remark 2.7. It is worth noticing that, in the literature on finite-time horizon optimal stopping problems for a one-dimensional diffusion, it is customary to describe the stopping set in terms of a boundary which is time-dependent, rather than space-dependent. However, proving existence of such boundaries requires one to show that, e.g., the map x → u(t, x) is monotonic, or at least convex. This type of argument fails in general if g is just the difference of two convex functions and in the presence of a state-dependent discount rate. Instead, the existence of the boundary x → c(x) is an immediate consequence of the monotonicity in time of the value function, which holds in even wider generality than ours. Indeed, a quick look at the argument in (2.13) reveals that monotonicity of t → v(t, x) is solely a consequence of the reduction of the set of admissible stopping times and has nothing to do with either the process X, the discount rate or the gain function (provided the latter three are time-homogeneous).
For some of the results that follow, it is convenient to work with a continuous value function v. Continuity is normally easy to prove in specific examples, but general results also exist. For example, if g is bounded and continuous and r(x) ≡ r ≥ 0, then [35, Thm. 4.3] guarantees continuity of v when X is just a Feller process on a locally compact and separable space (not necessarily a solution of an SDE). Alternatively, for X a non-exploding linear diffusion on an interval and r(x) ≥ 0 locally bounded and measurable, [30] proves continuity of v when g is just bounded and measurable. Rather than giving another proof of the continuity of the value function, when necessary we will invoke the next assumption (Assumption 2.8). For α ∈ (0, 1) we denote by C^α_loc(I) the class of locally α-Hölder continuous functions on I. Continuity of the value function, along with the martingale property, Assumption 2.2 and a standard PDE argument, gives the next corollary (see, e.g., the proof of [28, Prop. 7.7, Ch. 2]).

2.3. Regularity of the boundary in the sense of diffusions.
There is an important consequence of Proposition 2.6. Indeed, it turns out that the boundary of the continuation set, ∂C, is regular for the stopping set in the sense of diffusions. We will review this property in detail below.
Denote the hitting time to the stopping set by σ_*. As before, we write σ_*^{t,x} when we need to emphasise the initial point of the process (t, X); that is, σ_*^{t,x} = inf{s ∈ (0, T − t] : (t + s, X_s^x) ∈ S}. For any (t, x) ∈ [0, T] × I, it is clear that τ_*^{t,x} ≤ σ_*^{t,x}, P-a.s., by definition. By continuity of the paths t → X_t and the fact that C and int(S) are open sets (provided they are not empty), it is also clear that τ_*^{t,x} and σ_*^{t,x} may differ only for (t, x) ∈ ∂C.

Lemma 2.11. Consider a Brownian motion W := (W_t)_{t≥0}, an interval (a, b) and the stopping time τ^x := inf{t ≥ 0 : x + W_t ∉ (a, b)}. For any x ∈ [a, b] and any sequence (x_n)_{n≥1} ⊂ [a, b] with x_n → x, we have τ^{x_n} → τ^x, P-a.s.

Writing τ_c^x := inf{t ≥ 0 : x + W_t = c} for the hitting time of a level c, we have τ^x = τ_a^x ∧ τ_b^x and τ_c^x = τ_{c−x}^0, so Lemma 2.11 holds as soon as we show that τ_{c_n}^0 → τ_c^0, P-a.s., for any c ∈ R and any sequence c_n → c. It is well known that (τ_c^0)_{c≥0} is an increasing Lévy process and it is P-a.s. left-continuous [41, Prop. III.3.8]. Moreover, there is almost surely no interval over which it is continuous. However, for a fixed c ≥ 0 and any sequence c_n ↓ c we have τ_{c_n}^0 ↓ τ_c^0, P-a.s. [41, Prop. III.3.9]. Combining the latter with left-continuity we have that τ_{c_n}^0 → τ_c^0, P-a.s., for any c ≥ 0 and any sequence c_n → c. Noticing that τ_c^0 = inf{t ≥ 0 : sup_{0≤u≤t} W_u = c} for c ≥ 0, it is immediate to extend the arguments to the family (τ_c^0)_{c∈R} by considering also τ_c^0 = inf{t ≥ 0 : inf_{0≤u≤t} W_u = c} for c ≤ 0. Thus, the result in Lemma 2.11 can also be deduced from these standard facts.
Building upon the previous lemma we can prove our next result.
Proposition 2.12. Let (t_0, x_0) ∈ ∂C and let (t_n, x_n)_{n≥1} ⊂ C be a sequence that converges to (t_0, x_0). Then

lim_{n→∞} P(σ_*^{t_n,x_n} > δ) = 0 for every δ > 0.   (2.16)

Proof. We give the proof in two steps. For simplicity, and with no loss of generality, we assume that t → B_t(ω) is continuous and satisfies the law of the iterated logarithm for all ω ∈ Ω.
Step 1 (σ ≡ 1). First we prove the result for σ(x) ≡ 1, so that X is a standard Brownian motion, I = R, and the main ideas in the proof are more transparent. Actually, here we prove a stronger result and show that lim sup_{n→∞} σ_*^{t_n,x_n}(ω) = 0 for all ω ∈ Ω.
Step 2 (Any σ > 0). Let us now consider a generic diffusion coefficient σ that satisfies Assumption 2.2. The rest of this proof is slightly technical, due to the fact that we localise the dynamics of X on a bounded open interval J with J̄ ⊂ I. Denote by τ_J the first exit time of X from J, set M_t := X_{t∧τ_J} − x and let m_J := ⟨M⟩_{τ_J}. The process (M_{t∧τ_J})_{t≥0} is absorbed when X leaves the interval J; it is a martingale thanks to Assumption 2.2 and can be represented, by the Dambis-Dubins-Schwarz theorem ([40, Thm. IV.34.11]; see also [40, Thm. V.47.1]), as a time-changed Brownian motion. That is, M_{t∧τ_J} = W_{⟨M⟩_{t∧τ_J}}, where W is a standard Brownian motion. Notice that (τ_J, M, ⟨M⟩, m_J) depend on the initial point x ∈ J, but for now we omit this dependence for simplicity.
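The time-change representation used here can be illustrated by simulation: for a driftless SDE dX = σ(X)dB, sampling X at the first time its quadratic variation exceeds a fixed level s should produce (approximately) an N(0, s) sample, by the Dambis-Dubins-Schwarz theorem. The sketch below uses a hypothetical coefficient σ(x) = 1 + (1/2)sin x, bounded in [1/2, 3/2], and Euler time-stepping; all choices are ours.

```python
import numpy as np

# Simulation sketch (coefficient and parameters ours, purely illustrative):
# for dX = sigma(X) dB with X_0 = 0, sample X at the first time its quadratic
# variation exceeds s; by Dambis-Dubins-Schwarz the samples are ~ N(0, s).
rng = np.random.default_rng(1)
sigma = lambda z: 1.0 + 0.5 * np.sin(z)     # bounded in [0.5, 1.5]
n_paths, dt, s_target = 5000, 1e-3, 0.5
X = np.zeros(n_paths)                       # here M = X since X_0 = 0
Q = np.zeros(n_paths)                       # running quadratic variation <M>
W = np.zeros(n_paths)                       # M sampled when <M> first exceeds s
alive = np.ones(n_paths, dtype=bool)
while alive.any():
    s = sigma(X[alive])
    X[alive] = X[alive] + s * np.sqrt(dt) * rng.standard_normal(alive.sum())
    Q[alive] += s**2 * dt
    hit = np.zeros(n_paths, dtype=bool)
    hit[alive] = Q[alive] >= s_target
    W[hit] = X[hit]
    alive &= ~hit
# W.mean() ~ 0 and W.var() ~ s_target, as predicted by the representation
```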
Since σ( • ) ≥ σ̲_J > 0 on J by Assumption 2.2, we can define the inverse of the quadratic variation, A_s := (⟨M⟩)^{-1}(s) for s ∈ [0, m_J) (this is done ω by ω). Thanks to strict monotonicity of both A and ⟨M⟩, both processes are also continuous and clearly ⟨M⟩_{A_s∧τ_J} = s ∧ m_J. Then we can set Z_s := x + W_s = X_{A_s} for s ∈ [0, m_J). Here the process Z depends on x ∈ J via the initial condition Z_0^x = x, the stopping time m_J = m_J^x and the Brownian motion W = W^x obtained via time-change. By construction we have that A_s < τ_J ⟺ s < m_J, P-a.s. In conclusion, s → Z_{s∧m_J}^x is a Brownian motion absorbed upon leaving the interval J and it is adapted to the time-changed filtration (G_s)_{s≥0} = (F_{A_s})_{s≥0}.
Having defined Z, we can write A_s = (⟨M⟩)^{-1}(s) explicitly (ω by ω). Again, we notice that A_s = A_s^x depends on x ∈ J. Since we are interested in the event {σ_*^{t,x} = 0} and we want to restrict our attention to the behaviour of the process X (equivalently, Z) for 'small times', here we will always consider σ_*^{t,x} ∧ τ_J^x. In particular, using that u → A_u^x is strictly increasing, we can rewrite the relevant events in terms of the time-changed process, where one inequality is due to replacing s ∈ (0, T] with s > 0 in the definition of σ_*, and the final expression follows by simply relabelling the time variable s = A_u^x; this holds for any δ > 0 given and fixed. So it is sufficient to prove that the final expression above converges to zero along any sequence (t_n, x_n) → (t_0, x_0). It is not very convenient to work with the Brownian motion W^x when x varies. We therefore set Z̃_s^x := x + B_s, with B our original Brownian motion. Then, for each (t, x), we have an equivalence in law between (Z^x, A^x, σ_*^{t,x}, m_J^x) and the corresponding objects (Z̃^x, Ã^x, ζ_*^{t,x}, m̃_J^x) constructed from Z̃^x. From (2.22) and the equality in law we can express the probabilities of interest in terms of the latter quadruple. The advantage of working with (Z̃^x, Ã^x, ζ_*^{t,x}, m̃_J^x) is that Z̃^x only depends on x via its initial point, and therefore we can apply arguments analogous to those used in Step 1 above. An initial observation is that the time-change Ã^x is controlled uniformly in x, thanks to Assumption 2.2 and the definition of A. Take (t_0, x_0) ∈ ∂C_J, so that [t_0, T] × {x_0} ⊂ S. By the exact same argument as in Step 1 above we obtain ζ_*^{t_0,x_0} = 0, P-a.s. Then the desired conclusion for the time-changed quantities follows thanks to (2.25).
Since m̃_J is the first exit time of the Brownian motion Z̃ from an open interval, x → m̃_J^x(ω) is continuous for any ω ∈ Ω (in the sense of Lemma 2.11). Fix ω ∈ Ω. By continuity of paths, m̃_J^{x_0}(ω) > 0 and, in particular, there exists ε_{0,ω} > 0 such that m̃_J^{x_0}(ω) > ε_{0,ω}. Now we can repeat the arguments from Step 1: for any ε ∈ (0, ε_{0,ω}/2), the required inclusions hold for n sufficiently large, and therefore, taking n sufficiently large, we obtain the desired estimate. The argument holds for any ε ∈ (0, ε_{0,ω}/2), where the final inequality is by Fubini's theorem and uses m̃_J^{x_n} → m̃_J^{x_0}, P-a.s., by Lemma 2.11. Since P(m̃_J^{x_0} > 0) = 1, letting δ ↓ 0 we arrive at (2.16).

Regularity of the value function
In this section we show that the value function has a modulus of continuity with respect to the time variable and that, under mild additional assumptions, it is indeed a locally Lipschitz function of time. Our proof uses properties of the local time of the process (generalising [13, Example 17]). For that we recall that the scale function density is S'(x) = 1 and that p is the transition density of the process with respect to the speed measure (Section 2.1). First we state an estimate for the local time of the process.
Lemma 3.1. Let 0 < t_1 ≤ t_2 ≤ T and fix x ∈ I. Then, for any z ∈ I, we have a bound on the expected discounted local time E_x[ℓ^z_{t_2} − ℓ^z_{t_1}] in terms of the transition density and t_2 − t_1.

Proof. By Fatou's lemma and the definition of ℓ^z in (2.4) we get (3.2), where we used r ≥ 0 for the first inequality. Writing the expectation in terms of the transition density p and the speed measure (see (2.7) and (2.8)), we obtain (3.3) for some ε_0 ≥ ε_n, n ≥ 1, upon recalling that S'(y) = 1. Notice that we are using continuity of p on (0, ∞) × I × I.
Thanks to (3.3) we can invoke dominated convergence to pass to the limit in (3.2) and obtain the claimed bound.

Next we obtain a modulus of continuity for the value function with respect to time. Recall the decomposition of the signed measure μ = μ^+ − μ^− into its positive and negative parts.

Proposition 3.2. For any x ∈ I and any t_1 < t_2 in [0, T) we have the estimate (3.4). In particular, if there exists a constant κ = κ(t_2, x) > 0, depending on t_2 and x, such that (3.5) holds, then t → v(t, x) is Lipschitz continuous on [0, t_2].

Proof. For the remaining inequality we use the representation of the problem in terms of the function u. Let τ_1 = τ_*^{t_1,x} denote the optimal stopping time for the problem started at (t_1, x). Then τ_1 ∧ (T − t_2) is admissible and sub-optimal for the problem started at (t_2, x). This gives the desired comparison. Now, using Lemma 3.1 in the above expression, we obtain (3.4) after an application of Fubini's theorem.
Remark 3.3. The condition on the transition density p in (3.5) is perhaps more neatly expressed in terms of the standard transition density (with respect to the Lebesgue measure), denoted here by p̃( • ). Indeed, since S'(y) = 1, we notice that

p(s, x, y) = (1/2) σ²(y) p̃(s, x, y).
For many known transition densities, p is uniformly bounded as soon as s ∈ [ε, ∞) for some ε > 0. Moreover, it is often the case that p(s, x, • ) decays exponentially at infinity (when I is unbounded), so that mild growth conditions on σ²(y)μ^+(dy) will guarantee (3.5).
Remark 3.4. It is worth mentioning that Lipschitz continuity in time of the value function was also proved in [23], using scaling properties of Brownian motion (in particular, that for s ∈ [0, T − t] one has B_s = √(T − t) B_u in law, with u = s/(T − t)). However, the argument in [23] needs some additional regularity of g and σ (e.g., local Lipschitz continuity of both functions).

Theorem 3.5 (C^1 time regularity). Let Assumption 2.8 hold and let r, σ ∈ C^α_loc(I) for some α ∈ (0, 1). If (3.5) holds with a constant κ = κ(t, x) > 0 which is uniform for (t, x) on compact subsets of [0, T) × I, then ∂_t v is continuous on [0, T) × I.

Proof. For (t, x) ∈ int(S) we have ∂_t v(t, x) = 0, which is trivially continuous (provided int(S) ≠ ∅). Corollary 2.9 guarantees that ∂_t v is continuous in C, and therefore it remains to show that ∂_t v is also continuous across the boundary ∂C. Fix (t_0, x_0) ∈ ∂C with t_0 < T, and take a sequence (t_n, x_n)_{n≥1} ⊂ C such that (t_n, x_n) → (t_0, x_0) as n → ∞. With no loss of generality we assume that |x_n − x_0| ≤ η_0/2 and t_n < T − 3ε_0 for all n ≥ 1 and some η_0, ε_0 > 0. Further, we denote I_0 := (x_0 − η_0, x_0 + η_0).
Next, let us derive an upper bound for ∂_t v(t_n, x_n). Fix n and take ε ∈ (0, ε_0). Let τ_n = τ_*^{t_n,x_n} be optimal for v(t_n, x_n) and fix s_0 ∈ [0, ε_0). Then we obtain (3.6), where the final equality holds because v(t_n + ε + τ_n, X_{τ_n}) = v(t_n + τ_n, X_{τ_n}) = g(X_{τ_n}) on {τ_n ≤ ρ_n}, by monotonicity of t → v(t, x). Now, thanks to (3.5), we can find a constant κ_0 = κ(I_0, ε_0) > 0, independent of n and s_0, bounding the relevant terms. Then, plugging the latter estimate into (3.6), recalling that r ≥ 0, dividing by ε and letting ε → 0, we obtain (3.7). We are now interested in taking limits as n → ∞ and showing that the right-hand side of (3.7) goes to zero. First, let us rewrite the relevant probability as a sum of two terms. From Proposition 2.12 we know that P(τ_n > s_0) → 0 as n → ∞. We can estimate the second probability as follows. Define σ̃ as the coefficient σ suitably extended outside I_0, along with the process X^n on R, which is the unique (possibly weak) solution of the SDE with coefficient σ̃ and initial condition x_n. Existence of a unique-in-law weak solution of the above SDE is guaranteed by Assumption 2.2 and classical results (see [27, Ch. 5.5]). By strong uniqueness of (2.1) we also have X_{t∧ρ_n}^{x_n} = X_{t∧ρ_n}^n for all t ≥ 0, P-a.s., with ρ_n = inf{t ≥ 0 : X_t^n ∉ I_0}, for all n ≥ 1. Therefore, using the Markov inequality and Doob's martingale inequality, we obtain the required bound, where the last inequality uses that sup_{x∈R} |σ̃(x)| = sup_{x∈I_0} |σ(x)| by construction.
Remarkably, the time derivative is continuous irrespective of the regularity of the function g. This is in line with [13], but a direct application of the results therein is not straightforward, due to the lack of smoothness of g.

Remark 3.6. The Hölder-continuity assumption on σ and r is only needed to guarantee that ∂_t v is continuous in C, by Corollary 2.9. Thanks to Remark 2.10 we can state a local version of Theorem 3.5, only requiring that r, σ ∈ C^α_loc(J) for some open subset J ⊂ I; under such an assumption we obtain continuity of ∂_t v on [0, T) × J.

Continuity of ∂_t v has important consequences for the spatial regularity of the value function as well. For α ∈ (0, 1) we denote by C^α(J̄) the class of α-Hölder continuous functions on the closure of a set J.

Corollary 3.7. Let Assumption 2.8 hold and let r, σ ∈ C^α(J̄) for some α ∈ (0, 1), with J ⊂ I open and J̄ ⊂ I. If (3.5) holds with a constant κ = κ(t, x) > 0 which is uniform for (t, x) on compact subsets of [0, T) × I, then the claimed spatial regularity follows from (3.15) and continuity of both ∂_t v and v on [0, T) × J.

Continuation bays and stopping spikes
In this section we begin the study of the fine geometric properties of the optimal boundary ∂C. In contrast with the case of a smooth gain function, i.e., g ∈ C²(I), we show that the possible presence of atoms in the measure μ(dx) produces effects that cannot be observed in the more regular cases. These will be illustrated in Example 4.1. It is somewhat expected that the stopping set should lie in [0, T] × Λ̄_−, where accumulating local time in the formulation (2.6) is costly. This result is known to hold when g ∈ C²(I), and below we present some extensions of it to our setting.
We are going to need the next lemma.
Then, for any δ > 0 and z ∈ Ī_ε, we can estimate E_x[ℓ^z_{τ_ε∧δ}]. Recalling Assumption 2.2 and applying the Itô-Tanaka formula, the triangular inequality and Jensen's inequality, we easily obtain (4.2), where σ̄_K = sup_{y∈K} |σ(y)|, with K ⊂ I a compact set that contains Ī_ε. For (4.1) we repeat steps similar to those in a proof given in [37, Lemma 15], being careful about the various constants cropping up in our case. Denote M_t := X_{t∧τ_ε} − x and notice that M_{t∧τ_ε} = W_{⟨M⟩_{t∧τ_ε}} by the Dambis-Dubins-Schwarz theorem, where W is another Brownian motion (analogous to (2.18)). By continuity of r we have ∫_0^{τ_ε} r(X_t) dt ≤ T·r̄ with r̄ = sup_{x∈K} r(x) and K ⊂ I as above. Then, using the Itô-Tanaka formula as in (4.2) with z = x, we also obtain an analogous bound. Letting L^0 denote the local time at zero of the Brownian motion W, a further application of the Itô-Tanaka formula and optional sampling gives the required estimate (notice that ⟨M⟩_{τ_ε∧δ} ≤ σ̄²_K δ by Assumption 2.2), where the final inequality uses that σ̲_K = inf_{x∈K} σ(x) > 0 (Assumption 2.2) and monotonicity of the local time. Notice also that the Brownian motion W = W^x, obtained via time-change, depends on x through the quadratic variation ⟨M^x⟩. Next we proceed with simple estimates, using a well-known chain of equalities in law under P_x relating the running supremum of W and its local time at zero. Thus we obtain, from (4.3) and the discussion above, a lower bound for the quantity of interest. Since P_x(τ_ε > 0) = 1, we can find δ_0 > 0 sufficiently small so that the bound is strictly positive, and (4.1) follows.

As an immediate consequence of the lemma we have the next result.
Let us now introduce suitable subsets of Λ_± that will be useful to prove properties of C and S. For x ∈ I we denote by O_x ⊂ I an open neighbourhood of x and we set . Below we will use the following fact, whose proof we also provide in the Appendix for completeness.
The stopping time τ_ε is admissible and sub-optimal for the stopping problem with starting point (t, x_0), and P_{x_0}(τ_ε > 0) = 1 by the continuity of the paths of X. Then, using (2.6) we obtain where in the equality we used that L^z_{τ_ε} = 0, P_{x_0}-a.s. for z ∉ I_ε, and the final inequality is by (4.7) and Fubini's theorem. Since u(t, x_0) > 0, then (t, x_0) ∈ C. Recalling that t ∈ [0, T) can be chosen arbitrarily gives [0, T) × {x_0} ⊂ C. Since x_0 ∈ (a, b) was also arbitrary, [0, T) × (a, b) ⊂ C as claimed.
Next we show that c(x) < T at points x ∈ Λ^0_− and that the stopping set is connected in the sense of (4.9) below. For that, it is convenient to recall continuity of the value function and, for simplicity, we will also require the integrability condition The latter slightly strengthens the requirement in (2.2) by adding uniform integrability of the discounted process X. ) for some α ∈ (0, 1), then c(x) < T for x ∈ (a, b); moreover, for any Proof. We divide the proof into two steps.
Step 1 (Proof of (i)). To prove the first statement we argue by contradiction. Let us first assume that (a, b) ⊂ Λ^0_− and c(x) = T for all x ∈ (a, b). We use ideas as in [9] but without requiring smoothness of σ. Consider the rectangular domain By Corollary 2.9 and Remark 2.10 we know that v is the unique solution of the boundary value problem Monotonicity of t → v(t, x) and (4.10) imply Letting t → T in the above we obtain where the final equality uses dominated convergence, continuity of the value function and v(T, x) = g(x). Undoing the integration by parts we reach a contradiction with 0 ≤ Next we show that (4.9) holds for any two points in D. Let us argue by contradiction again: take any two points x_1 < x_2 in D, set t_0 = c(x_1) ∨ c(x_2) < T and assume there exists x_3 ∈ (x_1, x_2) such that (t_0, x_3) does not lie in the stopping set (recall that τ^{t_0,x_3}_* is the first time (t_0 + s, X^{x_3}_s) enters S). Then, gives us a contradiction and (4.9) holds.
Since (4.9) holds in D and the latter set has no isolated points in (a, b), we conclude that c(x) < T for all x ∈ (a, b). Indeed, assume by way of contradiction that there is x ∈ (a, b) such that c(x) = T. There are x_1, x_2 ∈ D with x_1 < x < x_2 and (4.9) holds for such x_1 and x_2. Then (4.9) places (t, x) in S for all t ≥ c(x_1) ∨ c(x_2) < T, so that c(x) < T and we have reached a contradiction.
It remains to show that P_{x_0}(τ_{ε_1} < T − t) ≈ (T − t) as t → T. For that, we define σ̃ as along with the process X̃ on R, which is the unique (possibly weak) solution of By strong uniqueness of (2.1) we have X_{t∧τ_{ε_1}} = X̃_{t∧τ̃_{ε_1}} for all t ≥ 0, P_{x_0}-a.s., for τ̃_{ε_1} = inf{s ≥ 0 : X̃_s ∉ I_{ε_1}} ∧ (T − t). Therefore, using the Markov inequality and Doob's martingale inequality we obtain which concludes the proof.
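The final estimate combines the Markov inequality with Doob's L²-maximal inequality, E[sup_{t≤T} M_t²] ≤ 4 E[M_T²]. As a quick sanity check of the inequality used here, the following sketch verifies it on simulated Brownian paths (the discretisation and parameters are ours, chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, T = 20_000, 200, 1.0
dt = T / n_steps

# Brownian increments and cumulative paths B_t on [0, T]
incs = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
B = np.cumsum(incs, axis=1)

# Doob's L^2 inequality: E[sup_{t<=T} B_t^2] <= 4 E[B_T^2]
lhs = np.mean(np.max(B**2, axis=1))
rhs = 4.0 * np.mean(B[:, -1] ** 2)
```

The Monte Carlo estimate of the left-hand side sits comfortably below the right-hand side, as the inequality predicts; the factor 4 is sharp only in the limit over all martingales, not for Brownian motion itself.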
Remark 4.5 (Flatness of x → c(x)). The argument used in Step 1 of the proof above to obtain (4.11) was originally designed in [9] to show continuity of optimal boundaries as functions of time. Here, as a byproduct of the proof, we obtain that the map x → c(x) cannot exhibit a flat stretch at a strictly positive level on Λ^0_−. That is, if there exists an interval (x_1, x_2) ⊆ Λ^0_− such that c(x) = ĉ for x ∈ (x_1, x_2), then it must be ĉ = 0. The proof is an exact repetition of the one for (4.11), so we omit it.
There is a nice monotonicity result that follows as a corollary of Proposition 4.4. For the final claim, assume [a, a_*) ≠ ∅ and, arguing by contradiction, that there exist x_1 < x_2 in [a, a_*) such that c(x_1) ≤ c(x_2). By definition of a_* we have c(x_1) > c(a_*). Then [c(x_1), T] × [x_1, a_*] ⊂ S by (4.9) and it must be c(x_2) = c(x_1) =: ĉ. By the same argument there cannot exist x_3 ∈ (x_1, x_2) such that c(x_3) < c(x_2) and therefore we conclude that c(x) = ĉ for all x ∈ [x_1, x_2]. From Remark 4.5 we know that x → c(x) cannot be flat unless it is equal to zero. However, x_2 < a_* and therefore ĉ = c(x_2) > c(a_*) ≥ 0. Thus we have reached a contradiction and x → c(x) is strictly decreasing on [a, a_*). By the same argument we can prove that the boundary is strictly increasing on (b_*, b]. Proposition 4.2 holds at any point x_0 such that µ({x_0}) > 0, irrespective of the sign of µ(dx) in a neighbourhood of x_0. We will see in the next example that this observation, combined with Proposition 4.4, can produce very peculiar shapes of the continuation set. Loosely speaking, we find a continuation bay in the middle of a stopping set.

Example 4.1 (Continuation bays). A typical example of continuation bay arises in the
American straddle option (see, e.g., [14]). Let us consider a simplified version here and let dX_t = σX_t dB_t, X_0 = x, be the stock's dynamics with σ > 0. Fix K > 0 and r > 0 and denote the value of the option by Then, by an application of the Itô–Tanaka formula we have where µ(dz) = 2δ_K(dz) − (2r/σ^2) z^{−2} |z − K| 1_{{z≠K}} dz.
Here we have Λ_− = R_+ \ {K} = Λ^0_− and Λ_+ = {K} = Λ^0_+, which is a rather 'singular' situation. Intuitively, waiting is costly for the option holder at all times t ∈ [0, T] for which X_t ≠ K: indeed, she pays a cost at a rate r|X_t − K|dt. On the contrary, waiting is rewarding only at times t ∈ [0, T] when X_t = K and the option holder receives a reward at the 'rate' of dL^K_t. As we will see shortly, it is precisely the kink in the payoff x → |x − K| that guarantees C ≠ ∅ and makes the problem mathematically non-trivial.
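The intuition above can be illustrated with a back-of-the-envelope computation that is not taken from the paper: under the driftless dynamics dX_t = σX_t dB_t the process is a martingale, so at the kink x = K the payoff is zero while a single discounted step ahead is worth roughly σK√(2Δt/π) > 0, whereas away from K discounting pushes the one-step value below |x − K|. A minimal Monte Carlo sketch (parameters σ, r, K, Δt are ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, r, K, dt, n = 0.2, 0.05, 1.0, 0.05, 400_000

def payoff(x):
    # straddle payoff g(x) = |x - K|
    return abs(x - K)

def one_step_continuation(x):
    # exact lognormal step of the driftless SDE dX = sigma*X dB
    z = rng.standard_normal(n)
    x_next = x * np.exp(sigma * np.sqrt(dt) * z - 0.5 * sigma**2 * dt)
    return np.exp(-r * dt) * np.mean(np.abs(x_next - K))

cont_at_kink = one_step_continuation(K)   # waiting at the kink earns local time
cont_away = one_step_continuation(1.5)    # away from K discounting dominates
```

The check shows cont_at_kink > 0 = g(K), consistent with (t, K) ∈ C for all t < T, and cont_away < g(1.5), consistent with stopping away from K close to maturity.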
From (i) in Proposition 4.4 we obtain that c(x) < T for all x ∈ R_+ \ {K}, whereas Proposition 4.2 guarantees c(K) = T. By the same arguments we used to prove (4.9) we can also show that for any x > K we have [c(x), T] × [x, ∞) ⊂ S. Indeed, assume by contradiction that there exists x′ > x such that (t, x′) ∈ C for t = c(x); then, τ^{t,x′}_* ≤ inf{s ≥ 0 : X^{x′}_s ≤ x} and we obtain the analogue of (4.12) with (t_0, x_3) = (t, x′) and [x_1, x_2] replaced by [x, ∞). Hence a contradiction. Likewise, we can show that for any x ∈ (0, K) we have [c(x), T] × [0, x] ⊂ S. Finally, Corollary 4.6 implies that c is strictly increasing on (0, K) and strictly decreasing on (K, ∞), hence it can be inverted (locally), defining two boundaries which are continuous functions of time. Indeed, let c_1(x) = c(x) for x ∈ (0, K) and c_2(x) = c(x) for x > K; then we can set A reverse situation is observed at points x_0 such that µ({x_0}) < 0. In this case, if µ(dx) > 0 on a neighbourhood of x_0, we observe a stopping spike in the middle of the continuation region. This type of geometry of the stopping set is rather unusual and certainly not common in the literature. The only examples of a similar geometry that we are aware of appear in [36] and [15], but the settings are different: in both references the gain function is time-dependent and in [36] it is discontinuous in the spatial variable whereas in [15] it is discontinuous in the time variable. So it is difficult to draw a clear parallel. More closely related is the situation of game call options where, for some parameter choices, the option seller will only stop if the underlying asset's value equals the strike price (see, e.g., [17, 18, 45]).
This time we need to recall (ii) from Proposition 4.4. That is, we take dX_t = σX_t dB_t, X_0 = x, and, for a fixed η_0 > 0, we consider Here the stopper may be the seller of a cancellable straddle option of European type, who must pay a fee of η_0 (in addition to the option's current payoff) in order to cancel the contract. Although the problem is stated as a minimisation, it is clear that it is equivalent to Notice that, due to discounting and to the presence of a cancellation fee η_0 > 0, if T is sufficiently large we expect c(K) > 0, as stopping at K is not necessarily optimal if the time to maturity is long.

Continuity of the boundary
Here we address the question of continuity of the map x → c(x) and its link to strict monotonicity of time-dependent optimal stopping boundaries. For α ∈ (0, 1) we denote The proof of the theorem hinges on the following two lemmas. The proof is essentially an application of the maximum principle and we give it in the Appendix for completeness.
The proof is inspired by [13] but we cannot directly invoke any of the results therein due to the local nature of our assumptions. However, if we strengthen the requirements in the lemma to, e.g., σ, r, g ∈ C^1_b(I), then [13, Theorem 10] applies directly, yielding v ∈ C^1([0, T) × I). In several practical applications Lemma 5.4 may be better suited and therefore we give a full proof in the Appendix.
Proof of Theorem 5.2. Since x → c(x) changes its monotonicity at most once on (a, b) (Corollary 4.6) and it is lower semi-continuous, we only need to rule out discontinuities of the first kind. In particular, with no loss of generality we may assume that c is strictly increasing on (a, b), as the argument is analogous for decreasing boundaries and combining the two we can handle the general case.
First we notice that since c is strictly increasing and lower semi-continuous on (a, b), it must be left-continuous. It then remains to prove that it is also right-continuous. Arguing by contradiction, let us assume that there exists x_0 ∈ [a, b] such that c(x_0) < c(x_0+). Then (c(x_0), c(x_0+)) × {x_0} ⊂ ∂C and there exist x_1 > x_0 and ε_1 > 0 such that thanks to Lemma 5.3 and the fact that ∂_t v is continuous. Setting ĉ = min_{x∈[a,b]} c(x) and combining Lemma 5.4 and Theorem 3.5 (recall also Remark 3.6) we can also conclude that v ∈ C^1((ĉ, T) × (a, b)). Then for any ε > 0 there exists δ_ε > 0 such that x_0 + δ_ε < x_1 and by uniform continuity on any compact. Classical results on interior regularity for solutions of PDEs guarantee ∂_t v ∈ C^{1,2}((t_0, t_1) × (x_0, x_1)) and (see, e.g., [19, Thm. 10, Ch. 3, Sec. 5]).
Since v = g on (c(x_0), c(x_0+)) × {x_0}, we may expect v_{tx} to be continuous at (c(x_0), c(x_0+)) × {x_0} and equal to zero there. From a PDE perspective that would enable the use of Hopf's lemma to reach a contradiction. Here instead we present a probabilistic analogue, based on the construction of a process which is normally reflected 'near' the discontinuity of the boundary. This approach avoids dealing with continuity up to the optimal boundary of the value function's derivatives of order greater than one.
On the interval [x_0 + δ_ε, x_1) we consider a process that is equal to (X_t)_{t≥0} away from x_0 + δ_ε, is reflected (upwards) at x_0 + δ_ε and is absorbed at x_1. For the construction of such a process we extend the diffusion coefficient σ outside (a, b) so that it is C^1_b(R) and strictly separated from zero. With a slight abuse of notation let us denote such an extension again by σ. Then, it is well known (see, e.g., [33] or [1, Sec. 12, Chapter I]) that there exists a unique strong solution of the stochastic differential equation where R^{δ_ε} is a continuous, non-decreasing process, with R^{δ_ε}_0 = 0, that guarantees . Letting v̄ := ∂_t v we can apply Itô's formula for semi-martingales and use (5.3) to obtain, for any t ∈ (t_0, t_1), where the inequality follows from (5.2) and for the term under expectation we use (5.4).
For the expression on the left-hand side of (5.5), denoting r̄ = sup_{x∈[x_0,x_1]} r(x) and recalling that ∂_t v ≤ 0, thanks to (5.1) we have Hence, setting for simplicity ε̄_1 = ε_1 e^{r̄T}, from (5.5) we obtain The next step is to let ε → 0. In order to take care of possible issues with the regularity of ∂_{tx}v as δ_ε ↓ 0 we adopt an approach using test functions. Pick a non-negative function ϕ ∈ C^∞_c(t_0, t_1) such that ∫_{t_0}^{t_1} ϕ(t)dt = 1. Then, multiplying both sides of (5.6) by ϕ, integrating over (t_0, t_1) and using Fubini's theorem we obtain where we also use that τ_{ε_1} is independent of t. Let us now look more closely at the integral on the right-hand side above: integration by parts and the second estimate in (5.2) give where the final equality follows by integrating ϕ over (t_0, t_1). Using the expression above in (5.7), along with r(·) ≥ 0, we obtain where the final inequality uses that ϕ(t From the integral form of the dynamics of X^ε we obtain Then, taking limits as ε → 0 gives lim sup Showing that the left-hand side above is positive will give us a contradiction. Hence there cannot be a discontinuity of c at x_0.
Setting J = (a, x_1) and adopting the same time-change as in Step 2 of the proof of Proposition 2.12 (see (2.18) and (2.19)) we obtain, using the same notation, with S^{δ_ε}_{s∧m_J} = R^{δ_ε}_{A_{s∧τ_J}} and m_J = m^ε_J the first time the process Z^ε leaves the interval (a, x_1) (let us also recall that the Brownian motion W^{δ_ε} depends on the initial point x_0 + δ_ε). By construction and recalling (5.4), the process Z^ε solves (uniquely) the classical Skorokhod reflection problem: Z^ε_{t∧m_J} ≥ x_0 + δ_ε for all t ≥ 0 and dS^{δ_ε}_t = 1_{{Z^ε_t = x_0+δ_ε}} dS^{δ_ε}_t. (5.9) Therefore we have an explicit formula for the increasing process S^{δ_ε} (see [27, Lemma 6.14, Chapter 3]): It may be worth noticing that reversing this construction gives another proof of the existence and uniqueness of the solution of the original reflected SDE for X^ε. From (2.18) we have where σ̲ := min_{x∈R} σ(x) (recall that we extended σ to R so that it is also strictly separated from zero). Hence As in the proof of Proposition 2.12 (see (2.23)) we need to pass to auxiliary processes in order to remove the dependence of the Brownian motion on the initial point. Then, setting m̄^ε_J = inf{s ≥ 0 : Z̄^ε_s ∉ (a, x_1)} and recalling m^ε_J = inf{s ≥ 0 : } and where Z^0_s = x_0 + B_s + S_s is a Brownian motion reflected at x_0. Hence, from (5.10) and the above construction we have where in the second inequality we used that and that, for each s ∈ [0, T], the law of Z^0_s is the same as the law of x_0 + |B_s| (see, e.g., [27, Thm. 6.17, Sec. 3.6.C]).
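The explicit formula for the increasing process quoted from [27, Lemma 6.14, Chapter 3] is the Skorokhod map: given a driving path X, set S_t = sup_{s≤t}(barrier − X_s)^+ and Z_t = X_t + S_t. A minimal numerical sketch (the discretised driving path and all parameters are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, barrier, x0 = 10_000, 1e-4, 0.0, 0.1

# discretised driving path x0 + B_t
path = x0 + np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))))

# Skorokhod map: S_t = sup_{s<=t} (barrier - X_s)^+ and Z_t = X_t + S_t
S = np.maximum.accumulate(np.maximum(barrier - path, 0.0))
Z = path + S
```

Pathwise, Z stays above the barrier, S is non-decreasing with S_0 = 0, and S increases only at times when Z sits exactly at the barrier, which is precisely the reflection problem (5.9).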
Finally, using Fatou's lemma in (5.8) and the discussion above, we conclude where the final inequality uses that ϕ ≥ 0 and arbitrary. Hence a contradiction, and continuity of x → c(x) is proved. This result immediately applies to the setting of Example 4.1. Moreover, as a byproduct we obtain the first known probabilistic proof of the strict monotonicity of the American put boundary.
Example 5.1 (American put boundary). Let us consider the classical Black–Scholes set-up where dY_t = rY_t dt + σY_t dB_t, Y_0 = y, is the stock's dynamics with r, σ > 0. Let K > 0 be the strike price and (x)^+ := max{0, x}; then the value of the American put option is ṽ(t, y) = sup Although this is perhaps the best studied optimal stopping problem in the literature, it is convenient to rewrite some of the main results in the notation of our work so far. The scale function of the process (up to affine transformations) reads S(y) = (1 − D)^{−1} y^{1−D} with D = 2r/σ^2. Recalling the argument from Remark 2.5 we set X_t = S(Y_t) and find the dynamics It is worth noticing that if D > 1 the process X is strictly negative, while if D < 1 the process X is positive. For simplicity, but with no loss of generality, let us consider We now set g(x) = (K − x^{1/(1−D)})^+ and notice that g''(dx) has a positive atom at K̃ = K^{1−D} with g''({K̃}) = (1 − D)^{−1} K̃^{D/(1−D)}. Then, using (2.6) (see also Remark 2.4) we obtain where Here we have Λ_− = R_+ \ {K̃}, Λ^0_− = (0, K̃) and Λ_+ = {K̃} = Λ^0_+, which is a situation similar to the one in Example 4.1. Intuitively, waiting is costly for the option holder at all times t ∈ [0, T] for which X_t < K̃, whereas waiting is rewarding at times t ∈ [0, T] when X_t = K̃ (the option holder receives a reward at the 'rate' of ½ g''({K̃}) dL^{K̃}_t). Differently from Example 4.1, here (K̃, ∞) = Λ_− \ Λ^0_−, so that the option holder incurs no costs and no benefits when waiting if X_t ∈ (K̃, ∞).
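The scale function quoted above can be recovered directly from the ODE it solves; a short standard derivation (not verbatim from the paper, and fixing one normalisation among the admissible affine transformations):

```latex
% The scale function S removes the drift of Y: it solves L S = 0, i.e.
\tfrac{1}{2}\sigma^2 y^2 S''(y) + r y S'(y) = 0 .
% Separating variables, with D = 2r/\sigma^2,
\frac{S''(y)}{S'(y)} = -\frac{D}{y}
\quad\Longrightarrow\quad
S'(y) = y^{-D}
\quad\Longrightarrow\quad
S(y) = \frac{y^{1-D}}{1-D}, \qquad D \neq 1
% (for D = 1 one takes S(y) = \log y).
```

By Itô's formula, X_t = S(Y_t) then satisfies dX_t = S′(Y_t)σY_t dB_t, i.e., X is a driftless (local martingale) diffusion, which is what allows the Lagrange formulation (2.6) to be applied in these coordinates.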
From (i) in Proposition 4.4 we obtain that c(x) < T for all x ∈ (0, K̃), whereas Proposition 4.2 gives c(K̃) = T. In addition, one can easily prove v(t, x) > 0 for t < T and all x ∈ R_+. Then c(x) = T for x ∈ (K̃, ∞) as well. By arguments analogous to those in the third paragraph of Example 4.1, for any x ∈ (0, K̃) we have [c(x), T] × [0, x] ⊂ S. Finally, Corollary 4.6 implies that c is strictly increasing on (0, K̃), hence it can be inverted, defining a continuous, non-decreasing boundary t → b(t). In the original coordinates (t, y) the optimal exercise boundary reads b̃(t) = S^{−1}(b(t)). The latter is the familiar parametrisation of the American put exercise boundary (see, e.g., [38, Ch. VII, Sec. 25.2]). Now, applying Corollary 5.5 with Λ^0_− = (0, K̃) and Σ_{[0,K̃]} = [0, b(0)] we conclude that t → b(t) must be strictly increasing. Hence t → b̃(t) is strictly increasing too.
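The strict monotonicity of the put boundary is easy to observe numerically. Below is a hedged sketch using a standard CRR binomial tree in the original (t, y) coordinates (our own discretisation, with illustrative parameters; the lattice boundary is only an approximation of b̃): the largest exercised stock price at each time step increases towards the strike K as t → T.

```python
import math

# CRR binomial tree for the American put; parameters are illustrative.
K, r, sigma, T, N = 1.0, 0.05, 0.2, 1.0, 200
dt = T / N
u = math.exp(sigma * math.sqrt(dt))
d = 1.0 / u
p = (math.exp(r * dt) - d) / (u - d)
disc = math.exp(-r * dt)

# terminal payoffs on nodes S0 * u^j * d^(N-j), with S0 = K
V = [max(K - K * u**j * d**(N - j), 0.0) for j in range(N + 1)]

boundary = []  # largest exercised stock price, per time step
for i in range(N - 1, -1, -1):
    newV, b_i = [], None
    for j in range(i + 1):
        s = K * u**j * d**(i - j)
        cont = disc * (p * V[j + 1] + (1.0 - p) * V[j])
        exer = max(K - s, 0.0)
        if exer >= cont and exer > 0.0:
            b_i = s if b_i is None else max(b_i, s)
        newV.append(max(cont, exer))
    V = newV
    if b_i is not None:
        boundary.append(b_i)

boundary.reverse()  # now indexed forward in time
```

The estimated boundary stays strictly below K, attains its largest value at the last step before maturity, and is markedly higher near T than at the first exercise time, consistent with a strictly increasing t → b̃(t).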

Appendix
Proof of (2.6). The process X is a continuous local martingale, so the Itô–Tanaka–Meyer formula ([41, Thm. VI.1.5]) gives: ∫_R L^z_t g''(dz). (5.12) Here the fact that X is bound to evolve in I = (x̲, x̄) implies that ∫_R L^z_t g''(dz) = ∫_I L^z_t g''(dz), (5.13) since L^z_t = 0 for z ∉ I. It is also worth recalling that (t, z) → L^z_t can be chosen P-a.s. continuous by [41, Thm. VI.1.7 and Corollary VI.1.8]. Then g(X_t) is a continuous semi-martingale and by Itô's product rule and (5.12) we have where we used Fubini's theorem to swap the order of integrals in the final term (this can be justified more formally using the same arguments as in (5.15) below, so we avoid repetitions here).
For the integral with respect to ds we use the occupation times formula. Let us rewrite g in terms of its positive and negative parts, i.e., g = is a positive Borel-measurable function; then [41, Corollary VI.1.6] holds and for all t ∈ [0, T], P-a.s. Notice that even though σ(z) may vanish when z approaches the endpoints of I, the final integral above is well defined because the initial expression for Q^±_t is always finite. For a.e. ω the mapping defines a (finite) signed measure on [0, T]. Moreover, for simple functions f : [0, T] → R_+ (possibly depending on ω as well) it is easy to check that Thus, by dominated convergence the equality extends to any bounded measurable function. In particular, choosing f(s) = e^{−∫_0^s r(X_u)du} 1_{[0,t]}(s) we deduce with z_s and µ(dz) as in (2.4). Let (τ_n)_{n≥1} be a localising sequence for the local martingale in (5.16). Taking expectations we have E[e^{−∫_0^{τ∧τ_n} r(X_s)ds} g(X_{τ∧τ_n})] = g(x) + E[ Step 1. Here we show that (5.17 Recalling that ω ∈ Ω was also arbitrary, lower semi-continuity holds as in (5.17).
Proof of (4.7). First of all we notice that since X_{t∧τ_ε} ∈ I_ε for all t ≥ 0, then > 0, because the discount factor is bounded from below by e^{−r̄_ε T} with r̄_ε = sup_{x∈I_ε} r(x).
Now we can use the same time-change as the one adopted in Step 2 of the proof of Proposition 2.12 (see (2.18)–(2.19)) with τ_J therein replaced by τ_ε. Thus we get , where W = W^{x_0} depends on x_0, but we can drop this dependence from our notation as x_0 is fixed throughout the proof. Let us denote by (L̃^z_t)_{t≥0} the local time of the process Z^{x_0} at z ∈ I_ε. From the Itô–Tanaka formula we get where the second equality is by [27, Prop. 3.4.8] and the final one is by the Itô–Tanaka formula applied to |Z^{x_0} − z|. So our problem reduces to proving that for simplicity. Let (ϕ_n)_{n∈N} be smooth approximations of ϕ(x) := |x| such that ϕ_n → ϕ uniformly on R, with ϕ′_n(x) → 1_{{x≥0}} − 1_{{x<0}} pointwise and ϕ″_n(x) → 2δ_0(x) in the sense of distributions. Then, taking expectations in the Itô–Tanaka formula and using dominated convergence yields p_{I_ε}(s, x_0, z)ds > 0, using integration by parts. That concludes the proof of (4.7).
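One concrete choice of the smooth approximations used above (ours, one of many admissible mollifications) is ϕ_n(x) = (x² + n^{−2})^{1/2}. A small numerical sketch checks the three convergence properties: uniform convergence with error exactly 1/n, ϕ′_n → sign pointwise, and ϕ″_n carrying total mass 2 (so ϕ″_n → 2δ_0 weakly):

```python
import numpy as np

n = 10
a = 1.0 / n
x = np.linspace(-50.0, 50.0, 2_000_001)   # fine grid containing 0
dx = x[1] - x[0]

phi = np.sqrt(x**2 + a**2)                # phi_n -> |x| uniformly, sup error = 1/n
dphi = x / np.sqrt(x**2 + a**2)           # phi_n' -> 1_{x>=0} - 1_{x<0} pointwise
d2phi = a**2 / (x**2 + a**2) ** 1.5       # phi_n'' >= 0, integrates to 2

sup_err = np.max(np.abs(phi - np.abs(x)))
total_mass = np.sum(d2phi) * dx           # Riemann sum over (a truncation of) R
```

The sup error equals a = 1/n (attained at x = 0), and the mass of ϕ″_n concentrates near 0 while staying equal to 2, which is the weak convergence to 2δ_0 exploited in the proof.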
Proof of Lemma 5.4. For future reference let us denote ) by Corollary 2.9 and Remark 2.10. Then ∂_x v is continuous separately in C_{a,b} and in the interior of the stopping set int(S) ∩ ([0, T) × (a, b)). Hence we only need to look at the regularity across the boundary ∂C_{a,b}. An important observation, which will be used several times below, is that for any δ > 0, thanks to Corollary 3.7.
With no loss of generality we assume ĉ > 0 as the argument for ĉ = 0 is analogous.In this case Corollary 4.
Subtracting the two expressions we obtain First we obtain a lower bound. For the first term in (5.27) we recall that v is bounded on compacts (see (2.10)), we set ∆_ε X_t = X^{x+ε}_t − X^x_t and use the mean value theorem and r ≥ 0 to obtain For the second term in (5.27), recalling that v(t+τ_*, X^x_{τ_*}) = g(X^x_{τ_*}) and v(t+τ_*, X^{x+ε}_{τ_*}) ≥ g(X^{x+ε}_{τ_*}) by optimality of τ_* = τ^{t,x}_*, we obtain for some κ > 0 independent of ε. Hence, Then, substituting the estimates above back into (5.27) we have Thanks to (5.26), and due to the local nature of the argument we are using, we may substitute X with X̃ in all our calculations above. Therefore, there is no loss of generality in assuming that x → X^x is continuously differentiable in all the expressions above (since x → X̃^x is such by, e.g., [39, Ch. V.7]) and, moreover, the process t → ∂_x X^x_t evolves according to In particular, (t, x) → ∂_x X^x admits a continuous modification (which we use in the rest of the proof) and Thanks to the arbitrariness of σ̃ and the explicit formula for ∂_x X^x_t we can also assume with no loss of generality that As in (5.32), we take limits as ε_n → 0 along the same subsequence (ε_n). In order to use dominated convergence we recall (5.31). Moreover, we notice that (s, z) → by dominated convergence and continuity of g′ at x_0. We claim that ∂_x v(t_0, x_0+) = g′(x_0), so that combining the limits (5.37)–(5.40) with (5.32) and (5.35) we obtain g′(x_0) ≤ lim_{n→∞} ∂_x v(t_n, x_n) ≤ ∂_x v(t_0, x_0+) = g′(x_0). That contradicts (5.36) since the limit must be the same along any subsequence.

First notice that, given any interval [a, b] ⊂ I, the boundary attains a global minimum on [a, b] by lower semi-continuity. Then we can define the set of minimisers Σ_{[a,b]} := argmin{c(x) : x ∈ [a, b]}, and Σ_{[a,b]} ≠ ∅ for any a ≤ b. Notice that Σ_{[a,b]} is a closed set by lower semi-continuity of the boundary.

Figure 1. An illustration of the continuation bay in Example 4.1.

b_1(t) := c_1^{−1}(t) and b_2(t) := c_2^{−1}(t), t ∈ [0, T). The functions b_1 and b_2 are continuous with b_1(T) = b_2(T) = K. It may be worth noticing that a one-sided version of the continuation bay appears, by the same argument, also in the American put and call options.

Figure 2. An illustration of the stopping spike in Example 4.2.

Corollary 4.6 implies Σ_{[a,b]} = {a_*}. If a_* = a or a_* = b then the boundary x → c(x) is strictly monotonic on (a, b). In the more general situation when a_* ∈ (a, b) we need to consider separately the intervals (a, a_*] and [a_*, b), where the boundary is strictly decreasing and strictly increasing, respectively. Below we develop our arguments only for x ∈ [a_*, b) as the remaining case follows along the same lines up to obvious changes. Take a < a′ < a_* < b′ < b and for any x ∈ (a′, b′) let τ^x_0 = inf{s ≥ 0 : X^x_s ∉ (a′, b′)}. (5.25) Take σ̃ ∈ C^1_b(R) as an extension of σ outside the interval (a, b). Letting X̃ be the unique strong solution of dX̃_t = σ̃(X̃_t)dB_t, X̃_0 = x, and τ̃^x_0 the exit time of X̃^x from (a′, b′), we have P-a.s. the equalities τ^x_0 = τ̃^x_0 and X^x_{s∧τ_0} = X̃^x_{s∧τ̃_0} for all s ≥ 0. (5.26) We will use this equivalence later on. Fix x ∈ (a_*, b′) with (t, x) ∈ C and t > ĉ. Take ε > 0 such that x + ε < b′ and let ρ_ε = τ^x_0 ∧ τ^{x+ε}_0 Taking τ

Denote by c^{−1} the continuous inverse of c on (a_*, b). Then, for any b″ ∈ (b′, b), on the event {τ_* > ρ_ε} the segment {t + ρ_ε} × [X^x_{ρ_ε}, b″] lies in C and we can use the fundamental theorem of calculus (twice) to obtain Due to the strict monotonicity of the boundary c and the fact that c(x) < T for x ∈ (a, b), there exists δ > 0 such that c(b′) < c(b″) ≤ c(b) − δ < T − δ. Moreover, by definition of ρ_ε, on the event {τ_* > ρ_ε} we also have t + ρ_ε ≤ c(b′). Then, recalling (5.24), on the event {τ_* > ρ_ε} we have sup_{ν∈[X^x_{ρ_ε}, b″]}