Boundary Approximation for Sticky Jump-Reflected Processes on the Half-Line

The Skorokhod reflection was used in 1961 to create a reflected diffusion on the half-line. Later, it was applied to processes with jumps, such as reflected Lévy processes. Just as a Brownian motion is a weak limit of random walks, reflected processes on the half-line serve as weak limits of random walks with switching regimes at zero: one regime away from zero, the other around zero. In this article, we develop a general theory of this regime change and prove convergence to a function with generalized reflection. Our results are deterministic and can be applied to a wide class of stochastic processes. Applications include storage processes, heavy-traffic limits, and diffusions on a half-line with a combination of continuous reflection, jump exit, and a delay at 0.


Introduction
In a classic M/M/1 queue, customers arrive at rate λ and are served at rate µ, with independent exponential inter-arrival and service times. The number of customers in the queue is a birth-and-death discrete-valued Markov process on {0, 1, 2, ...} with birth rate λ and death rate µ. Now, consider a sequence of queues as λ ↑ µ (the heavy-traffic limit). With the correct normalization, one can prove that the scaling limit of these queues is a reflected Brownian motion, possibly with drift. This is a [0, ∞)-valued process which behaves as a classic Brownian motion away from zero and is reflected instantaneously (according to the Skorokhod reflection) at zero. We refer the reader to the classic book [33]. The method of proof is continuous mapping: the Skorokhod mapping takes a function and produces a reflected version of it. Applied to a random walk, this mapping gives us an M/M/1 queue. Applied to a Brownian motion, it gives a reflected Brownian motion. It is well known that a properly rescaled random walk converges weakly to a Brownian motion. Since the Skorokhod mapping is continuous in the corresponding functional space, the heavy-traffic limit of M/M/1 queues is indeed a reflected Brownian motion; see also [3, 19, 33].
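In discrete time, this continuous-mapping picture can be sketched in a few lines: Lindley's recursion is exactly the Skorokhod reflection applied to a random-walk path. A minimal Python sketch (the function name is ours):

```python
def reflect(increments):
    """Lindley's recursion: the discrete Skorokhod reflection of a random walk.

    Given walk increments xi(0), xi(1), ..., returns the reflected path
    y(0) = 0, y(k+1) = max(y(k) + xi(k), 0) -- the queue-length dynamics
    of an M/M/1-type queue embedded at transition epochs.
    """
    y, path = 0.0, [0.0]
    for xi in increments:
        y = max(y + xi, 0.0)  # same increment as the walk while y > 0, pushed up at 0
        path.append(y)
    return path
```

Feeding i.i.d. increments with mean λ − µ < 0 and rescaling diffusively reproduces the heavy-traffic limit described above.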
However, the Skorokhod reflection is far from the only reflection model for a Brownian motion and, more generally, for diffusion or Lévy processes. In addition to the Skorokhod reflection, the following boundary behaviors are possible:
• Absorption: hitting zero and stopping there.
• Delayed/sticky reflection: spending more time at zero than with the classic Skorokhod reflection; the time spent at zero has positive Lebesgue measure.
• Jump reflection: jumping out of zero upward, instead of reflecting continuously.
The classification of reflection modes was done by Feller and Wentzell in [12, 32]. The construction used resolvents, semigroups, their generators, and boundary conditions. A probabilistic approach originated with Itô and McKean [16], where excursion theory was used. Establishing a continuity principle for these more general reflections is a hard problem. Delay models with Skorokhod's reflection are easier to study than jump exit from the boundary; see, for example, [13, 34]. This can be explained by the nice properties of the Skorokhod reflecting map and by the fact that a delay at the boundary is regulated by a local time at the boundary, which can be associated with the regulator (boundary) term of the Skorokhod problem. The construction of processes with jump exit from the boundary is not a trivial task. For example, the corresponding processes were constructed via an elegant use of excursion theory [31]; see also an approach based on a mixture of stochastic differential equations, martingale problem methods, and resolvent analysis [2, 20, 21]. Jump-reflected Brownian motion was constructed in [7, Theorem 3.11] as a function of a Wiener process and a subordinator whose Lévy measure coincides with the 'intensity of exit from 0' of the jump-reflected Brownian motion.
In this article, we consider a general deterministic setting. Instead of a queue, we consider a (deterministic) switch problem: the system behaves differently at the boundary than away from zero. For a limit function, we create a reflection mapping on the space of RCLL functions, which generalizes the classic Skorokhod mapping mentioned above and includes delays and jumps. Convergence results for various stochastic models then follow automatically as an application of the continuous mapping theorem. As an application of the general deterministic convergence result, we derive limit theorems for storage processes, perturbed reflected random walks, and spectrally positive Lévy processes with jump-type reflection at 0.
Let us concisely explain this switch problem; Definition 3 in Section 3 provides a rigorous statement. We have an input function x driving the system in the regular regime, and a regulator F for the boundary regime. The output function y evolves together with x (that is, x and y have the same increments) as long as y does not get below −δ. After that, we pause x and run y as F, until y gets above 0. Then we run y again as x from the place where we paused x previously, until y gets below −δ. Then we run y again as F from the place where we paused F last time, and so on.
Informally, this can be illustrated as follows. There is an Internet shop which stores goods and gradually sells them to customers. When the inventory reaches zero (or even some negative level, i.e., there are more orders than goods), the shop switches to a critical regime: it still accepts orders, but starts to order in larger batches. When the shop again has enough on hand, it stops ordering in larger batches and continues its usual functioning. Our goal in this article is to show that the solution to the switch problem converges to the solution of the generalized Skorokhod problem as the threshold δ between the critical and usual regimes converges to 0. We stress that we show this in the general case for deterministic functions. This creates the framework for proving such weak convergence results for regulated stochastic processes.
Moreover, we will obtain continuous dependence on the controls in the usual and critical regimes in the following sense. We take two sequences of functions: x_n → x_0 and F_n → F_0. Next, we take a sequence of non-negative numbers δ_n → 0 and another sequence ρ_n → ρ ∈ [0, ∞] of time-normalizing constants. For each n, we solve the switch problem with input function x_n, threshold −δ_n, and regulator F_n slowed down by the factor ρ_n.
If F_0 is strictly increasing, then, under some additional technical assumptions, the solution to this switch problem converges to the reflected process as in (6), with jump reflection governed by F_0, driving function x_0, and the time delay at 0 described by (4). The critical values 0 and ∞ of the parameter ρ correspond to zero delay and absorption at 0, respectively.

1.1. Notation. Let R_+ := [0, ∞). Let C_T be the space of continuous functions [0, T] → R and C the space of continuous functions R_+ → R, with the topology of uniform convergence on compact sets. For T > 0, let D_T be the Skorokhod space of right-continuous functions with left limits (abbreviated RCLL, or càdlàg in French) [0, T] → R, and D the Skorokhod space of RCLL functions x : R_+ → R. We endow the spaces D_T, D with Skorokhod's J_1-topology; see, for example, [33]. We denote weak convergence of stochastic processes (in C or D) by X_n ⇒ X. We define a^+ := max(a, 0) and a^− := max(−a, 0) for a ∈ R. For a set A, we let Ā be its closure. Let mes(A) be the Lebesgue measure of A. For T > 0, let Λ_T be the set of continuous, one-to-one, strictly increasing functions from [0, T] onto itself. For an RCLL function h : R_+ → R, we define ∆h(t) := h(t) − h(t−). If h has a jump at time t, this is the size of this jump; if h is continuous at time t, this is zero.
1.2. Organization of this article. In Section 2, we complete the discussion of the three reflection modes (Skorokhod's reflection, jump-type reflection, and delay). We state rigorous definitions of the switching process in Section 3. Next, we state the main result: Theorem 1, which includes both cases: sticky and generalized jump-type reflected processes. In Section 4, we present applications of our results to various stochastic processes, including the ones from previous articles. Section 5 is devoted to the proof of Theorem 1. The Appendix contains proofs of a few technical lemmas.

1.3. Acknowledgments. The first author acknowledges partial support by the National Research Foundation of Ukraine, project 2020.02/0014 'Asymptotic regimes of perturbed random walks: on the edge of modern and classical probability'. The first author was also supported by the project 'Mathematical modelling of complex dynamical systems and processes caused by the state security' (Ukraine, Reg. No. 0123U100853). The second author thanks his Department for a welcoming and positive atmosphere.

Background: Skorokhod Reflection and Boundary Controls
In this section, we recall basic constructions and properties of reflected processes in order to better understand the nature of the result that we obtain.
A reflected Brownian motion can be constructed by simply taking the absolute value |W| of a Brownian motion W. However, this construction cannot be called 'natural' for other stochastic processes, in particular for non-symmetric Lévy processes. In two 1961 articles [27, 28], Anatoliy Skorokhod developed a method to reflect any continuous function B : [0, ∞) → R, deterministic or stochastic, with B(0) ≥ 0. He found a pair of continuous non-negative functions X, L : [0, ∞) → R such that

(1) X(t) = B(t) + L(t), t ≥ 0,

where L is non-decreasing and can increase only when X = 0; finally, L(0) = 0. In this problem (1), the function B is called the driving or input function. See an example in Figure 1.
The function X has the same increments as B while X is positive, and the function L pushes up only at instants when X = 0 and has no effect otherwise. If B is a Brownian motion, then X has the same distribution as |B|. Thus, Skorokhod's definition is consistent with the naive approach to the notion of a reflected Brownian motion. Moreover, it is well known that L is the symmetric local time of X at 0. Later, Skorokhod's problem (1) was generalized to functions B ∈ D; see, for example, [9, 30, 18]. The solution is given by the formula

(2) X(t) = B(t) + m(t), where m(t) := max(0, −min_{0≤s≤t} B(s)).

The corresponding Skorokhod mapping B → X is continuous in the Skorokhod space of RCLL functions (and is continuous on the subspace of continuous functions). This allows us to prove functional limit theorems for heavy-traffic limits as an application of the continuous mapping theorem; see, for example, [14, 33]. For instance, Lindley's recursion is nothing else but a solution of Skorokhod's reflecting problem for random walks. Assume again that B is a Brownian motion, so that X is a reflected Brownian motion. It is well known that X spends zero time at 0 almost surely; that is, the Lebesgue measure of this time is zero:

(3) mes({s ≥ 0 | X(s) = 0}) = 0 a.s.
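Formula (2) is easy to evaluate along a discretized path; a minimal Python sketch (names ours), which agrees with the recursive Lindley-type reflection of the same increments:

```python
def skorokhod_map(x):
    """Explicit Skorokhod reflection (2): X(t) = B(t) + m(t),
    with m(t) = max(0, -min_{s<=t} B(s)), evaluated along a sampled
    path x with x[0] >= 0."""
    running_min, out = 0.0, []
    for v in x:
        running_min = min(running_min, v)       # min_{s<=t} B(s), capped at 0
        out.append(v + max(0.0, -running_min))  # add the regulator m(t)
    return out
```

The output has the same increments as the input wherever it is positive and is pushed up exactly when it would cross 0.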
Hence, the Skorokhod reflection is instantaneous. There are other types of reflection and boundary behavior. A simple rule is alternatively called absorbing, or stopping: when the Brownian motion, or any other input function (deterministic or stochastic), hits zero, it simply stops and stays constant after that. Another, more complicated rule is sticky reflection. This version of a reflected Brownian motion is delayed when it hits zero. Fix a parameter ρ > 0 called the delay rate and consider the function A(t) = t + ρL(t), where L is from (1). It is a continuous strictly increasing function and therefore has an inverse A^{−1}. Plugging this inverse into the classic reflected function X, we get:

(4) X_ρ(t) = X(A^{−1}(t)), t ≥ 0.

As a result, this process has the same excursions as X (but shifted in time) and spends positive time at zero, so (3) is no longer true. The larger ρ is, the longer the delay. As ρ → ∞, we get A(t) → ∞, which corresponds to the absorbed process. When ρ = 0, we are back to the classic instantaneous reflection. We shall call this reflection delayed, as opposed to the instantaneous classic Skorokhod reflection. However, this term should not be misunderstood: the sticky reflected Brownian motion still leaves zero instantaneously when it hits zero, in the following sense. The set of zeros of X is a closed nowhere dense set a.s., so for any t such that X(t) = 0 and every ε > 0 there exists an s ∈ (t, t + ε) with X(s) > 0.
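The time change (4) can be illustrated on a grid: given sampled paths of X and L, we build A(t) = t + ρL(t) and read the delayed path off X at A^{−1}(t), with left-constant (càdlàg-style) interpolation. A hedged sketch, names ours:

```python
import bisect

def delayed_path(X, L, rho, dt=1.0):
    """Sticky reflection via the time change (4): X_rho(t) = X(A^{-1}(t)),
    A(t) = t + rho * L(t), on the grid t = k*dt; X is read at the last grid
    point s with A(s) <= t (left-constant interpolation)."""
    A = [k * dt + rho * L[k] for k in range(len(X))]
    out = []
    for k in range(len(X)):
        i = bisect.bisect_right(A, k * dt) - 1  # largest i with A[i] <= t
        out.append(X[max(i, 0)])
    return out
```

With L increasing only at the zeros of X, the delayed path dwells at 0 for roughly ρ·∆L extra time units before each excursion, while the excursions themselves are unchanged.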
The process L is a local time at 0 of the reflected Brownian motion X. Hence A is a continuous additive functional of X, and the general theory of Markov processes implies that X_ρ is a strong Markov process. It can be proved that a sticky reflected Brownian motion is a (weak) solution to the following system of stochastic differential equations:

(5) dX_ρ(t) = 1{X_ρ(t) > 0} dW(t), 1{X_ρ(t) = 0} dt = ρ dL_ρ(t),

where W is a standard Brownian motion and L_ρ is the local time of X_ρ at 0. Note that the increments of X_ρ coincide with the increments of W when X_ρ is positive. However, the reader should be careful: as shown in [10], there is no strong solution to the system of stochastic differential equations (5). This is a very subtle and unexpected observation, because the process X_ρ defined by (4) and (2) is a function of B.
Like the Skorokhod reflection map (2), formula (4) can be used for RCLL functions or stochastic processes, including Lévy processes. Moreover, if X is a reflection of a spectrally positive Lévy process or a reflected diffusion, then the corresponding process L is a local time of X (up to a multiplicative constant). So the process X_ρ defined via the time change (4) is naturally called the delay at 0 of the Markov process X. See [1, 4] for further studies of the sticky reflected Brownian motion, and also [25] on the construction of sticky Lévy processes (without reflection).
The Skorokhod map B → X and the delayed Skorokhod reflection B → X_ρ describe continuous exits from 0: a discontinuity at the instant of exit from 0 can arise only from jumps of the driving process B, not from the reflection itself.
There is a discontinuous jump-type reflection that corresponds to non-local Feller–Wentzell boundary conditions in the semigroup theory of diffusion processes, or to a jump entrance law in Itô's excursion theory. The first author in [23] proposed to consider a (deterministic) jump reflection problem, where we replace L in (1) with F(L). Here F : [0, ∞) → [0, ∞) is a given strictly increasing RCLL function with F(0) = 0 and F(∞) = ∞:

(6) X(t) = B(t) + F(L(t)), t ≥ 0,

where L is again a non-decreasing function that may increase only when X equals 0. More generally, in [23] it was shown that for every continuous B there is a unique solution to this modified equation (6); see formula (9) in Lemma 1 below for an explicit formula for the solution. This explicit formula (without a formulation of a reflection problem) was used in [7] for a construction of Feller diffusions on a half-line, where B was a Brownian motion and F a subordinator. Finally, note that this jump reflection can be combined with delay; then we get sticky jump reflection. We do this in two steps, much like for the sticky reflection X_ρ above. First, we solve equation (6) and obtain the jump reflection X without delay. Then we fix ρ > 0 and use L and X to construct X_ρ as in (4). The same trichotomy as above is present here: for ρ = 0, we are back in the case of jump reflection without delay. For ρ ∈ (0, ∞), this is jump reflection with delay. Finally, as ρ → ∞, we get the absorbed process: when it hits zero, it remains there forever.

Definitions and the Main Result
3.1. Instantaneous jump reflection and delays. In this subsection, we provide explicit formulas for solutions of the reflection problems, both classic (1) and with jumps (6). Although we discussed these reflection modes in the Introduction, we state the definitions again for completeness.
Definition 1. Consider functions x, F ∈ D such that x(0) ≥ 0, F is strictly increasing, F(0) = 0, F(∞) = ∞. A solution to the generalized Skorokhod problem is a pair of functions y, l ∈ D with the following properties: y(t) ≥ 0 for all t ≥ 0; l is a non-decreasing function; and the following equality holds:

(7) y(t) = x(t) + F(l(t)), t ≥ 0.

Moreover, the function l can increase only when y = 0, i.e.,

(8) ∫_0^∞ 1{y(s) > 0} dl(s) = 0.

We will call the function x the driving noise, y the reflected process, F the regulator, and l the boundary term, and we denote y = S(x, F).
Lemma 1. If F is continuous or x has no negative jumps, then there is a unique solution to (7), and this solution is given by the formula

(9) y(t) = x(t) + F(F^{−1}(m(t))), l(t) = F^{−1}(m(t)),

where m(t) := max(0, −min_{0≤s≤t} x(s)) and F^{−1}(t) := inf{s ≥ 0 : F(s) > t}.

For F(t) ≡ t, we get the classic Skorokhod reflection problem with the unique solution y(t) = x(t) + m(t); see [30, 18]. If F is continuous, we are back to the classic Skorokhod reflection; in a way, the case of a continuous F is no different from the case F(t) = t for all t ≥ 0. Indeed, F ∘ F^{−1}(t) = t for strictly increasing and continuous F. Hence y(t) = x(t) + m(t) again.
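Formula (9) can be checked numerically. The sketch below (names ours) computes the right-continuous inverse F^{−1}(t) = inf{s : F(s) > t} by bisection and applies y = x + F ∘ F^{−1}(m); a jump of F across the level m produces the jump-type exit above 0:

```python
import math

def right_inverse(F, t, s_max=100.0, tol=1e-9):
    """F^{-1}(t) = inf{s in [0, s_max] : F(s) > t} for non-decreasing F, by bisection."""
    lo, hi = 0.0, s_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) > t:
            hi = mid
        else:
            lo = mid
    return hi

def jump_reflect(x, F):
    """Sketch of formula (9): y(t) = x(t) + F(F^{-1}(m(t))) along a sampled
    path x, where m(t) = max(0, -min_{s<=t} x(s))."""
    m, out = 0.0, []
    for v in x:
        m = max(m, -v, 0.0)
        out.append(v + F(right_inverse(F, m)))
    return out

# Example regulator with unit jumps at the integers: F(s) = s + floor(s)
F = lambda s: s + math.floor(s)
y = jump_reflect([0.0, -1.5, -1.5], F)
# the driving path falls to -1.5; F jumps over the level 1.5, so y exits above 0
```

Here the regulator is our illustrative choice; any strictly increasing RCLL F with F(0) = 0 can be plugged in.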
Remark 1. The function m is continuous, since x has no negative jumps. The function F^{−1} is continuous and non-decreasing, since F is strictly increasing; see [33, Lemma 13.6.5]. Of course, F^{−1} is the standard inverse function if F is strictly increasing and continuous.
Remark 2. The condition that x has no negative jumps or F is continuous is important. Assume x has, in fact, a negative jump at the point t. Then we can construct a setting in which there is no solution. Assume y(t−) > 0 for some t, but ∆x(t) < 0 and y(t−) + ∆x(t) < 0. At time t, the jump of x needs to be compensated by the function F(l). If F has jumps, then it may be impossible to find a compensation such that y(t) = 0.
For continuous x, Lemma 1 was shown in [23]. The proof of the general case is postponed until the Appendix. For the convenience of readers, we quote a result about the composition F ∘ F^{−1} used in (9), also taken from [23] and illustrated in Figure 2.
Lemma 2. Take a function F ∈ D that is increasing with F(0) = 0 and F(∞) = ∞, and consider the set of levels lying strictly inside the jumps of F, i.e., the union of the intervals (F(s−), F(s)) over the jump points s of F. It is an open set, and therefore a countable union of intervals; on the closure of each such interval, the composition F ∘ F^{−1} equals the right endpoint F(s).

We stress that a delayed Skorokhod reflection is not an alternative to jump reflection. Rather, these are two characteristics of a reflection: it can be with or without delay, and with or without jumps (that is, with F continuous or discontinuous). All four combinations are possible. In addition, of course, we have absorbed processes, but there F does not matter anymore: it regulates only the behavior at zero, and the resulting process stays at zero forever after it hits zero.

The switch problem.
Here we define the regime-switching problem in the general case, for arbitrary deterministic RCLL functions. This can be applied to Brownian motion, Lévy processes, or any other stochastic processes.

Definition 3. Fix a gap δ > 0, the driving function x ∈ D with x(0) ≥ 0, and the regulating function F ∈ D with F(0) = 0. A solution to the switch problem is a function y ∈ D together with two disjoint sets A and B (each a union of intervals) covering R_+, with T_A(t) := mes(A ∩ [0, t]) and T_B(t) := mes(B ∩ [0, t]) (and therefore T_A(t) + T_B(t) ≡ t), which satisfy

(10) y(t) = x(T_A(t)) + F(T_B(t)), t ≥ 0.

Remark 4. From (10) it follows that y(t) − F(T_B(t)) is constant on each interval in B, and y(t) − x(T_A(t)) is constant on each interval in A.
Let us explain this switch problem in plain English.
(1) We start with the input function x; the output function y equals x until it reaches below −δ. This is the normal regime A. Note that the function x can have jumps, so the input function x (and with it the output function y) might reach (−∞, −δ] via a negative jump rather than a continuous path. Assume this happens at time ρ_1. This is the first piece of the input function.
(2) Then we switch to the boundary regime B. We use the regulating function F, or simply the regulator, starting from zero argument, to increase the output function y until it reaches above 0. The increments of the output function y coincide with the increments of the regulator F. Again, note that the output function y can reach [0, ∞) by a jump rather than a continuous movement. We stop at time τ_1. So the regulator F stops at τ_1 − ρ_1. This is the first piece of the regulator.
(3) Next, we switch to the normal regime A again. We govern the output function y by the input function x: that is, the increments of the input and output functions coincide. We use the second piece of the input function x, starting from the point where we finished the first piece, in part 1. This continues until, as in part 1, the output function hits (−∞, −δ]. Assume this happens at the moment ρ_2.
(4) Then we switch to the boundary regime B again, as in part 2. We govern the output function y by the regulator F: the increments of the regulator and the output function coincide. We use the second piece of the regulator F, which starts at the time when the first piece ended. We continue until the output function hits [0, ∞), and then switch to the normal regime again.
To summarize, we cut the graphs of the input function x and the regulator F into pieces: we attach the first piece of x, then the first piece of F, then the second piece of x, then the second piece of F, and so on. By construction, for any pair (x, F) ∈ D², and any δ > 0, there exists a unique solution y = G_δ(x, F).
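For sampled increments of x and F, the construction above is a short loop. A hedged discrete-time sketch of G_δ(x, F) (names ours; one array entry per time step, so the 'pieces' of x and F are consumed increment by increment):

```python
def switch_solve(dx, dF, delta):
    """Discrete switch problem G_delta(x, F): follow increments of x in the
    normal regime A until y <= -delta, then increments of F in the boundary
    regime B until y >= 0, resuming each input where it was paused."""
    y, path = 0.0, [0.0]
    i = j = 0            # read positions in the x-piece and the F-piece
    regime = 'A'
    while i < len(dx):
        if regime == 'A':
            y += dx[i]; i += 1
            if y <= -delta:
                regime = 'B'          # output hit (-inf, -delta]
        else:
            if j >= len(dF):
                break                 # regulator exhausted
            y += dF[j]; j += 1
            if y >= 0.0:
                regime = 'A'          # output hit [0, inf)
        path.append(y)
    return path
```

For step functions this loop also makes sense with delta = 0, matching the cases listed below for G_0(x, F).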
See Figure 3 for an example with a piecewise linear input function x, another piecewise linear function as the regulator F, and the resulting output, where δ = 0.5.
Generally, we cannot consider the switch problem for δ = 0 and switch the regime when the process enters and exits (0, ∞) (or enters and exits [0, ∞)). The only difficulty is treating the switch at 0. For example, it is unclear how to interpret the definition if regime A pushes down and regime B pushes up, or if the functions x and F behave like excursions of a Brownian motion, having no intervals of monotonicity. We will define the solution to the switching problem G_δ(x, F) with δ = 0 whenever contradictions in the definition do not appear. The following cases are examples:
(a) x and F are step functions with finitely many jumps in any [0, t]; regime A is selected if y(t) > 0; regime B is selected if y(t) ≤ 0.
(b) F is a non-decreasing step function with finitely many jumps in any [0, t]; regime A is selected if y(t) > 0; regime B is selected if y(t) ≤ 0.
(c) Same as above, where regimes A or B are selected if y(t) ≥ 0 or y(t) < 0, respectively.
(d) Any function y constructed from pieces of x and F under the condition that y never hits 0 and has finitely many crossings of 0 during any [0, t].
We will apply the switch problem G_0(x, F) to the cases when: (a) x and F are independent compound Poisson processes; (b) x and F are built from independent random walks (x_n) and (F_n); (c) x is a Lévy process, and F is a jump-type subordinator with finite Lévy measure.
Remark 5. Whenever we consider the problem y = G_0(x, F), we will assume that x and F are such that the switch problem G_0(x, F) is well defined.

Main results.
For a non-decreasing function f : R_+ → R, we say that s ∈ R_+ is a growth point if f(t) > f(s) for all t > s and f(t) < f(s) for all t < s.
Theorem 1. Take two sequences (δ_n) and (̺_n) of real numbers such that for every n we have δ_n ≥ 0 and ̺_n > 0, with δ_n → 0 and ̺_n → ̺ ∈ [0, ∞]. Consider the sequence (y_n) ⊆ D of solutions of the corresponding switch problems with input x_n, threshold −δ_n, and regulator F_n slowed down by the factor ̺_n. Assume that:
(a)–(c) …;
(d) x_0 does not have negative jumps;
(e) if α ≥ 0 and t ≥ 0 are such that F_0(α) = F_0(α−) = m_0(t), where m_0 is defined in (11), then t is a growth point of m_0.
Then y_n → y_0 in D, where y_0 depends on ̺:
(1) Classic reflection: ̺ = 0. Then y_0 = S(x_0, F_0).
(2) Sticky (delayed) reflection: ̺ ∈ (0, ∞). Then y_0 = S(x_0, F_0)(A_̺^{−1}), where A_̺(t) = t + ̺F_0^{−1}(m_0(t)).
(3) Absorption: ̺ = ∞. Then y_0 is the process x_0 absorbed at 0.
Corollary 1. Let (x_n), (F_n) be two sequences of RCLL processes that for almost every ω satisfy assumptions (a), (c), (d), and (e) of Theorem 1. Assume also that the sequences (δ_n), (̺_n), and (y_n) are defined as in Theorem 1, and that (x_n, F_n) ⇒ (x_0, F_0). Then we have the convergence in distribution

(12) y_n ⇒ y_0,

where y_0 depends on ̺ as in Theorem 1.
Proof. By the Skorokhod representation theorem, there are copies (x̃_n, F̃_n), equal in distribution to (x_n, F_n), n ≥ 0, such that the convergence holds almost surely. Hence we almost surely have the convergence of the corresponding solutions ỹ_n → ỹ_0 as n → ∞. This latter convergence implies (12).

Examples and Applications
4.1. Driving Brownian motion. Assume that x_n = w, n ≥ 1, where w is a Brownian motion; ̺_n = 1, n ≥ 1; F_n(t) = at, where a > 0; and (δ_n) ⊂ (0, ∞) is any sequence of positive numbers that converges to 0. That is, the process y_n is a continuous process that moves like a Brownian motion before hitting −δ_n, then switches regime and moves up with constant speed until hitting 0, then switches back to a Brownian motion until hitting −δ_n, etc. We have x_0(t) = w(t), F_0(t) = at, F_0(F_0^{−1}(t)) = t. Then S(x_0, F_0) = S(x_0) = S(w) = w + m_0 is the reflected Brownian motion, and A_̺(t) := t + ̺F_0^{−1}(m_0(t)) = t + a^{−1}m_0(t). Hence y_0 = (w + m_0)(A_̺^{−1}) is a sticky reflected Brownian motion. If instead F_n(t) = a_n t with lim_{n→∞} a_n = +∞, i.e., the regime below 0 pushes up strongly, then the limit process is the usual reflected Brownian motion without any delay.
Note that the process y_n has the same distribution as a solution to the corresponding stochastic equation with Y_n(0) = 0, where the sets A_n and B_n are the time sets of the normal and boundary regimes, respectively.

4.2. Storage processes. Take Poisson processes N_{λ_n}(t) and Ñ_{λ̃_n}(t) with intensities λ_n and λ̃_n, respectively. Take sequences of non-negative i.i.d. random variables (ξ_k^{(n)}) and (η_k^{(n)}). We will assume that all processes and sequences are jointly independent. Consider two compound Poisson processes x_n and F_n with positive jumps, where x_n has an additional negative drift term:

x_n(t) = −r_n t + Σ_{k=1}^{N_{λ_n}(t)} ξ_k^{(n)}, F_n(t) = Σ_{k=1}^{Ñ_{λ̃_n}(t)} η_k^{(n)}.

Note that F_n is a jump-type subordinator. Then the process y_n(t) = G_0(x_n, F_n) is a Markov storage process whose behavior can be described as follows: (1) if y_n(t) > 0, then it has a negative drift at rate r_n and jumps up at a Poisson clock with intensity λ_n, the value of the k-th jump being ξ_k^{(n)}; (2) if y_n(t) = 0, then y_n stays at 0 for an exponential time Exp(λ̃_n) and then jumps up with the distribution of η_1^{(n)}. Assume that there are normalizing constants γ_n such that condition (13) holds: the properly rescaled processes x_n converge to µt + σw(t), where w is a standard Brownian motion. Suppose that there exists a limit γ_n/λ̃_n → ̺ ∈ [0, ∞]. Without loss of generality, we may assume that Ñ_{λ̃_n}(t) = N(λ̃_n t), where N is a fixed Poisson process with intensity 1. It is well known that N(γ_n t)/γ_n → t uniformly on every [0, T] a.s. Thus the assumptions hold with ̺_n = γ_n/λ̃_n.
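This storage dynamics is easy to simulate; in the sketch below all names are ours, and the Exp(1) jump sizes are placeholders for the distributions of ξ_k^{(n)} and η_k^{(n)}:

```python
import random

def storage_path(T, r, lam, lam_tilde, seed=1):
    """Sketch of the storage process: while positive, drift down at rate r
    with upward jumps at Poisson rate lam; at 0, wait an Exp(lam_tilde) time
    and then jump up (jump-type exit from 0). Jump sizes are Exp(1) placeholders."""
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    path = [(0.0, 0.0)]
    while t < T:
        if x > 0.0:
            tau = rng.expovariate(lam)       # time to the next upward jump
            if x - r * tau <= 0.0:           # level 0 is reached first
                t += x / r
                x = 0.0
            else:
                t += tau
                x += -r * tau + rng.expovariate(1.0)
        else:
            t += rng.expovariate(lam_tilde)  # sticky wait at 0
            x = rng.expovariate(1.0)         # jump out of 0
        path.append((t, x))
    return path
```

The limit regimes in the text correspond to how the waiting rate lam_tilde scales against the normalization γ_n.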
The application of Corollary 1 with δ n := 0 implies the following result.
(2) If ̺ > 0, then the limit of y_n is the sticky reflection of µt + σw(t) at 0.
In addition, a particular case of ξ was considered in [13]; in this case, (13) is satisfied with γ_n = λ_n/r_n. Another example when (13) holds is the following one: …, where (ζ_k) are non-negative independent identically distributed random variables, Eζ_k = 1, and (γ_n) is any sequence of positive numbers such that γ_n → +∞.
If the (rescaled) sequence (F_n) converges to an increasing subordinator F_0(t), t ≥ 0, then the limit process will be a diffusion with jump-type exit from 0. The only non-triviality is to verify condition (e) of Theorem 1. This will be done in a more general case in the next example.

4.3. Convergence to a reflected Lévy process with a delay at 0.
Let a sequence of positive numbers (ρ_n) and sequences of stochastic processes (x_n), (F_n) be such that:
(1) …;
(2) x_0 is a Lévy process without negative jumps;
(3) F_0 is an increasing subordinator;
(4) x_0 and F_0 are independent;
(5) the process T_0 := m_0^{−1} is a subordinator (possibly killed at an exponential time); see [5, Theorem 1, p. 189].
A point t is not a point of growth of m_0 if and only if m_0(t) is a jump point of T_0. At any fixed (non-random) point α ≥ 0, the function T_0 is almost surely continuous at α. Since the processes F_0 and T_0 are independent and the set of jumps of F_0 is at most countable, condition (e) of Theorem 1 holds almost surely. The application of Corollary 1 implies the convergence in distribution (12). It is natural to say that the process S(x_0, F_0)(A_̺^{−1}) is the process x_0 with jump-type reflection at 0 and a delay. Similarly to [7, Chapter II.3 (c)], one may see that the process S(x_0, F_0) is a Markov process and that L(t) = F_0^{−1}(m_0(t)), t ≥ 0, is its local time at 0 when x_0 and F_0 are independent.
4.4. Perturbed random walks. Let (ξ_k) be a sequence of independent identically distributed mean-zero random variables with finite variance σ² > 0. Consider the random walk S_ξ(n) := ξ_1 + ... + ξ_n, with S_ξ(0) := 0. We extend S_ξ to the non-negative half-line by S_ξ(t) := S_ξ([t]) for t ≥ 0. By the Donsker theorem, the rescaled walk S_ξ(n·)/√n converges in D to σw, where w is a standard Wiener process. Assume that a non-negative random variable η belongs to the domain of attraction of the β-stable law with β ∈ (0, 1). Consider a Markov chain (X(n)) whose transitions follow steps of ξ above zero and steps of η below zero. We will interpret X as a perturbation of S_ξ below 0. Note that X has the same distribution as a solution of the two-phase system with δ = 0, x = S_ξ, and F = S_η. Here S_η(n) = η_1 + ... + η_n, S_η(t) := S_η([t]) for t ≥ 0, and the sequences (η_k) and (ξ_k) are independent. Note that the natural scaling for S_η is not √n as in the Donsker theorem: there is a sequence (a(n)), regularly varying at infinity with index 1/β, such that the rescaled sums converge in D to U_β, a β-stable subordinator, i.e., a non-decreasing Lévy process with Laplace transform E[exp(−λU_β(t))] = exp(−tλ^β) for t, λ ≥ 0. It can be seen that there is a sequence (b(n)), regularly varying at infinity with index β/2, matching this scaling with the Donsker scaling of S_ξ. Corollary 1 then implies the weak convergence (14) of the Donsker rescaling of X, where w and U_β are independent and m(t) = max_{s∈[0,t]}(w(s))^−. Feasibility of condition (e) of Theorem 1 follows from the reasoning of the previous example.
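The chain can be simulated directly; in the sketch below (names ours) ξ is standard normal and η is Pareto with parameter β ∈ (0, 1), which is non-negative and lies in the domain of attraction of the β-stable law:

```python
import random

def perturbed_walk(n, beta=0.5, seed=0):
    """Perturbed random walk: step by a mean-zero xi while above 0,
    and by a non-negative heavy-tailed eta at or below 0."""
    rng = random.Random(seed)
    X = [0.0]
    for _ in range(n):
        x = X[-1]
        if x > 0.0:
            x += rng.gauss(0.0, 1.0)        # xi: mean zero, finite variance
        else:
            x += rng.paretovariate(beta)    # eta >= 1: infinite mean for beta < 1
        X.append(x)
    return X
```

Rescaling such trajectories diffusively exhibits the jump-type exits from 0 described by the limit in (14).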
Remark 9. If we multiply each b(n) by a constant C > 0, then the normalization changes accordingly. However, the result will be unchanged because of the scaling (self-similarity) property of the β-stable subordinator U_β. Note that the same result is also true if the transition probabilities for X are
P(X(n+1) = y | X(n) = x) = P(ξ = y − x) for x > 0, and P(X(n+1) = y | X(n) = x) = P(η = y − x) for x ≤ 0.
Convergence (14) was proved in [15], and for a particular case in [24]. The same convergence was obtained in [15] for the Donsker scaling limits of sequences that also satisfy the assumptions of Corollary 1 (the corresponding additional reasoning can be found in [15]). Some ideas used in this paper are taken from [15, 24], but only now does it become clear how the different scalings for (ξ_k) and (η_k) interplay and give a limit for the Donsker scaling of the perturbed random walk (X(n)).
Remark 10. Consider sequences (X_l(k))_{k≥0} that have the same transition probabilities as (X(k))_{k≥0} but different initial values, such that X_l(0)/√l ⇒ ζ as l → ∞. It is easy to see that the analogous convergence can be proved for t ≥ 0, where the processes w, U_β, and the random variable ζ are independent.

Proof of the Main Result
This section is organized as follows. In Subsection 5.1, we state an upper and a lower bound for the solution of the switch problem; the proofs of these two lemmas are postponed until the Appendix. In Subsection 5.2, we give four technical convergence lemmas used in the proof of Theorem 1. The first two of these four lemmas, similarly, have their proofs in the Appendix; the other two are quoted from other sources, so we do not give their proofs. In the next three subsections, we prove Theorem 1 for the three cases: 0 < ̺ < ∞, ̺ = 0, and ̺ = ∞.

5.1. Estimates for the switch problem.
Take a constant δ > 0 and functions x, F as in the definition of the switch problem. Let y = G_δ(x, F) be the solution, and A, B the corresponding sets. Define the running maximum m(t) := max(0, −min_{0≤s≤t} x(s)). We state the two lemmas which together form the basis for the proof.

Lemma 3. Let y = G_δ(x, F). Then for every t ≥ 0, we have: …

Lemma 4. Let y = G_δ(x, F). Then for every t ≥ 0, we have: …

5.2. Preliminary results. The next two lemmas are technical convergence results, with (simple) proofs postponed until the Appendix.
Lemma 5. If x_n → x_0 in D, where x_0 has only positive jumps, then for any …

Recall a classical characterization of convergence in the Skorokhod space from [11, Chapter 3, Proposition 6.5, page 125]:

Lemma 7. We have x_n → x_0 in D if and only if, for any non-negative t_n → t_0:
(1) all limit points of (x_n(t_n))_{n≥1} are either x_0(t_0) or x_0(t_0−);
…

The following lemma easily follows from the previous one.
Lemma 8. Assume x_n → x_0 in D and y_n → y_0 in D. If for every point t ≥ 0 at least one of the two functions x_0 and y_0 is continuous at t, then x_n + y_n → x_0 + y_0 in D.

5.3. Proof of Theorem 1 for 0 < ̺ < ∞.
Define A n and B n to be the sets A and B for the nth switch problem.

Step 1. The sequences (T An ), (T Bn ) are pre-compact in C T for every T > 0. Indeed, T An and T Bn are globally Lipschitz continuous with Lipschitz constant 1; therefore, the sequence (T An ) is equicontinuous in C T for every T > 0. By the Arzelà-Ascoli theorem, it is pre-compact; the same applies to (T Bn ). Take a subsequence (n ′ ) along which convergence (16) holds. Recall that m 0 is continuous, see Remark 1. Thus we have the uniform convergence m n → m 0 on any [0, T ]. Combining this uniform convergence with the observation above, we get uniform convergence on any [0, T ]. Combining this argument with Lemmas 4, 5, 6 and the assumption that F 0 is non-decreasing, we get the desired bounds. Here we used the fact that T An (t) ≤ t and T Bn (t) ≤ t.

Step 2. Since S 0 (t) + S 1 (t) = t, we can rewrite this as ̺ −1 (t − S 0 (t)) = F −1 0 (m 0 (S 0 (t))). Therefore,

(20) S 0 (t) + ̺F −1 0 (m 0 (S 0 (t))) = t, t ≥ 0.

By Remark 1, the function A(s) := A ̺ (s) := s + ̺F −1 0 (m 0 (s)) is continuous, strictly increasing, A(0) = 0, and A(∞) = ∞. Therefore, S 0 (t) = A −1 (t) is continuous and strictly increasing. Combining the above formulas, we get:

Step 3. It follows from Steps 1 and 2 that there exists a subsequence (n ′ ) such that T A n ′ → A −1 in C. Similarly, we can show that every subsequence (ñ) has its own sub-subsequence (ñ ′ ) such that T A ñ ′ → A −1 in C. Since the space C is metric, this implies that T An → A −1 in C. Since T Bn (t) = t − T An (t), we also have the convergence T Bn → S 1 = ̺F −1 0 (m 0 (A −1 )). It follows from [33, Theorem 13.2.2] that

(21) x n • T An → x 0 • A −1 in D,

because A −1 is continuous and strictly increasing.
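Equation (20) defines S 0 implicitly as the inverse of the continuous, strictly increasing function A(s) = s + ̺F −1 0 (m 0 (s)) with A(0) = 0, so S 0 (t) can be computed numerically by bisection. The sketch below uses hypothetical concrete choices m 0 (s) = √s and F 0 (u) = u², giving F −1 0 (m 0 (s)) = s^{1/4}; these functions are our illustration only, not taken from the text.

```python
def A(s, rho):
    # A(s) = s + rho * F0^{-1}(m0(s)); with the illustrative choices
    # m0(s) = sqrt(s), F0(u) = u**2 we get F0^{-1}(m0(s)) = s ** 0.25.
    return s + rho * s ** 0.25

def S0(t, rho, tol=1e-12):
    # A is continuous and strictly increasing with A(0) = 0 and A(t) >= t,
    # so the root of A(s) = t lies in [0, t]; locate it by bisection.
    lo, hi = 0.0, t
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if A(mid, rho) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# S0 then satisfies the fixed-point identity (20):
#     S0(t) + rho * F0^{-1}(m0(S0(t))) = t.
s = S0(5.0, rho=2.0)
```

Any other continuous, strictly increasing choice of F 0 and continuous non-decreasing m 0 would work the same way, since bisection only uses the monotonicity of A.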
Step 4. For brevity, we give the proof only for ̺ n = 1. In the general case nothing changes, but the presence of ̺ n in the numerators and denominators obscures the idea of the proof. Let us verify the convergence:

(22) F n (T Bn ) → F 0 (S 1 ) in D.
Above, the limit points of (F n (T Bn (t))) may only be α or β. So lim t→t 0 F n (T Bn (t)) exists and is equal to β. Thus all conditions of Lemma 7 hold, and this completes the proof of (22).

5.4. Proof of Theorem 1 for ̺ = 0.
Since T An (t) + T Bn (t) = t and F 0 (∞) = ∞, we have the convergence T An (t) → t and T Bn (t) → 0 for any fixed t ≥ 0. Moreover, since all these functions are non-decreasing in t, the convergence is locally uniform. It follows from [33, Theorem 13.2.2] that x n (T An ) → x 0 and m n (T An ) → m 0 in D as n → ∞. Moreover, continuity of m 0 implies the locally uniform convergence m n (T An ) → m 0 . Thus we obtain (24). Using Lemma 4, similarly to the reasoning above, we obtain the inequality (25). Fix a t > 0 and set s n := T Bn (t)/̺ n . Then we can rewrite (24) and (25) in terms of s n . Let s 0 be a limit point of (s n ) (infinity included). By Lemma 7, using the fact that F 0 is strictly increasing, we get an inequality implying s 0 = F −1 0 (m 0 (t)). Since any limit point of (s n ) is determined uniquely, we have the convergence T Bn (t)/̺ n = s n → s 0 = F −1 0 (m 0 (t)) as n → ∞ for any t ≥ 0, and hence locally uniform convergence, because all these functions are non-decreasing and the limit is continuous. The same arguments as in the case ̺ ∈ (0, ∞) imply convergence in D: F n (T Bn /̺ n ) → F 0 (F −1 0 (m 0 )), and finally

y n := x n (T An ) + F n (T Bn /̺ n ) → y 0 := x 0 + F 0 (F −1 0 (m 0 )) = S(x 0 , F 0 ).

5.5. Proof of Theorem 1 for ̺ = ∞. Note that S(x 0 , F 0 )(t ∧ σ) = x 0 (t ∧ σ). Since lim n→∞ ̺ n = ∞ and T Bn (t) ≤ t, we have the locally uniform convergence T Bn (t)/̺ n → 0 as n → ∞. Recall that F 0 (0) = 0 and F n → F 0 in D.
Due to Lemma 7, for any t ≥ 0 we have the convergence lim n→∞ F n (T Bn (t)/̺ n ) = F 0 (0) = 0, and even locally uniform convergence, because the functions F n (T Bn ) are non-decreasing. It is clear that for any t < σ we have T An (t) = t for sufficiently large n. On the other hand, similarly to the case ̺ ∈ (0, ∞), Lemma 4 gives an upper bound. Since σ = inf{s ≥ 0 | x 0 (s) < 0} by the assumption, and inf{s ≥ 0 | x 0 (s) < 0} = inf{s ≥ 0 | m 0 (s) > 0}, we have lim n→∞ T An (t) ≤ σ for any t. We may conclude that T An (t) → t ∧ σ. To prove the theorem, it suffices to verify the convergence x n (T An ) → x 0 (• ∧ σ). Let us apply Lemma 7. If t 0 ≠ σ, then the conditions of Lemma 7 are obviously satisfied.
Recall that inf{s ≥ 0 | x 0 (s) = 0} = inf{s ≥ 0 | x 0 (s) < 0}. Hence x 0 does not have a (positive) jump at σ = inf{s ≥ 0 | x 0 (s) = 0}, and so x 0 is continuous at t 0 = σ. Thus x 0 (• ∧ σ) is continuous at t 0 = σ, and consequently the conditions of Lemma 7 are satisfied for t 0 = σ too.

6. Appendix

6.1. Proof of Lemma 1. The proof of existence is straightforward and is done exactly as in the classic Skorokhod problem. Let us show uniqueness: take two solutions y 1 , y 2 and the corresponding boundary terms l 1 , l 2 .

6.2. Proof of Lemma 3. We also know that y(t) ≤ 0 for t ∈ B, and therefore (27) holds. The second equality is true because T B is strictly increasing on (ρ k , τ k ]; thus, the left limit of the function F at the point T B (τ k−1 ) is equal to the left limit of the composition F • T B at the point τ k−1 . Next, apply formulas (10), (15), and (27). We get:

For the last inequality, we use that m(T A ) is non-decreasing. This completes the proof of Lemma 3 in Case 1.

Case 2. Assume t ∈ (τ k−1 , ρ k ] for some k. On this interval, the function T B is constant. Therefore, T B (t) = T B (τ k−1 ). Next, using (15) and (27), we get:

Combining (28), (29), and (30), we complete the proof of Lemma 3 in Case 2.
6.3. Proof of Lemma 4. We prove the statement by induction. First, the induction base: on [0, ρ 1 ), we are in regime A. Thus T A (t) = t and T B (t) = 0. Therefore, x(s) = y(s) for s ∈ [0, ρ 1 ). Hence we have m(t) ≤ δ for t ∈ [0, ρ 1 ). Note that F (0) = 0, and the two suprema in the right-hand side of the inequality of Lemma 4 are non-negative. This proves the statement of the lemma on [0, ρ 1 ). Before the induction step, notice that regimes can switch only when x attains its minimum or F attains its maximum.

Case 1. Assume the statement is true on [0, ρ k ). Let us show it for t ∈ [ρ k , τ k ). We have

(F 0 (s 1 ) − F 0 (s 2 )). Since the function F 0 is non-decreasing, the right-hand side of (35) is zero. Together with (34), this completes the proof of Lemma 6.

Figure 1. Classic Skorokhod reflection; B is the original function; X is the Skorokhod reflection; L is the boundary term.
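The classic Skorokhod reflection shown in Figure 1 admits an explicit formula on the half-line: L(t) = max(0, −min_{s≤t} B(s)) and X = B + L. A minimal numerical sketch on a discrete time grid (the grid discretization is our assumption, not from the text):

```python
import numpy as np

def skorokhod_reflect(B):
    """Classic Skorokhod reflection at 0 of a sampled path B with B[0] >= 0."""
    B = np.asarray(B, dtype=float)
    running_min = np.minimum.accumulate(B)   # min_{s <= t} B(s)
    L = np.maximum(0.0, -running_min)        # non-decreasing boundary term
    X = B + L                                # reflected path, stays >= 0
    return X, L

# Example: whenever the path dips to a new minimum below zero,
# the boundary term L increases just enough to keep X at zero.
B = [0.0, -1.0, -0.5, -2.0, 1.0]
X, L = skorokhod_reflect(B)
```

Note that L increases only at times when X is at the boundary, which is the defining property of the Skorokhod reflection.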

Figure 2. Lemma 2. The function G with A F = (3, 4) ∪ (6, 8) ∪ (8, 8.5).

Definition 2. Fix a ρ ∈ (0, ∞). Take the boundary term l from Definition 1 and define the time change A(t) = t + ρl(t) for t ≥ 0. Plug A −1 into the reflected function y from Definition 1. Then z(t) = y(A −1 (t)) for t ≥ 0 is called the delayed jump-reflection or sticky jump-reflection. The functions x and F are called the input (or driving function) and the regulator, respectively, similarly to Definition 1. The function z is called the sticky jump-reflected function, and the function L defined by L(t) = l(A −1 (t)) for t ≥ 0 is called the sticky boundary term.
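The time change of Definition 2 can be sketched numerically: since l is non-decreasing, A(t) = t + ρl(t) is strictly increasing, and z = y ∘ A⁻¹ is obtained by inverting A. The grid-based, right-continuous piecewise-constant approximation of A⁻¹ below is our assumption for illustration, not part of the definition.

```python
import numpy as np

def sticky_reflection(t, y, l, rho):
    """Sticky jump-reflection z(t) = y(A^{-1}(t)) with A(t) = t + rho*l(t),
    evaluated on the grid t; y and l are sampled on the same grid."""
    A = t + rho * l                               # strictly increasing time change
    # Invert A on the grid: for each time s in t, take the last index with A <= s.
    idx = np.searchsorted(A, t, side="right") - 1
    idx = np.clip(idx, 0, len(t) - 1)
    return y[idx]                                 # z = y o A^{-1}

# Example: l grows by 1 at t = 1 and again at t = 3, so A = [0, 2, 3, 5];
# the output z is "delayed" relative to y during the stretched intervals.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([10.0, 20.0, 30.0, 40.0])
l = np.array([0.0, 1.0, 1.0, 2.0])
z = sticky_reflection(t, y, l, rho=1.0)
```

With ρ = 0 the time change is the identity and z coincides with y, recovering the instantaneous reflection of Definition 1.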

Figure 3. The switch problem. Top left: input function x. Top right: regulator F. Bottom: output y = S δ (x, F ). All functions are right-continuous.

Conditions 2 and 3 of Lemma 7 hold for the point t 0 because F 0 is strictly increasing. It remains to consider the case when m 0 (A −1 (t 0 )) ∈ (α, β]. Lemmas 4, 5, 6 and the assumption that F 0 is non-decreasing imply lim t→t 0 F n (T Bn (t)) ≥ lim t→t 0