The invariant measure of PushASEP with a wall and point-to-line last passage percolation

We consider an interacting particle system on the lattice involving pushing and blocking interactions, called PushASEP, in the presence of a wall at the origin. We show that the invariant measure of this system is equal in distribution to a vector of point-to-line last passage percolation times in a random geometrically distributed environment. The largest co-ordinates in both of these vectors are equal in distribution to the all-time supremum of a non-colliding random walk.


Introduction
The last two decades have seen remarkable progress in the study of random interface growth, interacting particle systems and random polymers within the Kardar-Parisi-Zhang (KPZ) universality class through the identification of deep connections between probability, combinatorics, symmetric functions, queueing theory, random matrices and quantum integrable systems. The greatest progress has been made with narrow-wedge initial data (for example see [1,3,7,12,18,26,27,31]) and there are substantial differences in the case of flat initial data, see [2,6,10,15,24,29].
The purpose of this paper is to prove multi-dimensional identities in law between different models in the KPZ universality class with flat initial data. These are closely related to identities involving reflected Brownian motions and point-to-line last passage percolation with exponential data proved recently in [16]. The results of this paper, together with [16], suggest that there may be more identities of this form and deeper algebraic reasons for why they hold.
On the one hand, these identities involve an interacting particle system called PushASEP (introduced in [8]) in the presence of an additional wall at the origin. This is a continuous-time Markov chain $(Y_1(t), \dots, Y_n(t))_{t \ge 0}$ taking values in $W^n_{\ge 0} = \{(y_1, \dots, y_n) : 0 \le y_1 \le \dots \le y_n \text{ and } y_i \in \mathbb{Z}\}$, with an evolution depending on $2n$ independent exponential clocks. Throughout we refer to the $i$-th co-ordinate as the $i$-th particle. At rate $v_i$, the right-clock of the $i$-th particle rings and the $i$-th particle jumps to the right. All particles which have (before the jump of the $i$-th particle) a position equal to the $i$-th particle position and an index greater than or equal to $i$ are pushed by one step to the right. At rate $v_i^{-1}$ the left-clock of the $i$-th particle rings and, if the $i$-th particle has a position strictly larger than both the $(i-1)$-th particle and zero, the $i$-th particle jumps by one step to the left; otherwise this jump is suppressed. In summary, particles push particles with higher indices and are blocked by particles with lower indices and by a wall at the origin.
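The dynamics described above can be sketched as a short simulation. This is an illustrative sketch only: the function name `step` and the Gillespie-style selection of which clock rings first are our own choices, not constructions from the paper.

```python
import random

def step(y, v, rng):
    """Apply one clock ring to the PushASEP-with-a-wall state y.

    y: weakly ordered particle positions, 0 <= y[0] <= ... <= y[n-1].
    v: jump parameters; the right-clock of particle i rings at rate v[i]
       and its left-clock at rate 1/v[i].
    """
    n = len(y)
    rates = [v[i] for i in range(n)] + [1.0 / v[i] for i in range(n)]
    # Pick which of the 2n exponential clocks rings first (embedded jump chain).
    u, idx = rng.random() * sum(rates), 0
    while u > rates[idx]:
        u -= rates[idx]
        idx += 1
    if idx < n:
        # Right jump of particle idx: it moves right and pushes every
        # particle with a higher index sitting at the same position.
        i, pos = idx, y[idx]
        y[i] += 1
        for j in range(i + 1, n):
            if y[j] == pos:
                y[j] += 1
    else:
        # Left jump of particle idx - n: blocked by the wall at zero
        # and by the particle with the next lower index.
        i = idx - n
        if y[i] > 0 and (i == 0 or y[i] > y[i - 1]):
            y[i] -= 1
    return y
```

Running `step` repeatedly preserves the weak ordering and the wall constraint, which is a quick check that the pushing and blocking rules are implemented consistently.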
A second viewpoint is to relate the top particle in PushASEP with a wall to the top particle in an ordered (or non-colliding) process, see Proposition 3 and related statements in [3,4,26,32]. Let $(Z^{(v_n)}_1(t), \dots, Z^{(v_1)}_n(t))_{t \ge 0}$ be a multi-dimensional continuous-time random walk where $Z^{(v_{n-i+1})}_i$ jumps to the right at rate $v_{n-i+1}$ and to the left at rate $v_{n-i+1}^{-1}$. We construct from this an ordered process $(Z^\dagger_1(t), \dots, Z^\dagger_n(t))_{t \ge 0}$ by a Doob $h$-transform, see Section 2. In the case $0 < v_n < \dots < v_1$, this is given by conditioning $(Z^{(v_n)}_1, \dots, Z^{(v_1)}_n)$ to remain ordered for all time. On the other side of our identities are point-to-line last passage percolation times
$$G(i,j) = \max_{\pi} \sum_{(k,l) \in \pi} g_{kl}, \qquad (1)$$
where the maximum is over directed nearest-neighbour up-right paths from $(i,j)$ to the line $\{(k,l) : k + l = n + 1\}$, and where the $g_{ij}$ are an independent collection of geometric random variables with parameter $1 - v_i v_{n-j+1}$, indexed by $\{(i,j) : i,j \in \mathbb{Z}_{\ge 1} \text{ and } i + j \le n+1\}$, with $0 < v_i < 1$ for each $i = 1, \dots, n$. The geometric random variables are defined as $P(g_{ij} = k) = (1 - v_i v_{n-j+1})(v_i v_{n-j+1})^k$ for all $k \ge 0$.
Theorem 1. Let $n \ge 1$, suppose $0 < v_1, \dots, v_n < 1$, let $Y^*_n$ be distributed according to the top particle of PushASEP with a wall in its invariant measure, let $Z^\dagger_n$ be the top particle in the ordered random walk above (see also (4)), and let $G(1,1)$ be the point-to-line last passage percolation time defined by (1). Then
$$Y^*_n \stackrel{d}{=} \sup_{t \ge 0} Z^\dagger_n(t) \stackrel{d}{=} G(1,1).$$
The first identity in Theorem 1 follows from two representations for $Y^*_n$ and $\sup_{t \ge 0} Z^\dagger_n(t)$ as point-to-line last passage percolation times in a random environment constructed from Poisson point processes. The equality in law then follows from a time reversal argument.
The main content of Theorem 1 is that either of these random variables is equal in distribution to a point-to-line last passage percolation time. This can be proven in two ways. The first method is to calculate the distribution function of $\sup_{t \ge 0} Z^\dagger_n(t)$ by relating the problem to conditioning a multi-dimensional random walk to stay in a Weyl chamber of type C given that it remains in a Weyl chamber of type A. This gives the distribution function of $\sup_{t \ge 0} Z^\dagger_n(t)$ as proportional to a symplectic Schur function divided by a Schur function. This can be identified as a known expression for the distribution function of point-to-line last passage percolation in a geometric environment from [5]. This proof of Theorem 1 is given in Section 2.
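In the simplest case $n = 1$, both sides of Theorem 1 reduce to a geometric law with parameter $1 - v_1^2$: the single-particle chain is an M/M/1 queue and the environment consists of the single weight $g_{11}$. This can be sanity-checked by a quick Monte Carlo; the code, function name and tolerances are ours and purely illustrative.

```python
import random

def stationary_counts(v, steps, rng):
    """Simulate the one-particle chain (an M/M/1 queue: right rate v,
    left rate 1/v, left jumps suppressed at 0).  A suppressed jump is a
    self-loop, so every state has total rate v + 1/v, and the embedded
    discrete chain below has the same stationary distribution."""
    p_right = v / (v + 1.0 / v)
    y, counts = 0, {}
    for _ in range(steps):
        if rng.random() < p_right:
            y += 1
        elif y > 0:
            y -= 1
        counts[y] = counts.get(y, 0) + 1
    return counts

# For n = 1 the stationary law should be geometric with parameter 1 - v^2.
v = 0.5
counts = stationary_counts(v, 200_000, random.Random(0))
total = sum(counts.values())
assert abs(counts[0] / total - (1 - v * v)) < 0.02
```

The empirical mass at zero matches $1 - v^2$ up to Monte Carlo error, consistent with $Y^*_1 \stackrel{d}{=} g_{11}$.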
The second method of proof is to view Theorem 1 as an equality of the marginal distributions of the largest co-ordinates in a multi-dimensional identity in law relating the whole invariant measure of PushASEP with a wall to a vector of point-to-line last passage percolation times. This leads to our main result.

Theorem 2. Let $n \ge 1$ and suppose $0 < v_1, \dots, v_n < 1$. Let $(Y^*_1, \dots, Y^*_n)$ be distributed according to the invariant measure of PushASEP with a wall. Then
$$(Y^*_1, \dots, Y^*_n) \stackrel{d}{=} (G(1,n), \dots, G(1,1)).$$
We give two proofs of Theorem 2. In the first proof, we prove in Section 3 a formula for the transition probability of PushASEP with a wall, following the method of [8]. From this we obtain an expression for the probability mass function of $(Y^*_1, \dots, Y^*_n)$ in Proposition 10. In Section 4, we use an interpretation of last passage percolation as a discrete-time Markov chain, with a sequential update rule for particle positions, which has explicit determinantal transition probabilities given in [14]. In order to find the distribution of a vector of point-to-line last passage percolation times, we use the update rule of this discrete-time Markov chain while adding in a new particle at the origin after each time step. In this way we can find an explicit probability mass function for $(G(1,n), \dots, G(1,1))$ which agrees with that of $(Y^*_1, \dots, Y^*_n)$, giving our first proof of Theorem 2.
The second proof of Theorem 2 obtains this multi-dimensional equality in law as a marginal of a larger identity in law. We give this proof in Section 5. In particular, we construct a multi-dimensional Markov process involving pushing and blocking interactions which has (i) an invariant measure given by $\{G(i,j) : i + j \le n+1\}$ and (ii) a certain marginal given by PushASEP with a wall. Moreover, the process we construct is dynamically reversible. This notion has appeared in the queueing literature [20] and means that a process started in stationarity has the same distribution when run forwards and backwards in time, up to a relabelling of the co-ordinates. Dynamical reversibility leads to a convenient way of finding an invariant measure and can be used to deduce further properties of PushASEP with a wall. In particular, when started in stationarity the top particle of PushASEP with a wall evolves as a non-Markovian process with the same distribution when run forwards and backwards in time. This is a property shared by the Airy$_1$ process, and it is natural to expect that the top particle of PushASEP with a wall run in stationarity converges to the Airy$_1$ process.
We end the introduction by comparing with the results on PushASEP in Borodin and Ferrari [8]. When started from a step or periodic initial condition, [8] proves that the associated height function converges to the Airy$_2$ or Airy$_1$ process respectively (see also the seminal works [10,29]). The choice of a periodic initial condition thus gives one way of accessing the KPZ universality class started from a flat interface. In this paper we instead impose a wall at the origin and consider the invariant measure of PushASEP with a wall. This makes a substantial difference to the analysis and unveils different connections within the KPZ universality class with flat initial data.

Proof of Theorem 1

The all-time supremum of a non-colliding process
We start by defining Schur and symplectic Schur functions. It will be sufficient for our purposes to define them by their Weyl character formulas; we only remark that they can also be defined as a sum over weighted Gelfand-Tsetlin patterns and have a representation-theoretic significance, see [17].
For $x \in W^n$ we define the Schur function $S_x : \mathbb{R}^n_{>0} \to \mathbb{R}$ by
$$S_x(v) = \frac{\det\big(v_i^{x_j + j - 1}\big)_{i,j=1}^n}{\det\big(v_i^{j-1}\big)_{i,j=1}^n} \qquad (2)$$
and for $x \in W^n_{\ge 0}$ we define the symplectic Schur function $Sp_x : \mathbb{R}^n_{>0} \to \mathbb{R}$ by
$$Sp_x(v) = \frac{\det\big(v_i^{x_j + j} - v_i^{-(x_j + j)}\big)_{i,j=1}^n}{\det\big(v_i^{j} - v_i^{-j}\big)_{i,j=1}^n}. \qquad (3)$$
Let $(Z^{(v_n)}_1(t), \dots, Z^{(v_1)}_n(t))_{t \ge 0}$ denote a multi-dimensional continuous-time random walk started from $(x_1, \dots, x_n)$, where each component is independent and $Z^{(v_{n-i+1})}_i$ jumps to the right at rate $v_{n-i+1}$ and to the left at rate $v_{n-i+1}^{-1}$. We define an ordered random walk $(Z^\dagger_1(t), \dots, Z^\dagger_n(t))_{t \ge 0}$ started from $x \in W^n$ as having a $Q$-matrix given by a Doob $h$-transform: for $x \in W^n$ and $i = 1, \dots, n$,
$$Q^\dagger(x, x \pm e_i) = \frac{h_A(x \pm e_i)}{h_A(x)}\, Q(x, x \pm e_i), \qquad (4)$$
where $Q$ is the $Q$-matrix of the unconditioned walk and $h_A$ is the harmonic function defined below. This is a version of $(Z^{(v_n)}_1(t), \dots, Z^{(v_1)}_n(t))_{t \ge 0}$ with components conditioned to remain ordered as $Z_1 \le \dots \le Z_n$. It is related to a non-colliding random walk with components conditioned to remain strictly ordered by a co-ordinate change; for more information on non-colliding random walks we refer to [21,22,25].
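For concreteness, the Weyl character formula for the Schur function can be checked against its monomial expansion in a small case (here $s_{(2,1)}$ in three variables). We use the standard partition-indexed convention, which may be shifted relative to the co-ordinates used in this paper; the code is only an illustration.

```python
import itertools
from fractions import Fraction

def det(m):
    """Determinant by permutation expansion (exact, fine for small matrices)."""
    n = len(m)
    total = Fraction(0)
    for perm in itertools.permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = Fraction(1)
        for i in range(n):
            prod *= m[i][perm[i]]
        total += sign * prod
    return total

def schur(lam, v):
    """Weyl character formula: det(v_i^(lam_j + n - j)) / det(v_i^(n - j))."""
    n = len(v)
    num = [[v[i] ** (lam[j] + n - 1 - j) for j in range(n)] for i in range(n)]
    den = [[v[i] ** (n - 1 - j) for j in range(n)] for i in range(n)]
    return det(num) / det(den)

# Known expansion: s_(2,1)(v1,v2,v3) = sum_{i != j} v_i^2 v_j + 2 v1 v2 v3.
v = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 5)]
expected = sum(v[i] ** 2 * v[j] for i in range(3) for j in range(3) if i != j) \
    + 2 * v[0] * v[1] * v[2]
assert schur((2, 1, 0), v) == expected
```

Working over `Fraction` keeps the check exact, so the ratio of determinants agrees with the monomial expansion with no floating-point tolerance.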
Define $h_A : W^n \to \mathbb{R}$ in terms of the Schur function (2) and $h_C : W^n_{\le 0} \to \mathbb{R}$ in terms of the symplectic Schur function (3).

Proposition 3. $h_A$ is harmonic for $(Z^{(v_n)}_1(t), \dots, Z^{(v_1)}_n(t))$ killed when it leaves $W^n$, and the ordered walk $(Z^\dagger_1(t), \dots, Z^\dagger_n(t))_{t \ge 0}$ started from the origin admits a last passage percolation representation in terms of the free walks $Z^{(v_1)}, \dots, Z^{(v_n)}$.

This is a consequence of Theorem 5.10 in [4] and is proved by multi-dimensional versions of Pitman's transformation. It is also closely related to the analysis in [8]. In the case that only rightward jumps in the $Z_i$ are present, this corresponds to a construction of a process on a Gelfand-Tsetlin pattern with pushing and blocking interactions [33]. The statement above can also be proved as a consequence of push-block dynamics by minor modifications of the proof of Theorem 2.1 in [33], and we describe these modifications in Section 6. The construction of a corresponding process on a symplectic Gelfand-Tsetlin pattern in [33] leads to the following.

Lemma 4 (Theorem 2.3 of [33]). $h_C$ is harmonic for $(Z^{(v_n)}_1(t), \dots, Z^{(v_1)}_n(t))$ killed when it leaves $W^n_{\le 0}$.

This is a reflection through the origin of the result in [33], which considers a process killed when it leaves $W^n_{\ge 0}$.

Proposition 5 (Corollary 7.7 of [23]). Suppose $0 < v_n < \dots < v_1 < 1$. Then $P_x(T_A = \infty) = \kappa_A h_A(x)$ for $x \in W^n$ and $P_x(T_C = \infty) = \kappa_C h_C(x)$ for $x \in W^n_{\le 0}$, where $T_A$ and $T_C$ denote the exit times from $W^n$ and $W^n_{\le 0}$ and $\kappa_A, \kappa_C$ are normalising constants.
The probability that a random walk remains within a Weyl chamber for all time is considered in a general setting in [23]. In our setting, we give a direct proof using Proposition 3 and Lemma 4.
Proof. Proposition 3 and Lemma 4 show that $h_A$ and $h_C$ are harmonic functions for $(Z^{(v_n)}_1(t), \dots, Z^{(v_1)}_n(t))$ killed when it leaves $W^n$ and $W^n_{\le 0}$ respectively. We now check that $\kappa_A h_A$ and $\kappa_C h_C$ have the correct boundary behaviour. We observe from (2) that $S_x(v) = 0$ for all $x \in \partial W^n$, because two columns in the determinant in the numerator of (2) coincide if $x_i = x_{i+1} + 1$ for some $i = 1, \dots, n-1$. In a similar manner, $h_C(x) = 0$ for all $x \in \partial W^n_{\le 0}$, due to the above observation and the fact that $h_C(x) = 0$ when $x_n = 1$.
We now consider the behaviour at infinity. For $h_A$, it is easy to see from the Weyl character formula (2) that $\kappa_A h_A(x) \to 1$ in the limit $x_1, \dots, x_n \to -\infty$ with $x_i - x_{i+1} \to -\infty$ for each $i = 1, \dots, n-1$, for a suitable choice of the constant $\kappa_A$. For the symplectic Schur function we argue in the same way and use Eq. 24.17 from [17] to give a more explicit expression for the limiting constant $\kappa_C$. In the case $0 < v_n < \dots < v_1 < 1$, the process $(Z^{(v_n)}_1(t), \dots, Z^{(v_1)}_n(t))$ almost surely has $Z_i \to -\infty$ for each $i$ and $Z_i - Z_{i+1} \to -\infty$ for $i = 1, \dots, n-1$. Therefore the above specifies the boundary behaviour of $\kappa_A h_A$ and $\kappa_C h_C$.
Suppose that $(h, T)$ equals either $(\kappa_A h_A, T_A)$ or $(\kappa_C h_C, T_C)$ and let $Z^*_t$ denote $(Z^{(v_n)}_1(t), \dots, Z^{(v_1)}_n(t))$ killed at the instant it leaves $W^n$ or $W^n_{\le 0}$ respectively. Then $(h(Z^*_t))_{t \ge 0}$ is a bounded martingale and converges almost surely and in $L^1$ to a random variable $Y$. From the boundary behaviour specified above, almost surely $Y$ equals 1 if $T = \infty$ and equals zero otherwise. Using this in the $L^1$ convergence shows that $h(x) = \lim_{t \to \infty} E_x(h(Z^*_t)) = P_x(T = \infty)$.
From this we can prove the second equality in law in Theorem 1 for a particular choice of rates. Suppose $0 < v_n < \dots < v_1 < 1$, which ensures that all of the following events have strictly positive probabilities, and let $x \in W^n_{\le 0}$. Then
$$P_x\Big(\sup_{t \ge 0} Z^\dagger_n(t) \le 0\Big) = \frac{\kappa_C h_C(x)}{\kappa_A h_A(x)}.$$
Let $(x_1, \dots, x_n) \to (-\eta, \dots, -\eta)$ and shift co-ordinates by $\eta$. Then, by using that $S_{(x_1, \dots, x_n)}(v) \to \prod_{i=1}^n v_i^{-\eta}$ and the notation $\eta^{(n)} = (\eta, \dots, \eta)$,
$$P\Big(\sup_{t \ge 0} Z^\dagger_n(t) \le \eta\Big) = \kappa_C \prod_{i=1}^n v_i^{\eta}\, Sp_{\eta^{(n)}}(v). \qquad (7)$$
We compare this to Corollary 4.2 of [5], which in our notation states that
$$P(G(1,1) \le \eta) = \kappa_C \prod_{i=1}^n v_i^{\eta}\, Sp_{\eta^{(n)}}(v). \qquad (8)$$
Equations (7) and (8) prove the second equality in law in Theorem 1 for $0 < v_n < \dots < v_1 < 1$. This can be extended to all distinct rates with $v_i < 1$ for each $i = 1, \dots, n$ by observing that the law of the process $(Z^\dagger_n(t))_{t \ge 0}$ is invariant under permutations of the $v_i$. In particular, this holds for $\sup_{t \ge 0} Z^\dagger_n(t)$, and it also holds for $G(1,1)$ by (8).

Time reversal
We now prove that PushASEP with a wall started from $(0, \dots, 0)$ has an interpretation in terms of semi-discrete last passage percolation times in an environment constructed from $2n$ Poisson point processes: each particle $Y_k(t)$ admits a last passage representation (9) over the walks $Z^{(v_1)}, \dots, Z^{(v_k)}$, where each $Z^{(v_i)}$ is a difference of two Poisson point processes. In the proof of (9) we denote the right hand side of (9) by $(U_k(t))_{k=1}^n$ and check that the evolution of this process is PushASEP with a wall. When $n = 1$,
$$U_1(t) = \sup_{0 \le s \le t} \big( Z^{(v_1)}_1(t) - Z^{(v_1)}_1(s) \big),$$
and this evolves as PushASEP with a wall with one particle started from zero. For the inductive step we note that adding in the $n$-th particle to $(U_k(t))_{k=1}^n$ does not affect the evolution of the first $(n-1)$ particles. Therefore we only need to consider the $n$-th particle, given by
$$U_n(t) = \sup_{0 \le s \le t} \big( Z^{(v_n)}_n(t) - Z^{(v_n)}_n(s) + Y_{n-1}(s) \big), \qquad (10)$$
where $Y_{n-1}$ is the $(n-1)$-th particle in PushASEP with a wall. If $U_n > Y_{n-1}$ then the supremum in (10) is attained with a choice $s < t$ and $U_n$ jumps right or left whenever $Z^{(v_n)}_n$ does. If $U_n = Y_{n-1}$ then at least one of the (possibly non-unique) maximisers of the supremum in (10) involves $s = t$. This means that if $Z^{(v_n)}_n$ jumps to the right then $U_n$ jumps to the right; if $Y_{n-1}$ jumps to the right then $U_n$ jumps to the right (this is the pushing interaction); and if $Z^{(v_n)}_n$ jumps to the left then $U_n$ is unchanged (this is the blocking interaction). Therefore $U_n$ defined by (10) follows the dynamics of the $n$-th particle in PushASEP with a wall started from the origin, and (9) follows inductively. Equation (9) has a similar form to Proposition 3, and this along with time reversal establishes the following connection; see [9,16] for a similar argument in a Brownian context.
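The $n = 1$ case of this representation is the classical Lindley recursion and can be checked pathwise on the embedded jump chain; the variable names are ours and the jump probabilities are arbitrary.

```python
import random

rng = random.Random(7)
# Embedded jump sequence of the driving walk: +1 right, -1 left.
jumps = [1 if rng.random() < 0.4 else -1 for _ in range(1000)]

# Direct dynamics: a left jump attempted at the wall is suppressed.
y, trajectory = 0, []
for jmp in jumps:
    y = y + 1 if jmp == 1 else max(y - 1, 0)
    trajectory.append(y)

# Last passage (Lindley) form: Y(t) = Z(t) - min_{s <= t} Z(s)
#                                   = sup_{s <= t} (Z(t) - Z(s)).
z, running_min, lpp = 0, 0, []
for jmp in jumps:
    z += jmp
    running_min = min(running_min, z)
    lpp.append(z - running_min)

assert trajectory == lpp
```

The two constructions agree at every jump time, which is the deterministic content behind the $n = 1$ case; the inductive step for $n > 1$ replaces the running minimum by the supremum in (10).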

Proposition 6.
Let $Y^*_n$ be distributed as the top particle in PushASEP with a wall in its invariant measure and let $Z^\dagger_n$ be the top particle in the ordered random walk with $Q$-matrix given by (4) and started from the origin. Then
$$Y^*_n \stackrel{d}{=} \sup_{t \ge 0} Z^\dagger_n(t).$$
Proof. For any fixed $t$, we let $t - u_i = t_{k-i}$ and use time reversal of the continuous-time random walks: the increments $(Z^{(v_i)}(t) - Z^{(v_i)}(t-u))_{0 \le u \le t}$ have the same law as $(Z^{(v_i)}(u))_{0 \le u \le t}$. Relabelling the sum from $i$ to $n-i+1$ and comparing with Proposition 3 part (ii) shows that $Y_n(t) \stackrel{d}{=} \sup_{0 \le s \le t} Z^\dagger_n(s)$. In particular, letting $t \to \infty$ completes the proof.
Lemma 7. The distribution of $(Y^*_1, \dots, Y^*_n)$ is continuous in $(v_1, \dots, v_n)$ on the set $(0,1)^n$.

Proof. We will use the representation for $(Y^*_1, \dots, Y^*_n)$ obtained by relabelling the sum $i \to k - i + 1$ in (11) and letting $t \to \infty$. We fix $\epsilon > 0$ and construct coupled realisations of the driving walks for all rate vectors in $(\epsilon, 1-\epsilon)^n$ simultaneously, by thinning marked Poisson point processes $(t_i, w_i)_{i \ge 1}$ with independent uniform marks $w_i$: projecting onto the first co-ordinate the subsets of points whose marks exceed the appropriate thresholds gives independent Poisson point processes of rates $v_i$ and $1/v_i$ respectively, which define coupled realisations of the walks. Almost surely, the suprema on the right hand side of Proposition 9 part (ii) all stabilise after some random time which is uniform over $(v_1, \dots, v_n) \in (\epsilon, 1-\epsilon)^n$. For any realisation of the marked Poisson point processes, the right hand side of Proposition 9 part (ii) is continuous in $(v_1, \dots, v_n)$ away from the countable set of thresholds determined by the marks. Therefore the distribution of the right hand side of part (ii) of Proposition 9 is continuous in $(v_1, \dots, v_n)$ on the set $(\epsilon, 1-\epsilon)^n$, and hence so is the distribution of $(Y^*_1, \dots, Y^*_n)$. As $\epsilon$ is arbitrary this completes the proof.
Proof of Theorem 1. Proposition 6 is the first equality in law. At the end of Section 2.1 we proved the second equality for distinct 0 < v1, . . . , vn < 1. Lemma 7 allows us to remove the constraint that the vi are distinct.

Transition probabilities
We give a more explicit definition of PushASEP with a wall at the origin as a continuous-time Markov chain by specifying its transition rates. We use $e_i$ to denote the vector taking value 1 in position $i$ and zero otherwise. The transition rates of $Y$ are defined for $y, y + e_i + \dots + e_j \in W^n_{\ge 0}$ and $i \le j$ by
$$q(y, y + e_i + \dots + e_j) = v_i\, 1_{\{y_i = y_{i+1} = \dots = y_j < y_{j+1}\}}$$
with the notation $y_{n+1} = \infty$, and for $y, y - e_i \in W^n_{\ge 0}$ by
$$q(y, y - e_i) = v_i^{-1}\, 1_{\{y_i > \max(y_{i-1}, 0)\}}$$
with the notation $y_0 = 0$. All other transition rates equal zero. We note that in [8] the particles were strictly ordered, whereas it is convenient for us to consider a weakly ordered system; these systems can be related by the co-ordinate change $x_j \to x_j + j - 1$.
To describe the transition probabilities we first introduce operators acting on functions $f : \mathbb{Z} \to \mathbb{R}$:
$$(J^{(v)} f)(u) = \sum_{w \ge u} v^{w-u} f(w), \qquad (D^{(v)} f)(u) = f(u) - v f(u+1),$$
so that $D^{(v)} J^{(v)} = \mathrm{Id}$; we will always apply $J^{(v)}$ to functions with superexponential decay at infinity.
We write $J^{(v_1 \dots v_k)} = J^{(v_1)} \cdots J^{(v_k)}$ as notation for concatenated operators, and write a subscript, as in $D_u$, to specify a variable $u$ on which the operators act.
We recall Siegmund duality for birth-death processes, see for example [11,13]. Let $(X_t)_{t \ge 0}$ denote a birth-death process on the state space $\mathbb{Z}_{\ge 0}$ with birth rates $(\lambda_i)_{i \ge 0}$ and death rates $(\mu_i)_{i \ge 1}$, and let $(X^*_t)_{t \ge 0}$ denote a birth-death process on the state space $\mathbb{Z}_{\ge -1}$ with the roles of the birth and death rates interchanged. The process $X$ has a reflecting boundary at zero while $X^*$ is absorbed at $-1$. Under suitable conditions on the rates, see [13], which hold in the case of interest to us, $\lambda_i = v_1$ for $i \ge 0$ and $\mu_i = v_1^{-1}$ for $i \ge 1$, Siegmund duality states that
$$P_x(X_t \ge y) = P_y(X^*_t \le x), \qquad x, y \in \mathbb{Z}_{\ge 0}.$$
We can find the transition probabilities for $X^*$ by solving the Kolmogorov forward equation. We define for any $t \ge 0$ and $x, y \in \mathbb{Z}$,
$$\psi_t(x, y) = e^{-(v_1 + v_1^{-1})t}\, \frac{1}{2\pi i} \oint_{\Gamma_0} z^{x - y - 1} \exp\Big( t \big( v_1 z + (v_1 z)^{-1} \big) \Big)\, dz,$$
where $\Gamma_0$ denotes the unit circle oriented anticlockwise. The transition probabilities of $X^*$ are given for $x \in \mathbb{Z}_{\ge 0}$ and $t \ge 0$ in terms of $\psi_t$. By using Siegmund duality, the transition probabilities of PushASEP with a wall with a single particle (which is an M/M/1 queue) are given by the same kernel. The purpose of the above is that this now provides a form which is convenient to generalise to $n$ particles: define for all $t \ge 0$ and $x, y \in \mathbb{Z}^n$ a determinant $r_t(x, y)$ whose $(i,j)$ entry is obtained from $\psi_t(x_i + i - 1, y_j + j - 1)$ by applying the operators $D$ and $J$.

Proposition 8. The transition probabilities of $(Y_1(t), \dots, Y_n(t))_{t \ge 0}$ are given by $r_t(x, y)$ for $x, y \in W^n_{\ge 0}$.

The transition probabilities for PushASEP in the absence of a wall were found in [8], and related examples have been found in [30,32]. Our proof follows the ideas in [8].
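Under our reading of the kernel $\psi_t$ as the transition probability of the free walk (right rate $v$, left rate $1/v$), the contour integral can be evaluated numerically and checked against the direct Poisson series; the discretisation and function names below are our own.

```python
import cmath, math

def psi(t, v, x, y, m=400):
    """exp(-(v + 1/v) t) times (1/2 pi i) times the integral over the unit
    circle of z^(x - y - 1) exp(t (v z + 1/(v z))) dz, by the trapezoidal
    rule (z = e^{i theta}, dz = i z dtheta; the extra z is folded in)."""
    total = 0.0
    for a in range(m):
        z = cmath.exp(2j * math.pi * a / m)
        total += (z ** (x - y) * cmath.exp(t * (v * z + 1.0 / (v * z)))).real
    return math.exp(-(v + 1.0 / v) * t) * total / m

def direct(t, v, k, terms=80):
    """P(net displacement = k) for independent Poisson(v t) right jumps
    and Poisson(t / v) left jumps."""
    s = 0.0
    for j in range(max(0, -k), terms):
        s += (v * t) ** (j + k) / math.factorial(j + k) \
            * (t / v) ** j / math.factorial(j)
    return math.exp(-(v + 1.0 / v) * t) * s

for k in range(-3, 4):
    assert abs(psi(1.0, 0.6, 0, k) - direct(1.0, 0.6, k)) < 1e-8
```

Since the integrand is analytic on an annulus around the unit circle, the trapezoidal rule converges extremely fast, so a few hundred nodes already reproduce the series to high precision.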
Proof. Observe that for all $u, w \in \mathbb{Z}$ the kernel $\psi_t(u, w)$ satisfies the Kolmogorov forward equation of the free walk, and therefore for all $x, y \in W^n$ the determinant $r_t(x, y)$ satisfies a forward equation (14) involving the terms $r_t(x, y \pm e_k)$. We note that $y \pm e_k$ may be outside of the set $W^n_{\ge 0}$ but that $r$ has been defined for all $x, y \in \mathbb{Z}^n$. The proof will involve showing that the terms involving $y \pm e_k \notin W^n_{\ge 0}$ in (14) can be replaced, using identities for $r$, by terms corresponding to the desired pushing and blocking interactions.
We first record the identity (15), which can be proved by showing that the difference of the two sides is equal to the determinant of a matrix $A$ whose relevant columns, the $k$-th and $(k+1)$-th, have equal entries for each $i = 1, \dots, n$; these two columns being equal proves (15). We first consider the terms in (14) with $y - e_k \notin W^n_{\ge 0}$ and $y_k > 0$, which correspond to right jumps with a pushing interaction. Denote by $m(k)$ the minimal index such that $y_{m(k)} = y_{m(k)+1} = \dots = y_k$. Iteratively applying the identity (15) then gives (16). We next consider the terms in (14) with $y + e_k \notin W^n_{\ge 0}$, which correspond to blocking interactions. This means that $y_k = y_{k+1}$, and using (15) gives (17). We note that $y + e_k \in W^n_{\ge 0}$ precisely when the factor $(1 - 1_{\{y_k = y_{k+1}\}})$ is non-zero. The final terms we need to consider in (14) are those with $y - e_k \notin W^n_{\ge 0}$ and $y_k = 0$, which correspond to left jumps suppressed by the wall. If $y_1 = \dots = y_k = 0$ for some $k > 1$, then $r_t(x, y - e_k)$ is the determinant of a matrix $B$ whose relevant entries lie in the columns indexed by $1, \dots, k$. Using $\psi(\cdot, -1) = 0$ and column operations, together with the fact that we consider the vector $y - e_k$, the $k$-th column of $B$ is a linear combination of columns $1, \dots, k-1$, and hence $r_t(x, y - e_k) = 0$ if $y_k = 0$ for any $k > 1$.
The remaining case is when $0 = y_1 < y_2$, and we show that the corresponding identity (18) holds; this follows from multilinearity of the determinants involved in the definition of $r$ and from $\psi(\cdot, -1) = 0$. We combine (14), (16), (17) and (18) to obtain that $r_t$ satisfies the Kolmogorov forward equation (19) for the process $(Y(t))_{t \ge 0}$. We now consider the initial condition. At $t = 0$ the kernel $\psi_0(u, w) = 1_{\{w - u = 0\}}$ for $u, w \ge 0$ depends only on the difference $w - u$, and we will view it as a function of $w - u$; for any function $f : \mathbb{Z} \to \mathbb{R}$ and $u, w \in \mathbb{Z}$ the corresponding operator identity (21) holds. Therefore the top-left entry in the matrix defining $r_0$ equals $1_{\{y_1 = x_1\}}$. Suppose $y_1 > x_1$ and observe that if a function $g$ has $g(u) = 0$ for $u > 0$, then the corresponding entries vanish for each $j = 1, \dots, n$. This shows that when $y_1 > x_1$ the top row of the matrix defining $r_0$ equals zero. In a similar manner, when $y_1 < x_1$ the first column in the matrix defining $r_0$ is zero. Therefore $r_0$ reduces to the form (22), and using (21) the entries of the matrix in (22) have the same form as the entries of the matrix in (20) but with $n - 1$ particles. Continuing inductively, $r_0(x, y) = 1_{\{x = y\}}$ (23). Therefore $r_t(x, y)$ satisfies the Kolmogorov forward equation (19) with initial condition (23) corresponding to the process $(Y(t))_{t \ge 0}$. These equations have a unique solution, given by the transition probabilities of $(Y(t))_{t \ge 0}$, because the process does not explode.
(ii) With no extra conditions and the notation $D^\emptyset = \mathrm{Id}$, the analogous identity holds with the operator in the first variable omitted.

Proof. The proof is similar to Lemma 2 in [16], so we give a description of the proof and refer to [16], which carries out some of the steps more explicitly. We prove (i) first; the proof of (ii) is almost identical. We first observe an identity (24), and apply (24) repeatedly to show (25). The general procedure is to use a Laplace expansion of the determinants on the left hand side, apply (24) with a particular choice of variable and parameter, and then reconstruct the result as a sum of three determinants. A key property is that all of the boundary terms in (24) contribute zero.
The first application of this procedure is with the parameter $v_n$, variable $x_n$, summing $x_n$ from $x_{n-1}$ to infinity. This shows that the left hand side of (25) equals a sum of three terms which all take the same form. In the first term, $\Sigma = \{(x_1, \dots, x_n) \in W^n_{\ge 0}\}$. The $A_{ij}$ are given by the entries of the first matrix on the left hand side of (25), except with the application of $D^{(1/v_n)}$ in the $n$-th column removed and the argument $x_n + n - 1$ replaced by $x_n + n - 2$. The $B_{ij}$ are given by the entries of the second matrix on the left hand side of (25), except with the application of $J^{(1/v_n)}$ in the $n$-th row removed and the argument $x_n + n - 1$ replaced by $x_n + n - 2$. There are two boundary terms which have $\Sigma = \{(x_1, \dots, x_{n-1}) \in W^{n-1}_{\ge 0}\}$ and are evaluated at $x_n = x_{n-1}$ and $x_n = \infty$. These terms are both zero: when evaluated at $x_n = x_{n-1}$ two columns in $A$ are equal, and the boundary term at infinity vanishes due to the growth and decay conditions imposed on $f$ and $g$.
This proves (25) and we apply the Cauchy-Binet (or Andréief) identity to the right hand side of (25) to complete the proof of part (i).
Part (ii) is identical except that we do not apply (24) to the $x_1$ variable; thus the condition $f_i(-1) = 0$ for each $i = 1, \dots, n$ can be omitted.

Proposition 10. The probability mass function of $(Y^*_1, \dots, Y^*_n)$ in the invariant measure is given by a determinantal expression, which we denote by $\pi$.

Invariant measure
We note that the Markov chain is irreducible, does not explode and the invariant measure is unique when normalised.
Proof. We use Lemma 9, noting that $\phi_i(-1) = 0$ for each $i$ and that the conditions at infinity are satisfied, to find the required expression.
We recall that $\psi$ is related to the transition probabilities of a process $(X^*_t)_{t \ge 0}$ defined through two independent Poisson point processes $N^{(1)}$ and $N^{(2)}$. Using this and the fact that $\psi$ is symmetric in the right hand side of the first displayed equation in this proof shows the claimed formula. We defer the proof that $\pi$ is positive and the identification of the normalisation constant; these two properties will follow by identifying $\pi$ as the probability mass function for a vector of last passage percolation times in the proof of Theorem 2.

Point-to-line last passage percolation
Point-to-line last passage percolation can be interpreted as an interacting particle system, where at each time step a new particle is added at the origin and particles interact by pushing the particles to the right of them. We define a discrete-time Markov chain denoted $(G^{pl}(k))_{1 \le k \le n}$ where $G^{pl}(k) = (G^{pl}_1(k), \dots, G^{pl}_k(k))$. The particles are updated between time $k-1$ and time $k$ by sequentially defining $G^{pl}_1(k), \dots, G^{pl}_k(k)$, starting with $G^{pl}_1(k) = g_{n-k+1,k}$ and then applying the update rule, where $(g_{jk})_{j,k \ge 1,\ j+k \le n+1}$ are an independent collection of geometrically distributed random variables with parameters $1 - v_j v_{n-k+1}$ and $0 < v_j < 1$ for each $j = 1, \dots, n$.
The geometric random variables are defined as $P(g_{jk} = m) = (1 - v_j v_{n-k+1})(v_j v_{n-k+1})^m$ for all $m \ge 0$. The initial state is $G^{pl}_1(1) = g_{n1}$. The connection to point-to-line last passage percolation is that the largest particle at time $n$ has the representation
$$G^{pl}_n(n) = \max_{\pi \in \Pi^{flat}_n} \sum_{(i,j) \in \pi} g_{ij}$$
where $\Pi^{flat}_n$ is the set of directed nearest-neighbour up-right paths from $(1,1)$ to the line $\{(i,j) : i + j = n + 1\}$. Moreover, $G^{pl}(n)$ is the vector on the right hand side of Theorem 2. The advantage of this interpretation is that the transition probabilities of $(G^{pl}(k))_{1 \le k \le n}$ have a determinantal form, and this can be used to find the probability mass function of $G^{pl}(n)$ as a determinant.
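The point-to-line structure can be made concrete with a short recursion and checked against the max-over-paths definition. The index conventions (1-based environment $g_{ij}$ on $i + j \le n+1$) follow the text, but the code itself is only an illustration with our own function names.

```python
import itertools, random

def lpp_point_to_line(g, n):
    """All point-to-line times by backward recursion over anti-diagonals:
    G(i, j) = g[i][j] + max(G(i+1, j), G(i, j+1)), zero beyond i + j = n + 1."""
    G = {}
    for s in range(n + 1, 1, -1):          # anti-diagonals i + j = s
        for i in range(1, s):
            j = s - i
            G[i, j] = g[i][j] + max(G.get((i + 1, j), 0), G.get((i, j + 1), 0))
    return G

def lpp_brute(g, n):
    """max over directed up-right paths from (1, 1) to the line i + j = n + 1."""
    best = 0
    for steps in itertools.product([0, 1], repeat=n - 1):
        i = j = 1
        total = g[1][1]
        for s in steps:
            i, j = (i + 1, j) if s == 0 else (i, j + 1)
            total += g[i][j]
        best = max(best, total)
    return best

rng = random.Random(3)
n = 5
g = {i: {j: rng.randrange(10) for j in range(1, n + 2 - i)} for i in range(1, n + 1)}
assert lpp_point_to_line(g, n)[1, 1] == lpp_brute(g, n)
```

The recursion computes the whole field $\{G(i,j) : i + j \le n+1\}$ at once, which is the object appearing in Theorem 2 and in the invariant measure of Section 5.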
In the context of point-to-point last passage percolation, the transition kernel of a Markov chain analogous to the above is given in Theorem 1 of [14]. This can be used to describe the update rule from $G^{pl}(n-1)$ to $G^{pl}(n)$ between times $n-1$ and $n$, by viewing $G^{pl}(n-1)$ as being extended to an $n$-dimensional vector with zero as the leftmost position. We first define, for functions $f : \mathbb{Z} \to \mathbb{R}$ with $f(u) = 0$ for all $u < 0$, parameters $(1/p_1, \dots, 1/p_n)$ and a function $g$, the operators appearing in the one-step transition kernel (27).
The resulting one-step update kernel is stated as Lemma 11. Its proof uses the RSK correspondence; a more direct proof is given in the case with all parameters equal in [19], and with the geometric data replaced by exponential data in [16].
We will iteratively apply these one-step updates and use the following lemma to find the probability mass function for $G^{pl}(n)$ as a single determinant.

Lemma 12. Suppose that $p = (p_1, \dots, p_n)$ with $p_i > 0$ for $i = 1, \dots, n$. Let $(f_i)_{i=1}^n$ be a collection of functions from $\mathbb{Z}_{\ge 0} \to \mathbb{R}$ and $g : \mathbb{Z} \to \mathbb{R}$ with $g(u) = 0$ for all $u < 0$. Then the corresponding pair of determinants can be combined into a single determinant.

Proof. We have $g(u) = 0$ for all $u < 0$, which means that $(J^{(p)} g)(z - \cdot)(u) = (I^{(1/p)} g)(z - u)$. We apply Lemma 9 part (ii) with the functions $g_j(\cdot) = D^{(1/p_2 \dots 1/p_j)} g(y_j + j - 1 - \cdot)$, where $D^\emptyset = \mathrm{Id}$. We note that as $g$ is zero in a neighbourhood of infinity, the condition on the growth of $f$ can be omitted.
Consider a Laplace expansion of $\hat\pi$, where the summation is indexed by a permutation $\sigma$. If $\sigma(1) = 1$ then the top-left entry in the matrix defining $\hat\pi$ is given by $\hat\phi_1(x_1 - 1)$, which equals 1 if $x_1 = 0$ and 0 otherwise. Therefore the terms in the Laplace expansion with $\sigma(1) = 1$ give the desired expression for $\hat\pi$, and we need to show the remaining terms in the Laplace expansion of $\hat\pi$ are zero.
Let $\sigma(1) = j$ for some $2 \le j \le n$ and $\sigma(i) = 1$ for some $2 \le i \le n$. For any $2 \le j \le n$, the $(1,j)$ entry in the matrix in (28) is only non-zero if $x_j = 0$, by the definition of $\hat\phi_1$. On the other hand, the $(i,1)$ entry in the matrix in (28) vanishes when $x_1 = 0$. Therefore, as $x_1 \le x_j$, all terms in the Laplace expansion of $\hat\pi$ with $\sigma(1) \ne 1$ are zero. This proves (29).
We use (28), (29) and the update rule in Lemma 11 to find the probability mass function for $G^{pl}(n)$ as (30), where $x_1 := 0$. We use the identities (31) and (32) in (30) to obtain the probability mass function for $G^{pl}(n)$ as (33), where the operator is defined by (27). We apply Lemma 12 to show this equals (34), and observe (35). In the case $2 \le i \le n$ we observe that the entries differ by a term $C v_1^{y_j}$, where $C$ is independent of $y_j$. Using (34) and (35) in (33) and removing the terms $C v_1^{y_j}$ using row operations, we find that the prefactor equals $c_n$, where
$$c_n = \prod_{1 \le i < j \le n} \frac{1}{v_i - v_j} \prod_{j=1}^n v_j^{n},$$
and so we establish inductively that the probability mass function of $G^{pl}(n)$ is given by (36). We recall that in Proposition 10 we deferred the proof of positivity of $\pi$ and the identification of the normalisation constant; this is now proven, as we have identified $\pi$ as the probability mass function of $G^{pl}(n)$. Moreover, Equation (36) and Proposition 10 prove Theorem 2 when $v_1, \dots, v_n$ are distinct. The distribution of $(Y^*_1, \dots, Y^*_n)$ is continuous in $(v_1, \dots, v_n)$ on the set $(0,1)^n$ by Lemma 7, and the distribution of $(G(1,n), \dots, G(1,1))$ is continuous in $(v_1, \dots, v_n)$ on the same set, as it is given by a finite number of operations of summation and maxima applied to geometric random variables. This completes the proof of Theorem 2.

The distribution function of the largest particle satisfies
$$P(Y^*_n \le \eta) = \kappa_C \prod_{i=1}^n v_i^{\eta}\, Sp_{\eta^{(n)}}(v_1, \dots, v_n),$$
where $Sp$ denotes the symplectic Schur function from (3), and $\eta^{(n)}$ denotes the $n$-dimensional vector $(\eta, \dots, \eta)$.

The largest particle
For point-to-line last passage percolation this was proven in [5] and related to earlier formulas for point-to-line last passage percolation in [2] and [6]. We could appeal to this and Theorem 1 to prove the same expression for the distribution function of Y * n . We now show that it follows quickly from Proposition 10.
Proof. From Proposition 10 we have an explicit expression for the probability mass function. We perform the summation in $y_n$ from $y_{n-1}$ to $\infty$, which replaces the last column by a column containing the terms $v_i^{\eta}$ plus a second term; the second term differs from the penultimate column by a factor which is non-zero and independent of $i$, and can therefore be removed from the last column by column operations. We now apply this procedure inductively, in the order $y_{n-1}, \dots, y_1$, to obtain a single determinant, where $D^\emptyset = \mathrm{Id}$. We relate this to a symplectic Schur function by using column operations: the entries in the first column are $\phi_i(\eta)$, and in the second column the second bracketed term can be removed by column operations. This can be continued inductively and leads to the claimed formula. The proof is now completed by using (6) to equate the normalisation constants.
We define a continuous-time Markov process $(X_{ij}(t) : i + j \le n+1,\ t \ge 0)$ taking values in $\mathcal{X}$ by specifying its transition rates. For $x, x + e_{ij} + e_{ij-1} + \dots + e_{ik} \in \mathcal{X}$, $(i,j), (i,k) \in S$ and $k \le j$, the right-jump rates are defined by (37), with the notation that $x_{0,j} = \infty$ for $j = 2, \dots, n+1$. For left jumps the rates are defined by (38). All other transition rates are zero. The fact that $q(x, x') \ne 0$ only if $x, x' \in \mathcal{X}$ corresponds to blocking interactions. This defines a multi-dimensional Markov chain with the interactions shown in Figure 1, where the arrows in Figure 1 correspond to the following interactions.
(iii) An interaction $A \to B$ in which the rates of right and left jumps experienced by $B$ depend on its location relative to $A$. The particular form is given in (37) and (38) and is chosen such that $(X_{ij}(t) : i + j \le n+1,\ t \ge 0)$ is dynamically reversible, see part (ii) of Theorem 15.
(iv) An interaction with a wall in which all left jumps below zero are suppressed. This is depicted by the diagonal line on the left side of Figure 1.
To find the invariant measure of $X$ we use a result which has found applications in the queueing theory literature.

Lemma 14 (Theorem 1.13, Kelly [20]). Let $X$ be a stationary Markov process with state space $E$ and transition rates $(q(j,k))_{j,k \in E}$. Suppose we can find positive sequences $(\hat q(j,k))_{j,k \in E}$ and $(\pi(j))_{j \in E}$ with $\sum_{j \in E} \pi(j) = 1$ such that:
(i) $\sum_{k \ne j} q(j,k) = \sum_{k \ne j} \hat q(j,k)$ for all $j \in E$;
(ii) $\pi(j) q(j,k) = \pi(k) \hat q(k,j)$ for all $j, k \in E$.
Then $\pi$ is the invariant measure for $X$ and $\hat q$ are the transition rates of the time reversal of $X$ in stationarity.
The proof is straightforward: summing (ii) over $k$ and using (i) shows that $\pi$ satisfies the global balance equations. Nonetheless this gives a convenient way of verifying an invariant measure if we can guess the transition rates of the time reversed process. In general, this is an intractable problem. However, in this case we can make the choice that the invariant measure is a field of point-to-line last passage percolation times and that the reversed transition rates are given by reversing the direction of all interactions between particles in Figure 1 and changing the order of the parameters $(v_1, \dots, v_n) \to (v_n, \dots, v_1)$ (the interactions with the wall remain unchanged). This is motivated by the construction of [16].
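Lemma 14 can be illustrated on a toy example that is stationary but not reversible: a biased walk on a cycle, with guessed invariant measure uniform and guessed reversed rates obtained by exchanging the two jump directions. The example is ours, not from the paper.

```python
from fractions import Fraction

N = 5
a, b = Fraction(3), Fraction(1)      # clockwise / anticlockwise jump rates

def q(j, k):
    """Forward rates of a biased walk on the cycle Z_N (not reversible)."""
    if (j + 1) % N == k:
        return a
    if (j - 1) % N == k:
        return b
    return Fraction(0)

def q_hat(j, k):
    """Guessed rates of the time reversal: the two directions exchanged."""
    if (j + 1) % N == k:
        return b
    if (j - 1) % N == k:
        return a
    return Fraction(0)

pi = [Fraction(1, N)] * N            # guessed invariant measure: uniform

# Kelly's criterion: (i) equal total exit rates, (ii) pi(j) q(j,k) = pi(k) q_hat(k,j).
for j in range(N):
    assert sum(q(j, k) for k in range(N)) == sum(q_hat(j, k) for k in range(N))
    for k in range(N):
        assert pi[j] * q(j, k) == pi[k] * q_hat(k, j)
```

Both conditions hold, so the uniform measure is invariant and `q_hat` gives the time reversal, even though detailed balance fails; this is exactly the mechanism used for the dynamically reversible process below.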
More precisely, we define the reversed transition rates as follows. For $x, x + e_{ij} + e_{i,j-1} + \dots + e_{ik} \in \mathcal{X}$ with $(i, j), (i, k) \in S$ and $k \le i$, define $\hat q$ by … Our proposed invariant measure is the probability mass function of $(G(i, j) : i + j \le n + 1)$; this has an explicit form.

Theorem 15. Let $(X_{ij}(t) : i + j \le n + 1, t \ge 0)$ be the continuous-time Markov process with transition rates given by (37) and (38). This process has a unique invariant measure $(X^*_{ij} : i + j \le n + 1)$ which satisfies … When run in stationarity, …

The first statement asserts that the process $(X_{ij}(t) : i + j \le n + 1, t \ge 0)$ is irreducible, does not explode and has a unique invariant measure. The second statement is the statement that, when run in stationarity, $(X_{ij}(t) : i + j \le n + 1, t \ge 0)$ is dynamically reversible.
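For concreteness, an array of last passage times over a given environment satisfies the recursion $G(i,j) = \max(G(i-1,j), G(i,j-1)) + w(i,j)$. The sketch below computes point-to-point times over an arbitrary weight array; in the setting of the paper the weights would be independent geometric random variables and the point-to-line array is restricted to indices with $i + j \le n + 1$ (the indexing conventions and function name are ours):

```python
def lpp_array(w):
    """Last passage times over a weight array w:
    G[i][j] = max(G[i-1][j], G[i][j-1]) + w[i][j] is the maximal weight
    collected along an up-right path from (0, 0) to (i, j)."""
    m, n = len(w), len(w[0])
    G = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            best = 0
            if i > 0:
                best = max(best, G[i - 1][j])  # path arriving from above
            if j > 0:
                best = max(best, G[i][j - 1])  # path arriving from the left
            G[i][j] = best + w[i][j]
    return G
```

For example, `lpp_array([[1, 2], [3, 4]])` returns `[[1, 3], [4, 8]]`: the maximal path to the corner collects $1 + 3 + 4 = 8$.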
Proof. We will use Lemma 14. We first prove that for all $x, x' \in \mathcal{X}$, (39) holds. First consider the case $x' = x + e_{ij} + e_{i,j-1} + \dots + e_{ik}$. Both sides are zero unless $x_{ij} = x_{i,j-1} = \dots = x_{ik}$. When these equalities hold, $\max(x_{ip}, x_{i-1,p+1}) = x_{i-1,p+1}$ for each $p = j-1, \dots, k$ and $\max(x_{i+1,p-1}, x_{ip}) = x_{ip}$ for each $p = j, \dots, k+1$. Therefore …, where $\hat\pi_1$ does not depend on $x_{ij}, x_{i,j-1}, \dots, x_{ik}$. In particular, … We compare this to the ratio … Combining the above two equations proves (39) in this case. The case $x' = x + e_{ij} + e_{i+1,j} + \dots + e_{lj}$ is similar: here …, where $\hat\pi_2$ does not depend on $x_{ij}, x_{i+1,j}, \dots, x_{lj}$, and the two above equations prove (39) in this case. Both sides of (39) are zero in all other cases, and so we have proven (39). We now show that $q(x) = \hat q(x)$ for all $x \in \mathcal{X}$; this follows from comparing (40) and (41). One way to check the equality is to verify it first in the case when the inequalities $x_{ij} < x_{i-1,j}$ for each $i \ne 1$, $x_{ij} < x_{i,j-1}$ for each $j \ne 1$, and $x_{i,n-i+1} > 0$ for each $i = 1, \dots, n$ all hold; this case can be seen directly from (40) and (41). We then consider the rates of jumps which are suppressed in each case when these inequalities no longer hold: (i) If $x_{i,n-i+1} = 0$ then, both forwards and backwards in time, a jump of rate $v_i^{-1}$ is suppressed by the wall.
(ii) If $x_{ij} = x_{i,j-1}$ then forwards in time the left jump of the $(i, j-1)$ particle is suppressed, and the suppressed jump has rate $v_{n-j+2}^{-1}$ because $x_{i,j-1} = x_{ij} \le x_{i-1,j}$. Backwards in time, the right jump of the $(i, j)$ particle is suppressed, and the suppressed jump has rate $v_{n-j+2}^{-1}$ because $x_{ij} = x_{i,j-1} \ge x_{i+1,j-1}$. (iii) If $x_{ij} = x_{i-1,j}$ then forwards in time the right jump of the $(i, j)$ particle is suppressed, and the suppressed jump has rate $v_{i-1}^{-1}$ because $x_{ij} = x_{i-1,j} \ge x_{i-1,j+1}$. Backwards in time, the left jump of the $(i-1, j)$ particle is suppressed, and the suppressed jump has rate $v_{i-1}^{-1}$ because $x_{i-1,j} = x_{ij} \le x_{i,j-1}$. Using Lemma 14, we have now established that $\pi$ is the invariant measure and $\hat q$ are the reversed transition rates in stationarity of $(X_{ij}(t) : i + j \le n + 1, t \ge 0)$. The second statement in the theorem follows from comparing $q$ and $\hat q$ and observing that they are identical after the swap $x_{ij} \to x_{ji}$ and $(v_1, \dots, v_n) \to (v_n, \dots, v_1)$.
We end by discussing two further properties of the process $X$. These properties can both be proved by running the process $(X_{ij}(t) : i + j \le n + 1, t \ge 0)$ in stationarity, forwards and backwards in time, and follow in exactly the same way as Section 5 of [16], as they depend on the structural properties of the $X$ array rather than on the exact dynamics. (ii) Let $Q^n_t$ denote the transition semigroup for PushASEP with a wall with $n$ particles. Let $P_{n-1 \to n}$ denote the transition kernel for the update of the Markov chain $G^{\mathrm{pl}}$ defined in Section 4 from time $n-1$ to $n$. There is an intertwining between $Q^{n-1}_t$ and $Q^n_t$ with intertwining kernel given by $P_{n-1 \to n}$. In operator notation, $Q^{n-1}_t P_{n-1 \to n} = P_{n-1 \to n} Q^n_t$.

Push-block dynamics and Proposition 3
The aim of this section is to describe how Proposition 3 can be obtained from a construction of an interacting particle system with pushing and blocking interactions. This section adapts the proof of Theorem 2.1 in [33], with a different intertwining (43) replacing Equation (3.3) from [33]. We follow the set-up and notation of [33]. For each $n \ge 1$, let $(X(t) : t \ge 0)$ be a continuous-time Markov process taking values in $K_n$. We use $x^j$ to denote the vector $x^j = (x^j_1, \dots, x^j_j)$ and describe $X^j$ as the positions of the particles in the $j$-th level of $X$. Let $v_i > 0$ for each $i \ge 1$.
The dynamics of $X$ are governed by $n(n+1)$ independent exponential clocks: each particle in the $j$-th level has two independent exponential clocks with rates $v_j$ and $v_j^{-1}$, corresponding to its right and left jumps respectively. When a clock of a particle in the $j$-th level rings, that particle attempts to jump to the right or left, but experiences pushing and blocking interactions which ensure that $X$ remains within $K_n$. In summary, a particle at the $j$-th level pushes particles at levels $k > j$ and is blocked by particles at levels $k < j$. More precisely, suppose the right clock of $X^j_i$ rings. (i) If $X^j_i = X^{j-1}_i$ then the right jump is suppressed.
(ii) If $X^j_i < X^{j-1}_i$ and $X^j_i = X^{j+1}_{i+1}$ then $X^j_i$ jumps right by one and pushes $X^{j+1}_{i+1}$ to the right by one. The right jump of $X^{j+1}_{i+1}$ may then cause further right jumps in the same way.
(iii) In all other cases $X^j_i$ jumps to the right by one and all other particles are unchanged.
If the left clock of $X^j_i$ rings then we have the same trichotomy of cases: (i) if $X^j_i = X^{j-1}_{i-1}$ then the left jump is suppressed; (ii) if $X^j_i > X^{j-1}_{i-1}$ and $X^j_i = X^{j+1}_i$ then $X^j_i$ jumps to the left and pushes $X^{j+1}_i$ to the left by one, which may then push further particles to the left; and (iii) in all other cases $X^j_i$ jumps to the left by one and all other particles are unchanged.
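The right-jump update just described can be sketched as follows, using 0-based indexing for the levels so that level $j$ holds $j+1$ particles (the function name and boundary conventions are ours; the left jump is handled symmetrically):

```python
def right_jump_level(X, j, i):
    """Attempt the right jump of particle i on level j (0-based), following
    the trichotomy in the text: blocked by X[j-1][i], pushing X[j+1][i+1]
    when they share a position."""
    X = [list(level) for level in X]
    # (i) blocking: suppressed if particle i sits on top of X[j-1][i]
    if j > 0 and i < len(X[j - 1]) and X[j][i] == X[j - 1][i]:
        return X
    # (ii)/(iii) jump, then propagate pushes down through the levels
    while True:
        pos = X[j][i]
        X[j][i] += 1
        if j + 1 < len(X) and X[j + 1][i + 1] == pos:
            j, i = j + 1, i + 1   # the pushed particle may push further
        else:
            break
    return X
```

For example, `right_jump_level([[0], [0, 0]], 0, 0)` moves the level-0 particle and pushes its neighbour below, giving `[[1], [0, 1]]`, while `right_jump_level([[0], [0, 1]], 1, 0)` is blocked and leaves the array unchanged.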
Let $n \ge 1$ and for $x \in K_n$ let $w_v(x) = \prod_{j=1}^n v_j^{|x^j| - |x^{j-1}|}$, where $|x| = x_1 + \dots + x_d$ for $x \in \mathbb{R}^d$ and $|x^0| = 0$. For any $z \in W^n$ we define $K_n(z) = \{(x^j_i)_{1 \le i \le j \le n} \in K_n : x^n = z\}$ and a probability measure on $K_n(z)$ by $M_z(x) = w_v(x)/S_z(v)$ for all $x \in K_n(z)$.
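As a sanity check on these definitions, for $n = 2$ the patterns in $K_2(z)$ can be enumerated directly, assuming the weight takes the product form $w_v(x) = \prod_j v_j^{|x^j| - |x^{j-1}|}$: the normalising constant $S_z(v)$ is the sum of the weights, and for $z = (z_1, z_2)$ it agrees with the classical Schur-polynomial evaluation $(v_1 v_2)^{z_1} h_{z_2 - z_1}(v_1, v_2)$, where $h_k$ is the complete homogeneous symmetric polynomial (the function name is ours):

```python
def schur_via_patterns(z1, z2, v1, v2):
    """Enumerate K_2(z): arrays (x^1, x^2) with x^2 = (z1, z2) and the
    interlacing z1 <= x^1_1 <= z2.  A pattern with x^1_1 = a has weight
    v1**a * v2**(z1 + z2 - a); S_z(v) is the sum of the weights and
    M_z the induced probability measure."""
    weights = [v1 ** a * v2 ** (z1 + z2 - a) for a in range(z1, z2 + 1)]
    S = sum(weights)
    M = [w / S for w in weights]
    return S, M

S, M = schur_via_patterns(1, 3, 2.0, 3.0)
# Compare with (v1*v2)**z1 * h_{z2-z1}(v1, v2)
h2 = sum(2.0 ** i * 3.0 ** (2 - i) for i in range(3))
assert abs(S - (2.0 * 3.0) ** 1 * h2) < 1e-9
assert abs(sum(M) - 1.0) < 1e-12
```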
The key step in proving Proposition 16 is to prove the intertwining $Q_Y \Lambda = \Lambda A$, where $Q_Y$ is the desired $Q$-matrix from Proposition 16 with $n+1$ particles. This is equivalent to the statement that
$$Q_Y(y, y') = \sum_{x \le y} \frac{m(x, y)}{m(x', y')}\, A((x, y), (x', y')) \quad \text{for all } y, y' \in W^n. \qquad (43)$$
Once this is established, it follows from general theory [28] that $Y$ is an autonomous Markov process with the desired $Q$-matrix and that this $Q$-matrix is conservative; therefore Proposition 16 follows inductively. For a more detailed argument we refer to [33]: we are replacing Equation (3.3) from [33] with equation (43), and the rest of the argument is unchanged. It remains to show (43). We first show (43) in the case $y' = y$. The right hand side of (43) equals …, and $A((x, y'), (x', y'))$ can be non-zero only if $x = x'$ or $x = x' \pm e_i$. If $x = x'$ then $A((x', y'), (x', y'))$ equals the negative of (42). If $x = x' - e_i$ then $A((x, y'), (x', y')) = Q_X(x' - e_i, \dots)$. Using these expressions we find that the right hand side of (43) is equal to … $= Q_Y(y', y')$, which proves (43) for $y' = y$. Suppose now that $y' = y + e_i$ and consider two cases, depending on whether or not there was a pushing interaction. If $i = 1$, or $i > 1$ and $x'_{i-1} < y'_i$, then the right hand side of (43) equals … The second case is when $i > 1$ and $x'_{i-1} = y'_i$, in which case the right hand side of (43) equals … $= Q_Y(y' - e_i, y')$.
Finally, suppose that $y' = y - e_i$ and split again into two cases. If $i = n$, or $i < n$ and $y'_i < x'_i$, then the right hand side of (43) equals … If $i < n$ and $y'_i = x'_i$ then the right hand side of (43) equals … $= Q_Y(y' + e_i, y')$.
This completes the proof of (43) and, as described above, completes the proof of Proposition 16.