NON-COLLIDING RANDOM WALKS, TANDEM QUEUES, AND DISCRETE ORTHOGONAL POLYNOMIAL ENSEMBLES

We show that the function h(x) = ∏_{i<j} (x_j − x_i) is harmonic for any random walk in R^k with exchangeable increments, provided the required moments exist. For the subclass of random walks which can only exit the Weyl chamber W = {x : x_1 < x_2 < ⋯ < x_k} onto a point where h vanishes, we define the corresponding Doob h-transform. For certain special cases, we show that the marginal distribution of the conditioned process at a fixed time is given by a familiar discrete orthogonal polynomial ensemble. These include the Krawtchouk and Charlier ensembles, where the underlying walks are binomial and Poisson, respectively. We refer to the corresponding conditioned processes in these cases as the Krawtchouk and Charlier processes. In [O'Connell and Yor (2001b)], a representation was obtained for the Charlier process by considering a sequence of M/M/1 queues in tandem. We present the analogue of this representation theorem for the Krawtchouk process, by considering a sequence of discrete-time M/M/1 queues in tandem. We also present related results for random walks on the circle, and relate a system of non-colliding walks in this case to the discrete analogue of the circular unitary ensemble (CUE).


Introduction
We are concerned with probability distributions on Z^k of the form

P̂(x) = Z^{−1} h(x)^2 P({x}), x ∈ Z^k, (1.1)

where P is some well-known distribution on Z^k, Z is the normalizing constant, and the function h is given by

h(x) = ∏_{1≤i<j≤k} (x_j − x_i). (1.2)

Interesting examples include: the Krawtchouk ensemble, where P = µ^{⊗k} and µ is a binomial distribution; the Charlier ensemble, where P = µ^{⊗k} and µ is a Poisson distribution; the de-Poissonised Charlier ensemble, where P is a multinomial distribution; and the Meixner ensemble, where P = µ^{⊗k} and µ is a negative binomial distribution. These are examples of discrete orthogonal polynomial ensembles, so-called because of their close connection with discrete orthogonal polynomials. See [Jo01,Jo00a,Jo00b], and references given there, for models which lead to these ensembles, connections between these ensembles, and their asymptotic analysis as k → ∞. These ensembles are discrete analogues of ensembles which arise as eigenvalue distributions in random matrix theory (see [Me91], for example) and marginal distributions for non-colliding diffusion processes [Bi94,Dy62,Gr99,HW96,KO01].
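An ensemble of this type can be tabulated exactly in a small case. The sketch below is our own illustration (function names are ours, not from the paper): it computes weights proportional to h(y)^2 ∏_j Bi_{n,p}(y_j) on the strictly ordered tuples and checks, with exact rational arithmetic, that the normalised weights sum to one.

```python
from fractions import Fraction
from itertools import combinations
from math import comb

def h(x):
    # Vandermonde product h(x) = prod_{i<j} (x_j - x_i)
    out = 1
    for i, j in combinations(range(len(x)), 2):
        out *= x[j] - x[i]
    return out

def krawtchouk_weights(k, n, p):
    # Unnormalised weights h(y)^2 * prod_j Bi_{n,p}(y_j) on the set
    # {y in {0,...,n}^k : y_1 < ... < y_k}, then normalised.
    p = Fraction(p)
    w = {}
    for y in combinations(range(n + 1), k):   # strictly increasing tuples
        bi = 1
        for yj in y:
            bi *= comb(n, yj) * p**yj * (1 - p)**(n - yj)
        w[y] = h(y)**2 * bi
    Z = sum(w.values())
    return {y: wy / Z for y, wy in w.items()}

weights = krawtchouk_weights(k=2, n=4, p=Fraction(1, 3))
assert sum(weights.values()) == 1
```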
Our main results are as follows. First we show that the function h is harmonic for any random walk in R k (discrete or continuous time) with exchangeable increments, provided the required moments exist. (Note that h is defined on R k .) Define the Weyl Chamber as W = {x = (x 1 , . . . , x k ) ∈ R k : x 1 < x 2 < · · · < x k }. (1.3) For random walks with the property that they can only exit W onto a point x with h(x) = 0, it follows that h is a strictly positive regular function for the restriction of the random walk to W and we can define the corresponding Doob h-transform. We show that the Krawtchouk and Charlier ensembles can be recovered as the law at a fixed time of an appropriately chosen h-transformed walk on W started from the point x * = (0, 1, 2, . . . , k − 1) ∈ W . We shall refer to these conditioned walks as the Krawtchouk and Charlier processes, respectively. Roughly speaking, the Krawtchouk process is a system of non-colliding random walks in discrete time and the Charlier process is the continuous-time analogue (but note that they differ significantly in that the latter process does not permit individual walks to jump simultaneously).
In [OY01b], a representation is obtained for the Charlier process by considering a sequence of M/M/1 queues in tandem. In this paper we will present the analogue of this representation theorem for the Krawtchouk process, by considering a sequence of discrete-time M/M/1 queues in tandem. We use essentially the same arguments as those given in [OY01b], but need to take care of the added complication that, in the discrete-time model, the individual walks can jump simultaneously.
For completeness, we also present related results for random walks on the circle. The invariant distribution for the system of non-colliding walks on the circle is a discrete orthogonal polynomial ensemble which can be regarded as the discrete analogue of the circular unitary ensemble (CUE).
The latter arises in random matrix theory as the law of the eigenvalues of a random unitary matrix chosen according to Haar measure on the group of unitary matrices of a particular dimension. In the continuous case, a connection with non-colliding Brownian motions has been made in [Dy62] and [HW96].
The outline of the paper is as follows. In Section 2, we show that the function h is harmonic for any random walk (discrete or continuous time) with exchangeable increments, provided the required moments exist. Then we introduce a class of random walks for which h is regular in the Weyl chamber, and define the h-transform. In Section 3, we identify some well-known discrete ensembles in terms of h-transforms of appropriate random walks. In Section 4, we give a representation for the Krawtchouk process, by considering a sequence of discrete-time M/M/1 queues in tandem. Finally, we discuss the discrete circular ensemble in Section 5.

Non-colliding random walks
Fix k ∈ N and let X = (X(n))_{n∈N_0} be a random walk on R^k starting at x ∈ R^k under P_x. In Subsection 2.1 we prove that h is harmonic for X, provided the required moments exist and the walk has exchangeable increments; in fact, as we remark below, h is harmonic for any Lévy process with exchangeable increments, provided h is in the domain of the generator. In Subsection 2.2 we restrict to a smaller class of random walks for which we can define the h-transform on the Weyl chamber.

Harmonicity of h.
We first show that h is harmonic for the random walk X, under very weak assumptions. Recall that a measure is exchangeable if it is invariant under permutation of its components.
Theorem 2.1 Let the step distribution µ of X be exchangeable, and assume that ∫ |y_1|^k µ(dy) < ∞. Then h is harmonic for X; that is, for any x ∈ R^k, we have E_x(h(X(1))) = h(x). Thus, (h(X(n)))_{n∈N_0} is a martingale under P_x for any x ∈ R^k.

Proof.
Recall that h can be expressed as a Vandermonde determinant: h(x) = det((x_j^{i−1})_{i,j=1,…,k}). Our proof is by induction on k. The assertion is trivially satisfied for k = 1.
Fix k ≥ 2 and assume that the assertion is true for k − 1; we use the expansion (2.6). Using this, we obtain (2.7). For m ∈ [k], denote by µ_m the m-th marginal of µ, and for y_m ∈ R, denote by µ^{(m)}(·|y_m) a version of the conditional distribution of µ given that the m-th coordinate is y_m. Note that µ^{(m)}(·|y_m) is an exchangeable probability measure on R^{k−1}. Furthermore, for fixed y_m, the vector (y_1, …, y_{m−1}, y_{m+1}, …, y_k) is distributed according to the exchangeable measure µ^{(m)}(·|y_m) on R^{k−1}. Hence, the induction hypothesis yields, for any y_m ∈ R, (2.8). Using this, we obtain from (2.7) the identity (2.9). On the right-hand side, we use the expansion (2.6) and see that the term between the brackets is equal to h(x) for l = k − 1 and equal to zero for all other l (since it is the determinant of a matrix with two identical rows). Hence, the right-hand side of (2.9) is equal to h(x), and this completes the induction proof.
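Theorem 2.1 can be checked exactly in a small case. The sketch below is ours; the step law is an arbitrary illustrative choice, obtained by symmetrising a single atom over coordinate permutations (hence exchangeable), and the identity E_x[h(X(1))] = h(x) is verified by exact enumeration.

```python
import itertools
from fractions import Fraction

def h(x):
    # Vandermonde product h(x) = prod_{i<j} (x_j - x_i)
    k = len(x)
    out = 1
    for i in range(k):
        for j in range(i + 1, k):
            out *= x[j] - x[i]
    return out

# An exchangeable step distribution on Z^3: symmetrise one atom over
# all coordinate permutations (illustrative choice, not from the paper).
atom = (0, 1, 2)
perms = list(itertools.permutations(range(3)))
steps = {}
for s in perms:
    y = tuple(atom[s[i]] for i in range(3))
    steps[y] = steps.get(y, 0) + Fraction(1, len(perms))

def expected_h_after_one_step(x):
    # E_x[h(X(1))] computed exactly over the finite step distribution
    return sum(p * h(tuple(x[i] + y[i] for i in range(3)))
               for y, p in steps.items())

x = (0, 2, 5)
assert expected_h_after_one_step(x) == h(x)   # harmonicity: E_x h(X(1)) = h(x)
```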
Corollary 2.2 If X is a continuous-time random walk in R^k whose generator has jump distribution given by some exchangeable µ with ∫ |y_1|^k µ(dy) < ∞, then h is harmonic for X; that is, Gh = 0.

Remark 2.3
Actually, h is harmonic for any Lévy process on R^k with exchangeable increments. That is, for a Lévy generator whose jump measure µ is exchangeable and satisfies ∫ |y_1|^k µ(dy) < ∞ and ∫ |y_1|/(1 + |y|^2) µ(dy) < ∞, we have Gh = 0. This follows from the above corollary and the elementary facts that ∆h = 0 and Σ_{i=1}^k ∂_i h = 0.

Conditioned walks
Recall that our goal is to condition X on never having a collision between any two of its components. We can do this for certain walks by means of a Doob h-transform with h as in (1.2). Actually, h is in general not the only positive harmonic function but in some sense the most natural one (see Lemma 3.5 below and the remarks preceding it).
In order to define the h-transform, we require that (h(X(n))1l{T > n}) n∈N 0 is a positive martingale, where T = inf{n ∈ N : X(n) / ∈ W } (2.10) denotes the first time that the process leaves the Weyl chamber W defined by (1.3). This requirement forces us to restrict the class of random walks we consider. In particular, we shall assume now that X runs on the integers and that the components are nearest-neighbor walks. Under the assumptions of Theorem 2.4, one may define the h-transform of X. In spite of the possible existence of other functions which are regular for P W (see Lemma 3.5 below), we refer to this transformed walk as the conditioned walk given that X never leaves the Weyl chamber W . A certain justification for this interpretation is provided in the proof of Theorem 4.6 below.
We denote by P̂_x the distribution of the transformed process X̂, started at x ∈ W. The transition kernel of this walk is given by (2.11). Recall T defined in (2.10). Iterated use of (2.11) yields (2.12). This construction is easily extended to continuous-time random walks X = (X(t))_{t∈[0,∞)}. Consider the canonical embedded discrete-time random walk and assume that its step distribution satisfies the assumptions of Theorem 2.4. Then the Doob h-transform of the process can be defined in the analogous way. In particular, (2.12) holds in this case for any t ∈ [0, ∞) as well, where the definition (2.10) of the exit time T has to be adapted to continuous time.
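For the nearest-neighbor walks considered below, the transformed kernel can be tabulated exactly. The following sketch is our illustration: for k independent Bernoulli(p) components it builds p̂(x, y) = (h(y)/h(x)) p(x, y) 1l{y ∈ W} as in (2.11) and checks that each row sums to one, which reflects the harmonicity of h together with the fact that such a walk can exit W only onto points where h vanishes.

```python
from fractions import Fraction
from itertools import combinations, product

def h(x):
    # Vandermonde product h(x) = prod_{i<j} (x_j - x_i)
    out = 1
    for i, j in combinations(range(len(x)), 2):
        out *= x[j] - x[i]
    return out

def h_transform_kernel(x, p):
    # One step of k independent Bernoulli(p) walks, Doob-transformed by h:
    # phat(x, y) = (h(y) / h(x)) * p(x, y) for y in the Weyl chamber W.
    k = len(x)
    p = Fraction(p)
    kernel = {}
    for step in product((0, 1), repeat=k):
        y = tuple(x[i] + step[i] for i in range(k))
        if all(y[i] < y[i + 1] for i in range(k - 1)):   # keep only y in W
            prob = p**sum(step) * (1 - p)**(k - sum(step))
            kernel[y] = kernel.get(y, 0) + Fraction(h(y), h(x)) * prob
    return kernel

x = (0, 1, 2)
kernel = h_transform_kernel(x, Fraction(1, 4))
assert sum(kernel.values()) == 1   # phat is a genuine transition kernel
```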

Discrete Ensembles: Examples
In this section, we present two examples of well-known discrete ensembles which can be viewed as the distribution at a fixed time of a suitable random walk conditioned to stay in W. Our examples are: (i) the binomial random walk (leading to the Krawtchouk ensemble), and (ii) the Poisson random walk (leading to the Charlier ensemble) and its de-Poissonised version. Throughout this section we shall use the notation of Section 2.

The Krawtchouk process.
The main aim of this subsection is to show that the Krawtchouk ensemble (on W) appears as the distribution at a fixed time of independent binomial walks conditioned never to collide.
Fix a parameter p ∈ (0, 1) and let the components of X be independent random walks on N_0 whose steps take the value one with probability p and zero otherwise. Hence, at each time n, each component has the binomial distribution Bi_{n,p}(l) = (n choose l) p^l (1 − p)^{n−l}, l ∈ {0, …, n}, with parameters n and p, if the process starts at the origin. Clearly, this step distribution satisfies the assumptions of Theorem 2.4.
We regard Kr^W_{k,n,p} as a probability measure on W. We can identify the distribution of the conditioned binomial walk as follows. Abbreviate x* = (0, 1, …, k − 1) ∈ W.
Proposition 3.1 For any n ∈ N_0 and any y ∈ W, P̂_{x*}(X(n) = y) = Kr^W_{k,n+k−1,p}(y). (3.14) Proof. First observe that the two distributions on both sides of (3.14) have the same support. Assume that y lies in that support.
Use the Karlin-McGregor theorem [KM59] to rewrite, for any x, y ∈ W, P_x(X(n) = y, T > n) = det( Bi_{n,p}(y_j − x_i) )_{i,j=1,…,k}. (3.15) Here we trivially extend Bi_{n,p} to a probability measure on Z.
In view of (2.12), and using the multilinearity of the determinant, we obtain (3.17), where K denotes a positive constant which depends on n, k and p only. Hence, the proof of the proposition is finished as soon as we have shown that det(B) = K h(y) for some positive constant K which does not depend on y.
It is clear that det(C) ≠ 0, since both sides of (3.14) are strictly positive.
Substituting det(B) = h(y) det(C) in (3.17) and this in (3.15), and noting that the supports of the two probability measures in (3.14) are identical, we arrive at the assertion.
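The Karlin-McGregor determinant (a theorem cited from [KM59], here checked on our own small example) can be verified directly: for k = 2 independent Bernoulli walks, the determinant of binomial probabilities agrees with a brute-force sum over all jointly ordered trajectories.

```python
from fractions import Fraction
from itertools import product
from math import comb

def bi(n, p, l):
    # Binomial probability Bi_{n,p}(l), extended by 0 outside {0, ..., n}
    if l < 0 or l > n:
        return Fraction(0)
    return comb(n, l) * p**l * (1 - p)**(n - l)

def km_prob(x, y, n, p):
    # Karlin-McGregor: P_x(X(n) = y, T > n) = det( Bi_{n,p}(y_j - x_i) )
    return (bi(n, p, y[0] - x[0]) * bi(n, p, y[1] - x[1])
            - bi(n, p, y[1] - x[0]) * bi(n, p, y[0] - x[1]))

def direct_prob(x, y, n, p):
    # Brute force over all joint step sequences, requiring strict order
    # z_1 < z_2 at every time up to n (i.e. T > n)
    total = Fraction(0)
    for steps in product(product((0, 1), repeat=2), repeat=n):
        z, prob, ok = list(x), Fraction(1), True
        for s in steps:
            z = [z[0] + s[0], z[1] + s[1]]
            prob *= (p if s[0] else 1 - p) * (p if s[1] else 1 - p)
            ok = ok and z[0] < z[1]
        if ok and tuple(z) == tuple(y):
            total += prob
    return total

p = Fraction(1, 3)
assert km_prob((0, 1), (1, 3), 3, p) == direct_prob((0, 1), (1, 3), 3, p)
```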
Introduce the Green function of the walk before the first collision time, Γ(x, y) = Σ_{n∈N_0} P_x(X(n) = y, T > n), (3.18) and the corresponding Martin kernel K(x, y) = Γ(x, y)/Γ(x*, y), (3.19) where we recall that x* = (0, 1, …, k − 1) ∈ W. For future reference, we state a result on the asymptotic behaviour of the Martin kernel. By 1l we denote the vector (1, …, 1) ∈ ∂W.

Lemma 3.2 For any x ∈ W,
(3.20) Proof. All limit assertions below refer to the limit as y → ∞ through W such that y/|y| → 1l/k. In order to prove the lemma, it is sufficient to prove the following: there is a constant C > 0 (depending on k and p only) such that, for any x ∈ W, (3.21) holds. We again use Karlin-McGregor's formula (3.15). We may assume that y_j − x_i ≥ 0 for any i and j. We split the sum over n ∈ N_0 in the definition of Γ(x, y) into the three sums Γ(x, y) = Σ_{n∈I_y} + Σ_{n∈II_y} + Σ_{n∈III_y} P_x(X(n) = y, T > n), where I_y, II_y and III_y, respectively, are the subsets of N_0 in the three regions left of, between, and right of |y|/(kp) ∓ |y|^{3/4}. For n ∈ II_y ∪ III_y, we use that n ≥ y_j for any j to obtain (3.22), uniformly in n. Hence, uniformly in n ∈ II_y ∪ III_y, using the multilinearity of the determinant, we have (3.23). In order to evaluate the latter determinant, we introduce the Schur function Schur_x(z), a polynomial in z_1, …, z_k whose coefficients are non-negative integers and which may be defined combinatorially. It is homogeneous of degree |x| − k(k−1)/2 and satisfies det(z_j^{x_i})_{i,j=1,…,k} = h(z) Schur_x(z). Applying this to the vector z = (y_j/(n − y_j))_{j=1,…,k}, we obtain from the last display the asymptotics (3.24). We turn now to the second region of summation, i.e., |y|/(kp) − |y|^{3/4} ≤ n ≤ |y|/(kp) + |y|^{3/4}. Observe that z converges to (p/(1−p)) 1l, uniformly in n ∈ II_y; hence Schur_x(z) converges as well. Use a local central limit theorem for the binomial probability (see Theorem VI.1.6 in [Pe75], e.g.) to see that the limit of the suitably rescaled product ∏_j Bi_{n,p}(y_j) exists and is positive (observe that y_j ∼ |y|/k for every j). Hence, the sum Σ_{n∈II_y} P_x(X(n) = y, T > n) is asymptotically equivalent to the right-hand side of (3.21), with an appropriate choice of C. Now we show that the sums of P_x(X(n) = y, T > n) over n ∈ I_y and n ∈ III_y are negligible with respect to the sum over n ∈ II_y.

In order to do that, we further divide I_y and III_y into I_y = I_y^+ ∪ I_y^− and III_y = III_y^+ ∪ III_y^−, where I_y^+ = {n ∈ I_y : n ≥ (1 + ε)y_j for all j} and III_y^+ = {n ∈ III_y : n ≥ (1/ε)y_j for all j}, for some small ε > 0. In order to handle the sum over the extreme sets, use Stirling's formula to derive, for sufficiently small ε > 0 (the smallness depends on p only), the estimate (3.25). We turn to the summation over n ∈ I_y^+ ∪ III_y^−. Here (3.22) holds as well, and therefore so does (3.24). Use the first equation in (3.25) to estimate h(z) ≤ h(y) |y|^{−k(k−1)/2} const (the latter constant depends on ε only). Furthermore, note that y ↦ z is a bounded function, and since the coefficients of the Schur function are non-negative, we may bound Schur_x(z) by a constant. In order to handle the sum of the binomial probability terms, use Stirling's formula to deduce that the corresponding contribution of ∏_j Bi_{n,p}(y_j) vanishes in the limit.
This shows that the sum over n ∈ I_y^+ ∪ III_y^− is asymptotically negligible with respect to the right-hand side of (3.21). This ends the proof.
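The Schur function used in the proof above can be computed from the bialternant formula det(z_j^{x_i}) = h(z) Schur_x(z). The following sketch is ours: it implements the formula with exact rationals and checks the homogeneity of degree |x| − k(k−1)/2 stated above.

```python
from fractions import Fraction
from itertools import combinations, permutations

def det(m):
    # Exact determinant via the Leibniz expansion (fine for small k)
    k = len(m)
    total = Fraction(0)
    for perm in permutations(range(k)):
        sign = 1
        for i, j in combinations(range(k), 2):
            if perm[i] > perm[j]:
                sign = -sign
        term = Fraction(1)
        for i in range(k):
            term *= m[i][perm[i]]
        total += sign * term
    return total

def h(z):
    # Vandermonde product h(z) = prod_{i<j} (z_j - z_i)
    out = Fraction(1)
    for i, j in combinations(range(len(z)), 2):
        out *= z[j] - z[i]
    return out

def schur(x, z):
    # Bialternant formula: Schur_x(z) = det(z_j^{x_i}) / det(z_j^{i-1})
    k = len(x)
    num = det([[z[j]**x[i] for j in range(k)] for i in range(k)])
    return num / h(z)

x = (0, 2, 5)                                   # a point of the Weyl chamber
z = (Fraction(1, 2), Fraction(2, 3), Fraction(3, 4))
c = Fraction(5, 7)
deg = sum(x) - len(x) * (len(x) - 1) // 2       # |x| - k(k-1)/2
assert schur(x, tuple(c * zj for zj in z)) == c**deg * schur(x, z)
```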

The Charlier process.
In this subsection, we show how the Charlier ensemble (on W ), arises from the non-colliding version of the Poisson random walk.
We consider a continuous-time random walk X = (X(t))_{t∈[0,∞)} on N_0^k that makes steps after independent exponential times with parameter k, the steps being uniformly distributed on the set of unit vectors e_1, …, e_k, where e_i(j) = δ_ij. In other words, the components of X are the counting functions of independent unit-rate Poisson processes. The generator of this walk is given by Gf(x) = Σ_{i=1}^k [f(x + e_i) − f(x)]. By Corollary 2.2, h defined in (1.2) is harmonic for this walk. Since the step distribution satisfies the conditions of Theorem 2.4, h is a strictly positive harmonic function for the walk killed when it exits W. In other words, the embedded discrete-time random walk satisfies the condition of Theorem 2.4. Hence, we may consider the conditioned process given that the walker never leaves the Weyl chamber W, which is the Doob h-transform of the free walk. The distribution of the free Poisson walk at a fixed time t is given by P_x(X(t) = y) = Σ_{n∈N_0} e^{−kt} (kt)^n/n! Mu_n(y − x), where Mu_n(y) = k^{−n} (n choose y_1, …, y_k) 1l{|y| = n} denotes the multinomial distribution on N_0^k, and |·| denotes the lattice norm.
We regard the Charlier ensemble defined in (3.26) as a probability measure on W. Let us now show that the distribution of the conditioned walk at a fixed time is a Charlier ensemble, if the walker starts from the point x* = (0, 1, 2, …, k − 1).

Proposition 3.3 For any y ∈ W and any t > 0,
Proof. We use Karlin-McGregor's formula [KM59], which in this case reads (3.30), where S_k denotes the set of all permutations of 1, …, k, sign(σ) denotes the signum of a permutation σ, and y_σ = (y_{σ(1)}, …, y_{σ(k)}). We may summarize this as (3.31). Applying (3.31) with x* = (0, 1, 2, …, k − 1), we note that the matrix on the right-hand side may be written as a product of the Vandermonde matrix (y_j^{m−1})_{j,m=1,…,k} with some lower triangular matrix C = (c_{m,i})_{m,i=1,…,k} with diagonal coefficients c_{i,i} = 1. Hence, the latter determinant is equal to h(y). Now recall (3.27) to conclude the proof.

Remark 3.4
The embedded discrete-time random walk of the Poisson walk leads to what is known as the de-Poissonised Charlier ensemble. If X = (X(n))_{n∈N_0} is a discrete-time random walk on N_0^k with this step distribution, its distribution at a fixed time is given by P_x(X(n) = y) = Mu_n(y − x). We call X the multinomial walk. The step distribution satisfies the conditions of Theorem 2.4. The h-transform of the multinomial walk satisfies the following identity, where Z = 2^{−k(k−1)/2} (n + 1) ⋯ (n + k(k − 1)/2) denotes the normalizing constant.
Let us go back to the h-transform of the continuous-time Poisson random walk. We now identify h with a particular point on the Martin boundary of the restriction of X to the Weyl chamber W. Analogously to (3.18) and (3.19), we define the Green kernel associated with X on W by Γ(x, y) = ∫_0^∞ dt P_x(X(t) = y, T > t) and the corresponding Martin kernel by K(x, y) = Γ(x, y)/Γ(x*, y). Recall from the proof of Lemma 3.2 (see below (3.23)) the Schur function Schur_x(v) for x ∈ W and v ∈ R^k. The following lemma implies that x ↦ Schur_x(kv) is a strictly positive regular function for P_W, for any v ∈ (0, ∞)^k ∩ W with |v| = 1; in particular, the function h is not the only function that satisfies Theorem 2.4. We have thus identified infinitely many ways to condition k independent Poisson processes never to collide. However, in [OY01b] it is shown that, in some sense, h is the most natural choice, since the h-transform appears as a limit of conditioned processes with drifts tending to each other. Similar remarks apply to the Krawtchouk case; in the proof of Theorem 4.6 we see that the h-transform has the analogous interpretation. In this case we also expect infinitely many positive harmonic functions on the Weyl chamber which vanish on the boundary. We remark that in the Brownian case h is the only positive harmonic function on W.
Lemma 3.5 Fix x ∈ W ∩ N_0^k and v ∈ (0, ∞)^k ∩ W with |v| = 1. Then, in the above context, (3.33) holds. In particular, (3.34) holds. Proof. We use (3.31) to see that, for any y ∈ W with |y| ≥ |x|, (3.35) holds. Recall from the proof of Proposition 3.3 that the latter determinant is equal to h(y) for x = x*. Also note that x_i ≥ x*_i for any i, and hence |x − x*| = |x| − |x*|. Hence we obtain from (3.35) that, as the limit in (3.33) is taken, (3.36) holds. Recall that det(y_j^{x_i})_{i,j=1,…,k} = h(y) Schur_x(y). Use the continuity of the determinant and of the Schur function and the fact that Schur_x(·) is homogeneous of degree |x − x*| to deduce that (3.33) holds.

A representation for the Krawtchouk process
In [OY01b], a representation is obtained for the Charlier process by considering a sequence of M/M/1 queues in tandem. In this section we will present the analogue of this representation theorem for the Krawtchouk process, by considering a sequence of discrete-time M/M/1 queues in tandem. We will use essentially the same arguments as those given in [OY01b], but need to take care of the added complication that, in the discrete-time model, the individual walks can jump simultaneously. The Brownian analogue is also presented in [OY01b], in an attempt to understand the recent observation, due to Baryshnikov [Ba01] and Gravner/Tracy/Widom [GTW01], that the random variable M, where B = (B_1, …, B_k) is a standard k-dimensional Brownian motion, has the same law as the smallest eigenvalue in a k × k GUE random matrix. For related work on this identity, see [BJ01,OY01a].
Similar connections between directed percolation random variables, such as M , and random matrix or discrete orthogonal polynomial ensembles have also been observed in [Jo00a,Jo01] (see Corollary 4.7). See also [Ba00,Fo99]. These are all related to the amazing fact, recently discovered and proved by Baik, Deift and Johansson [BDJ99], that the asymptotic distribution of the longest increasing subsequence in a random permutation is the same as the asymptotic distribution of the largest eigenvalue in a GUE random matrix, which had earlier been identified by Tracy and Widom [TW94].

Tandem queues.
Consider a collection of k queues in tandem operating in discrete time. There may be a number of customers (possibly none) waiting in each of the queues, and the customers must pass through the queues in increasing order until they finally leave the system. In each unit of time, each queue offers services; each service processes one customer (if present), who then arrives at the next queue. Let us formalize the system.
We introduce two other useful variables, u_i(n) and t_i(n), defined by (4.40). Then u_i(n) is the number of unused services at queue i at time n; note that u_i(n) ≥ 0 always holds. Later it will turn out that t_i(·) is the service process of the time-reversed i-th queue.
We need some general notation. For a given process y_i(·) and any finite interval I ⊂ R, we denote by Y_i I the cumulative process over the interval I, as in (4.41). This notation applies for y = a, d, t and s. Hence, from the above relations we deduce, in particular, corresponding relations between the cumulative processes, valid for any i ∈ {1, …, k} and all m, n ∈ Z with m ≤ n. Furthermore, for any process y_i(·), we also define the reversed process; then the associated cumulative process is the cumulative reversed process.

Tandem queues with single arrivals and services.
From now on, we assume that there is at most a single arrival and a single service at any given time and queue. Furthermore, we assume that the arrivals and the services occur randomly and independently and that their distributions depend only on the queue. Hence, the arrival and maximal service indicators a_i(n) and s_i(n) form a collection of independent Bernoulli random variables with respective parameters p ∈ (0, 1) and q_i ∈ (0, 1). For stability, we assume p < min{q_1, …, q_k}.
Note that Q_i(·) is a stationary process. Note also that Q_1 is a reversible Markov chain; in fact, it is a birth-and-death process, and the geometric distribution on N_0 with parameter p(1 − q_1)/(q_1(1 − p)) ∈ (0, 1) is its invariant distribution (see e.g. [As87]).
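The stated invariant distribution can be checked through detailed balance. The sketch below is ours; the within-slot convention (an arrival precedes the service, so an arriving customer may be served immediately) is our assumption, since the precise dynamics are given by the displays above. Under it, the queue-length chain moves up with probability p(1 − q) and down with probability (1 − p)q, and the geometric law with parameter p(1 − q)/(q(1 − p)) satisfies detailed balance.

```python
from fractions import Fraction

p, q = Fraction(1, 4), Fraction(1, 2)   # arrival and service probabilities, p < q

# One-step transition probabilities of the queue-length chain:
birth = p * (1 - q)        # arrival and no service: Q -> Q + 1
death = (1 - p) * q        # service and no arrival: Q -> Q - 1 (if Q >= 1)

rho = (p * (1 - q)) / (q * (1 - p))     # parameter of the geometric law
assert 0 < rho < 1

def pi(m):
    # Geometric invariant distribution on N_0
    return (1 - rho) * rho**m

# Detailed balance pi(m) * birth = pi(m+1) * death characterises
# reversibility of this birth-and-death chain:
for m in range(10):
    assert pi(m) * birth == pi(m + 1) * death
```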
We now give a proof of an important result that identifies the distribution of the processes D k , T 1 , . . . , T k . The statement is the discrete-time analogue of the extension of Burke's theorem given in [OY01b], although the proof is considerably more involved due to the fact that simultaneous arrivals and services are possible. Discussions on Burke's theorem and related material can be found in [Br81,Br99,Ke79,Ro00]. The original statement was first presented in [Bu56]. The idea of using reversibility to prove Burke's theorem is due to Reich [Re57]. We remark that Brownian analogues and variations of Theorem 4.1 below are presented in [HW90,OY01a].
If {y(n), n ∈ Z} is a sequence of independent Bernoulli variables with parameter p, the cumulative process Y_I, defined over all intervals I, is a binomial process with parameter p on Z. Indeed, for all m ≤ n, Y(m, n] is a binomial variable with parameters (n − m, p). Proof. The proof is by induction on k. We first consider the case k = 1, which already contains the main argument.
The proof of the assertion for k = 1 is based on reversibility. We already know that A 1 and S 1 are independent binomial processes with parameters p and q 1 . It is easy to see that D 1 and T 1 are the arrival and service processes of the reversed queue. So we proceed by constructing a reversible representation of the queue which is symmetric in (A 1 , S 1 ) and (D 1 , T 1 ).
We first construct a queue-length process Q, given by a birth-and-death process which is taken equal in distribution to the stationary process Q_1. Since it is not possible to reconstruct the arrival and service processes from the queue-length process alone, we introduce an auxiliary process, indexed by 1/2 + Z, which contains the necessary information on the events {Q(n − 1) = Q(n)} and which, moreover, is reversible. Consider the i.i.d. processes (M^0_{n−1/2})_{n∈Z} ∈ {(0, 0), (0, 1), (1, 1)}^Z and (M_{n−1/2})_{n∈Z} ∈ {(0, 0), (1, 1)}^Z, where, for all n ∈ Z, M^0_{n−1/2} has the same distribution as (a_1(n), s_1(n)) given that Q_1(n − 1) = Q_1(n) = 0, and M_{n−1/2} has the same distribution as (a_1(n), s_1(n)) given that Q_1(n − 1) = Q_1(n) > 0. (The reader might find it helpful to draw a picture.) It is easy to check that the arrival-service process (α, σ) in our construction is then a function of (Q, M^0, M); more precisely, for some suitable function f, (α(n), σ(n)) = f((Q(n − 1), Q(n)), M^0_{n−1/2}, M_{n−1/2}), n ∈ Z. The process (Q, α, σ) is then equal in distribution to (Q_1, a_1, s_1).
We can also construct the departures in this framework. More precisely, we define processes δ and τ by (δ(n), τ(n)) = f((Q(n), Q(n − 1)), M^0_{n−1/2}, M_{n−1/2}), n ∈ Z. The processes δ and τ are reversible, so that (α, σ) and (δ, τ) are equal in law. By the same arguments, we get that the variables δ(n) (resp. τ(n)) are pairwise independent, which finally gives the equality in law of the cumulative processes of (α, σ) and (δ, τ). This shows that the assertion holds for k = 1.
We now turn to the general case. Assume the theorem is true for k − 1 instead of k, i.e., D k−1 , T 1 , . . . , T k−1 are independent binomial processes with respective parameters p, q 1 , . . . , q k−1 . Since the service process S k at queue k is clearly independent of the processes at queues 1, . . . , k− 1, we even have that D k−1 , T 1 , . . . , T k−1 , S k are independent binomial processes with respective parameters p, q 1 , . . . , q k .
Recall from (4.37) that D k−1 = A k . Applying the result for k = 1, we get that D k and T k are independent binomial processes with parameters p and q k . Being functions of (A k , S k ) = (D k−1 , S k ) only, they are independent of (T 1 , . . . , T k−1 ).
In the same manner one proves the following.

Tandem queues and non-colliding random walks.
In this subsection we give a representation of non-colliding random walks defined in the sense of Section 2 in terms of tandem queues with Bernoulli arrivals and services.
First we recursively define a sequence of mappings Γ_k, acting on k-tuples of functions f_1, …, f_k : N_0 → N_0 satisfying f_i(0) = 0 for all i. Recall (4.51) and (4.52) and define Γ_k by (4.61). The following result shows that the operator Γ_k yields a representation of the processes D_k, T_1, …, T_k in terms of the arrival and service processes if all the queues start without any customers at time 0. We abbreviate Y(n) = Y(0, n] for any cumulative process defined on N_0.
Theorem 4.5 The conditional law of (X_1, …, X_k), given (4.66), is the same as the unconditional law of Γ_k(X_1, …, X_k).
Proof. The proof is by induction on k. Let us prove the assertion for k = 2.
Assume that (X 1 , X 2 ) is a pair of independent binomial processes with parameters p < q 1 .
Our next purpose is to extend the assertion of Theorem 4.5 to the case in which all the parameters are equal. Hence let X 1 , . . . , X k be independent binomial processes with parameter p ∈ (0, 1) each. Note that, in this case, the conditioning on (4.66) is equivalent to the conditioning that the processes X i + i − 1, i = 1, . . . , k do not collide. Recall that x * = (0, 1, 2, . . . , k − 1) ∈ W , and let X + x * be a realisation of P x * as defined in Section 2.
Theorem 4.6 The processes X and Γ k (X) have the same law.
Proof. In what follows, convergence in law of processes is in the sense of finite-dimensional distributions. For ε < (1 − p)/(k − 1), let X^ε be a random walk on N_0^k whose components X^ε_1, …, X^ε_k are independent binomial walks with respective parameters 0 < p < p + ε < ⋯ < p + (k − 1)ε < 1. Then, of course, X^ε converges in law to X as ε → 0. It follows that the process Γ_k(X^ε) converges in law to Γ_k(X). Thus, Theorem 4.6 will follow from Theorem 4.5 if we can show that the conditional law of x* + X^ε, given (4.66), converges weakly along a subsequence to P̂_{x*}. To prove this, first note that this conditional law is in fact the Doob h-transform of X^ε, started at x*, with respect to the function h_ε(x) = P^ε_x(T^ε = ∞), where P^ε_x denotes the law of X^ε started at x ∈ W, and T^ε is the first time X^ε exits the Weyl chamber W. Denote by P̂^ε_{x*} the law of this h-transform. It is easy to see that there exist strictly positive functions ϕ and ϕ̄ on W, which do not depend on ε, bounding h_ε(·)/h_ε(x*) from below and above. It follows that h_ε(·)/h_ε(x*) converges (in the product topology on R^W) along a subsequence to a strictly positive function g on W. Denote by P^ε_W (resp. P_W) the restriction of the transition kernel associated with X^ε (resp. X) to W. Then P^ε_W converges to P_W as ε → 0. Since h_ε is regular for P^ε_W and the processes X and X^ε have bounded jumps, we see that g is regular for P_W. We also deduce that the Doob transform of P_W by g has the same law as Γ_k(X). It remains to show that g = h. To do this we use Lemma 3.2 and a theorem of Doob (see [Do94] or [Wi79, Theorem III.49.3]), by which it suffices to show that (1/n) Γ_k(X)(n) → p 1l almost surely as n → ∞. But this follows from the fact that, with probability one, (1/n) X(sn) → sp 1l uniformly for s ∈ [0, 1]. So we are done.
Using both Proposition 3.1 and Theorem 4.6, we recover the following result due to Johansson [Jo01, Proposition 5.2].
Corollary 4.7 (Johansson) For any n ∈ N, the law of (X_1 ⊗ ⋯ ⊗ X_k)(n) is equal to that of the smallest component in the Krawtchouk ensemble Kr^W_{k,n+k−1,p}. Finally, we relate this to another discrete orthogonal polynomial ensemble called the Meixner ensemble. This is defined (on W) in terms of parameters 0 < q < 1 and k, n ∈ N, with Z a normalisation constant. For notational convenience, set X̂ = Γ_k(X). We begin by giving a queueing interpretation to the random variable X̂_1(n) = (X_1 ⊗ ⋯ ⊗ X_k)(n). Consider a tandem of k queues, labelled 1, …, k. At time zero there are infinitely many customers stacked up at the first queue and the other k − 1 queues are empty. The customers are labelled in the order they appear in the first queue at time zero; that is, the 'first customer' is the customer initially at the head of the first queue, and so on. By (4.63), if X_j is the cumulative service process associated with the j-th queue, then X̂_1(n) is the number of departures from the k-th queue up to (and including) time n. Now, there is an equivalent formulation of this Markovian queueing system in temporal units. If w(i, j) denotes the service time of the i-th customer at the j-th queue (that is, the length of time spent by customer i at the head of queue j), then these are i.i.d. geometric random variables with parameter q = 1 − p; that is, P(w(i, j) = l) = q^l (1 − q) for l ∈ N_0. Let D(m, k) denote the departure time of the m-th customer from the k-th queue. Then D(m, k) = X̂_1^{−1}(m); that is, D(m, k) ≤ l if, and only if, X̂_1(l) ≥ m. In other words, D(·, k) is the inverse of X̂_1. It is well known that we have the following representation for D(m, k) (see, for example, [GW91]): D(m, k) = max_{π∈Π(m,k)} Σ_{(i,j)∈π} w(i, j), where Π(m, k) is the set of non-decreasing connected paths (1, 1) = (i_1, j_1) ≤ (i_2, j_2) ≤ ⋯ ≤ (i_{m+k}, j_{m+k}) = (m, k).
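The path representation for D(m, k) is a directed last-passage problem and can be evaluated by the dynamic programme D(i, j) = w(i, j) + max(D(i − 1, j), D(i, j − 1)). The sketch below is ours, with illustrative integer service times rather than geometric samples; it checks the recursion against brute-force enumeration of the monotone paths.

```python
from itertools import combinations

def lpp_dp(w):
    # D(i, j) = w(i, j) + max(D(i-1, j), D(i, j-1)): directed last-passage time
    m, k = len(w), len(w[0])
    D = [[0] * k for _ in range(m)]
    for i in range(m):
        for j in range(k):
            best = 0
            if i > 0:
                best = max(best, D[i - 1][j])
            if j > 0:
                best = max(best, D[i][j - 1])
            D[i][j] = w[i][j] + best
    return D[m - 1][k - 1]

def lpp_bruteforce(w):
    # Maximise the sum of w over all monotone lattice paths from
    # (0, 0) to (m-1, k-1); a path is determined by the positions of
    # its k-1 "right" moves among the m+k-2 moves.
    m, k = len(w), len(w[0])
    best = 0
    for rights in combinations(range(m + k - 2), k - 1):
        i = j = 0
        total = w[0][0]
        for step in range(m + k - 2):
            if step in rights:
                j += 1
            else:
                i += 1
            total += w[i][j]
        best = max(best, total)
    return best

# Service times w(i, j) for m = 4 customers and k = 3 queues
# (illustrative integer values)
w = [[2, 0, 1],
     [0, 3, 0],
     [1, 0, 2],
     [0, 1, 4]]
assert lpp_dp(w) == lpp_bruteforce(w)
```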
By [Jo00a, Proposition 1.3], for m ≥ k, we have (4.77). (Note that by symmetry D(k, m) has the same law as D(m, k).) Combining this with Corollary 4.7, we recover the relationship between the Meixner and Krawtchouk ensembles presented in [Jo00b, Lemma 2.9], valid for m ≥ k. Given this connection, we see that Theorem 4.6 also yields a representation for the largest component in the Meixner ensemble, in terms of the rightmost of a collection of independent random walks, each with a geometric step distribution, conditioned in an appropriate sense on the order of the walks to be fixed forever. (By the latter process we mean the componentwise inverse of the h-transform (with h as in (1.2)) of the collection of the inverse processes of k independent Bernoulli walks, re-ordered so that the k-th component is the largest.) Similarly, the analogue of Theorem 4.6 given in [OY01b] for the Charlier process yields a representation for the largest eigenvalue in the Laguerre ensemble: in this case the w(i, j) are i.i.d. exponentially distributed random variables. We remark that this is quite different from the representation of the Laguerre process, presented in [KO01], as a system of non-colliding squared Bessel processes.

The discrete circular ensemble
For completeness, we now present related results for random walks on the discrete circle. As we shall see, the invariant distribution for the system of non-colliding walks on the circle is a discrete orthogonal polynomial ensemble which can be regarded as the discrete analogue of the circular unitary ensemble (CUE). The latter arises in random matrix theory as the law of the eigenvalues of a random unitary matrix chosen according to Haar measure on the group of unitary matrices of a particular dimension. In the continuous case, a connection with non-colliding Brownian motions has been made in [Dy62] and [HW96].
Since in this case we are dealing with a Markov chain on a finite state space, there is no positive harmonic function for the process killed at the first collision time; instead we shall work with the Perron-Frobenius eigenfunction associated with the killed walk, which will play a role similar to that played by h in Section 2.
Consider a random walk X = (X(n)) n∈N 0 on the circle Z k N with associated probability P x , where x ∈ Z k N is the starting point and Z N is the set of integers modulo N. Assume that X is irreducible and aperiodic. Furthermore, we assume that its step distribution is exchangeable with support in {0, 1} k . We denote, for L ∈ {0, . . . , k}, Ω L = {y ∈ {0, 1} k : |y| = L}, where | · | again denotes the lattice norm. Note that Ω L contains precisely (k choose L) elements. By exchangeability, the step distribution induces the uniform distribution on Ω L for any L. Hence, any transition in Ω L has the same probability, which we denote by p L . In particular, 1 = Σ L=0,...,k p L |Ω L |, where |Ω| denotes the cardinality of a set Ω.
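The counting and normalisation claims above are easy to verify by enumeration. The sketch below (our own illustration) checks that |Ω L | = (k choose L) and that the normalisation Σ L p L |Ω L | = 1 holds for the binomial weights p L = p^L (1 − p)^(k−L) discussed later in this section:

```python
from itertools import product
from math import comb

k, p = 4, 0.3

# Omega_L: steps in {0,1}^k with exactly L ones (|.| = lattice norm)
omega = {L: [s for s in product((0, 1), repeat=k) if sum(s) == L]
         for L in range(k + 1)}
for L in range(k + 1):
    assert len(omega[L]) == comb(k, L)  # |Omega_L| = binomial coefficient

# binomial circle walk: every step in Omega_L has probability p^L (1-p)^(k-L)
p_L = {L: p**L * (1 - p)**(k - L) for L in range(k + 1)}
total = sum(p_L[L] * len(omega[L]) for L in range(k + 1))
assert abs(total - 1.0) < 1e-12  # the step probabilities sum to one
```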
We are concerned with the process conditioned never to collide. From now on, we assume that N > k and that the starting point x satisfies x 1 < x 2 < · · · < x k < x 1 + N, (5.80) where the inequalities hold in Z. This is always possible by renumbering the particles and assuming that they start at different positions. We denote the state space of this process by W N , the set of points y ∈ Z k N for which there exists a cyclic permutation σ of 1, . . . , k such that x = y σ satisfies (5.80). We denote by W̄ N the set of points for which the corresponding non-strict inequalities hold for some cyclic permutation, and set ∂W N = W̄ N \ W N . Denote by P W N the restriction of the transition kernel of the walk to W N .
An equivalent and useful representation is given by considering a random walk on Z k conditioned to stay in V N = {x ∈ Z k : x 1 < x 2 < · · · < x k < x 1 + N }. Note that φ β ≡ 0 if the components of β are not distinct, and that φ β does not depend on the order of the components. Furthermore, we have φ β = 0 on ∂V N if β 1 ≡ β 2 ≡ · · · ≡ β k mod k.
One easily checks, for any β ∈ Z k , that φ β is an eigenfunction of P V N . The n-th power of the transition matrix of the conditioned process is then given by P̂ n (x, y) = λ −n ψ(y) P n W N (x, y)/ψ(x), where ψ denotes the Perron-Frobenius eigenfunction of P W N and λ the corresponding eigenvalue, and from [DS65] it follows that P̂ x (X(n) = y) = lim m→∞ P x (X(n) = y | T > m), x, y ∈ W N , n ∈ N. (5.87) Thus, P̂ x can be interpreted as the law of X conditioned never to leave W N .
The invariant distribution of the conditioned process is given by ψ 2 /Z on W N , where Z is the appropriate normalising constant. This follows from results presented in [DS65] (it also follows easily from (5.86) if one makes use of the dynamic reversibility of the random walk). This distribution is the orthogonal polynomial ensemble on the discrete circle corresponding to the Chebyshev polynomials.
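These facts can be checked numerically in the simplest case k = 2, where (as an illustrative reduction of our own, not from the text) the killed walk is described by the gap between the two particles, a symmetric birth-death chain on {1, . . . , N − 1} killed at 0 and N. The sketch below computes the Perron-Frobenius pair by power iteration, forms the Doob transform, and verifies that ψ²/Z is its invariant distribution, with ψ(g) proportional to sin(πg/N) in this reduced chain, consistent with the connection to Chebyshev polynomials:

```python
import math

N, p = 7, 0.4          # two particles (k = 2) on Z_N; state = gap in {1, ..., N-1}
stay = p * p + (1 - p) * (1 - p)   # steps (0,0) or (1,1): gap unchanged
move = p * (1 - p)                 # steps (0,1) or (1,0): gap +1 or -1

# substochastic kernel of the gap chain killed at gaps 0 and N
n = N - 1
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    P[i][i] = stay
    if i + 1 < n:
        P[i][i + 1] = move
    if i - 1 >= 0:
        P[i][i - 1] = move

# power iteration for the Perron-Frobenius pair (lam, psi)
psi = [1.0] * n
for _ in range(5000):
    new = [sum(P[i][j] * psi[j] for j in range(n)) for i in range(n)]
    lam = max(new)
    psi = [v / lam for v in new]

# closed form for this birth-death chain: psi(g) ~ sin(pi g / N)
assert abs(lam - (stay + 2 * move * math.cos(math.pi / N))) < 1e-9

# Doob transform and its invariant distribution psi^2 / Z
Phat = [[psi[j] * P[i][j] / (lam * psi[i]) for j in range(n)] for i in range(n)]
Z = sum(v * v for v in psi)
pi = [v * v / Z for v in psi]
for j in range(n):
    assert abs(sum(pi[i] * Phat[i][j] for i in range(n)) - pi[j]) < 1e-9

# pi matches the normalised sin^2 profile
s2 = [math.sin(math.pi * (g + 1) / N) ** 2 for g in range(n)]
Zs = sum(s2)
for g in range(n):
    assert abs(pi[g] - s2[g] / Zs) < 1e-9
```

The symmetry of the gap kernel is what makes the left and right eigenvectors coincide, so that the invariant distribution of the transform is ψ²/Z, mirroring the reversibility argument cited above.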
The kernel P n W N may be expressed in terms of the eigenfunctions by using the Karlin-McGregor formula and the discrete Fourier transform [HW96]. We have the following expression, in which φ̄ β denotes the complex conjugate of φ β , and the parameter sets B̄ k and B k are given by B̄ k = {β ∈ Z k : −Nk/2 + 1 ≤ β l ≤ Nk/2 for all l, β 1 ≡ β 2 ≡ · · · ≡ β k mod k}, B k = {β ∈ B̄ k : β 1 < β 2 < · · · < β k }.
Let us mention two important examples to which the preceding applies. The binomial walk on the circle is obtained by setting p L = p^L (1 − p)^(k−L) for some parameter p ∈ (0, 1) and all L ∈ {0, . . . , k} (recall that Σ L=0,...,k p L |Ω L | = 1). In this case the Perron-Frobenius eigenvalue is identified in terms of factors of the form (1 − p)^2 + p^2 + 2p(1 − p) cos((2π/N)(l − (k + 1)/2)). (5.89) We also mention the multinomial walk on the circle. In this case, p L = 1/k if L = 1 and p L = 0 otherwise. Then the Perron-Frobenius eigenvalue can be identified in a similar way. These results may be compared with those obtained in [HW96] for the case of Brownian motions on the circle. There, the leading eigenfunction is also given by (5.85) and the principal eigenvalue is k(k − 1)(k + 1)/24. See also [Pi85] for related work on diffusions in a more general context.