Orthogonal polynomial ensembles in probability theory

We survey a number of models from physics, statistical mechanics, probability theory and combinatorics, each of which is described in terms of an orthogonal polynomial ensemble. The most prominent example is arguably the Hermite ensemble, the eigenvalue distribution of the Gaussian Unitary Ensemble (GUE); other well-known examples from random matrix theory include the Laguerre ensemble for the spectrum of Wishart matrices. In recent years, a number of further interesting models were found to lead to orthogonal polynomial ensembles, among them the corner growth model, directed last passage percolation, the PNG droplet, non-colliding random processes, the length of the longest increasing subsequence of a random permutation, and others. Much attention has been paid to universal classes of asymptotic behaviors of these models in the limit of large particle numbers, in particular for the spacings between the particles and the fluctuation behavior of the largest particle. Computer simulations suggest that the connections reach even further and also comprise the zeros of the Riemann zeta function. The existing proofs require substantial technical machinery and heavy tools from various parts of mathematics, in particular complex analysis, combinatorics and variational analysis. Particularly in the last decade, a number of fine results have been achieved, but it is obvious that a comprehensive and thorough understanding of the matter is still lacking. Hence, it seems an appropriate time to provide a survey of this research area.


Introduction
In the 1950s, it was found that certain important real N-particle ensembles (that is, joint distributions of N real random objects) can be described by a probability measure P_N of the form

    P_N(dx) = (1/Z_N) ∆_N(x)^2 ∏_{i=1}^N µ(dx_i)      (1.1)

on the set

    W_N = {x = (x_1, . . . , x_N) ∈ R^N : x_1 < x_2 < · · · < x_N},      (1.2)

where Z_N is the normalization, µ some distribution on R having all moments, and

    ∆_N(x) = ∏_{1≤i<j≤N} (x_j − x_i)      (1.3)

is the well-known Vandermonde determinant. That is, P_N is the transformed configuration distribution of a vector of N particles, distributed independently according to µ, under the influence of the mutually repelling density ∆_N^2, properly normalized to a probability measure on the so-called Weyl chamber W_N. The most important and one of the earliest examples is the joint distribution of the eigenvalues of a random matrix drawn from a Gaussian Unitary Ensemble (GUE), in which case µ is a Gaussian distribution, and P_N is called the Hermite ensemble. The spectra of a couple of other types of random matrices also turned out to admit a description of the form (1.1), among them the Wishart matrices, where µ is a Gamma distribution and P_N the Laguerre ensemble. The explicit form of (1.1) served as a starting point for many deep investigations of asymptotic spectral properties of random matrices. Furthermore, non-colliding Brownian motions (sometimes called Dyson's Brownian motions) could also successfully be investigated in the early 1960s using descriptions in the spirit of (1.1). Variants of (1.1) (e.g., with ∆_N^2 replaced by ∆_N or by ∆_N^4) also turned out to be of significant relevance and could be treated using related methods.
For a long while, the spectral distributions of certain random matrices (and the closely related non-colliding Brownian motions) were the only known important models that admit a description as in (1.1). However, in the second half of the 1990s, the interest in non-colliding random processes was renewed and put on a more systematic basis, and other types of statistical physics models were found to admit a description of the form (1.1): certain random growth models (equivalently, directed last passage percolation), polynuclear growth models, the problem of the length of the longest increasing subsequence in a random permutation, the Aztec diamond, and others. Furthermore, effective analytic techniques for deriving asymptotic properties of P_N, which were developed in the early 1990s, have recently been systematically extended and improved. As a consequence, in recent years a lot of questions about these models could be answered. The last ten years saw an explosion of research activity and enormous progress in the rigorous understanding of some of the most important of these models, and the work is still going on with increasing velocity. A significant number of deep and important universality questions have recently been settled, building on work of the last 40 or so years. However, it still seems as if a complete understanding of the phenomena has not yet been achieved, since many of the existing proofs are still based on explicit calculations and direct arguments. There seem to be intricate mechanisms at work which have been understood only in special cases, by formal analogies. It will be an important and difficult task in the future to extract the essence of these phenomena in general.
In view of the great recent achievements, and also in order to draw the attention of non-experts to this field, it seems fruitful to write a comprehensive survey of most of the models that can be described by an ensemble as in (1.1). The present text is an attempt to explain the problems and questions of interest in a unifying manner, to present the solutions that have been found, to give a flavor of the methods that have been used, and to provide useful guidelines to much of the relevant literature. It is aimed at the non-expert, the newcomer to the field, with a solid background in probability theory, who seeks a non-technical introduction, heuristic explanations, and a survey. Our concern is to comprehensively summarize the (in our opinion) most important available results and ideas, not to lose ourselves in details or technicalities. In the three remaining sections, we give an account of the three research areas we consider most important in connection with orthogonal polynomial ensembles: random matrix theory, random growth models, and non-colliding random processes.
A probability measure P_N of the form (1.1) is called an orthogonal polynomial ensemble. The theory of orthogonal polynomials is a classical subject and appears in various parts of mathematics, like numerics, combinatorics, statistics and others. The standard reference on orthogonal polynomials is [Sz75]. However, the term 'orthogonal polynomial ensemble' is relatively recent and may be motivated as follows. Let (π_N)_{N∈N_0} denote the sequence of polynomials orthogonal with respect to the inner product of the space L^2(µ). The polynomials are unique by the requirement that the degree of π_N is N, together with the monic normalization π_N(x) = x^N + O(x^{N−1}). They may be obtained from the monomials x ↦ x^j via the well-known Gram-Schmidt algorithm. A nice relation between the orthogonal polynomials and the ensemble P_N in (1.1) is the fact that π_N may be seen as the 'expected polynomial' of the form ∏_{i=1}^N (x − x_i) with (x_1, . . . , x_N) distributed according to P_N, i.e.,

    π_N(x) = E[ ∏_{i=1}^N (x − x_i) ].      (1.4)
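As a small illustration (ours, not from the text), the Gram-Schmidt construction can be carried out numerically for the standard Gaussian measure µ, for which the resulting monic orthogonal polynomials are the classical ("probabilists'") Hermite polynomials; the inner products are evaluated by Gauss-Hermite quadrature, which is exact for polynomials:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Monic polynomials orthogonal w.r.t. the standard Gaussian measure mu,
# built by Gram-Schmidt on the monomials 1, x, x^2, ...
# Inner products <f, g> = int f g dmu are computed (exactly, for polynomials)
# by Gauss-Hermite quadrature for the weight e^{-x^2/2}.
nodes, weights = np.polynomial.hermite_e.hermegauss(40)
weights = weights / weights.sum()   # normalize the weight to a probability measure

def inner(c1, c2):
    # c1, c2 are polynomial coefficient arrays (lowest degree first)
    return np.sum(weights * P.polyval(nodes, c1) * P.polyval(nodes, c2))

def monic_orthogonal(n):
    # Gram-Schmidt: subtract from x^k its projections onto pi_0, ..., pi_{k-1};
    # the leading coefficient stays 1, so the result is monic.
    basis = []
    for k in range(n + 1):
        c = np.zeros(k + 1); c[k] = 1.0          # the monomial x^k
        for p in basis:
            proj = np.zeros(k + 1); proj[:len(p)] = p
            c = c - inner(c, p) / inner(p, p) * proj
        basis.append(c)
    return basis[n]

# For the Gaussian measure: pi_2(x) = x^2 - 1 and pi_3(x) = x^3 - 3x.
```

By (1.4), these same polynomials are the expected characteristic polynomials of the Hermite ensemble.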

Random matrix theory
In spite of the appearance of various random matrix distributions in several areas of mathematics and physics, it has become common to use the term random matrix theory exclusively for those matrix distributions that have been used, since Wigner introduced them to physics in the 1950s, as models for energy levels in slow nuclear reactions. Measurements had already given rise to the hope that the energy levels follow a universal picture. Wigner's hope was that the eigenvalues of appropriate classes of random matrices would be mathematically tractable and would reflect this universality in the limit of unbounded matrix size. Based on Wigner's early work, Dyson [Dy62a], [Dy62c] argued on physical grounds that three particular matrix classes are relevant for the description of energy levels: the by now famous orthogonal, unitary and symplectic Gaussian ensembles.
It soon turned out that their eigenvalue distributions are given in terms of certain orthogonal polynomial ensembles. In the mid-nineties, seven more Gaussian random matrix ensembles were introduced [Ve94], [AZ96], [AZ97], [Zi97], and it was argued that these ten classes in total form a complete classification of the random matrix ensembles that are physically relevant in a certain sense, subject to some symmetry constraints.
Over the last decades, random matrix theory has become a major mathematical and physical research topic, and more and more exciting phenomena have been found. In the last decade, universality of many aspects could be proven for large classes of models, and the research is proceeding increasingly fast.
The standard reference on the mathematical treatment of random matrices is [Me91]. Authored by a physicist with strong mathematical interests, it explains the physical relevance of a host of random matrix models and provides a great amount of relevant formulas and calculations. A recent historical survey of the field from a physicist's point of view is in [FSV03] (see the entire volume), which contains a vast list of references, mostly from the physics literature. A thorough summary of the proofs of some of the most important results in random matrix theory from the viewpoint of Riemann-Hilbert theory is in [De98]. Further surveying and appetizing texts on random matrix theory are [TW93b] and [Di03]. At the time of writing, some (teams of) authors are preparing monographs on random matrix theory, among them [Fo05+].
In the present section we first introduce some of the above-mentioned matrix ensembles and their eigenvalue distributions in Sections 2.1-2.4, present the famous Wigner semicircle law in Section 2.5, discuss correlation functions in Section 2.6 and introduce the important method of orthogonal polynomials in Section 2.7. Afterwards, we present the most important asymptotic results on eigenvalues: the bulk limits in Section 2.8 and the edge asymptotics in Section 2.9. The main proof method, Riemann-Hilbert theory, is outlined in Section 2.10. Finally, in Section 2.11 we explain some relations to the zeros of the famous Riemann zeta function.

The questions of interest
Consider a random Hermitian N × N matrix, M, and denote its eigenvalues by λ_1 ≤ λ_2 ≤ · · · ≤ λ_N. Hence, λ = (λ_1, . . . , λ_N) is a random element of the closure of the Weyl chamber W_N in (1.2). Among others, we shall ask the following questions: (i) What is the distribution of λ for fixed N ∈ N? (ii) What is the limiting scaled distribution of λ as N → ∞, in terms of the empirical measure (1/N) ∑_{i=1}^N δ_{λ̃_i}, for an appropriate rescaling λ̃_i of λ_i? (iii) What is the limiting behavior of the largest eigenvalue, λ_N, as N → ∞?
(Or of the smallest, λ_1, or the joint distribution of a few of the smallest, say (λ_1, . . . , λ_m) for some m.) More precisely, what is the right normalization for a law of large numbers, and what is the right scaling for a limit law, if present? (iv) What are the limiting statistics of the spacings between neighboring eigenvalues? How many gaps are there with a given maximal length? What is the average distance between λ_{cN−r_N} and λ_{cN+r_N} for some c ∈ (0, 1) and some r_N → ∞ such that r_N/N → 0?
Question (iii) refers to the edge of the spectrum, while (iv) refers to the bulk of the spectrum.
The so-called Wigner surmise conjectures that the limiting spacing between two subsequent eigenvalues of a large Gaussian matrix should have a density of the form (0, ∞) ∋ x ↦ C x e^{−cx^2}. This is true for a 2 × 2 matrix with diagonal entries a, c and off-diagonal entry b, for independent centered Gaussian variables a, b, c in the GOE normalization (the variance of b being half that of a and c): the spacing λ_2 − λ_1 is equal to [(a − c)^2 + 4b^2]^{1/2}, the norm of a two-dimensional centered Gaussian vector with i.i.d. components, and therefore has a Rayleigh density of the above form. However, the Wigner surmise turned out to be inaccurate for large N (even though rather close to the true distribution): the asymptotic spacing distribution is different.
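This 2 × 2 computation is easy to probe by simulation (a sketch of ours, assuming the GOE normalization above, i.e. off-diagonal variance 1/2):

```python
import numpy as np

# Monte Carlo check of the 2x2 spacing formula
# lambda_2 - lambda_1 = [(a - c)^2 + 4 b^2]^(1/2),
# with diagonal entries a, c standard normal and off-diagonal b ~ N(0, 1/2).
rng = np.random.default_rng(0)
n = 200_000
a = rng.standard_normal(n)
c = rng.standard_normal(n)
b = rng.standard_normal(n) / np.sqrt(2.0)

spacing = np.sqrt((a - c) ** 2 + 4.0 * b ** 2)

# Cross-check against eigenvalues computed directly from the matrices.
M = np.zeros((n, 2, 2))
M[:, 0, 0], M[:, 1, 1], M[:, 0, 1], M[:, 1, 0] = a, c, b, b
ev = np.linalg.eigvalsh(M)
assert np.allclose(ev[:, 1] - ev[:, 0], spacing)

# (a - c) and 2b are i.i.d. N(0, 2), so the spacing is Rayleigh(sqrt(2)):
# density (x/2) e^{-x^2/4}, of the conjectured form C x e^{-c x^2},
# with mean sqrt(2) * sqrt(pi/2) = sqrt(pi).
mean_spacing = spacing.mean()
```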

Matrix distributions
It turned out [Dy62a] that, according to the time-reversal invariance properties of the material considered, basically three different matrix classes are of interest as models for energy levels of nuclei: matrices whose entries are (1) real numbers, (2) complex numbers, and (3) quaternions. One basic requirement is that the random matrices considered be symmetric, respectively Hermitian, respectively self-dual, so that all the eigenvalues are real numbers. For the (famous and most studied) special case of Gaussian entries, these three cases correspond to the Gaussian Orthogonal Ensemble (GOE), the Gaussian Unitary Ensemble (GUE) and the Gaussian Symplectic Ensemble (GSE). In the following, we shall concentrate mostly on the unitary ensemble, since this class is, in some respects, technically the easiest to treat and exhibits the farthest-reaching connections to other models.
We assume that M = (M_{i,j})_{i,j=1,...,N} is a random Hermitian N × N matrix with complex entries. In particular, the diagonal entries M_{i,i} are real, and for i < j we write M^{(R)}_{i,j} and M^{(I)}_{i,j} for the real and imaginary part of M_{i,j}, respectively; note that M_{j,i} is the complex conjugate of M_{i,j}. Two basic requirements are (1) independence of the matrix entries, and (2) invariance of the distribution of the matrix under unitary conjugations. These two ideas lead to different matrix classes: Wigner matrices: We call the random Hermitian matrix M a Wigner matrix if the collection {M^{(R)}_{i,j} : 1 ≤ i < j ≤ N} ∪ {M^{(I)}_{i,j} : 1 ≤ i < j ≤ N} ∪ {M_{i,i} : i = 1, . . . , N} consists of independent, not necessarily identically distributed, random variables with mean zero and a fixed positive variance, which is the same for all the real parts and for all the imaginary parts, respectively.
Hence, there are N^2 independent real variables that determine the distribution of M. The distribution of the diagonal elements is arbitrary, subject to moment conditions.
Unitary-invariant matrices: We call the random Hermitian matrix M unitary-invariant if the joint distribution of its entries has a density of the form

    P(dM) = (1/Z) e^{−F(M)} dM,      (2.1)

where dM denotes Lebesgue measure on the set of Hermitian N × N matrices, and F is invariant under unitary conjugation, so that it depends on M only through its eigenvalues. A standard choice is

    F(M) = Tr(Q(M)),  with Q(x) = x^{2j} for some j ∈ N.      (2.2)

With the exception of the Gaussian case j = 1, there are strong correlations between all the matrix entries. The idea behind the invariance under unitary conjugations is that the matrix distribution should not depend on the observation system, as long as the coordinate axes are related by a unitary transformation.
The famous GUE lies in the intersection of the Wigner class and the unitary-invariant class. It is the Wigner matrix with all the sub-diagonal entries being complex standard normal variables and the diagonal entries being real normal variables with variance two. Alternatively, it is the unitary-invariant matrix of the form (2.1) with F(M) = Tr(M^2).
The GOE is the real variant of the GUE; i.e., the sub-diagonal entries are independent standard real normal variables with the same variance as the diagonal entries. Hence, the GOE has N(N + 1)/2 independent sources of real randomness.
The GSE is the symplectic variant of the GUE; i.e., the diagonal entries are real standard normals as in the GUE, and the sub-diagonal entries are quaternions whose four components are i.i.d. real standard normal variables. Hence, the GSE has N + 2N(N − 1) independent sources of real randomness.
Further important related classes of random matrices are the Wishart matrices, which are of the form A*A with A a (not necessarily square) matrix having throughout i.i.d. complex normal entries (first considered in a multivariate statistics context by Wishart [Wi28]). See [Me91] for further classes.
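The constructions above are straightforward to realize on a computer; the following sketch (ours, with one common choice of normalization) produces a GUE-type matrix and a complex Wishart matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6

# GUE-type matrix: Hermitian, with independent complex Gaussian entries
# above the diagonal and real Gaussian entries on the diagonal.
G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
M = (G + G.conj().T) / 2.0           # Hermitian by construction

# Wishart matrix: A*A with A an (n x N) matrix of i.i.d. complex normal entries.
n = 10
A = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2.0)
W = A.conj().T @ A

gue_ev = np.linalg.eigvalsh(M)       # real, since M is Hermitian
wishart_ev = np.linalg.eigvalsh(W)   # real and nonnegative, since W = A*A
```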

Eigenvalue distributions
Let λ_1 ≤ λ_2 ≤ · · · ≤ λ_N be the N eigenvalues of the random Hermitian matrix M. We ask for the distribution of the random vector λ = (λ_1, . . . , λ_N). A concise answer for a general Wigner matrix M seems inaccessible, but for unitary-invariant ensembles there is a nice, fundamental formula. We formulate the GUE case and make a couple of remarks afterwards.

Lemma 2.1 (Eigenvalue distribution for GUE). Let M be a random matrix from GUE. Then the distribution of the vector λ = (λ_1, . . . , λ_N) of eigenvalues of M has the density

    P_N(x) = (1/Z_N) ∆_N(x)^2 ∏_{i=1}^N e^{−x_i^2},  x ∈ W_N,      (2.3)

with Z_N the appropriate normalizing constant on the Weyl chamber W_N in (1.2).
Sketch of the proof. Choose a (random) unitary matrix U which diagonalizes M, i.e., such that D = U M U^{−1} is the diagonal matrix with the eigenvalues on the main diagonal. The density of λ is then obtained from the density of M by a change of variables; the Jacobian of the map M ↦ (λ, U) produces the squared Vandermonde factor ∆_N(λ)^2.

(i) We chose the normalization Z_N such that P_N is normalized on W_N = {x ∈ R^N : x_1 < x_2 < · · · < x_N}. We may extend P_N to a permutation symmetric function on R^N; then P̄_N = (1/N!) P_N is a probability density on R^N. (ii) An analogous assertion is true for the orthogonal case, see [HP00, Ch. 4].
(v) If M = A*A is a Wishart matrix, with A having throughout independent complex standard normal entries, then the vector of eigenvalues of M has a density of the form (1.1) with µ a Gamma distribution [Ja64]. This ensemble is called the Laguerre ensemble. (vi) Using Selberg's integral [HP00, p. 118/9], the normalizing constants of the Hermite ensemble and the Laguerre ensemble may be identified in terms of the Gamma function: for any β > 0 and, respectively, any a > 0, they are given by explicit products of Gamma values. (vii) There is obviously a mutually repelling force between the eigenvalues in (2.3): the density vanishes if any two of the N arguments approach each other. It does not seem easy to derive an intuitive reason for this repulsion from random matrix considerations, but if the matrix M is embedded in a natural process of random Hermitian matrices, then the process of eigenvalues admits a nice identification that makes the repulsion rather natural. This is the subject of Section 4.1 below. ✸

Circular ensembles
An important class of random matrix ensembles closely related to the Gaussian ones are the circular ensembles, which were introduced in [Dy62a] with the aim of defining a matrix model that can be seen as the conditional Gaussian ensemble given a fixed value of the exponential weight F(M) in (2.1). Again, there is an orthogonal, a unitary and a symplectic version of the circular ensemble. We give the definition of the circular ensembles [Me91, Ch. 9]. The circular orthogonal ensemble (COE) is the unique distribution on the set of symmetric unitary N × N matrices that is invariant under conjugation with any real orthogonal matrix. That is, a symmetric unitary random matrix S is COE-distributed if and only if W S W^{−1} has the same distribution as S, for any real orthogonal matrix W. The circular unitary ensemble (CUE) is the unique distribution on the set of complex unitary N × N matrices that is invariant under (two-sided) transformations with unitary matrices, i.e., a complex unitary random matrix S is CUE-distributed if and only if U S V has the same distribution as S, for any two unitary matrices U and V. Finally, the circular symplectic ensemble (CSE) is the unique distribution on the set of self-dual unitary quaternion matrices that is invariant under every automorphism S ↦ W^R S W, where W is any unitary quaternion matrix and W^R its dual.
All eigenvalues of the circular matrices lie on the unit circle and may be written λ_i = e^{iθ_i} with 0 ≤ θ_1 < θ_2 < · · · < θ_N < 2π. One advantage of the circular ensembles is that the joint density of their eigenvalues admits particularly simple formulas. Indeed, adopting the parameter β = 1, 2, 4 for the COE, CUE and CSE, respectively (recall (2.5)), the density of the vector (θ_1, . . . , θ_N) of eigenvalue angles is given as

    P^{(circ,β)}_N(θ) = (1/Z^{(circ,β)}_N) ∏_{1≤i<j≤N} |e^{iθ_j} − e^{iθ_i}|^β.      (2.9)

Here we chose the normalization such that P^{(circ,β)}_N is a probability density on {θ ∈ [0, 2π)^N : θ_1 < θ_2 < · · · < θ_N}.
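Numerically, a CUE matrix can be drawn by the standard recipe (ours, not spelled out in the text above) of QR-factorizing a complex Ginibre matrix and fixing the phases of the R-factor; all eigenvalues then indeed lie on the unit circle:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8

# Complex Ginibre matrix, then QR with the phase convention that makes
# Q Haar (i.e. CUE) distributed.
G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2.0)
Q, R = np.linalg.qr(G)
U = Q * (np.diag(R) / np.abs(np.diag(R)))   # multiply column j by the phase of R_jj

ev = np.linalg.eigvals(U)
theta = np.sort(np.angle(ev) % (2.0 * np.pi))  # angles 0 <= theta_1 < ... < theta_N < 2 pi
```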

The law of large numbers: Wigner's semi-circle law
In this section we present the famous semi-circle law first proved by Wigner: the convergence of the mean eigenvalue density as the size of the matrix increases to infinity. This is an asymptotic statement about the convergence of the empirical measure of the appropriately scaled eigenvalues of a random matrix towards the semicircle distribution

    µ*(dx) = (1/π) (2 − x^2)^{1/2} 1l_{[−√2, √2]}(x) dx.      (2.10)

Theorem 2.3 (Semicircle law). For N ∈ N, let λ^{(N)}_1 ≤ · · · ≤ λ^{(N)}_N be the eigenvalues of an N × N GUE matrix, and let

    µ_N = (1/N) ∑_{i=1}^N δ_{λ^{(N)}_i / √N}

be the empirical measure of the rescaled eigenvalues. Then µ_N converges weakly in distribution towards the semicircle distribution µ* in (2.10).
We shall call λ^{(N)}_1, . . . , λ^{(N)}_N the (unscaled) eigenvalues and λ̃^{(N)}_i = λ^{(N)}_i/√N the (re)scaled eigenvalues. (By the empirical measure of N points x_1, . . . , x_N we denote the probability measure (1/N) ∑_{i=1}^N δ_{x_i}.)

Some remarks: (i) With overwhelming probability, all rescaled eigenvalues lie in [−√2 − ε, √2 + ε] for any ε > 0, and the spacings between subsequent eigenvalues are of order N^{−1/2} in the bulk of the spectrum and much larger close to the edge. (ii) The convergence takes place in the sense that the expectation of every bounded and continuous function of the empirical measure converges. Note that the moments (i.e., the family of maps µ ↦ ∫ x^k µ(dx) for k ∈ N) constitute a convergence determining family. (iii) Note that, for any a < b, the expectation of µ_N([a, b]) converges towards µ*([a, b]). In particular, the semicircle law states that the expected number of unscaled eigenvalues λ^{(N)}_i in the interval [a√N, b√N] is asymptotically equal to N µ*([a, b]).

We turn now to sketches of two proofs.

Sketch of the first proof: the method of moments. This is Wigner's original method [Wi55], [Wi58], see [HP00, Ch. 4]. The idea is that it suffices to prove that the expected moments of µ_N converge to those of µ*, i.e.,

    lim_{N→∞} E[ ∫ x^k µ_N(dx) ] = ∫ x^k µ*(dx),  k ∈ N.

By symmetry, all odd moments of both µ_N and µ* are zero, hence it suffices to consider k = 2m. The (2m)-th moments of µ* are known to be 2^{−m} (1/(1+m)) (2m choose m), i.e., 2^{−m} times the m-th Catalan number. Note that the left hand side is equal to the expectation of the normalized trace of M_N^{2m}, i.e.,

    E[ ∫ x^{2m} µ_N(dx) ] = N^{−(m+1)} E[Tr(M_N^{2m})] = N^{−(m+1)} ∑_{i_1,...,i_{2m}} E[M_{i_1,i_2} M_{i_2,i_3} · · · M_{i_{2m},i_1}],

where M_{i,k} denote the entries of the matrix M_N. Some combinatorial work has to be done in order to discard from the sum those terms that do not contribute, and to extract the leading terms, using the independence of the matrix entries and rough bounds on the moments of the matrix entries. The term coming from the subsum over those multi-indices i_1, . . . , i_{2m} with #{i_1, . . . , i_{2m}} < m + 1 is shown to vanish asymptotically, and the one with #{i_1, . . . , i_{2m}} > m + 1 is shown to be equal to zero.
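The moment statement is easy to probe by simulation. In the sketch below (ours) we use the normalization E|M_{i,j}|^2 = 1, under which the eigenvalues rescaled by 1/√N fill [−2, 2] and the even limiting moments are exactly the Catalan numbers 1, 2, 5, ...:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 500

# GUE-type matrix with E|M_ij|^2 = 1; after rescaling by 1/sqrt(N) the
# eigenvalues follow the semicircle on [-2, 2], density (2 pi)^{-1} sqrt(4 - x^2).
G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2.0)
M = (G + G.conj().T) / np.sqrt(2.0)
scaled = np.linalg.eigvalsh(M) / np.sqrt(N)

# Even moments of this semicircle law are the Catalan numbers: 1, 2, 5, ...
m2 = np.mean(scaled ** 2)   # close to 1
m4 = np.mean(scaled ** 4)   # close to 2
```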
The second proof is in the spirit of statistical mechanics and is based on the eigenvalue density in (2.3). Indeed, the convergence is derived with the help of large-deviation type arguments and the minimization of a certain energy functional. In particular, the semicircle law turns out to be the unique minimizer, for which reason it is sometimes called the equilibrium measure for that functional. We partially follow the presentation in [De98, Ch. 6], which is based on [Jo98] and [DMK98]. A general reference for equilibrium measures and related material is [ST97].

Sketch of the second proof: the equilibrium measure method. The starting point is the observation that the joint density P_N of the unscaled eigenvalues in (2.3) is of the form P_N(x) = (1/Z_N) e^{−H_N(x)} with the Hamiltonian

    H_N(x) = ∑_{i=1}^N x_i^2 − 2 ∑_{1≤i<j≤N} log(x_j − x_i),  x ∈ W_N.      (2.14)

In order to obtain a non-degenerate limit law, we have to rescale the λ^{(N)}_i in such a way that both parts of H_N(x) are of the same order in N. Since the second part is always of order N^2, it is clear that we should consider the scaling λ̃^{(N)}_i = λ^{(N)}_i/√N as in the theorem. The vector λ̃^{(N)} of the rescaled quantities has the density

    P̃_N(x) = (1/Z̃_N) exp{ −N ∑_{i=1}^N x_i^2 + 2 ∑_{1≤i<j≤N} log|x_j − x_i| },

where we absorbed some terms into the new normalizing constant. In terms of the empirical measure of the rescaled quantities, µ_N, the Hamiltonian takes the shape H_N ≈ N^2 I(µ_N), where

    I(µ) = ∫ x^2 µ(dx) − ∫∫ log|x − y| µ(dx) µ(dy).      (2.17)

Here we suppressed the diagonal terms, i.e., the summands for i = j, which is a technical issue. Since the integration is only over N variables while the exponent is of order N^2, it is clear that the large-N behavior of the measure P̃_N is determined by the minimizer(s) of the variational problem

    E = inf{ I(µ) : µ a probability measure on R }.      (2.18)

The minimizer(s) are called equilibrium measure(s). According to (a high-dimensional variant of) the well-known Laplace method, the value of E should be the large-N exponential decay rate of P̃_N with speed N^2, and the empirical measures µ_N should converge towards the minimizer(s). The analysis of the problem in (2.18) is not particularly difficult.
Using standard methods one shows the existence and uniqueness of the equilibrium measure and the compactness of its support. Using the Euler-Lagrange equation in the interior of its support, one identifies the equilibrium measure with the semicircle law, µ * . However, in order to show the convergence of µ N towards µ * , one needs to show that the contribution coming from outside a neighborhood of µ * is negligible, which is a more difficult issue. This is carried out in [Jo98].
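The finite-N analogue of the equilibrium measure can be made very concrete: a classical fact going back to Stieltjes says that the zeros of the N-th Hermite polynomial minimize exactly such a discretized energy, each zero balancing the external field against the logarithmic repulsion of the others; their empirical measure approaches the semicircle law. This is easily checked numerically (our illustration):

```python
import numpy as np

n = 30
z, _ = np.polynomial.hermite.hermgauss(n)   # Gauss-Hermite nodes = zeros of H_n

# Force balance at a critical point of the discrete energy
# (Stieltjes electrostatics): z_k = sum_{j != k} 1 / (z_k - z_j).
field = np.array([np.sum(1.0 / (z[k] - np.delete(z, k))) for k in range(n)])

# The empirical measure of z / sqrt(n) approaches the semicircle law of
# radius sqrt(2); its second moment, (n - 1) / (2 n), tends to 1/2.
scaled = z / np.sqrt(n)
second_moment = np.mean(scaled ** 2)
```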
The analysis of this functional and the proof of convergence towards its minimizer is similar to the proof in the special case where Q(x) = x 2 .
(ii) The equilibrium measure has a density, and its support is compact. If ψ denotes the density and [−a, a] its support, then ψ(x) = (a^2 − x^2)^{1/2} h_1(x) for |x| < a, where h_1 is a polynomial of degree 2j − 2. (iii) Even more generally, one may start directly from distributions as in (2.3) with x_i^2 replaced by N V(x_i) (note the factor of N), for some sufficiently regular function V tending to infinity at infinity sufficiently fast. With this ansatz, no rescaling is necessary, i.e., the empirical measure of the unscaled vector (λ^{(N)}_1, . . . , λ^{(N)}_N) converges. The relevant functional is then the one in (2.19) with γ_{2j} x^{2j} replaced by V(x). The Euler-Lagrange equations for this functional are, for some l ∈ R,

    V(x) − 2 ∫ log|x − y| µ(dy) = l  for x in the support of µ,
    V(x) − 2 ∫ log|x − y| µ(dy) ≥ l  for all x ∈ R.

However, for general V, the explicit identification of the minimizer is considerably more difficult and involved. If V is convex, then the support of the equilibrium measure is still an interval; in the general case it consists of a finite union of intervals, provided that V is analytic [DMK98]. (iv) The energy functional I in (2.17) has an interpretation in terms of electrostatic repulsion in the presence of an external quadratic field, if µ is interpreted as a distribution of electrons. The second term is sometimes called logarithmic entropy or Voiculescu's entropy, see [Vo93] and [Vo94]. (v) An advantage of the equilibrium measure method is that it opens up the possibility of a large-deviation principle for the empirical measure of the rescaled eigenvalues. (This is, roughly speaking, the determination of the large-N decay rate of the probability of a deviation of the empirical measure from the semicircle law, in terms of a variational problem involving the energy functional.) The first proof of such a principle is in [BAG97], after pioneering (and less rigorous) work in [Vo93] and [Vo94]. Extensive and accessible lecture notes on large deviation techniques for large random matrices may be found in [Gui04].
(vi) In the course of the equilibrium-measure proof of Theorem 2.3 (see [De98, Theorem 6.96]), for every k ∈ N, also the weak convergence of the k-dimensional marginal of P̃_N towards the k-fold product measure µ*^{⊗k} is proved. As an elementary consequence, N^{−k} times the expected number of k-vectors of distinct rescaled eigenvalues in [a, b] converges towards µ*([a, b])^k. (vii) There is an analogue of the semicircle law for the spectrum of the circular ensembles introduced in Section 2.4, with no normalisation of the eigenvalues required. An innovative technique for deriving this law was introduced in [DS94] (see also [DE01]), where the asymptotic independence and normality of the traces of powers of the random matrix under consideration is shown. Related results are derived in [DS94] for the problem of the longest increasing subsequence of a uniform random permutation, which is introduced in Section 3.5. ✸

Correlation functions
In this section we let P N : W N → [0, ∞) be any probability density on the Weyl chamber W N in (1.2) and λ = (λ 1 , . . . , λ N ) ∈ W N be a random variable with density P N . We introduce the so-called correlation functions of P N , which will turn out to be important for two reasons: (1) much interesting information about the random variable λ can be expressed in terms of the correlation functions, and (2) when specializing P N to an orthogonal polynomial ensemble, the correlation functions admit a determinantal representation which will be fundamental for the asymptotic analysis of the ensemble.
We extend P_N to a permutation invariant function on R^N. Then P̄_N = (1/N!) P_N is a probability density on R^N. For k ∈ {1, . . . , N}, denote by

    P̄^{(k)}_N(x_1, . . . , x_k) = ∫_{R^{N−k}} P̄_N(x_1, . . . , x_k, y_{k+1}, . . . , y_N) dy_{k+1} · · · dy_N      (2.21)

the k-dimensional marginal density of P̄_N. The k-point correlation function is defined as

    R^{(N)}_k = (N!/(N − k)!) P̄^{(k)}_N.      (2.22)

Hence, ((N − k)!/N!) R^{(N)}_k is a probability density on R^k, the marginal density of P̄_N in (2.21). It is a simple combinatorial exercise to see that, for any measurable set A ⊂ R, the quantity ∫_{A^k} R^{(N)}_k(x) d^k x is equal to the expected number of k-tuples (λ_{i_1}, . . . , λ_{i_k}) of distinct particles such that λ_{i_j} ∈ A for all j = 1, . . . , k. In particular, R^{(N)}_1(x) dx is the expected number of particles in dx. As a first important application, the probability that a given number of particles lie in a given set can be expressed in terms of the correlation functions as follows.
Lemma 2.6. For any N ∈ N, any m ∈ {0, 1, . . . , N} and any interval I ⊂ R,

    P(#{i : λ_i ∈ I} = m) = ∑_{k=m}^{N} (−1)^{k−m} (k choose m) (1/k!) ∫_{I^k} R^{(N)}_k(x) d^k x.      (2.23)

Sketch of the proof. We only treat the case m = 0, the general case being a simple extension. Expand

    ∏_{i=1}^N (1 − 1l_I(λ_i)) = ∑_{k=0}^N (−1)^k ζ_k(1l_I(λ_1), . . . , 1l_I(λ_N)),

where the functions ζ_k are the elementary symmetric polynomials, which are defined by the relation ∏_{i=1}^N (z + α_i) = ∑_{k=0}^N ζ_k(α) z^{N−k} for any z ∈ R and α = (α_1, . . . , α_N) ∈ R^N. Now multiply by the density P̄_N and integrate over R^N. Using the explicitly known coefficients of the polynomials ζ_k, and using the permutation invariance of R^{(N)}_k, one arrives at (2.23).

Also the expected number of spacings in the vector λ can be expressed in terms of the correlation functions. For x = (x_1, . . . , x_N) ∈ W_N, u ∈ R and s, t ≥ 0, denote by

    S^{(N)}(s; x) = #{i ∈ {1, . . . , N − 1} : x_{i+1} − x_i ≤ s},      (2.24)
    S^{(N)}_t(s, u; x) = #{i ∈ {1, . . . , N − 1} : x_{i+1} − x_i ≤ s, |x_i − u| ≤ t},      (2.25)

the number of nearest-neighbor spacings in the sequence x_1, . . . , x_N below the threshold s, respectively the number of these spacings between those of the x_1, . . . , x_N that are located in the interval with diameter 2t around u. Clearly, S^{(N)}(s; x) = lim_{t→∞} S^{(N)}_t(s, u; x). It is convenient to extend S^{(N)}(s; ·) and S^{(N)}_t(s, u; ·) to permutation invariant functions on R^N.
Lemma 2.7. For any N ∈ N, t, s > 0 and u ∈ R, the expected numbers E[S^{(N)}(s; λ)] and E[S^{(N)}_t(s, u; λ)] can be expressed as alternating sums of integrals of the correlation functions R^{(N)}_k over suitable domains.

Sketch of the proof. We do this only for t = ∞. For k ≥ 2 and y = (y_1, . . . , y_k) ∈ R^k, consider indicator functions of the form ∏ 1l{|y_i − y_j| ≤ s}, and expand S^{(N)}(s; x), for any x ∈ W_N, in terms of such functions. Multiplying this with the density P_N, integrating over W_N, and using the permutation symmetry of P̄_N = (1/N!) P_N, an obvious change of variables and the symmetry of R^{(N)}_k yields the assertion for t = ∞.

The orthogonal polynomial method
In this section we briefly describe the most fruitful and most commonly used ansatz for the deeper investigation of the density P_N in (2.3): the method of orthogonal polynomials. This technique was first applied to random matrices by Mehta [Me60] but relies on much older research. For the general theory of orthogonal polynomials see [Sz75]. We follow [De98, Sect. 5] and treat a general orthogonal polynomial ensemble of the form

    P_N(x) = (1/Z_N) ∆_N(x)^2 ∏_{i=1}^N e^{−Q(x_i)},  x ∈ R^N,      (2.28)

where Q : R → R is continuous and grows so fast at infinity that all moments of the measure e^{−Q(x)} dx are finite. We normalized P_N to a probability density on R^N.
Let (π_j)_{j∈N_0} denote the sequence of monic polynomials orthogonal with respect to the measure e^{−Q(x)} dx. (In the GUE-case Q(x) = x^2, these are the well-known Hermite polynomials, scaled such that the leading coefficients are one.) Elementary linear manipulations show that the Vandermonde determinant in (1.3) can be expressed in terms of the same determinant with the monomials x^j replaced by the polynomials π_j(x). We normalize the π_j now to obtain an orthonormal system (φ_j)_{j∈N_0} of L^2(R) with respect to the Lebesgue measure: the functions

    φ_j(x) = π_j(x) e^{−Q(x)/2} / ||π_j e^{−Q/2}||_2,  j ∈ N_0,      (2.31)

satisfy ∫_R φ_i(x) φ_j(x) dx = δ_{i,j}. An important role is played by the kernel K_N defined by

    K_N(x, y) = ∑_{j=0}^{N−1} φ_j(x) φ_j(y),  x, y ∈ R.      (2.33)

The k-point correlation function R^{(N)}_k in (2.22) admits the following fundamental determinantal representation.
Lemma 2.8. For any k ∈ {1, . . . , N} and any x_1, . . . , x_k ∈ R,

    R^{(N)}_k(x_1, . . . , x_k) = det[(K_N(x_i, x_j))_{i,j=1,...,k}].      (2.34)

In particular,

    R^{(N)}_1(x) = K_N(x, x),  x ∈ R.      (2.35)

Sketch of the proof. Using the determinant multiplication theorem, it is easily seen that the density P_N may be written in terms of the functions φ_j as

    P_N(x) = (1/Z_N) det[(K_N(x_i, x_j))_{i,j=1,...,N}].

Using the special structure of this kernel and some elegant but elementary integration (see [De98, Lemma 5.27]), one sees that the structure of the density is preserved under successive integration over the coordinates: integrating the n × n determinant over the last coordinate yields (N − n + 1) times the (n − 1) × (n − 1) determinant of the same form. In particular, Z_N = N!, and (2.34) holds.
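For the GUE weight Q(x) = x^2, the φ_j are the normalized Hermite functions, and one can verify numerically that the 1-point correlation function K_N(x, x) in (2.35) integrates to N, the total number of particles (a small check of ours):

```python
import numpy as np
from numpy.polynomial.hermite import hermval, hermgauss
from math import factorial, sqrt, pi

# For Q(x) = x^2 the orthogonal polynomials are the Hermite polynomials H_j
# (w.r.t. the weight e^{-x^2}), with ||H_j e^{-x^2/2}||_2^2 = sqrt(pi) 2^j j!.
# We integrate K_N(x, x) = sum_{j < N} phi_j(x)^2 by Gauss-Hermite quadrature,
# which is exact here because the integrands are polynomials times e^{-x^2}.
N = 5
nodes, weights = hermgauss(60)

total = 0.0
for j in range(N):
    c = np.zeros(j + 1); c[j] = 1.0           # coefficient vector of H_j
    norm2 = sqrt(pi) * 2.0 ** j * factorial(j)
    total += weights @ (hermval(nodes, c) ** 2) / norm2

# total = int K_N(x, x) dx = N, the expected total number of eigenvalues
```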
Remark 2.9 (Determinantal processes). Lemma 2.8 offers an important opportunity for far-reaching generalisations. One calls a point process (i.e., a locally finite collection of random points on the real line) a determinantal process if its correlation functions are given in the form (2.34), where K is, for some measure µ on R having all moments, the kernel of a nonnegative and locally trace class integral operator L^2(R, µ) → L^2(R, µ). Because of [De98, Lemma 5.27], correlation functions that are built according to (2.34) form a consistent family of N-particle distributions and therefore determine a point process on R. To a certain extent, random matrix calculations depend only on the determinantal structure of the correlation functions and may be used as a starting point for generalisations. ✸

Now let λ = (λ_1, . . . , λ_N) ∈ W_N be a random variable with density P_N = N! P̄_N. We now express the probability that a given interval I contains a certain number of λ_i's in terms of the operator K_N on L^2(I) with kernel K_N(x, y).
where Id denotes the identity operator on L^2(I).
Sketch of the proof. From Lemma 2.6 and (2.34) we have On the other hand, for any γ ∈ R, by a classical formula for trace class operators, Now differentiate m times with respect to γ and put γ = 1 to arrive at (2.38).
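Spelled out, the identity obtained by this differentiation can be sketched as follows (conventions as in [De98]; the precise form of (2.38) may differ in notation):

```latex
\mathbb{P}\bigl(\#\{i\colon \lambda_i\in I\}=m\bigr)
 \;=\; \frac{(-1)^m}{m!}\,\frac{\mathrm d^m}{\mathrm d\gamma^m}
   \det\bigl(\mathrm{Id}-\gamma K_N\bigr)\Big|_{\gamma=1},
 \qquad m\in\mathbb N_0,
```

where the determinant is the Fredholm determinant of the operator K_N acting on L^2(I).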

Spacings in the bulk of the spectrum, and the sine kernel
In this section, we explain the limiting spacing statistics in the bulk of the spectrum of a random unitary-invariant (N × N) matrix in the limit N → ∞. We specialize to the matrix distribution in (2.1) with F as in (2.2) and Q(x) = x^{2j} for some j ∈ N. This has the technical advantage of a perfect-scaling property of the eigenvalues: the correct rescaling was pointed out in Remark 2.5(ii). In order to ease the notation, we shall write λ^{(N)} for the rescaled eigenvalue vector. Note that the distribution of λ^{(N)} is the orthogonal polynomial ensemble in (2.28) with Q(x) = N x^{2j}, and we shall stick to that choice of Q from now on. Let ψ : R → [0, ∞) denote the density of the equilibrium measure (i.e., the unique minimizer) for the functional in (2.19) with γ_{2j} = 1. According to the semicircle law, the rescaled eigenvalues λ_i^{(N)} lie asymptotically in the support of ψ, which is the rescaled bulk of the spectrum. In particular, the spacings between neighboring rescaled eigenvalues should be of order 1/N, and hence the spacings of the unscaled eigenvalues are of order N^{1/(2j)−1}. We fix a threshold s > 0 and a point u ∈ supp(ψ)° in the bulk of the rescaled spectrum and want to describe the number of spacings ≤ s/N of the rescaled eigenvalues in a vicinity of u.
the number of spacings ≤ s/N in the sequence λ^{(N)} in a t_N-interval around u; see (2.25). We expect that this number is comparable to t_N N, and we want to find the asymptotic dependence on s and u.
We continue to follow [De98, Sect. 5] and stay in the framework of Section 2.7, keeping all assumptions and all notation, and specializing to Q(x) = N x^{2j}. We indicate the N-dependence of the weight function Q(x) = N x^{2j} by writing K_N^{(N)} for the kernel K_N defined in (2.33) and (2.31). Abbreviate R_1^{(N)} for the 1-point correlation function with respect to the ensemble in (2.28) with Q(x) = N x^{2j}; hence R_1^{(N)}(u) du is the density of 1/N times the number of rescaled eigenvalues in du (see below (2.22)). From (2.35) we have κ_N(u) = R_1^{(N)}(u). Hence, the asymptotics of κ_N(u) can be guessed from the semi-circle law: we should have κ_N(u) ≈ ψ(u) for large N. We shall adapt the scaling of the expected number of spacings to the spot u where they are registered by using the scaling 1/κ_N(u) instead of 1/N. This will turn out to make the value of the scaling limit independent of u.
We use now Lemmas 2.7 and 2.8 and an elementary change of the integration variables to find the expectation of the number of rescaled eigenvalue spacings as follows.
(2.42) Hence, we need the convergence of the rescaled kernel in the determinant on the right hand side. This is provided in Proposition 2.11 below. The well-known sine kernel is defined by (2.43). Proposition 2.11 states that, uniformly on compact subsets in u ∈ supp(ψ)° and for x, y ∈ R, the rescaled kernel converges to the sine kernel; see (2.44). For a rough outline of the proof using Riemann-Hilbert theory, see Section 2.10 below. A full proof, even for more general functions Q, is in [PS97]. See also [D99] and [BI99] for related results. (iv) The main tools for deriving (2.44) (and many asymptotic assertions about orthogonal polynomials) are Riemann-Hilbert theory and the Deift-Zhou steepest descent method. (v) Analogous results for weight functions of Laguerre type (recall (2.6)) for β = 2 have been derived using adaptations of the methods mentioned in (iv). The best available result seems to be in [Va05], where weight functions of the form µ(dx_i) = x_i^α e^{−Q(x_i)} dx_i are considered with α > −1, and Q an even polynomial with positive leading coefficient. The cases β = 1 and β = 4 are considered in [DGKV05]. (vi) The orthogonal and symplectic cases (i.e., β = 1 and β = 4) for Hermite-type weight functions µ(dx_i) = e^{−Q(x_i)} dx_i with Q a polynomial have also been carried out recently [DG05a]. (vii) Using the well-known Christoffel-Darboux formula (2.45) (where q_j = π_j/c_j; see (2.31)), one can express the kernel K_N defined in (2.33) in terms of just two of the orthogonal polynomials. Note the formal analogy between the right hand sides of (2.45) and (2.43). ✸ Now we formulate the main assertion about the limiting eigenvalue spacing for random unitary-invariant matrices. Denote by K_sin the integral operator whose kernel is the sine kernel in (2.43).
is the density of the Gaudin distribution.
Sketch of the proof. In (2.42), replace the normalized r-integral by the integral over the delta-measure on u and use Proposition 2.11 to obtain the left hand side of (2.46). On the other hand, an application of the product differentiation rule shows that this equals the right hand side of (2.48); see (2.49).
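The Fredholm determinant of the sine kernel is easy to evaluate numerically. The sketch below (a quadrature discretization in the spirit of Bornemann's method; the node count and the method are implementation choices, not taken from the text) computes the gap probability det(Id − K_sin) on L^2((0, s)), i.e., the limiting probability that an interval of length s (in units of the mean spacing) contains no rescaled eigenvalue.

```python
import numpy as np

def gap_probability(s, n=40):
    # det(Id - K_sin) on L^2((0, s)) via Gauss-Legendre quadrature;
    # the discretization is an implementation choice (Bornemann's idea),
    # not part of the survey's argument.
    t, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * s * (t + 1.0)                 # nodes mapped to (0, s)
    w = 0.5 * s * w                         # corresponding weights
    K = np.sinc(np.subtract.outer(x, x))    # sine kernel sin(pi u)/(pi u)
    return np.linalg.det(np.eye(n) - np.sqrt(np.outer(w, w)) * K)

# small intervals are almost surely free of eigenvalues: E(s) ~ 1 - s
print(gap_probability(0.1))
```

For small s the result behaves like 1 − s, reflecting that the one-point density equals 1 after rescaling; the Gaudin density of Theorem 2.13 arises from differentiating this determinant twice in the interval length.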
Remark 2.14. (i) It is instructive to compare the asymptotic spacing distribution of the rescaled eigenvalues of a large random matrix (which repel each other) to that of N points distributed independently and uniformly at random on the interval [0, 1] (where no interaction appears).
The latter can be realized as a standard conditional Poisson process, given that there are precisely N Poisson points in [0, 1]. The asymptotic spacing density for the latter is just v → e^{−v}, and for the former it is v → p(v) as in Theorem 2.13. Note that the asymptotics of p(v) for v ↓ 0 and for v → ∞ are both smaller than those of e^{−v}. Indeed, precise asymptotics are known; see [DIZ97] and [De98, Sect. 8.2]. (ii) Another variant of the assertion in (2.46) is about the number of pairs of rescaled, not necessarily neighboring, eigenvalues whose difference is in a fixed interval (a, b); see (2.50). The last term accounts for the pairs i = j. (iii) Proposition 2.11 and Theorem 2.13 are extended to a large class of Wigner matrices in [Jo01a], more precisely to the class of random Hermitian matrices of the form W + aV, where W is a Wigner matrix as in Section 2.2, a > 0 and V is a standard GUE-matrix. The entries of W are not assumed to have a symmetric distribution, but the expected value is supposed to be zero, the variance is fixed, and the (6 + ε)-th moments for any ε > 0 are supposed to be uniformly bounded. This result shows universality of the limiting spacing distribution in a large class of Wigner matrices. The identification of the distribution of the eigenvalues of W + aV uses the interpretation of the eigenvalue process of (W + aV)_{a≥0} as a process of non-colliding Brownian motions as in [Dy62b]; see Section 4.1 below. (iv) After appropriate asymptotic centering and normalization, the distribution of the individual eigenvalues for GUE in the bulk of the spectrum is asymptotically Gaussian. Indeed, for i_N = (a + o(1))N with a ∈ (−√2, √2) (i.e., a in the interior of the support of the semicircle law µ* in (2.10)), the i_N-th eigenvalue, suitably centered and rescaled, is asymptotically standard normal as N → ∞. Also joint distributions of several bulk eigenvalues in this scaling are considered in [Gus04].
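The contrast between eigenvalue repulsion and the Poissonian density e^{−v} in (i) is easy to see in a simulation. The following sketch (matrix size, trial count, and the crude unfolding by the empirical mean spacing are ad hoc choices) samples GUE-type matrices and measures how rare small bulk spacings are:

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials, spacings = 300, 30, []
for _ in range(trials):
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    H = (A + A.conj().T) / 2.0                # GUE-type Hermitian matrix
    ev = np.linalg.eigvalsh(H)
    spacings.extend(np.diff(ev[N//2 - 40 : N//2 + 40]))   # bulk eigenvalues only
s = np.array(spacings)
s /= s.mean()                                 # crude unfolding to unit mean spacing
frac_small = float(np.mean(s < 0.2))
print(frac_small)   # repulsion: far below the Poisson value 1 - e^{-0.2} ~ 0.18
```

For independent uniform points the fraction of unit-mean spacings below 0.2 would be about 0.18; the quadratic vanishing of p(v) at 0 pushes the GUE value down by more than an order of magnitude.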
In particular, joint limit theorems for the rescaled bulk eigenvalues λ_{i_N}^{(N)} are obtained there.

The edge of the spectrum, and the Tracy-Widom distribution

In this section we explain the limiting scaled distribution of the largest eigenvalue, λ_N^{(N)}, of an (N × N) GUE-matrix, i.e., we specialize to j = 1. Let λ^{(N)} = (λ_1^{(N)}, . . . , λ_N^{(N)}) be the vector of the eigenvalues. According to Lemma 2.1, its distribution is the orthogonal polynomial ensemble in (2.28) with Q(x) = x^2. Hence, the distribution of the vector of rescaled eigenvalues is again an orthogonal polynomial ensemble. The event {λ_N^{(N)} ≤ t} is, for any t ∈ R, identical to the event that no eigenvalue falls into the interval (t, ∞). Hence we may apply Lemma 2.10 for I = (t, ∞) and m = 0. In order to obtain an interesting limit as N → ∞, we already know from the semicircle law that t should be chosen as t = √(2N) + O(N^α) for some α < 1/2. It will turn out that α = −1/6 is the correct choice. As in the preceding section, we denote by K_N^{(N)} the kernel K_N defined in (2.33) for the choice Q(x) = N x^2, with the functions φ_j defined in (2.31) such that (2.32) holds. Using Lemma 2.6 for m = 0 and (2.34), we see, after an elementary change of measure, that (2.51) holds. In order to obtain an interesting limit, one needs to show that the integrand on the right hand side of (2.51) converges. This is provided in the following proposition. By Ai : R → R we denote the Airy function, the solution to the differential equation Ai''(x) = x Ai(x) that decays to zero as x → ∞. The corresponding kernel, the Airy kernel, is given by (2.52). Note the formal analogy to (2.43) and (2.45).
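As stated further below (Theorem 2.17 and its proof), the limiting distribution function of the rescaled largest eigenvalue is the Fredholm determinant F_2(s) = det(Id − K_Ai) on L^2((s, ∞)). The following numerical sketch evaluates it by truncating the half-line and applying Gauss-Legendre quadrature (truncation length and node count are ad hoc choices; the diagonal value Ai'(x)^2 − x Ai(x)^2 is the limit of the Airy kernel as y → x):

```python
import numpy as np
from scipy.special import airy

def airy_kernel(x, y):
    ax, apx, _, _ = airy(x)
    ay, apy, _, _ = airy(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        k = (ax * apy - apx * ay) / (x - y)
    # diagonal limit y -> x, using Ai''(x) = x Ai(x):
    return np.where(np.isclose(x, y), apx**2 - x * ax**2, k)

def tracy_widom_cdf(s, n=60, cutoff=14.0):
    # F_2(s) = det(Id - K_Ai) on L^2((s, oo)), truncated to [s, s + cutoff];
    # the kernel decays superexponentially, so the truncation is harmless.
    t, w = np.polynomial.legendre.leggauss(n)
    x = s + 0.5 * cutoff * (t + 1.0)
    w = 0.5 * cutoff * w
    X, Y = np.meshgrid(x, x)
    K = np.sqrt(np.outer(w, w)) * airy_kernel(X, Y)
    return np.linalg.det(np.eye(n) - K)

print(tracy_widom_cdf(0.0))  # probability that the rescaled top eigenvalue stays below s = 0
```

The symmetrized weighting by sqrt(w_i w_j) keeps the discretized operator symmetric, which makes the determinant numerically stable.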
Proposition 2.15 (Edge asymptotics for K_N). Uniformly in x, y ∈ R on compacts, the suitably rescaled kernel K_N^{(N)} converges to the Airy kernel. Remark. (i) Note that the relevant scaling is of order N^{−2/3} at the edge of the spectrum, i.e., at ±√2, while it is of order 1/N in the interior of the support of the equilibrium measure, (−√2, √2) (see Proposition 2.11). (ii) The Airy kernel already appeared in [BB91] in a related connection. Proofs of Proposition 2.15 were found independently by Tracy and Widom [TW93a] and Forrester [Fo93]. (iii) For an extension of Proposition 2.15 to the weight function Q(x) = x^{2j} for some j ∈ N, see, e.g., [De98, Sect. 7.6]. The real and symplectic cases (i.e., β = 1 and β = 4) have also been carried out recently [DG05b].
(iv) Analogous results for weight functions of Laguerre type (recall (2.6) and Remark 2.12(v)) for β = 1 and β = 4 are derived in [DGKV05]. Both boundaries, the 'hard' edge at zero and the 'soft' one at the other end, are considered. ✸ Next, we formulate the asymptotics for the edge of the spectrum, i.e., for the largest (resp. smallest) eigenvalues. Let q : R → R be the solution [HML80] of the Painlevé II differential equation q'' = 2q^3 + xq. It is uniquely determined by requiring that q(x) > 0 for any x < 0, and it has the asymptotics q(x) ∼ Ai(x) as x → ∞. Put F_2(s) = exp(−∫_s^∞ (x − s) q(x)^2 dx), s ∈ R. This is the distribution function of the by now famous GUE Tracy-Widom distribution; its importance is clear from the following. Theorem 2.17 states that the appropriately rescaled largest eigenvalue λ_N^{(N)} converges in distribution towards F_2. Proof. Using (2.51) and Proposition 2.15, we obtain the convergence towards det(Id − K_Ai), where K_Ai is the operator on L^2([s, ∞)) with kernel K_Ai. (i) The relation to the Painlevé equation is derived in [TW94a] using a combination of techniques from operator theory and ordinary differential equations. (ii) There are analogous statements for GOE and GSE [TW96]. The limiting distributions are modifications of the GUE Tracy-Widom distribution. Indeed, for β = 1 and β = 4, respectively (recall (2.5)), F_2 is replaced by (2.58). (iii) The expectation of a random variable with distribution function F_2 is negative, with approximate value −1.7711. (iv) In [TW94a], also the joint distribution of the m largest eigenvalues is treated; they admit an analogous limit theorem. The starting point for the proof is Lemma 2.6 and (2.34). (v) Theorem 2.17 is generalized to a large class of Wigner matrices in [So99].
It is assumed there that the entries have a symmetric distribution with all moments finite such that the asymptotics for high moments are bounded by those of the Gaussian. The proof is a variant of the method of moments (see the first proof of Theorem 2.3). The main point is that the expected trace of high powers (appropriately coupled with the matrix size) of the random matrix is bounded by a certain asymptotics, which is essentially the same as for GUE. Since the expected trace of high moments depends on the matrix entries only via the moments, which are the same within the class considered, the result then follows from a comparison to the known asymptotics for GUE. (vi) If the index i_N is a bit away from the edge N, then the i_N-th largest eigenvalue scales to some Gaussian law. Indeed, if i_N = N − k_N with k_N → ∞, but k_N/N → 0, then, under the correct scaling, one main result of [Gus04] is that the rescaled eigenvalue X_{i_N}^{(N)} is asymptotically standard normal. Also joint distributions of several eigenvalues in this scaling are considered in [Gus04]; in particular, joint limit theorems for λ_{i_N}^{(N)} and λ_{j_N}^{(N)} are obtained.

Some elements of Riemann-Hilbert theory

Apparently, the most powerful technical tool for deriving limiting assertions about orthogonal polynomial ensembles is the Riemann-Hilbert (RH) theory. This theory dates back to the 19th century, was originally introduced for the study of monodromy questions in ordinary differential equations, and has been applied to a host of models in analysis. Applications to orthogonal polynomials were first developed in [FIK90], and this method was first combined with a steepest-descent method in [DZ93]. Since then, a lot of deep results on random matrix theory and related models have been established using a combination of the two methods.
The reformulation in terms of RH theory leaves the intuition of orthogonal polynomial ensembles behind, but creates a new framework, in which a new intuition arises and new technical tools become applicable that are suitable for dealing with the difficulties stemming from the large number of zeros of the polynomials. For a recent general survey on Riemann-Hilbert theory, see [It03]; for a thorough exposition of the adaptation and application of this theory to orthogonal polynomials, see the lectures [De98], as well as [Ku03], [D01] and [BI99].
In this section, we give a rough indication of how to use Riemann-Hilbert theory for scaling limits for orthogonal polynomials; in particular, we outline some elements of the proof of Proposition 2.11. We follow [De98]. Let us start with the definition of a Riemann-Hilbert problem in a situation specialized to our purposes, omitting all technical issues.
Let Σ be a finite union of the images of smooth, oriented curves in C, and suppose there is a smooth function v (called the jump matrix) on Σ with values in the set of complex regular (2 × 2)-matrices. We say that a matrix-valued function Y solves the RH problem (Σ, v) if Y is analytic in C \ Σ, satisfies the jump relation Y_+(x) = Y_−(x) v(x) for x ∈ Σ, and satisfies Y(z) → I as z → ∞, where I is the (2 × 2)-identity matrix, and Y_+(x) and Y_−(x) are the limiting boundary values of Y in x ∈ Σ coming from the positive and negative side of Σ, respectively. The main connection with orthogonal polynomials is in Proposition 2.19 below. Assume that µ(dx) = w(x) dx is a positive measure on R having all moments and a sufficiently regular density w, and let (π_n)_{n∈N_0} be the sequence of orthogonal polynomials for the L^2-inner product with weight w, such that the degree of π_n is n and the leading coefficient is one. Hence, for some k_n > 0, the polynomial k_n π_n has unit norm in L^2(R, w(x) dx). Recall the Cauchy transform; here we think of R as an oriented curve from −∞ to ∞, parametrized by the identity map. Proposition 2.19 (RH problem for orthogonal polynomials, [FIK90], [FIK91]). Fix n ∈ N and consider the jump matrix v(x) with rows (1, w(x)) and (0, 1), x ∈ R. Then the matrix-valued function Y = Y^{(n)}(z), z ∈ C \ R, defined in
(2.62) is the unique solution of the RH problem (2.63). The main advantage of the characterisation of the orthogonal polynomials in terms of a RH problem is that it provides a technical frame in which the difficulties stemming from the oscillations of the polynomials close to their zeros can be resolved. Now we specialize to w(x) = e^{−N Q(x)} with Q(x) = x^{2j} for some j ∈ N; recall Remark 2.5(ii) and Section 2.8. We now write π_n^{(N)} instead of π_n for the orthogonal polynomials. We shall (extremely briefly) indicate how the asymptotics of the N-th orthogonal polynomial π_N^{(N)} can be deduced from RH theory, building on Proposition 2.19.
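For orientation, the solution referred to in (2.62) has, in the conventions of [FIK90] and [De98], the following explicit shape (reproduced here as a sketch; the constant in the second row depends on the chosen normalization of the orthonormal polynomials):

```latex
Y^{(n)}(z)=
\begin{pmatrix}
 \pi_n(z) & \bigl(C(\pi_n w)\bigr)(z)\\[2pt]
 -2\pi i\,k_{n-1}^2\,\pi_{n-1}(z) & -2\pi i\,k_{n-1}^2\,\bigl(C(\pi_{n-1}w)\bigr)(z)
\end{pmatrix},
\qquad z\in\mathbb C\setminus\mathbb R,
```

where C denotes the Cauchy transform; the jump relation Y_+(x) = Y_−(x) v(x) on R then encodes precisely the orthogonality of π_n with respect to the weight w.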
The first main step is a transformation of (2.63) which absorbs the exponential term of the jump matrix into an inverse exponential term in the solution of the new RH problem. For doing this, we need to use some information about the variational formula in (2.19) with γ_{2j} = 1. Recall the Euler-Lagrange equations in (2.20) for the equilibrium measure µ*(dx) = ψ(x) dx, and define the function g accordingly. The intuitive idea behind the choice of g is the fact that, if x*_1, . . . , x*_N ∈ R denote the zeros of π_N^{(N)} and µ_N their empirical measure, then we can write π_N^{(N)} as an exponential of an integral against µ_N; compare also to (1.4). Since the asymptotic statistics of the zeros and of the ensemble particles are very close to each other, we should have π_N^{(N)} ≈ e^{N g}, and e^{N g} will indeed turn out to be the main term in the expansion.
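In formulas, the construction sketched here reads (following [De98]; branch conventions for the logarithm are suppressed):

```latex
g(z)=\int_{\mathbb R}\log(z-x)\,\psi(x)\,\mathrm dx,
\qquad
\pi_N^{(N)}(z)=\prod_{k=1}^N\bigl(z-x_k^*\bigr)
 =\exp\Bigl(N\int_{\mathbb R}\log(z-x)\,\mu_N(\mathrm dx)\Bigr),
```

so that the approximation µ_N ≈ µ* suggests π_N^{(N)}(z) ≈ e^{N g(z)} away from the support of the equilibrium measure.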
Consider the transformed jump matrix v^{(1)}. Then the unique solution, m^{(1)}, of the RH problem (R, v^{(1)}) can easily be calculated from Y^{(n)} in Proposition 2.19; its (1, 1)-entry is π_N^{(N)} e^{−N g}. This means that the leading (exponential) term has been isolated in the transformed RH problem (R, v^{(1)}). It turns out that, outside the support of the equilibrium measure, v^{(1)}(x) is exponentially close to the identity matrix, while inside this support we have the representation (2.67). The relevant integrand is analytic in C, and hence ∫_a^z ψ(t) dt depends on the integration curve from a to z: any two curves lead to a difference by an integer multiple of 2πi. Hence, z → e^{nϕ(z)} is well-defined and analytic in C \ [−a, a], and therefore this is true for its boundary functions on (−a, a), ϕ_+ and ϕ_−. The next main step is a deformation of (R, v^{(1)}), which isolates the second term in the expansion of π_N^{(N)}, which is of fourth-root order and hence much more subtle. Indeed, the decomposition in the second line of (2.67) gives rise to a deformation into a RH problem (Σ, v^{(2)}), where Σ is the union of the real line and two curves from −a to a in the upper and lower half plane, respectively, and v^{(2)} is some suitable jump matrix on Σ. It is relatively easy to prove that, in the L^2-sense, as N → ∞, we have v^{(2)} → v^∞. Hence, the unique solution, m^{(2)}, of the problem (Σ, v^{(2)}) should converge towards the unique solution, m^∞, of the RH problem ([−a, a], v^∞). This is true, but relatively hard to prove, in particular on supp(µ*), and here in particular close to the boundaries ±a. It is easy to compute that (2.68) holds. Computing m^{(2)}, re-substituting m^{(1)} and m^∞, and considering the (1, 1)-entry, we therefore obtain the asymptotics of π_N^{(N)} outside the critical points ±a: if z ∈ C \ supp(µ*), then (2.69) holds. This explains how to derive the Plancherel-Rotach asymptotics for the orthogonal polynomials for the weight function w(x) = e^{−N x^{2j}}. Note that the error terms in (2.69) are locally uniform outside neighborhoods of ±a.
Exploiting the Christoffel-Darboux formula in (2.45), one obtains the statement of Proposition 2.11.
In order to obtain the asymptotics of Proposition 2.15, i.e., the asymptotics of π_N^{(N)}(z) for z close to ±a, one uses an appropriate deformation into a suitable RH problem involving the Airy function; see, e.g., [De98, Sect. 7.6].

Random matrices and the Riemann zeta function
Excitingly, it turned out in the early seventies that the spacings of the zeros of the Riemann zeta function show a close relation to those of the eigenvalues of certain random matrices. The famous Riemann zeta function is defined on {ℜ(s) > 1} as ζ(s) = Σ_{n≥1} n^{−s}. Riemann showed in 1859 that ζ can be meromorphically continued to the whole complex plane, and that the functional equation Γ(s/2) ζ(s) √π = π^s Γ((1 − s)/2) ζ(1 − s) holds. This continuation has simple zeros at the negative even integers and a simple pole at 1, which is the only singularity. Furthermore, there are infinitely many zeros in the so-called critical strip {0 < ℜ(s) < 1}, and none of them is real. These zeros are called the non-trivial zeros; they are located symmetrically around the real axis and around the line {ℜ(s) = 1/2}, the critical line. Denote them by ρ_n = β_n + iγ_n with γ_{−1} < 0 < γ_1 ≤ γ_2 ≤ . . . . The famous Riemann Hypothesis conjectures that β_n = 1/2 for every n, i.e., every non-trivial zero lies on the critical line {ℜ(s) = 1/2}. This is one of the most famous open problems in mathematics and has far-reaching connections to other branches of mathematics. Many rigorous results in analytic number theory are conditional on the truth of the Riemann Hypothesis. There is extensive evidence for it being true, as many partial rigorous results and computer simulations have shown. See [Ed74] and [Ti86] for much more on the Riemann zeta function.
It is known that the number π(x) of prime numbers ≤ x behaves asymptotically as π(x) = Li(x) + O(x^Θ log x) as x → ∞, where Li(x) is the principal value of ∫_0^x (log u)^{−1} du, which is asymptotic to x/log x, and Θ = sup_{n∈N} β_n. Hence, the Riemann Hypothesis is equivalent to a precise asymptotic statement about the prime number distribution.
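A quick numerical illustration of the quality of the approximation π(x) ≈ Li(x) (a sketch; Li is evaluated here as the exponential integral Ei(log x), which coincides with the principal-value integral above):

```python
import numpy as np
from scipy.special import expi

def prime_count(x):
    # sieve of Eratosthenes: the number of primes <= x
    sieve = np.ones(x + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(x**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = False
    return int(sieve.sum())

x = 10_000
pi_x = prime_count(x)        # pi(10^4) = 1229
li_x = expi(np.log(x))       # Li(x) = Ei(log x), the principal-value integral
print(pi_x, li_x - pi_x)     # the error is tiny compared with x / log x
```

Even at x = 10^4 the difference Li(x) − π(x) is far below the error bound x^{1/2} log x that the Riemann Hypothesis would guarantee.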
More interestingly from the viewpoint of orthogonal polynomial ensembles, the Riemann Hypothesis also has much to do with the vertical distribution of the Riemann zeros. Let N(T) be the number of zeros in the critical strip up to height T, counted according to multiplicity. It is known that N(T) = (T/2π) log(T/2πe) + O(log T) as T → ∞. In the pioneering work [Mo73], vertical spacings of the Riemann zeros are considered. Denote by (2.71) the number of pairs of rescaled critical Riemann zeros whose difference lies between a and b. Then it was proved in [Mo73], assuming the Riemann Hypothesis, that this number, properly normalized, converges towards ∫_a^b (1 − (sin(πu)/(πu))^2) du, plus an extra term if 0 ∈ (a, b). The last term accounts for the pairs m = n. Note the close similarity to (2.50).
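Montgomery's pair-correlation density 1 − (sin(πu)/(πu))^2 can be compared with a simulation of the Circular Unitary Ensemble from Section 2.4. The sketch below samples Haar-distributed unitary matrices via the standard QR construction (matrix size and trial count are arbitrary choices) and exhibits the strong suppression of small differences between eigenvalue angles:

```python
import numpy as np

def haar_unitary(n, rng):
    # QR construction of a Haar-distributed unitary matrix (Mezzadri's recipe)
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2.0)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(2)
N, trials, diffs = 30, 200, []
for _ in range(trials):
    theta = np.sort(np.angle(np.linalg.eigvals(haar_unitary(N, rng))))
    levels = theta * N / (2.0 * np.pi)        # unfold: unit mean spacing
    d = np.abs(np.subtract.outer(levels, levels))
    diffs.extend(d[np.triu_indices(N, k=1)])
diffs = np.array(diffs)
near = np.mean(diffs < 0.2)                    # pairs at distance below 0.2
far = np.mean((diffs > 0.8) & (diffs < 1.0))   # pairs at distance in (0.8, 1.0)
print(near / far)   # well below 1: small differences are strongly suppressed
```

The density 1 − (sin(πu)/(πu))^2 vanishes quadratically at 0, so pairs of nearby angles are rare, exactly as for the Riemann zeros in [Od87].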
Calculating millions of zeros, [Od87] confirms these asymptotics with extraordinary accuracy. The Lindelöf Hypothesis is the conjecture that ζ(1/2 + it) = O(t^ε) as t → ∞ for any ε > 0. The (2k)-th moment of the modulus of the Riemann zeta function, I_k(T), was originally studied in an attempt to prove the Lindelöf Hypothesis, which is equivalent to I_k(T) = O(T^ε) as T → ∞ for any ε > 0 and any k. The latter statement has been proved for k = 1 and k = 2. Based on random matrix calculations, [KS00] conjectured a precise asymptotic formula for I_k(T) involving the Barnes G-function G. This so-called Keating-Snaith Conjecture was derived by an asymptotic calculation for the Fourier transform of the logarithm of the characteristic polynomial of a random matrix from the Circular Unitary Ensemble introduced in Section 2.4. This conjecture is one of the rare (albeit non-rigorous) advances of recent decades in the understanding of the Riemann zeros.

Random growth processes
In this section we consider certain classes of random growth processes which turned out in the late 1990s to be closely connected to certain orthogonal polynomial ensembles. There are a number of physically motivated random growth processes which model growing surfaces under the influence of randomly occurring events (like nucleation events) that locally increase a substrate, but have far-reaching correlations in the long run. In one space dimension, for these kinds of growth processes, limiting phenomena are conjectured that share, at least in spirit, some features with random matrices, like fluctuation behavior of power-order 1/3 (instead of the order 1/2 in the central limit theorem and related phenomena) and the universality of certain rescaled quantities. Recently some of these models could be analysed rigorously, after exciting discoveries of surprising relations to orthogonal polynomial ensembles had been made.
Random growth models may be defined in any dimension, and two- and three-dimensional models are of high interest. However, the higher-dimensional cases still seem mathematically intractable, so we restrict ourselves to one-dimensional models in this text. General physics references on growing surfaces are the monographs [BS95] and [Me98]; see also [KS92]. Much background is also provided in [P03] and [Fe04b]. Recent surveys on some growth models that have been solved in recent years by methods analogous to those used in random matrix theory are [Jo01c] and [Ba03].
After a short description in Section 3.1 of one basic model that cannot yet be handled rigorously, we shall treat essentially only two models: the corner-growth model introduced in Section 3.2 and the PNG model introduced in Section 3.6. The main results on these two models are presented in Sections 3.3 and 3.4, respectively in Sections 3.6 and 3.7. The famous and much-studied problem of the longest increasing subsequence of a random permutation is touched upon in Section 3.5, since it is instrumental for the PNG model (and also important in its own right). Furthermore, in Section 3.8, we mention the Plancherel measure as a technically important toy model that links combinatorics and orthogonal polynomials.

The Eden-Richardson model
A fundamental model for random growth is the so-called Eden-Richardson model, which is defined as follows. The model is a random process (A(t))_{t≥0} of subsets of Z^2 such that A(t) ⊂ A(s) for any t < s. At time t = 0, the set A(0) is equal to {0}, the origin in Z^2. We call a site (i, j) ∈ Z^2 \ A(t) active at time t if some neighbor of (i, j) belongs to A(t). As soon as (i, j) is active, a random waiting time w(i, j) starts running, and after this time has elapsed, (i, j) is added to the set process as well. The waiting times w(i, j), (i, j) ∈ Z^2, are assumed to be independent and identically distributed (0, ∞)-valued random variables. They can be discrete or continuous. In the case of N-valued waiting times, we consider the discrete-time process (A(t))_{t∈N_0} instead of (A(t))_{t≥0}. If and only if the distribution of the waiting times is exponential, respectively geometric, does the process (A(t))_{t≥0}, respectively (A(t))_{t∈N_0}, enjoy the Markov property: in the discrete-time case, at each time unit any active site chooses independently with a fixed probability whether it immediately joins the set process or not. In this special case, the model is called the Eden-Richardson model. The Markov property is not present for any other distribution.
Actually, the Eden-Richardson model is equivalent to what probabilists call first-passage percolation, which we will explain more closely in Remark 3.1 below.
The natural question concerns the asymptotic behavior of the set A(t) for large t. It is not difficult to conjecture that a law of large numbers should hold, i.e., that there should be a deterministic set A ⊂ R^2 such that (1/t) A(t) → A as t → ∞. A proof of this fact can be derived using the subadditive ergodic theorem [Ke86], which considers the Markovian case. However, an identification of the limiting set A and closer descriptions of A for general waiting-time distributions seem out of reach. In the physics literature, it is conjectured that the fluctuations are of order t^{1/3}. It is rather hard to analyze the Eden model mathematically rigorously. Reasons for this are that A(t) may (and does) have holes and that the growth proceeds in every direction. No technique has yet been found to attack the asymptotics of the fluctuations rigorously. This is why we do not spend time on the Eden model, but immediately turn to a simpler variant which has been treated successfully.

The corner-growth model
An important simpler variant of the Eden model is known as the corner-growth model. This is a growth model on N_0^2 instead of Z^2, and growth is possible only in corners. At time zero, A(0) is the union of the x-axis N_0 × {0} and the y-axis {0} × N_0. Points in N^2 \ A(t) are called active at time t if their left and their lower neighbors both belong to A(t). As soon as a point (i, j) is active, its individual waiting time w(i, j) starts running, and after it elapses, (i, j) is added to the set. This defines a random process (A(t))_{t≥0} of growing subsets of N_0^2. Again, if the waiting times are N-valued, we consider (A(t))_{t∈N_0}, and the Markov property is present only for the two waiting-time distributions mentioned above: the exponential, respectively the geometric, distribution.
It is convenient to identify every point (i, j) with the square [i − 1/2, i + 1/2) × [j − 1/2, j + 1/2) and to regard A(t) as a subset of [−1/2, ∞)^2. The process (A(t))_{t≥0} consists of an infinite number of growing columns, of which almost all are of zero height and which are ordered non-increasingly in height. One can view these columns as a vector of runners who proceed like independent random walkers, making a unit step after an individual independent waiting time, subject to the rule that the (i + 1)-st runner is stopped by the i-th runner as long as they are on the same level. Note that this is a suppression mechanism, not a conditioning mechanism. A realization of A(t) is as follows (the active sites are marked by '×'). Much of the interest in the corner-growth model stems from the fact that it has a couple of connections to other well-known models and admits several alternate descriptions. Remark 3.1 (Last-passage percolation). Switching the signs of the w(i, j) and ignoring that −w(i, j) is negative, we see that −G(M, N) is the minimal travel time (now with passage 'times' −w(i, j)) from (0, 0) to (M, N), which is the well-known model of first-passage percolation. An interpretation is as follows. If at the origin there is the source of a fluid, whose passage time along the bond (i, j) is −w(i, j), then the set A(t) = {(M, N) : −G(M, N) ≤ t} is the set of bonds that are wet by time t. ✸ Remark 3.2 (Totally asymmetric exclusion process). The boundary of the set A(t) ⊂ [−1/2, ∞)^2 is a curve that begins with infinitely many vertical line segments of unit length, proceeds with finitely many horizontal and vertical line segments of unit length, and ends with infinitely many horizontal line segments of unit length. If a square is added to A(t), then a vertical/horizontal pair of lines is changed into a horizontal/vertical pair.
If we replace vertical lines by a '1' and horizontal lines by a '0', and let the index referring to the main diagonal of R^2 be 0, then we can think of the corner-growth model as a particle process (x_k(t))_{k∈Z} ∈ {0, 1}^Z, where x_k(t) = 1 means that a particle is present at site k at time t. In the case of the geometric waiting-time distribution, the dynamics of this process are as follows. At each discrete time unit, every particle independently moves to the right neighboring site with a fixed probability, provided this site is vacant. Otherwise, it does not move. These are the dynamics of the so-called totally asymmetric exclusion process in discrete time. The event {G(M, N) = t} is the event that the particle that was initially at site 1 − N has moved M steps by time t. There is an analogous representation in continuous time for the exponential waiting-time distribution. ✸ Remark 3.3 (Directed polymers in random environment). Let (S_n)_{n∈N_0} be a simple random walk on Z; then the process (n, S_n)_{n∈N_0} is interpreted as a directed polymer in Z^2. Let (v(i, j))_{i∈N_0, j∈Z} be an i.i.d. field of real random variables. Every monomer (n, S_n) receives the weight βv(n, S_n), where β > 0 is interpreted as the inverse of the temperature. This induces a probability measure Q_{N,β} on N-step paths. In the zero-temperature limit β → ∞, the measure Q_{N,β} is concentrated on those paths (S_0, . . . , S_N) which minimize Σ_{n=0}^N v(n, S_n). This is the analog of the corner-growth model with switched signs of the random variables; compare to (3.2). It is believed that the directed polymer at positive, sufficiently small temperature essentially exhibits the same large-N behavior as the zero-temperature limit, but this conjecture is largely unproven. An account of the recent research on directed polymers in random environment is in [CSY04]. ✸ Remark 3.4 (Tandem queues).
At time zero, there is an infinite number of customers in the first queue, and there is an infinite number of other queues, which are initially empty and have to be passed by every customer one after another. The first customer in any queue (if present) is served after a random waiting time, which has the distribution of the waiting times in the corner-growth model, and he or she then proceeds to the end of the next queue. Then, at every time t, the height of the i-th column of the set A(t) is equal to the number of customers who have passed or reached the i-th queue. A general and systematic discussion of the relation between tandem queues and orthogonal polynomial ensembles appears in [OC03]. ✸ A systematic study of the random variable on the right side of (3.2) and its asymptotics towards Brownian analogs is in [Ba01]; see also [GTW01], [OY02], [BJ02] and [Do03]. In fact, for N fixed and under appropriate moment conditions, in the limit M → ∞, this random variable (after proper centering and rescaling) converges in distribution towards a quantity L(N) built from W_1, . . . , W_N, which are N independent standard Brownian motions on R starting at the origin. Using Donsker's invariance principle, this may be explained as follows. Assume that E[w(1, 1)] = 0 and E[w(1, 1)^2] = 1. The first upstep of a path π in (3.2) may be expected in the (t_{N−1}M)-th step, the second in the (t_{N−2}M)-th step, and so on, where we later optimize over 1 ≥ t_1 ≥ · · · ≥ t_{N−1} ≥ 0. The partial sums of the w(i, ·) then approximate Brownian motions. A rather beautiful fact [Ba01], [GTW01] is that L(N) is in distribution equal to the largest eigenvalue of a GUE matrix, λ_N^{(N)}. (For generalisations of this fact to Brownian motion in the fundamental chamber associated with a finite Coxeter group, see [BBO05].)
Recall from Theorem 2.17 that we may approximate λ^{(N)}_N by 2√N plus fluctuations of order N^{−1/6}, governed by the Tracy-Widom distribution. Combining the limits for M → ∞ and N → ∞, one is led to the appealing conjecture (3.4) (still assuming that E[w(1, 1)] = 0 and E[w(1, 1)^2] = 1) that G(M, N), centered and rescaled accordingly, is asymptotically Tracy-Widom distributed when M and N tend to infinity together. This assertion has indeed been proven independently in [BM05] and [BS05], under the additional assumption that M = o(N^a) for some a < 3/14. The main tool is a classical strong approximation of random walks by Brownian motion, which works so well that M may diverge together with N at some speed. However, the most interesting case is the one where M and N are of the same order, and this case is still open in general. For the two special cases of the geometric and the exponential distribution, (3.4) has been proven for M ≈ const. × N. Our next two sections are devoted to a description of this result.
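The quantity G(M, N) in (3.2) satisfies the recursion G(i, j) = w(i, j) + max{G(i − 1, j), G(i, j − 1)}, which yields a quadratic-time dynamic program. The following minimal Python sketch (the function names are ours, not from the text) computes G for a given weight matrix and samples geometric waiting times as in the corner growth model.

```python
import random

def last_passage(w):
    """Last passage time G(M, N) for a weight matrix w[i][j]: the maximal
    sum of w along up/right paths from (1, 1) to (M, N), computed via the
    recursion G(i, j) = w(i, j) + max(G(i-1, j), G(i, j-1))."""
    M, N = len(w), len(w[0])
    G = [[0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            best = 0
            if i > 0:
                best = max(best, G[i - 1][j])
            if j > 0:
                best = max(best, G[i][j - 1])
            G[i][j] = best + w[i][j]
    return G[M - 1][N - 1]

def geometric_weight(q, rng):
    """Sample P(w = k) = (1 - q) q^k, k = 0, 1, 2, ..."""
    k = 0
    while rng.random() < q:
        k += 1
    return k

rng = random.Random(0)
q = 0.5
w = [[geometric_weight(q, rng) for _ in range(20)] for _ in range(20)]
print(last_passage(w))
```

For M, N of order 20 this is instantaneous; the N^{1/3}-fluctuations of Theorem 3.12 only become visible for much larger systems.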

Johansson's identification of the distribution
In his beautiful work [Jo00a], Kurt Johansson deeply investigated the corner-growth model for the two particular waiting-time distributions, the geometric and the exponential distribution. He identified the distribution of G(M, N) in terms of the distribution of the largest particle of the Laguerre ensemble (see (2.6)) in the exponential case, and of the Meixner ensemble (its discrete analog) in the geometric case.

Proposition 3.5 (Johansson's identification, [Jo00a]). Consider the corner growth model in (3.2), and let the w(i, j) be i.i.d. geometrically distributed with parameter q ∈ (0, 1), i.e., w(i, j) = k ∈ N_0 with probability (1 − q)q^k. Then, for any M, N ∈ N with M ≥ N, and for any t ∈ N,

P(G(M, N) ≤ t) = P^{Me}_{N, q, M−N}( max_{i=1,…,N} x_i < t + N ).   (3.5)

Remark 3.6. (i) The right hand side of (3.5) is the probability that the largest particle in the Meixner ensemble on N_0^N with parameters q and M − N is smaller than t + N.
(ii) There is an extension of Proposition 3.5 to the case where the parameter of the geometric distribution of w(i, j) is of the form a_i b_j for certain numbers a_i, b_j ∈ (0, 1); see [Jo01c, Sect. 2]. (iii) An analogous formula holds for the case of exponentially distributed waiting times, and the corresponding ensemble is the Laguerre ensemble (Gamma distribution in place of the negative binomial distribution), see (2.6). This formula is derived in [Jo00a] using an elementary limiting procedure which produces the exponential distribution from the geometric one. It is remarkable that no direct proof is known yet. Distributions other than the exponential or the geometric one have not yet been successfully treated. ✸

Sketch of the proof of Proposition 3.5. The proof in [Jo00a] relies on combinatorial tools which have been useful in various parts of mathematics for decades. A general reference is [Sa91]. A generalized permutation is an array of two rows with integer entries such that the columns (pairs) are ordered non-decreasingly in the lexicographic sense. An example is

σ = ( 1 1 1 1 1 2 2 2 2 3 4 4
      1 1 3 3 3 1 1 1 3 3 2 3 ),   (3.6)

where the entries of the first and second line are taken from {1, 2, 3, 4} and {1, 2, 3}, respectively. A longest non-decreasing subsequence of the second row has length 8; it consists of all the '1's and the last three '3's. Also the first two '1's and all the '3's form a longest non-decreasing subsequence.
Lemma 3.7 (Matrices and generalized permutations). For any M, N, k ∈ N, the following procedure defines a one-to-one map between the set of (M × N)-matrices (W(i, j))_{i≤M, j≤N} with entries in N_0 and total sum Σ_{i≤M, j≤N} W(i, j) equal to k, and the set of generalized permutations of length k whose first row has entries in {1, …, M} and whose second row has entries in {1, …, N}: Repeat every pair (i, j) ∈ {1, …, M} × {1, …, N} precisely W(i, j) times, and list all pairs in lexicographic order. Under this procedure, the quantity max_{π∈Π(M,N)} Σ_{(i,j)∈π} W(i, j) is mapped onto the length of the longest non-decreasing subsequence of the second row.
As an example for M = 4 and N = 3, the matrix

W = ( 0 0 1 )   (row i = 4)
    ( 0 1 0 )   (row i = 3)
    ( 3 0 2 )   (row i = 2)
    ( 2 0 3 )   (row i = 1)

is mapped onto the generalized permutation σ in (3.6). (In order to appeal to the orientation of the corner growth model, we ordered the rows of W from the bottom to the top, contrary to the order one is used to from linear algebra.) The two paths linking the coordinates (1, 1), (2, 1), (2, 3), (4, 3) and (1, 1), (1, 3), (4, 3), respectively, are maximal paths in (3.2); they correspond to the two longest non-decreasing subsequences mentioned below (3.6).
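The procedure of Lemma 3.7 is easy to make executable. The following Python sketch (the function names are ours) maps a matrix with entries in N_0 to its generalized permutation and checks, on the example above, that the maximal up/right path sum equals the length of the longest non-decreasing subsequence of the second row.

```python
from bisect import bisect_right

def to_generalized_permutation(W):
    """Lemma 3.7: repeat each pair (i, j) exactly W[i][j] times (here with
    1-based indices) and list all pairs in lexicographic order."""
    pairs = []
    for i in range(len(W)):
        for j in range(len(W[0])):
            pairs.extend([(i + 1, j + 1)] * W[i][j])
    pairs.sort()  # lexicographic order
    return pairs

def longest_nondecreasing(seq):
    """Length of the longest non-decreasing subsequence (patience sorting)."""
    tails = []
    for x in seq:
        k = bisect_right(tails, x)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return len(tails)

def max_path_sum(W):
    """Maximum over up/right paths from (1, 1) to (M, N) of the entry sum."""
    M, N = len(W), len(W[0])
    G = [[0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            G[i][j] = W[i][j] + max(G[i - 1][j] if i else 0,
                                    G[i][j - 1] if j else 0)
    return G[-1][-1]

# the example matrix, with row i = 1 listed first
W = [[2, 0, 3], [3, 0, 2], [0, 1, 0], [0, 0, 1]]
sigma = to_generalized_permutation(W)
second_row = [j for (_, j) in sigma]
print(second_row)                                   # 1 1 3 3 3 1 1 1 3 3 2 3
print(longest_nondecreasing(second_row), max_path_sum(W))   # both equal 8
```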
Remark 3.8. (i) For the application of Lemma 3.7 to geometrically distributed random variables W(i, j) = w(i, j), it is of crucial importance that this distribution induces a uniform distribution on the set of (M × N)-matrices with fixed sum of the entries. (ii) Obviously, Lemma 3.7 works a priori only for integer-valued matrices. ✸ The next step is a famous bijection between generalized permutations and Young tableaux. A semi-standard Young tableau^10 is a finite array of rows, non-increasing in lengths, having integer entries which are non-decreasing along the rows and strictly increasing along the columns. The shape of the tableau, λ = (λ_i)_i, is the vector of the lengths of the rows. In particular, λ_1 is the length of the longest row of the tableau, and Σ_i λ_i is the total number of entries. An example of a semi-standard Young tableau with shape λ = (10, 8, 8, 3, 1) and entries in {1, …, 6} is as follows.
1 1 2 2 3 3 3 4 4 6
2 2 3 4 4 4 5 5
3 3 5 5 5 5 6 6
4 5 6
6

Lemma 3.9 (Robinson-Schensted-Knuth (RSK) correspondence, [K70]). For any M, N, k ∈ N, there is a bijection between the set of generalized permutations of length k whose first row has entries in {1, …, M} and whose second row has entries in {1, …, N}, and the set of pairs of semi-standard Young tableaux of the same shape with total number of entries equal to k, such that the entries of the first Young tableau are taken from {1, …, M} and the ones of the second from {1, …, N}. This bijection maps the length of the longest non-decreasing subsequence of the second row of the permutation onto the length of the first row of the tableaux, λ_1.
The algorithm was introduced in [Sc61] for permutations (it is a variant of the well-known patience sorting algorithm) and was extended to generalized permutations in [K70].
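The Schensted row-insertion step underlying this correspondence can be sketched in a few lines of Python (a minimal illustrative implementation of ours, ignoring the recording tableau): each letter is inserted into the first row, bumping the leftmost strictly larger entry into the next row, and the length of the first row of the resulting tableau is the length of the longest non-decreasing subsequence.

```python
from bisect import bisect_right

def rsk_insert(word):
    """Schensted row insertion of a word; returns the insertion tableau P.
    Rows end up weakly increasing, columns strictly increasing."""
    P = []
    for x in word:
        for row in P:
            k = bisect_right(row, x)   # leftmost entry strictly larger than x
            if k == len(row):
                row.append(x)          # x fits at the end of this row
                break
            x, row[k] = row[k], x      # bump the displaced entry downwards
        else:
            P.append([x])              # bumped entry starts a new row
    return P

# the second row of the generalized permutation (3.6)
word = [1, 1, 3, 3, 3, 1, 1, 1, 3, 3, 2, 3]
P = rsk_insert(word)
print(P[0])   # first row; its length is 8, the longest non-decreasing subsequence
```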
So far, the distribution of G(M, N) has been reformulated in terms of the length of the first row of pairs of semi-standard Young tableaux. The next and final tool is a combinatorial formula for the number of such Young tableaux.
Lemma 3.10 (Number of semi-standard Young tableaux). The number of semi-standard Young tableaux of shape λ with entries in {1, …, N} is equal to

∏_{1≤i<j≤N} (λ_i − λ_j + j − i)/(j − i),

where λ_i = 0 for indices i larger than the number of rows.

^10 For the notions of (standard) Young tableaux and Young diagrams, see Section 3.8 below.
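The product formula of Lemma 3.10, ∏_{1≤i<j≤N} (λ_i − λ_j + j − i)/(j − i) with λ padded by zeros, can be checked against brute-force enumeration for small shapes. The following Python sketch (ours) does this.

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def ssyt_count_formula(shape, N):
    """Number of semi-standard Young tableaux of the given shape with entries
    in {1, ..., N}, via the product formula of Lemma 3.10."""
    lam = list(shape) + [0] * (N - len(shape))
    prod = Fraction(1)
    for i in range(N):
        for j in range(i + 1, N):
            prod *= Fraction(lam[i] - lam[j] + j - i, j - i)
    return int(prod)

def ssyt_count_bruteforce(shape, N):
    """Enumerate fillings row by row: rows weakly increase (guaranteed by
    combinations_with_replacement), columns strictly increase."""
    def count(prev, remaining):
        if not remaining:
            return 1
        length = remaining[0]
        total = 0
        for row in combinations_with_replacement(range(1, N + 1), length):
            if prev is None or all(row[c] > prev[c] for c in range(length)):
                total += count(row, remaining[1:])
        return total
    return count(None, list(shape))

print(ssyt_count_formula((2, 1), 3))      # 8
print(ssyt_count_bruteforce((2, 1), 3))   # 8
```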
The reader easily recognizes that the combinatorial formula in Lemma 3.10 is the kernel of the formula in (3.5). Putting together the tools listed so far, one easily arrives at (3.5).
Remark 3.11. An alternative characterization and derivation of the distribution of G(M, N) is given in [Jo02a, Sect. 2.4] in terms of the Krawtchouk ensemble, see (3.8). There, a family of random non-colliding one-dimensional nearest-neighbor processes is analyzed, which is a discrete analog of the multilayer PNG droplet model in Section 3.7 below. The joint distribution of this cascade of processes is identified in terms of the Krawtchouk ensemble, and the marginal distribution of the rightmost process is identified in terms of G(M, N). This implies that G(M, N) is characterized in terms of the largest particle of the Krawtchouk ensemble. ✸

Asymptotics for the Markovian corner-growth model
Having arrived at the description in (3.5), the machinery of statistical mechanics and orthogonal polynomials can be applied. The outcome is the following.
Theorem 3.12 (Asymptotics for the corner-growth model, [Jo00a]). Consider the model of Proposition 3.5. Then, for any γ ≥ 1 and any s ∈ R,

lim_{N→∞} P( G(⌊γN⌋, N) ≤ N f(γ, q) + s σ(γ, q) N^{1/3} ) = F_2(s),

where F_2 is the distribution function of the GUE Tracy-Widom distribution introduced in (2.55), and f(γ, q) and σ(γ, q) are explicit functions.
Remark 3.13. (i) Theorem 3.12 contains in particular a weak law of large numbers for G(⌊γN⌋, N)/N, with limit f(γ, q); equivalently, the rescaled growth set t^{−1}A(t) converges to a deterministic limit shape. (The constants also depend on γ and q.) ✸

Sketch of the proof of Theorem 3.12. Computations similar to those of Section 2.7 imply that the right hand side of (3.5) can be expressed in terms of the kernel of the Meixner ensemble. The Meixner kernel satisfies a scaling limit towards the Airy kernel K_Ai in (2.53), where f = f(γ, q) and σ = σ(γ, q) are as in the theorem. Now the remainder of the proof is analogous to the proof of Theorem 2.17.

Longest increasing subsequences of random permutations
Another problem that has been recognized to be closely related to random growth processes is the problem of the length of the longest increasing subsequence of a random permutation. Let S_N denote the set of permutations of 1, …, N, and let σ be a random variable that is uniformly distributed on S_N, i.e., a random permutation. The length of the longest increasing subsequence of σ is the maximal k such that there are indices 1 ≤ i_1 < i_2 < ··· < i_k ≤ N satisfying σ(i_1) < σ(i_2) < ··· < σ(i_k). We denote this length by ℓ_N. In the early 1960's, Ulam raised the question about the large-N behavior of ℓ_N. Based on computer simulations, he conjectured that c = lim_{N→∞} N^{−1/2} E(ℓ_N) exists in (0, ∞). The verification of this statement and the identification of c have become known as 'Ulam's problem'. A long list of researchers contributed to this problem, including Hammersley, Logan and Shepp, Vershik and Kerov, and Seppäläinen. By the end of the 1990's, it was known that the above limit exists with c = 2, and computer simulations suggested that the fluctuations of ℓ_N around 2√N take place on the scale N^{1/6}. A survey on the history of Ulam's problem may be found in [OR00] and [AD99]. There is a 'Poissonized' version of Ulam's problem, which is closely related and provides a technical tool for the solution of Ulam's problem. Consider a homogeneous Poisson process on (0, ∞)^2 with parameter one, and let L(λ) be the maximal number of points of this process which can be joined together by a polygonal line that starts at (0, 0), ends at (√λ, √λ) and always goes in the up/right direction. Then it is easy to see that the distribution of L(λ) is equal to the distribution of ℓ_{N*}, where N* is a Poisson random variable with parameter λ. Via Tauberian theorems, asymptotics of the distribution of L(λ) as λ → ∞ stand in a one-to-one correspondence with the large-N asymptotics of ℓ_N.
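The length ℓ_N can be computed in O(N log N) time by the patience sorting algorithm mentioned in connection with the RSK correspondence above. A minimal Python sketch (ours) computes ℓ_N for a random permutation; for large N the ratio ℓ_N/√N is close to Ulam's constant c = 2.

```python
import random
from bisect import bisect_left

def lis_length(perm):
    """Length of the longest strictly increasing subsequence, via the
    patience-sorting / Schensted first-row algorithm, in O(N log N)."""
    tails = []
    for x in perm:
        k = bisect_left(tails, x)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return len(tails)

rng = random.Random(1)
N = 10_000
perm = list(range(1, N + 1))
rng.shuffle(perm)
print(lis_length(perm) / N ** 0.5)   # roughly 2 for large N
```

The downward bias of the ratio for moderate N is consistent with the negative mean of the Tracy-Widom correction term of order N^{1/6} in Theorem 3.14.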
There are exact formulas for the distributions of both ℓ_N and L(λ), which have been proved by many authors using various methods (see [BDJ99]); for any n ∈ N, the distribution function P(ℓ_N ≤ n) admits the explicit representation (3.16). In [BDJ99], sophisticated and deep methods are applied to the right hand side of (3.16), which had previously been established in [DZ93], [DZ95] and [DVZ97]: the steepest-descent method for the computation of asymptotics of solutions to certain Riemann-Hilbert problems. As a result, a limit law for ℓ_N is proved, which shows again the universality of the Tracy-Widom distribution for GUE in (2.55):

Theorem 3.14 (Limit law for ℓ_N, [BDJ99]). Let ℓ_N be the length of the longest increasing subsequence of a random permutation which is uniformly distributed over S_N. Then, as N → ∞, the scaled random variable

χ_N = (ℓ_N − 2√N) N^{−1/6}

converges in distribution towards the Tracy-Widom distribution for GUE. Moreover, all moments of χ_N converge towards the moments of this distribution. Both assertions are also true for (L(λ) − 2√λ) λ^{−1/6} as λ → ∞.
Sketch of the proof. We sketch some elements of the proof, partially following [P03, Sect. 3.1]. We consider the Poissonized version and study L(λ^2) instead of L(λ).
The starting point is an explicit expression for the probability of {L(λ^2) ≤ N} for any N ∈ N and any λ > 0 in terms of the Toeplitz determinant D_{N,λ} = det T_N(e^{2λ cos(·)}). More precisely, one has the remarkable formula

P(L(λ^2) ≤ N) = e^{−λ^2} D_{N,λ},

which was first derived in [Ge90], based on the RSK correspondence of Lemma 3.9. On [0, 2π] we introduce the inner product

⟨p, q⟩_λ = ∫_0^{2π} p(e^{iθ}) \overline{q(e^{iθ})} e^{2λ cos θ} dθ/(2π).   (3.19)

Consider the sequence of orthogonal polynomials (π^{(λ)}_N)_{N∈N_0} with respect to ⟨·, ·⟩_λ which is obtained via the Gram-Schmidt algorithm from the monomials z^n, n ∈ N_0. We normalize π^{(λ)}_N such that π^{(λ)}_N(z) = z^N + O(z^{N−1}) and define V^{(λ)}_N = ‖π^{(λ)}_N‖^2_λ, such that we have D_{N,λ} = ∏_{k=0}^{N−1} V^{(λ)}_k. Classical results on orthogonal polynomials (see [Sz75] for some background) imply identities relating the V^{(λ)}_N to the values π^{(λ)}_N(0). For our special choice of the weight function, e^{2λ cos θ}, one obtains a nonlinear recursion relation for the sequence (π^{(λ)}_N(0))_{N∈N_0}, the so-called discrete Painlevé II equation. Indeed, the numbers R^{(λ)}_N = (−1)^{N+1} π^{(λ)}_N(0) satisfy this recursion, and a suitable scaling limit of it, combined with the convergence in (3.23), implies that we are dealing with that solution of (2.54) that is positive in (−∞, 0). Hence, q is identical to the solution q of (2.54) with q(x) ∼ Ai(x) as x → ∞; recall the text below (2.54).
The technically hardest parts of the proof are the proofs of the convergence in (3.23) and of the convergence of the moments, which require an adaptation of the Deift-Zhou steepest descent method for an associated Riemann-Hilbert problem.
3.6 The polynuclear growth model

Consider the boundary of a one-dimensional substrate, which is formed by the graph of a piecewise constant function with unit steps. At each time t ≥ 0, the separation line between the substrate and its complement is given as the graph of the function h(·, t): R → R. Occasionally, there occur random nucleation events at sites x* at times t*, and the process of the pairs (x*, t*) forms a Poisson point process in the space-time half plane R × [0, ∞) with intensity equal to two. Such an event creates an island of height one with zero width, i.e., h has a jump of size one at x* at time t*. Every island grows laterally (deterministically) in both directions with velocity one, but keeps its height, i.e., for small ε > 0 the curve h(·, t* + ε) has the height h(x*, t*) in the ε-neighborhood of x* and stays on the same level as before t* outside this neighborhood. (In a picture, a bullet marks the nucleation event, and two arrows indicate the lateral growth of velocity one in the two directions.) We call the graph of h(·, t* + ε) in the ε-neighborhood of x* a growing island. If two growing islands at the same level collide, then they merge and form one common growing island. The nucleation events occur only on top of a growing island, and they occur with constant density equal to two. This is a (rather simple) model for polynuclear growth (PNG) in 1+1 dimensions. Among the various initial conditions that one could impose, we shall consider only two: the flat case, where h(x, 0) = 0 for any x ∈ R, and the droplet case, where h(x, 0) = −∞ for x ≠ 0 and h(0, 0) = 0. The droplet case may also be defined with the initial condition h(·, 0) = 0 by requiring that nucleation events at time t may happen only in [−t, t].
Let us first consider the droplet case. A beautiful observation [PS00] is the fact that the PNG model stands in a one-to-one relation to the Poissonized problem of the longest increasing subsequence in a rectangle. Using this correspondence, one arrives at the following limit assertion.
Theorem 3.15 (Limit law for the PNG droplet, [PS00]). Let h(x, t) be the height of the PNG droplet at time t over the site x, and let c ∈ [−1, 1]. Then, for any s ∈ R,

lim_{t→∞} P( h(ct, t) ≤ 2t√(1 − c^2) + s (1 − c^2)^{1/6} t^{1/3} ) = F_2(s),

where F_2 is the GUE Tracy-Widom distribution function, see (2.55).
Idea of proof. We consider the space-time half plane R × [0, ∞). For any space-time point (x, t), we call the quarter plane with lower corner at (x, t) and having the two lines through (x, t) with slopes 1 and −1 as boundaries the (x, t)-quarter plane. Recall that nucleation events occur in the (0, 0)-quarter plane only, which is the region {(x, t): |x| ≤ t}. First note that every nucleation event at some space-time point (x*, t*) influences the height of the curve h only within the (x*, t*)-quarter plane. Second, note that any nucleation event (y*, s*) within the (x*, t*)-quarter plane contributes an additional lift by one level (on top of the lift created by the nucleation event (x*, t*)) for any space-time point in the intersection of the two quarter planes of the nucleation events, since the growing island created by (y*, s*) will be on top of the growing island created by (x*, t*). However, if (y*, s*) occurs outside the (x*, t*)-quarter plane, their influences merge to a lift by just one step, since their growing islands merge into one growing island. Now fix a space-time point (x, t) in the (0, 0)-quarter plane. In the space-time plane, consider the rectangle R having two opposite corners at the origin and at the point (x, t) and having sides of slopes 1 and −1 only. Condition on a fixed number N of nucleation events (x*_1, t*_1), …, (x*_N, t*_N) in the rectangle R.
Rotate the rectangle by 45 degrees. The preceding observations imply that only those nucleation events contribute to the height h(x, t) which can be joined together by a polygonal line consisting of straight up/right segments, leading from the corner of the rectangle R at the origin to the corner at (x, t). The maximal number of nucleation events along such a path is equal to the height h(x, t). Hence, the length of the longest increasing subsequence in a unit square with Poisson intensity λ = √(t^2 − x^2) has the same distribution as the height h(x, t). Using Theorem 3.14, one concludes the assertion.
In particular, the fluctuation exponent 1/3 is rigorously proved for this growth model. Such a result has not yet been achieved for any other growth model of this type. However, this fluctuation behavior is conjectured for a large class of (1 + 1)-dimensional growth processes, provided the spatial correlations are not too weak.
The flat initial condition, h(·, 0) = 0, interestingly leads to the GOE Tracy-Widom distribution instead of the GUE one:

Theorem 3.16 (Limit law for the flat PNG model, [PS00]). Let h(x, t) be the height of the flat PNG model at time t over the site x. Then the fluctuations of h(x, t) around 2t, taken on the scale t^{1/3} and suitably rescaled, converge in distribution towards F_1, where F_1 is the GOE Tracy-Widom distribution function, see (2.58).
The above explanation for the droplet case has to be adapted to the flat case by replacing the rectangle with corners at the origin and (x, t) by the triangle with base on the axis t = 0, corner at (x, t) and side slopes 1 and −1. See [Fe04a] for more detailed results on the flat PNG model.
For other initial conditions (among which some lead to the GSE Tracy-Widom distribution, F 4 ), see [P03,Sect. 3]. We recall that a discrete-space version of the PNG model is analyzed in [Jo02a,Sect. 2.4]; see also Remark 3.11. A recent survey on the PNG droplet and its relation to random matrices and further random processes, like directed polymers, the longest increasing subsequence problem and Young tableaux, appears in [FP05].

The multi-layer PNG droplet and the Airy process
The PNG droplet has been analysed also as a process. Interestingly, the limiting distribution of the height process in the correct scaling bears a close relationship to Dyson's Brownian motions (see Theorem 4.1), which is best seen when additional layers of substrate separation lines are introduced. The so-called multilayer PNG droplet (sometimes also called the discrete PNG model) is defined as follows. We write h 0 instead of h and add an infinite sequence of separation lines h ℓ (x, t) with ℓ ∈ −N with initial condition h ℓ (x, 0) = ℓ. Nucleation events only occur to the zeroth line h 0 , and they occur at time t in the interval [−t, t] only (i.e., we consider the droplet case). Every merging event in the ℓ-th line (i.e., every event of an amalgamation of two neighboring growing islands at the same height) creates a nucleation event in the (ℓ − 1)-st line at the same site. Apart from this rule, every island on any level grows deterministically with unit speed into the two lateral directions as before.
Hence, randomness enters only at the zeroth line, and all the other lines are deterministic functions of h_0. Observe that the strict ordering h_ℓ(x, t) > h_{ℓ−1}(x, t) for any x, t, ℓ is preserved. Hence, the lines form a family of non-colliding step functions with unit steps. For any ℓ ∈ −N_0 and at any time t > 0, the ℓ-th line h_ℓ(·, t) is constant equal to ℓ far away from the origin. Only a finite (random) number of them have received any influence from the nucleation events, and only within a finite (random) space-time window. An interesting observation [PS02a] is that, in the long-time limit, the multilayer PNG droplet process approaches the large-N limit of Dyson's Brownian motions (see Section 4.1 below) in the appropriate scaling. More precisely, let λ^{(N)}(t) = (λ^{(N)}_1(t), …, λ^{(N)}_N(t)) ∈ W_N be Dyson's Brownian motion at time t as in Theorem 4.1. Then the Airy process (Ai(y))_{y∈R} may be introduced as the scaled limiting process of the largest particle λ^{(N)}_N, rescaled in space by N^{1/6} around 2√N and in time by N^{−1/3}. Convergence has been established in the sense of finite-dimensional distributions in [PS02a] and in process sense in [Jo03]. For any y, the random variable Ai(y) has the GUE Tracy-Widom distribution F_2 in (2.55), and the family of these random variables forms an interesting stochastic process. The Airy process (Ai(y))_{y∈R} is a stationary, continuous, non-Markovian stochastic process which may be defined via its finite-dimensional distributions, using a determinant formula involving the Airy kernel K_Ai in (2.52) [PS02a]; see also [P03, Sect. 5].
In [PS02a] it turns out that, in the appropriate scaling, the joint distribution of all the lines h_ℓ of the multilayer PNG droplet approaches the Airy process. The consequence of this statement for the first line is that the height of the PNG droplet, centered and rescaled as in Theorem 3.15 and viewed as a process in the spatial variable, converges in distribution towards the Airy process (Ai(y))_{y∈R}.
Some progress on the process version of the flat PNG model has been made in [Fe04a]. Discrete versions of the PNG model have been analysed in [IS04a], [IS04b].
Another interesting process that converges (after proper rescaling) in distribution towards the Airy process is the boundary of the north polar region of the Aztec diamond [Jo05].

The Plancherel measure
The Plancherel measure is a distribution on the set of Young tableaux which exhibits an asymptotic behavior that is remarkably similar to that of the spectrum of Gaussian matrix ensembles. Most interestingly, this measure may be studied for any value of the parameter β, which is restricted to the values 1, 2 and 4 in the matrix cases.
A Young diagram, or equivalently a partition λ = (λ_1, λ_2, …) of N, is an array of N boxes, such that λ_1 of them are in the first row, λ_2 of them in the second, and so on. Here λ is an integer-valued partition such that λ_1 ≥ λ_2 ≥ … and Σ_i λ_i = N. We think of the rows as being arranged on top of each other. A standard Young tableau is a Young diagram together with a filling of the boxes with the numbers 1, …, N such that the numbers are strictly increasing along the rows and along the columns. The vector λ is called the shape of the tableau. For every λ, we denote by d_λ the number of standard Young tableaux of shape λ. For every β > 0, we define the Plancherel measure as the distribution on the set Y_N of partitions of N, which is given by

Pl^{(β)}_N(λ) = d_λ^β / Z^{(β)}_N,   where   Z^{(β)}_N = Σ_{μ∈Y_N} d_μ^β

is the normalization; for β = 2, one has Z^{(2)}_N = N!. We can conceive λ^{(N)}_k, the length of the k-th row, as an N_0-valued random variable under the probability measure Pl^{(β)}_N on Y_N. The case β = 2 has been studied a lot. Basically, it was shown that the limiting statistics of the sequence λ^{(N)}_1, λ^{(N)}_2, …, in an appropriate scaling, is the same as the one for the eigenvalues of an (N × N) GUE matrix. We mention just a few important results. As a by-product of their study of the longest increasing subsequence of a random permutation, in [BDJ99] the limit theorem

lim_{N→∞} P( (λ^{(N)}_1 − 2√N) N^{−1/6} ≤ s ) = F_2(s),   s ∈ R,

is shown, where F_2 is the Tracy-Widom GUE distribution function. The conjecture of [BDJ99] that for every k ∈ N the scaled limiting distribution of λ^{(N)}_k is identical to the one of the k-th largest eigenvalue of a GUE matrix was proved in [BDJ00] for k = 2, and for general k independently in [Jo01b] and [BOO00]. The convergence of the moments of the scaled row lengths was also proved in [BDJ99], [BDJ00] and [Jo01b], respectively. The bulk-scaling limit was also proved in [BOO00]. The case β = 1 (which is analogous to the GOE case instead of the GUE case) has been studied in [BE01].
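For β = 2, the weights d_λ^2/N! can be evaluated via the classical hook length formula for d_λ (standard, though not stated in the text). The following Python sketch (ours) verifies that Σ_λ d_λ^2 = N! for a small N, i.e., that the Plancherel weights sum to 1.

```python
from math import factorial

def partitions(n, maxpart=None):
    """All integer partitions of n, parts in non-increasing order."""
    if maxpart is None:
        maxpart = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def dim_tableaux(shape):
    """Number d_lambda of standard Young tableaux of the given shape,
    via the hook length formula d_lambda = n! / prod(hook lengths)."""
    n = sum(shape)
    # conjugate partition: column lengths of the diagram
    conj = [sum(1 for r in shape if r > c) for c in range(shape[0])]
    prod = 1
    for i, row in enumerate(shape):
        for j in range(row):
            prod *= row - j + conj[j] - i - 1   # hook length of box (i, j)
    return factorial(n) // prod

N = 6
total = sum(dim_tableaux(lam) ** 2 for lam in partitions(N))
print(total == factorial(N))   # True: the weights d^2 / N! sum to 1
```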

Non-colliding random processes
In this section we systematically discuss conditional multi-dimensional random processes given that the components never collide with each other. These processes are sometimes called vicious walkers, non-colliding processes or nonintersecting paths in the literature. The earliest hint at a close connection between non-colliding random processes and orthogonal polynomial ensembles was found in [Dy62b], where a natural process version of the Gaussian Unitary Ensemble was considered. It turned out there that the mutual repellence in (2.3) receives a natural interpretation in terms of Brownian motions conditioned on never colliding with each other. This theme apparently was not taken up in the literature up to the beginning of the nineties, when people working in stochastic analysis turned to this subject. Since the discovery of close connections also with random growth models at the end of the nineties, non-colliding processes became an active research area.

Dyson's Brownian motions
A glance at the Hermite ensemble in (2.3) shows that there is a mutually repelling force between the eigenvalues: the density vanishes if any two of the N arguments approach each other. It does not seem easy to derive an intuitive reason for this repellence from random matrix considerations, but if the matrix M is embedded in a natural process of random Hermitian matrices, then the process of eigenvalues admits a nice identification that makes the repellence natural.
Theorem 4.1 (Dyson's Brownian motions, [Dy62b]). Let (M(t))_{t≥0} be a process of Hermitian (N × N) matrices whose entries on and above the diagonal evolve as independent Brownian motions (real on the diagonal, complex above it), and let λ(t) = (λ_1(t), …, λ_N(t)) ∈ W_N denote the vector of eigenvalues of M(t). Then (λ(t))_{t≥0} is a diffusion on W_N, and its distribution coincides with the distribution of N independent standard Brownian motions, conditioned on never colliding.

This theorem has to be explained in some detail.
Remark 4.2. (i) It is remarkable that, in particular, the process of eigenvalue vectors is Markov. This is not true for, say, the process of the largest eigenvalue, (λ_N(t))_{t≥0}. (ii) The original proof in [Dy62b] nowadays makes an old-fashioned impression. See [Br91] for a modern stochastic-analysis treatment of an analogous matrix-valued process for Wishart matrices in the real-valued setting. In this setting, the process of eigenvalues also turns out to be Markov, but does not admit a conditional interpretation. The latter is also true in the analogous GOE setting. (iii) The event of never colliding, {λ_1(t) < λ_2(t) < ··· < λ_N(t) for all t > 0}, has zero probability for N independent Brownian motions. Hence, the definition of the conditioned process needs some care. First observe that the non-colliding event is the event {λ(t) ∈ W_N for all t > 0}, where

W_N = {x ∈ R^N : x_1 < x_2 < ··· < x_N}

is the Weyl chamber. Probabilists like to write this event as {T = ∞}, where T = inf{t > 0: λ(t) ∈ W_N^c} is the exit time from W_N, the first time of a collision of any two of the particles. One way to construct the conditional process is to condition on the event {T > t} and to prove that there is a limiting process as t → ∞. Another one is to consider the Doob h-transform of the vector of N independent standard Brownian motions with some suitable function h: W_N → (0, ∞) that vanishes on the boundary of W_N and is harmonic for the generator of the N-dimensional Brownian motion in W_N. Remarkably, it turns out that h = ∆_N, the Vandermonde determinant, satisfies all these properties, and that the h-transform with this function h is identical with the outcome of the first construction. See Section 4.2 below for a general treatment of this issue. (iv) The Markov process (λ(t))_{t≥0} has the invariant measure x ↦ ∆_N(x)^2 dx, which cannot be normalized. (v) Also in the real and the symplectic version, the eigenvalue process (λ(t))_{t≥0} turns out to be a diffusion.
An elementary application of Itô's formula shows that (λ(t))_{t≥0} satisfies the stochastic differential equation (see [Br91] for related formulas)

dλ_i(t) = dB_i(t) + (β/2) Σ_{j: j≠i} dt / (λ_i(t) − λ_j(t)),   i = 1, …, N,

where B_1, …, B_N are independent Brownian motions, and β ∈ {1, 2, 4} is the parameter as in (2.5). The generator of the process (λ(t))_{t≥0} is

G f(x) = (1/2) Σ_{i=1}^N ∂_i^2 f(x) + (β/2) Σ_{i=1}^N Σ_{j: j≠i} (x_i − x_j)^{−1} ∂_i f(x).

The generators in the GOE and the GSE setting have a factor different from 2 before the drift term. Apparently this circumstance makes it impossible to conceive these processes as Doob transforms of N independent processes. ✸

Harmonicity of the Vandermonde determinant
Now we consider more general multi-dimensional random processes and their conditional version given that no collision of the particles occurs. As was pointed out in Remark 4.2(iii), the construction needs some care, since the conditioning is on a set of probability zero. It turns out that the rigorous definition may be given for many processes in terms of a Doob h-transform. Let us turn first to the time-continuous case with continuous paths, more precisely, to diffusions. We fix N ∈ N and an interval I and let X = (X(t))_{t≥0} be a stochastic process on I^N. Assume that X_1, …, X_N are N independent and identically distributed diffusions X_i = (X_i(t))_{t≥0} on I. Under the measure P_x they start at X_i(0) = x_i ∈ I, where x = (x_1, …, x_N). By p_t(x, y) we denote the transition density function of any of the diffusions X_i, i.e., P_x(X_i(t) ∈ dy) = p_t(x_i, y) dy. Recall the Weyl chamber W_N and its exit time,

T = inf{t > 0: X(t) ∉ W_N}.   (4.4)

In words: T is the first time of a collision of any two of the N components of the process. Recall the Vandermonde determinant ∆_N(x) = ∏_{1≤i<j≤N}(x_j − x_i). In order to be able to construct a Doob h-transform of the process with h = ∆_N on W_N, the basic requirements are: (1) ∆_N is positive on W_N, (2) ∆_N is harmonic with respect to the generator G of the process X, i.e., G∆_N = 0, and (3) ∆_N(X(t)) is integrable for any t > 0.
Clearly, the first prerequisite is satisfied. Furthermore, it turns out that ∆_N is harmonic for a quite large class of processes:

Lemma 4.3 (Harmonicity of ∆_N, continuous case [KO01]). We have G∆_N = 0 (i.e., ∆_N is harmonic with respect to G) if there are a, b, c ∈ R such that the coefficients of the one-particle generator satisfy the relation (4.5).

The proof consists of an elementary calculation. Lemma 4.3 in particular covers the cases of Brownian motion, squared Bessel processes (squared norms of Brownian motions) and generalized Ornstein-Uhlenbeck processes driven by Brownian motion. For general diffusions, the existence and identification of positive harmonic functions for the restriction of the generator to the Weyl chamber are open.
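In the Brownian case, G∆_N = 0 amounts to the classical fact that the Vandermonde determinant is a harmonic polynomial. Since ∆_N has degree at most 3 in each single coordinate when N ≤ 4, a central second difference computes each one-dimensional second derivative exactly, so harmonicity can be checked in exact rational arithmetic; a minimal Python sketch (ours):

```python
from fractions import Fraction

def vandermonde(x):
    """Delta_N(x) = prod_{i<j} (x_j - x_i)."""
    p = Fraction(1)
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            p *= x[j] - x[i]
    return p

def discrete_laplacian(f, x, h=Fraction(1, 2)):
    """Sum over i of the central second difference
    [f(x + h e_i) - 2 f(x) + f(x - h e_i)] / h^2.  For polynomials of degree
    at most 3 in each coordinate (e.g. Delta_N with N <= 4) this equals the
    true Laplacian exactly."""
    total = Fraction(0)
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        total += (f(xp) - 2 * f(x) + f(xm)) / h ** 2
    return total

x = [Fraction(0), Fraction(1), Fraction(3), Fraction(7)]
print(discrete_laplacian(vandermonde, x))   # 0: Delta_N is harmonic
```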
As a consequence of Lemma 4.3, we can introduce the Doob h-transform of X with h = ∆_N. This is a diffusion on W_N ∩ I^N, which we also denote by X; its law is written P̂_x. Its transition probability function is given by

P̂_x(X(t) ∈ dy) = (∆_N(y)/∆_N(x)) P_x(X(t) ∈ dy, T > t),   x, y ∈ W_N ∩ I^N.   (4.6)

The transformed process is often called the conditional process X, given that there is no collision of the components. In order to justify this name, one must show that

lim_{t→∞} P_x(X(s) ∈ dy | T > t) = P̂_x(X(s) ∈ dy),   for any x, y ∈ W_N, s > 0.   (4.7)

This may be proven in many examples with the help of the Markov property at time s and an asymptotic formula for P_z(T > t) as t → ∞; see Remark 4.10(ii). In Section 4.3 we provide two tools. In Section 4.4, we list a couple of examples of ∆_N-transformed diffusions whose marginal distribution is an orthogonal polynomial ensemble.

The discrete case.
There is also a discrete version of Lemma 4.3. Recall that a vector v on a discrete set I is called a positive regular function for a matrix Q with index set I × I if all the components of v are positive and Qv = v holds.
Lemma 4.4 (Regularity of ∆_N, discrete case [KOR02]). Let (X(n))_{n∈N_0} be a random walk on R^N such that the step distribution is exchangeable and the N-th moment of the steps is finite.

(i) Then ∆_N is harmonic for the walk, i.e., E_x[∆_N(X(1))] = ∆_N(x) for any x ∈ R^N, and the process (∆_N(X(n)))_{n∈N_0} is a martingale with respect to the natural filtration of (X(n))_{n∈N_0}.

(ii) If (X(n))_n takes values in Z^N only and no step from W_N to W_N^c has positive probability, then the restriction of ∆_N to W_N ∩ Z^N is a positive regular function for the restriction P_{W_N} = (p(x, y))_{x,y∈W_N∩Z^N} of the transition matrix P = (p(x, y))_{x,y∈Z^N}, i.e.,

Σ_{y∈W_N∩Z^N} p(x, y) ∆_N(y) = ∆_N(x)   for any x ∈ Z^N ∩ W_N.   (4.8)

The condition in Lemma 4.4(ii) is a severe restriction. It applies in particular to nearest-neighbor walks on Z^N with independent components, and to the multinomial walk, where at each discrete time unit one randomly chosen component makes a unit step, see Section 4.4. Further examples comprise birth and death processes and the Yule process [Do05, Ch. 6].
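Part (i) of Lemma 4.4 can be verified exactly for small instances: for a walk whose components take i.i.d. steps (a special case of an exchangeable step distribution), one can enumerate all step combinations and check E_x[∆_N(X(1))] = ∆_N(x). In the following Python sketch (ours), the asymmetric step law is an arbitrary illustrative choice.

```python
from fractions import Fraction
from itertools import product

def vandermonde(x):
    """Delta_N(x) = prod_{i<j} (x_j - x_i)."""
    p = Fraction(1)
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            p *= x[j] - x[i]
    return p

def expected_vandermonde_after_step(x, step_values, step_probs):
    """E_x[Delta_N(X(1))] for a walk whose N components move independently,
    each step drawn from the same law (an exchangeable step distribution),
    computed by exact enumeration of all step combinations."""
    total = Fraction(0)
    N = len(x)
    for steps in product(range(len(step_values)), repeat=N):
        prob = Fraction(1)
        for s in steps:
            prob *= step_probs[s]
        y = [x[i] + step_values[steps[i]] for i in range(N)]
        total += prob * vandermonde(y)
    return total

x = [Fraction(0), Fraction(2), Fraction(5)]
vals = [Fraction(-1), Fraction(0), Fraction(2)]           # an asymmetric step law
probs = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]
print(expected_vandermonde_after_step(x, vals, probs) == vandermonde(x))  # True
```

Note that the step law need not be centered or symmetric; the martingale property of ∆_N(X(n)) only uses that all components share the same step distribution.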
Under the assumptions of Lemma 4.4(ii), one can again define the h-transform of the Markov chain X by using the transition matrix P̂ = (p̂(x, y))_{x,y∈W_N∩Z^N} with

p̂(x, y) = p(x, y) ∆_N(y)/∆_N(x), x, y ∈ W_N ∩ Z^N.
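Both facts can be checked directly in small cases. The following script is our own illustration (the evaluation points and the step laws are arbitrary choices): it verifies the harmonicity E_x[∆_N(X(1))] = ∆_N(x) for two exchangeable step distributions, and then checks that the h-transformed transition matrix of the multinomial walk is stochastic, which is exactly the regularity statement (4.8).

```python
from fractions import Fraction
from itertools import product, permutations

def vandermonde(x):
    """Delta_N(x) = prod_{i<j} (x_j - x_i)."""
    out = Fraction(1)
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            out *= x[j] - x[i]
    return out

x = (Fraction(0), Fraction(2), Fraction(5))

# Exchangeable step law 1: i.i.d. Bernoulli(1/3) components.
p = Fraction(1, 3)
mean_iid = sum(
    p ** sum(s) * (1 - p) ** (3 - sum(s)) * vandermonde([a + b for a, b in zip(x, s)])
    for s in product((0, 1), repeat=3)
)

# Exchangeable step law 2: uniform over permutations of (1,0,0), i.e. the multinomial step.
steps = sorted(set(permutations((1, 0, 0))))
mean_mult = sum(vandermonde([a + b for a, b in zip(x, s)]) for s in steps) / len(steps)

# Doob-transformed multinomial walk: hat-p(x,y) = p(x,y) Delta_N(y)/Delta_N(x) on W_N.
def hrow(start):
    n, row = len(start), {}
    for i in range(n):
        y = list(start)
        y[i] += 1
        if all(y[k] < y[k + 1] for k in range(n - 1)):  # stay strictly ordered
            row[tuple(y)] = Fraction(1, n) * vandermonde(y) / vandermonde(start)
    return row

row_sums = [sum(hrow(z).values()) for z in [(0, 1, 2), (0, 2, 5), (1, 4, 9, 12)]]
```

Each row sum equals one exactly, since the walk can leave W_N only through configurations where ∆_N vanishes.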
Remark 4.5. Arbitrary random walks with i.i.d. components are considered in [EK05+]. Under the sole assumption of finiteness of sufficiently high moments of the steps, it turns out there that the function

V(x) = ∆_N(x) − E_x[∆_N(X(τ))], where τ = inf{n ∈ N : X(n) ∉ W_N}

is the exit time from W_N, is a positive regular function for the restriction of the walk to W_N. (Note that V coincides with ∆_N in the special case of Lemma 4.4(ii), since there the walk exits W_N only at points where ∆_N vanishes.) Since the steps may now be arbitrarily large, the term 'non-colliding' should be replaced by 'ordered'. Furthermore, an ordered version of the walk is constructed in terms of a Doob h-transform with h = V, and some asymptotic statements are derived, in particular an invariance principle towards Dyson's Brownian motions. ✸

Some tools
We present two technical tools that prove useful in the determination of probabilities of non-collision events.

The Karlin-McGregor formula
An important tool for calculating non-colliding probabilities is the Karlin-McGregor formula, which expresses the marginal distribution of the non-colliding process in terms of a certain determinant.
Lemma 4.6 (Karlin-McGregor formula). Under suitable assumptions (in particular, exchangeability of the dynamics of the N components and continuity of the paths), for any x, y ∈ W_N and t > 0,

P_x(T > t, X(t) ∈ dy) = det((p_t(x_i, y_j))_{i,j=1,...,N}) dy, (4.10)

where p_t denotes the one-dimensional transition density.

Sketch of proof. Expand the determinant on the right-hand side of (4.10) as Σ_{σ∈S_N} sign(σ) Π_i p_t(x_i, y_{σ(i)}) and consider the contribution of the paths with T ≤ t. At time T, the i-th and the j-th coordinate of the process coincide for some i < j, which we may choose minimal. Reflect the path (X(s))_{s∈[T,t]} in the (i, j)-plane, i.e., map this path onto the path (X_λ(s))_{s∈[T,t]}, where λ ∈ S_N is the transposition that interchanges i and j. This map is measure-preserving, and the endpoint of the outcome is at y_{σ∘λ} if X(t) = y_σ. Summing over all i < j (i.e., over all transpositions λ), substituting σ∘λ and noting that its signum is the negative of the signum of σ, we see that the contribution of the colliding paths to the right-hand side of (4.10) is equal to its own negative, i.e., it is equal to zero. The proof is finished.

(i) No assumption on spatial dependence of the transition probability function is needed. (ii) For discrete-time processes on Z^N there is an analogous variant of Lemma 4.6, but a kind of continuity assumption has to be imposed: the steps must be −1, 0 or 1 only, i.e., it must be a nearest-neighbor walk. This ensures that the path steps onto the boundary of W_N when leaving W_N, and hence the reflection procedure can be applied.

For z ∈ W_N ∩ N_0^N, the Schur polynomial is defined as Schur_z(x) = det((x_i^{z_j})_{i,j=1,...,N})/∆_N(x). It turns out that Schur_z is a polynomial in x_1, ..., x_N, and it is homogeneous of degree z_1 + ··· + z_N − N(N−1)/2. Its coefficients are nonnegative integers and may be defined in a combinatorial way. It has the properties Schur_z(1, ..., 1) = ∆_N(z)/∆_N(x*) (where we recall that x* = (0, 1, 2, ..., N−1)), Schur_{x*}(x) = 1 for any x ∈ R^N, and Schur_z(0, ..., 0) = 0 for any z ∈ W_N \ {x*}.
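The stated properties of the Schur polynomials are easy to test from the determinantal definition Schur_z(x) = det((x_i^{z_j})_{i,j})/∆_N(x). The following sketch is our own illustration, with arbitrarily chosen index z and evaluation points; it checks Schur_{x*} ≡ 1, the homogeneity degree, and the value at (1, ..., 1) by approximating the limit.

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    """Determinant via the Leibniz formula (fine for small N)."""
    n = len(m)
    total = Fraction(0)
    for perm in permutations(range(n)):
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if perm[a] > perm[b])
        term = Fraction((-1) ** inv)
        for i in range(n):
            term *= m[i][perm[i]]
        total += term
    return total

def vandermonde(x):
    out = Fraction(1)
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            out *= x[j] - x[i]
    return out

def schur(z, x):
    """Schur_z(x) = det((x_i^{z_j})_{i,j}) / Delta_N(x)."""
    return det([[x[i] ** z[j] for j in range(len(z))] for i in range(len(x))]) / vandermonde(x)

xstar = (0, 1, 2)
x = (Fraction(2), Fraction(3), Fraction(7))
z = (1, 3, 4)

one = schur(xstar, x)                                  # Schur_{x*} is identically 1
deg = sum(z) - len(z) * (len(z) - 1) // 2              # claimed homogeneity degree
c = Fraction(5)
homogeneous = schur(z, tuple(c * xi for xi in x)) == c ** deg * schur(z, x)

# Schur_z(1,...,1) = Delta_N(z)/Delta_N(x*), approximated by letting x -> (1,1,1)
eps = Fraction(1, 10 ** 6)
near_one = schur(z, (Fraction(1), 1 + eps, 1 + 2 * eps))
target = vandermonde(z) / vandermonde(xstar)
```

The exact arithmetic with `fractions.Fraction` avoids the cancellation problems that the quotient of two nearly singular determinants would cause in floating point.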
A combination of the Karlin-McGregor formula and the Schur polynomials identifies the asymptotics of the non-collision probability and the limiting joint distribution of N standard Brownian motions before the first collision:

Lemma 4.8. Let (X(t))_{t≥0} be a standard Brownian motion on R^N, starting at x ∈ W_N. Then, as t → ∞, the limiting distribution of t^{−1/2} X(t), given that T > t, has the density y ↦ (1/Z) ϕ(y) ∆_N(y) on W_N, where ϕ is the standard Gaussian density and Z the normalization constant. Furthermore, P_x(T > t) = ∆_N(x) t^{−N(N−1)/4} (C + o(1)) as t → ∞, for some C > 0.

Note that the limiting distribution is of the form (1.1) with ∆_N^2 replaced by ∆_N, i.e., with β = 1.

Sketch of proof. Lemma 4.6 yields

P_x(T > t, t^{−1/2} X(t) ∈ dy) = ϕ(y) e^{−|x|²/(2t)} ∆_N(z) Schur_y(z) dy, (4.12)

where we put z_i = e^{x_i/√t}. Now we consider the limit as t → ∞. The second factor is 1 + o(1), and the continuity of Schur_y implies that the last factor converges to ∆_N(y)/∆_N(x*). Using the approximation e^{x_i/√t} − 1 ∼ x_i/√t, we see that ∆_N(z) ∼ t^{−N(N−1)/4} ∆_N(x). Hence, the right-hand side of (4.12) equals ϕ(y) ∆_N(y) ∆_N(x) t^{−N(N−1)/4} (1/∆_N(x*) + o(1)). Integrating over y ∈ W_N, we obtain the last statement of the lemma; dividing the left-hand side of (4.12) by P_x(T > t) and using the above asymptotics, we obtain the first one.
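The key approximation in the proof sketch, ∆_N(z) ∼ t^{−N(N−1)/4} ∆_N(x) with z_i = e^{x_i/√t}, can be checked numerically; the following is our own illustration with an arbitrarily chosen starting point x.

```python
import math

def vandermonde(x):
    out = 1.0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            out *= x[j] - x[i]
    return out

x = (-1.0, 0.5, 2.0)
N = len(x)

def ratio(t):
    """Delta_N(z) * t^{N(N-1)/4} / Delta_N(x) with z_i = exp(x_i / sqrt(t))."""
    z = [math.exp(xi / math.sqrt(t)) for xi in x]
    return vandermonde(z) * t ** (N * (N - 1) / 4) / vandermonde(x)

r_small, r_large = ratio(1e4), ratio(1e8)
```

Each factor z_j − z_i equals (x_j − x_i)/√t up to a relative error of order t^{−1/2}, so the ratio approaches 1 at that rate.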

Marginal distributions and ensembles
We apply now the technical tools of Section 4.3 to identify the marginal distribution of some particular ∆ N -transformed processes as certain orthogonal polynomial ensembles.

The continuous case.
Lemma 4.9 (Marginal distribution for ∆_N-transformed diffusions, [KO01]). Assume that I is an interval and X is a diffusion on I^N such that the Vandermonde determinant ∆_N is harmonic for its generator and ∆_N(X(t)) is integrable for any t > 0. Assume that there is an expansion

p_t(x, y)/p_t(0, y) = f_t(x) Σ_{m=0}^∞ (xy)^m a_m(t), t ≥ 0, y ∈ I,

for x in a neighborhood of zero, where a_m(t) > 0 and f_t(x) > 0 satisfy lim_{t→∞} a_{m+1}(t)/a_m(t) = 0 and f_t(0) = 1 = lim_{t→∞} f_t(x). Then, for any t > 0 and some suitable C_t > 0,

lim_{x→0, x∈W_N} P̂_x(X(t) ∈ dy) = C_t ∆_N(y)² P_0(X(t) ∈ dy), y ∈ W_N. (4.13)

Furthermore, for any x ∈ W_N, an analogous identification holds for the marginal distribution of the above conditioned Brownian motion process, given that no collision happens by time S. The matrix diffusion (M_1(t) + i M_2(t))_{t∈[0,S]} is a one-parameter interpolation between GUE and GOE (hence it is sometimes called a two-matrix model). Indeed, recall the well-known independent decomposition of a Brownian motion (B(t))_{t≥0} into the Brownian bridge (B(t) − (t/S)B(S))_{t∈[0,S]} and the linear function ((t/S)B(S))_{t∈[0,S]}, and decompose M_1(t) in that way. Collecting the bridge parts of M_1(t) + i M_2(t) in one process and the remaining variables in the other, we obtain the interpolation. (vi) Infinite systems of non-colliding random processes are considered in [Ba00] and in [KNT04]. The nearest-neighbor discrete-time case is the subject of [Ba00], where the limiting distribution at time N of the left-most walker is derived, conditional on a certain coupling of the total number of left-steps among all the walkers with N; the outcome is a certain elementary transformation of the Tracy-Widom distribution for GUE. In [KNT04], a system of N Brownian motions, conditional on non-collision until a fixed time S, is analysed in the limit N → ∞ and S → ∞, coupled with each other in various ways. ✸

The discrete case.
We present three examples of conditioned random walks on Z^N: the binomial random walk (leading to the Krawtchouk ensemble), the Poisson random walk (leading to the Charlier ensemble), and its de-Poissonized version, the multinomial walk. For i = 1, ..., N, let X_i = (X_i(n))_{n∈N_0} be a binomial walk, i.e., at each discrete time unit the walker makes a step of size one with probability p ∈ (0, 1) or stands still otherwise. The walks X_1, ..., X_N are assumed independent. Under P_x, the N-dimensional process X = (X_1, ..., X_N) starts at X(0) = x ∈ N_0^N. The ∆_N-transformed process on Z^N ∩ W_N has the transition probabilities

P̂_x(X(n) = y) = P_x(X(n) = y, T > n) ∆_N(y)/∆_N(x), x, y ∈ Z^N ∩ W_N, n ∈ N. (4.15)
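Since the ordering difference of two independent binomial walkers changes by at most one per step, the discrete nearest-neighbor variant of the Karlin-McGregor formula applies to this walk. The following exact check for N = 2 is our own illustration; p = 1/3 and n = 3 are arbitrary choices.

```python
from fractions import Fraction
from itertools import product
from math import comb

p, n, x = Fraction(1, 3), 3, (0, 1)   # arbitrary small test case, start in W_2

def pn(a, b):
    """n-step transition probability of a single binomial walker from a to b."""
    k = b - a
    return comb(n, k) * p ** k * (1 - p) ** (n - k) if 0 <= k <= n else Fraction(0)

# left-hand side: enumerate both walkers' steps, keep strictly ordered paths
lhs = {}
for steps in product(product((0, 1), repeat=2), repeat=n):
    pos, pr, alive = list(x), Fraction(1), True
    for s in steps:
        pr *= (p if s[0] else 1 - p) * (p if s[1] else 1 - p)
        pos = [pos[0] + s[0], pos[1] + s[1]]
        if pos[0] >= pos[1]:        # collision: T <= n, path discarded
            alive = False
            break
    if alive:
        lhs[tuple(pos)] = lhs.get(tuple(pos), Fraction(0)) + pr

# right-hand side: the Karlin-McGregor determinant det(p_n(x_i, y_j))
ordered = [(a, b) for a in range(n + 1) for b in range(a + 1, n + 2)]
ok = all(
    lhs.get(y, Fraction(0)) == pn(x[0], y[0]) * pn(x[1], y[1]) - pn(x[0], y[1]) * pn(x[1], y[0])
    for y in ordered
)
```

The determinant reproduces the path count exactly, for every ordered endpoint y.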
The Poisson random walk, X_i = (X_i(t))_{t≥0}, on N_0 makes steps of size one after independent exponential random times. If X_1, ..., X_N are independent, the process X = (X_1, ..., X_N) on N_0^N makes steps after independent exponential times of parameter N, and the steps are uniformly distributed on the set of the N unit vectors. The embedded discrete-time walk is the so-called multinomial walk: at times 1, 2, 3, ..., a randomly picked component makes a unit step. Lemma 4.4(ii) applies here as well, and we may consider the ∆_N-transformed version, both in continuous time and in discrete time. The marginal distribution of the discrete-time process is given in (4.15), and the same formula holds true for the continuous-time version with n ∈ N replaced by t > 0.
Analogously to the binomial walk, the marginal distributions of both conditioned walks, when the process is started at x* = (0, 1, 2, ..., N−1), may be identified in terms of well-known ensembles, which we introduce first. The Charlier ensemble with parameters α > 0 and N ∈ N is given as

Ch_{N,α}(x) = (1/Z_{N,α}) ∆_N(x)² Π_{i=1}^N α^{x_i}/x_i!, x ∈ N_0^N ∩ W_N. (4.17)

The de-Poissonized Charlier ensemble is defined as

dPCh_{N,n}(x) = (1/Z_{N,n}) ∆_N(x)² Π_{i=1}^N 1/x_i!, x ∈ N_0^N ∩ W_N with x_1 + ··· + x_N = n.

Let Mu_{N,n}(y) = n!/(y_1! ··· y_N!) N^{−n}, for y ∈ N_0^N with y_1 + ··· + y_N = n, denote the multinomial distribution; then the free multinomial random walk has the marginals P_x(X(n) = y) = Mu_{N,n}(y − x).
Lemma 4.12 (Marginal distributions of the conditioned walks). (i) Let X = (X(t))_{t≥0} be the Poisson walk. Then the marginal distribution of the conditional process satisfies, for any t > 0 and x ∈ Z^N ∩ W_N,

P̂_{x*}(X(t) = x) = Ch_{N,t}(x). (4.20)

(ii) Let X = (X(n))_{n∈N_0} be the multinomial walk. Then the marginal distribution of the conditional process satisfies, for any n ∈ N_0 and x ∈ N_0^N ∩ W_N,

P̂_{x*}(X(n) = x) = dPCh_{N,n+N(N−1)/2}(x). (4.21)

The proofs of Lemma 4.12 are based on the Karlin-McGregor formula and explicit calculations of certain determinants.
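The identity (4.21) can be verified by exhaustive enumeration for small parameters. The following sketch is our own illustration; it assumes the de-Poissonized Charlier weight ∝ ∆_N(x)² Π_i 1/x_i! on the ordered configurations with fixed total sum (our reading of the definition above), and compares it with the conditioned multinomial walk started at x* for N = 3 and n = 4.

```python
from fractions import Fraction
from itertools import product
from math import factorial

N, n = 3, 4
xstar = tuple(range(N))                       # x* = (0, 1, ..., N-1)
total = n + N * (N - 1) // 2                  # total sum after n steps from x*

def vandermonde(x):
    out = Fraction(1)
    for i in range(N):
        for j in range(i + 1, N):
            out *= x[j] - x[i]
    return out

# marginal of the Doob-transformed multinomial walk, by exhaustive path enumeration
marginal = {}
for choices in product(range(N), repeat=n):
    pos, alive = list(xstar), True
    for i in choices:
        pos[i] += 1
        if any(pos[k] >= pos[k + 1] for k in range(N - 1)):   # collision: path dies
            alive = False
            break
    if alive:
        y = tuple(pos)
        pr = Fraction(1, N ** n) * vandermonde(y) / vandermonde(xstar)
        marginal[y] = marginal.get(y, Fraction(0)) + pr

# de-Poissonized Charlier ensemble on {x in W_N : sum x_i = total}
support = [y for y in product(range(total + 1), repeat=N)
           if sum(y) == total and all(y[k] < y[k + 1] for k in range(N - 1))]

def weight(y):
    w = vandermonde(y) ** 2
    for yi in y:
        w /= factorial(yi)
    return w

Z = sum(weight(y) for y in support)
dpch = {y: weight(y) / Z for y in support}

match = all(marginal.get(y, Fraction(0)) == dpch[y] for y in support)
```

With exact rational arithmetic the two distributions coincide term by term, and the conditioned marginal sums to one, reflecting the martingale property of ∆_N.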