A system of differential equations for the Airy process

The Airy process is characterized by its finite-dimensional distribution functions. We show that each finite-dimensional distribution function is expressible in terms of a solution to a system of differential equations.


I. Introduction
The Airy process τ ↦ A(τ), introduced by Prähofer and Spohn [6], is the limiting stationary process for a certain (1+1)-dimensional local random growth model called the polynuclear growth (PNG) model. It is conjectured that the Airy process is, in fact, the limiting process for a wide class of random growth models. (This class is called the (1+1)-dimensional KPZ universality class in the physics literature [5].) The PNG model is closely related to the length of the longest increasing subsequence in a random permutation [2]. This fact, together with the result of Baik, Deift and Johansson [3] on the limiting distribution of the length of the longest increasing subsequence in a random permutation, shows that the distribution function Pr(A(τ) ≤ ξ) equals the limiting distribution function F_2(ξ) of the largest eigenvalue in the Gaussian Unitary Ensemble [7]. F_2 is expressible either as a Fredholm determinant of a certain trace-class operator (the Airy kernel) or in terms of a solution to a nonlinear differential equation (Painlevé II). The finite-dimensional distribution functions are expressible as Fredholm determinants of trace-class operators (the extended Airy kernel) [4,6]. It is natural to conjecture [4,6] that these distribution functions are also expressible in terms of a solution to a system of differential equations. It is this last conjecture which we prove.
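As a numerical aside (our illustration, not part of the paper; the function names and truncation parameters are our own choices), the Fredholm-determinant characterization of F_2 mentioned above can be evaluated directly by Nyström discretization of the Airy kernel with Gauss–Legendre quadrature:

```python
import numpy as np
from scipy.special import airy

def airy_kernel_matrix(x):
    """Matrix of Airy-kernel values K(u, v) = (Ai(u)Ai'(v) - Ai'(u)Ai(v))/(u - v)
    at the quadrature nodes x, with the u -> v limit on the diagonal."""
    ai, aip, _, _ = airy(x)
    num = np.outer(ai, aip) - np.outer(aip, ai)
    den = x[:, None] - x[None, :]
    np.fill_diagonal(den, 1.0)                 # placeholder; diagonal overwritten below
    K = num / den
    np.fill_diagonal(K, aip**2 - x * ai**2)    # K(u, u) = Ai'(u)^2 - u Ai(u)^2
    return K

def F2(xi, n=80, L=12.0):
    """Nystrom approximation of F_2(xi) = det(I - K) on L^2(xi, infinity),
    with the half-line truncated to [xi, xi + L] (an assumed cutoff)."""
    s, w = np.polynomial.legendre.leggauss(n)  # Gauss-Legendre nodes/weights on [-1, 1]
    x = xi + 0.5 * L * (s + 1.0)               # map nodes to [xi, xi + L]
    w = 0.5 * L * w
    sw = np.sqrt(w)
    M = sw[:, None] * airy_kernel_matrix(x) * sw[None, :]
    return np.linalg.det(np.eye(n) - M)
```

With these parameters F2(0.0) comes out near 0.97, consistent with the known value of the GUE Tracy–Widom distribution function at 0.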

II. Statement
The Airy process is characterized by the probabilities

$$\Pr\bigl(A(\tau_1)\le\xi_1,\ \dots,\ A(\tau_m)\le\xi_m\bigr)=\det\,(I-K),$$

where K is the operator with m × m matrix kernel having entries K_{ij}(x, y) = L_{ij}(x, y) χ_j(y), with χ_j = χ_{(ξ_j, ∞)} and

$$L_{ij}(x,y)=\begin{cases}\displaystyle\int_0^\infty e^{-z(\tau_i-\tau_j)}\,\operatorname{Ai}(x+z)\operatorname{Ai}(y+z)\,dz, & \tau_i\ge\tau_j,\\[6pt]\displaystyle-\int_{-\infty}^0 e^{-z(\tau_i-\tau_j)}\,\operatorname{Ai}(x+z)\operatorname{Ai}(y+z)\,dz, & \tau_i<\tau_j\end{cases}$$

(the extended Airy kernel). We assume throughout that τ_1 < ··· < τ_m, and think of K as acting on the m-fold direct sum of L²(α, ∞), where α < min_j ξ_j.
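As a sanity check on these kernel entries (our addition; the helper names are ours), note that at equal times, τ_i = τ_j, the entry ∫₀^∞ Ai(x+z) Ai(y+z) dz is exactly the ordinary Airy kernel (Ai(x)Ai′(y) − Ai′(x)Ai(y))/(x − y), a standard identity that is easy to confirm numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy

def extended_entry_equal_time(x, y):
    """Equal-time entry of the extended Airy kernel:
    integral_0^infty Ai(x+z) Ai(y+z) dz (the tau_i = tau_j case)."""
    return quad(lambda z: airy(x + z)[0] * airy(y + z)[0], 0.0, np.inf)[0]

def airy_kernel(x, y):
    """The ordinary Airy kernel (Ai(x)Ai'(y) - Ai'(x)Ai(y)) / (x - y)."""
    ax, axp, _, _ = airy(x)
    ay, ayp, _, _ = airy(y)
    return (ax * ayp - axp * ay) / (x - y)
```

For example, extended_entry_equal_time(0.5, 1.0) and airy_kernel(0.5, 1.0) agree to quadrature accuracy.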
To state the result we let R = K(I − K)^{−1}, we let A(x) denote the m × m diagonal matrix diag(Ai(x)), and we let χ(x) denote the diagonal matrix diag(χ_j(x)), where χ_j = χ_{(ξ_j, ∞)}. Then we define the matrix functions Q(x) and Q̃(x) by

$$Q=(I-K)^{-1}A,\qquad \tilde Q=A\,(I-K)^{-1}$$

(where for Q̃ the operators act on the right). These and R(x, y) are functions of the ξ_j as well as of x and y. We define the matrix functions q, q̃ and r of the ξ_j only by

$$q_{ij}=Q_{ij}(\xi_i),\qquad \tilde q_{ij}=\tilde Q_{ij}(\xi_j),\qquad r_{ij}=R_{ij}(\xi_i,\xi_j).$$

Finally we let τ denote the diagonal matrix diag(τ_j). Our differential operator is D = Σ_j ∂_j, where ∂_j = ∂/∂ξ_j, and the system of equations is (1)–(3). Here the brackets denote commutators and ξ denotes the diagonal matrix diag(ξ_j). This can be interpreted as a system of ordinary differential equations if we replace the variables ξ_1, …, ξ_m by ξ_1 + ξ, …, ξ_m + ξ, where ξ_1, …, ξ_m are fixed and ξ is variable. Then D = d/dξ, and the ξ_j are regarded as parameters.
To get a representation for det(I − K), observe that

$$\partial_j K=-L\,\delta_j,$$

where the last factor denotes multiplication by the diagonal matrix with all entries zero except for the j-th, which equals δ(x − ξ_j). We deduce that

$$\partial_j\log\det\,(I-K)=-\operatorname{Tr}\bigl((I-K)^{-1}\,\partial_j K\bigr)=\operatorname{Tr}\bigl((I-K)^{-1}L\,\delta_j\bigr)=r_{jj}.$$

Hence D log det(I − K) = Tr r, and so it follows from (3) that

$$D^2\log\det\,(I-K)=-\operatorname{Tr}\,(q\,\tilde q),$$

since the trace of [τ, r] equals zero. This gives the representation

$$\det\,(I-K)=\exp\Bigl(-\int_0^\infty \eta\,\operatorname{Tr}\,(q\,\tilde q)(\xi+\eta)\,d\eta\Bigr).$$

Here the determinant is evaluated at (ξ_1, …, ξ_m) and in the integral ξ + η is shorthand for (ξ_1 + η, …, ξ_m + η).
If m = 1 the commutators drop out, q = q̃, equations (1) and (2) reduce to Painlevé II, and these are the previously known results.

Note added in proof: After the submission of this manuscript, Adler and van Moerbeke [1] found a PDE, involving different quantities than ours, for the case m = 2.
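For concreteness, the classical m = 1 statements being alluded to can be written out (this display is our addition, restating the known results of [3,7], not an equation of the present paper):

```latex
% m = 1: commutators vanish and \tilde q = q; the system reduces to
% Painlev\'e II with the Hastings--McLeod boundary condition
q''(\xi) = \xi\,q(\xi) + 2\,q(\xi)^3, \qquad
q(\xi) \sim \operatorname{Ai}(\xi) \quad (\xi \to +\infty),
% and the one-point distribution is given by the Tracy--Widom formula
\Pr\bigl(A(\tau) \le \xi\bigr) = F_2(\xi)
  = \exp\!\Bigl(-\int_\xi^\infty (x-\xi)\,q(x)^2\,dx\Bigr).
```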

III. Proof
The proof will follow along the lines of the derivation in [7] for the case m = 1. There the kernel was "integrable" in the sense that its commutator with M, the operator of multiplication by x, was of finite rank. The same was then true of the resolvent kernel, which was useful. But now our kernel is not integrable, so there will necessarily be some differences.
We have already defined the matrix functions Q and Q̃, and we now define the further matrix functions P and u. It follows from (5) and the fact that τ and A commute that (6) holds. Because ρLχ = R, and because of our interpretation of R_{ij}(x, ξ_j) as R_{ij}(x, ξ_j+), we are able to write R δ ρ in place of ρ L δ ρ. The meaning of δ here and later is this: if U and V are matrix functions, then U δ V is the matrix with i, j entry Σ_k U_{ik}(ξ_k) V_{kj}(ξ_k). Thus R δ Q is the matrix function with i, j entry Σ_k R_{ik}(x, ξ_k) Q_{kj}(ξ_k). This makes the notation compatible with our use of δ also as a multiplication operator, so that, for example, (R δ ρ)(A) = R δ (ρ A).
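Since the δ-operation just described is purely finite-dimensional, it can be sketched directly in code (our illustration; the function name and the test matrices are ours, not the paper's):

```python
import numpy as np

def delta_product(U, V, xi):
    """(U delta V)_{ij} = sum_k U(xi_k)[i, k] * V(xi_k)[k, j], where U and V
    map a point x to an m x m matrix and xi = (xi_1, ..., xi_m)."""
    m = len(xi)
    out = np.zeros((m, m))
    for k, x in enumerate(xi):
        # column k of U and row k of V are both evaluated at xi_k
        out += np.outer(U(x)[:, k], V(x)[k, :])
    return out
```

For example, with U(x) = x·I and V(x) the all-ones matrix, (U δ V)_{ij} = ξ_i.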

Next, it follows from (4), and from the identity it yields, that ∂_j Q = −R δ_j Q. Summing over j, adding to (6) and evaluating at ξ_k gives the derivative formula for q. If we define p_{ij} = P_{ij}(ξ_i), then we obtain (8). Next we use the facts that D² − M commutes with L and that M commutes with χ; it follows that the commutator of D² − M with (I − K)^{−1} can be computed. Applying both sides to A and using the fact that (D² − M) A = 0, we obtain an identity whose first term on the right equals R δ Q′. The second term can be interpreted as −R_y δ Q (the subscript denotes the partial derivative), where R_y(x, y) is understood not to contain the delta-function summands which arise from the jumps of R. With this interpretation of R_y we can write the second term on the right as −R_y δ Q. Thus, using this, we obtain from (6), and then from (6) once more, two further identities. It follows from (5) that R δ ρ may be replaced by R δ R (since, recall, R_y does not contain delta-function summands). We use this, together with the identity R δ [τ, Q] − [τ, R δ Q] = −[τ, R δ] Q and the fact that δ and τ commute; the result is the differentiation formula for P. It follows from (7) that ∂_j P = −R δ_j P. Summing over j, adding to the above and evaluating at ξ_k gives the formula for D p; equivalently, in view of (8), we obtain (10). Let us compute D u. Using (7) again, we find that D u = −q q̃; this gives (12).
To get equation (1) we apply D to (8) and use (10) and (12); the resulting identity is (1). Finally, to get equation (2) we use the fact that χ_j(y) ρ_{jk}(y, x) is equal to χ_k(x) times ρ_{kj}(x, y), where ρ is the resolvent kernel for the matrix kernel with i, j entry L_{ji}(x, y) χ_j(y). Hence Q̃_{jk}(x) is equal to χ_k(x) times the Q_{kj}(x) associated with L_{ji}. Consequently, for all the differentiation formulas we have for the Q_{kj}(ξ_k), etc., there are analogous formulas for the Q̃_{jk}(ξ_k), etc. The difference is that we have to reverse subscripts and replace r by r^t and τ by −τ. The upshot is that, by computations analogous to those used to derive (1), we derive another equation, which can be obtained from (1) by making the replacements q → q̃^t, q̃ → q^t, r → r^t, τ → −τ and then taking transposes. The result is equation (2).