Random pure quantum states via unitary Brownian motion

We introduce a new family of probability distributions on the set of pure states of a finite-dimensional quantum system. Without any a priori assumptions, the most natural measure on the set of pure states is the uniform (or Haar) measure. Our family of measures is indexed by a time parameter $t$ and interpolates between a deterministic measure ($t=0$) and the uniform measure ($t=\infty$). The measures are constructed using a Brownian motion on the unitary group $\mathcal U_N$. Remarkably, these measures have a $\mathcal U_{N-1}$ invariance, whereas the usual uniform measure has a $\mathcal U_N$ invariance. We compute several averages with respect to these measures, using as a tool the Laplace transform of the coordinates.


Introduction
Defining models of randomness for quantum objects has become a central problem in quantum information theory which has found many interesting and novel applications. Probability measures on the set of quantum states have been investigated thoroughly in recent years [17,14,2], both in the physical and the mathematical literature. Random quantum channels have also been a subject of interest [10,5,6]. In particular, ensembles of quantum channels are the central idea behind the recent breakthroughs on the additivity conjecture [9]. Random quantum states can arise in two different ways. First, they describe states of open systems which are subjected to random interaction with an (unknown) environment. This aspect has been the starting point of the so-called induced measures, which describe finite dimensional systems in interaction with a usually larger, but finite dimensional, environment. Statistical ensembles of quantum states can also be used to study physical properties of generic states, such as entanglement, purity or other physically relevant quantities.
Defining a model of randomness for quantum states amounts to specifying a probability measure on the set of density matrices on the corresponding Hilbert space. When one considers only pure states (rank one density matrices), it turns out that there exists a unique natural candidate for such a probability measure: the Lebesgue measure on the unit sphere of the underlying Hilbert space, π_∞. This measure on the set of unit vectors, called the Fubini-Study or the uniform distribution, is canonical in the sense that it is invariant under changes of bases: an element ψ of this ensemble has the same distribution as any of its rotations Uψ, for any given unitary matrix U ∈ U_N(C). This invariance property, which characterizes the uniform distribution, justifies its use when no information about the internal structure of the system is known.
Very recently, new ensembles of pure states have been constructed, in order to take into account any available a priori information on the system. In [7], the authors introduce an ensemble of states for quantum multipartite systems. Given a graph which encodes the entanglement between the different parties, a probability measure on the pure states of the total system is constructed. The ensemble of graph states is different from the uniform ensemble because it contains the structural information about the initial entanglement present in the system.
In the present work, we are going to generalize the uniform ensemble in another direction, by removing the rotation invariance condition. Pure state ensembles on a single-partite system with no full invariance property exhibit preferred states which have a larger probability than other states. The model introduced in the present work has a symmetry group of smaller dimension, U_{N-1}(C), and this feature makes it suitable for modeling systems on which some partial information is available. Given a fixed state ψ of a quantum system, we shall introduce a one parameter family of probability measures π_t^ψ indexed by a real parameter t ≥ 0. The parameter t can be interpreted as a time parameter in such a way that for t = 0 the measure is deterministic, being supported on the state ψ, and in the limit t → ∞ the measure π_t^ψ approaches the uniform measure π_∞. For each value of the parameter t, the measure π_t^ψ is invariant under the subgroup of rotations which leave the vector ψ invariant, making it the preferred state of the measure. Our construction is based on the unitary Brownian motion, a stochastic process valued in the set of unitary matrices. At fixed time t, this process itself is an interpolation between the identity matrix (at t = 0) and the unique invariant Haar measure on the compact group of unitary matrices (when t → ∞). The construction is motivated by the similar procedure that was used in the definition of the Fubini-Study measure.
The paper is organized as follows. In section 2 we review the definition and some basic properties of the unitary Brownian motion. Section 3 contains the definition of the new family of ensembles of pure states, which are analyzed in section 4 using the Laplace transform. Finally, we compute in section 5 averages of some quantities of interest in quantum information theory.
Let us now introduce some notation. In quantum information theory, any norm one vector (or pure state) ψ gives rise to a probability vector. More precisely, if (e_i) is the canonical basis of H ≃ C^N, then we can decompose ψ in the following form
$\psi = \sum_{i=1}^N \psi_i e_i.$
To such a state we naturally associate the probability vector
$p = (|\psi_1|^2, \ldots, |\psi_N|^2).$
Physically, if the e_i determine the energy levels of an atom and if ψ represents the wave function describing this atom, the quantity |ψ_i|² represents the probability of being in the energy level e_i, that is P[to be in the state e_i] = |ψ_i|². Another physical motivation related to probability theory concerns the measurement of observables. It is known in quantum mechanics that a physical quantity of a quantum system H ≃ C^N is represented by an observable, which is a self-adjoint operator on H. Let A be an observable and $A = \sum_{i=1}^p \lambda_i P_i$ be its spectral decomposition. If ψ is a reference vector state of H, it follows from the axioms of quantum mechanics that a measurement of the observable A gives a random result λ_i with probability
$\mathbb P[\text{result} = \lambda_i] = \langle \psi, P_i \psi \rangle = \|P_i \psi\|^2.$
In particular, if the projectors P_i are the one dimensional projectors onto Ce_i, we recover the previous probability.
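As a small numerical illustration (the vector below is our own example, not taken from the paper), the passage from a unit vector to its probability vector is immediate:

```python
import numpy as np

# An illustrative vector in C^3, normalized to a pure state
psi = np.array([1 + 1j, 2 + 0j, 0 + 0j])
psi = psi / np.linalg.norm(psi)

# The associated probability vector p_i = |psi_i|^2
p = np.abs(psi) ** 2

assert np.isclose(p.sum(), 1.0)  # probabilities sum to one
```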
Let us now recall some elements of probability theory that we are going to use. If X and Y are two independent real Gaussian random variables of mean 0 and variance 1/2, then Z = X + iY is said to have a complex Gaussian distribution of mean 0 and variance 1. We denote by N_C(0, 1) the law of Z. A complex vector (Z_1, …, Z_n) is said to have a multivariate complex Gaussian distribution N_C^n(0, I_n) if the random variables Z_1, …, Z_n are independent and have distribution N_C(0, 1).
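These conventions can be checked numerically (a quick sketch; the sample size and seed are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = 200_000

# Z = X + iY with X, Y independent real N(0, 1/2), so Z ~ N_C(0, 1)
X = rng.normal(0.0, np.sqrt(0.5), size=samples)
Y = rng.normal(0.0, np.sqrt(0.5), size=samples)
Z = X + 1j * Y

# Empirical moments: E[Z] = 0, E|Z|^2 = 1, E[Z^2] = 0
assert abs(Z.mean()) < 0.02
assert abs((np.abs(Z) ** 2).mean() - 1.0) < 0.02
assert abs((Z ** 2).mean()) < 0.02
```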
We shall also extensively use the Haar (or uniform) measure Haar_N on the unitary group U_N(C); it is the unique probability measure which is invariant by left and right multiplication by unitary elements: for all Borel subsets A ⊆ U_N(C) and all V, W ∈ U_N(C),
$\mathrm{Haar}_N(V A W) = \mathrm{Haar}_N(A).$

Unitary Brownian Motion
This section is devoted to the presentation of the unitary Brownian motion: definition, properties, stochastic calculus, invariant measure. In particular, we present all the ingredients that we are going to use for generating random quantum states.
The unitary Brownian motion refers to the natural definition of a Brownian motion on the unitary group of complex matrices. This is a special case of a Brownian motion on a differential manifold, and more precisely on a compact Lie group (in differential geometry the associated distribution is sometimes called the heat kernel measure). This way, we can consider the Brownian motion on M^sa_N(C), the unique Gaussian process (H_t) which satisfies
$\mathbb E\big[\langle A, H_s \rangle \langle B, H_t \rangle\big] = \min(s, t)\, \langle A, B \rangle, \qquad A, B \in M^{sa}_N(\mathbb C).$
In an equivalent way, the process (H_t) has the same distribution as the random Hermitian matrix whose upper-diagonal coefficients are
$H^{jk}_t = \frac{B^{jk}_t + i C^{jk}_t}{\sqrt{2N}} \quad (j < k), \qquad H^{jj}_t = \frac{D^j_t}{\sqrt N},$
where the (B^{jk}), (C^{jk}) and (D^j) are independent standard real Brownian motions. We have now all the ingredients to define the unitary Brownian motion. This is the process (U_t)_{t ≥ 0}, solution of the stochastic differential equation
(4) $dU_t = i\, dH_t\, U_t - \tfrac12 U_t\, dt.$
In particular, taking expectations, we have
(5) $\mathbb E[U_t] = e^{-t/2}\, U_0.$
As a warm-up calculation, let us check that the process (U_t) is unitary. In order to compute d(U_t U_t^*), we need to use Itô stochastic formulas for matrix valued stochastic processes. Such formulas have been derived in [1, Section 2.1]: if (X_t) and (Y_t) are two matrix valued stochastic processes defined by stochastic differential equations, then
$d(X_t Y_t) = (dX_t)\, Y_t + X_t\, (dY_t) + (dX_t)(dY_t).$
This way, we obtain
$d(U_t U_t^*) = i\big(dH_t\, U_t U_t^* - U_t U_t^*\, dH_t\big) + \Big(\frac{\mathrm{Tr}(U_t U_t^*)}{N}\, I - U_t U_t^*\Big)\, dt.$
Since the unitarity condition is satisfied at t = 0 (U_0 U_0^* = I) and the identity is a solution of the above differential equation, by uniqueness of solutions we have U_t U_t^* = I for all times t.
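A numerical sketch of the UBM: the scheme below replaces each increment by the unitary matrix exp(i ΔH), which agrees with the SDE (4) to first order in dt while keeping the iterates exactly unitary. The entry variance dt/N reflects the normalization ⟨A, B⟩ = N Tr[A*B]; both the stepping scheme and this reading of the normalization are our own assumptions, not constructions from the paper.

```python
import numpy as np
from scipy.linalg import expm

def hermitian_increment(rng, N, dt):
    """Hermitian Gaussian increment with E|dH_jk|^2 = dt/N for every entry
    (an assumed normalization matching <A,B> = N Tr[A*B])."""
    G = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
    H = (G + G.conj().T) / np.sqrt(2)  # GUE matrix with unit entry variance
    return np.sqrt(dt / N) * H

def unitary_bm(rng, N, t, steps):
    """Approximate dU = i dH U - (1/2) U dt by the unitarity-preserving
    stepping U <- exp(i dH) U (first-order accurate in dt)."""
    dt = t / steps
    U = np.eye(N, dtype=complex)
    for _ in range(steps):
        U = expm(1j * hermitian_increment(rng, N, dt)) @ U
    return U

rng = np.random.default_rng(42)
U = unitary_bm(rng, N=5, t=1.0, steps=200)

# Each factor exp(i dH) is unitary, so the product stays exactly unitary.
assert np.allclose(U @ U.conj().T, np.eye(5), atol=1e-10)
```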

2.2. Laplace-Beltrami operator, Markov generator. For the sake of completeness, let us describe the Markov generator of this process. In particular, this allows us to motivate the definition of the Brownian motion from a geometric point of view. To this end, we denote by u_N(C) the Lie algebra of U_N(C) and by (X_1, …, X_{N²}) an orthonormal basis of u_N(C). For all smooth functions F : U_N(C) → C, set
$\Delta F = \sum_{k=1}^{N^2} X_k^2 F,$
where each X_k acts as a first order differential operator. The operator ½∆ is then the Markov generator of the unitary Brownian motion (this justifies the name heat kernel measure which is sometimes used when defining this process). The operator ∆ is actually the Laplace-Beltrami operator on the Riemannian manifold U_N(C) endowed with the Riemannian metric induced by the scalar product on matrices ⟨A, B⟩ = N Tr[A*B]. Let us stress that this operator does not depend on the particular choice of an orthonormal basis. The Markov generator character of ∆ is expressed in the following proposition.
Proposition 2.1. Let F : U_N(C) → C be a smooth function. Then for all t ≥ 0, we have
(10) $F(U_t) = F(U_0) + \sum_{k=1}^{N^2} \int_0^t (X_k F)(U_s)\, d\langle X_k, iH_s \rangle + \frac12 \int_0^t (\Delta F)(U_s)\, ds,$
and the processes (⟨X_k, iH_t⟩), k = 1, …, N², are independent standard real Brownian motions.
The proof of this proposition relies on the classical Itô formula; this is a classical result of stochastic analysis on manifolds. A particular consequence of this proposition is that for all smooth functions F, the process defined by
$M_t = F(U_t) - \frac12 \int_0^t (\Delta F)(U_s)\, ds$
for all t is a martingale (with respect to the natural filtration associated with the Brownian motions (B^{kl}_t, C^{kl}_t, D^k_t)), and (U_t) is the unique process "in distribution" satisfying such a property (this property characterizes the unitary Brownian motion and could be used as a starting definition).
2.3. Invariant measure. As announced in the introduction, the UBM will help us define a new family of random states which interpolates between deterministic and uniformly distributed random states. This relies on the large time behavior and the invariant measure of the UBM.

Theorem 2.2. For any initial unitary condition U_0, the solution of the stochastic differential equation converges in distribution, as t → ∞, to the Haar measure Haar_N on U_N(C). In other words, the Haar measure is the unique invariant measure of the Markov process solution of (11).
The theorem above justifies the property of interpolation between the identity operator (U_0 = I) and the Haar measure for large time (t goes to infinity) for the unitary operator U_t. This property is essential for our definition of new ensembles of random pure states. This fact will be made precise in Section 3.
Theorem 2.2 also shows that, in general, the distribution of the UBM (U_t) is not invariant under unitary multiplication (except under the invariant measure), but the distribution of (U_t) is nevertheless invariant under unitary conjugation and under inversion.
Proposition 2.3. Let (U_t) be the UBM defined by the SDE (11) and let V be any unitary matrix in U_N(C). The processes (V U_t V*) and (U_t^{-1}) have the same distribution as (U_t).

Proof. The property concerning the stochastic process (V U_t V*) follows from the fact that it satisfies the same stochastic differential equation (11) with the same initial condition. Indeed, let (W_t) be defined by W_t = V U_t V* for all t; we have
$dW_t = i\, (V\, dH_t\, V^*)\, W_t - \tfrac12 W_t\, dt.$
Since (V H_t V*) is a Brownian motion on the Hermitian matrices, we see that (W_t) and (U_t) satisfy the same SDE. Hence, as they start from the same initial condition, the two processes must have the same distribution.
The statement for the inverse is an easy consequence of the Itô formula (11) and of the fact that inversion corresponds to taking the adjoint (U_t^{-1} = U_t^*), which is a linear mapping (hence not affected by differentiation).

2.4. Useful formulas. We continue by investigating some properties of the UBM which are going to be useful for studying the random pure states generated by the UBM. In particular, we will be interested in the properties of the coefficients of the matrix (U_t), which we denote by U^{jk}_t, for 1 ≤ j, k ≤ N. The stochastic differential equations satisfied by these elements are of the following form
(12) $dU^{jk}_t = i \sum_{l=1}^N dH^{jl}_t\, U^{lk}_t - \tfrac12 U^{jk}_t\, dt,$
and, for their complex conjugates,
(13) $d\bar U^{jk}_t = -i \sum_{l=1}^N \bar U^{lk}_t\, d\bar H^{jl}_t - \tfrac12 \bar U^{jk}_t\, dt.$
In the next section we will need the following expression
(14) $d|U^{j1}_t|^2 = 2\,\mathrm{Re}\Big( i\, \bar U^{j1}_t \sum_{l=1}^N dH^{jl}_t\, U^{l1}_t \Big) + \Big( \frac1N - |U^{j1}_t|^2 \Big)\, dt.$
The previous formula relies on the stochastic bracket for the elements of (H_t), that is,
(15) $dH^{jk}_t\, dH^{lm}_t = \frac1N\, \delta_{jm} \delta_{kl}\, dt,$
where δ is the Kronecker delta symbol. From this, we obtain the brackets of the matrix coordinates, which are given by
(16) $dU^{jk}_t\, d\bar U^{lm}_t = \frac1N\, \delta_{jl} \delta_{km}\, dt.$

Random Pure States Generated by Unitary Brownian Motion
Before developing our theory for random pure states generated by UBM, we review the definition and the basic properties of the uniform (or Fubini-Study) probability measure on the set of pure states (or unit vectors). The lack of any a priori information on the state ψ of a quantum system described by a Hilbert space H ≃ C^N imposes the choice of a measure which should be invariant by changes of bases. In our setting of finite dimensional complex Hilbert spaces, changes of bases are implemented by unitary operators U ∈ U_N(C). As a consequence, we ask that the uniform probability measure should be unitarily invariant. A probability measure π on the unit ball of H is called unitarily invariant if for all Borel subsets A and for all unitary operators U ∈ U_N(C),
(17) π(UA) = π(A).
The above condition determines the measure π uniquely: it is the normalized surface measure on the unit sphere of C^N, which we shall denote by π_∞, for consistency reasons which shall become clear later. Moreover, we introduce the image measure σ_∞ = sq_# π_∞, where sq : ψ ↦ (|ψ_1|², …, |ψ_N|²). In other words, if the unit vector ψ has distribution π_∞, then the probability vector (|ψ_i|²)_{i=1}^N has distribution σ_∞. Other than the abstract definition of the invariant measure π_∞, there are two more characterizations of this probability measure that are important in what follows: (a) Let X ∈ C^N be a standard complex Gaussian vector. Then X/‖X‖ has distribution π_∞. (b) Let U ∈ U_N(C) be a Haar-distributed random unitary matrix. The first column (or any column, or any row) of U has distribution π_∞.
The first statement above is useful when one needs to sample from π ∞ .The second statement above will be the starting point for the definition of new probability measures on the unit sphere of C N .
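Both characterizations are easy to implement numerically (a sketch; the QR phase correction is the standard device for extracting a Haar unitary from a Gaussian matrix, and the dimension and seed below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8

# (a) Normalize a standard complex Gaussian vector
X = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
psi = X / np.linalg.norm(X)      # psi ~ pi_infinity
assert np.isclose(np.linalg.norm(psi), 1.0)

# (b) First column of a Haar unitary (QR of a Gaussian matrix, phases fixed)
G = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
Q, R = np.linalg.qr(G)
d = np.diagonal(R)
Q = Q * (d / np.abs(d))          # make the QR factorization unique
phi = Q[:, 0]                    # phi ~ pi_infinity as well
assert np.isclose(np.linalg.norm(phi), 1.0)
```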
Start with a fixed vector ψ ∈ C^N of norm one, and define the stochastic process (ψ_t), where
(18) ψ_t = U_t ψ, for all t ≥ 0,
and (U_t) is a UBM starting at U_0 = I_N. This gives rise to a stochastic process valued in the unit sphere, with ψ_0 = ψ. In the sequel, we study the properties of this process, whose distribution at time t we denote by π_t^ψ. As before, the distribution of the probability vector (|ψ^j_t|²)_j is denoted by σ_t^ψ, making explicit the dependence on the initial condition ψ_0 = ψ.
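A draw from π_t^ψ can be sketched by applying a discretized UBM to ψ (the exp(i ΔH) stepping and the dt/N entry variance are our own modeling assumptions, and the step count is an arbitrary accuracy choice):

```python
import numpy as np
from scipy.linalg import expm

def sample_pi_t(rng, psi, t, steps):
    """Draw psi_t = U_t psi for an approximate UBM started at U_0 = I
    (exp(i dH) stepping with assumed entry variance dt/N)."""
    N = len(psi)
    dt = t / steps if steps > 0 else 0.0
    U = np.eye(N, dtype=complex)
    for _ in range(steps):
        G = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
        H = np.sqrt(dt / N) * (G + G.conj().T) / np.sqrt(2)
        U = expm(1j * H) @ U
    return U @ psi

rng = np.random.default_rng(7)
N = 4
psi = np.zeros(N, dtype=complex)
psi[0] = 1.0                                    # initial pure state e_1

psi_0 = sample_pi_t(rng, psi, t=0.0, steps=0)   # t = 0: pi_0^psi = delta_psi
psi_t = sample_pi_t(rng, psi, t=5.0, steps=200)

assert np.allclose(psi_0, psi)                  # deterministic at t = 0
assert np.isclose(np.linalg.norm(psi_t), 1.0)   # remains a unit vector
```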
Definition 3.1. The ensemble of pure states (unit vectors) of C^N having distribution π_t^ψ is called the unitary Brownian motion induced ensemble at time t. The distribution of the squared moduli of the coordinates of a random vector ψ in this ensemble will be denoted by σ_t^ψ.

Let us first discuss the connection between the distribution π_t^ψ and the Haar measure π_∞. First, note that the distribution of ψ_t depends on the initial value ψ, in contrast with Haar distributed random states, whose distribution is invariant. In particular, π_t^ψ is not invariant under all unitary transformations; that is, if V denotes a unitary operator, in general the processes (V ψ_t) and (ψ_t) have different distributions. More precisely, we have the following result.

Proposition 3.2. Consider a unitary operator V and a (possibly random) unit vector ψ. Let (ψ_t) be the process generated by a unitary Brownian motion independent of ψ, with initial condition ψ_0 = ψ. Then the processes (V ψ_t) and (U_t V ψ) have the same distribution. In particular, the processes (V ψ_t) and (ψ_t) have the same distribution if and only if V ψ ∼ ψ.
Proof. This follows from the fact that
$V \psi_t = V U_t \psi = (V U_t V^*)\, V\psi = \tilde U_t\, V\psi.$
Since (U_t) and (Ũ_t) = (V U_t V*) have the same distribution and are independent of ψ and Vψ, the first part is then straightforward. The second part is a trivial consequence of the first part.
Proposition 3.2 is more instructive when we look at deterministic initial conditions. In particular, the processes (V ψ_t) and (ψ_t) have the same distribution if and only if V ψ = ψ, restricting the class of unitary transformations which leave the process invariant. In other words, the distribution of (ψ_t) is invariant under all unitary transformations which fix the initial condition. In conclusion, the process (ψ_t) has a U_{N-1}(C) invariance group, whereas a uniform unit vector ψ ∼ π_∞ has a U_N(C) invariance group.

PDE for the Laplace transform of σ_t^ψ
We are now in a position to develop our model in more detail, using as a main tool a partial differential equation satisfied by the Laplace transform of the amplitude vector. Let ψ ∈ C^N be a fixed unit vector and consider the process generated by a UBM (Ũ_t) with Ũ_0 = I:
(20) ψ_t = Ũ_t ψ.
Consider also a fixed unitary matrix V such that V e_1 = ψ, where e_1 = (1, 0, …, 0) is the first element of the canonical basis of C^N. Then one can write
(21) ψ_t = Ũ_t V e_1 = U_t e_1,
where (U_t) is another UBM, starting at U_0 = V. Note that the particular choice of the matrix V satisfying V e_1 = ψ is not important, because of Proposition 3.2. In this way, the initial condition of the problem has been transferred into the UBM (U_t), and we have ψ^j_t = U^{j1}_t, which corresponds to the first column of the unitary Brownian motion U_t. As was discussed in the Introduction, such a unit norm vector gives rise to a probability vector (|ψ^j_t|²)_j = (|U^{j1}_t|²)_j having distribution σ_t^ψ. This random vector is compactly supported, hence its Laplace transform determines its distribution. Let us define the Laplace transform by (notice the positive sign in the exponential)
$\varphi(\lambda; t) = \mathbb E\Big[ \exp\Big( \sum_{j=1}^N \lambda_j\, |U^{j1}_t|^2 \Big) \Big],$
for all λ = (λ_1, …, λ_N) ∈ C^N and all t ≥ 0. Since the random vector is bounded, the function λ ↦ ϕ(λ; t) is complex analytic for each t.
The partial derivatives of the function ϕ read:
$\frac{\partial \varphi}{\partial \lambda_j}(\lambda; t) = \mathbb E\Big[ |U^{j1}_t|^2 \exp\Big( \sum_{k=1}^N \lambda_k\, |U^{k1}_t|^2 \Big) \Big].$
The following theorem is the main result of this paper, establishing a partial differential equation for the Laplace transform ϕ. In principle, it allows one to recover ϕ and then, by Laplace inversion, the distribution of the probability vector (|U^{j1}_t|²)_{j=1}^N.
Theorem 4.1. The Laplace transform ϕ of the random vector (|U^{j1}_t|²), j = 1, …, N, satisfies the partial differential equation (25).

Proposition 4.2. Let c = |U^{j1}_0|² ∈ [0, 1]. There exists a sequence (a_n)_{n ≥ 0} of real numbers (depending on c) such that
(43) $\varphi(\lambda; t) = \sum_{n \ge 0} a_n\, e^{-\Lambda_n t}\, \lambda^n\, {}_1F_1(n+1;\, N+2n;\, \lambda),$
where ${}_1F_1$ denotes the Kummer confluent hypergeometric function.

Proof. Equation (29) follows from equation (25) of Theorem 4.1 by letting λ_k = 0 for all k ≠ j. In the rest of the proof, we shall drop the index j, since the initial condition is encoded in ϕ(λ; 0) = e^{λc}. Using separation of variables, we look for solutions of the form ϕ(λ; t) = f(λ)g(t). Neither f nor g can be zero, hence we obtain equation (33). Note that the left-hand side of this equation depends only on t and the right-hand side depends only on λ. This is impossible unless both are equal to a constant C, in which case g(t) = e^{Ct} (we can move the constant factor into f) and f satisfies an ordinary differential equation. Writing f as a power series f(λ) = Σ_{n ≥ 0} a_n λ^n, we obtain a recurrence relation for the coefficients. If C were not of the form −Λ_n, it would follow that f = 0, which is impossible. Hence, C = −Λ_n for some n ≥ 0. We can compute all the coefficients of the series expansion of f from the recurrence relations above: a_n is free, and it determines all subsequent coefficients. We obtain the final expression (42) for the Laplace transform; the second sum in that formula admits the more compact expression (43) in terms of the Kummer confluent hypergeometric function ${}_1F_1$.

The above infinite triangular system of linear equations can in principle be solved, and explicit formulas for the coefficients a_n can be found. Analytical formulas can be obtained (and easily proved by induction) in two particular cases. For c = 0, one can show that the unique solution to the equations above is given by (47). We were not able to obtain analytical expressions for all the coefficients in other particular cases. We gather in (49) the first six coefficients in the important case c = 1/N.
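As a sanity check on the n = 0 term of (43): in the limit t → ∞ the coordinate |ψ_1|² is Beta(1, N−1)-distributed under the Haar measure (a standard fact, used here as our own cross-check, not an argument from the paper), so its Laplace transform should match ${}_1F_1(1; N; \lambda)$:

```python
import numpy as np
from scipy.special import hyp1f1

rng = np.random.default_rng(3)
N, lam, samples = 6, 1.5, 400_000

# Under the Haar measure pi_infinity, |psi_1|^2 ~ Beta(1, N-1)
x = rng.beta(1, N - 1, size=samples)
mc = np.exp(lam * x).mean()   # Monte Carlo estimate of E exp(lambda |psi_1|^2)

exact = hyp1f1(1, N, lam)     # the n = 0 term 1F1(1; N; lambda)
assert abs(mc - exact) / exact < 0.01
```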
Properties of the measure σ_t^ψ

This section contains a list of results which address important statistical properties of probability vectors distributed along the measure σ_t^ψ. Moments, as well as covariances, are shown to satisfy ordinary differential equations that are solvable; see Propositions 5.1 and 5.3. We also compute quantities relevant to quantum information theory, such as average values of observables (Lemma 5.4) and bounds for average Rényi entropies.
From Proposition 4.2, it is easy to obtain the expression of the moments of the j-th coordinate |ψ^j_t|². To this end, we define a family of maps
$y_p(t) = \frac{\partial^p \varphi}{\partial \lambda^p}(0; t).$
The maps y_p satisfy a particular system of ordinary differential equations.

Proposition 5.1. With the convention that y_0 ≡ 1, the maps y_p satisfy the system of ordinary differential equations (52) on R_+. The solution of the system (52) can be expressed in terms of the a_n and Λ_n in the following way:
$y_p(t) = p! \sum_{n=0}^{p} \binom{p}{n} \frac{a_n}{(N+2n)_{p-n}}\, e^{-\Lambda_n t}.$
The coefficients a_n depend on the initial condition (see the previous section).
From the explicit computations in the previous section, we can specialize the coefficients a_n to the initial condition ψ = (1, 0, …, 0). Indeed, the moments of |U^{11}_t|² can then be computed explicitly. In particular, when t goes to infinity, we recover the usual result for the Haar measure π_∞, which is formula (7.60) in [2]. We use this expression to bound the entropy of the coordinate vector of a uniform point on the unit sphere of C^N.

2.1. Definition. We now define the unitary Brownian motion (UBM). To start, let us introduce some notation. For N ∈ N, we denote by U_N(C) the unitary group of M_N(C), that is, U_N(C) = {U ∈ GL_N(C) : U U* = I}, and we denote by M^sa_N(C) the set of Hermitian matrices in M_N(C), that is, M^sa_N(C) = {H ∈ M_N(C) : H* = H}. The set M^sa_N(C) is a real linear subspace of M_N(C), which we endow with the scalar product ⟨A, B⟩ = N Tr[A*B] = N Tr[AB].

Remark 4.3. Note that in the limit t → ∞, only the n = 0 term survives and we obtain the m-th moment under the Haar measure. This is consistent with [11], Lemma 4.2.4, where the m-th moment in the Haar case was shown to be $\binom{N+m-1}{m}^{-1}$.

It is interesting to note that equation (29) does not depend on the actual value of j. However, it does depend on the initial condition c. Next, we compute explicitly ϕ for particular values of the initial condition c = |U^{j1}_0|² ∈ [0, 1]. The values of the coefficients a_n appearing in the proposition can be computed in principle from the following initial conditions: ϕ(λ; 0) = e^{λc} and ϕ(0; t) = 1. As a first observation, note that the latter relation fixes the value of the constant coefficient in the series, a_0 = 1. The first condition translates, after setting p = n + m, into an infinite triangular system of linear equations for the coefficients a_n.

The maps y_p, indexed by positive integers p ≥ 1, depend implicitly on the size parameter N. The dependence on the index j is encoded in the initial condition y_p(0) = |ψ_j|^{2p} = |U^{j1}_0|^{2p}. Interchanging the derivation operator ∂^p and the expectation E, we obtain
(51) $y_p(t) = \mathbb E\, |U^{j1}_t|^{2p}.$

Proof. Using Proposition 4.2, we can rewrite formula (42) for ϕ(λ; t) in the form
(54) $\varphi(\lambda; t) = \mathbb E\, e^{\lambda |U^{j1}_t|^2} = \sum_{p \ge 0} \Big[ \sum_{n=0}^{p} \binom{p}{n} \frac{a_n}{(N+2n)_{p-n}}\, e^{-\Lambda_n t} \Big] \lambda^p,$
and then, taking the p-th derivative of the expression above at λ = 0, we obtain
(55) $y_p(t) = p! \sum_{n=0}^{p} \binom{p}{n} \frac{a_n}{(N+2n)_{p-n}}\, e^{-\Lambda_n t},$
where $(x)_k = x(x+1)\cdots(x+k-1)$ denotes the Pochhammer symbol.
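The Haar moment $\binom{N+m-1}{m}^{-1}$ can be cross-checked exactly: under π_∞ the coordinate |ψ_1|² is Beta(1, N−1)-distributed, whose m-th moment is m!(N−1)!/(N+m−1)! (this Beta route is our own verification, not the paper's argument):

```python
from fractions import Fraction
from math import comb, factorial

def beta_moment(N, m):
    """m-th moment of Beta(1, N-1), i.e. m! (N-1)! / (N+m-1)!."""
    return Fraction(factorial(m) * factorial(N - 1), factorial(N + m - 1))

# Exact identity: the Beta moment equals binom(N+m-1, m)^(-1)
for N in range(2, 10):
    for m in range(1, 8):
        assert beta_moment(N, m) == Fraction(1, comb(N + m - 1, m))
```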


It is easy to compute the average value of an observable A ∈ M_N(C) for the Fubini-Study ensemble:
$\langle A \rangle = \int \langle \psi | A | \psi \rangle\, d\pi_\infty(\psi) = \int \langle e_1 | U^* A U | e_1 \rangle\, d\mathrm{Haar}(U)$
(69) $= \Big\langle e_1 \Big| \int U^* A U\, d\mathrm{Haar}(U) \Big| e_1 \Big\rangle = \frac{\mathrm{Tr}(A)}{N},$
where we used the fact that $\int U^* A U\, d\mathrm{Haar}(U) = \frac{\mathrm{Tr}(A)}{N} I$.

We finish by deriving bounds for the Rényi entropies. Recall that these quantities are defined for a pure state ψ by
(72) $S_p(\psi) = \frac{1}{1-p} \log \sum_{j=1}^N |\psi_j|^{2p},$
for an integer p ≥ 2. For the random pure state generated by a unitary Brownian motion, we shall estimate E[S_p(ψ_t)] with the help of the Jensen inequality. Since the logarithm is concave and 1/(1−p) < 0, we have indeed
$\mathbb E[S_p(\psi_t)] \ge \frac{1}{1-p} \log Y_p(t),$
where $Y_p(t) = \sum_{j=1}^N \mathbb E\, |U^{j1}_t|^{2p}$ are the sum-of-moments functions. Using the moment formulas obtained at the beginning of this section, one can compute in principle the function Y_p(t). In particular, in the case ψ = e_1, we have
(75) $Y_p(t) = p! \sum_{n=0}^{p} \binom{p}{p-n} \frac{(N-1)_n + (N-1)(-1)^n}{(N+n-1)_n\, (N+2n)_{p-n}}\, e^{-\Lambda_n t}.$
In the limit t → ∞, only the term n = 0 survives, and we obtain
$\lim_{t \to \infty} Y_p(t) = (N+p) \binom{N+p}{p}^{-1}.$
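The value ⟨A⟩ = Tr(A)/N can be checked by Monte Carlo over Haar-random states (a numerical sketch; the observable below is an arbitrary Hermitian test matrix of our choosing):

```python
import numpy as np

rng = np.random.default_rng(5)
N, samples = 5, 50_000

# An arbitrary Hermitian test observable (illustrative choice)
B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = (B + B.conj().T) / 2

# Haar-random pure states via normalized complex Gaussian vectors
X = rng.normal(size=(samples, N)) + 1j * rng.normal(size=(samples, N))
psi = X / np.linalg.norm(X, axis=1, keepdims=True)

# Empirical average of <psi|A|psi> over the samples
mean_A = np.einsum('si,ij,sj->', psi.conj(), A, psi).real / samples
assert abs(mean_A - np.trace(A).real / N) < 0.05
```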