Cycle time of stochastic max-plus linear systems

We analyze the asymptotic behavior of sequences of random variables defined by an initial condition, a stationary and ergodic sequence of random matrices, and an induction formula involving multiplication in the so-called max-plus algebra. Such recursive sequences are frequently used in applied probability, as they model many systems, such as queueing networks, train and computer networks, and production systems. We give a necessary condition for the recursive sequences to satisfy a strong law of large numbers, which proves to be sufficient when the matrices are i.i.d. Moreover, we construct a new example in which the sequence of matrices is strongly mixing, the condition is satisfied, but the recursive sequence does not converge almost surely.


Model
We analyze the asymptotic behavior of the sequence of random variables (x(n, x_0))_{n∈N} defined by

x(0, x_0) = x_0,    x_i(n + 1, x_0) = max_j (A_ij(n) + x_j(n, x_0)),    (1)

where (A(n))_{n∈N} is a stationary and ergodic sequence of random matrices with entries in R ∪ {−∞}. Moreover, we assume that A(n) has at least one finite entry on each row, which is a necessary and sufficient condition for x(n, x_0) to be finite. (Otherwise, some coefficients could be −∞.) Such sequences are best understood by introducing the so-called max-plus algebra, which is actually a semiring.
Definition 1.1. The max-plus semiring R_max is the set R ∪ {−∞}, with the max as a sum (i.e. a ⊕ b = max(a, b)) and the usual sum as a product (i.e. a ⊗ b = a + b). In this semiring, the identity elements are −∞ and 0.
We also use the matrix and vector operations induced by the semiring structure. For matrices A, B of appropriate sizes, (A ⊕ B)_ij = A_ij ⊕ B_ij = max(A_ij, B_ij), (A ⊗ B)_ij = ⊕_k A_ik ⊗ B_kj = max_k (A_ik + B_kj), and for a scalar a ∈ R_max, (a ⊗ A)_ij = a ⊗ A_ij = a + A_ij. Now, Equation (1) can be rewritten as x(n + 1, x_0) = A(n) ⊗ x(n, x_0). In the sequel, all products of matrices by vectors or other matrices are to be understood in this structure.
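To make the semiring operations concrete, here is a minimal sketch in Python (the function names `mp_matvec` and `mp_matmul` are ours, not from the paper) of the max-plus matrix operations and of one step of the recursion x(n + 1, x_0) = A(n) ⊗ x(n, x_0):

```python
# Max-plus toolbox: -inf is the additive identity, 0 the multiplicative one.
NEG_INF = float("-inf")

def mp_matvec(A, x):
    """Max-plus product A (x) x: (Ax)_i = max_j (A_ij + x_j)."""
    return [max(a + v for a, v in zip(row, x)) for row in A]

def mp_matmul(A, B):
    """Max-plus matrix product: (AB)_ij = max_k (A_ik + B_kj)."""
    rows, cols, inner = len(A), len(B[0]), len(B)
    return [[max(A[i][k] + B[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

# One step of the recursion x(n+1, x0) = A(n) (x) x(n, x0):
A0 = [[1.0, NEG_INF],
      [0.0, 2.0]]       # each row has at least one finite entry
x0 = [0.0, 0.0]
x1 = mp_matvec(A0, x0)  # [max(1+0, -inf), max(0+0, 2+0)] = [1.0, 2.0]
```

Note that `-inf + 0.0` evaluates to `-inf` in floating-point arithmetic, so the absorbing behavior of the zero element −∞ comes for free.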
For any integers k ≥ n, we define the product of matrices A(k, n) := A(k) ⋯ A(n), with entries in this semiring. Therefore, we have x(n, x_0) = A(n − 1, 0)x_0 and, if the sequence has indices in Z, which is possible up to a change of probability space, we define a new random vector y(n, x_0) := A(−1, −n)x_0, which has the same distribution as x(n, x_0). Sequences defined by Equation (1) model a large class of discrete event dynamical systems. This class includes some models of operations research like timed event graphs (F. Baccelli [1]), 1-bounded Petri nets (S. Gaubert and J. Mairesse [10]) and some queueing networks (J. Mairesse [15], B. Heidergott [12]), as well as many concrete applications. Let us cite job-shop models (G. Cohen et al. [7]), train networks (H. Braker [6], A. de Kort and B. Heidergott [9]), computer networks (F. Baccelli and D. Hong [3]) and a statistical mechanics model (R. Griffiths [11]). For more details about modelling, see the books by F. Baccelli et al. [2] and by B. Heidergott et al. [13].

Law of large numbers
The sequences satisfying Equation (1) have been studied in many papers. If a matrix A has at least one finite entry on each row, then x ↦ Ax is non-expanding for the L^∞ norm. Therefore, we can assume that x_0 is the 0-vector, also denoted by 0, and we do so from now on.
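The non-expansiveness property can be checked numerically. The sketch below (a rough illustration in Python; the helper names are ours) draws a random matrix with at least one finite entry on each row and verifies that ‖Ax − Ay‖_∞ ≤ ‖x − y‖_∞:

```python
import random

NEG_INF = float("-inf")

def mp_matvec(A, x):
    # Max-plus product: (Ax)_i = max_j (A_ij + x_j).
    return [max(a + v for a, v in zip(row, x)) for row in A]

def sup_dist(u, v):
    # L-infinity distance between two finite vectors.
    return max(abs(a - b) for a, b in zip(u, v))

random.seed(0)
d = 5
# Random matrix with some -inf entries, then force a finite entry per row.
A = [[random.uniform(-1, 1) if random.random() < 0.6 else NEG_INF
      for _ in range(d)] for _ in range(d)]
for row in A:
    if all(a == NEG_INF for a in row):
        row[0] = 0.0

x = [random.uniform(-5, 5) for _ in range(d)]
y = [random.uniform(-5, 5) for _ in range(d)]
# Tolerance only guards against floating-point rounding.
nonexpanding = sup_dist(mp_matvec(A, x), mp_matvec(A, y)) <= sup_dist(x, y) + 1e-9
```

The inequality holds exactly in real arithmetic: each coordinate (Ax)_i − (Ay)_i is bounded by max_j (x_j − y_j).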
We say that (x(n, 0))_{n∈N} defined by (1) satisfies the strong law of large numbers if ((1/n) x(n, 0))_{n∈N} converges almost surely. When it exists, the limit in the law of large numbers is called the cycle time of (A(n))_{n∈N} or of (x(n, 0))_{n∈N}, and may in principle be a random variable. Therefore, we rather say that (A(n))_{n∈N} has a cycle time than that (x(n, 0))_{n∈N} satisfies the strong law of large numbers. Some sufficient conditions for the existence of this cycle time were given by J.E. Cohen [8], F. Baccelli and Liu [4, 1], Hong [14] and, more recently, by Bousch and Mairesse [5], the author [16] and Heidergott et al. [13].
Bousch and Mairesse proved (cf. [5]) that, if A(0)0 is integrable, then the sequence ((1/n) y(n, 0))_{n∈N} converges almost surely and in mean, and that, under stronger integrability conditions, ((1/n) x(n, 0))_{n∈N} converges almost surely if and only if the limit of ((1/n) y(n, 0))_{n∈N} is deterministic. The previous results can be seen as providing sufficient conditions for this to happen. Some results only assumed ergodicity of (A(n))_{n∈N}, others independence. But even in the i.i.d. case, it was still unknown which sequences had a cycle time and which had none.
In this paper, we solve this long-standing problem. The main result (Theorem 2.4) establishes a necessary and sufficient condition for the existence of the cycle time of (A(n))_{n∈N}. Moreover, we show that this condition is necessary (Theorem 2.3) but not sufficient (Example 1) when (A(n))_{n∈N} is only ergodic or mixing. Theorem 2.3 also states that the cycle time is always given by a formula (Formula (3)), which was proved in Baccelli [1] under several additional conditions.
To state the necessary and sufficient condition, we extend the notion of the graph of a random matrix from the fixed support case, that is, when the entries are either almost surely finite or almost surely equal to −∞, to the general case. The analysis of its decomposition into strongly connected components allows us to define new submatrices, which must almost surely have at least one finite entry on each row for the cycle time to exist.
To prove the necessity of the condition, we use the convergence results of Bousch and Mairesse [5] and a result of Baccelli [1]. To prove the converse part of Theorem 2.4, we perform an induction on the number of strongly connected components of the graph. The first step of the induction (Theorem 3.11) is an extension of a result of D. Hong [14].
The paper is organized as follows. In Section 2, we state our results and give examples showing that the hypotheses are necessary. In Section 3, we successively prove Theorem 2.3 and Theorem 2.4.

Theorems
In this section we attach a graph to our sequence of random matrices, in order to define the necessary condition and to split the problem for the inductive proof of the converse theorem.
Before defining the graph, we need the following result, which directly follows from Kingman's theorem and goes back to J.E. Cohen [8]: if (A(n))_{n∈N} is an ergodic sequence of random matrices with entries in R_max such that the positive part of max_ij A_ij(0) is integrable, then the sequences ((1/n) max_i x_i(n, 0))_{n∈N} and ((1/n) max_i y_i(n, 0))_{n∈N} converge almost surely to the same constant γ ∈ R_max, which is called the maximal (or top) Lyapunov exponent of (A(n))_{n∈N}.
We denote this constant by γ((A(n))_{n∈N}), or γ(A).
1. The constant γ(A) is well-defined even if (A(n))_{n∈N} has a row without finite entry.
2. The variables max_i x_i(n, 0) and max_i y_i(n, 0) are equal to max_ij A(n − 1, 0)_ij and max_ij A(−1, −n)_ij respectively.
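As a hedged illustration of this definition (in Python; the names are ours): for a fixed, deterministic matrix, γ(A) is the maximal average weight of a circuit, and it can be approximated by iterating the recursion and computing (1/n) max_i x_i(n, 0):

```python
NEG_INF = float("-inf")

def mp_matvec(A, x):
    # Max-plus product: (Ax)_i = max_j (A_ij + x_j).
    return [max(a + v for a, v in zip(row, x)) for row in A]

def top_lyapunov_estimate(sample_matrix, n):
    """Approximate gamma(A) by (1/n) max_i x_i(n, 0); `sample_matrix`
    draws one realization of A(k) at each step (i.i.d. setting)."""
    x = [0.0] * len(sample_matrix())
    for _ in range(n):
        x = mp_matvec(sample_matrix(), x)
    return max(x) / n

# Deterministic sanity check: the self-loop at node 2 has weight 2, the
# one at node 1 has weight 1, so the maximal cycle mean, hence gamma, is 2.
A = [[1.0, NEG_INF],
     [0.0, 2.0]]
gamma = top_lyapunov_estimate(lambda: A, 500)
```

In a genuinely random setting one would pass a `sample_matrix` that draws a fresh matrix at each call, and the estimate converges almost surely by the result above.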
Let us define the graph attached to our sequence of random matrices, as well as some subgraphs; we also set the notations for the rest of the text.

Definition 2.2. Let (A(n))_{n∈N} be a stationary sequence of random matrices with values in R_max^{d×d}.
i) The graph of (A(n))_{n∈N}, denoted by G(A), is the directed graph whose nodes are the integers between 1 and d and whose arcs are the pairs (i, j) such that A_ij(0) is finite with positive probability.
ii) To each strongly connected component (s.c.c.) c of G(A), we attach the submatrices A^(c)(n) obtained by keeping the rows and columns of A(n) with indices in c, and the exponent γ^(c). Nodes which are not in a circuit are assumed to be alone in their s.c.c. Those s.c.c. are called trivial, and they satisfy A^(c) = −∞ a.s. and therefore γ^(c) = −∞.
iii) A s.c.c. c′ is reachable from a s.c.c. c (resp. from a node i) if c′ = c (resp. i ∈ c′) or if there exists a path on G(A) from a node in c (resp. from i) to a node in c′. In this case, we write c → c′ (resp. i → c′).
iv) To each s.c.c. c, we associate the set {c} constructed as follows. First, one finds all s.c.c. downstream of c with maximal Lyapunov exponent. Let C be their union. Then the set {c} consists of all nodes between c and C.

Remark 2.2 (Paths on G(A)).
1. The products of matrices satisfy an equation which can be read as: 'A(k, k − n)_ij is the maximum of the weights of paths from i to j with length n on G(A), the weight of the l-th arc being given by A(k − l)'.
For k = −1, it implies that y_i(n, 0) is the maximum of the weights of paths on G(A) with initial node i and length n. But γ(A) is not exactly the maximal average weight of infinite paths, because the average is a limit and the maximum is taken over finite paths, before the limit over n. However, Theorem 3.3, due to Baccelli and Liu [1, 4], shows that the maximum and the limit can be exchanged.
2. Previous authors used such a graph in the fixed support case, that is, when the entries are either almost surely finite or almost surely equal to −∞. In that case, the (random) weights were almost surely finite. Here, we can have weights equal to −∞, but only with probability strictly less than one.
3. In the literature, the isomorphic graph with weight A_ji on arc (i, j) is often used, although only in the fixed support case. This is natural in order to multiply vectors on their left and compute x(n, 0). Since we mainly work with y(n, 0) and thus multiply matrices on their right, our definition is more convenient.
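To illustrate Definition 2.2 i)-ii), here is a sketch (Python; all names are ours) that builds the arcs of G(A) from the 0/1 support pattern of the matrices, taking an arc (i, j) whenever A_ij(0) is finite with positive probability, and splits the nodes into strongly connected components with Kosaraju's double-DFS algorithm:

```python
def graph_arcs(support):
    """Arcs of G(A): pairs (i, j) whose entry is finite with positive
    probability; `support` is the 0/1 pattern of such entries."""
    d = len(support)
    return {(i, j) for i in range(d) for j in range(d) if support[i][j]}

def strongly_connected_components(d, arcs):
    """S.c.c. of the directed graph on {0, ..., d-1} (Kosaraju)."""
    adj = {i: [] for i in range(d)}
    radj = {i: [] for i in range(d)}
    for i, j in arcs:
        adj[i].append(j)
        radj[j].append(i)
    order, seen = [], set()

    def dfs(u, g, out):
        # Iterative DFS appending vertices to `out` in post-order.
        seen.add(u)
        stack = [(u, iter(g[u]))]
        while stack:
            v, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(g[w])))
                    break
            else:
                stack.pop()
                out.append(v)

    for u in range(d):
        if u not in seen:
            dfs(u, adj, order)
    seen, comps = set(), []
    for u in reversed(order):
        if u not in seen:
            comp = []
            dfs(u, radj, comp)
            comps.append(sorted(comp))
    return comps

# Toy pattern: nodes 0 and 1 form a circuit, node 2 only has a self-loop.
support = [[0, 1, 0],
           [1, 0, 1],
           [0, 0, 1]]
arcs = graph_arcs(support)
comps = strongly_connected_components(3, arcs)  # [[0, 1], [2]] up to order
```

In the fixed support case the pattern is deterministic; in general it records which entries are finite with positive probability.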
With those definitions, we can state the announced necessary condition for (x(n, x_0))_{n∈N} to satisfy a strong law of large numbers.

Theorem 2.3. Let (A(n))_{n∈N} be a stationary and ergodic sequence of random matrices with values in R_max^{d×d} and almost surely at least one finite entry on each row, such that the positive part of max_ij A_ij(0) is integrable.
1. If the limit of ((1/n) y(n, 0))_{n∈N} is deterministic, then it is given by Equation (2). That being the case, for every s.c.c. c of G(A), the submatrix A^{c} of A(0) whose indices are in {c} almost surely has at least one finite entry on each row.
2. If ((1/n) x(n, 0))_{n∈N} converges almost surely, then its limit is deterministic and is equal to that of ((1/n) y(n, 0))_{n∈N}; that is, Equation (3) holds.

To make the submatrices A^{c} more concrete, we give in Fig. 1 an example of a graph G(A) with the exponent γ^(k) attached to each s.c.c. c_k, and we compute {c_2}. The maximal Lyapunov exponent of the s.c.c. downstream of c_2 is γ^(5). The only s.c.c. downstream of c_2 with this Lyapunov exponent is c_5, and the only s.c.c. between c_2 and c_5 is c_3. Therefore, {c_2} is the union of c_2, c_3 and c_5.
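The computation of {c} in this example can be mechanized. The sketch below (Python; the DAG of s.c.c. and the exponents are hypothetical stand-ins in the spirit of Fig. 1, not the actual data of the figure) first collects the s.c.c. downstream of c, keeps those with maximal Lyapunov exponent as C, and then returns every s.c.c. lying on a path from c to C:

```python
def brace(c, dag, gamma):
    """Compute {c}: the s.c.c. between c and the set C of downstream
    s.c.c. with maximal Lyapunov exponent.  `dag` maps each s.c.c. to
    its direct successors, `gamma` maps it to its exponent."""
    # All s.c.c. downstream of c (c -> c by convention).
    down, stack = {c}, [c]
    while stack:
        for v in dag[stack.pop()]:
            if v not in down:
                down.add(v)
                stack.append(v)
    gmax = max(gamma[u] for u in down)
    C = {u for u in down if gamma[u] == gmax}

    def reaches_C(u):
        # Is some element of C reachable from u?
        seen, st = {u}, [u]
        while st:
            v = st.pop()
            if v in C:
                return True
            for w in dag[v]:
                if w not in seen:
                    seen.add(w)
                    st.append(w)
        return False

    return {u for u in down if reaches_C(u)}

# Hypothetical s.c.c. DAG: c2 -> c3 -> c5 and c2 -> c4; c5 carries the
# maximal exponent downstream of c2, so {c2} = {c2, c3, c5}.
dag = {"c2": ["c3", "c4"], "c3": ["c5"], "c4": [], "c5": []}
gamma = {"c2": 1.0, "c3": 0.0, "c4": 2.0, "c5": 3.0}
result = brace("c2", dag, gamma)
```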

Figure 1: An example of computations on G(A)
The necessary and sufficient condition in the i.i.d. case reads:

Theorem 2.4. Let (A(n))_{n∈N} be a sequence of i.i.d. random matrices with values in R_max^{d×d} and almost surely at least one finite entry on each row, such that max_{A_ij(0)≠−∞} |A_ij(0)| is integrable. Then ((1/n) x(n, 0))_{n∈N} converges almost surely if and only if, for every s.c.c. c, the submatrix A^{c} of A(0) defined in Theorem 2.3 almost surely has at least one finite entry on each row. That being the case, the limit is given by Equation (3).

Remark 2.3. We also prove that, when A(0)0 ∈ L^1, the limit of (1/n) y(n, 0) is deterministic if and only if the matrices A^{c} almost surely have at least one finite entry on each row.
The stronger integrability ensures the convergence of (1/n) x(n, 0) to this limit, as in [5, Theorem 6.18]. There, it appeared as the specialization of a general condition for uniformly topical operators, whereas in this paper it ensures that B0 is integrable for every submatrix B of A(0) with at least one finite entry on each row.
Actually, we prove that (1/n) x(n, 0) converges provided that A^{c}0 ∈ L^1 for every c (see Proposition 3.5). We chose to give a slightly stronger integrability condition, which is easier to check because it does not depend on G(A).

Examples
To end this section, we give three examples showing that independence is necessary but not sufficient to ensure the strong law of large numbers, and that the integrability condition is necessary. We denote by x^⊤ the transpose of a vector x.
Example 1 (Independence is necessary). Let A and B be defined by the displayed matrices, for any positive numbers γ_1 and γ_2. The sequence satisfies a mixing inequality for any integrable functions f and g on R_max^{d×d}. Moreover, its support is the full shift {A, B}^N, but Equation (4) below holds, and thus, according to Theorem 2.3, ((1/n) x(n, 0))_{n∈N} does not converge. Finally, even though (A(n))_{n∈N} is a quickly mixing sequence, which means that it is in some sense close to i.i.d., and G(A) is strongly connected, (A(n))_{n∈N} fails to have a cycle time.
To prove Equation (4), let us denote by τ the permutation exchanging 1 and 2, and by g(C, i) the only finite entry on the i-th row of C; that is, for any i, g(A, i) = A_ii and g(B, i) = B_{iτ(i)}. Since all arcs of the diagram arriving at a node (A, i) come from a node (C, i), while those arriving at a node (B, i) come from a node (C, τ(i)), we almost surely have: It is easily checked that the invariant distribution of the Markov chain is given by the table below, and that g is equal to 0 except at (A, 1). Therefore, we have: which implies Equation (4).
The next example, due to Bousch and Mairesse, shows that the cycle time may not exist even if the A(n) are i.i.d.
Therefore x_1(n, 0) = 0. We notice that G(A) has two s.c.c., c_1 = {1} and c_2 = {2, 3}, with Lyapunov exponents γ^(c_1) = 0 and γ^(c_2) = p, and 2 → 1. Therefore, we check that the first row of A^{c_2} has no finite entry with probability p.

Theorem 2.4 gives a necessary and sufficient condition for the existence of the cycle time of an i.i.d. sequence of matrices A(n) such that max_{A_ij(0)≠−∞} |A_ij(0)| is integrable. But the limit of ((1/n) y(n, 0))_{n∈N} exists as soon as A(0)0 is integrable. Thus, it would be natural to expect Theorem 2.4 to hold under this weaker integrability assumption. However, it does not, as the example below shows.

Additional notations
To interpret the results in terms of paths on G(A), and to prove them, we redefine the A^{c} and introduce some intermediate submatrices.
Definition 3.1. To each s.c.c. c, we attach three sets of elements:
i) those that only depend on c itself;
ii) those that depend on the graph downstream of c;
iii) those that depend on {c}, as defined in Definition 2.2.
With those notations, the {c} of Definition 2.2 is denoted by H_c, while A^{c} is A^{c}(0).
As in Remark 2.2, we notice that the coefficients of y^(c)(n, 0), y^[c](n, 0) and y^{c}(n, 0) are the maxima of the weights of paths on the subgraphs of G(A) with nodes in c, F_c and H_c respectively. Consequently γ^(c), γ(A^[c]) and γ(A^{c}) are the maximal average weights of infinite paths on c, F_c and G_c respectively. Since γ^[c] is the maximum of the γ^(c′) over the s.c.c. c′ downstream of c, this interpretation suggests it might be equal to γ(A^[c]) and γ(A^{c}). That this is indeed true has been shown by F. Baccelli [1].
Clearly, γ(A^[c]) ≥ γ(A^{c}); but the maximum is actually taken over finite paths, so that the converse inequalities are not obvious.

Formula for the limit
Up to a change of probability space, we can assume that A(n) = A ∘ θ^n, where A is a random variable and (Ω, θ, P) is an invertible ergodic measurable dynamical system. We do so from now on.
Let L be the limit of ((1/n) y(n, 0))_{n∈N}, which exists according to [5, Theorem 6.7] and is assumed to be deterministic.
By definition of G(A), if (i, j) is an arc of G(A), then, with positive probability, we have A_ij(−1) ≠ −∞, and consequently L_i ≥ L_j. If c → c′, then for every i ∈ c and j ∈ c′, there exists a path on G(A) from i to j, therefore L_i ≥ L_j. Since this holds for every j ∈ F_c, and since i ∈ F_c, we have L_i = max_{j∈F_c} L_j. To show that max_{j∈F_c} L_j = γ^[c], we have to study the Lyapunov exponents of submatrices.
The following proposition states some easy consequences of Definition 3.1 which will be useful in the sequel.

Proposition 3.2. The notations are those of Definition 3.1.
i) For every s.c.c. c, x^[c](n, x_0) = x^{F_c}(n, x_0).
ii) For every s.c.c. c and every i ∈ c, we have:
iii) The relation → is a partial order, both on the nodes and on the s.c.c.
iv) If A(0) has almost surely at least one finite entry on each row, then for every s.c.c. c, A^[c](0) almost surely has at least one finite entry on each row.
v) For every c′ ∈ E_c, we have γ^(c′) ≤ γ^[c′] ≤ γ^[c].

The next result is about Lyapunov exponents. It already appears in [1, 4] and its proof does not use the additional hypotheses of those articles. For a point-by-point checking, see [16].

Theorem 3.3 (F. Baccelli and Z. Liu [1, 4, 2]). If (A(n))_{n∈N} is a stationary and ergodic sequence of random matrices with values in R_max^{d×d} such that the positive part of max_{i,j} A_ij is integrable, then γ(A) = max_c γ^(c).
Applying this theorem to the sequences (A^[c](n))_{n∈N} and (A^{c}(n))_{n∈N}, we get the following corollary.
Corollary 3.4. For every s.c.c. c, we have:

It follows from Proposition 3.2 and the definition of Lyapunov exponents that, for every s.c.c. c of G(A), we have: Combining this with Equation (6) and Corollary 3.4, we deduce that the limit of ((1/n) y(n, 0))_{n∈N} is given by Equation (2).

A^{c}(0) has at least one finite entry on each row
We still have to show that for every s.c.c. c, A^{c}(0) almost surely has at least one finite entry on each row. Let us assume it has none. It means that there exists a s.c.c. c and an i ∈ c such that the set {∀j ∈ H_c, A_ij(−1) = −∞} has positive probability. On this set, we have:
Dividing by n and letting n tend to +∞, we get L_i ≤ max_{j∈F_c\H_c} L_j. Replacing L according to Equation (2), we get γ^[c] ≤ max_{k∈E_c\G_c} γ^[k]. This last inequality contradicts Proposition 3.2 v). Therefore, A^{c}(0) almost surely has at least one finite entry on each row.

The limit is deterministic
Let us assume that ((1/n) x(n, 0))_{n∈N} converges almost surely to a limit L′. It follows from [5, Theorem 6.7] that ((1/n) y(n, 0))_{n∈N} converges almost surely, thus we have: Composing each term of this relation with θ^{n+1} and using x(n, 0) = y(n, 0) ∘ θ^n, we deduce that: When n tends to +∞, this becomes L′ ∘ θ − L′ = 0. Since θ is ergodic, this implies that L′ is deterministic.
Since (1/n) y(n, 0) = (1/n) x(n, 0) ∘ θ^{−n}, L′ and L have the same law. Since L′ is deterministic, L = L′ almost surely, therefore L is also the limit of ((1/n) x(n, 0))_{n∈N}. This proves Formula (3) and concludes the proof of Theorem 2.3.

The remainder of this subsection is devoted to the proof of Proposition 3.5. It follows from Propositions 3.2 and 3.4 and the definition of Lyapunov exponents that we have, for every s.c.c. c of G(A): Therefore, it is sufficient to show the lim inf counterpart, Equation (11), which is a stronger statement. We prove Equation (11) by induction on the size of G_c. The initialization of the induction is exactly Hypothesis 2 of Proposition 3.5.
Let us assume that Equation (11) is satisfied by every c′ such that the size of G_{c′} is at most N, and let c be such that the size of G_c is N + 1. Let us take I = c and J = H_c \ c. If c is not trivial, this is the situation of Hypothesis 3 with Ã = A^{c}, which almost surely has at least one finite entry on each row thanks to Hypothesis 1. Therefore, Equation (9) is satisfied. If c is trivial, G(B) is not strongly connected, but Equation (9) is still satisfied because D(−1)0 = (Ã(−1)0)_I ∈ R^I. Moreover, J is the union of the c′ such that c′ ∈ G_c \ {c}, thus the induction hypothesis implies that: therefore the right side of the last equation is γ^[c] and we have: Equation (9) ensures that, for every i ∈ I, there exist almost surely a T ∈ N and a path as required. Because of upper bound (10) and inequality (7), it implies that: which, because of Equation (12), proves Equation (11). This concludes the induction and the proof of Proposition 3.5.

Left products
As recalled in the introduction, T. Bousch and J. Mairesse proved that ((1/n) x(n, 0))_{n∈N} converges almost surely as soon as the limit of ((1/n) y(n, 0))_{n∈N} is deterministic. Therefore, the hypotheses of Proposition 3.5 should imply the existence of the cycle time. But the theorem in [5, Theorem 6.18] assumes a reinforced integrability assumption that is not necessary for our proof. We will prove the following proposition in this section:

Proposition 3.6. Let (A(n))_{n∈N} be an ergodic sequence of random matrices with values in R_max^{d×d} such that the positive part of max_ij A_ij(0) is integrable and that satisfies the three hypotheses of Proposition 3.5.
If Hypothesis 1 is strengthened by demanding that A^{c}(0)0 be integrable, then the sequence ((1/n) x(n, 0))_{n∈N} converges almost surely and its limit is given by Equation (3).
To deduce the results on x(n, 0) from those on y(n, 0), we introduce the following theorem-definition, which is a special case of [18, Theorem 1] and directly follows from Kingman's theorem.

Theorem-Definition 3.7 (J.-M. Vincent [18]). If (A(n))_{n∈Z} is a stationary and ergodic sequence of random matrices with values in R_max^{d×d} and almost surely at least one finite entry on each row, such that A(0)0 is integrable, then there are two real numbers γ(A) and γ_b(A) such that:

It implies the following corollary, which makes the link between the results on (y(n, 0))_{n∈N} and those on (x(n, 0))_{n∈N} when all γ^[c] are equal, that is, when γ(A) = γ_b(A).

Independent case
In this section, we prove Theorem 2.4.
Because of Theorem 2.3, it is sufficient to show that if, for every s.c.c. c, A^{c} almost surely has at least one finite entry on each row, then the sequence (1/n) x(n, 0) converges almost surely. To do this, we will prove that, in this situation, the hypotheses of Proposition 3.6 are satisfied. Hypothesis 1 is exactly Hypothesis 1 of Theorem 2.4, and Hypotheses 2 and 3 respectively follow from the next lemma and theorem.

Definition 3.9. For every matrix A ∈ R_max^{d×d}, the pattern matrix Â is defined by: Here Ã satisfies Equation (8), with G(B) strongly connected, and for every i ∈ I, we define A_i as the event on which there is never a path from i to J. Proof.
1. For every ω ∈ A_i, we prove our result by induction on n.
Since the A(n) almost surely have at least one finite entry on each row, there exists an i_1 such that: Let us assume that the sequence is defined up to rank n. Since A(n + 1) almost surely has at least one finite entry on each row, there exists an i_{n+1} such that: Since ω ∈ A_i, we have: It means that every entry on row i_n of D(n + 1) is −∞, that is, A_{i_n j}(n + 1) = −∞ for every j ∈ J, therefore i_{n+1} ∈ I and: Finally, we have:

2. As a first step, we want to construct a matrix M ∈ E such that ∀i ∈ I, ∃j ∈ J, M_ij = 0.
Since P(D = (−∞)^{I×J}) < 1, there are α ∈ I, β ∈ J and M^0 ∈ E with M^0_{αβ} = 0. For any i ∈ I, since G(B) is strongly connected, there is an M ∈ E such that M_{iα} = 0. Therefore M^i = M M^0 is in E and satisfies M^i_{iβ} = 0. Now let us assume I = {α_1, …, α_m} and define by induction the finite sequence of matrices P^k.
• P^1 = M^{α_1}.
• If there exists j ∈ J such that P^k_{α_{k+1} j} = 0, then P^{k+1} = P^k. Otherwise, since the matrices have at least one finite entry on each row, there is an i ∈ I such that P^k_{α_{k+1} i} = 0, and we set P^{k+1} = P^k M^i.
It is easily checked that such P^k satisfy, ∀l ≤ k, ∃j ∈ J, P^k_{α_l j} = 0.
Therefore, we set M = P^m and denote by p the smallest integer such that P(Â(1, p) = M) > 0. Now, it follows from the definition of E and the ergodicity of (A(n))_{n∈N} that there is almost surely an N ∈ N such that Â(N + 1, N + p) = M. On A_i, this would define a random j_N ∈ J such that M_{i_N j_N} = 0, where i_N is defined according to the first point of the lemma. Then, we would have: But A_i is defined as the event on which there is never a path from i to J, so that we should have ∀n ∈ N, ∀j ∈ J, A(1, n)_{ij} = −∞.
This theorem is stated by D. Hong in the unpublished [14], but the proof is rather difficult to understand and it is unclear whether it holds when A(1) takes infinitely many values. Building on [5], we now give a short proof of this result.
Proof. According to [5, Theorem 6.7], ((1/n) y(n, 0))_{n∈N} converges a.s. We have to show that its limit is deterministic.
Let i be any integer in {1, …, d} and let E be a recurrent state of (R(n))_{n∈N}. There exists a j ∈ {1, …, d} such that E_ij = 0. Since G(A) is strongly connected, there exists a p ∈ N such that B(−1, −p)_{ji} ≠ −∞ with positive probability. Let G be such that P(B(−1, −p)_{ji} ≠ −∞, B(−1, −p) = G) > 0. Now, F = EG is a state of the chain, reachable from state E and such that F_ii = 0. Since E is recurrent, so is F, and E and F belong to the same recurrence class.
Let E be a set with exactly one matrix F in each recurrence class such that F_ii = 0. Let S_n be the n-th time (R(m))_{m∈N} is in E.
Since the Markov chain has finitely many states and E intersects every recurrence class, S_n is almost surely finite, and even integrable. Moreover, the S_{n+1} − S_n are i.i.d. (we set S_0 = 0) and so are the A(−S_n − 1, −S_{n+1}). Since P(S_1 > k) decreases exponentially fast, A(−1, −S_1)0 is integrable and thus the sequence ((1/n) y(S_n, 0))_{n∈N} converges a.s. Let us denote its limit by l.
Let us denote by F_0 the σ-algebra generated by the random matrices A(−S_n − 1, −S_{n+1}). Then l is F_0-measurable, and the independence of the A(−S_n − 1, −S_{n+1}) means that (Ω, F_0, P, θ^{S_1}) is an ergodic measurable dynamical system. Because of the choice of S_1, we have l_i ≥ l_i ∘ θ^{S_1}, so that l_i is deterministic. Now, let us notice that the limit of (1/n) y_i(n, 0) is that of (1/S_n) y_i(S_n, 0), that is, l_i / E(S_1), which is deterministic.
This means that lim_n (1/n) y_i(n, 0) is deterministic for any i and, according to Theorem 2.3, this implies that it is equal to γ(A).
Let (A(n), i_n)_{n∈N} be a stationary version of the irreducible Markov chain on {A, B} × {1, 2} with transition probabilities given by the diagram of Figure 2.

Figure 2: Transition probabilities of (A(n), i_n)_{n∈N}

Corollary 3.8. If (A(n))_{n∈Z} is a stationary and ergodic sequence of random matrices with values in R_max^{d×d} and almost surely at least one finite entry on each row, such that A(0)0 is integrable, then lim_n (1/n) x(n, 0) = γ(A)1 if and only if lim_n (1/n) y(n, 0) = γ(A)1.

Let us go back to the proof of the general result on (x(n, 0))_{n∈N}. Because of Proposition 3.2 and Proposition 3.4 and the definition of Lyapunov exponents, we already have, for every s.c.c. c of G(A), lim sup_n (1/n) x^c(n, 0) ≤ γ^[c] 1 a.s. Therefore, it is sufficient to show that lim inf_n (1/n) x^c(n, 0) ≥ γ^[c] 1 a.s., and even that lim_n (1/n) x^{c}(n, 0) = γ^[c] 1. Because of Corollary 3.8, this is equivalent to lim_n (1/n) y^{c}(n, 0) = γ^[c] 1. Since all s.c.c. of G(A^{c}) are s.c.c. of G(A) and have the same Lyapunov exponent γ^(c), it follows from the result on the y(n, 0) applied to A^{c}.
and Â_ij = 0 otherwise. For all matrices A, B ∈ R_max^{d×d}, the pattern matrix of AB is Â B̂.

Lemma 3.10. Let (A(n))_{n∈N} be a stationary sequence of random matrices with values in R_max^{d×d} and almost surely at least one finite entry on each row. Let us assume that there exists a partition (I, J) of {1, …, d} such that Ã satisfies Equation (8), with G(B) strongly connected.

Finally, A_i is included in the negligible set {∀n ∈ N, Â(n + 1, n + p) ≠ M}.

Theorem 3.11. If (A(n))_{n∈N} is a sequence of i.i.d. random matrices with values in R_max^{d×d} such that the positive part of max_ij A_ij(0) is integrable, A(0) almost surely has at least one finite entry on each row and G(A) is strongly connected, then we have ∀i ∈ {1, …, d}, lim_n (1/n) y_i(n, 0) = γ(A).