An essay on the general theory of stochastic processes

This text is a survey of the general theory of stochastic processes, with a view towards random times and enlargements of filtrations. The first five chapters present standard material, which was developed by the French probability school and which is usually written in French. The material presented in the last three chapters is less standard and takes into account some recent developments.


Introduction
P.A. Meyer and C. Dellacherie created the so-called general theory of stochastic processes, which consists of a number of fundamental operations on either real valued stochastic processes indexed by [0, ∞), or random measures on [0, ∞), relative to a given filtered probability space (Ω, F, (F_t)_{t≥0}, P), where (F_t) is a right continuous filtration of (F, P)-complete sub-σ-fields of F. This theory grew gradually out of results which originated in the study of Markov processes, and of the martingales and additive functionals associated with them. A guiding principle for Meyer and Dellacherie was to understand to what extent the Markov property could be avoided; in fact, they were able to get rid of the Markov property in a radical way.
At this point, we would like to emphasize that, perhaps to the astonishment of some readers, stochastic calculus was not thought of as a basic "elementary" tool in 1972, when C. Dellacherie's little book appeared. Thus it seemed interesting to us to view some important facts of the general theory in relation to stochastic calculus.
The present essay falls into two parts: the first part, consisting of sections 2 to 5, is a review of the General Theory of Stochastic Processes and is fairly well known. The second part is a review of more recent results, and is much less so. Throughout this essay we try to illustrate as much as possible the results with examples.
More precisely, the plan of the essay is as follows:
• in Section 2, we recall the basic notions of the theory: stopping times, the optional and predictable σ-fields and processes, etc.;
• in Section 3, we present the fundamental Section theorems;
• in Section 4, we present the fundamental Projection theorems;
• in Section 5, we recall the Doob-Meyer decomposition of semimartingales;
• in Section 6, we present a small theory of multiplicative decompositions of nonnegative local submartingales;
• in Section 7, we highlight the role of certain "hidden" martingales in the general theory of stochastic processes;
• in Section 8, we illustrate the theory with the study of arbitrary random times;
• in Section 9, we study how the basic operations depend on the underlying filtration, which in fact leads us to an introduction to the theory of enlargement of filtrations.

Acknowledgements

I would like to thank an anonymous referee for his comments and suggestions which helped to improve the present text.

Basic notions of the general theory
Throughout this essay, we assume we are given a filtered probability space (Ω, F, (F_t)_{t≥0}, P) that satisfies the usual conditions, that is, (F_t) is a right continuous filtration of (F, P)-complete sub-σ-fields of F. A stochastic process is said to be càdlàg if almost surely its sample paths are right continuous with left limits, and càglàd if almost surely its sample paths are left continuous with right limits.

Stopping times
Definition 2.1. A stopping time is a mapping T : Ω → [0, ∞] such that {T ≤ t} ∈ F_t for all t ≥ 0.
To a given stopping time T, we associate the σ-field F_T defined by:
F_T = {A ∈ F : A ∩ {T ≤ t} ∈ F_t for all t ≥ 0}.
We can also associate with T the σ-field F_{T−}, generated by F_0 and sets of the form: A ∩ {T > t}, with A ∈ F_t and t ≥ 0.
We recap here without proof some of the classical properties of stopping times.
Proposition 2.2. Let T be a stopping time. Then T is measurable with respect to F_{T−} and F_{T−} ⊂ F_T.

Proposition 2.3. Let T be a stopping time and A ∈ F_T. Then T_A, defined by T_A = T on A and T_A = ∞ on A^c, is also a stopping time.

Proposition 2.4. Let S and T be two stopping times.
1. For every A ∈ F_S, the set A ∩ {S ≤ T} ∈ F_T.
2. For every A ∈ F_S, the set A ∩ {S < T} ∈ F_{T−}.
Proposition 2.5 ([26], Theorem 56, p.189). Let S and T be two stopping times such that S ≤ T. Then F_S ⊂ F_T.
One of the most used properties of stopping times is the optional stopping theorem.

Theorem 2.6 (Optional stopping theorem). Let (M_t) be a uniformly integrable càdlàg martingale and let T be a stopping time. Then:
E[M_∞ | F_T] = M_T a.s.,   (2.1)
and consequently:
E[M_T] = E[M_0].   (2.2)

One can naturally ask whether there exist some other random times (i.e. nonnegative random variables) such that (2.1) or (2.2) hold. We will answer these questions in subsequent sections.
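The identity (2.2) is easy to probe numerically in the simplest possible setting; below is a minimal Monte Carlo sketch, assuming a simple symmetric random walk (a discrete-time martingale started at 0) stopped at the exit time of an asymmetric interval. The function name and parameters are illustrative, not from the text.

```python
import random

rng = random.Random(0)

def stopped_walk(a, b, rng):
    """Run a simple symmetric random walk M_n (a martingale, M_0 = 0)
    until the stopping time T = inf{n : M_n = -a or M_n = b}."""
    m = 0
    while -a < m < b:
        m += rng.choice((-1, 1))
    return m

# Optional stopping: E[M_T] should equal M_0 = 0, even though the
# asymmetric barriers make the two exit values -3 and 7 very different.
samples = [stopped_walk(3, 7, rng) for _ in range(20000)]
print(sum(samples) / len(samples))  # Monte Carlo estimate of E[M_T]
```

The estimate hovers near 0 because the walk compensates the larger exit value 7 with a correspondingly smaller exit probability 3/10.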

Progressive, Optional and Predictable σ-fields
Now, we shall define the three fundamental σ-algebras we always deal with in the theory of stochastic processes.

Definition 2.7. A process X = (X_t)_{t≥0} is called (F_t) progressive if for every t ≥ 0, the restriction of (t, ω) → X_t(ω) to [0, t] × Ω is B([0, t]) ⊗ F_t measurable. A set A ⊂ R_+ × Ω is called progressive if the process 1_A(t, ω) is progressive. The set of all progressive sets is a σ-algebra, called the progressive σ-algebra, which we will denote M.
Proposition 2.8 ([69], Proposition 4.9, p.44). If X is an (F_t) progressive process and T is an (F_t) stopping time, then X_T 1_{T<∞} is F_T measurable.
Definition 2.9. The optional σ-algebra O is the σ-algebra, defined on R_+ × Ω, generated by all processes (X_t)_{t≥0}, adapted to (F_t), with càdlàg paths. A process X = (X_t)_{t≥0} is called (F_t) optional if the map (t, ω) → X_t(ω) is measurable with respect to the optional σ-algebra O.

Similarly, the predictable σ-algebra P is the σ-algebra, defined on R_+ × Ω, generated by all processes (X_t)_{t≥0}, adapted to (F_t), with left continuous paths. A process is called (F_t) predictable if the map (t, ω) → X_t(ω) is P measurable. A stopping time T is called predictable if there exists an increasing sequence (T_n) of stopping times such that T_n < T on {T > 0} for every n and lim_n T_n = T almost surely. The sequence (T_n) is called an announcing sequence for T.

Now we enumerate some important properties of predictable stopping times, which can be found in [24] p.54, or [26] p.205.

Proposition 2.24. Let S and T be two predictable stopping times. Then the stopping times S ∧ T and S ∨ T are also predictable.
Proposition 2.25. Let T be a predictable stopping time and A ∈ F_{T−}. Then the time T_A is also predictable.

Proposition 2.26. Let (T_n) be an increasing sequence of predictable stopping times and T = lim_n T_n. Then T is predictable.
We recall that a random set A is called evanescent if the set {ω : ∃ t ∈ R + with (t, ω) ∈ A} is P−null.
Definition 2.27. Let T be a stopping time.
1. We say that T is accessible if there exists a sequence (T_n) of predictable stopping times such that:
[[T]] ⊂ ∪_n [[T_n]], up to an evanescent set,
where [[T]] = {(t, ω) : t = T(ω) < ∞} denotes the graph of T.
2. We say that T is totally inaccessible if P[T = S < ∞] = 0 for every predictable stopping time S.

Remark 2.28. It is obvious that predictable stopping times are accessible, and that stopping times which are both accessible and totally inaccessible are almost surely infinite.
Remark 2.29. There exist stopping times which are accessible but not predictable.

Theorem. The following are equivalent:
1. The accessible stopping times are predictable;
2. The filtration (F_t) is quasi-left continuous, that is F_{T−} = F_T for every predictable stopping time T;
3. The filtration (F_t) does not have any discontinuity time: F_T = ⋁_n F_{T_n} for all increasing sequences of stopping times (T_n) with limit T.

Proposition 2.35. Let X be a càdlàg adapted process. The following are equivalent:
1. X is quasi-left continuous, that is ∆X_T = 0 a.s. on {T < ∞} for every predictable stopping time T;
2. there exists a sequence of totally inaccessible stopping times that exhausts the jumps of X;
3. for any increasing sequence of stopping times (T_n) with limit T, we have lim X_{T_n} = X_T a.s. on the set {T < ∞}.

Début theorems
In this section, we give a fundamental result for the construction of stopping times: the début theorem. Its proof is difficult and uses the same hard machinery (the theory of capacities) as the section theorems, which we shall state in the next section.
Definition 2.36. Let A be a subset of R_+ × Ω. The début of A is the function D_A defined as:
D_A(ω) = inf{t ∈ R_+ : (t, ω) ∈ A},
with the convention inf ∅ = +∞. It is a nice and difficult result that when the set A is progressive, then D_A is a stopping time:

Theorem 2.37 ([24], [26], Theorem 23, p.51). Let A be a progressive set; then D_A is a stopping time.
Conversely, every stopping time is the début of a progressive (in fact optional) set: indeed, it suffices to take A = [[T, ∞[[.

The proof of the début theorem is an easy consequence of the following difficult result from measure theory:

Theorem 2.38. If (E, E) is a locally compact space with a countable basis, equipped with its Borel σ-field, and (Ω, F, P) is a complete probability space, then for every set A ∈ E ⊗ F, the projection π(A) of A onto Ω belongs to F.

Proof of the début theorem. We apply Theorem 2.38 to the set A ∩ ([0, t) × Ω): the set {D_A < t} is its projection onto Ω, and hence belongs to F_t; since this holds for every t and the filtration is right continuous, D_A is a stopping time.

We can define the n-début of a set A by
D^n_A(ω) = inf{t ∈ R_+ : [0, t] ∩ A(ω) contains at least n points},
where A(ω) = {t : (t, ω) ∈ A}; we can also define the ∞-début of A by:
D^∞_A(ω) = inf{t ∈ R_+ : [0, t] ∩ A(ω) contains infinitely many points}.

Theorem 2.39. The n-début of a progressive set A is a stopping time for n = 1, 2, . . . , ∞.
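The defining property of a stopping time — that the event {D_A ≤ t} is decided by the path up to time t — can be checked mechanically on a discretized path. A small illustrative sketch, assuming a random-walk path and the set A = {(t, ω) : X_t(ω) ≥ 5} (the level 5 and the helper `debut` are ours, not from the text):

```python
import random

rng = random.Random(1)

def debut(path, level):
    """Début of A = {(t, w) : path_t >= level}: first entry time of the path into A."""
    for t, x in enumerate(path):
        if x >= level:
            return t
    return None  # the début is +infinity when the path never enters A

# a sample path of a simple random walk
path, x = [], 0
for _ in range(200):
    x += rng.choice((-1, 1))
    path.append(x)

D = debut(path, 5)
# Stopping-time property: whether {D_A <= t} has occurred is determined
# by the path restricted to [0, t] alone.
for t in range(len(path)):
    assert (debut(path[: t + 1], 5) is not None) == (D is not None and D <= t)
print("début:", D)
```

Of course, the measure-theoretic content of the début theorem is precisely that this remains true, in an appropriate sense, for general progressive sets.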
Proof. The proof is easy once we know that D^1_A is a stopping time. Indeed, by induction on n, we prove that D^{n+1}_A is also a stopping time, as the début of the progressive set A ∩ ]]D^n_A, ∞[[; the case n = ∞ then follows since D^∞_A = sup_n D^n_A.
It is also possible to show that the penetration time T_A of a progressive set A, defined by:
T_A(ω) = inf{t ∈ R_+ : [0, t] ∩ A(ω) contains uncountably many points},
is a stopping time.
We can naturally wonder if the début of a predictable set is a predictable stopping time. One moment of reflection shows that the answer is negative: every stopping time T is the début of the predictable set ]]T, ∞[[ without being predictable itself. However, if A is a predictable set whose début satisfies [[D_A]] ⊂ A (up to an evanescent set), then D_A is a predictable stopping time. One can deduce from there that:

Proposition 2.41. Let A be a predictable set which is closed for the right topology 1 . Then its début D_A is a predictable stopping time.
Now we are going to link the above mentioned notions to the jumps of some stochastic processes. We will follow [39], chapter I. Recall that a random set A is called thin if it is of the form A = ∪_n [[T_n]] for a sequence of stopping times (T_n); if moreover the graphs [[T_n]] are disjoint, the sequence (T_n) is said to exhaust A.

Lemma 2.42. Any thin random set admits an exhausting sequence of stopping times.
Proposition 2.43. If X is a càdlàg adapted process, the random set U ≡ {∆X ≠ 0} is thin; an exhausting sequence (T_n) for this set is called a sequence that exhausts the jumps of X. Moreover, if X is predictable, the stopping times (T_n) can be chosen predictable.
Proof. For each integer n ≥ 0, define the optional set
U_n = {(t, ω) : |∆X_t(ω)| > 2^{−n}},
and set V_0 = U_0 and V_n = U_n \ U_{n−1} for n ≥ 1. The sets V_n are optional (resp. predictable if X is predictable) and are disjoint. Now, let us define the stopping times
D^1_n = début of V_n,   D^{j+1}_n = début of V_n ∩ ]]D^j_n, ∞[[,
so that D^j_n represents the j-th jump of X whose size in absolute value is between 2^{−n} and 2^{−n+1}. Since X is càdlàg, V_n does not have any accumulation point, and the stopping times (D^k_n)_{(k,n)∈N²} enumerate all the points in U. Moreover, from Proposition 2.41, the stopping times D^k_n are predictable if X is predictable. To complete the proof, it suffices to reindex the doubly indexed family (D^k_n) into a singly indexed sequence (T_n).
In fact, we have the following characterization for predictable processes:

Proposition 2.44. If X is a càdlàg adapted process, then X is predictable if and only if the following two conditions are satisfied:
1. For all totally inaccessible stopping times T, ∆X_T = 0 a.s. on {T < ∞};
2. For all predictable stopping times T, X_T 1_{T<∞} is F_{T−} measurable.

Finally, we characterize F_T and F_{T−} measurable random variables:

Proposition 2.45. Let T be a stopping time. A random variable Z is F_{T−} (resp. F_T) measurable if and only if there exists a predictable (resp. optional) process X such that Z = X_T on {T < ∞}.

Proof. We only prove the first part, the proof of the second part being the same. We have just seen that the condition is sufficient. To show that the condition is necessary, by the monotone class theorem, it suffices to check the statement for indicators of sets that generate the σ-field F_{T−}, i.e. for Z = 1_A when A ∈ F_0 and for Z = 1_{B∩{s<T}}, where B ∈ F_s. But then, one can take X = 1_{[[0_A,∞[[} in the first case and X = 1_{]]s_B,∞[[} in the second case.

Section theorems
This section is devoted to a deep and very difficult result called the section theorem. The reader can refer to [26], p.219-220 or [24], p.70 for a proof. We will illustrate the theorem with some standard examples (here again the examples we deal with can be found in [26], [24] or [70]).

Theorem 3.1 (Optional and predictable section theorems). Let A be an optional (resp. predictable) set. For every ε > 0, there is a stopping time (resp. predictable stopping time) T such that:
[[T]] ⊂ A and P[T < ∞] ≥ P[π(A)] − ε,
where π is the canonical projection of R_+ × Ω onto Ω.
Throughout the paper, we shall use the optional and predictable section theorems. For now, we give some classical applications.
Theorem 3.2. Let (X_t) and (Y_t) be two optional (resp. predictable) processes. If for every finite stopping time (resp. every finite predictable stopping time) T one has:
X_T = Y_T a.s.,
then the processes (X_t) and (Y_t) are indistinguishable.
Proof. It suffices to apply the section theorem to the optional (resp. predictable) set
A = {(t, ω) : X_t(ω) ≠ Y_t(ω)}.
Indeed, if the set A were not evanescent, there would exist a stopping time T whose graph would not be evanescent and would be contained in A. This would imply the existence of some t ∈ R_+ such that X_{T∧t} would not be equal to Y_{T∧t} almost surely, a contradiction.
Theorem 3.3. Let (X_t) and (Y_t) be two optional (resp. predictable) processes. If for every stopping time (resp. every predictable stopping time) T one has:
X_T 1_{T<∞} = Y_T 1_{T<∞} a.s.,
then the processes (X_t) and (Y_t) are indistinguishable².

Proof. It suffices to apply the section theorems to the sets:
{(t, ω) : X_t(ω) > Y_t(ω)} and {(t, ω) : X_t(ω) < Y_t(ω)}.

To conclude this section, we give two other well known results as a consequence of the section theorems. We first recall the definition of the class (D):

Definition 3.5 (class (D)). A process X is said to be of class (D) if the family {X_T 1_{T<∞}, T a stopping time} is uniformly integrable (T ranges through all stopping times).

² Two processes X and X′ defined on the same probability space are called indistinguishable if for almost all ω, X_t = X′_t for every t.
A. Nikeghbali / The general theory of stochastic processes

Proposition 3.6. Let (Z_t)_{0≤t≤∞} be an optional process. Assume that for all stopping times T, the random variable Z_T is in L¹ and E[Z_T] does not depend on T. Then (Z_t) is a uniformly integrable martingale which is right continuous (up to an evanescent set).
Proof. Let T be a stopping time and let A ∈ F_T. The assumptions of the proposition, applied to the stopping time T_A, yield:
E[Z_{T_A}] = E[Z_∞], that is E[Z_T 1_A] + E[Z_∞ 1_{A^c}] = E[Z_∞],
and hence
E[Z_T 1_A] = E[Z_∞ 1_A].
Consequently, we have:
Z_T = E[Z_∞ | F_T] a.s.
Now define (X_t) as the càdlàg version of the martingale:
X_t = E[Z_∞ | F_t].
By the optional stopping theorem, we have X_T = E[Z_∞ | F_T] = Z_T a.s. for every stopping time T, and with an application of the section theorem (Theorem 3.3), we obtain that X and Z are indistinguishable; this completes the proof of the proposition.

Projection theorems
In this section we introduce the fundamental notions of optional (resp. predictable) projection and dual optional (resp. predictable) projection. These projections play a very important role in the general theory of stochastic processes. We shall give some nice applications in subsequent sections (in particular we shall see how the knowledge of the dual predictable projection of some honest times may lead to quick proofs of multidimensional extensions of Paul Lévy's arc sine law).
Here again, the material is standard in the general theory of stochastic processes and the reader can refer to the books [24] or [26,27] for more details and refinements.

The optional and predictable projections
By convention, we take F_0 = F_{0−}.

Theorem 4.1. Let X be a measurable process, either positive or bounded. There exists a unique (up to indistinguishability) optional process Y such that:
E[X_T 1_{T<∞} | F_T] = Y_T 1_{T<∞} a.s.
for every stopping time T.

Definition 4.2. The process Y is called the optional projection of X, and is denoted by °X.

Proof. The uniqueness follows from the optional section theorem. The space of bounded processes X which admit an optional projection is a vector space. Moreover, let (X^n) be a uniformly bounded increasing sequence of processes with limit X and suppose they admit projections (Y^n). The section theorem again shows that the sequence (Y^n) is a.s. increasing; it is easily checked that lim Y^n is a projection for X.
By the monotone class theorem (as it is stated for example in [69], Theorem 2.2, p.3) it is now enough to prove the statement for a class of processes closed under pointwise multiplication and generating the σ-field F ⊗ B(R_+). Such a class is provided by the processes
X_t(ω) = 1_{[0,u[}(t) H(ω), u ≥ 0, H bounded and F measurable.
Let (H_t) be a càdlàg version of E[H | F_t] (with the convention that H_{0−} = H_0). The optional stopping theorem proves that
Y_t = 1_{[0,u[}(t) H_t
satisfies the condition of the statement. The proof is complete in the bounded case. For the general case we use the processes X ∧ n and pass to the limit.

Theorem 4.3. Let X be a measurable process, either positive or bounded. There exists a unique (up to indistinguishability) predictable process Z such that:
E[X_T 1_{T<∞} | F_{T−}] = Z_T 1_{T<∞} a.s.
for every predictable stopping time T.
Definition 4.4. The process Z is called the predictable projection of X, and is denoted by ^pX.
Proof. The proof is exactly the same as the proof of Theorem 4.1, except that at the end, one has to apply the predictable stopping theorem we shall give below.
Theorem 4.5 (Predictable stopping theorem). Let (X_t) be a right continuous and uniformly integrable martingale. Then for any predictable stopping time T, we have:
X_{T−} = E[X_∞ | F_{T−}] a.s.
Consequently, the predictable projection of X is the process (X_{t−}).
Proof. Let (T_n) be a sequence of stopping times that announces T; we then have X_{T_n} = E[X_∞ | F_{T_n}]. Since F_{T−} = ⋁_n F_{T_n}, the desired result follows easily from the martingale convergence theorem.

Corollary 4.7. If X is a predictable local martingale, then X is continuous.
Proof. By localization, it suffices to consider uniformly integrable martingales. Let T be a predictable stopping time. Since X is predictable, X T is F T − measurable and hence: E [X T |F T − ] = X T , and from the predictable stopping theorem, X T = X T − . As this holds for any predictable stopping time and X is predictable, we conclude that X has no jumps.
We can also mention another corollary of the predictable stopping theorem, which is not so well known. By considering T_Λ, where Λ ∈ F_T or Λ ∈ F_{T−}, and where T is a stopping time or a predictable stopping time, we may as well restate Theorems 4.1 and 4.3 as:

Theorem 4.9. Let X be a measurable process, either positive or bounded.
1. There exists a unique (up to indistinguishability) optional process Y such that:
E[X_T 1_{T<∞}] = E[Y_T 1_{T<∞}]
for any stopping time T.
2. There exists a unique (up to indistinguishability) predictable process Z such that:
E[X_T 1_{T<∞}] = E[Z_T 1_{T<∞}]
for any predictable stopping time T.
Remark 4.10. Optional and predictable projections share some common properties with the conditional expectation: if X is a measurable process and Y is an optional (resp. predictable) bounded process, then
°(XY) = Y °X (resp. ^p(XY) = Y ^pX).

Now, we state and prove an important theorem about the difference between the optional and predictable σ-fields. In particular, if all (F_t) martingales are continuous, then O = P and every stopping time is predictable.
Proof. Since every optional process is its own optional projection, it is enough to show that every optional projection is measurable with respect to P ∨ I. But this is obvious for the projections of the generating processes 1_{[0,u[}(t) H(ω) used in the proof of Theorem 4.1.

To conclude this paragraph, let us give an example from filtering theory.
Example 4.13. Let (h_t) be a bounded measurable process, and suppose that the observation process is
Y_t = ∫_0^t h_s ds + B_t,
where B is a Brownian Motion. Denote by (F^Y_t) the filtration generated by Y, and by ĥ the optional projection of h on (F^Y_t). Then the process
N_t = Y_t − ∫_0^t ĥ_s ds
is an (F^Y_t) Brownian Motion. The process N is called the innovation process.

Increasing processes and projections 3
Definition 4.14. We shall call an increasing process a process (A t ) which is nonnegative, (F t ) adapted, and whose paths are increasing and càdlàg.
Remark 4.15. A stochastic process which is nonnegative, and whose paths are increasing and càdlàg, but which is not (F t ) adapted, is called a raw increasing process.
Increasing processes play a central role in the general theory of stochastic processes: the main idea is to think of an increasing process as a random measure on R_+, dA_t(ω), whose distribution function is A_•(ω). That is why we shall make the convention that A_{0−} = 0.

A process which can be written as the difference of two increasing processes (resp. integrable increasing processes) is called a process of finite variation (resp. a process of integrable variation).

Now, let (A_t) be an increasing process and define its right continuous inverse:
C_t = inf{s : A_s > t}.
We have C_{t−} = inf{s : A_s ≥ t}. C_t is the début of an optional set and hence it is a stopping time. Similarly, C_{t−} is a stopping time, which is predictable if (A_t) is predictable (as the début of a right closed predictable set).

Now, we recall the following time change formula ([27], p.132): for every nonnegative process Z, we have:
∫_{[0,∞[} Z_s dA_s = ∫_0^∞ Z_{C_{s−}} 1_{C_{s−}<∞} ds.   (4.1)

With the time change formula, we can now prove the following theorem:

Theorem 4.16. Let X be a nonnegative measurable process and let (A_t) be an increasing process. Then we have:
E[∫_0^∞ X_s dA_s] = E[∫_0^∞ °X_s dA_s],
and, if moreover (A_t) is predictable,
E[∫_0^∞ X_s dA_s] = E[∫_0^∞ ^pX_s dA_s].

Proof. We shall prove the second equality, which is more difficult to prove. From (4.1), we have:
E[∫_0^∞ X_s dA_s] = E[∫_0^∞ X_{C_{s−}} 1_{C_{s−}<∞} ds] = ∫_0^∞ E[X_{C_{s−}} 1_{C_{s−}<∞}] ds.
Similarly, we can show that:
E[∫_0^∞ ^pX_s dA_s] = ∫_0^∞ E[^pX_{C_{s−}} 1_{C_{s−}<∞}] ds.
Now, since C_{s−} is a predictable stopping time, we have from the definition of predictable projections:
E[X_{C_{s−}} 1_{C_{s−}<∞}] = E[^pX_{C_{s−}} 1_{C_{s−}<∞}],
and this completes the proof of the theorem.
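The time change formula can be sanity-checked numerically. A small sketch, assuming a pure-jump increasing process A (so that both sides can be evaluated directly) and a deterministic integrand Z; the jump times, sizes and integrand below are arbitrary choices of ours:

```python
import math
from itertools import accumulate

# A pure-jump increasing process: jumps of size a_i at times t_i (A_{0-} = 0).
jumps = [(0.5, 0.7), (1.3, 0.2), (2.0, 1.1), (3.7, 0.4)]  # (t_i, a_i)
Z = lambda t: math.cos(t) ** 2 + t  # an arbitrary nonnegative integrand

# Left side: the Stieltjes integral of Z against dA.
lhs = sum(a * Z(t) for t, a in jumps)

# Right side: integrate s -> Z(C_{s-}) over [0, A_inf), where C is the
# right-continuous inverse of A; here C_{s-} = t_i on (A_{t_i -}, A_{t_i}].
cum = list(accumulate(a for _, a in jumps))
A_inf = cum[-1]

def C_minus(s):
    acc = 0.0
    for t, a in jumps:
        if s <= acc + a:
            return t
        acc += a
    return math.inf

n = 200000
h = A_inf / n
rhs = sum(Z(C_minus((k + 0.5) * h)) * h for k in range(n))  # Riemann sum
print(lhs, rhs)
```

Each jump of size a_i contributes an interval of length a_i on which C_{s−} sits at the jump time t_i, which is exactly why the two integrals agree.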
Example 4.17. One often uses the above theorem in the following way: taking for X a suitable bounded measurable process, one replaces it under the integral sign by its optional or predictable projection.

Now, we give a sort of converse to Theorem 4.16:

Theorem 4.18. Let (A_t) be a raw integrable increasing process.
1. If for all bounded and measurable processes X we have
E[∫_0^∞ X_s dA_s] = E[∫_0^∞ °X_s dA_s],
then A is optional.
2. If for all bounded and measurable processes X we have
E[∫_0^∞ X_s dA_s] = E[∫_0^∞ ^pX_s dA_s],
then A is predictable.

Random measures on (R + × Ω) and the dual projections
Definition 4.19. We call P-measure a bounded measure on the sigma field B (R + )⊗F (resp. O, P) which does not charge the P-evanescent sets of B (R + )⊗ F (resp. O, P).
We can construct a P-measure µ in the following way: let (A_t) be a raw increasing process which is integrable; for any bounded measurable process X, set:
µ(X) = E[∫_0^∞ X_s dA_s].
In fact, quite remarkably, all P-measures are of this form:

Theorem 4.20 ([27], p.141). Let µ be a nonnegative P-measure (resp. a P-measure) on B(R_+) ⊗ F. Then, there exists a unique raw integrable increasing process (A_t) (resp. a raw process of integrable variation), up to indistinguishability, such that for any bounded process X:
µ(X) = E[∫_0^∞ X_s dA_s].
We say that A is the integrable increasing process (resp. the process of integrable variation) associated with µ.
Furthermore, (A_t) is optional (resp. predictable) if and only if for any bounded measurable process X:
µ(X) = µ(°X) (resp. µ(X) = µ(^pX)).

Definition 4.21. A P-measure µ is called optional (resp. predictable) if for any bounded measurable process X:
µ(X) = µ(°X) (resp. µ(X) = µ(^pX)).

Remark 4.22. From Theorem 4.18, the increasing process associated with an optional (resp. predictable) measure µ is optional (resp. predictable).

Now, we give some interesting consequences of Theorem 4.20. The first one is useful to prove the uniqueness of the Doob-Meyer decomposition of supermartingales.
Theorem 4.23. Let (A_t) and (B_t) be two processes of integrable variation. If for any stopping time T:
E[A_∞ − A_{T−}] = E[B_∞ − B_{T−}],
then A and B are indistinguishable. Similarly, if (A_t) and (B_t) are two predictable processes of integrable variation and if for any t ≥ 0:
E[A_∞ − A_t | F_t] = E[B_∞ − B_t | F_t] a.s.,
then A and B are indistinguishable.

Proof. We prove only the optional case, the proof for the predictable case being the same.
Let µ be the measure associated with A − B. The condition of the theorem entails that for any stopping time T and any H ∈ F_T:
E[(A_∞ − A_{T−}) 1_H] = E[(B_∞ − B_{T−}) 1_H];
indeed, it suffices to consider the stopping times T_H, with H ∈ F_T. In other words, µ vanishes on the stochastic intervals [[T_H, ∞[[, which generate the optional σ-field and are stable under finite intersections; hence µ vanishes on O, and since A − B is optional, the uniqueness part of Theorem 4.20 shows that A and B are indistinguishable.

Remark 4.25. For the predictable case, one must be more cautious. First, we note that the condition of the theorem combined with the right continuity of the paths entails:
E[A_∞ − A_T | F_T] = E[B_∞ − B_T | F_T] a.s. for any stopping time T.
From this we deduce now the unconditional form:
E[A_∞ − A_T] = E[B_∞ − B_T].
In this case, (A_∞ − A_t) and (B_∞ − B_t) have the same optional projection.

Now, we define projections of P-measures:

Definition 4.26. Let µ be a P-measure. The optional (resp. predictable) projection of µ is the P-measure µ^o (resp. µ^p) defined by:
µ^o(X) = µ(°X) (resp. µ^p(X) = µ(^pX))
for all measurable and bounded processes X.
Example 4.27. Let µ be a nonnegative P-measure with a density f, that is:
µ(X) = E[∫_0^∞ X_s f_s ds].
Then we have:
µ^o(X) = E[∫_0^∞ X_s g_s ds],
where g is any nonnegative measurable process such that g_s = E[f_s | F_s] a.s. for almost all s. For example, g can be the optional or predictable projection of f.
But usually, unlike the above example, µ^o (or µ^p) is not associated with °A (or ^pA); this leads us to the following fundamental definition of dual projections:

Definition 4.28. Let (A_t) be a raw integrable increasing process. We call dual optional projection of A the optional increasing process (A^o_t) defined by:
E[∫_0^∞ X_s dA^o_s] = E[∫_0^∞ °X_s dA_s]
for any bounded measurable X. We call dual predictable projection of A the predictable increasing process (A^p_t) defined by:
E[∫_0^∞ X_s dA^p_s] = E[∫_0^∞ ^pX_s dA_s]
for any bounded measurable X.
Remark 4.29. The above definition extends in a straightforward way to processes of integrable variation.
Remark 4.30. Formally, the projection operation consists in defining conveniently the process (E[A_t | F_t])_{t≥0}, whereas the dual projection operation consists of defining the symbolic integral
∫_0^t E[dA_s | F_s].
In particular, the dual projection of a bounded process need not be bounded (for example, the dual projection of the indicator process associated with an honest time, as will be explained in a subsequent section). Now, we shall try to compute the jumps of the dual projections.

Proposition 4.31. Let A and B be two raw integrable increasing processes. Then A and B have the same dual optional projection if and only if for every stopping time T:
E[A_∞ − A_{T−} | F_T] = E[B_∞ − B_{T−} | F_T] a.s.;
and A and B have the same dual predictable projection if and only if for every predictable stopping time T:
E[A_∞ − A_{T−} | F_{T−}] = E[B_∞ − B_{T−} | F_{T−}] a.s.

Proof. It is exactly the same as the proof of Theorem 4.23.
Theorem 4.32. Let (A t ) be a raw process of integrable variation.
1. Let T be a stopping time. Then the jump of A^o at T, ∆A^o_T, is given by:
∆A^o_T = E[∆A_T | F_T] a.s. on {T < ∞},
with the convention that ∆A^o_∞ = 0.
2. Let T be a predictable stopping time. Then the jump of A^p at T, ∆A^p_T, is given by:
∆A^p_T = E[∆A_T | F_{T−}] a.s. on {T < ∞}.

Proof. We only deal with the optional case (the predictable case can be dealt with similarly). From Proposition 4.31, we have:
E[A^o_∞ − A^o_{T−} | F_T] = E[A_∞ − A_{T−} | F_T].
Similarly, we have:
E[A^o_∞ − A^o_T | F_T] = E[A_∞ − A_T | F_T],
and consequently
E[∆A^o_T | F_T] = E[∆A_T | F_T].
Now, the result follows from the fact that ∆A^o_T is F_T measurable. Now we define predictable compensators.
Definition 4.33. Let (A_t) be an optional process of integrable variation. The dual predictable projection of A, which we shall denote by Ã, is also called the predictable compensator of A.
Why is Ã called a compensator? From Proposition 4.31, we have, for every t ≥ 0:
E[A_∞ − A_t | F_t] = E[Ã_∞ − Ã_t | F_t] a.s.
Consequently, A − Ã is a martingale, and Ã is the process that one has to subtract from A to obtain a martingale which vanishes at 0. This is for example what one does to go from the Poisson process N_t to the compensated Poisson process N_t − λt. Another classical application is concerned with totally inaccessible stopping times:

Proposition 4.34. Let T be a stopping time. The following are equivalent:
1. T is totally inaccessible;
2. the predictable compensator Ã of the process A_t = 1_{T≤t} is continuous.

Proof.
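The compensated Poisson process mentioned above is the simplest nontrivial compensator; here is a quick Monte Carlo sketch checking that N_t − λt is centered (the martingale property evaluated at a fixed time). The rate and horizon are arbitrary choices of ours:

```python
import random

rng = random.Random(2)
lam, horizon = 3.0, 2.0

def poisson_count(rng):
    """N_horizon for a Poisson process of rate lam, built from
    independent exponential inter-arrival times."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(lam)
        if t > horizon:
            return n
        n += 1

# The compensator of N is lam*t, so M_t = N_t - lam*t should be centered.
vals = [poisson_count(rng) - lam * horizon for _ in range(20000)]
print(sum(vals) / len(vals))  # Monte Carlo estimate of E[M_horizon]
```

The estimate is close to 0, reflecting that λt exactly compensates the mean growth of N.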
(2) ⇒ (1). We have to prove that P[T = S < ∞] = 0 for any predictable stopping time S. We have:
P[T = S < ∞] = E[∆A_S 1_{S<∞}] = E[∆Ã_S 1_{S<∞}] = 0,
where the second equality follows from Theorem 4.32, and hence T is totally inaccessible.

(1) ⇒ (2). Let A_t = 1_{T≤t} and let Ã denote its predictable compensator. From Theorem 4.32, for all predictable stopping times S, we have:
∆Ã_S = E[∆A_S | F_{S−}] = 0 a.s. on {S < ∞},
since P[T = S < ∞] = 0. As the jumps of the predictable process Ã are exhausted by predictable stopping times (Proposition 2.43), Ã is continuous.

Proposition 4.35. Let T be a totally inaccessible stopping time and let Ã be the predictable compensator of A_t = 1_{T≤t}. Then Ã_T follows the standard exponential law.

Proof. Let M_t = A_t − Ã_t be the martingale introduced in the proof of Proposition 4.34. We associate with f, a Borel bounded function, the stochastic integral:
M^f_t = ∫_0^t f(Ã_s) dM_s.
We can compute M^f_t more explicitly:
M^f_t = f(Ã_T) 1_{T≤t} − F(Ã_{t∧T}),
where F(t) = ∫_0^t ds f(s). An application of the optional stopping theorem yields E[M^f_∞] = 0, which implies:
E[f(Ã_T)] = E[F(Ã_T)].
As this last equality holds for every Borel bounded function, the law of Ã_T must be the standard exponential law.
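The exponential law of the compensator evaluated at T is easy to test in the Poisson setting: the first jump time T of a Poisson process of rate λ is totally inaccessible, its compensator is Ã_t = λ(t ∧ T), and hence Ã_T = λT. A Monte Carlo sketch, which also checks the identity E[f(Ã_T)] = E[F(Ã_T)] from the proof with f = cos, F = sin (rate and sample size are our choices):

```python
import math
import random

rng = random.Random(3)
lam = 2.0

# T ~ Exp(lam) is the first jump of a rate-lam Poisson process, and the
# compensator of 1_{T<=t} is A~_t = lam*(t ∧ T); hence A~_T = lam*T.
samples = [lam * rng.expovariate(lam) for _ in range(100000)]

mean = sum(samples) / len(samples)                        # ~1 for Exp(1)
lhs = sum(math.cos(x) for x in samples) / len(samples)    # E[f(A~_T)]
rhs = sum(math.sin(x) for x in samples) / len(samples)    # E[F(A~_T)]
print(mean, lhs, rhs)
```

For a standard exponential variable both expectations equal 1/2, and the simulation reproduces this as well as the unit mean.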
Remark 4.36. The above proposition has some nice applications to honest times that avoid stopping times in the theory of progressive enlargements of filtrations, as we shall see later (see [42], [64]).

Now, we give two nice applications of dual projections. First, we refine Theorem 4.18:

Theorem 4.37. Let (A_t) be a raw integrable increasing process. Then A is optional if and only if A = A^o; similarly, A is predictable if and only if A = A^p. Indeed, if A is optional, then by Theorem 4.16, for every bounded measurable process X:
E[∫_0^∞ X_s dA_s] = E[∫_0^∞ °X_s dA_s] = E[∫_0^∞ X_s dA^o_s],
and similarly:
E[∫_0^∞ X_s dA_s] = E[∫_0^∞ ^pX_s dA_s] = E[∫_0^∞ X_s dA^p_s]
when A is predictable. Consequently, under the hypothesis (1), one obtains A = A^o by the uniqueness part of Theorem 4.20.

As a by-product of Theorem 4.37, we can show the characterization of stopping times by Knight and Maisonneuve. We introduce the σ-field F_ρ associated with an arbitrary random time ρ, i.e. a nonnegative random variable:
F_ρ = σ{Z_ρ ; Z an (F_t) optional process}.

Theorem 4.38 (Knight-Maisonneuve). If for all uniformly integrable (F_t) martingales (M_t) one has:
E[M_∞ | F_ρ] = M_ρ a.s.,
then ρ is an (F_t) stopping time (the converse is Doob's optional stopping theorem).
Proof. For t ≥ 0, one applies the assumption to well chosen uniformly integrable martingales; comparing the two extreme terms of the resulting chain of equalities, we get that {ρ ≤ t} ∈ F_t, i.e. ρ is an (F_t) stopping time.

Let us also mention two natural questions at this point:
• How may E[M_∞ | F_ρ] on the one hand, and M_ρ on the other hand, differ for a non stopping time ρ? The reader can refer to [6], [81] or [64] for some answers.
• Given an arbitrary random time ρ, is it possible to characterize the set of martingales which satisfy (2.1)? (See [6], [81] or [64].)

Now we shall see how an application of dual projections and their simple properties gives a simple proof of a multidimensional extension of Lévy's arc sine law. The results that follow are borrowed from [62].
Lemma 4.40 ([62]). Let (R_t) be a Bessel process of dimension d = 2(1 − µ), 0 < µ < 1, starting from 0, and let (L_t) be a normalization of its local time at level zero (see [62] for more precision on this normalization). Set:
g_µ = sup{t ≤ 1 : R_t = 0}.
Then the dual predictable projection A^{g_µ}_t of 1_{(g_µ≤t)} is of the form:
A^{g_µ}_t = c_µ ∫_0^{t∧1} dL_s / (1 − s)^µ,
for a normalizing constant c_µ depending on the choice of (L_t); i.e. for every nonnegative predictable process (x_t):
E[x_{g_µ}] = E[∫_0^∞ x_s dA^{g_µ}_s].

Remark 4.41. The random time g_µ is not a stopping time; it is a typical example of an honest time (we shall define these times in a subsequent section).
Now we give a result which was obtained by Barlow, Pitman and Yor, using excursion theory. The proof we give is borrowed from [62].

Theorem 4.42 ([62]). The variable g_µ follows the law:
P(g_µ ∈ dt) = (sin(πµ)/π) t^{µ−1} (1 − t)^{−µ} dt, t ∈ (0, 1),
i.e. the Beta law with parameters (µ, 1 − µ). In particular, for µ = 1/2 (the Brownian case),
P(g_{1/2} ∈ dt) = dt / (π √(t(1 − t))),
i.e. g_{1/2} is arc sine distributed ([49]).
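The arc sine case µ = 1/2 is easy to check by simulation, approximating Brownian motion on [0, 1] by a simple random walk. The sketch below (step count and sample size are our choices) verifies two symmetric features of the arc sine law: mean 1/2, and P(g ≤ 1/2) = 1/2.

```python
import random

rng = random.Random(4)

def last_zero_fraction(n, rng):
    """Fraction of time at which an n-step simple random walk is last at 0
    (a discrete approximation of the last zero of Brownian motion before 1)."""
    s, last = 0, 0
    for k in range(1, n + 1):
        s += rng.choice((-1, 1))
        if s == 0:
            last = k
    return last / n

vals = [last_zero_fraction(500, rng) for _ in range(4000)]
mean = sum(vals) / len(vals)
below_half = sum(v <= 0.5 for v in vals) / len(vals)
print(mean, below_half)  # both close to 1/2 under the arc sine law
```

A histogram of `vals` would also show the characteristic U-shape: the last zero is far more likely to fall near 0 or near 1 than near the middle.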
Proof. From Lemma 4.40, for every Borel function f : [0, 1] → R_+, we have:
E[f(g_µ)] = E[∫_0^1 f(s) dA^{g_µ}_s].   (4.3)
By the scaling property of (L_t), L_s has the same law as s^µ L_1 for every fixed s. Moreover, by definition of (L_t) (see [62] or [21]), the expectation E[L_1] can be expressed through a random variable which follows the gamma law with parameter µ. Now, plugging this in (4.3) yields the Beta(µ, 1 − µ) density; to conclude, it suffices to use the duplication formula for the Gamma function.
We shall conclude this section by giving a result which is useful in stochastic integration. First, we need to introduce the notion of locally integrable increasing process.
Definition 4.43. A raw increasing process (A_t), such that A_0 = 0, is said to be locally integrable if there exists an increasing sequence of stopping times (T_n), such that lim_{n→∞} T_n = ∞ and E[A_{T_n}] < ∞ for every n.

Theorem 4.44.
1. Every predictable process of finite variation is locally integrable.
2. An optional process A of finite variation is locally integrable if and only if there exists a predictable process Ã of finite variation such that (A − Ã) is a local martingale which vanishes at 0. When it exists, Ã is unique. We say that Ã is the predictable compensator of A.
Remark 4.45. We shall see an application of this result to the existence of the bracket ⟨M⟩ of a local martingale in the next section.

The Doob-Meyer decomposition and multiplicative decompositions
This section is devoted to a fundamental result in Probability Theory: the Doob-Meyer decomposition theorem. This result is now standard and a proof of it can be found for example in [27,68,70].
Theorem 5.1 (Doob-Meyer decomposition). An adapted càdlàg process Y is a submartingale of class (D) null at 0 if and only if Y may be written:
Y_t = M_t + A_t,   (5.1)
where M is a uniformly integrable martingale null at 0 and A a predictable integrable increasing process null at 0. Moreover, the decomposition (5.1) is unique.
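The discrete-time analogue of this decomposition (Doob's decomposition) is elementary and makes the predictability of A concrete: for Y_n = S_n² with S a simple symmetric random walk, E[Y_n − Y_{n−1} | F_{n−1}] = 1, so A_n = n is predictable and M_n = S_n² − n is a martingale. This can be checked exactly by enumerating all paths:

```python
from itertools import product

n = 10
# Enumerate the 2^n equally likely sign sequences of the walk and compute
# E[M_n] for M_n = S_n^2 - n; Doob's decomposition says this must be 0.
total = 0
for signs in product((-1, 1), repeat=n):
    s = sum(signs)  # S_n for this path
    total += s * s - n
expected_M = total / 2 ** n
print(expected_M)  # prints 0.0
```

The point of the (much harder) continuous-time theorem is that such a predictable increasing part still exists and is unique for class (D) submartingales.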
The following theorem gives a necessary and sufficient condition for A to be continuous.

Theorem 5.2. Let Y = M + A be as in Theorem 5.1. The increasing process A is continuous if and only if Y is regular, that is, for every predictable stopping time T and every announcing sequence (T_n):
lim_n E[Y_{T_n}] = E[Y_T].

Proof. Let (T_n) be a sequence of stopping times that announces T. Since Y is of class (D), we have:
lim_n E[Y_{T_n}] = E[Y_{T−}],
which is:
lim_n E[M_{T_n} + A_{T_n}] = E[M_{T−} + A_{T−}].
Consequently, we have:
E[Y_T] − lim_n E[Y_{T_n}] = E[A_T − A_{T−}] = E[∆A_T],
and the result of the proposition follows easily.
Now we give the local form of the Doob-Meyer decomposition; for this we need the following lemma:

Lemma 5.3. Let Y be a local submartingale null at 0. Then Y is locally of class (D): there exists a sequence of stopping times (T_n) such that T_n → ∞ as n → ∞ and such that the stopped process (Y_{t∧T_n}) is a submartingale of class (D) for every n.
Theorem 5.4. Let Y be a local submartingale. Then Y may be written uniquely in the form: where M is a local martingale null at 0 and A is a predictable increasing process null at 0.
Remark 5.5. All the previous results were stated for submartingales but of course they also hold for supermartingales. Given a supermartingale Z, it suffices to consider the submartingale Y = −Z to obtain the decomposition:
Z_t = M_t − A_t,
where M is a local martingale and A a predictable increasing process, both null at 0. To conclude the discussion on the Doob-Meyer decomposition, we give a result on the existence of the bracket ⟨X⟩ of a local martingale, which follows easily from the above theorem and Theorem 4.44.
Theorem 5.6. Let X be a local martingale null at 0. The following are equivalent:
1. There exists a unique predictable increasing process (⟨X⟩_t) null at 0 such that X² − ⟨X⟩ is a local martingale;
2. X is locally L² bounded;
3. the quadratic variation process ([X, X]_t) is locally integrable.

When one of these conditions holds, ⟨X⟩ is the predictable compensator of [X, X].
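For a concrete instance of this theorem, take the compensated Poisson martingale X_t = N_t − λt: the sum of its squared jumps is [X, X]_t = N_t, whose predictable compensator is λt, so ⟨X⟩_t = λt and E[X_t²] = λt. A Monte Carlo sketch (rate, horizon and sample size are our choices):

```python
import random

rng = random.Random(5)
lam, horizon = 1.5, 4.0

def poisson_count(rng):
    t, k = 0.0, 0
    while True:
        t += rng.expovariate(lam)  # exponential inter-arrival times
        if t > horizon:
            return k
        k += 1

# X = N - lam*t has <X>_t = lam*t, hence E[X_t^2] = lam*t.
xs = [poisson_count(rng) - lam * horizon for _ in range(30000)]
second_moment = sum(x * x for x in xs) / len(xs)
print(second_moment)  # should be close to lam * horizon = 6.0
```

This is the jump-process counterpart of the familiar Brownian identity E[B_t²] = t = ⟨B⟩_t.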

Multiplicative decompositions
From this section on, we shall deal with more recent and less known aspects of the general theory of stochastic processes and stochastic calculus.
We start with a nice result about the existence of multiplicative decompositions for nonnegative submartingales and supermartingales. These decompositions have turned out to be much less useful than the additive decompositions; however, we mention some of them and give a nice application. The reader should refer to [36, 55, 3, 37, 65] for more details. We will also give a very elegant application of multiplicative decompositions to the theory of enlargements of filtrations in Section 8.
The first result in this direction is due to Itô and Watanabe and deals with nonnegative supermartingales. Let (Z_t) be a nonnegative càdlàg supermartingale, and define:
T_0 = inf{t : Z_t = 0}.
It is well known that Z is null on [[T_0, ∞[[.

Theorem 6.1 (Itô-Watanabe [36]). Let (Z_t) be a nonnegative càdlàg supermartingale such that P(T_0 > 0) = 1. Then Z can be factorized as:
Z_t = Z⁰_t Z¹_t,
with a positive local martingale (Z⁰_t) and a decreasing process (Z¹_t) with Z¹_0 = 1.

We shall see in subsequent sections how a refinement of this decomposition, in the special case of the Azéma supermartingales associated with honest times, leads to some nice results on enlargements of filtrations ([63]).

Now, we introduce a remarkable family of continuous local submartingales, which we shall call of class (Σ_c) (the subscript c stands for continuous) and which appear under different forms in Probability Theory (in the study of local times, of suprema of local martingales, in balayage, etc.). The reader can refer to [69, 61, 60, 65] for more references and details.

Definition 6.3. Let (X_t) be a positive local submartingale, which decomposes as:
X_t = N_t + A_t.
We say that (X_t) is of class (Σ_c) if:
1. (N_t) is a continuous local martingale, with N_0 = 0;
2. (A_t) is a continuous increasing process, with A_0 = 0;
3. the measure (dA_t) is carried by the set {t : X_t = 0}.
If additionally (X_t) is of class (D), we shall say that (X_t) is of class (Σ_c D). If N in the decomposition of X is only assumed to be càdlàg, we say that (X_t) is of class (Σ), dropping the subscript c. Similarly, we define the class (ΣD). For submartingales, multiplicative decompositions require some care: indeed, it is well known that once a nonnegative local martingale hits zero it remains null, while this is not true for nonnegative submartingales (consider for example |B_t|, the absolute value of a standard Brownian Motion). Hence we shall use the following form of multiplicative decomposition:
Proposition 6.5 ([65]). Let (Y_t)_{t≥0} be a continuous nonnegative local submartingale such that Y_0 = 0. Consider its Doob-Meyer decomposition: Y_t = N_t + A_t. The local submartingale (Y_t)_{t≥0} then admits the following multiplicative decomposition:
Y_t = M_t C_t − 1, (6.3)
where (M_t)_{t≥0} is a continuous local martingale, which is strictly positive, with M_0 = 1, and where (C_t)_{t≥0} is an increasing, continuous and adapted process, with C_0 = 1. The decomposition is unique, and the processes C and M are given by the explicit formulae:
C_t = exp(∫_0^t dA_s/(1 + Y_s)), (6.5)
and
M_t = (1 + Y_t) exp(−∫_0^t dA_s/(1 + Y_s)). (6.6)
Remark 6.6. It is possible to find necessary and sufficient conditions on M and C for Y to be of class (D) (see [65]).
Now, if we want Y to be of class (Σ_c), Proposition 6.5 takes the following more precise form: let (X_t) be a nonnegative, continuous local submartingale with X_0 = 0. Then, the following are equivalent:
1. (X_t) is of class (Σ_c);
2. there exists a strictly positive, continuous local martingale (M_t), with M_0 = 1, such that:
X_t = M_t e^{A_t} − 1,
where (A_t) is the increasing process of X. The local martingale (M_t) is then given by:
M_t = (1 + X_t) e^{−A_t}.
Now, we can give a nice characterization of the local submartingales of the class (Σ_c) in terms of frequency of vanishing. More precisely, let (M_t) be a strictly positive and continuous local martingale with M_0 = 1, and denote by E(M) the set of all nonnegative local submartingales with martingale part M in their multiplicative decomposition (6.3).
Then the following holds: the process
Y*_t ≡ M_t / (inf_{u≤t} M_u) − 1
belongs to E(M), and it is the smallest element of E(M), in the sense that:
Y*_t ≤ Y_t, t ≥ 0, for every Y ∈ E(M).
Consequently, (Y*_t) has more zeros than any other local submartingale of E(M).
Proof. It suffices to note that any element Y ∈ E(M) decomposes as Y_t = M_t C_t − 1. Since Y must be nonnegative, we must have: C_t ≥ 1/M_t. But (1/C_t) is decreasing, hence we have: C_t ≥ sup_{s≤t} (1/M_s) = 1/(inf_{s≤t} M_s), and this proves the Corollary.
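The explicit formulae for C and M in Proposition 6.5 can be recovered by a short computation; the following sketch is our own, using the normalization Y_t = M_t C_t − 1 that appears in the proof above:

```latex
\text{Write } 1 + Y_t = M_t C_t. \text{ Integration by parts gives}
\qquad dY_t = C_t\,dM_t + M_t\,dC_t,
\text{and comparison with the Doob--Meyer decomposition } dY_t = dN_t + dA_t
\text{ forces the finite variation parts to agree: } M_t\,dC_t = dA_t. \text{ Hence}
\qquad \frac{dC_t}{C_t} = \frac{dA_t}{M_t C_t} = \frac{dA_t}{1+Y_t},
\text{which integrates (with } C_0 = 1 \text{) to}
\qquad C_t = \exp\!\Big(\int_0^t \frac{dA_s}{1+Y_s}\Big),
\qquad M_t = \frac{1+Y_t}{C_t} = (1+Y_t)\exp\!\Big(-\int_0^t \frac{dA_s}{1+Y_s}\Big).
\text{If } Y \text{ is of class } (\Sigma_c), \text{ then } dA \text{ is carried by } \{Y=0\},
\text{ so that } C_t = e^{A_t}.
```

The last line explains why the class (Σ_c) case takes the more precise exponential form.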

Some hidden martingales
In this section, we illustrate the power of martingale methods by extending some well known results for the Brownian Motion to larger classes of stochastic processes which do not, in general, enjoy any scaling or Markov properties. First, we need the following lemma, which we shall not prove, since its proof is similar to that of Proposition 3.6 (it is in fact much simpler; see [69], Proposition 3.5, p. 70).
Lemma 7.1. Let (X_t) be a càdlàg, adapted and bounded process. Then (X_t) is a martingale if and only if for any bounded stopping time T:
E[X_T] = E[X_0].
Now, we state and prove, with stochastic calculus arguments, a characterization of predictable increasing processes among optional processes.
Theorem 7.2. Let (A_t) be an increasing optional process such that E[A_∞] < ∞. Then (A_t) is predictable if and only if for every bounded martingale (X_t):
E[∫_0^∞ X_{s−} dA_s] = E[X_∞ A_∞]. (7.1)
Proof. We first note that, A being optional, E[X_∞ A_∞] = E[∫_0^∞ X_s dA_s], and consequently (7.1) is equivalent to:
E[∫_0^∞ X_{s−} dA_s] = E[∫_0^∞ X_s dA_s].
By stopping, we then obtain, for any bounded predictable process H:
E[∫_0^∞ H_s dA_s] = E[∫_0^∞ H_s dA^p_s].
Hence A = A^p and A is predictable.
Now, we give an illustration of the power of martingale methods. We start with two similar results on Brownian Motion, obtained with excursion theory and Markov process methods, and then we show how these results can be extended to a much wider class of processes, using martingale methods. This also gives us the opportunity to review some classical martingale techniques. Recall that the class (Σ), which extends the class (Σ_c), contains some discontinuous submartingales, such as Azéma's second martingale or its generalizations (see [62]).
Remark 7.6. In addition to the examples given after Definition 6.3, we can give the following ones:
• Let (M_t) be a local martingale (starting from 0) with only negative jumps, and let S_t ≡ sup_{u≤t} M_u; then X_t ≡ S_t − M_t is of class (Σ). In this case, X has only positive jumps.
• Let (R_t) be a Bessel process (starting from 0) of dimension 2(1 − µ), with µ ∈ (0, 1). Define: g_µ(t) ≡ sup{s ≤ t : R_s = 0}. In the filtration G_t ≡ F_{g_µ(t)} of the zeros of the Bessel process R, the stochastic process:
X_t ≡ (t − g_µ(t))^µ
is a submartingale of class (Σ), whose increasing process (A_t) in its Doob-Meyer decomposition can be written explicitly in terms of Euler's gamma function Γ. Recall that µ = 1/2 corresponds to the absolute value of the standard Brownian Motion; thus for µ = 1/2 the above result leads to nothing but the celebrated second Azéma martingale (X_t − A_t, see [11,81]). In this example, X has only negative jumps.
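The first example can be sanity-checked numerically for M = B a standard Brownian motion: by Lévy's classical equivalence, S_t − B_t has the same law as |B_t|, so in particular E[S_1 − B_1] = E|B_1| = sqrt(2/π). A rough sketch (the discretization and sample size are our choices, and the discrete supremum slightly undershoots the true one):

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 20_000, 2_000, 1.0
dt = T / n_steps
B = np.zeros(n_paths)
S = np.zeros(n_paths)            # running supremum S_t = sup_{u<=t} B_u
for _ in range(n_steps):
    B += rng.normal(0.0, np.sqrt(dt), n_paths)
    np.maximum(S, B, out=S)
X = S - B                        # X_1 = S_1 - B_1, of class (Sigma) with A = S
# Levy's equivalence: S_1 - B_1 has the same law as |B_1|
print(X.mean(), np.abs(B).mean())   # both close to sqrt(2/pi) ~ 0.798
```

Both empirical means agree with sqrt(2/π) up to discretization and Monte Carlo error.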
The local submartingales of the class (Σ) have the following nice characterization, based on stochastic calculus:
Theorem 7.7. Let (X_t) be a positive local submartingale. The following are equivalent:
1. the local submartingale (X_t) is of class (Σ);
2. there exists an increasing, adapted and continuous process (C_t) such that for every locally bounded Borel function f, with F(x) ≡ ∫_0^x f(y) dy, the process
F(C_t) − f(C_t) X_t
is a local martingale.
Moreover, in this case, (C_t) is equal to (A_t), the increasing process of X.
Proof. (1) =⇒ (2). First, let us assume that f is C^1, and let us take C_t ≡ A_t. An integration by parts yields:
d(f(A_t) X_t) = f(A_t) dX_t + X_t f′(A_t) dA_t.
Since (dA_t) is carried by the set {t : X_t = 0}, we have X_t f′(A_t) dA_t = 0; we have thus obtained that:
F(A_t) − f(A_t) X_t = −∫_0^t f(A_s) dN_s, (7.3)
and consequently (F(A_t) − f(A_t) X_t) is a local martingale. The general case, when f is only assumed to be locally bounded, follows from a monotone class argument, and the integral representation (7.3) is still valid.
(2) =⇒ (1). First take F(a) = a; we then obtain that C_t − X_t is a local martingale. Hence the increasing process of X in its Doob-Meyer decomposition is C, and C = A. Next, we take F(a) = a^2, and we get that (A_t^2 − 2 A_t X_t) is a local martingale; comparing with the integration by parts formula, we must have:
∫_0^t X_s dA_s = 0.
Thus dA_s is carried by the set of zeros of X.
Now, we state and prove the so-called Doob maximal identity, which is obtained as an easy application of Doob's optional stopping theorem, but which has many nice and deep applications (see [63]).
Lemma 7.8 (Doob's maximal identity). Let (M_t) be a positive continuous local martingale with M_0 = x > 0 and lim_{t→∞} M_t = 0. If we note S_t ≡ sup_{u≤t} M_u, then, since S is continuous, for any a > 0 we have:
P[S_∞ > a] = (x/a) ∧ 1. (7.4)
Hence, x/S_∞ is a uniform random variable on (0, 1).
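A quick numerical check of (7.4), taking M_t = exp(B_t − t/2), a positive continuous local martingale with M_0 = 1 and M_t → 0 a.s.; the horizon, step size and sample size are arbitrary simulation choices, and the discrete-time supremum slightly undershoots the true one:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, dt, T = 10_000, 0.002, 40.0
n_steps = int(T / dt)
# Track W_t = B_t - t/2 and its running maximum, so that
# S_inf = sup_t M_t = exp(sup_t W_t) for M_t = exp(W_t)
W = np.zeros(n_paths)
W_max = np.zeros(n_paths)
for _ in range(n_steps):
    W += rng.normal(0.0, np.sqrt(dt), n_paths) - 0.5 * dt
    np.maximum(W_max, W, out=W_max)
S_inf = np.exp(W_max)        # supremum of M (truncated at the horizon T)
U = 1.0 / S_inf              # by (7.4), approximately Uniform(0,1)
print(U.mean())              # close to 1/2
print(np.log(S_inf).mean())  # log S_inf is then standard exponential: mean ~ 1
```

The last printed quantity anticipates Azéma's result further below: log S_∞ is standard exponential whenever x/S_∞ is uniform with x = 1.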

Moreover, for any stopping time T, noting S^T ≡ sup_{u≥T} M_u, we have:
P[S^T > a | F_T] = (M_T/a) ∧ 1. (7.5)
Hence M_T/S^T is also a uniform random variable on (0, 1), independent of F_T.
Proof. Formula (7.5) is a consequence of (7.4) applied to the martingale (M_{T+u})_{u≥0} and the filtration (F_{T+u})_{u≥0}. Formula (7.4) itself is obvious when a ≤ x; for a > x, it is obtained by applying Doob's optional stopping theorem to the local martingale (M_{t∧T_a}), where T_a = inf{u ≥ 0 : M_u ≥ a}.
Now let us mention a result of Knight ([46,47]) which motivated our study:
Proposition 7.9 (Knight [47]). Let (B_t) denote a standard Brownian Motion, and S its supremum process. Then, for ϕ a nonnegative Borel function, we have:
P[S_t − B_t < ϕ(S_t), for all t ≥ 0] = exp(−∫_0^∞ dx/ϕ(x)).
Furthermore, if we let T_x denote the stopping time T_x ≡ inf{t : B_t = x}, then, for any nonnegative Borel function ϕ, a similar identity holds for the process stopped at T_x. Now, we give a more general result, which can also be applied to some discontinuous processes, such as the one parameter generalizations of Azéma's second submartingale:
Theorem 7.10 ([61]). Let X be a local submartingale of the class (Σ), with only negative jumps, such that lim_{t→∞} A_t = ∞. Define (τ_u), the right continuous inverse of A:
τ_u ≡ inf{t : A_t > u}.
Let ϕ : R_+ → R_+ be a Borel function. Then, we have the following estimates:
P[∃ t ≥ 0 : X_t > ϕ(A_t)] = 1 − exp(−∫_0^∞ dx/ϕ(x)), (7.6)
and, for every u ≥ 0,
P[∃ t ≥ τ_u : X_t > ϕ(A_t)] = 1 − exp(−∫_u^∞ dx/ϕ(x)).
Proof. The proof is based on Theorem 7.7 and Lemma 7.8. We shall first prove equation (7.6), and for this we first note that we can always assume that 1/ϕ is bounded and integrable. Indeed, let us consider the event
∆_ϕ ≡ {∃ t ≥ 0 : X_t > ϕ(A_t)}.
Now, if (ϕ_n)_{n≥1} is a decreasing sequence of functions with limit ϕ, then the events (∆_{ϕ_n}) are increasing, and ∪_n ∆_{ϕ_n} = ∆_ϕ. Hence, by approximating ϕ from above, we can always assume that 1/ϕ is bounded and integrable.

Now, let
F(x) ≡ exp(−∫_x^∞ dz/ϕ(z));
its Lebesgue derivative f is given by: f(x) = F(x)/ϕ(x). By Theorem 7.7, the process
M_t ≡ 1 − F(A_t) + f(A_t) X_t, t ≥ 0,
is a positive local martingale (whose supremum is continuous, since (M_t) has only negative jumps), with M_0 = 1 − exp(−∫_0^∞ dz/ϕ(z)). But since (dA_t) is carried by the zeros of X, and since τ_u corresponds to an increase time of A, we have X_{τ_u} = 0. Consequently,
M_{τ_u} = 1 − F(u).
Now let us note that if, for a given t_0 < ∞, we have X_{t_0} > ϕ(A_{t_0}), then we must have:
M_{t_0} = 1 − F(A_{t_0}) + f(A_{t_0}) X_{t_0} > 1,
and hence we easily deduce that:
P[∃ t ≥ 0 : X_t > ϕ(A_t)] = P[sup_t M_t > 1] = M_0 = 1 − exp(−∫_0^∞ dz/ϕ(z)),
where the last equality is obtained by an application of Doob's maximal identity (Lemma 7.8).
To obtain the second identity of the Theorem, it suffices to replace ϕ by the function ϕ_u defined as: ϕ_u(x) ≡ ϕ(u + x).
Remark 7.11. The estimate of Knight is a consequence of Theorem 7.10, with X_t = S_t − B_t and A_t = S_t. For applications of Theorem 7.10 to the processes ((t − g_µ(t))^µ) introduced in Remark 7.6 and to the Skorokhod stopping problem, see [61].

General random times, their associated σ-fields and Azéma's supermartingales
The role of stopping times in Probability Theory is fundamental, and there are myriad applications of the optional stopping theorems (2.1) and (2.2). However, it often happens that one needs to work with random times which are not stopping times: for example, in mathematical finance, in the modeling of default times (see [30] or [40] for an account and more references) or in insider trading models ([33]); in Markov process theory (see [28]); in the characterization of the set of zeros of continuous martingales ([12]); in path decompositions of some diffusions (see [73], [57,58], [42] or [63]); in the study of Strong Brownian Filtrations (see [18]); etc. One of the aims of this essay is to go beyond the classical (yet important) concept of stopping times. One of the main tools for studying random times which are not stopping times is the theory of progressive enlargements (or expansions) of filtrations. This section is devoted to important definitions and results from the general theory of stochastic processes which are useful to develop the theory of progressive enlargements of filtrations. We first give the definitions of the BMO and H^1 spaces which we shall use in the sequel. For more details, the reader can refer to [27].
Let (Ω, F, (F_t)_{t≥0}, P) be a filtered probability space. We recall that H^1 is the Banach space of (càdlàg) (F_t)-martingales (M_t) such that
‖M‖_{H^1} ≡ E[sup_{t≥0} |M_t|] < ∞.
The space of BMO martingales is the Banach space of (càdlàg) square integrable (F_t)-martingales (Y_t) which satisfy
‖Y‖^2_{BMO} ≡ sup_T ‖E[(Y_∞ − Y_{T−})^2 | F_T]‖_∞ < ∞,
where T ranges over all (F_t)-stopping times. It is a very nice result of Meyer that the dual of the space H^1 is the space BMO ([52]). There are essentially two classes of random times which are not stopping times that have been studied in detail: ends of optional or predictable sets (see for example [2] or [28]), and pseudo-stopping times ([59]). In this essay, we shall always write L, instead of ρ, for the end of an optional or predictable set.

Arbitrary random times and some associated sigma fields
Indeed, these random times, as will be clear in the sequel, have many interesting properties of their own, and hence noting them differently will avoid confusion. Pseudo-stopping times have been discovered only recently; they were introduced in [59], following Williams [74]:
Definition ([59]). A random time ρ is an (F_t) pseudo-stopping time if, for every (F_t)-martingale (M_t) in H^1,
E[M_ρ] = E[M_0]. (8.1)
It is equivalent to assume that (8.1) holds for bounded martingales, since these are dense in H^1. It can also be proved that (8.1) then also holds for all uniformly integrable martingales (see [59]).
We shall in the sequel give a characterization of pseudo-stopping times, but for now we indicate immediately that a class of pseudo-stopping times with respect to a filtration (F_t), which are not in general (F_t) stopping times, may be obtained by considering stopping times with respect to a larger filtration (G_t) such that (F_t) is immersed in (G_t), i.e. every (F_t) martingale is a (G_t) martingale. This situation is described in ([22]) and referred to there as the (H) hypothesis (it is discussed in more detail in subsection 9.3). Here is a well known example: let B_t = (B^1_t, . . . , B^d_t) be a d-dimensional Brownian motion, and R_t = |B_t|, t ≥ 0, its radial part; it is well known that (R_t ≡ σ{R_s, s ≤ t}), the natural filtration of R, is immersed in (B_t ≡ σ{B_s, s ≤ t}, t ≥ 0), the natural filtration of B. Thus any (B_t) stopping time which is not an (R_t) stopping time provides an example of an (R_t) pseudo-stopping time which is not a stopping time. Recently, D. Williams [74] showed that, with respect to the filtration (F_t) generated by a one dimensional Brownian motion (B_t)_{t≥0}, there exist pseudo-stopping times ρ which are not (F_t) stopping times. D. Williams' example is the following: let T_1 ≡ inf{t : B_t = 1} and σ ≡ sup{t < T_1 : B_t = 0}; then ρ, the time at which (B_t) attains its maximum on [0, σ], is a pseudo-stopping time.
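The immersion mechanism above can be illustrated with the simplest immersed pair, an independent enlargement: take F the filtration of a Brownian motion B1, G the filtration of the pair (B1, B2) with B2 an independent Brownian motion, and ρ the first exit time of B2 from [−1, 1]. All names below are ours; ρ is a G-stopping time, hence an F-pseudo-stopping time, and here independence makes the identities E[B1_ρ] = 0 and E[(B1_ρ)^2 − ρ] = 0 elementary:

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, dt, T = 20_000, 0.005, 20.0
n_steps = int(T / dt)
B1 = np.zeros(n_paths)
B2 = np.zeros(n_paths)
rho = np.full(n_paths, T)          # exit time of B2 from [-1, 1]
B1_rho = np.zeros(n_paths)         # B1 sampled at rho
stopped = np.zeros(n_paths, dtype=bool)
t = 0.0
for _ in range(n_steps):
    B1 += rng.normal(0.0, np.sqrt(dt), n_paths)
    B2 += rng.normal(0.0, np.sqrt(dt), n_paths)
    t += dt
    hit = (~stopped) & (np.abs(B2) >= 1.0)
    B1_rho[hit] = B1[hit]
    rho[hit] = t
    stopped |= hit
print(B1_rho.mean())                      # ~ 0: pseudo-stopping identity for B1
print((B1_rho ** 2).mean(), rho.mean())   # both ~ E[rho] = 1
```

The same identities would fail for a generic random time that looks into the future of B1 itself.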
Now, we give the definitions of some sigma fields associated with arbitrary random times, following Chung and Doob ([23]):
Definition 8.4. Three classical σ-fields associated with a filtration (F_t) and any random time ρ are:
F_{ρ+} = σ{z_ρ : (z_t) any progressively measurable process};
F_ρ = σ{z_ρ : (z_t) any optional process};
F_{ρ−} = σ{z_ρ : (z_t) any predictable process}.
Remark 8.5. As usual, when dealing with predictable processes on [0, ∞], we assume that there is a sigma field F_{0−} = F_0 in the filtration (F_t).
Remark 8.6. When ρ is a stopping time, we have F_ρ = F_{ρ+}, and the definitions of F_ρ and F_{ρ−} coincide of course with the usual definitions of the sigma fields associated with a stopping time.
Remark 8.7. In general, F_{ρ+} ≠ F_ρ; for example, take ρ to be the last time before 1 that a standard Brownian motion is equal to zero; then the sign of the excursion between ρ and 1 is F_{ρ+} measurable and orthogonal to F_ρ (see [69], [80] or [28]). We shall have a nice discussion about the differences between these two sigma fields, related to Brownian filtrations, at the end of this section.
One must be very careful when comparing two such sigma fields. For example, ρ ≤ ρ′ does not necessarily imply that F_ρ ⊂ F_{ρ′}. However, we have the following useful result ([28], p. 142): let ρ and ρ′ be two random times such that ρ ≤ ρ′. If ρ is measurable with respect to F_{ρ′} (resp. F_{ρ′−}), then F_ρ ⊂ F_{ρ′} (resp. F_{ρ−} ⊂ F_{ρ′−}). The previous assumption is always satisfied if ρ is the end of an optional (resp. predictable) set.
When dealing with arbitrary random times, one often works under one of the following conditions:
• Assumption (A): ρ avoids every (F_t) stopping time T, i.e. P[ρ = T] = 0;
• Assumption (C): all (F_t)-martingales are continuous (e.g. the Brownian filtration).
When we refer to assumption (CA), this will mean that both the conditions (C) and (A) hold.
Lemma 8.9. Under the condition (A), we have F_{ρ−} = F_ρ.
There is also another important family of random times, called honest times, which in fact coincide with the ends of optional sets. Recall that a random time L is honest if, for every t > 0, there exists an F_t-measurable random variable L_t such that L = L_t on {L < t}. Every stopping time is an honest time (take L_t ≡ L ∧ t). There are also examples of honest times which are not stopping times. For example, let X be an adapted and continuous process and set: X*_t = sup_{s≤t} X_s, X*_∞ = sup_{s≥0} X_s. Then the random variable L = inf{s : X_s = X*_∞} is honest. Indeed, on the set {L < t}, we have L = inf{s : X_s = X*_t}, which is F_t measurable. Honest times can thus be characterized as the ends of optional sets (see [41], [28], p. 137, or [68], p. 373). We postpone examples to the next section, where we are able to give more details.

Azéma's supermartingales and dual projections associated with random times
A few processes play a crucial role in the study of arbitrary random times:
• the (F_t) supermartingale
Z^ρ_t = P[ρ > t | F_t],
chosen to be càdlàg, associated to ρ by Azéma ([2]);
• the (F_t) dual optional and predictable projections of the process 1_{ρ≤t}, denoted respectively by A^ρ_t and a^ρ_t;
• the càdlàg martingale
µ^ρ_t = E[A^ρ_∞ | F_t] = A^ρ_t + Z^ρ_t,
which is in BMO(F_t) (see [28] or [81]).
Lemma 8.13. If ρ avoids every (F_t) stopping time (i.e. condition (A) is satisfied), then A^ρ_t = a^ρ_t is continuous. Under condition (C), A^ρ is predictable (recall that we proved that under condition (C) the predictable and optional sigma fields are equal), and consequently A^ρ = a^ρ.
Under conditions (CA), Z ρ is continuous.
First, we give a result which is not so well known and which may turn out to be useful in the modeling of default times. When studying random times which are not stopping times, one usually makes the assumption (A); consequently, in the sequel, we make the assumption (A) unless stated otherwise. The reader can refer to [28,42] for more general results.

The case of honest times
We now concentrate on honest times, which are the best known random times after stopping times. We state a very important result of Azéma, which has many applications and which is very useful in the theory of progressive enlargements of filtrations:
Theorem 8.15 (Azéma [2]). Let L be an honest time that avoids (F_t) stopping times, and let
Z^L_t = µ_t − A_t
denote its Doob-Meyer decomposition. Then A_∞ follows the exponential law with parameter 1, and the measure dA_t is carried by the set {t : Z_t = 1}. Moreover, A does not increase after L, i.e. A_L = A_∞.
Finally, we also have the identity A_t = A_{t∧L}.
Let us now give an example. Consider again a Bessel process of dimension 2(1 − µ), starting from 0. Set
g_µ ≡ sup{t ≤ 1 : R_t = 0},
and more generally, for T > 0 a fixed time,
g_µ(T) ≡ sup{t ≤ T : R_t = 0}.
Proposition 8.16. Let µ ∈ (0, 1), and let (R_t) be a Bessel process of dimension δ = 2(1 − µ). Then the supermartingale Z^{g_µ(T)}_t = P[g_µ(T) > t | F_t] can be computed explicitly.
Proof. We have:
Z^{g_µ(T)}_t = P[g_µ(T) > t | F_t] = P[H_0 < T − t | R_t],
where (using the Markov property) H_0 is the first time when a Bessel process of dimension δ, starting from R_t (we call its law P), hits 0. Thus, if P^{(−ν)} denotes the law of a Bessel process of parameter −ν, starting from 0, the problem reduces to the law of L_y ≡ sup{t : R_t = y}, which is given by an explicit density in t (8.3).
Now, the time reversal property for Bessel processes ([21], p. 70, or [69]) allows us to rewrite (8.3) (recall µ = −ν), and the desired result is obtained by a straightforward change of variables in the resulting integral.
Remark 8.17. The previous proof can be applied mutatis mutandis to obtain similar formulas.
Remark 8.18. It can be easily deduced from Proposition 8.16 that the dual predictable projection (A^{g_µ}_t) of 1_{(g_µ≤t)} is explicitly computable: indeed, this is a consequence of Itô's formula applied to Z^{g_µ}_t, together with the fact that N_t ≡ R_t^{2µ} − L_t is a martingale and that (dL_t) is carried by {t : R_t = 0}.
When µ = 1/2, R_t can be viewed as |B_t|, the absolute value of a standard Brownian Motion. Thus, we recover as a particular case of our framework the celebrated example of the last zero before 1 of a standard Brownian Motion (see [42], p. 124, or [81] for more references). Then:
Corollary 8.19. Let g ≡ sup{t ≤ 1 : B_t = 0}. Then the associated supermartingale Z^g and dual predictable projection A^g are given by the formulas of Proposition 8.16 and Remark 8.18 with µ = 1/2.
Proof. It suffices to take µ = 1/2 in Proposition 8.16.
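In this Brownian case, the law of g itself is the classical arcsine law, P[g ≤ t] = (2/π) arcsin(√t), whose mean and median are both 1/2. A rough simulation (grid sizes are our choices; we locate g through the last sign change on the grid):

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps = 10_000, 1_000
dt = 1.0 / n_steps
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = dB.cumsum(axis=1)
# a sign change between consecutive grid points brackets a zero of B
crossings = B[:, :-1] * B[:, 1:] < 0
has = crossings.any(axis=1)
rev = np.argmax(crossings[:, ::-1], axis=1)
last = (n_steps - 2) - rev                 # index of the last sign change
g = np.where(has, (last + 1) * dt, 0.0)    # approximate last zero before 1
print(g.mean())           # arcsine law: mean 1/2
print((g <= 0.5).mean())  # arcsine law: median 1/2
```

Paths with no detected crossing (those leaving 0 immediately on the grid) are assigned g = 0, which is consistent with the arcsine mass near 0.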
Corollary 8.20. The variable A^{g_µ}_∞ is exponentially distributed with expectation 1; consequently, its law does not depend on µ.
Proof. The random time g_µ is honest by definition (it is the end of a predictable set). It also avoids stopping times, since A^{g_µ}_t is continuous (this can also be seen as a consequence of the strong Markov property for R and the fact that 0 is instantaneously reflecting). Thus the result of the corollary is a consequence of Remark 8.18 following Proposition 8.16 and Theorem 8.15.
Given an honest time L, it is not in general easy to compute its associated supermartingale Z^L. Hence it is important (in view of the theory of progressive enlargements of filtrations) to have at one's disposal characterizations of Azéma's supermartingales which also provide a way to compute them explicitly. We will give two results in this direction, borrowed from [63] and [60].
Let (N_t)_{t≥0} be a continuous local martingale such that N_0 = 1 and lim_{t→∞} N_t = 0. Let S_t ≡ sup_{u≤t} N_u. We consider the honest time:
g ≡ sup{t ≥ 0 : N_t = S_∞}.
Proposition 8.21 ([63]). Consider the supermartingale Z_t ≡ P[g > t | F_t].
1. In our setting, the formula:
Z_t = N_t / S_t
holds.
2. The Doob-Meyer additive decomposition of (Z_t) is:
Z_t = (1 + ∫_0^t dN_u/S_u) − log S_t. (8.5)
The above proposition gives a large family of examples. In fact, quite remarkably, every supermartingale associated with an honest time is of this form. More precisely:
Theorem 8.22 ([63]). Let L be an honest time. Then, under the conditions (CA), there exists a continuous and nonnegative local martingale (N_t)_{t≥0}, with N_0 = 1 and lim_{t→∞} N_t = 0, such that:
Z^L_t = P[L > t | F_t] = N_t / S_t.
We shall now outline a nontrivial consequence of Theorem 8.22. In [7], the authors are interested in giving explicit examples of dual predictable projections of processes of the form 1_{L≤t}, where L is an honest time. Indeed, these dual projections are natural examples of increasing injective processes (see [7] for more details and references). With Theorem 8.22, we have a complete characterization of such projections:
Corollary 8.23. Assume that the assumption (C) holds, and let (C_t) be an increasing process. Then C is the dual predictable projection of 1_{g≤t}, for some honest time g that avoids stopping times, if and only if there exists a continuous local martingale (N_t) in the class C_0 such that:
C_t = log S_t.
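Proposition 8.21 can be probed numerically with N_t = 1/R_t, for R a three dimensional Bessel process started from 1, a classical positive local martingale tending to 0. Then sup_t N_t = 1/inf_t R_t, so by Doob's maximal identity inf_t R_t should be uniform on (0, 1), and A_∞ = log S_∞ should be standard exponential, in agreement with Azéma's Theorem 8.15. A rough sketch (horizon and step size are our choices; the discretization biases the minimum slightly upward):

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, dt, T = 10_000, 0.004, 40.0
n_steps = int(T / dt)
# realize R as the norm of a 3-d Brownian motion started at (1, 0, 0)
X = np.zeros((n_paths, 3))
X[:, 0] = 1.0
R_min = np.ones(n_paths)
sqdt = np.sqrt(dt)
for _ in range(n_steps):
    X += rng.normal(0.0, sqdt, size=(n_paths, 3))
    np.minimum(R_min, np.sqrt((X ** 2).sum(axis=1)), out=R_min)
print(R_min.mean())            # inf_t R_t ~ Uniform(0,1): mean ~ 1/2
# -log(inf R) = log sup N tends to a standard exponential as dt -> 0
print((-np.log(R_min)).mean())
```

The first moment matches the uniform prediction well; the logarithm is more sensitive to the discretization near the minimum, so it only approaches 1 as the step size decreases.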

Now let us give some examples.
Example 8.24. Take N_t = B_{t∧T_0}, where (B_t)_{t≥0} is a Brownian Motion starting at 1, stopped at T_0 ≡ inf{t : B_t = 0}. Then Proposition 8.21 applies with S_t = sup_{u≤t∧T_0} B_u.
Example 8.25. Take N_t = exp(2ν B_t − 2ν^2 t), where (B_t) is a standard Brownian Motion, and ν > 0. We have N_0 = 1 and lim_{t→∞} N_t = 0; consequently, the associated honest time g is the time at which the drifted Brownian Motion (B_t − νt) attains its overall maximum.
Example 8.26. Let (R_t) be a transient diffusion with a scale function s which is negative and increasing, with lim_{x→∞} s(x) = 0. Then, under the law P_x, x > 0, the local martingale N_t = s(R_t)/s(x), t ≥ 0, satisfies the required conditions of Proposition 8.21, and the supermartingale associated with the last passage time at a level y can be computed explicitly.
Theorem 8.22 is a multiplicative characterization; now we shall give an additive one.
Theorem 8.27 ([60]). Again, we assume that the conditions (CA) hold. Let (X_t) be a submartingale of the class (Σ_c D) satisfying lim_{t→∞} X_t = 1, and let g ≡ sup{t : X_t = 0}. Then (X_t) is related to the Azéma supermartingale associated with g in the following way:
P[g > t | F_t] = 1 − X_t.
Example 8.28. Let (B_t) be a standard Brownian Motion, T_1 ≡ inf{t : B_t = 1}, and g ≡ sup{t < T_1 : B_t = 0}. The submartingale X_t = B^+_{t∧T_1} satisfies the conditions of Theorem 8.27 (its increasing process is (1/2) ℓ_{t∧T_1}, where (ℓ_t) is the local time of B at 0), and hence:
P[g > t | F_t] = 1 − B^+_{t∧T_1}.
This example plays an important role in the celebrated Williams path decomposition for the standard Brownian Motion on [0, T_1]. One can also consider T_{±1} ≡ inf{t ≥ 0 : |B_t| = 1} and τ ≡ sup{t < T_{±1} : B_t = 0}; |B_{t∧T_{±1}}| satisfies the conditions of Theorem 8.27, and hence:
P[τ > t | F_t] = 1 − |B_{t∧T_{±1}}|.
Example 8.29. Let (Y_t) be a real continuous recurrent diffusion process, with Y_0 = 0. Then, from the general theory of diffusion processes, there exists a unique continuous and strictly increasing function s, with s(0) = 0, lim_{x→+∞} s(x) = +∞, lim_{x→−∞} s(x) = −∞, such that s(Y_t) is a continuous local martingale. One then easily notes that a submartingale built from s(Y_t) is a local submartingale of the class (Σ_c) which satisfies the hypotheses of Theorem 8.27, so that the supermartingales of the last passage times of Y can again be computed.
A. Nikeghbali/The general theory of stochastic processes 391
Finally, under the law P_x, for any x > 0, the local martingale (M_t = −s(R_t)) satisfies the conditions of the previous example, and for 0 ≤ x ≤ y, the associated dual predictable projection can be expressed through (L^{s(y)}_t), the local time of s(R) at s(y). This last formula was the key point for deriving the distribution of g_y in [67], Theorem 6.1, p. 326.

The case of pseudo-stopping times
In this paragraph, we give some characteristic properties and some examples of pseudo-stopping times. We do not assume here that condition (A) holds, but we assume that P[ρ = ∞] = 0.
Theorem 8.32 ([59]). The random time ρ is a pseudo-stopping time if and only if A^ρ_∞ = 1 a.s.
Proof. We have, for any bounded (F_t)-martingale (M_t):
E[M_ρ] = E[∫_0^∞ M_u dA^ρ_u] = E[M_∞ A^ρ_∞].
Hence E[M_ρ] = E[M_0] for every bounded martingale if and only if E[M_∞ (A^ρ_∞ − 1)] = 0 for every bounded M_∞, and the announced equivalence follows now easily.
Remark 8.34. More generally, the approach adopted in the proof can be used to solve the equation where the random time ρ is fixed and where the unknown are martingales in H 1 . For more details and resolutions of such equations, see [64].
Corollary 8.35. Under the assumptions of Theorem 8.32, Z^ρ_t = 1 − A^ρ_t is a decreasing process. Furthermore, if ρ avoids stopping times, then (Z^ρ_t) is continuous.
Proof. The first assertion follows from the fact that Z^ρ_t = E[A^ρ_∞ | F_t] − A^ρ_t = 1 − A^ρ_t, with (A^ρ_t) increasing; the continuity assertion follows from Lemma 8.13.
Remark 8.36. In fact, we shall see in the next section that, under condition (C), ρ is a pseudo-stopping time if and only if (Z^ρ_t) is a predictable decreasing process.
For honest times, Azéma proved that A_L follows the standard exponential law. For pseudo-stopping times, we have:
Proposition 8.37 ([59]). For simplicity, we shall write (Z_u) instead of (Z^ρ_u). Under condition (A), for all bounded (F_t) martingales (M_t) and all bounded Borel measurable functions f, one has:
E[M_ρ f(Z_ρ)] = E[M_∞] ∫_0^1 f(x) dx.
Consequently, Z_ρ follows the uniform law on (0, 1).

Proof. Under our assumptions, (A^ρ_t) is continuous, A^ρ_∞ = 1, and Z_u = 1 − A^ρ_u. Hence, with the time change C_x ≡ inf{u : A^ρ_u > x}, which defines a family of stopping times, we obtain:
E[M_ρ f(Z_ρ)] = E[∫_0^∞ M_u f(Z_u) dA^ρ_u] = ∫_0^1 E[M_{C_x}] f(1 − x) dx = E[M_∞] ∫_0^1 f(x) dx,
by optional stopping.
Now, we give a systematic construction for pseudo-stopping times, generalizing D. Williams' example. We assume we are given an honest time L and that the conditions (CA) hold (condition (A) holding with respect to L). Then the following holds: if
ρ ≡ sup{t < L : Z^L_t = inf_{u≤t} Z^L_u}, (8.6)
then the supermartingale associated with ρ is given by
Z^ρ_t = inf_{u≤t} Z^L_u.
As a consequence, ρ is an (F_t) pseudo-stopping time.
Proof. For simplicity, we write Z_t for Z^L_t. (i) Let I_t ≡ inf_{u≤t} Z_u. (ii) Note that for every (F_t) stopping time T, we can compare the two supermartingales at T; consequently, we obtain the desired identity, since (Z^ρ_u) and (Z_u) converge to 0 as u → ∞. We then deduce the result from the optional section theorem.
Now, let us give some examples of pseudo-stopping times. We shall use the supermartingales associated with honest times computed in the previous paragraph, and for simplicity we write Z_t for Z^L_t.
1. First, let us check that we recover the example of D. Williams from the proposition. With the notations of D. Williams' example (L = σ), it is not hard to see (see [70]) that ρ is indeed given by (8.6).
2. Consider (R_t)_{t≥0} a three dimensional Bessel process starting from zero, and its filtration (F_t). With L a last passage time of R, the time defined by (8.6) is an (F_t) pseudo-stopping time; this follows from the explicit form of Z_t in this case, which makes (8.6) equivalent to:
ρ = sup{t < L : R_t = sup_{u≤t} R_u},
and from the above proposition.
3. Similarly, with our previous notations on Bessel processes of dimension 2(1 − µ), µ ∈ (0, 1), one may define ρ by (8.6) with L = g_µ; then ρ is a pseudo-stopping time.

Honest times and Strong Brownian Filtrations
In this paragraph, we shall describe a very nice and very difficult recent result on Strong Brownian Filtrations. We will not go into details; the aim here is just to show a powerful application of random times which are not stopping times. We give references for further details.
Definition. A filtration (F_t) is said to be a Strong Brownian Filtration if it is the natural (completed) filtration of some Brownian Motion, and a Weak Brownian Filtration if there exists some (F_t) Brownian Motion (β_t) such that every (F_t) local martingale (M_t) may be represented as
M_t = M_0 + ∫_0^t m_s dβ_s,
for some (F_t) predictable process (m_t).
Of course, any Strong Brownian Filtration is a Weak Brownian Filtration. The converse is not true. An example of a Weak Brownian Filtration which is not a Strong Brownian Filtration may be obtained by considering Walsh's Brownian Motion, which we now introduce informally. Walsh's Brownian Motion (Z_t) is a Feller process taking values in N rays (half-lines) (I_i; i = 1, . . . , N) of the plane, all meeting at 0. Informally, (Z_t) behaves like a Brownian Motion when it is away from 0, and when it reaches 0, it chooses its ray with equal probability 1/N (more generally, it chooses the i-th ray I_i with probability p_i > 0, with ∑_{i=1}^N p_i = 1). This description is not rigorous, since 0 is regular for itself (with respect to the Markov process (Z_t)), but it may be made rigorous using excursion theory (see [15]).
Moreover, it is shown in [15] that all martingales with respect to the natural filtration (F^Z_t) of Z may be represented as stochastic integrals with respect to the Brownian Motion
β_t ≡ |Z_t| − L_t,
where (L_t) is the local time at 0 of the reflecting Brownian Motion |Z|. It had been an open question for a long time whether or not (F^Z_t) is a Strong Brownian Filtration. The answer was given by Tsirelson in 1997 ([72]):
Theorem 8.41 ([72]). (F^Z_t) is not a Strong Brownian Filtration.
Another proof of this theorem was given in 1998 by Barlow, Émery, Knight, Song and Yor ([18]). We give the logic of their proof. First, it was shown by Barlow, Pitman and Yor ([15]) that, with L a suitable honest time for (F^Z_t), the σ-field F^Z_{L+} is generated by F^Z_L together with the ray on which the excursion straddling L takes place, a random variable taking N values. To obtain Theorem 8.41, Barlow et al. proved in [18] a result conjectured earlier by Barlow:
Theorem 8.42 ([18]). If (F_t) is a Strong Brownian Filtration and L the end of a predictable set, then:
F_{L+} = F_L ∨ σ(A),
with at most one non trivial set A ∈ F_{L+}.
For N ≥ 3 rays, this contradicts the description of F^Z_{L+} above, hence (F^Z_t) cannot be a Strong Brownian Filtration.

The enlargements of filtrations
The aim of this section is to present the theory of enlargements of filtrations and to give some applications. The main question is the following: how are semimartingales modified when considered as stochastic processes in a filtration (G_t) larger than the initial one (F_t) (i.e. for all t ≥ 0, F_t ⊂ G_t)? A first result in this direction (in fact in the reverse direction) is a theorem of Stricker, which we shall sometimes use in the sequel:
Theorem 9.1 (Stricker [71]). Let (F_t) and (G_t) be two filtrations such that for all t ≥ 0, F_t ⊂ G_t. If (X_t) is a (G_t) semimartingale which is (F_t) adapted, then it is also an (F_t) semimartingale.
Given a filtered probability space (Ω, F, (F_t), P), there are essentially two ways of enlarging filtrations:
• initial enlargements, for which G_t = F_t ∨ H, i.e. the new information H is brought in at the origin of time; and
• progressive enlargements, for which G_t = F_t ∨ H_t, i.e. the new information is brought in progressively as the time t increases.
We shall try to characterize situations when every (F t ) semimartingale X remains a (G t ) semimartingale and then find the decomposition of X as a (G t ) semimartingale. This situation is described as the (H ′ ) hypothesis: Definition 9.2. We shall say that the pair of filtrations (F t , G t ) satisfies the (H ′ ) hypothesis if every (F t ) (semi)martingale is a (G t ) semimartingale.
Remark 9.3. In fact it suffices to check that every (F t ) martingale is a (G t ) semimartingale.
When the (H ′ ) hypothesis is not satisfied, we shall try to find some conditions under which an (F t ) martingale is a (G t ) semimartingale.
Of course, the problem does not have a solution in the generality presented above. For the initial enlargement case, we shall deal with the case when H is the sigma field generated by a random variable Z, and for the progressive enlargement case, we shall take H_t = σ(ρ ∧ t), where ρ is a random time, so that (G_t) is the smallest filtration which contains (F_t) and which makes ρ a stopping time. All the results in the sequel originate from the works of Barlow, Jeulin, Jacod and Yor (see [42] or [45] for a complete account and more references; see also [81,68]).

Initial enlargements of filtrations
The theory of initial enlargements of filtrations is better known than that of progressive enlargements. The main results can be found in [42], [45], [81] or [68]. We first give a theoretical result of Jacod with some applications, and then we give a general method for obtaining the decomposition of a local martingale in the enlarged filtration in some situations where Jacod's results do not apply.
Let (Ω, F, (F_t), P) be a filtered probability space satisfying the usual assumptions. Let Z be an F measurable random variable. Define the initially enlarged filtration
G_t ≡ ∩_{ε>0} (F_{t+ε} ∨ σ(Z)).
The conditional laws of Z given F_t, for t ≥ 0, play a crucial role in initial enlargements.
Theorem 9.4 (Jacod's criterion). Let Z be an F measurable random variable and let Q_t(ω, dx) denote the regular conditional distribution of Z given F_t, t ≥ 0. Suppose that for each t ≥ 0, there exists a positive σ-finite measure η_t(dx) on (R, B(R)) such that
Q_t(ω, dx) ≪ η_t(dx), for almost all ω.
Then every (F_t) semimartingale is a (G_t) semimartingale.
Remark 9.5. In fact this theorem still holds for random variables with values in a standard Borel space. Moreover, the existence of the σ-finite measure η t (dx) is equivalent to the existence of one positive σ-finite measure η (dx) such that Q t (ω, dx) ≪ η (dx) and in this case η can be taken to be the distribution of Z.
Now we give classical corollaries of Jacod's theorem.
Corollary 9.6. Let Z be independent of F ∞ . Then every (F t ) semimartingale is a (G t ) semimartingale.
Proof. It suffices (with the notations of Theorem 9.4) to note that Q t (ω, dx) = η (dx), where η (dx) is the law of Z.
Corollary 9.7. Let Z be a random variable taking on only a countable number of values. Then every (F t ) semimartingale is a (G t ) semimartingale.
Proof. If we note η(dx) ≡ ∑_k P(Z = x_k) δ_{x_k}(dx), where δ_{x_k}(dx) is the Dirac measure at x_k, the law of Z, then Q_t(ω, dx) is absolutely continuous with respect to η, with Radon-Nikodym density:
q_t(x_k) = P(Z = x_k | F_t) / P(Z = x_k).

where (M̃_t)_{t≥0} denotes an (F^{σ(A_∞)}_t) local martingale.
Proof. We can first assume that M is an L^2 martingale; the general case follows by localization. Let Λ_s be an F_s measurable set, and take t > s. Then, for any bounded test function G, we have: But from (9.2), we have: It now suffices to notice that (A_t) is constant after L, and that L is the first time when A_∞ = A_t, or in other words (for example, see [28], p. 134): L = inf{t : A_t = A_∞}.
Let us emphasize again that the method we have used here applies to many other situations where the theorems of Jacod do not apply. Each time the relationships we have just mentioned between the quantities λ_t(G), λ̂_t(G), λ_t(dx), λ̂_t(dx), ρ(x, t) hold, the above method and decomposition formula apply. Moreover, the condition (C) can be dropped, and it is enough to have a stochastic integral representation for λ_t(G) (see [63] for a discussion). In the case of enlargement with A_∞, everything is nice, since every (F_t) local martingale M is an (F^{σ(A_∞)}_t) semimartingale. Sometimes an integrability condition is needed, as is shown by the following example.
Example 9.10. Let (B_t) be a standard Brownian motion with natural filtration (F_t), let φ ∈ L²(R_+, ds), and set Z = ∫_0^∞ φ(s) dB_s; let (G_t) denote the initial enlargement of (F_t) with Z. We wish to address the following question: is (B_t) a (G_t) semimartingale?
The above method applies step by step: it is easy to compute λ_t(dx) since, conditionally on F_t, Z is Gaussian, with mean m_t = ∫_0^t φ(s) dB_s and variance σ_t² = ∫_t^∞ φ²(s) ds. Consequently, the absolute continuity requirement (9.1) is satisfied, with:

ρ(x, s) = φ(s) (x − m_s) / σ_s².

But here the arguments in the proof of Theorem 9.9 (replace M with B) do not always work, since the quantities involved there (equations (9.4) to (9.7)) might be infinite; hence we have to impose an integrability condition. For example, if we assume that ∫_0^t (|φ(s)| / σ_s) ds < ∞ for every t, then (B_t) is a (G_t) semimartingale, with canonical decomposition:

B_t = B̃_t + ∫_0^t φ(s) ((Z − m_s) / σ_s²) ds,

where (B̃_t) is a (G_t) Brownian motion. As a particular case, we may take Z = B_{t_0} for some fixed t_0, i.e. φ = 1_{[0,t_0]}. The above formula then becomes:

B_t = B̃_t + ∫_0^{t∧t_0} ((B_{t_0} − B_s) / (t_0 − s)) ds,

where (B̃_t) is a (G_t) Brownian motion. In particular, (B̃_t) is independent of G_0 = σ{B_{t_0}}, so that conditionally on B_{t_0} = y, or equivalently when (B_t, t ≤ t_0) is considered under the bridge law P^{t_0}_{x,y}, its canonical decomposition is:

B_t = B̃_t + ∫_0^t ((y − B_s) / (t_0 − s)) ds, for t < t_0,

where (B̃_t, t ≤ t_0) is now a (P^{t_0}_{x,y}, (F_t)) Brownian motion.
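The bridge decomposition lends itself to a quick numerical sanity check. The following sketch (not from the text; all parameter values are illustrative) simulates the SDE with the enlargement drift (y − X_t)/(t_0 − t) by an Euler scheme, and compares the empirical mean and variance at t = t_0/2 with the Brownian-bridge values x + (t/t_0)(y − x) and t(t_0 − t)/t_0:

```python
import numpy as np

# Euler scheme for dX_t = (y - X_t)/(t0 - t) dt + dW_t, X_0 = x,
# i.e. Brownian motion plus the drift coming from enlargement with B_{t0}.
# Parameter values are illustrative, not taken from the text.
rng = np.random.default_rng(0)

t0, x, y = 1.0, 0.0, 1.0
n_paths, n_steps = 4000, 400
t_end = t0 / 2                    # stop well before t0 (the drift blows up at t0)
dt = t_end / n_steps

X = np.full(n_paths, x)
t = 0.0
for _ in range(n_steps):
    drift = (y - X) / (t0 - t)    # enlargement drift
    X = X + drift * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    t += dt

# Brownian bridge from x to y over [0, t0], evaluated at t = t_end:
mean_th = x + (t_end / t0) * (y - x)   # = 0.5
var_th = t_end * (t0 - t_end) / t0     # = 0.25
print(X.mean(), X.var())
```

With 4000 paths, the Monte Carlo error on the mean is about 0.008, so agreement to roughly two decimal places is expected.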
Example 9.11. For more examples of initial enlargements using this method, see the forthcoming book [50].

Progressive enlargements of filtrations
The theory of progressive enlargements of filtrations was originally motivated by a paper of Millar [57] on random times and decomposition theorems. It was first developed, independently, by Barlow [13] and Yor [77], and further developed by Jeulin and Yor [43] and Jeulin [41, 42]. For further developments and details, the reader can also refer to [45], which is written in French, or to [81, 50] or [68], Chapter VI, for an English text.

Let (Ω, F, (F_t), P) be a filtered probability space satisfying the usual assumptions. For simplicity (and because it is always the case with practical examples), we shall assume that condition (CA) holds: all (F_t) martingales are continuous, and the random times we consider avoid (F_t) stopping times. Again, we will have to distinguish two cases: arbitrary random times and honest times. Let ρ be a random time. We enlarge the initial filtration (F_t) with the process (ρ ∧ t)_{t≥0}, so that the new enlarged filtration (F_t^ρ)_{t≥0} is the smallest filtration (satisfying the usual assumptions) containing (F_t) and making ρ a stopping time (i.e. F_t^ρ = K_{t+}^o, where K_t^o = F_t ∨ σ(ρ ∧ t)). Sometimes it is more convenient to introduce the larger filtration (G_t^ρ), which coincides with F_t^ρ before ρ, and which is constant and equal to F_∞ after ρ ([28], p. 186). In the case of an honest time L, one can show that in fact (see [41]):

F_t^L = { A ∈ F_∞ : A = (A_t ∩ {L > t}) ∪ (A'_t ∩ {L ≤ t}) for some A_t, A'_t ∈ F_t }.

In the sequel, we shall only consider the filtrations (G_t^ρ) and (F_t^L): the first one when we study arbitrary random times, and the second one when we consider the special case of honest times. One can show that

∫_0^{t∧ρ} dA_s / Z_{s−}

is the (G_t^ρ) dual predictable projection of 1_{ρ≤t}, where (Z_t) is the Azéma supermartingale of ρ and (A_t) is the (F_t) dual predictable projection of 1_{ρ≤t}. When ρ is a pseudo-stopping time that avoids (F_t) stopping times, we have from Theorem 8.32 that the (G_t^ρ) dual predictable projection of 1_{ρ≤t} is log(1/Z_{t∧ρ}). Now we shall study the properties of ρ as a stopping time in (G_t^ρ).

Proposition 9.18 ([43]).

The decomposition formula before ρ
In general, for an arbitrary random time ρ, an (F_t) local martingale is not a (G_t^ρ) semimartingale. However, we have the following result:

Theorem 9.19 (Jeulin-Yor [43]). Every (F_t) local martingale (M_t), stopped at ρ, is a (G_t^ρ) semimartingale, with canonical decomposition:

M_{t∧ρ} = M̃_t + ∫_0^{t∧ρ} d⟨M, μ⟩_s / Z_{s−},

where (M̃_t) is a (G_t^ρ) local martingale, and μ_t = E[A_∞ | F_t], with (A_t) the (F_t) dual predictable projection of 1_{ρ≤t}.
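With the Jeulin-Yor decomposition written in its standard form M_{t∧ρ} = M̃_t + ∫_0^{t∧ρ} d⟨M, μ⟩_s / Z_{s−}, where μ_t = E[A_∞ | F_t] (see [43]), the following one-line observation (not spelled out in the text) shows how pseudo-stopping times fit in:

```latex
\rho \text{ pseudo-stopping time}
\;\overset{\text{(Thm 8.32)}}{\Longleftrightarrow}\; \mu_t \equiv 1
\;\Longrightarrow\; \langle M, \mu\rangle \equiv 0
\;\Longrightarrow\; M_{t\wedge\rho} = \widetilde{M}_t \ \text{is a } (\mathcal{G}^{\rho}_t) \text{ local martingale.}
```

In other words, the compensating integral vanishes identically exactly when the martingale μ is constant, which is the pseudo-stopping time property.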
We shall now give two applications of this decomposition. The first one is a refinement of Theorem 8.32, which brings new insight into pseudo-stopping times:

Theorem 9.20. The following four properties are equivalent:
1. ρ is an (F_t) pseudo-stopping time;
2. μ_t ≡ 1, a.s.;
3. A_∞ ≡ 1, a.s.;
4. every (F_t) local martingale (M_t) is such that (M_{t∧ρ})_{t≥0} is a (G_t^ρ) local martingale.

where (M̃_t) denotes an (F_t^L) L² martingale. Thus M_t, which is equal to: is (F_t^L) adapted, and hence it is an L²-bounded (F_t^L) martingale.
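The canonical example of an honest time may help fix ideas; the computation is classical (due to Azéma) and is not carried out in the text. Take (B_t) a standard Brownian motion and L = sup{t ≤ 1 : B_t = 0}, the last zero before time 1. The reflection principle gives its Azéma supermartingale in closed form:

```latex
Z_t \;=\; \mathbb{P}\left(L > t \mid \mathcal{F}_t\right)
\;=\; \Phi\!\left(\frac{|B_t|}{\sqrt{1-t}}\right), \qquad t < 1,
\qquad \text{where } \Phi(x) \;=\; \sqrt{\frac{2}{\pi}}\int_x^{\infty} e^{-u^2/2}\,du .
```

Indeed, {L > t} is the event that B returns to 0 during (t, 1], whose conditional probability given F_t depends only on |B_t| and the remaining time 1 − t.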
There are many applications of progressive enlargements of filtrations with honest times, but we do not have the space here to give them. At the end of this section, we shall give a list of applications and references. Nevertheless, we mention an extension of the BDG inequalities obtained by Yor: and C a universal constant.
Remark 9.27. If L is a stopping time, then Φ L = 1. Furthermore, for any continuous increasing function f : R + → R + , we have: with V = (1 + e) 1/2 , where e is an exponential random variable with parameter 1.
Remark 9.28. It is also possible to prove that the BDG inequalities never hold for honest times under (CA) (see [66]).

The (H) hypothesis
In this paragraph, we shall briefly mention the (H) hypothesis, which is very widely used in the models of default times in mathematical finance. Let (Ω, F , P) be a probability space satisfying the usual assumptions. Let (F t ) and (G t ) be two sub-filtrations of F , with F t ⊂ G t .
Theorem 9.29 (Brémaud and Yor [22]). The following are equivalent: 1. (H): every (F_t) martingale is a (G_t) martingale; 2. for all t ≥ 0, the σ-fields G_t and F_∞ are conditionally independent given F_t.
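The implication (2) ⇒ (1) is worth recording; the short argument is not reproduced in the text. If (M_t) is a bounded (F_t) martingale and s ≤ t, then M_t is F_∞ measurable, so the conditional independence in (2) gives:

```latex
\mathbb{E}\left[M_t \mid \mathcal{G}_s\right] \;=\; \mathbb{E}\left[M_t \mid \mathcal{F}_s\right] \;=\; M_s ,
```

hence (M_t) is a (G_t) martingale; the case of a general (F_t) martingale follows by uniform integrability and localization.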
Remark 9.30. We shall also say that (F t ) is immersed in (G t ).
Now let us consider the (H) hypothesis in the framework of the progressive enlargement of some filtration (F_t) with a random time ρ. This problem was studied by Dellacherie and Meyer [25]. Hypothesis (H) is then equivalent to each of the following (see [30] for more references):
1. For all t, the σ-algebras F_∞ and F_t^ρ are conditionally independent given F_t.
2. For all bounded F_∞ measurable random variables F and all bounded F_t^ρ measurable random variables G_t, we have E[F G_t | F_t] = E[F | F_t] E[G_t | F_t].
3. For all bounded F_t^ρ measurable random variables G_t: E[G_t | F_∞] = E[G_t | F_t].
4. For all bounded F_∞ measurable random variables F: E[F | F_t^ρ] = E[F | F_t].

Now we come back to the general situation described in Theorem 9.29. We assume that hypothesis (H) holds. What happens when we make an equivalent change of probability measure?

Proposition 9.31 ([44]). Let Q be a probability measure which is equivalent to P (on F). Then every (F_•, Q) semimartingale is a (G_•, Q) semimartingale.
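A standard example satisfying all of the equivalent conditions above, borrowed from the default-time literature rather than from the text, is the Cox construction: take an (F_t) adapted, continuous, increasing process (Γ_t) with Γ_0 = 0 and Γ_∞ = ∞, let Θ be a standard exponential random variable independent of F_∞, and set:

```latex
\rho \;=\; \inf\{t \ge 0 : \Gamma_t \ge \Theta\}, \qquad \text{so that} \quad
\mathbb{P}\left(\rho > t \mid \mathcal{F}_\infty\right) \;=\; e^{-\Gamma_t} \;=\; \mathbb{P}\left(\rho > t \mid \mathcal{F}_t\right).
```

The equality of these two conditional survival probabilities for every t is precisely the conditional independence required by (H), which explains the popularity of this construction in default-time models.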

Now, define:
R_t := dQ/dP |_{F_t} ; R'_t := dQ/dP |_{G_t} .
Jeulin and Yor [44] prove the following facts: if Y = dQ/dP, then hypothesis (H) holds under Q if and only if: In particular, when dQ/dP is F_∞ measurable, R_t = R'_t, and hypothesis (H) holds under Q. Now let us give a decomposition formula:

Theorem 9.32 (Jeulin-Yor [44]). If (X_t) is an (F_•, Q) local martingale, then the stochastic process: is a (G_•, Q) local martingale.

Concluding remarks on enlargements of filtrations
The theory of enlargements of filtrations has many applications, and it is of course impossible to present them all in such an essay. I shall here simply mention some of them, with references. Of course, the following list is far from complete.