Hopf bifurcation in a mean-field model of spiking neurons

We study a family of non-linear McKean-Vlasov SDEs driven by a Poisson measure, modelling the mean-field limit of a network of generalized Integrate-and-Fire neurons. We give sufficient conditions for the existence of periodic solutions arising through a Hopf bifurcation. Our spectral conditions involve the location of the roots of an explicit holomorphic function. The proof relies on two main ingredients. First, we introduce a discrete-time Markov Chain modelling the phases of the successive spikes of a neuron; the invariant measure of this Markov Chain is related to the shape of the periodic solutions. Second, we use the Lyapunov-Schmidt method to obtain self-consistent oscillations. We illustrate the result with a toy model for which all the spectral conditions can be checked analytically.


Introduction
We consider a mean-field model of spiking neurons. Let f : R_+ → R_+, b : R_+ → R and let N(du, dz) be a Poisson measure on R_+^2 with intensity the Lebesgue measure du dz. Consider the following McKean-Vlasov SDE
$$X_t = X_0 + \int_0^t \big( b(X_u) + J\, \mathbb{E} f(X_u) \big)\, du - \int_0^t \int_0^\infty X_{u-}\, \mathbf{1}_{\{z \le f(X_{u-})\}}\, \mathbf{N}(du, dz). \quad (1)$$
Here, J ≥ 0 is a deterministic constant (it models the strength of the interactions) and the initial condition X_0 is independent of the Poisson measure. Informally, the SDE (1) can be understood in the following sense: between the jumps, X_t solves the scalar ODE Ẋ_t = b(X_t) + J E f(X_t), and X_t jumps to 0 at rate f(X_t). We assume that b(0) ≥ 0 and that X_0 ≥ 0, so that the dynamics lives on R_+. This SDE is non-linear in the sense of McKean-Vlasov because of the interaction term E f(X_t), which depends on the law of X_t. Let ν(t, dx) := L(X_t) be the law of X_t. It solves the following non-linear Fokker-Planck equation, in the sense of measures:
$$\partial_t \nu(t, x) + \partial_x \big[ (b(x) + J r_t)\, \nu(t, x) \big] = -f(x)\, \nu(t, x), \qquad r_t := \int_0^\infty f(x)\, \nu(t, dx), \quad (2)$$
with the boundary condition ∀t > 0, (b(0) + J r_t) ν(t, 0) = r_t.
We study the existence of periodic solutions of this non-linear Fokker-Planck equation and give sufficient conditions for the existence of a Hopf bifurcation around a stationary solution of (2).

Associated particle system
Equations (1) and (2) appeared (see e.g. [7]) as the limit of the following network of neurons. For each N ≥ 1, consider i.i.d. initial potentials (X^{i,N}_0)_{i∈{1,···,N}} with law L(X_0). The càdlàg process (X^{i,N}_t)_{i∈{1,···,N}} ∈ R^N is a PDMP: between the jumps, each X^{i,N}_t solves the ODE Ẋ^{i,N}_t = b(X^{i,N}_t) and "spikes" at rate f(X^{i,N}_t). When a spike occurs, say neuron i spikes at the (random) time τ, its potential is reset to 0 while the others receive a "kick" of size J/N:
$$X^{i,N}_\tau = 0, \qquad X^{j,N}_\tau = X^{j,N}_{\tau-} + \frac{J}{N} \quad \text{for } j \neq i.$$
This completely defines the particle system. As N goes to infinity, a phenomenon of propagation of chaos occurs. In particular, each neuron, say (X^{1,N}_t)_{t≥0}, converges in law to the solution of (1). We refer to [13] for a proof of such a convergence result under stronger assumptions. There is a qualitative difference between the particle system and the solution of the limit equation (1): for a fixed value of N, the particle system is Harris ergodic (see [11], where this result is proved under stronger assumptions on b and f) and so it admits a unique, globally attractive, invariant measure. Thus, there are no stable oscillations when the number of particles is finite. For the limit equation, however, the long time behavior is richer: for fixed values of the parameters there can be multiple invariant measures (see [5] and [6] for some explicit examples) and, as shown here, there can exist periodic solutions (see Figure 1).
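For intuition, the finite-N particle system can be simulated directly. The sketch below uses an Euler scheme with thinning for the jumps; the drift b(x) = 1 − x, the rate f(x) = x² and all numerical parameters are illustrative choices, not taken from the paper.

```python
import numpy as np

def simulate_particles(N=200, J=1.0, T=20.0, dt=1e-3, seed=0,
                       b=lambda x: 1.0 - x, f=lambda x: x ** 2):
    """Euler scheme with thinning for the N-neuron PDMP: between spikes,
    dX^i = b(X^i) dt; neuron i spikes with rate f(X^i); at a spike, X^i
    is reset to 0 and every other neuron receives a kick of size J/N."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=N)          # i.i.d. initial potentials
    spike_count = 0
    for _ in range(int(T / dt)):
        X += b(X) * dt                          # drift between the jumps
        spiking = rng.random(N) < f(X) * dt     # thinning: spike w.p. ~ f(X) dt
        k = int(spiking.sum())
        if k:
            X += (J / N) * k                    # kicks received from the k spikes
            X[spiking] = 0.0                    # reset of the spiking neurons
            spike_count += k
    return X, spike_count

X_final, n_spikes = simulate_particles()
empirical_rate = n_spikes / (200 * 20.0)        # spikes per neuron per unit time
```

For large N, by propagation of chaos, the empirical firing rate should approach E f(X_t) of the limit equation (1); for fixed N the particle system settles to its unique invariant measure, as recalled above.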

Literature
From a mathematical point of view, this model was first introduced in [7], following many earlier considerations by physicists (see for instance [24], [14] and [4] and the references therein). In [13], the existence of solutions to (1), path-wise uniqueness and convergence of the particle system are addressed. The long time behavior of the solution to (1) is studied in [5] in the case of weak interactions: b and f being fixed, the authors prove that there exists a constant J̄ (depending on b and f) such that for all J < J̄, (1) admits a unique globally attractive invariant measure. Finally, in [6], the local stability of an invariant measure is studied with no further assumption on the size of the interactions J. It is proved that the stability of an invariant measure is governed by the location of the roots of some holomorphic function. In [22], the authors study a "metastable" behavior of the particle system. They give examples of drifts b and rate functions f for which the particle system follows the long time behavior of the mean-field model for an exponentially large time, before finally converging to its (unique) invariant probability measure.
The model studied in the current paper belongs to the class of generalized integrate-and-fire neurons, whose most celebrated example is the "fixed threshold" model (see for instance [2], [8] and the references therein). Many of the techniques developed here also apply to this variant. However, additional work would be required to overcome the specific difficulties of the fixed threshold setting; in particular, there are no simple explicit expressions for the kernels introduced in the current paper.
In [10], numerical evidence is given for the existence of a Hopf bifurcation in a closely related setting: the dynamics between the jumps is given, as in [7], by an ODE in which, in addition, the potential of each neuron is attracted to the common mean. This models "electrical synapses", while J E f(X_t) models the chemical synapses. Oscillations with both electrical and chemical synapses are also studied in a different model in [23]. In that work, the mean-field equation is a 2D ODE and so the analysis of the Hopf bifurcation is standard. Finally, oscillations in multi-population settings, for instance with both excitatory and inhibitory neurons, have been extensively studied in neuroscience. For instance, in [9] it is shown that multi-population mean-field Hawkes processes can oscillate. Again, the dynamics is reduced to a finite-dimensional ODE.
It is well-known that the long time behavior of McKean-Vlasov SDEs can differ significantly from that of Markovian SDEs. In [25] and [26], the author gives simple examples of such non-linear SDEs which oscillate. Again, in these examples, the dynamics can be reduced to an ordinary differential equation. To go beyond ODEs, the framework of delay differential equations is often used: see for instance [27] for the study of Hopf bifurcations for such equations, based on the Lyapunov-Schmidt method. In [20, 21], the authors study periodic solutions of a McKean-Vlasov SDE using a slow-fast approach. Another approach is to use center manifold theory to reduce an infinite-dimensional problem to a finite-dimensional manifold: we refer to [18] (see also [15] for an application to some McKean-Vlasov SDEs). Finally, in [19] an abstract framework is presented to study Hopf bifurcations for some classes of regular PDEs. Even though our proof is not based on the PDE (2) (but on the Volterra integral equation described below), we follow the methodology of [19] to obtain our main result.

Regularity of the drift and of the jump function.
We make the following regularity assumptions on b and f .

Remark 3.
If a non-decreasing function f satisfies Assumption 2(a), there exists another constant C_f such that for all x, y ≥ 0, f(x + y) ≤ C_f (1 + f(x) + f(y)). Moreover, it also implies that f grows at most at a polynomial rate: there exist constants C, p > 0 such that for all x ≥ 0, f(x) ≤ C(1 + x^p). Note for instance that, for all p ≥ 1, the function f : x ↦ x^p satisfies Assumption 2. More generally, any continuous function such that f(x) ~ x^p as x → ∞, for some p ≥ 0, satisfies Assumption 2(a).
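For the power function f(x) = x^p of Remark 3, the sub-additivity bound holds with the explicit constant C_f = 2^p; a quick numerical sanity check (illustrative values):

```python
import numpy as np

# For f(x) = x**p (p >= 1), the bound of Remark 3 holds with C_f = 2**p:
# (x + y)**p <= (2 * max(x, y))**p <= 2**p * (x**p + y**p) <= 2**p * (1 + x**p + y**p).
p = 3.0
f = lambda x: x ** p
C_f = 2.0 ** p
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 50.0, 10000)
ys = rng.uniform(0.0, 50.0, 10000)
bound_holds = bool(np.all(f(xs + ys) <= C_f * (1.0 + f(xs) + f(ys))))
```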

The Volterra integral equation
As in [5, 6], we study the long time behavior of the solution of (1) through its "linearized" version: given a non-negative scalar function a ∈ L^∞(R_+; R_+), consider the non-homogeneous linear SDE
$$Y^{a,\nu}_t = Y^{a,\nu}_s + \int_s^t \big( b(Y^{a,\nu}_u) + a_u \big)\, du - \int_s^t \int_0^\infty Y^{a,\nu}_{u-}\, \mathbf{1}_{\{z \le f(Y^{a,\nu}_{u-})\}}\, \mathbf{N}(du, dz), \quad (3)$$
starting with law ν at time s. That is, equation (3) is (1) where the interactions J E f(X_u) have been replaced by the "external current" a_u. For all t ≥ s and for all a ∈ L^∞(R_+; R_+), consider τ^{a,ν}_s, the time of the first jump of Y^{a,ν} after s:
$$\tau^{a,\nu}_s := \inf\{ t \ge s : \; Y^{a,\nu}_t \neq Y^{a,\nu}_{t-} \}.$$
We introduce the spiking rate r^ν_a(t, s), the survival function H^ν_a(t, s) and the density of the first jump K^ν_a(t, s):
$$r^\nu_a(t, s) := \mathbb{E}\, f(Y^{a,\nu}_t), \qquad H^\nu_a(t, s) := \mathbb{P}\big( \tau^{a,\nu}_s > t \big), \qquad K^\nu_a(t, s) := -\partial_t H^\nu_a(t, s). \quad (5)$$
Notation 4. We detail our conventions and notations.
1. We use bold letters a for time-dependent currents and regular Greek letters α for constant currents.
2. When ν = δ_x, we write r^x_a(t, s) := r^{δ_x}_a(t, s).
3. When ν = δ_0, we remove the x superscript and write r_a(t, s) := r^{δ_0}_a(t, s).
4. When a is constant and equal to α ≥ 0, it holds that r^ν_α(t, s) = r^ν_α(t − s, 0), and we simply write r^ν_α(t) := r^ν_α(t, 0).
5. Finally, we extend the function r^ν_a for s > t by setting r^ν_a(t, s) := 0 for all s > t. We use the same conventions for H^ν_a and K^ν_a.

It is known from [5, Prop. 19] (see also [6, Prop. 6] for a shorter proof) that r^ν_a is the solution of a Volterra integral equation (6); moreover, by [5, Lem. 17], one has the complementary identity (7). Following [17], given c_1, c_2 : R² → R two measurable functions, it is convenient to use the notation
$$(c_1 * c_2)(t, s) := \int_s^t c_1(t, u)\, c_2(u, s)\, du,$$
such that (6) and (7) simply write
$$r^\nu_a = K^\nu_a + K_a * r^\nu_a, \qquad 1 = H^\nu_a + H_a * r^\nu_a.$$

The invariant measures of (1).
Let α > 0 and define the probability measure ν^∞_α on R_+, where γ(α) denotes the normalizing factor making ν^∞_α a probability measure. By [5, Prop. 26], ν^∞_α is the unique invariant measure of the linear SDE (3) driven by the constant "external current" a ≡ α. We then define a central quantity in our work, the function α ↦ J(α). It is readily seen that ν^∞_α is an invariant measure of the non-linear equation (1) with J = J(α). Conversely, for a fixed value of J, the number of invariant measures of (1) is the number of solutions α ≥ 0 of the scalar equation J(α) = J. Any such invariant measure is characterized by its corresponding value of α.
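Numerically, γ(α) and J(α) can be approximated for a toy model. The sketch below assumes that γ(α) is the stationary firing rate of the linear SDE (3) with a ≡ α, computed as the inverse of the mean first-jump time of a neuron started at 0, and it assumes the self-consistency relation α = J(α) γ(α) (the constant drive equals J times the stationary rate); b(x) = 1 − x and f(x) = x² are illustrative choices.

```python
import numpy as np

b = lambda x: 1.0 - x          # illustrative drift
f = lambda x: x ** 2           # illustrative rate function

def gamma(alpha, dt=1e-3, T=40.0):
    """gamma(alpha) = 1 / E[first jump time] for the neuron started at 0 and
    driven by the constant current alpha: integrate the flow phi' = b(phi) + alpha
    together with the survival function H(t) = exp(-int_0^t f(phi_s) ds)."""
    phi, surv, mean_jump_time = 0.0, 1.0, 0.0
    for _ in range(int(T / dt)):
        mean_jump_time += surv * dt            # E[tau] = int_0^infty H(t) dt
        surv *= np.exp(-f(phi) * dt)           # H(t + dt) = H(t) e^{-f(phi_t) dt}
        phi += (b(phi) + alpha) * dt
    return 1.0 / mean_jump_time

alphas = np.linspace(0.2, 3.0, 15)
gammas = np.array([gamma(a) for a in alphas])
J_curve = alphas / gammas      # assumed normalization: J(alpha) = alpha / gamma(alpha)
```

Plotting α ↦ J(α), the number of invariant measures for a given J is then read off as the number of crossings with the horizontal line at height J.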

Assumption 5.
Assume that the deterministic flow is not degenerate at σ_α. Recall that r^x_α(t) is given by (5) (with a ≡ α, ν = δ_x and s = 0). Following [6], define the function Θ_α as in (17). By [6, Prop. 19], x ↦ r^x_α(t) is C¹ and integrable with respect to ν^∞_α. Moreover, if the stability criterion (18) holds, then the invariant measure ν^∞_α is locally stable. We refer to [6, Def. 16] for the definition of local stability, in particular for the choice of the distance between two probability measures.
In the second case, the following lemma shows that J′(α_0) = 0. So, at least in the non-degenerate case where J″(α_0) ≠ 0, the function α ↦ J(α) is not strictly monotonic in any neighborhood of α_0: this is a fold bifurcation, which typically leads to bistability (or multistability, etc.).
This ends the proof.
The paper is structured as follows: in Section 2, we state the spectral assumptions and the main result, Theorem 15. We give a layout of its proof at the end of Section 2. In Section 3, we give the proof of Theorem 15. Finally, in Section 4, we give an explicit example of a drift b and a rate function f for which such Hopf bifurcations occur and the spectral assumptions can be analytically checked.

Definition 8.
A family of probability measures (ν(t)) t∈[0,T ] is said to be a T -periodic solution of (1) if

It holds that ν(T ) = ν(0).
In this case, we can obviously extend (ν(t)) for t ∈ R by periodicity. Considering now the solution (X t ) t≥0 of (1) defined for t ≥ 0, it remains true that ν(t) = L(X t ) for any t ≥ 0.
We study the existence of periodic solutions t ↦ L(X_t), where (X_t) is the solution of (1), near a non-stable invariant measure ν^∞_{α_0}. We assume that the stability criterion (18) is not satisfied for α_0:

Assumption 9. Assume that there exist α_0 > 0 and τ_0 > 0 such that

Assumption 10 (Non-resonance condition). Assume that for all n ∈ Z\{−1, 1},

Remark 11 (Local uniqueness of the invariant measure in the neighborhood of α_0). Under Assumption 10, we have J(α_0) Θ_{α_0}(0) ≠ 1 and so, by Lemma 7, it holds that J′(α_0) ≠ 0. Fix J in a neighborhood of J(α_0). Recall that the values of α such that ν^∞_α is an invariant measure of (1) are precisely the solutions of J(α) = J. So, in the neighborhood of α = α_0, the invariant measure of (1) is unique.

Assumption 13.
Assume that α ↦ Z_0(α) crosses the imaginary axis with non-vanishing speed, that is

Remark 14.
Using (25), Assumption 13 is equivalent to the following condition.

Our main result is the following.

The curve passes through the stationary solution at v = 0. The family (ν_v(t)) is continuous and 2πτ_v-periodic; moreover, its mean over one period is α_v. Every other periodic solution in a neighborhood of ν^∞_{α_0} is obtained as a phase-shift of one such ν_v. More precisely, there exist small enough constants ǫ_0, ǫ_1 > 0 (only depending on b, f, α_0 and τ_0) such that if (ν(t))_{t∈R} is any 2πτ-periodic solution of (1) for some value of J > 0 such that

Remark 16.
Given the "periodic current" a_v defined by (26), the shape of the solution is known: ν̄_{a_v}, defined by (53) below, is known explicitly in terms of b, f and a_v.

Notation 17.
For T > 0, we denote by C^0_T the space of continuous and T-periodic functions from R to R, and by C^{0,0}_T the subspace of centered functions.

We now give an outline of the proof of Theorem 15. The proof is divided into two main parts. The first part is devoted to the study of an isolated neuron subject to a periodic external current. That is, given τ > 0 and a ∈ C^0_{2πτ}, we study the jump rate of an isolated neuron driven by a. We give in Section 3.1 estimates on the kernels K_a and H_a. We want to characterize the "asymptotic" jump rate of the neuron driven by this external periodic current, that is, informally, the limit ρ_a(t) of r_a(t, s) as the starting time s goes to −∞. In order to characterize this limit ρ_a, we introduce in Section 3.2 a discrete-time Markov Chain corresponding to the phases of the successive spikes of the neuron driven by a. We prove that this Markov Chain has a unique invariant measure, which is proportional to ρ_a; this serves as a definition of ρ_a. Given this periodic jump rate ρ_a ∈ C^0_{2πτ}, we give in Section 3.3 an explicit description of the associated time-periodic probability densities, denoted (ν̄_a(t))_{t∈[0,2πτ]}. Consequently, finding a 2πτ-periodic solution of (1) is equivalent to finding a ∈ C^0_{2πτ} such that a = Jρ_a.
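The phase Markov Chain of the outline can be illustrated by direct simulation: drive one neuron with a periodic current, record the spike times modulo the period, and histogram the phases. All model choices below (b, f, the current a, the parameters) are illustrative, not taken from the paper.

```python
import numpy as np

b = lambda x: 1.0 - x
f = lambda x: x ** 2
T_per = 2 * np.pi
a = lambda t: 1.0 + 0.5 * np.sin(t)    # T_per-periodic external current

rng = np.random.default_rng(1)
dt, t, x = 2e-3, 0.0, 0.0
phases = []
while len(phases) < 3000:
    x += (b(x) + a(t)) * dt            # drift between jumps
    if rng.random() < f(x) * dt:       # thinning: spike with prob ~ f(x) dt
        phases.append(t % T_per)       # phase phi_i = tau_i mod T of the spike
        x = 0.0                        # reset
    t += dt
phases = np.array(phases[500:])        # discard a burn-in
dens, _ = np.histogram(phases, bins=20, range=(0.0, T_per), density=True)
```

The histogram approximates the invariant measure π_a of the phase chain; by the discussion above, the asymptotic jump rate ρ_a is proportional to it.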
One classical difficulty with Hopf bifurcations is that the period 2πτ itself is unknown: τ varies when the interaction parameter J varies. To address this problem, we make in Section 3.4 a change of time so as to only consider 2π-periodic functions: given d ∈ C^0_{2πτ}, we define for all t ∈ R the 2π-periodic current a(t) := d(τ t). We shall see that this change of time has a simple probabilistic interpretation, obtained by scaling b, f and d appropriately. In Section 3.5, we prove regularity of ρ_{a,τ} with respect to (a, τ), where α > 0 denotes the mean of d over one period. We prove that the mean number of spikes over one period only depends on α; the common value is obtained from the particular case h ≡ 0 and (10).

In the second part of the proof, we find self-consistent periodic solutions using the Lyapunov-Schmidt method. We introduce in Section 3.6 the functional G, defined by (63). The roots of G, described by Proposition 34, correspond to the periodic solutions of (1). For instance, if G(h, α, τ) = 0, we set a(t) := α + h(t/τ); this current a solves (27) with J = J(α) and so it can be used to define a periodic solution of (1). Conversely, to any periodic solution of (1) we can associate a root of G. So Theorem 15 is equivalent to Proposition 34. Sections 3.7, 3.8, 3.9 and 3.10 are then devoted to the proof of Proposition 34. In Section 3.7, we prove that the linear operator D_h G(0, α, τ) can be written using a convolution involving Θ_α, given by (17). We then follow the method of [19, Ch. I.8].
In Section 3.8, we study the range and the kernel of D h G(0, α 0 , τ 0 ): we prove that under the spectral Assumptions 9 and 10, D h G(0, α 0 , τ 0 ) is a Fredholm operator of index zero, with a kernel of dimension two. The problem of finding the roots of G is a priori of infinite dimension (h belongs to C 0,0 2π ). In Section 3.9 we apply the Lyapunov-Schmidt method to obtain an equivalent problem of dimension two. Finally in Section 3.10 we study the reduced 2D-problem.

Proof of Theorem 15
Without risk of confusion, we lighten the notation in the proofs: we no longer use bold letters for small perturbations h of a constant current α_0.

Preliminaries
For an external current a, denote by ϕ^a_{t,s}(x) the flow of the ODE between the jumps, i.e. the solution at time t of ẏ_u = b(y_u) + a_u with y_s = x. By Assumption 1, this ODE has a unique solution. Moreover, the kernels H^ν_a(t, s) and K^ν_a(t, s), defined by (5), have explicit expressions in terms of the flow. In particular, under the assumption (30), s ↦ ϕ^a_{t,s}(0) is strictly decreasing on (−∞, t], for all t. Define then
$$\sigma_a(t) := \lim_{s \to -\infty} \varphi^a_{t,s}(0) \in (0, +\infty].$$
Given d ∈ C^0_T and η > 0, we consider the following open balls of C^0_T:
$$B^T_\eta(d) := \{ a \in C^0_T : \; \|a - d\|_\infty < \eta \}.$$
Proof. Assume first that σ_{α_0} = ∞, and let η_0 be as above, strictly positive by assumption. Then it holds that inf_{t≥0} inf_{x≥0} b(x) + a_t ≥ η_0, and so, letting s tend to −∞, we deduce that σ_a(t) = +∞. Assume now that σ_{α_0} < ∞. Using (15), we apply the implicit function theorem. In addition, we choose η_0 small enough; in particular σ_a(t) < ∞. We prove that this function is right-continuous in t. Fix t ≥ s; because s ↦ ϕ^a_{t,s}(0) is strictly decreasing, and takes value 0 when s = t, we deduce that σ_a(t) > 0. More precisely, let m_0 be defined accordingly; by (15), it holds that m_0 > 0. Moreover, using (34), we conclude. This ends the proof.

Study of the non-homogeneous linear equation
In this section, we study the asymptotic jump rate of an "isolated" neuron driven by a periodic continuous function. Grant Assumptions 1 and 2, and let α_0 > 0 be such that Assumption 5 holds. Let λ_0, η_0 > 0 be given by Lemma 19 and let T > 0. Consider a ∈ B^T_{η_0}(α_0). Following the terminology of [5], we say that a is the "external current". Let r_a be the solution of the Volterra equation r_a = K_a + K_a * r_a. We consider the limit of r_a(t, s) as the starting time s goes to −∞, denoted ρ_a(t). The goal of this section is to show that ρ_a is well defined and to study some of its properties. First, (6) and (7), written for a starting time −kT, involve integrals from −kT to t. Letting k → ∞, we find that ρ_a has to solve
$$\rho_a(t) = \int_{-\infty}^t K_a(t, u)\, \rho_a(u)\, du, \quad (43)$$
together with
$$1 = \int_{-\infty}^t H_a(t, u)\, \rho_a(u)\, du. \quad (44)$$
Note that if ρ_a is a solution of (43), then it automatically holds that the function t ↦ ∫_{−∞}^t H_a(t, s) ρ_a(s) ds is constant (its derivative is null). In Lemma 21 below, we prove that the solutions of equation (43) form a linear space of dimension 1. Consequently, (43) together with (44) has a unique solution: this will serve as the definition of ρ_a.
A probabilistic interpretation of (43) and (44). Let x be a T-periodic solution of (43). For all t ∈ [0, T], the integral over (−∞, t] can be decomposed along the periods. Note that by Lemma 19 we have normal convergence of the corresponding series, with a bound only depending on b, f, α_0, η_0 and λ_0. We deduce that x solves (45). Using that a is T-periodic, we have (46). Moreover, K_a is a probability density, so (47) holds. From (46) and (47), we deduce (48). In view of (48), K^T_a(·, s) can be seen as the transition probability kernel of a Markov Chain acting on the continuous space [0, T]. The interpretation of this Markov Chain is the following. Let (Y^{a,ν}_t)_{t≥0} be the solution of (3), starting at time 0 with law ν and driven by the T-periodic current a. Define (τ_i)_{i≥1} the times of the successive jumps of (Y^{a,ν}_t)_{t≥0}, and set φ_i := τ_i mod T. That is, φ_i is the phase of the i-th jump, while Δ_i is the number of "revolutions" between τ_{i−1} and τ_i. In other words, if one considers that a period is a "lap", Δ_i is the number of times we cross the start line of the lap between two spikes.
The pair (φ_i, Δ_i) is Markov, and in particular (φ_i)_{i≥0} is Markov, with transition probability given by K^T_a. With a slight abuse of notation, we also write K^T_a for the linear operator which maps y ∈ L¹([0, T]) to the function K^T_a(y) : t ↦ ∫_0^T K^T_a(t, s) y(s) ds; the function K^T_a(y) is continuous. Note that for all s ∈ [0, T], K^T_a(T, s) = K^T_a(0, s), and so K^T_a(y) can be extended to a T-periodic function. Altogether, K^T_a(y) ∈ C^0_T. To prove that K^T_a is a compact operator, we use the Weierstrass approximation Theorem: there exists a sequence of polynomial functions (t, s) ↦ P_n(t, s) such that sup_{t,s∈[0,T]} |P_n(t, s) − K^T_a(t, s)| → 0 as n → ∞. For each n ∈ N, the linear operator L¹([0, T]) ∋ y ↦ P_n(y) := (t ↦ ∫_0^T P_n(t, s) y(s) ds) is of finite rank. Moreover, the sequence P_n converges to K^T_a in operator norm, and so K^T_a is a compact operator (as a limit of finite-rank operators, see [1, Ch. 6]).

Lemma 21.
Let a ∈ C 0 T . The Markov Chain (φ i ) i≥0 with transition probability kernel K T a has a unique invariant probability measure π a ∈ C 0 T . Moreover, the solutions of (45) in L 1 ([0, T ]) span a vector space of dimension 1.

Proof.
Step 1: any solution of (45) has a constant sign. Let x ∈ L¹([0, T]) be a solution of (45). Because the kernel K^T_a is strictly positive and continuous on [0, T]², the lower bound (51) holds. We write x⁺ for the positive part of x, x⁻ for its negative part, and define β := min(||x⁺||₁, ||x⁻||₁). To obtain the right-most equality, we used that for all y ∈ L¹([0, T]), y ≥ 0 yields ||K^T_a y||₁ = ||y||₁. But the identity K^T_a(x) = x implies that β = 0, and so either x⁺ or x⁻ is a.e. null.
Step 2: existence and uniqueness of the invariant probability measure.
From (48), we deduce that 1 is an eigenvalue of (K^T_a)′ (its associated eigenvector is 1, the constant function equal to 1). Denoting by N the null space, the Fredholm alternative [1, Th. 6.6] yields the existence of a non-zero element π_a of N(I − K^T_a). By Step 1, π_a can be chosen positive, and by Lemma 20, π_a ∈ C^0_T. Uniqueness follows directly from Step 1: if π_1, π_2 are two invariant probability measures, then x = π_1 − π_2 solves (45) and so it has a constant sign. Because the mass of x is null, we deduce that x = 0.

Remark 22. The estimate (51) is a strong version of Doeblin's condition: it holds that K^T_a(·, s) ≥ ε Unif for all s ∈ [0, T] and some ε > 0, where Unif is the uniform distribution on [0, T]. A classical coupling argument then shows geometric convergence in total variation: for all i ≥ 0, ||L(φ_i) − π_a||_TV ≤ (1 − ε)^i ||L(φ_0) − π_a||_TV, where (φ_i) is the Markov Chain defined by (49) and || · ||_TV denotes the total variation distance between probability measures. This argument provides an alternative proof of the existence and uniqueness of π_a.
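Both the invariant phase measure and the Doeblin contraction can be checked on a discretization of the kernel. The sketch below builds a matrix of transition probabilities between phase bins by integrating, for each starting phase, the survival function of a neuron reset at that phase, then power-iterates for the invariant law and verifies the (typically very conservative) total-variation bound with ε = (number of bins) × min entry. The model is illustrative, as before.

```python
import numpy as np

b = lambda x: 1.0 - x                  # illustrative drift
f = lambda x: x ** 2                   # illustrative rate function
T = 2 * np.pi
a = lambda t: 1.0 + 0.5 * np.sin(t)    # T-periodic external current

# Discretize the phase kernel: P[i, j] ~ P(next spike phase in bin i | spike at phase s_j).
m, dt = 40, 2e-3
P = np.zeros((m, m))
for j in range(m):
    t, x, surv = T * j / m, 0.0, 1.0   # neuron reset at phase s_j
    while surv > 1e-12:                # integrate until the next spike has surely occurred
        x += (b(x) + a(t)) * dt
        P[min(int((t % T) / T * m), m - 1), j] += f(x) * surv * dt
        surv *= 1.0 - f(x) * dt
        t += dt
P /= P.sum(axis=0)                     # absorb the quadrature error

pi = np.full(m, 1.0 / m)               # invariant phase law, by power iteration
for _ in range(500):
    pi = P @ pi
pi /= pi.sum()

# Doeblin minorization P >= eps * Unif (column-wise) gives TV decay <= (1 - eps)^n.
eps = m * P.min()
mu = np.zeros(m)
mu[0] = 1.0                            # start from a Dirac phase
tv, bound = [], []
for n in range(1, 12):
    mu = P @ mu
    tv.append(0.5 * float(np.abs(mu - pi).sum()))
    bound.append((1.0 - eps) ** n)
```

The observed decay of the total variation distance is much faster than the Doeblin bound, which only uses the worst-case minorization constant.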
We define for all θ ∈ R the shift operator S_θ(a) := a(θ + ·).

Corollary 23.
Given a ∈ C^0_T, equations (43) and (44) have a unique solution ρ_a ∈ C^0_T. Moreover, it holds that for all θ ∈ R, ρ_{S_θ(a)} = S_θ(ρ_a).

Proof. Note that there is a one-to-one correspondence between periodic solutions of (43) and solutions of (45). So, by Lemma 21, the solution ρ_a of equations (43) and (44) is ρ_a = π_a / c_a, where π_a is the invariant measure (on [0, T]) of the Markov Chain with transition probability kernel K^T_a, and c_a is the corresponding normalizing constant. Note that c_a is constant in time. Define for all t, s ∈ [0, T]: H^T_a(t, s) := Σ_{k≥0} H_a(t, s − kT). Using the same notation as in (50), we have c_a = H^T_a(π_a). Moreover, the flows satisfy a shift relation, because both sides satisfy the same ODE with the same initial condition at t = s. We deduce from (32) and (33) that S_θ(ρ_a) solves (43) and (44), where the kernels are replaced by K_{S_θ(a)} and H_{S_θ(a)}. By uniqueness, it follows that ρ_{S_θ(a)} = S_θ(ρ_a).

Remark 24. Using that ∫_0^T π_a(s) ds = 1, we find that the average number of spikes over one period [0, T] is 1/c_a. The probabilistic interpretation of c_a is the following: recalling the Markov chain defined by (49), if L(φ_i) = π_a, then c_a = E Δ_{i+1}. In other words, c_a is the expected number of "revolutions" between two successive spikes, assuming the phase of each spike follows its invariant measure π_a. We shall see in Proposition 33 that c_a only depends on the mean of a. Furthermore, for a ≡ α > 0, the invariant measure π_α is uniform, and so for all t, ρ_α(t) = γ(α).
The T-periodic family of probability measures (ν̄_a(t, ·))_t is defined by (53), where ρ_a is the unique solution of the equations (43) and (44). By the change of variables u = β^a_t(x), one obtains, for any non-negative measurable test function g, the identity (54). Note moreover that when a is constant and equal to α > 0 (a ≡ α), (53) matches the definition of the invariant measure ν^∞_α given by (9). The main result of this section (Proposition 26) is that (ν̄_a(t, ·))_t is the unique T-periodic solution of (3).
Proof. Existence. We first prove that ν̄_a(t, ·) is indeed a T-periodic solution. We follow the same strategy as in [5, Prop. 26]. First note that, by (54), one has
Finally, using [5, Prop. 19] and the claim, we deduce that for any non-negative measurable function g
By (54) (with t = 0 and g replaced by x ↦ g(ϕ^a_{t,0}(x)) H^x_a(t, 0)), the second term is equal to
This ends the proof of the existence.
The function ρ is T-periodic. Moreover, for all k ≥ 0, writing the process started at time −kT, (6) and (7) yield the corresponding integral equations. Letting k go to infinity, we deduce that ρ solves (43) and (44). By uniqueness, we deduce that for all t, ρ(t) = ρ_a(t) (and so ρ is continuous). Finally, define τ_t the time of the last spike of Y^{a,ν(0)}_{·,−kT} before t (with the convention that τ_t = −kT if there is no spike between −kT and t).
Consequently, for any non-negative test function g:
and letting again k tend to infinity, we deduce that for all t, ν(t) ≡ ν̄_a(t).

Reduction to 2π-periodic functions
Convention: From now on, we prefer to work with the reduced period τ, such that T =: 2πτ, τ > 0.
Consider d ∈ C 0 2πτ and let a be the 2π-periodic function defined by:

∀t ∈ R, a(t) := d(τ t).
We define ∀t ∈ R, ρ_{a,τ}(t) := ρ_d(τ t), where ρ_d is the unique solution of (43) and (44) (with kernels K_d and H_d). Because ρ_d is 2πτ-periodic, ρ_{a,τ} is 2π-periodic. Note that when a ≡ α is constant, we have ρ_{α,τ} ≡ γ(α). To better understand how ρ_{a,τ} depends on τ, consider (Y^{d,ν}_{t,s}) the solution of (3), starting with law ν and driven by d, and note the scaling relation satisfied for all t ≥ s. In view of this result, we deduce that ρ_{a,τ} solves, using the same operator notation as in (50),
ρ_{a,τ} = K^{2π}_{a,τ}(ρ_{a,τ}),  1 = τ H^{2π}_{a,τ}(ρ_{a,τ}).   (57)
Note that ρ_{·,τ} and ρ_· are linked by (28). Consequently, equations (57) define a unique 2π-periodic continuous function ρ_{a,τ} = π_{a,τ}/c_{a,τ}, where π_{a,τ} is the unique invariant measure of the Markov Chain with transition probability kernel K^{2π}_{a,τ}, and c_{a,τ} is the constant given by c_{a,τ} := τ H^{2π}_{a,τ}(π_{a,τ}).

Regularity of ρ
The goal of this section is to study the regularity of ρ_{a,τ} with respect to a and τ. For η_0 > 0, recall the notation B^{2π}_{η_0}(α_0). The proof of Proposition 28 relies on (59) and on Lemma 31 below, which states that the function (a, τ) ↦ π_{a,τ} is C². Recall Notation 17. Let a ∈ B^{2π}_{η_0} and τ > 0. Because ∫_0^{2π} π_{a,τ}(u) du = 1, the space C^0_{2π} can be decomposed in the following way: C^0_{2π} = Span(π_{a,τ}) ⊕ C^{0,0}_{2π}. We denote by K^{2π}_{a,τ}|_{C^0_{2π}} the restriction of K^{2π}_{a,τ} to C^0_{2π} (recall that the linear operator h ↦ K^{2π}_{a,τ} h is defined for all h ∈ L¹([0, 2π])). Similarly, we denote by I|_{C^0_{2π}} the identity operator on C^0_{2π}. Given a linear operator L, we denote by N(L) its kernel (null-space) and by R(L) its range.
Lemma 29. Grant Assumptions 1 and 2, let α_0 > 0 be such that Assumption 5 holds, and let a ∈ B^{2π}_{η_0}(α_0), where η_0 > 0 is given by Lemma 19. It holds that N(I − K^{2π}_{a,τ}|_{C^0_{2π}}) = Span(π_{a,τ}) and R(I − K^{2π}_{a,τ}|_{C^0_{2π}}) = C^{0,0}_{2π}.

Proof. We proved in Lemma 21 that N(I − K^{2π}_{a,τ}) = Span(π_{a,τ}). It remains to identify the range. The Fredholm alternative [1, Th. 6.6] characterizes the range through the adjoint; in the proof of Lemma 21, it is shown that the null space of the adjoint I − (K^{2π}_{a,τ})′ is spanned by the constant function 1. Finally, using that for h ∈ L¹([0, 2π]) one has K^{2π}_{a,τ} h ∈ C^0_{2π}, one obtains the result for the restrictions to C^0_{2π}.
Proof. We only prove the result for H, the proof for K being similar. Let ǫ_0 > 0 be chosen arbitrarily such that ǫ_0 < τ_0.
Step 1. We introduce the relevant Banach spaces: E denotes the set of continuous functions under consideration. We define the following application Φ. Note that Φ is linear and continuous, so in particular C². So, to prove the result, it suffices to show that (a, τ) ↦ (H^{2π}_{a,τ}(t, s))_{t,s} is C², where H^{2π}_{a,τ}(t, s) is explicitly given by the series (58).
Step 2. Let k ∈ N be fixed. We prove that the corresponding term of the series is C². To proceed, we use the explicit expression of H_{a,τ}(t, s), given by (56). Note that we first have to show that the function (a, τ) ↦ ϕ^{a,τ}_{t,s}(0) ∈ R is C². This follows (see [12, Th. 3.10.2]) from the fact that b : R_+ → R is C², and so the solution of the ODE (56) is C² with respect to a and τ. Moreover, we have an explicit expression for the derivative d/da ϕ^{a,τ}_{t,s}(0) · h, for all h ∈ C^0_{2π}; a similar expression holds for d/dτ ϕ^{a,τ}_{t,s}(0). Using that f is C², we deduce the corresponding regularity. So, proceeding as in the proof of Lemma 19, we deduce the existence of η_0, λ_0, A_0 > 0 (only depending on b, f, α_0, τ_0 and ǫ_0) such that the corresponding uniform bounds hold for all h ∈ C^0_{2π} and for all τ ∈ (τ_0 − ǫ_0, τ_0 + ǫ_0). Similar estimates hold for the second derivative with respect to a and for the first and second derivatives with respect to τ.
Step 3. Using [3, Th. 3.6.1], we deduce that a ↦ (H^{2π}_{a,τ}(t, s))_{t,s} is C¹: the differentiated series again converges normally. Applying again [3, Th. 3.6.1], we prove similarly that a ↦ H^{2π}_{a,τ}(t, s) is C². The same arguments show that τ ↦ H^{2π}_{a,τ}(t, s) is C².
Step 4. It remains to prove that (a, τ) ↦ (H^{2π}_{a,τ}(t, s))_{t,s} ∈ E_0 is C² (we have proved the result for E, not E_0, in the previous step). Let t, s ∈ [0, 2π] be fixed and define the evaluation map E^t_s. The application E^t_s is linear and continuous. Moreover, we have seen that H^{2π}_{a,τ} ∈ E_0. Differentiating with respect to a, we deduce that for all h ∈ C^0_{2π}, the derivative D_a H^{2π}_{a,τ} · h again belongs to E_0, and so D_a H^{2π}_{a,τ} ∈ L(C^0_{2π}, E_0). The same result holds for the second derivative with respect to a and for the first and second derivatives with respect to τ. This ends the proof.
Remark 32. Recall that π_{a,τ} is the unique invariant measure of the Markov Chain having K^{2π}_{a,τ} as transition probability kernel. So, we study the smoothness of the invariant measure with respect to the parameters (a, τ), knowing the smoothness of the transition probability kernel (a, τ) ↦ K^{2π}_{a,τ}. We refer to [16] for such a sensitivity result in the setting of finite discrete-time Markov Chains. Our approach is different and based on the implicit function theorem. In this proof, we consider independent functions a and h (that is, we do not impose a = α_0 + h).
As a first application of this result, we prove that the mean number of spikes of a neuron driven by a periodic input only depends on the mean of the input current.
Proposition 33. Grant Assumptions 1 and 2, and let α_0 > 0 be such that Assumption 5 holds. Let τ_0 > 0 and let η_0 be given by Proposition 28. For every a ∈ B^{2π}_{η_0}(α_0), the constant c_{a,τ_0} only depends on α_0; we denote by c_{α_0} this common quantity. In particular, the mean number of spikes per period only depends on α_0 (which is the mean of the external current a).

Proof. Let a ∈ B^{2π}_{η_0}(α_0). We prove that D_a c_{a,τ_0} · h = 0 for every centered h ∈ C^{0,0}_{2π}. We have c_{a,τ_0} = τ_0 H^{2π}_{a,τ_0}(π_{a,τ_0}). Differentiating with respect to a, one gets D_a c_{a,τ_0} · h = τ_0 (D_a H^{2π}_{a,τ_0} · h)(π_{a,τ_0}) + τ_0 H^{2π}_{a,τ_0}(D_a π_{a,τ_0} · h).
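Proposition 33 can be probed by Monte Carlo: the long-run number of spikes per unit time should coincide for a constant current α_0 and a periodic current with the same mean α_0. The sketch below uses an illustrative model and fixed seeds; agreement is only up to sampling and discretization error.

```python
import numpy as np

b = lambda x: 1.0 - x
f = lambda x: x ** 2

def long_run_rate(a, T_sim=2000.0, dt=2e-3, seed=3):
    """Spikes per unit time of one neuron driven by the current a."""
    rng = np.random.default_rng(seed)
    x, spikes = 0.0, 0
    for i in range(int(T_sim / dt)):
        x += (b(x) + a(i * dt)) * dt       # drift between jumps
        if rng.random() < f(x) * dt:       # thinning
            spikes += 1
            x = 0.0
    return spikes / T_sim

rate_const = long_run_rate(lambda t: 1.0)                        # a = alpha_0 = 1
rate_periodic = long_run_rate(lambda t: 1.0 + 0.5 * np.sin(t))   # same mean
```

Per Proposition 33, the two long-run rates agree in the limit; with these parameters they should match within a few percent.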
Those are the trivial roots of G. To construct the periodic solutions to (1), we find the non-trivial roots of G. In fact, Theorem 15 is deduced from the following proposition.

Moreover, it holds that
In particular, h_v ≢ 0 for v ≠ 0.

For all
We here prove that our main result is a consequence of this proposition.
Proof that Proposition 34 implies Theorem 15. Let (h_v, α_v, τ_v) be the continuous curve given by Proposition 34. Define a_v by ∀t ∈ R, a_v(t) := α_v + h_v(t/τ_v). The function a_v is 2πτ_v-periodic and continuous. From G(h_v, α_v, τ_v) = 0, we deduce that a_v solves (27) with J = J(α_v). Consider ν̄_{a_v} defined by (53). By Proposition 26, (ν̄_{a_v}(t)) is a 2πτ_v-periodic solution of (1), and (ν̄_{a_v}, α_v, τ_v) satisfies all the properties stated in Theorem 15: this gives the existence part of the proof. We now prove uniqueness. Let ǫ_0 > 0 be small enough such that (τ_0 − ǫ_0, τ_0 + ǫ_0) ⊂ V_{τ_0}, V_{τ_0} being given by Proposition 34. Let J, τ > 0 be fixed and consider ν(t) a 2πτ-periodic solution of (1) such that |τ − τ_0| < ǫ_0 and such that ν(t) stays within distance ǫ_1 of ν^∞_{α_0}, for some constant ǫ_1 > 0 to be specified later. Let (X_t)_{t≥0} be the solution of the non-linear equation (1), starting with the initial condition ν(0) ∈ M(f²), and define a(t) := J E f(X_t). The arguments of [5, Lem. 24] show that, under Assumptions 1 and 2, the function t ↦ E f(X_t) is continuous, and so a ∈ C^0_{2πτ}; the function a is 2πτ-periodic. We write a(t) = α + h(t/τ) for some constant α and some h ∈ C^{0,0}_{2π}. Because ν(t) is a periodic solution of (1), it holds that a = Jρ_a. Recall that α_0 satisfies Assumption 5. By Lemma 18 and using the continuity of b′, we can assume that ǫ_1 is small enough such that Assumption 5 is also satisfied by α. Let η_0 be given by Proposition 28 (η_0 only depends on b, f, α_0 and τ_0). Provided that ǫ_1 ≤ η_0, we can apply Proposition 33 at (α, τ). So, we deduce that a(t) = α_v + h_v((t + θ)/τ_v) for some phase θ, and that J = J(α_v). This ends the proof.
It remains to prove Proposition 34.

Linearization of G.
Define the operator B_0 as follows, where Θ_α is given by (17). The main result of this section is the following.

Proposition 35.
Let h ∈ C^{0,0}_{2π}. It holds that

The proof of this proposition relies on Lemmas 36 and 37 below.
Proof of Proposition 35. We use Lemma 37 together with (70). For all h ∈ C^{0,0}_{2π}, one obtains
Finally, we have
This ends the proof.

The Lyapunov-Schmidt reduction method
The problem of finding the roots of G defined by (63) is an infinite-dimensional one. We use the method of Lyapunov-Schmidt to obtain an equivalent problem of finite dimension, here of dimension 2. The equation G = 0 is equivalent to a pair of projected equations, where the projector Q is defined by (72). Define the following function W, which is bijective with continuous inverse. The implicit function theorem applies: there exists a C¹ function ψ solving the complement equation, so that the complement component of h is ψ(Qh, α, τ). Again, the neighborhoods U_2, W_2, V_{τ_0}, V_{α_0} may be shrunk in this construction. We deduce that G(h, α, τ) = 0 for (h, α, τ) ∈ X × V_{α_0} × V_{τ_0} is equivalent to
(73) QG(Qh + ψ(Qh, α, τ), α, τ) = 0.
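The mechanics of the reduction can be seen on a finite-dimensional toy problem (not the paper's functional G): F : R² × R → R² with a one-dimensional kernel of the differential at the bifurcation point. The complement equation is solved first; substituting back yields a scalar bifurcation equation on the kernel. All functions below are invented for illustration; in the paper the kernel is two-dimensional and the rotation symmetry reduces it further.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')
# Toy system with F(0, 0, lam) = 0 and a rank-one differential at the origin.
F1 = lam * x + y - x ** 3
F2 = -y + x ** 2

A0 = sp.Matrix([[sp.diff(Fi, v) for v in (x, y)] for Fi in (F1, F2)]
               ).subs({x: 0, y: 0, lam: 0})   # [[0, 1], [0, -1]]: rank 1
kernel_dim = 2 - A0.rank()

# Step 1 (complement equation): solve the component transverse to the kernel,
# here F2 = 0, for y as a function of the kernel variable x.
psi = sp.solve(sp.Eq(F2, 0), y)[0]            # y = psi(x) = x**2

# Step 2 (reduced equation on the kernel): substitute back into F1.
g = sp.expand(F1.subs(y, psi))                # bifurcation function g(x, lam)
branches = sp.solve(sp.Eq(sp.expand(g / x), 0), x)  # non-trivial branches
```

Here ψ plays the role of the implicit function of the reduction, and g the role of the reduced equation (73).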
Moreover, it is clear that the projection Q commutes with S_θ (for all θ ∈ R, S_θ Q = QS_θ), and by the local uniqueness in the implicit function theorem, we deduce that ψ(S_θ Qh, α, τ) = S_θ ψ(Qh, α, τ).
Using that any element Qh ∈ N(B_0) can be written Qh = c e_0 + c̄ ē_0 for some c ∈ C, and using the definition of Q, we deduce that (73) is equivalent to a complex equation Φ̂(c, α, τ) = 0, where V_0 is an open neighborhood of 0 in C. We have moreover ∀θ ∈ R, Φ̂(c e^{iθ}, α, τ) = e^{iθ} Φ̂(c, α, τ), and so (73) is equivalent to the same equation restricted to real c = v. Note that Φ̂(−v, α, τ) = −Φ̂(v, α, τ), and in particular Φ̂(0, α, τ) = 0; this is consistent with (64). In order to eliminate these trivial solutions, following [19], we factor out v. To summarize, we have proved the announced equivalence. The next section is devoted to the study of this reduced problem.

Study of the reduced 2D-problem
We denote by cos the cosine function, such that v e_0 + v ē_0 = 2v cos.
The proof of Proposition 34 then follows immediately from this result and Lemma 40. This ends the proof of Theorem 15.
Moreover, for x ∈ [0, 1] and t > t*:

Remark 44. In fact, this analysis can easily be extended to any linear drift b(x) = κ(m − x), with κ, m ∈ R. Indeed, adapting slightly the proof of [6, Th. 21] when κ ≤ 0, it holds that f + b′ ≥ 0, and so the unique non-trivial invariant measure is locally stable: there is no Hopf bifurcation. If, on the other hand, κ > 0, by rescaling α, m and β appropriately, we can easily reduce the problem to κ = 1.
We now make the following change of variable Proof. Squaring the two equations of (82) and summing the result, one gets Note that if β = 0, for fixed values of δ, β, there is a unique y satisfying this equation. This proves that all the multiple points are located on the axis β = 0. When β = 0, the equation becomes (1 − δe −ω ) 2 = (1 − δ) 2 , whose solutions are δ = 0 and δ = 2 1 + e −ω . Those are indeed multiple points. For (0, 0) for instance, it suffices to consider y = 2πk ω , k ∈ N * . This ends the proof.