Long signal change-point detection

The detection of change-points in a time- or space-ordered data sequence is an important problem in many fields such as genetics and finance. We derive the asymptotic distribution of a statistic recently suggested for detecting change-points. Simulation of its estimated limit distribution leads to a new and computationally efficient change-point detection algorithm, which can be used on very long signals. We assess the algorithm via simulations and on previously benchmarked real-world data sets.


I. Introduction
When faced with a real-world data set ordered by time or space, it is often important to determine when or where something "changed" as we move temporally or spatially through it. In biology, for example, changes in an array Comparative Genomic Hybridization (aCGH) or ChIP-Seq signal as one moves along the genome can represent an event such as a change in genomic copy number, which is extremely important in cancer gene detection [16,20]. In the financial world, detecting changes in multivariate time-series data is important for decision-making [25]. Change-point detection can also be used to detect financial anomalies [4] and significant changes in a sequence of images [12].
Change-point detection analysis is a well-studied field and there are numerous approaches to the problem. Its extensive literature ranges from parametric methods using log-likelihood functions [5,14] to nonparametric ones based on Wilcoxon-type statistics, U-statistics and sequential ranks. The reader is referred to the monograph [6] for an in-depth treatment of these methods.
In change-point modeling it is generally supposed that we are dealing with a random process evolving in time or space. The aim is to develop a method to search for a point where possible changes occur in the mean, variance, distribution, etc. of the process. All in all, this comes down to finding ways to decide whether a given signal can be considered homogeneous in a statistical (stochastic) sense.
The present article builds upon an interesting nonparametric change-point detection method recently proposed by Matteson and James [15]. It uses U-statistics (see [10]) as the basis of its change-point test. Its interest lies in its ability to detect quite general types of change in distribution. Several theoretical results are presented in [15] to highlight some of the mathematical foundations of the method. These in turn lead to a simple and useful data-driven statistical test for change-point detection. The authors then apply this test successfully to simulated and real-world data.
There are however several weaknesses in [15], from both theoretical and practical points of view. Certain fundamental theoretical considerations are treated lightly, especially the assertion that a limit distribution exists for the key statistic, upon which the rest of the approach hangs. On the practical side, the method is computationally prohibitive for signals of more than a few thousand points, which is unfortunate because real-world signals can typically be much longer.
Our paper has two main objectives. First, it fills in missing theoretical results in [15], including a derivation of the limit distribution of the statistic. This requires the effective application of large-sample theory techniques that were developed to study degenerate U-statistics. Second, we provide a method to simulate from an approximate version of the limit distribution. This leads to a new, computationally efficient strategy for change-point detection that can be run on much longer signals.
The article is structured as follows. In Section II we provide some context and present the main theoretical results. In Section III we show how to approximate the limit distribution of the statistic, which leads to a new test strategy for change-point detection. We then show how to extend the method to much longer sequences. Simulations and real-data examples are provided in Section IV. A short discussion follows in Section V, and a proof of the paper's main result is given in Section VI. Some important technical results are detailed in the Appendix.

II. Measuring differences between multivariate distributions
Let us first briefly describe the origins of the nonparametric change-point detection method described in [15]. For random variables Y, Z taking values in R^d, d ≥ 1, let φ_Y and φ_Z denote their respective characteristic functions. A measure of the divergence (or "difference") between the distributions of Y and Z is

D(Y, Z) = ∫_{R^d} |φ_Y(t) − φ_Z(t)|² w(t) dt,   (1)

where w(t) is an arbitrary positive weight function for which this integral exists. It turns out that for the specific weight function w(t; β) = (C(d, β)|t|^{d+β})^{−1}, which depends on a β ∈ (0, 2) and a known normalizing constant C(d, β), Theorem 2 of [23] yields that, whenever E(|Y|^β + |Z|^β) < ∞,

D(Y, Z; β) = E(Y, Z; β) := 2E|Y − Z|^β − E|Y − Y′|^β − E|Z − Z′|^β,   (2)

where Y′ and Z′ denote independent copies of Y and Z, all four variables mutually independent, and where we have written D(Y, Z; β) instead of D(Y, Z) to highlight dependence on β. Therefore (1) implies that E(Y, Z; β) ∈ [0, ∞). Furthermore, Theorem 2 of [23] says that E(Y, Z; β) = 0 if and only if Y and Z have the same distribution. This remarkable result leads to a simple data-driven divergence measure for distributions. Seen in the context of hypothesizing a change-point in a signal of independent observations X = (X_1, ..., X_n) after the k-th observation X_k, we simply calculate an empirical version of (2):

Ê_{n,k}(X; β) := (2/(k(n − k))) Σ_{i=1}^{k} Σ_{j=k+1}^{n} |X_i − X_j|^β − (2/(k(k − 1))) Σ_{1≤i<j≤k} |X_i − X_j|^β − (2/((n − k)(n − k − 1))) Σ_{k<i<j≤n} |X_i − X_j|^β.   (3)

Matteson and James [15] state without proof that under the null hypothesis of X_1, ..., X_n being i.i.d. (no change-points), the sample divergence given in (3), scaled by k(n − k)/n, converges in distribution to a non-degenerate random variable as long as min{k, n − k} → ∞. Furthermore, they state that if there is a change-point between two distinct i.i.d. distributions after the k-th point, the sample divergence scaled by k(n − k)/n tends a.s. to infinity as long as min{k, n − k} → ∞. These claims clearly point to a useful statistical test for detecting change-points. However, we cannot find rigorous mathematical arguments to substantiate them in [15], nor in the earlier work [23].
As this is of fundamental importance to the theoretical and practical validity of this change-point detection method, we shall show the existence of the non-degenerate random variable hinted at in [15] by deriving its distribution. Our approach relies on the asymptotic behavior of U-statistic-type processes, which were introduced for the first time for change-point detection in random sequences in [7]; see also Chapter 2 of the book [6]. We also show that in the presence of a change-point, the correctly-scaled sample divergence indeed tends to infinity with probability 1.
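To fix ideas, here is a minimal Python sketch of the statistic (3); the language choice and the function name empirical_divergence are ours, and a practical implementation would reuse the pairwise distances across successive values of k rather than recomputing them:

    from scipy.spatial.distance import cdist, pdist

    def empirical_divergence(x, k, beta=1.0):
        """Empirical divergence (3) between x[:k] and x[k:],
        for an (n, d) array x with 2 <= k <= n - 2."""
        y, z = x[:k], x[k:]
        between = cdist(y, z) ** beta    # the k(n-k) between-sample terms
        within_y = pdist(y) ** beta      # the k(k-1)/2 terms within the first segment
        within_z = pdist(z) ** beta      # the (n-k)(n-k-1)/2 terms within the second
        return 2.0 * between.mean() - within_y.mean() - within_z.mean()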

Main result
Let us first begin in a more general setup. Let X_1, ..., X_n be independent R^d-valued random variables. For any symmetric measurable function ϕ : R^d × R^d → R, whenever the indices make sense we define the following four terms:

U_k(ϕ) := Σ_{i=1}^{k} Σ_{j=k+1}^{n} ϕ(X_i, X_j),   U^(1)_k(ϕ) := Σ_{1≤i<j≤k} ϕ(X_i, X_j),

U^(2)_k(ϕ) := Σ_{k<i<j≤n} ϕ(X_i, X_j),   U^(3)_n(ϕ) := Σ_{1≤i<j≤n} ϕ(X_i, X_j).   (4)

Otherwise, define the term to be zero; for instance, U^(2)_k(ϕ) = 0 for k = n − 1 and n. Note that in the context of the change-point algorithm we have in mind, ϕ(x, y) = ϕ_β(x, y) := |x − y|^β, β ∈ (0, 2), but the following results are valid for the more general ϕ defined above. Notice also that the last three terms are U-statistics absent their normalization constants. Next, let us define

U_{k,n}(ϕ) := (2/(k(n − k))) U_k(ϕ) − (2/(k(k − 1))) U^(1)_k(ϕ) − (2/((n − k)(n − k − 1))) U^(2)_k(ϕ).

Observe that U_{k,n}(ϕ) is a general version of the empirical divergence given in (3). Notice that

U_k(ϕ) = U^(3)_n(ϕ) − U^(1)_k(ϕ) − U^(2)_k(ϕ).   (5)

While U_{k,n}(ϕ) is not a U-statistic, we can use (5) to express it as a linear combination of U-statistics. Indeed, we find that

U_{k,n}(ϕ) = (2/(k(n − k))) U^(3)_n(ϕ) − 2(1/(k(k − 1)) + 1/(k(n − k))) U^(1)_k(ϕ) − 2(1/((n − k)(n − k − 1)) + 1/(k(n − k))) U^(2)_k(ϕ).

Therefore, we now have an expression for U_{k,n}(ϕ) made up of U-statistics, which will be useful in the following.
Our aim is to use a test based on U_{k,n}(ϕ) for the null hypothesis H_0: X_1, ..., X_n all have the same distribution, versus the alternative hypothesis H_1 that there is a change-point in the sequence X_1, ..., X_n, i.e., H_1: there is a γ ∈ (0, 1) such that P(X_1 ≤ t) = ... = P(X_⌊nγ⌋ ≤ t) and P(X_⌊nγ⌋+1 ≤ t) = ... = P(X_n ≤ t) for all t ∈ R^d, while P(X_⌊nγ⌋ ≤ t_0) ≠ P(X_⌊nγ⌋+1 ≤ t_0) for some t_0 ∈ R^d.
For u, v ∈ R^d, u ≤ v means that each component of u is less than or equal to the corresponding component of v. Also note that for any z ∈ R, ⌊z⌋ stands for its integer part.
Let us now examine the asymptotic properties of U_{k,n}(ϕ). We shall be using notation, methods and results from Section 5.5.2 of the monograph [19] to provide the groundwork. In the following, we shall denote by F the common (unknown) distribution function of the X_i under H_0, by X a generic random variable with distribution function F, and by X′ an independent copy of X. We assume that

Eϕ²(X, X′) < ∞,   (6)

and set Θ = Eϕ(X, X′). We also denote ϕ₁(x) = Eϕ(x, X′), and define

h(x, y) := ϕ(x, y) − ϕ₁(x) − ϕ₁(y) and h̃₂(x, y) := h(x, y) + Θ.   (7)

With this notation, we see that Eh(X, X′) = −Θ, and therefore that Eh̃₂(X, X′) = 0. Furthermore, since U_{k,n}(ψ) = 0 and U_{k,n}(Θ) = 0 for every k, where ψ(x, y) := ϕ₁(x) + ϕ₁(y), we have

U_{k,n}(h̃₂) = U_{k,n}(ϕ).   (8)

As in Section 5.5.2 of [19], we then define the operator A on L²(R^d, F) by

Ag(x) := ∫_{R^d} h̃₂(x, y)g(y) dF(y).   (9)

Let λ_i, i ≥ 1, be the eigenvalues of this operator A with corresponding orthonormal eigenfunctions φ_i, i ≥ 1. Since for all x ∈ R^d, ∫_{R^d} h̃₂(x, y) dF(y) = 0, we see that, with φ₁ := 1, Aφ₁ = 0 =: λ₁φ₁. Thus (0, 1) = (λ₁, φ₁) is an eigenvalue and normalized eigenfunction pair of the operator A. This implies that for every eigenvalue and normalized eigenfunction pair (λ_i, φ_i), i ≥ 2, where λ_i is nonzero,

Eφ_i(X) = 0.   (10)

Moreover, we have that, in L²(R^d × R^d, F × F),

h̃₂(x, y) = Σ_{i=1}^∞ λ_i φ_i(x)φ_i(y).   (11)

From this we get that Σ_{i=1}^∞ λ_i² = Eh̃₂²(X, X′) < ∞. For further details and theoretical justification of these claims, refer to Section 5.5.2 of [19] and both Exercise 44 on p. 1083 and Exercise 56 on p. 1087 of [8]. In fact, we shall assume further that

Σ_{i=1}^∞ |λ_i| < ∞.   (12)

It is crucial for the change-point testing procedure that we shall propose that the function h̃₂(x, y) defined as in (7) with ϕ(x, y) = ϕ_β(x, y) = |x − y|^β, β ∈ (0, 2), satisfies (12) whenever (6) holds. A proof of this is given in the Appendix.
Next, for any fixed t ∈ [0, 1], let k = ⌊nt⌋ and define

Y_n(ϕ, t) := (k²(n − k)²/(n²(n − 1))) U_{k,n}(ϕ),   (13)

with the conventions U^(1)_1(h̃₂)/0 = 0 and U^(2)_{n−1}(h̃₂)/0 = 0. This gives Y_n(ϕ, •) ∈ D_1[0, 1], the space of bounded measurable real-valued functions defined on [0, 1] that are right-continuous with left-hand limits. Notice that on account of (8) we can also write Y_n(h̃₂, •), and we will do so from now on. In the following theorem, {W^(i)}_{i≥1} denotes a sequence of independent standard Wiener processes.
Theorem II.1. Whenever ϕ satisfies (6) and (12), Y_n(ϕ, •) converges weakly in D_1[0, 1] to the tied-down mean zero continuous process Y defined on [0, 1] by

Y(t) := Σ_{i=1}^∞ λ_i {(W^(i)(t) − tW^(i)(1))² − t(1 − t)}, t ∈ [0, 1].

In particular,

sup_{t∈[0,1]} |Y_n(ϕ, t)| converges in distribution to sup_{t∈[0,1]} |Y(t)|.

The proof of this theorem is deferred to Section VI. Note that a special case of Theorem II.1 says that for each t ∈ (0, 1), Y_n(ϕ, t) converges in distribution to Y(t). To the best of our knowledge, this is the first time that the limit distribution for this statistic has been proved to exist and been identified. As suggested in [15], with the following assumption a convergence with probability 1 result can also be proved for the empirical statistic Ê_{n,k}(X; β) in (3).

Assumption 1. There exist a γ ∈ (0, 1) and random variables Y and Z with distinct distributions and E(|Y|^β + |Z|^β) < ∞ such that X_1, ..., X_⌊nγ⌋ have the distribution of Y and X_⌊nγ⌋+1, ..., X_n have the distribution of Z, all being independent.
Lemma II.1. Whenever, for a given β ∈ (0, 2), Assumption 1 holds, with probability 1 we have

lim_{n→∞} Ê_{n,⌊nγ⌋}(X; β) = E(Y, Z; β) > 0.   (14)

The proof of this can be found in the Appendix. Next, let ϕ(x, y) = |x − y|^β, β ∈ (0, 2). We see that for any γ ∈ (0, 1) and all large enough n,

⌊nγ⌋²(n − ⌊nγ⌋)²/(n²(n − 1)) ≥ γ²(1 − γ)²n/2.

Thus by Lemma II.1, under Assumption 1, whenever min{⌊nγ⌋, n − ⌊nγ⌋} → ∞, with probability 1,

sup_{t∈[0,1]} |Y_n(ϕ, t)| ≥ |Y_n(ϕ, γ)| → ∞.

This shows that change-point tests based on the statistic sup_{t∈[0,1]} |Y_n(ϕ, t)|, under the sequence of alternatives of the type given by Assumption 1, are consistent. This also has great practical use when looking for change-points. Intuitively, the k ∈ {1, ..., n} that maximizes (3) would be a good candidate for a change-point location.
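For illustration, a naive scan over candidate locations, reusing the empirical_divergence sketch above together with the normalization of (13) (function name ours; each call here recomputes all pairwise distances, so a serious implementation would update the three sums incrementally in k):

    def scan_statistic(x, beta=1.0):
        """Return (k_hat, t_hat): the candidate change-point location and
        the maximized, normalized divergence."""
        n = len(x)
        best_k, best_t = None, float('-inf')
        for k in range(2, n - 1):        # candidate locations 2 <= k <= n - 2
            norm = (k * (n - k)) ** 2 / (n ** 2 * (n - 1))
            t = norm * empirical_divergence(x, k, beta)
            if t > best_t:
                best_k, best_t = k, t
        return best_k, best_t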

III. From theory to practice
Theorem II.1 and the consistency result that follows it lay a firm theoretical foundation for the change-point method introduced in [15]. Since we are not aware of a closed-form expression for the distribution function of the limit process, one might imagine that this asymptotic result is of limited practical use. Remarkably, it turns out that we can efficiently approximate via simulation the distribution of its supremum, leading to a new change-point detection algorithm with similar performance to [15] but immensely faster for longer signals. For instance, finding and testing the first change-point in a signal of length 5 000 takes eight seconds with our method and eight minutes using [15].
To simulate the process Y we need true or estimated values of the λ_i. Recall that these are the eigenvalues of the operator A defined in (9). Following [13], the (usually infinite) spectrum of A can be consistently approximated by the (finite) spectrum of the empirical n × n matrix Ĥ_n whose (i, j)-th entry is the double-centered quantity (ϕ(X_i, X_j) − µ_i − µ_j + η)/n, where µ is the vector of row means (excluding the diagonal entry) of the matrix (ϕ(X_i, X_j))_{1≤i,j≤n} and η is the mean of its upper-diagonal elements.
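In Python, for instance, this estimation step might be sketched as follows; the sign convention and the 1/n scaling are our reading of the double-centering just described, and the function name is ours:

    import numpy as np
    from scipy.sparse.linalg import eigsh
    from scipy.spatial.distance import pdist, squareform

    def estimate_eigenvalues(x, beta=1.0, m=50):
        """Estimate the m largest-magnitude eigenvalues of the operator A
        from the empirical, double-centered matrix described above."""
        n = x.shape[0]
        phi = squareform(pdist(x) ** beta)        # phi(X_i, X_j), zero diagonal
        mu = phi.sum(axis=1) / (n - 1)            # row means, diagonal excluded
        eta = phi[np.triu_indices(n, 1)].mean()   # mean of upper-diagonal entries
        h = (phi - mu[:, None] - mu[None, :] + eta) / n
        # m leading eigenvalues via ARPACK (cf. eigs in Matlab or rARPACK in R)
        return eigsh(h, k=min(m, n - 2), which='LM', return_eigenvectors=False)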
In our experience, the λ_i estimated in this way tend to be quite accurate even for small n. We assert this because, upon simulating longer and longer i.i.d. signals, rapid convergence of the estimated λ_i is clear. Furthermore, as there is an exponential drop-off in their magnitude, working with only a small number (say 20 or 50) of the largest ones appears to be sufficient for obtaining good results. We illustrate these claims in Section IV. Let us now present our basic algorithm for detecting and testing for one potential change-point.

Algorithm for detecting and testing one change-point
1. Given a signal X_1, ..., X_n, n ≥ 4, find the 2 ≤ k ≤ n − 2 that maximizes the original empirical divergence given in (3) multiplied by the correct normalization given in (13), i.e., k²(n − k)²/(n²(n − 1)), and denote the value of this maximum by t̂.
2. Estimate the m largest (in magnitude) eigenvalues λ̂_1, ..., λ̂_m of the matrix Ĥ_n described above.
3. Simulate R times the m-truncated version of Y(t) using the m eigenvalues from the previous step. Record the R values s_1, ..., s_R of the (absolute) supremum of each process obtained.
4. Reject the null hypothesis of no distributional change (at level α) if t_crit ≤ α, where

t_crit := (1 + #{1 ≤ r ≤ R : s_r ≥ t̂})/(R + 1).

In this case, we deduce a change-point at the k at which t̂ is attained. Typically, we set α = .05. A code sketch of steps 2-4 is given below.
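A minimal sketch of steps 2-4 (function name and the regular grid discretization are ours; the form of the truncated process is the one given in Theorem II.1):

    import numpy as np

    def simulate_sup_Y(lams, T=1000, R=499, rng=None):
        """Simulate R values of sup_t |Y_m(t)|, where Y_m(t) is the
        m-truncated process sum_i lam_i * ((W_i(t) - t W_i(1))^2 - t(1 - t)),
        each W_i a Wiener process discretized over T grid points."""
        rng = np.random.default_rng() if rng is None else rng
        lam = np.asarray(lams, dtype=float)[:, None]      # shape (m, 1)
        t = np.arange(1, T + 1) / T                       # grid on (0, 1]
        sups = np.empty(R)
        for r in range(R):
            w = np.cumsum(rng.standard_normal((lam.shape[0], T)), axis=1) / np.sqrt(T)
            b = w - t * w[:, -1:]                         # tied down: W(t) - t W(1)
            sups[r] = np.abs((lam * (b ** 2 - t * (1 - t))).sum(axis=0)).max()
        return sups

    # Step 4: with t_hat the maximum found in step 1,
    # t_crit = (1 + np.sum(sups >= t_hat)) / (R + 1); reject H0 if t_crit <= alpha.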

Remark III.1
The E-divisive algorithm described in [15] follows a similar logic, except that t_crit is calculated via permutation (see [17]). Instead of steps 2 and 3, the order of the n data points is permuted R times and, for the r-th permuted signal, 1 ≤ r ≤ R, step 1 is performed to obtain the absolute maximum s_r. The same step 4 is then used to accept or reject the change-point.
Note that the above algorithm can be used to find a putative change-point in any contiguous subset of the original signal. This motivates the following bisection algorithm for finding all change-points in a signal.

Algorithm for detecting and testing many change-points
1. Run the one change-point algorithm on the whole signal. If the null hypothesis of no distributional change is not rejected, stop. Otherwise, we have a first change-point at k and two resulting subsignals X_1, ..., X_k and X_{k+1}, ..., X_n, which we put in a waiting list.
2. While the waiting list is non-empty, run the one change-point algorithm on the longest-waiting subsignal. If the null hypothesis of no distributional change is rejected, accept a change-point at the location where t̂ is attained and add the two resulting subsignals to the waiting list; otherwise, simply discard the tested subsignal. In either case, return to the waiting list (i.e., repeat the present step until the waiting list is empty). A compact sketch of this scheme is given after the algorithm.
3. Output the list of change-points found.
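A compact sketch of this bisection scheme, where one_cp_test stands for a hypothetical routine implementing the one change-point algorithm above and returning the pair (k_hat, t_crit) for a given subsignal:

    from collections import deque

    def bisect_changepoints(x, one_cp_test, alpha=0.05):
        """Repeatedly test and split subsignals until no change-point
        is found; returns the sorted list of change-point locations."""
        found = []
        queue = deque([(0, len(x))])      # FIFO: longest-waiting subsignal first
        while queue:
            lo, hi = queue.popleft()
            if hi - lo < 4:               # the one change-point test needs n >= 4
                continue
            k_hat, t_crit = one_cp_test(x[lo:hi])
            if t_crit <= alpha:           # change-point accepted at lo + k_hat
                found.append(lo + k_hat)
                queue.append((lo, lo + k_hat))
                queue.append((lo + k_hat, hi))
        return sorted(found)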
The permutation approach (E-divisive) of [15] is effective for short signals. Indeed, [11] showed that if one can perform all possible permutations, the method produces a test that is level α. However, a signal with n = 10 points already implies more than three million permutations, so a Monte Carlo strategy (i.e., subsampling permutations with replacement) becomes necessary, typically with R = 499. This also gives a test that is theoretically level α (see [17]), but with much-diminished power.
One could propose increasing the value of R, but there is an unfortunate computational bottleneck in the approach. Usually, one stores in memory the matrix of |X_i − X_j|^β in order to efficiently permute rows/columns and therefore recalculate t̂ each time. But for more than a few thousand points, manipulating this matrix is slow, if not impossible, due to memory constraints. The only alternative to storing and permuting this matrix is simply to recalculate it for each permutation, but this is very computationally expensive as n increases. Consequently, the E-divisive approach is only useful for signals up to a few thousand points.
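To fix ideas, storing the matrix of |X_i − X_j|^β for n = 5 000 points in double precision takes a manageable 200 MB, but n = 50 000 already requires n² × 8 bytes = 20 GB.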
In contrast to this, our algorithm, based as it is on an asymptotic result, risks underperforming on extremely short signals, and its performance also depends on our ability to estimate well the set of largest λ_i. In reality though, it works quite well, even on short signals. The matrix with entries |X_i − X_j|^β needs only to be stored once in memory, and standard mathematical software such as Matlab and R has efficient functions for finding its largest m eigenvalues (the eigs function in Matlab and the eigs function in the R package rARPACK). Each iteration of the algorithm's simulation step requires summing the columns of an m × T matrix of standard normal variables, where m is the number of λ_i retained and T is the number of grid points over which we approximate the Wiener processes between 0 and 1. Each row of the matrix corresponds to one λ_i and a discretized Wiener process across T grid points. For m = 50 and T = 1 000 it takes about one second to perform this R = 499 times, independently of the number of points in the signal. In contrast, the E-divisive method takes about ten seconds for n = 1 000, one minute for n = 2 000, eight minutes for n = 5 000, and so on. One clearly sees the advantage of our approach for longer signals when there are several change-points to find.

IV. Simulated and real data examples
It is very important to start with the simplest possible case in order to demonstrate the fundamental validity of the new method. A basis for comparison is the E-divisive method from [15]. Here, we consider signals of length n ∈ {10, 100, 1 000, 10 000} for which either the whole signal is i.i.d. N(0, 1) or else there is a change-point of height c ∈ {0.1, 0.2, 0.5, 1, 2, 5} after the (n/2)-th point, i.e., the second half of the signal is i.i.d. N(c, 1).
In the former case, we look at the behavior of the Type I error, i.e., the probability of detecting a change-point when there was none. We have fixed α = .05 and want to see how close each method gets to this as n increases. In the latter case, we look at the power of the test associated with each method, i.e., the probability that an actual change-point is correctly detected, as n and c increase. We averaged over 1 000 trials. In the following, unless otherwise mentioned, we fix β = 1. For the asymptotic method, the Wiener processes were simulated 499 times; similarly, for E-divisive we permuted 499 times. Both null distributions were therefore estimated using the same number of repeats. Note that we did not test the E-divisive method for n = 10 000 because each of the 1 000 trials would have taken around two hours to run. All times given in this paper are with respect to a laptop with a 2.13 GHz Intel Core 2 Duo processor and 4 GB of memory. Results are presented in Figure 1. For the Type I error, we see that both methods hover around the intended value of .05, except for extremely short signals (n = 10). As for the statistical power, it increases as n and c increase. Furthermore, the asymptotic method rapidly reaches a similar performance to E-divisive: for n = 10, E-divisive is better (but still with quite poor power), for n = 100 the asymptotic method has almost caught up, and somewhere between n = 100 and n = 1 000 the results become essentially identical; the asymptotic method has a slight edge at n = 1 000.
Let us now see to what extent our method is able to detect changes in variance and tail shape. We considered signals of length n ∈ {10, 100, 1 000, 10 000} with a change-point after the (n/2)-th point: the first half of the signal is i.i.d. N(0, 1) and the second half either i.i.d. N(0, σ²) for σ² ∈ {2, 5, 10} or i.i.d. Student's t_v with v ∈ {2, 8, 16}. Results were averaged over 1 000 trials and are shown in Figure 2. As before, the statistical power tends to increase as n increases and as either σ² increases or v decreases. The asymptotic method matches or beats the performance of E-divisive starting somewhere between n = 100 and n = 1 000.
Next, we considered a real aCGH data set of 57 bladder tumor samples presented in [21] and previously examined in [3] and [15]. Each aCGH profile gives the relative quantity of DNA for 2 215 probes. Samples with more than 7% missing values were removed and remaining missing values imputed as the average of their neighbors, as in [15]. This left 43 of the original 57. The first goal is simply to show that our algorithm works meaningfully on multi-dimensional signals (here, 43-dimensional); the biological goal itself is to find locations at which many samples change mean simultaneously, as these may be important copy-number changes in bladder cancer. We found 99 such change-points in a time of four minutes. It seems that several algorithms overestimate the number of useful shared change-points (97 in [3] and [15]). For instance, using our method with β = 1 seems to find a "shared" change in mean even when only a small number of signals actually have that particular change-point; this is fine in theory but not necessarily helpful for the true biological goal mentioned above. However, we have noticed an intriguing property of our algorithm. As β → 0, the number of change-points selected also decreases to zero, broadly in such a way that change-points found later by the algorithm are removed first. Consequently, the choice of a small β seems to act to select "important" variables (here, shared change-points). For instance, with β = .001, only one change-point is selected, at location 1 724. This is where nearly every profile (at least 40 of the 43) has a visually obvious change-point (or amplification, another type of genetic event, here seen as an isolated point far from the signal). Raising β to .002 adds three further change-points (811, 1 268 and 1 907), and to .005 five more (182, 428, 1 141, 1 534 and 2 044). These are shown for four individuals in Figure 3 by solid, dashed and dotted lines respectively. These change-point locations are indeed shared by many signals, on average around 18 of the 43 from a simple visual check. It is therefore an interesting open question whether our algorithm might be used to perform model selection, i.e., selection of the "best" β for a given task, but this is beyond our scope here.

Algorithm for long signals
Remember that, as it currently stands, the longest signal that we can treat depends on the largest matrix that can be stored, which depends in turn on the memory of a given computer (memory problems for simply storing such a matrix on a standard PC typically start to occur for n in the tens of thousands). To go further, we run the detection algorithm on a manageable sub-sample of the signal. We tested this strategy on 3-dimensional simulated signals of length 10³, 10⁴, 10⁵ and 10⁶ with ten change-points. After randomly selecting change-point locations, each segment between consecutive change-points was randomly assigned (coin toss) to be either multivariate Gaussian or Student's t-distributed, with random means (uniform µ ∈ [−1, 1]³), variance-covariance matrices (positive-definite, with lower-diagonal elements uniformly generated from [0, 1] and mirrored to the upper-diagonal) and degrees of freedom (one of 2, 8 or 16 with equal probability), depending on the type of distribution selected.
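For concreteness, such signals can be generated along the following lines (a sketch: the positive-definite covariance construction used here is a standard substitute for, and not identical to, the mirrored-uniform recipe described above):

    import numpy as np

    def random_segmented_signal(n, n_cp=10, d=3, rng=None):
        """Generate an (n, d) signal with n_cp random change-points,
        each segment Gaussian or Student's t with random parameters."""
        rng = np.random.default_rng() if rng is None else rng
        cps = np.sort(rng.choice(np.arange(1, n), size=n_cp, replace=False))
        bounds = np.concatenate(([0], cps, [n]))
        parts = []
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            mu = rng.uniform(-1, 1, size=d)
            if rng.random() < 0.5:                   # Gaussian segment
                a = rng.uniform(0, 1, size=(d, d))
                cov = a @ a.T + d * np.eye(d)        # assumed positive-definite construction
                parts.append(rng.multivariate_normal(mu, cov, size=hi - lo))
            else:                                    # Student's t segment
                df = rng.choice([2, 8, 16])
                parts.append(mu + rng.standard_t(df, size=(hi - lo, d)))
        return np.vstack(parts), cps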
The exact performance of the method in correctly finding change-point locations depends on how much we allow the distribution to change after each change-point; in effect, the method will perform almost perfectly if the data is "easy" enough. For this reason, here we are much more interested in simply measuring its ability to run on extremely long signals in a short period of time. Figure 4 (left) shows a typical trial with 10 000 points in a 3-dimensional signal; each plot is one of the three dimensions. Notice that some change-points are easily spotted by eye, others less so. For simplicity, in the top plot we show the location of the true change-points (solid lines) and in the other two plots the algorithm's predicted change-points (dotted lines, shared of course across the three dimensions/plots). The actual performance of the algorithm on moderately difficult data is not in question here. Figure 4 (right) shows the time required to run the algorithm on long signals, averaged over 100 trials. We see that even with one million points, we typically have a result in under two minutes.

V. Discussion
We have derived the asymptotic distribution of a statistic that was previously used to build algorithms for finding change-points in signals. This result, combined with the surprising realization that it is possible to approximately simulate from this quite complicated asymptotic distribution, led to a novel and practical algorithm for general change-point detection in long, potentially multidimensional signals. Furthermore, the method appears to have higher power (in the statistical sense) than previous methods based on permutation tests for signals of a thousand points or more. We have tested the algorithm on several simulated and real data sets, and presented a sub-sample variant for dealing with extremely long signals.
An interesting line of future research would be to find ways to segment the original signal without having to store in memory a matrix with the same number of rows and columns as there are points in the signal; this is currently a bottleneck for our approach, and even more so for previous permutation approaches. In addition, a pertinent choice of the power β ∈ (0, 2) remains an open question.

VI. Proof of Theorem II.1
To prove Theorem II.1, we require a useful technical result. Let us begin with some notation. For each integer K ≥ 1, let D_K[0, 1] denote the space of bounded measurable functions defined on [0, 1], taking values in R^K, that are right-continuous with left-hand limits. For each integer n ≥ 1, let V^(k)_n, k ≥ 1, be a sequence of processes taking values in D_1[0, 1] such that, for some constant M > 0,

E sup_{0≤t≤1} |V^(k)_n(t)| ≤ M for all k ≥ 1 and n ≥ 1.   (15)

For each integer K ≥ 1, define the process taking values in D_K[0, 1] by V_{n,K} := (V^(1)_n, ..., V^(K)_n). Assume that for each integer K ≥ 1, V_{n,K} converges weakly as n → ∞ to the D_K[0, 1]-valued process V_K := (V^(1), ..., V^(K)), where

E sup_{0≤t≤1} |V^(k)(t)| ≤ M for all k ≥ 1.   (16)

We shall establish the following useful result.
Proposition VI.1. With the notation and assumptions introduced above, for any choice of constants a_k, k ≥ 1, with Σ_{k≥1} |a_k| < ∞, the process T_n := Σ_{k≥1} a_k V^(k)_n converges weakly in D_1[0, 1] to T := Σ_{k≥1} a_k V^(k).

Proof. Notice that by (15),

E Σ_{k≥1} |a_k| sup_{0≤t≤1} |V^(k)_n(t)| ≤ M Σ_{k≥1} |a_k| < ∞.

From this we get that, with probability 1, for each n ≥ 1, Σ_{k≥1} |a_k| sup_{0≤t≤1} |V^(k)_n(t)| < ∞, which in turn implies that, with probability 1, for each n ≥ 1,

lim_{K→∞} sup_{0≤t≤1} |T_n(t) − T^(K)_n(t)| = 0,   (17)

where T^(K)_n := Σ_{k=1}^{K} a_k V^(k)_n. Since for each n ≥ 1 and K ≥ 1, T^(K)_n ∈ D_1[0, 1], by completeness of D_1[0, 1] in the supremum metric (see page 150 of the monograph [2]), we infer that T_n ∈ D_1[0, 1]. In the same way we get, using (16), that

lim_{K→∞} sup_{0≤t≤1} |T(t) − T^(K)(t)| = 0,   (18)

where T^(K) := Σ_{k=1}^{K} a_k V^(k), and thus that T ∈ D_1[0, 1]. Also, since by assumption for each integer K ≥ 1, V_{n,K} converges weakly as n → ∞ to the D_K[0, 1]-valued process V_K, we get that T^(K)_n converges weakly in D_1[0, 1] to T^(K). We complete the proof by combining this with (17) and (18), and then appealing to Theorem 4.2 of [2].
We are now ready to prove Theorem II.1. It turns out to be more convenient to prove the result for a slightly modified version Ỹ_n(h̃₂, t) of the process, which is readily shown to be asymptotically equivalent to Y_n(h̃₂, t). Following pages 196-197 of [19], we decompose Ỹ_n(h̃₂, t) into terms built from the component U-statistic processes U^(1)_{n,k}(t) and 2U^(2)_{n,k}(t). Using the expansion (11), we may thus write Ỹ_n(h̃₂, t) as Σ_{k≥1} λ_k V^(k)_n(t), where, for k ≥ 1,

V^(k)_n(t) := (n^{−1/2} Σ_{i=1}^{⌊nt⌋} φ_k(X_i) − (⌊nt⌋/n) n^{−1/2} Σ_{i=1}^{n} φ_k(X_i))² − (⌊nt⌋/n)(1 − ⌊nt⌋/n),   (19)

and we set

V^(k)(t) := (W^(k)(t) − tW^(k)(1))² − t(1 − t).   (20)

A simple application of Doob's inequality shows that there exists a constant M > 0 such that (15) and (16) hold for V^(k)_n and V^(k) defined as in (19) and (20). For any integer K ≥ 1, let U_1 be the random vector such that U_1^T = (φ_1(X_1), ..., φ_K(X_1)), and for any n ≥ 1 let U_1, ..., U_n be i.i.d. U_1. Clearly, E(U_1 U_1^T) = I_K and, by the multivariate central limit theorem, √n Ū_n converges in distribution as n → ∞ to a standard K-variate normal vector, where Ū_n = n^{−1} Σ_{j=1}^{n} U_j. Consider the process defined on D_K[0, 1] by

W_{n,K}(t) := n^{−1/2} Σ_{i=1}^{⌊nt⌋} U_i,

whose i-th component, for any i ≥ 1, we denote by W^(i)_n(t). By Donsker's theorem, the process W_{n,K}(t) converges weakly as n → ∞ to the R^K-valued Wiener process (W_K(t))_{0≤t≤1}, with mean vector zero and covariance matrix (t_1 ∧ t_2)I_K, t_1, t_2 ∈ [0, 1], where W_K(t) := (W^(1)(t), ..., W^(K)(t)).
Using this fact along with the law of large numbers, one readily verifies that for each integer K ≥ 1, (V^(1)_n, ..., V^(K)_n) converges weakly as n → ∞ to (V^(1), ..., V^(K)), where V^(i)_n and V^(i) are defined as in (19) and (20). All the conditions for Proposition VI.1 to hold have now been verified, and thus the proof of Theorem II.1 is complete.

VII. Appendix

I. Proof of Lemma II.1
Notice that for each n > 1, Ê_{n,⌊nγ⌋}(X; β) is equal to the statistic in (3) with k = ⌊nγ⌋. By the law of large numbers for U-statistics (see Theorem 1 of [18]), for any γ ∈ (0, 1), with probability 1, the two within-sample terms in (3) converge to E|Y − Y′|^β and E|Z − Z′|^β, respectively. Next, for any M > 0, write the between-sample term as the sum of the contributions of the pairs with |X_i − X_j|^β ≤ M and of those with |X_i − X_j|^β > M. By the strong law of large numbers for generalized U-statistics given in Theorem 1 of [18], for any M > 0, with probability 1, the first contribution converges to E(|Y − Z|^β ∧ M). By the c_r-inequality and the usual law of large numbers, for each M > 0, with probability 1, the second contribution is asymptotically bounded by a quantity that vanishes as M → ∞. In the same way we get that, with probability 1, the analogous truncated versions of the within-sample terms behave in the same manner. Obviously, as M → ∞, E(|Y − Z|^β ∧ M) → E|Y − Z|^β. Putting everything together, we get that (14) holds.

II. A technical result
Let X and X′ be i.i.d. F and let ϕ be a symmetric measurable function from R^d × R^d to R. Recall the notation (7). Let A be the operator defined on L²(R^d, F) as in (9).
Notice that, under (6), A is well defined on L²(R^d, F). Let us now introduce some useful definitions. Given β ∈ (0, 2) and ϕ_β(x, y) = |x − y|^β, define, as in (7),

h_β(x, y) := ϕ_β(x, y) − ϕ_{1,β}(x) − ϕ_{1,β}(y) and h̃₂,β(x, y) := h_β(x, y) + Θ_β,

where ϕ_{1,β}(x) := Eϕ_β(x, X′) and Θ_β := Eϕ_β(X, X′). The aim here is to verify that the function h̃₂,β(x, y) satisfies the conditions of Theorem II.1 as long as

E|X|^{2β} < ∞.   (22)

Let Ã_β denote the integral operator on L²(R^d, F) with kernel h̃₂,β. Clearly (22) implies (6) with ϕ = ϕ_β, which, in turn, by (11) implies

h̃₂,β(x, y) = Σ_{i=1}^∞ λ_i φ_i(x)φ_i(y) in L²(R^d × R^d, F × F),

where λ_i, i ≥ 1, are the eigenvalues of the operator Ã_β, with corresponding orthonormal eigenfunctions φ_i, i ≥ 1.
Next we shall prove that when (22) holds, the eigenvalues λ_i, i ≥ 1, of this integral operator Ã_β satisfy (12). This is summarized in the following lemma, whose proof is postponed to the next paragraph.

Lemma VII.1. Assume that (22) holds for a given β ∈ (0, 2). Then the eigenvalues λ_i, i ≥ 1, of the operator Ã_β satisfy (12).
The technical results that follow will imply that λ_i ≤ 0 for all i ≥ 1 and that Σ_{i=1}^∞ λ_i is finite, from which we can infer (12), and thus Lemma VII.1. Let us begin with two definitions.

Definition VII.1. A symmetric function K : X × X → R is said to be positive definite if, for all n ≥ 1, x_1, ..., x_n ∈ X and a_1, ..., a_n ∈ R, Σ_{i=1}^{n} Σ_{j=1}^{n} a_i a_j K(x_i, x_j) ≥ 0.

Definition VII.2. A symmetric function K : X × X → R is said to be conditionally negative definite if, for all n ≥ 1, x_1, ..., x_n ∈ X and a_1, ..., a_n ∈ R with Σ_{i=1}^{n} a_i = 0, Σ_{i=1}^{n} Σ_{j=1}^{n} a_i a_j K(x_i, x_j) ≤ 0.
Next, we shall be using part of Lemma 2.1 on page 74 of [1], which we state here for convenience as Lemma VII.2.
Lemma VII.2. Let K be a symmetric function on X × X. Then, for any x_0 ∈ X, the function K̃(x, y) := K(x, x_0) + K(y, x_0) − K(x, y) − K(x_0, x_0) is positive definite if and only if K is conditionally negative definite.
The following lemma can be proved just as Corollary 2.1 in [9].
Lemma VII.3. Let H : R^d × R^d → R be a symmetric positive definite function in the sense of Definition VII.1. Assume that H is continuous and EH²(X, X′) < ∞, where X and X′ are i.i.d. F. Then E(g(X)H(X, X′)g(X′)) ≥ 0 for all g ∈ L²(R^d, F), i.e., H is L²-positive definite in the sense of [9].
We recall that an operator L on L²(R^d, F) is called positive definite if, for all g ∈ L²(R^d, F), ⟨g, Lg⟩ ≥ 0.
Proposition VII.1. Let ϕ : R^d × R^d → R be a symmetric continuous function that is conditionally negative definite in the sense of Definition VII.2. Assume that ϕ(x, x) = 0 for all x ∈ R^d and Eϕ²(X, X′) < ∞. Then ϕ defines a positive definite operator L on L²(R^d, F) given by

Lg(x) = −∫_{R^d} h(x, y)g(y) dF(y), x ∈ R^d, g ∈ L²(R^d, F),

where h is defined as in (7). Furthermore, the operator L̃ on L²(R^d, F) given by

L̃g(x) = −∫_{R^d} (h(x, y) + Eϕ(X, X′))g(y) dF(y), x ∈ R^d, g ∈ L²(R^d, F),

is also a positive definite operator on L²(R^d, F).
Since ϕ is assumed to be conditionally negative definite, by Lemma VII.2 (applied with x_0 = u, and noting that ϕ(u, u) = 0) we have that for any fixed u ∈ R^d, the function ϕ(x, y, u) := ϕ(x, u) + ϕ(y, u) − ϕ(x, y) is positive definite in the sense of Definition VII.1. Hence, since ϕ is also assumed to be continuous, by Lemma VII.3, for all g ∈ L²(R^d, F), E(g(X)ϕ(X, X′, u)g(X′)) ≥ 0.
Noting that if U has distribution function F, then Eϕ(x, y, U) = −h(x, y), we get, assuming that X, X′ and U are independent, that E(g(X)ϕ(X, X′, U)g(X′)) = −E(g(X)h(X, X′)g(X′)) ≥ 0.
This implies that whenever, for some i ≥ 1, (λ̃_i, φ̃_i), with λ̃_i ≠ 0, is an eigenvalue and normalized eigenfunction pair of the operator L̃, it is also an eigenvalue and normalized eigenfunction pair of the operator L. Moreover, since the integral operator L is positive definite on L²(R^d, F), this implies that for any such nonzero λ̃_i (where necessarily i ≥ 2),

λ̃_i = −∫_{R^d} ∫_{R^d} φ̃_i(x)(h(x, y) + Eϕ(X, X′))φ̃_i(y) dF(x)dF(y) = −∫_{R^d} ∫_{R^d} φ̃_i(x)h(x, y)φ̃_i(y) dF(x)dF(y) ≥ 0,

which says that the operator L̃ is positive definite on L²(R^d, F).
Next, as in the proof of Proposition VII.1, any eigenvalue and normalized eigenfunction pair (λ̃_i, φ̃_i) = (−λ_i, φ_i), with λ̃_i ≠ 0, i ≥ 1, of the operator L̃_β := −Ã_β is also an eigenvalue and normalized eigenfunction pair of the operator L_β.
We shall apply Theorem 2 of [22] to show that uniformly on compact subsets D of R d ,

Figure 1: Statistical power of the asymptotic (solid line) and E-divisive (dotted line) methods for detecting a change c in mean in a Gaussian signal of length n. The first n/2 points are i.i.d. N(0, 1) and the last n/2 points i.i.d. N(c, 1). The Type I error is also shown (c = 0). Results are averaged over 1 000 trials.

Figure 3: Shared change-point detection in a set of 43 bladder tumor aCGH profiles; four profiles are shown here. The signal is normalized to zero with respect to the normal copy number of two. Signals above or below zero indicate gains or losses in copy number, respectively. Large isolated values indicate amplifications, another important genetic event. The only change-point found for β = .001 is at location 1 724 (solid line). Further shared change-points are found when raising β to .002 (dashed lines) and .005 (dotted lines).

Figure 4: Long-signal change-point detection. Left: a typical 3-dimensional signal with 10 000 points and ten random change-point locations. True change-point locations are shown by solid vertical lines, and the predicted change-points (joint across the three dimensions) by dotted lines. Right: time required to run the algorithm on long signals, averaged over 100 trials.