Shape-Preserving Prediction for Stationary Functional Time Series

This article presents a novel method for the prediction of stationary functional time series, in particular for trajectories that share a similar pattern but display variable phases. Most existing prediction methodologies for functional time series are limited in that they consider only vertical variation (amplitude, scale, or vertical shift). To overcome this limitation, we develop a shape-preserving (SP) prediction method that incorporates both vertical and horizontal variation. One major advantage of the proposed method is its ability to preserve the shape of functions. Moreover, the proposed SP method does not involve unnatural transformations and can be easily implemented with existing software packages. The utility of the SP method is demonstrated in an analysis of non-methane hydrocarbon (NMHC) concentration data. The analysis shows that the SP prediction captures the common pattern better than existing prediction methods while providing competitive prediction accuracy.


Introduction
When continuous-time records are separated into natural consecutive time intervals, such as days, weeks, or years, over which reasonably similar behavior is expected, the resulting functions can be described as a functional time series. For functional time series data, one unit of observation is an observed trajectory. Functional time series exhibit two types of variation: amplitude variation, which corresponds to the size or scale of trajectory features, and phase variation, which accounts for location variation and temporal shifts. In this paper, we analyze curves of non-methane hydrocarbon (NMHC) concentration collected at road level in an Italian city. In Figure 1, 7 consecutive trajectories of NMHC concentration are displayed. Observe that each daily curve has two peaks. The precise timing of the peaks varies across days due to human activity or other environmental factors; this variation in the occurrence times of the peaks can be viewed as phase variation. However, existing work typically considers only the vertical variation (i.e., amplitude), not the variation in phase. An immediate consequence is that the predicted curve may not show the common underlying pattern. To overcome this serious limitation, we develop a novel method for stationary functional time series whose trajectories share a common pattern. Our goal is not only to obtain competitive prediction from the past data through some stationary functional time series model, such as the functional autoregressive model, in terms of mean squared error, but also to preserve the underlying pattern in the predicted curves. Several prediction methods for stationary functional time series are available. Besse et al. [3] proposed a non-parametric kernel predictor. Antoniadis and Sapatinas [1] studied first-order functional autoregressive curve prediction based on a linear wavelet method. Kargin and Onatski [18] introduced the predictive factor method. Aue et al. [2] developed a method that uses multivariate techniques in functional time series prediction. Jiao et al. [16] proposed a partial functional prediction method for cases where the functions to be predicted are partially observed. Other prediction methods for functional time series exist, each with its own advantages. However, their main limitation is that they incorporate only vertical variation among the curves. Hyndman and Shang [11] proposed weighted functional principal component regression and weighted functional partial least squares regression. One attractive feature of their method is its ability to account for changes in the functional shapes over time. However, the geometrically decaying weights restrict the shape of each function to be close to those of the neighboring functions. Hence, while this method works well for processes with slowly evolving functional shape, it might not be suitable for situations where "neighboring" functions have different shapes or where shapes change quickly across curves. Compared to Hyndman's method, the SP method allows fast phase transitions. Some related work assumes that functions are composed of multiple components that repeat themselves over different periods of time (see, e.g., Lin et al. [12] and Lin et al. [13]). The difference between their work and our proposed method is that we assume there is only one common pattern that repeats itself over the same period of time across curves. This, we believe, is more suitable for settings such as the environmental data analyzed in this paper.
When trajectories share a common pattern and meanwhile present phase variation, a typical technique is functional registration. In functional registration, each function is decomposed as f_n(t) = X_n ∘ γ_n(t), where the amplitude function X_n(t) accounts for the vertical variation and the warping function γ_n(t) captures the phase information. However, to the best of our knowledge, methods for functional time series prediction have not incorporated functional registration. The prediction method that we develop here involves the prediction of amplitude functions and warping functions. One of the major challenges is the prediction of warping functions, since they do not lie in a linear space, and thus ordinary linear models are not applicable: warping functions must be monotonically increasing and are constrained to start and end at two fixed values. There are several ways to model warping functions. Generally speaking, all of these methods seek to apply linear models to non-linear objects.
Note that warping functions share similar properties with probability distribution functions: both are non-decreasing and have common starting and ending values. There is a literature on modeling probability density functions. These works typically apply some transformation to the density functions and then employ linear models on the transformed functions. Brumback and Lindstrom [4] proposed a self-modeling method for monotone functions involving the transformation proposed by Jupp [17], which is a bijective map from the space of monotone increasing vectors to Euclidean space. Gervini [8] used the Jupp transformation to study warped functional regression. Petersen and Müller [22] proposed the log quantile density transformation and the log hazard transformation to map a density function into a linear Hilbert space. Kokoszka et al. [20] used the same transformations to predict density functions. Another approach is to exploit the manifold structure of warping functions; some related methods are reviewed here. Cheng and Wu [6] used local linear regression models to study the regression problem in which covariates lie on an unknown manifold. Su et al. [27] employed the transported square-root vector field (TSRVF) to carry out statistical analysis of trajectories on Riemannian manifolds. Dai and Müller [7] developed principal component analysis for sphere-valued functional data; they proposed applying functional principal component analysis (fPCA) to the tangent vectors at the Fréchet mean of the sphere.
However, all of these methods have limitations. One common characteristic of the first kind of method is that the transformations all involve a logarithm, which sometimes dictates the need for an additional re-scaling step (e.g., log(f(Q(t))) and log(f(t)/{1 − F(t)}), where F(t) and Q(t) are the cumulative distribution function and quantile function of the density f(t); see Petersen and Müller [22]). A major limitation of the logarithm is that it either shrinks the variation of large values or exaggerates the variation of values close to 0. In addition, density functions (and warping functions) lie in a non-linear space, and it is unnatural to apply linear models to them directly. Regarding the second framework, one may consider applying linear models to the tangent space of the manifold composed of the square root of slope functions (SRSFs) of warping functions γ(t), defined as √(γ̇(t)). The SRSFs of warping functions lie on an infinite-dimensional sphere, and thus the tangent space has a clear and simple representation. However, the SRSFs of warping functions form only the positive orthant of the sphere (−√(γ̇(t)) is not included), and the SRSFs predicted by linear models may fall in the negative orthant. In addition, this approach still transforms a non-linear space to a linear space, and thus also changes the original variation. All of these problems motivate us to develop a new methodology to predict the stochastic process composed of warping functions.
We develop a novel method that jointly predicts the amplitude and warping functions. The major advantage of our method is that it does not require any unnatural transformation and it keeps the predicted warping functions strictly within their original non-linear space. Specifically, we first implement functional registration to obtain the amplitude and warping functions. To predict the warping functions, we propose a state-space model in which the warping functions are driven by the hidden states of a Markov chain, so there is no need to transform warping functions between linear and non-linear spaces. Spherical K-means clustering, a popular technique for dimension reduction in non-linear spaces, reveals the potential low dimensionality of the warping functions. In the model, finitely many prototypes represent the non-linear space of warping functions, and each warping function is assumed to be the sum of its corresponding prototype and a random error function.
For the prediction of the amplitude function, we develop a varying coefficient operator functional auto-regression model. Varying-coefficient models have been extended to functional data. Sentürk and Müller [24] generalized the functional varying coefficient models to incorporate the influence of recent past values of predictors on current response. Further improvements were reported in Sentürk and Müller [25] which proposed a new representation for varying coefficient functions and introduced a smooth history index function to model the dependence of the response on the recent past values of predictors. Krafty et al. [21] employed a varying coefficient model in the analysis of tumor growth curves. Our varying coefficient model is fully functional, and the states of previous warping functions influence the current coefficient operators. The predicted warping functions and amplitude functions are combined to obtain the final prediction.
In this article, the following issues will be addressed:
1. Since the real states in the state-space model are unknown in practice, the transition probability matrix of the hidden Markov chain has to be estimated through the estimated states instead of the real states. The large-sample behavior of this estimator will be investigated.
2. A method for determining the dimension and order of the varying coefficient operator functional auto-regression model will be developed.
3. We will develop a measure to evaluate the performance of our proposed method in preserving the common pattern across curves.
The rest of the paper is organized as follows. In Section 2, we formulate the model for the stochastic process of warping functions and amplitude functions, the joint prediction procedure of amplitude and phase variation, and discuss how to measure shape similarity. In Section 3, we derive the asymptotic properties of the least squares estimator of the transition matrix in the state-space model. Section 4 displays the results of the simulation study comparing the prediction performance of the SP method and some other competitor methods. In Section 5, we report the results of the analysis on the NMHC concentration. Section 6 concludes the article.
2 Models, Algorithms, and Shape Similarity

Amplitude and phase variation
In this section, we formulate the models for amplitude and phase variation. Let {f_n(t) : n ∈ N} be a stationary functional time series defined on a common probability space (Ω, A, P), where the function index n is discrete and the time index t is continuous. Assume the decomposition f_n(t) = X_n ∘ γ_n(t), where X_n(t) is the amplitude function and γ_n(t) is the warping function, and assume E⟨f_n, f_n⟩ < ∞. Define the mean function and covariance function of the amplitude functions as μ(t) = E[X_n(t)] and K(t, s) = cov(X_n(t), X_n(s)). In practice, μ(t) and K(t, s) are unknown and need to be estimated from a sample {X_1(t), . . . , X_N(t)} by μ̂(t) = N⁻¹ Σ_{n=1}^N X_n(t) and K̂(t, s) = N⁻¹ Σ_{n=1}^N (X_n(t) − μ̂(t))(X_n(s) − μ̂(s)). By Mercer's theorem, K(t, s) and K̂(t, s) admit the decompositions K(t, s) = Σ_m λ_m ν_m(t) ν_m(s) and K̂(t, s) = Σ_m λ̂_m ν̂_m(t) ν̂_m(s), where ⟨ν_{m1}, ν_{m2}⟩ = 0 (m1 ≠ m2) and ‖ν_m‖ = 1, m ∈ N⁺. The warping functions γ_n : [0, 1] → [0, 1] have the following properties: γ_n(0) = 0, γ_n(1) = 1, γ_n is invertible, both γ_n and γ_n⁻¹ are continuous, and the first-order derivative is assumed to exist and satisfy γ̇_n(t) < ∞ for all t ∈ [0, 1]. Let Γ denote the set of all such functions. The square root of slope function (SRSF) of γ_n(t) is defined as s_n(t) = S(γ_n(t)) = √(γ̇_n(t)), and an SRSF s_n(t) can be transformed back into a warping function by γ_n(t) = ∫₀ᵗ s_n²(u) du, so that S(·) is a bijective map. It can be shown that ‖s_n(t)‖ = 1; thus {s_n(t) : n ∈ N} lie on an infinite-dimensional sphere. In practice, only f_n is observed, and we propose to apply a functional registration algorithm to obtain X_n and γ_n. In the following, it is assumed that both {X_n(t) : n ∈ N} and {γ_n(t) : n ∈ N} have already been obtained by functional registration (see, e.g., Ramsay and Silverman [28], Kneip and Ramsay [19], and Srivastava and Klassen [26]).
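As a concrete illustration of the SRSF map S and its inverse, the following minimal numpy sketch (a grid-based toy, not the paper's software) computes s_n = √(γ̇_n) for a simple warping function and checks that ‖s_n‖ = 1 and that integrating s_n² recovers γ_n:

```python
import numpy as np

def srsf(gamma, t):
    """Square-root slope function s(t) = sqrt(d/dt gamma(t)) on a grid."""
    return np.sqrt(np.gradient(gamma, t))

def srsf_inverse(s, t):
    """Recover gamma(t) = integral_0^t s(u)^2 du via the trapezoid rule."""
    increments = np.diff(t) * (s[:-1] ** 2 + s[1:] ** 2) / 2
    return np.concatenate(([0.0], np.cumsum(increments)))

t = np.linspace(0, 1, 1001)
gamma = t ** 2            # a simple warping: gamma(0) = 0, gamma(1) = 1, increasing
s = srsf(gamma, t)

# ||s||^2 = integral of gamma'(t) dt = gamma(1) - gamma(0) = 1
norm_sq = np.sum(np.diff(t) * (s[:-1] ** 2 + s[1:] ** 2) / 2)
gamma_back = srsf_inverse(s, t)
```

Up to discretization error, `norm_sq` is 1 and `gamma_back` coincides with `gamma`, illustrating why the SRSFs of warping functions sit on the unit sphere.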

Functional auto-regression model for amplitude functions
One way to model amplitude functions is by a functional autoregressive (FAR) model. A FAR(q) process is defined by the stochastic recursion X_n(t) − μ(t) = Σ_{j=1}^q Φ_j(X_{n−j} − μ)(t) + ε_n(t), where {ε_n(t) : n ∈ Z} are centered, independent and identically distributed innovations in L²[0, 1], Φ_j(·) : H → H is a bounded linear operator for j = 1, . . . , q, and the operators are such that the recursion has a unique causal solution (see [10], pp. 236). Horváth and Kokoszka [10] developed a sufficient condition for causality of the FAR(1) process, and the result can be extended to the FAR(q) process (q > 1) through the state-space form of the FAR(q) process.
The FAR model is easy-to-implement for the prediction of functional time series. One approach of estimation is to first project functions onto a finite dimensional sub-space spanned by some functional basis, e.g., functional principal components (fPC), then multivariate techniques can be applied without much loss of information (see Aue et al. [2]).
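The projection strategy can be sketched as follows. This is a toy numpy illustration (not the authors' implementation): a two-dimensional score process is simulated, empirical fPCs are computed from the sample covariance on a grid, and a VAR(1) is fitted to the scores by least squares to predict the next curve:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
N, d = 200, 2

# Toy functional AR(1) series on a grid: scores follow AR(1) with coefficient 0.5
basis = np.vstack([np.sin(np.pi * t), np.sin(2 * np.pi * t)])    # (2, 50)
scores = np.zeros((N, 2))
for n in range(1, N):
    scores[n] = 0.5 * scores[n - 1] + rng.normal(0, 0.2, 2)
X = scores @ basis                                               # (N, 50) curves

# Empirical fPCs: top eigenvectors of the sample covariance of the curves
mu = X.mean(axis=0)
Xc = X - mu
eigval, eigvec = np.linalg.eigh(Xc.T @ Xc / N)
V = eigvec[:, ::-1][:, :d]            # grid version of the leading d fPCs
Y = Xc @ V                            # d-variate fPC score vectors

# Fit a VAR(1) on the scores by least squares, then predict the next curve
A, *_ = np.linalg.lstsq(Y[:-1], Y[1:], rcond=None)
X_pred = mu + (Y[-1] @ A) @ V.T
```

Because the true score dynamics here are 0.5 times the identity, the estimated coefficient matrix `A` is close to 0.5·I in any orthonormal score basis.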

State-space model for warping functions
Phase variation, which pertains to the variation of the locations of curve features, is captured by the warping functions {γ_n(t) : n ∈ N}. Since {γ_n(t) : n ∈ N} are defined on an infinite-dimensional non-linear manifold, linear methods are not appropriate for their prediction. Note also that it is computationally intractable to predict warping functions in the infinite-dimensional manifold Γ. Hence, we propose to employ non-linear dimension reduction techniques and develop a state-space model under the following assumptions.
• The process {γ_n(t) : n ∈ N} is driven by a Markov chain that is irreducible, ergodic, and has finitely many states. Each state c_n of the Markov chain is associated with a fixed prototype warping function b_{c_n}(t), and γ_n(t) can be expressed as the sum of one prototype and a random error function u_n(t).
• The random error functions {u_n(t) : n ∈ N} have mean zero; given c_n, u_n(t) is independent of c_m and u_m(t) for m ≠ n; and they are such that the resulting function γ_n(t) is still a warping function.
Suppose the Markov chain has g states; then each state c_n can be represented by a g-dimensional state-indicating row vector ω_n satisfying ω_{n,c_n} = 1 and ω_{n,i} = 0 for i ≠ c_n. Denote by P the transition probability matrix. The state-space model is specified as γ_n(t) = b_{c_n}(t) + u_n(t), with E[ω_{n+1} | ω_n] = ω_n P.
Remark 1. In this state-space model, warping functions are driven by hidden states, and no linear model is needed to account for their functional variability; consequently, there is no need to transform warping functions between linear and non-linear spaces.
Remark 2. One possible concern is identifiability: say, X ∘ γ_1 and X ∘ (γ_0⁻¹ ∘ γ_0) ∘ γ_1 are the same, where γ_0 and γ_1 are two arbitrary different warping functions. However, once the template for functional registration is fixed (e.g., the sample mean), this is not a problem. Specifically, if the template X_0(t) is fixed for {f_n : n ∈ N}, the warping function γ_n = arg min_{γ∈Γ} ‖f_n − X_0 ∘ γ‖ is unique for every n, where ‖·‖ is the metric employed for functional registration; in this paper, we propose to use the Fisher-Rao metric, discussed later.
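In discretized form, the expected one-step state evolution under the hidden Markov chain is simply a vector–matrix product of the state-indicating vector with the transition matrix. A toy numpy example with g = 3 hypothetical states (the matrix below is made up for illustration):

```python
import numpy as np

# Made-up transition matrix for g = 3 hidden states (rows sum to one)
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# State-indicating row vector for c_n = 2, i.e. the second entry is 1
omega_n = np.array([0.0, 1.0, 0.0])

# Expected next state distribution: omega_n P picks out row c_n of P
omega_next = omega_n @ P
```

The product selects the row of P corresponding to the current state, which is exactly the conditional distribution of the next state.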

Estimation of the state-space model
Since the hidden states and the transition probability matrix are unknown in practice, we first estimate the b_j's and ω_n's, and then P. We apply spherical K-means clustering, a widely accepted dimension reduction technique for non-linear spaces, to the SRSFs of the warping functions, and use the cluster centroids as estimators of the SRSFs of the b_j's. The estimators of the b_j's are obtained by applying S⁻¹(·) to the cluster centroids, b̂_j(t) = S⁻¹(p̂_j)(t), where p̂_j(t) is the centroid of the j-th cluster of SRSFs. The classified categories of {s_n(t) : n ∈ N} are taken as the estimated states of {γ_n(t) : n ∈ N}. More details are discussed below.
The standard spherical K-means clustering aims to minimize Σ_n ‖s_n − p_{c_n}‖² over all assignments of objects n to clusters c_n ∈ {1, . . . , g} and over all unit-norm SRSF representations p_1, . . . , p_g of the prototype warping functions. A typical alternating projection and minimization procedure is repeated to obtain the ĉ_n's and p̂_j's.
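A minimal, self-contained version of this alternation (cosine-similarity assignment, renormalized-mean centroid update) might look like the sketch below; it is a simplification of full implementations such as the R package skmeans, with a simple deterministic farthest-point initialization added for reproducibility:

```python
import numpy as np

def spherical_kmeans(S, g, iters=50):
    """Minimal spherical K-means on the unit vectors in the rows of S.

    Assignment maximizes cosine similarity; each centroid is the
    renormalized mean of its members.
    """
    S = S / np.linalg.norm(S, axis=1, keepdims=True)
    centroids = [S[0]]
    for _ in range(1, g):
        sims = np.max(S @ np.array(centroids).T, axis=1)
        centroids.append(S[np.argmin(sims)])      # least similar point so far
    centroids = np.array(centroids)
    for _ in range(iters):
        labels = np.argmax(S @ centroids.T, axis=1)
        for j in range(g):
            members = S[labels == j]
            if len(members):
                m = members.sum(axis=0)
                centroids[j] = m / np.linalg.norm(m)
    return centroids, labels

# Two bundles of nearly aligned unit vectors standing in for SRSFs
rng = np.random.default_rng(2)
A = np.array([1.0, 0.0, 0.0]) + rng.normal(0, 0.05, (50, 3))
B = np.array([0.0, 1.0, 0.0]) + rng.normal(0, 0.05, (50, 3))
S = np.vstack([A, B])
centroids, labels = spherical_kmeans(S, g=2)
```

On this well-separated toy input, the two bundles are recovered exactly and both centroids lie on the unit sphere.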
Let ω̂_n denote the g-dimensional vector whose ĉ_n-th element is 1 and whose remaining elements are zero. Then P is estimated by the least squares method with ω_n replaced by ω̂_n, namely P̂ = arg min_P Σ_{n=2}^N ‖ω̂_n − ω̂_{n−1} P‖². The number of hidden states is unknown in practice, and we propose a cross-validation method in Section 2.4.2 to select g. We assume the selected g is correct and will not distinguish between the selected g and the real number of states. Note that spherical K-means clustering can be implemented with the R function skmeans from the R package skmeans (see Hornik et al. [9]). The estimation procedure is summarized in Algorithm 1:

Algorithm 1 Estimation of the state-space model
Step 1 Obtain the SRSFs of the warping functions, s_n = S(γ_n).
Step 2 Fix the number of states g, apply spherical K-means clustering to {s_n : n ∈ N}, and obtain the cluster centroids {p̂_j : j = 1, . . . , g} and the classified categories {ĉ_n : n ∈ N}.
Step 3 Estimate the transition probability matrix P by least squares from the estimated state sequence {ĉ_n : n ∈ N}.
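The least squares step that turns an estimated state sequence into a transition matrix can be sketched as follows. With one-hot state indicators, least squares reduces to empirical transition frequencies (toy numpy example with an assumed g = 3 and a made-up true matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
g, N = 3, 5000
P_true = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.2, 0.2, 0.6]])

# Simulate a state sequence from the Markov chain
states = np.zeros(N, dtype=int)
for n in range(1, N):
    states[n] = rng.choice(g, p=P_true[states[n - 1]])

# One-hot state-indicating vectors, playing the role of the estimated states
W = np.eye(g)[states]

# Least squares: P_hat = argmin_P sum_n || omega_n - omega_{n-1} P ||^2
P_hat, *_ = np.linalg.lstsq(W[:-1], W[1:], rcond=None)
```

Each row of `P_hat` equals the observed transition frequencies out of that state, so the rows sum to one and `P_hat` approaches `P_true` as N grows.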

Joint Prediction Methodology
After separating amplitude and phase components, it is natural to consider how to predict the two components jointly, as they are not necessarily independent of each other. Because warping functions and amplitude functions are defined in two different spaces, it is necessary to find a common space for these two kinds of functions in the joint prediction. To be more specific, we assume the amplitude and warping functions are jointly driven by a Markov process. The joint modeling procedure is discussed below.

Prediction of warping function
We convert the stochastic process of warping functions into a Markov chain by applying spherical K-means clustering to their corresponding SRSFs, as discussed in Section 2.3. In order to incorporate the correlation between phase and amplitude variation, we assume the same kind of state-space model for the amplitude functions and apply K-means clustering to estimate their hidden states. Similarly, the classified categories are treated as the estimated hidden states. Figure 2 shows the framework, where ω represents the true state, ω̂ represents the estimated state, and the superscripts (a) and (f) refer to amplitude and phase variation, respectively. The two categorical sequences are combined to obtain a new sequence, ω̃_n = ω̂_n^(a) ⊗ ω̂_n^(f), where ⊗ signifies the Kronecker product. We then apply the least squares method to estimate the transition matrix P of this combined estimated Markov chain, where P is a gh × gh matrix, g is the number of states of phase variation, and h is the number of states of amplitude variation. The predicted state is ω̃_{N+1} = ω̃_N P̂, from which the predicted probabilities ω̂_{N+1}^(f) of the phase states are obtained, and the predicted warping function is γ̂_{N+1}(t) = Σ_j ω̂_{N+1,j}^(f) b̂_j(t). As a side note, ω̂_{N+1}^(f) is not a state-indicating vector but a vector of probabilities summing to one. It can easily be shown that γ̂_{N+1}(t) is a warping function, since it is a convex combination of warping functions.
Remark 3. When the sample size is small, some ad-hoc adjustments might be needed to make P̂ satisfy the constraints of a transition matrix. One approach is to obtain the transition matrix by solving the optimization problem P̂ = arg min_{P ∈ M} ‖P − P̂_LS‖_F, where M is the set of all probability transition matrices, ‖·‖_F is the Frobenius norm, and P̂_LS is the original least squares estimator of P.
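The combination of the two estimated state sequences and the resulting warping prediction can be sketched as follows (toy numpy example; g and h denote the numbers of phase and amplitude states, and two simple monotone curves stand in for the prototype warpings b̂_j):

```python
import numpy as np

rng = np.random.default_rng(3)
g, h, N = 2, 2, 2000
t = np.linspace(0, 1, 101)

# Toy estimated state sequences for phase (f) and amplitude (a)
c_f = rng.integers(0, g, N)
c_a = rng.integers(0, h, N)

# Combined one-hot indicators via the Kronecker product
W = np.stack([np.kron(np.eye(h)[c_a[n]], np.eye(g)[c_f[n]]) for n in range(N)])

# Least squares estimate of the (g*h) x (g*h) transition matrix
P_hat, *_ = np.linalg.lstsq(W[:-1], W[1:], rcond=None)

# One-step-ahead probabilities over the combined states
omega_next = W[-1] @ P_hat

# Marginal phase-state probabilities, then the predicted warping as a
# convex combination of (stand-in) prototype warping functions
p_phase = omega_next.reshape(h, g).sum(axis=0)
prototypes = np.vstack([t, t ** 2])       # both rows are valid warpings
gamma_hat = p_phase @ prototypes
```

Since the weights are non-negative and sum to one, the convex combination `gamma_hat` again starts at 0, ends at 1, and is increasing, i.e., it is itself a warping function.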

Data-driven selection of the number of states
To the best of our knowledge, there is no widely accepted procedure for order selection in hidden Markov models. The selection of the number of states is a trade-off between bias and variance: a large number of states decreases the approximation error of the prototype warping functions but increases variance, since more parameters must be estimated. Considering that our purpose is prediction, we propose an approach based on prediction error, evaluated by the L² mean squared error and the amplitude distance (Section 2.5.2). Assume there is a large test data set D_test that is an independent copy of the data set used for model fitting. We use the first 80% of the curves in D_test to fit a model with g phase states and h amplitude states, predict the remaining 20% of the curves with the fitted model, and then calculate the mean squared error and the average amplitude distance between the predicted curves and the curves to be predicted. These two errors are then used for order selection. In practice, the sample size may be limited, and it may not be possible to reserve a large fraction of the data for testing. In this case, Monte-Carlo cross-validation is a good alternative: a fraction of consecutive curves is selected as the training set and the remaining curves are used for testing, and this procedure is repeated multiple times with randomly chosen partitions. A group of candidate state numbers is preset, the two average errors are computed for each candidate, and the state numbers are selected so that both errors are satisfactory.
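A schematic of the Monte-Carlo cross-validation loop is given below. The "model" is a deliberately simplified stand-in (an average of the last k curves) in place of the full SP fit, and the hypothetical hyperparameter k plays the role of the pair (g, h); only the resampling structure over consecutive training blocks reflects the procedure described above:

```python
import numpy as np

rng = np.random.default_rng(4)

def mc_cv_error(curves, fit_predict, n_splits=20, train_frac=0.8):
    """Monte-Carlo cross-validation for time-ordered curves.

    Each split keeps a random *consecutive* block for training and
    predicts the curve immediately following it one step ahead.
    """
    N = len(curves)
    n_train = int(train_frac * N)
    errs = []
    for _ in range(n_splits):
        start = rng.integers(0, N - n_train - 1)
        train = curves[start:start + n_train]
        target = curves[start + n_train]
        pred = fit_predict(train)
        errs.append(np.mean((pred - target) ** 2))
    return float(np.mean(errs))

# Toy data: noisy copies of a fixed shape
t = np.linspace(0, 1, 30)
curves = np.sin(2 * np.pi * t) + rng.normal(0, 0.1, (100, 30))

errors = {k: mc_cv_error(curves, lambda tr, k=k: tr[-k:].mean(axis=0))
          for k in (1, 5, 20)}
best_k = min(errors, key=errors.get)
```

In the actual procedure, `fit_predict` would be the full SP pipeline and a second error dictionary would track the amplitude distance alongside the mean squared error.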

Prediction of amplitude function
We now develop the FAR model with varying coefficient operators for the prediction of amplitude functions. The coefficient operators are determined by the state of the previous warping function. Define Y_n(t) = X_n(t) − μ(t) and let c_n^(f) be the hidden state of γ_n. The proposed model has the representation Y_{n+1}(t) = Σ_{j=1}^q Φ_j^(c_n^(f))(Y_{n+1−j})(t) + ε_{n+1}(t), where {ε_n : n ∈ N} are centered, independent and identically distributed innovations in L²[0, 1], and {Φ_j^(k) : j = 1, . . . , q; k = 1, . . . , g} are bounded linear coefficient operators. For each state k, let N_k denote the number of indices n for which γ_n is in state k. The estimates of {Φ_j^(k)} are obtained by minimizing the sum of squared residuals Σ_{n: c_n^(f)=k} ‖Y_{n+1} − Σ_{j=1}^q Φ_j^(k)(Y_{n+1−j})‖² over the pairs whose previous warping function lies in state k. After projecting all functional elements onto the sub-eigenspace spanned by the leading functional principal components of {X_n(t) : n ∈ N⁺}, multivariate techniques can be applied to estimate {Φ_j^(k)}. The entire joint prediction procedure is summarized in Algorithm 2.
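After projection onto the fPC scores, the state-dependent estimation amounts to fitting a separate least squares regression within each phase state. A toy numpy sketch (with q = 1 and small matrices standing in for the operators Φ_j^(k); the coefficient values and state sequence are made up):

```python
import numpy as np

rng = np.random.default_rng(5)
N, d, g = 3000, 2, 2

# Ground truth: the AR coefficient of the fPC scores depends on the
# state of the *previous* warping function
Phi = [0.8 * np.eye(d), -0.5 * np.eye(d)]
states = rng.integers(0, g, N)            # toy phase-state sequence
Y = np.zeros((N, d))
for n in range(N - 1):
    Y[n + 1] = Y[n] @ Phi[states[n]].T + rng.normal(0, 0.1, d)

# Estimate Phi^(k) by least squares over the pairs with gamma_n in state k
Phi_hat = []
for k in range(g):
    idx = np.where(states[:-1] == k)[0]
    A, *_ = np.linalg.lstsq(Y[idx], Y[idx + 1], rcond=None)
    Phi_hat.append(A.T)

# One-step-ahead prediction uses the coefficient of the last observed state
Y_pred = Y[-1] @ Phi_hat[states[-1]].T
```

Splitting the sample by the previous state recovers both coefficient matrices, which a single pooled regression would blur together.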

Algorithm 2 Joint prediction algorithm (one-step ahead)
Step 1 Apply functional registration algorithm to obtain the amplitude and warping functions.
Step 2 Apply spherical K-means clustering to the SRSFs of the warping functions and K-means clustering to the amplitude functions to obtain the estimated states. Combine the two state sequences, fit a Markov model, and obtain the prediction of the next warping function γ̂_{N+1} from the state-space model.
Step 3 Obtain the prediction of the next amplitude function, Ŷ_{N+1}, based on the FAR model with varying coefficient operators.
Step 4 Warp Ŷ_{N+1} + μ̂ by γ̂_{N+1} to obtain the final prediction, f̂_{N+1}(t) = (Ŷ_{N+1} + μ̂) ∘ γ̂_{N+1}(t).
Remark 4. The predictor above uses the single coefficient operator selected by the estimated state, i.e., a "hard" assignment. In practice, a weighted predictor can also be considered, which averages the state-specific predictions with weights P(ĉ_N = k). The weighted predictor typically has smaller variance but larger bias. The probabilities of the states P(ĉ_N = k) need to be estimated under some model.

Parameter selection
Now we develop the functional final prediction error (fFPE) criterion to select the order q and the dimension d of the sub-eigenspace for the prediction of amplitude functions. Create the d-variate fPC score vector Y_n = (y_{n,1}, . . . , y_{n,d})′, where y_{n,m} = ⟨Y_n, ν_m⟩ = ⟨X_n − μ, ν_m⟩. Since the eigenfunctions are orthogonal and the fPC scores are uncorrelated for each Y_n, the mean squared prediction error can be decomposed as E‖Y_{N+1} − Ŷ_{N+1}‖² = E‖Y_{N+1} − Ŷ_{N+1}‖²_{R^d} + Σ_{m>d} λ_m, where ‖·‖ denotes the L²-norm and ŷ_{N+1,m} is the prediction of y_{N+1,m} from the past d-variate fPC score vectors. For the first summand, assume {Y_n : n ∈ N} follows a d-variate VAR(q) process (see Aue et al. [2] for the justification of the VAR process) with error term Z_n, coefficient vector β_k for warping state k, least squares estimator β̂_k, error covariance matrix Σ_{Z,k}^d, and Γ_{q,k} = var(vec([Y_q′, . . . , Y_1′]′)). For any state k, it can be shown (see, e.g., Lütkepohl [14]) that the first summand is approximately (N + qd)/N · tr(Σ_{Z,k}^d). Assuming the classification is correct and plugging in the unbiased estimator Σ̂_{Z,k}^d of Σ_{Z,k}^d, the fFPE criterion is given by fFPE(q, d) = (N + qd)/(N − qd) · tr(Σ̂_{Z,k}^d) + Σ_{m>d} λ̂_m. We propose to select q and d by minimizing fFPE(q, d).
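The selection rule is easy to tabulate once the residual trace and the eigenvalue tail are available. A toy sketch of the criterion over a small (q, d) grid (the residual traces here are made-up stand-ins, not estimated from data):

```python
import numpy as np

def ffpe(N, q, d, tr_sigma_z, eigvals):
    """functional FPE: penalized residual trace plus truncated eigenvalue tail.

    tr_sigma_z : trace of the residual covariance of the fitted d-variate VAR(q)
    eigvals    : estimated fPC eigenvalues, in decreasing order
    """
    return (N + q * d) / (N - q * d) * tr_sigma_z + eigvals[d:].sum()

# Toy numbers: fast eigenvalue decay, residual trace shrinking with d and q
eigvals = np.array([1.0, 0.5, 0.05, 0.01, 0.005])
N = 200
scores = {}
for q in (1, 2):
    for d in (1, 2, 3):
        tr_sz = eigvals[:d].sum() * 0.2 / q     # stand-in residual trace
        scores[(q, d)] = ffpe(N, q, d, tr_sz, eigvals)
best_q, best_d = min(scores, key=scores.get)
```

The criterion balances the VAR penalty (N + qd)/(N − qd), which grows with q and d, against the truncation loss Σ_{m>d} λ̂_m, which shrinks with d.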

Functional shape space
One of the main questions considered in this article is: what is a good measure of shape similarity? To compare the shapes of different trajectories, we need to formally define the functional shape space E and a way to evaluate shape similarity. Here, we follow the convention that shape is independent of scale and location. We first rescale and relocate the functions so that they have unit norm and start at the same value, and then study the shape differences within the resulting set. The resulting space is termed the pre-shape space.
Suppose there are two functions f_1 and f_2, with corresponding transformations f̃_1 and f̃_2 in the pre-shape space. We propose the principle that, if f̃_1 can be warped into f̃_2, the two functions f_1 and f_2 are considered to be of the same shape. This idea is motivated by shape data analysis (see, e.g., Srivastava and Klassen [26]), in which stretching, rotating, or relocating does not change the shape of planar objects. As a motivating example, given a planar contour delineating a human hand, stretching, rotating, and relocating the contour will not change the shape of the hand.
In the functional shape space, we unify the shape representations, that is, we collect all points in the pre-shape space that represent the same shape. The functional shape space E is therefore defined as the quotient space of L²[0, 1] with respect to relocation, rescaling, and warping. We define the equivalence relation ≡ as follows: let f̃_1, f̃_2 be the pre-shape elements of two functions f_1, f_2; then f_1 ≡ f_2 if there exists a warping function γ such that f̃_1 = f̃_2 ∘ γ. For any function f_0, the set of all functions whose pre-shape transformations can be warped into f̃_0 is considered as a single object in the functional shape space E, that is, [f_0] = {f : f̃ = f̃_0 ∘ γ for some γ ∈ Γ}.
We shall use the amplitude distance (2-1), which has been shown to be a proper distance in the functional shape space, to measure shape similarity: d_a(f_1, f_2) = inf_{γ∈Γ} ‖q_1 − (q_2 ∘ γ)√γ̇‖, where q_1 and q_2 are the SRSFs of the two functions. If two functions are of the same shape, then the amplitude distance between them is zero. One important property of the Fisher-Rao metric is its invariance under simultaneous warping: the geodesic distance under the Fisher-Rao metric is unchanged when both functions are warped by the same warping function. Therefore, phase variation does not influence the amplitude distance between two functions, and the amplitude distance between two shape objects is unique (see [26], pp. 85-88).
Remark 5. In this paper, we use both the amplitude distance and the Euclidean distance to evaluate the prediction. Neither distance alone can fully evaluate the prediction, as we consider both amplitude and phase variation.
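The invariance under simultaneous warping can be checked numerically. The sketch below (a toy check, not the paper's registration code) computes SRSFs on a grid and verifies that the L² distance between two SRSFs is, up to discretization error, unchanged when both functions are warped by the same γ:

```python
import numpy as np

t = np.linspace(0, 1, 4001)

def srsf(f):
    """SRSF of a function on the grid: sign(f') * sqrt(|f'|)."""
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

def l2_norm(h):
    """Trapezoid-rule L2 norm on the grid."""
    return np.sqrt(np.sum(np.diff(t) * (h[:-1] ** 2 + h[1:] ** 2) / 2))

def group_action(q, gamma):
    """SRSF of f o gamma, i.e. (q o gamma) * sqrt(gamma')."""
    return np.interp(gamma, t, q) * np.sqrt(np.gradient(gamma, t))

f1 = np.sin(2 * np.pi * t)
f2 = 4 * t * (1 - t)
q1, q2 = srsf(f1), srsf(f2)

gamma = (t + t ** 2) / 2      # a warping with slope bounded away from zero

d_before = l2_norm(q1 - q2)
d_after = l2_norm(group_action(q1, gamma) - group_action(q2, gamma))
```

In the continuum, `d_before` and `d_after` are exactly equal; on the grid they agree to within a small discretization error, which is why minimizing over γ in (2-1) yields a quantity unaffected by phase.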

Theoretical Results
The least squares method is employed to estimate the unknown transition probabilities, and we aim to derive the asymptotic properties of the estimator. It is known that the least squares estimator of the transition matrix of a Markov chain is consistent and asymptotically normal (see van der Plas [29]). However, since the real hidden states must themselves be estimated, the least squares estimator of the transition matrix is not necessarily consistent for P. To identify the matrix for which the estimator is consistent, the following assumptions are needed.
A1. The Markov chain {ω_n : n ∈ N} is stationary and ergodic, and has finitely many states;
A2. The estimated prototypes are obtained from an independent copy of the observations, and, given the real state ω_n, the estimated state ω̂_n is independent of σ({ω_m, ω̂_m : m ≠ n}), where σ(X) signifies the σ-algebra induced by X;
A3. The conditional distribution of ω̂_n given ω_n does not depend on n.
Note that Assumption (A2) is compatible with the assumption on the error term u_n of the state-space model: under the model assumptions, the estimated state ω̂_n depends only on the real state ω_n and the random error u_n, so Assumption (A2) is a natural consequence of the assumption on u_n. Assumption (A2) means that, given the corresponding real state, the estimated state is independent of all other states. This is reasonable, since as the sample size grows large enough, the estimated prototype functions tend to be uncorrelated with any individual function. Assumption (A3) guarantees a constant transition probability matrix for the estimated states.
Bayes' theorem implies the following proposition.
Remark 6. Proposition 1 gives the transition probability matrix of the estimated Markov chain.
Next we show that the least squares estimator is consistent for the transition matrix of the estimated Markov chain given in Proposition 1. Notationally, let L_N(P) = N⁻¹ Σ_{n=2}^N ‖ω̂_n − ω̂_{n−1} P‖²; we then develop the following theorem for the least squares estimator P̂_N, which generalizes the result of van der Plas [29]. In order to establish the asymptotic normality of P̂_N, we make one additional assumption and introduce some further notation; the asymptotic normality of P̂_N is established in Theorem 2.

Remark 7. The estimator of the transition probability matrix is consistent and asymptotically normal. Therefore, it is safe to use the SP method for prediction, as the estimator behaves stably in large samples.

Simulations
Finite sample simulations were implemented to illustrate the effectiveness of the SP method. The method was tested on a FAR(1) process with phase variation. In each simulation run, 300 (or 600) functions were simulated; the first 90% of the simulated trajectories were used for model fitting, and one-step-ahead predictions were made for the remaining 10% of the trajectories by a moving-window approach. Each simulation setting was repeated 200 times. We compared our method with the prediction method of Aue et al. [2], which does not incorporate functional registration, using two distances: the L² Euclidean distance (L²) and the amplitude distance (FR). We also compared our proposed state-space model with the transformation methods for the prediction of warping functions.

Simulation of warping function
Based on the properties of B-splines (de Boor [5]), we developed the following procedure to simulate warping functions. We first generated four prototype warping functions with 7 B-splines, whose B-spline scores were generated as follows:
1. Four 6-variate vectors with positive elements, ξ_i = (ξ_{i1}, . . . , ξ_{i6}), i = 1, 2, 3, 4, were specified.
2. Each vector obtained in the first step was transformed into an increasing sequence (φ_{i2}, . . . , φ_{i7}) with φ_{i7} = 1, and a zero was then prepended to each sequence to finalize the score vectors of the prototype warping functions.
Following the above procedure, the four score vectors $\phi_i = (\phi_{i1}, \ldots, \phi_{i7})$ were constrained to satisfy $\phi_{i1} = 0$, $\phi_{i7} = 1$, and $\phi_{i1} < \phi_{i2} < \ldots < \phi_{i7}$. The prototypes were then generated with the 7 B-splines as $b_i(t) = \sum_{j=1}^{7} \phi_{ij} B_j(t)$, $t \in [0, 1]$. The independent error warping functions, denoted by $\gamma^e_n$, were simulated through the same procedure, that is, by first simulating a 6-dimensional vector $\xi^e_n$, then transforming it to $\phi^e_n$, and taking the B-spline expansion. The elements of $\xi^e_n$ independently follow the uniform distribution $U[1, 2]$. The states of the warping functions were simulated from a Markov process with four states, whose probability transition matrix has the following representation. Each state is associated with a prototype. The final warping functions were obtained by $\gamma_n(t) = (1 - \tau) b_{c^{(f)}_n}(t) + \tau \gamma^e_n(t)$, where $0 < \tau < 1$ determines the proportion of signal and $c^{(f)}_n$ is the simulated state of the $n$-th warping function. Figure 3 displays the simulated warping functions and the prototypes.
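A minimal sketch of this simulation step follows. The exact transformation from $\xi_i$ to $\phi_i$ is not reproduced in the text; a normalized cumulative sum is one natural choice that satisfies the stated constraints, and clamped uniform knots are assumed for the B-spline basis.

```python
import numpy as np
from scipy.interpolate import BSpline

def scores_from_xi(xi):
    # Map a positive 6-vector xi to scores phi with phi[0] = 0, phi[-1] = 1
    # and phi strictly increasing. The paper's exact transformation is not
    # shown; a normalized cumulative sum satisfies these constraints.
    return np.concatenate(([0.0], np.cumsum(xi) / np.sum(xi)))

def bspline_warping(phi, degree=3):
    # Build a warping function from 7 B-spline basis functions on [0, 1],
    # assuming clamped uniform knots (7 coefficients require 11 knots here).
    n_interior = len(phi) - degree - 1
    knots = np.concatenate((np.zeros(degree + 1),
                            np.linspace(0.0, 1.0, n_interior + 2)[1:-1],
                            np.ones(degree + 1)))
    return BSpline(knots, phi, degree)

rng = np.random.default_rng(0)
xi_e = rng.uniform(1.0, 2.0, size=6)        # U[1,2] scores, as for gamma_e_n
gamma_e = bspline_warping(scores_from_xi(xi_e))
# Blending with a prototype, gamma_n = (1 - tau) * b_c + tau * gamma_e,
# keeps the endpoints fixed and preserves monotonicity, since a convex
# combination of increasing functions with common endpoints is again one.
```

Because the coefficients are strictly increasing with endpoints 0 and 1, the resulting spline is itself an increasing bijection of $[0, 1]$, i.e. a valid warping function.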

Simulation of amplitude function
Amplitude functions were simulated with the same 7 B-splines, where the scores of the third and the fifth B-splines are significantly larger than those of the other B-splines. Tables 1 and 2 display the average $L^2$ prediction error ($L^2$, defined as $\sum_n \|\hat{f}_n - f_n\|_2 / (0.1N)$) and the amplitude difference (FR) between the predicted functions and the corresponding actual functions for $p = 0.5, 0.7, 0.9$ and $N = 300, 600$.

Second simulation setup
In the second setup, the amplitude functions were simulated similarly, with the single coefficient matrix $\Phi = 0.8 I_2$. The major difference lies in the simulation of the warping functions: in this setup, the same procedure was applied to simulate a sequence of independent warping functions. Table 3 shows the average $L^2$ prediction error and amplitude distance between the predicted curves and the corresponding real curves for different values of $\beta$ and $N$.
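After projection onto a finite basis, the amplitude simulation reduces to a vector AR(1) recursion on the basis scores. A minimal sketch with $\Phi = 0.8 I_2$ follows; the Gaussian noise and its scale are assumptions, as the error distribution is not restated here.

```python
import numpy as np

def simulate_far1_scores(n, Phi, noise_sd=0.1, seed=0):
    # Vector AR(1) recursion x_n = Phi @ x_{n-1} + eps_n on the basis
    # scores; a FAR(1) process projected onto a finite basis takes this
    # form. Noise is assumed Gaussian for illustration.
    rng = np.random.default_rng(seed)
    d = Phi.shape[0]
    x = np.zeros((n, d))
    for i in range(1, n):
        x[i] = Phi @ x[i - 1] + rng.normal(scale=noise_sd, size=d)
    return x

scores = simulate_far1_scores(600, 0.8 * np.eye(2))
# Each row of `scores` gives the coefficients of one amplitude function
# in the chosen B-spline expansion.
```

Since the spectral radius of $\Phi$ is 0.8 < 1, the recursion is stationary, matching the stationarity assumption of the prediction method.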

Discussion on the simulations
In the first simulation setting, the optimal number of hidden states of the warping functions is 4. Therefore, as $g$ changes from 3 to 4, the performance of the SP method improves significantly. Tables 1-3 show that (1.) the SP method preserves the common pattern better after incorporating functional registration into prediction, and (2.) the performance of the SP method is robust to the selection of the number of hidden states. When the phase variation is difficult to predict, the prediction by the SP method may not be as accurate as the prediction without functional registration. However, if the shape of the curve to be predicted is of major concern, the SP method remains a good alternative.

Comparison with logarithm transformation methods
As discussed in the introduction, one feasible prediction approach for warping functions is to predict transformed warping functions. Such methods typically transform the highly constrained warping functions into unconstrained functions, and linear models are then employed to predict the transformed functions. The transformations in such methods always involve a "logarithm", which shrinks or exaggerates the original variation. To show the superiority of the state-space model approach, we compared it with two transformation methods.
The first method applies the Jupp transformation to discretized warping functions; it typically requires fine grids so that the discretized vectors capture the major features of the warping functions.
The second method is a functional approach. Similar to the transformations employed in Petersen and Müller [22], the transformation applied in this method has a strict inverse only modulo a quotient space; specifically, two functions $f_1(t)$ and $f_2(t)$ defined over $[0, 1]$ are equivalent if $f_1(t)/f_1(1) = f_2(t)/f_2(1)$. The transformation $\psi(\cdot)$ and its inverse are given as follows: The prediction method proposed by Aue et al. [2] was applied to predict the future transformed functions, and the prediction was then transformed back to a warping function via $\psi^{-1}(\cdot)$.
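The exact form of $\psi(\cdot)$ is not reproduced here. As a hedged illustration, a log-derivative transformation is one choice consistent with the quotient structure just described, since its inverse recovers a warping function only after normalizing by the value at 1:

```python
import numpy as np

def psi(gamma, t):
    # One transformation consistent with the stated equivalence (an
    # assumption; the paper's exact psi is not reproduced here): map a
    # warping function to the unconstrained r(t) = log(gamma'(t)).
    return np.log(np.gradient(gamma, t))

def psi_inv(r, t):
    # Inverse modulo f1(t)/f1(1) = f2(t)/f2(1): integrate exp(r) (left
    # Riemann sum on the uniform grid t) and normalize, so the output is
    # again a warping function on [0, 1].
    dt = t[1] - t[0]
    g = np.concatenate(([0.0], np.cumsum(np.exp(r[:-1])) * dt))
    return g / g[-1]

t = np.linspace(0.0, 1.0, 401)
gamma = (t + t**2) / 2.0                 # a warping function with gamma' > 0
gamma_back = psi_inv(psi(gamma, t), t)   # recovers gamma up to discretization
```

Note how the normalization step is exactly where the "modulo the quotient space" caveat enters: any two functions proportional to each other map back to the same warping function.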
Here, 500 warping functions were simulated. In the Jupp transformation method (JP), the warping functions were evaluated at 10 equally-spaced grid points between 0 and 1. In the functional transformation approach (FT), 10 functional principal components were employed to represent the functions $\{r_n(t) : n = 1, \ldots, 500\}$. In our state-space model method (MC), 10 prototypes were selected. The prediction error of $\hat{\gamma}(t)$ was evaluated with the spherical geodesic distance $d(\hat{\gamma}(t), \gamma(t)) = \cos^{-1}(\langle S(\hat{\gamma}(t)), S(\gamma(t)) \rangle)$. Table 4 displays the average prediction errors of the different methods under different values of $q$. When $q$ is large (close to 1), the middle part of the simulated warping functions is more likely to be flat, that is, $\dot{\gamma}_n \approx 0$, and the "logarithm" transformations are more likely to exaggerate the original variation. Table 4 shows that the MC method is superior to the other two competitor methods, especially when the warping functions are flat over some intervals.
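The evaluation metric can be sketched as follows, taking the SRSF of a warping function as $S(\gamma) = \sqrt{\dot{\gamma}}$ and approximating the $L^2$ inner product by the trapezoid rule on a uniform grid (a minimal sketch, not the paper's implementation):

```python
import numpy as np

def srsf(gamma, t):
    # Square-root slope function of a warping function: S(gamma) = sqrt(gamma').
    # The clip guards against tiny negative derivatives from discretization.
    return np.sqrt(np.clip(np.gradient(gamma, t), 0.0, None))

def geodesic_distance(g1, g2, t):
    # Spherical geodesic distance d = arccos(<S(g1), S(g2)>); the L^2 inner
    # product is approximated by the trapezoid rule on the uniform grid t.
    f = srsf(g1, t) * srsf(g2, t)
    inner = (t[1] - t[0]) * (f.sum() - 0.5 * (f[0] + f[-1]))
    return float(np.arccos(np.clip(inner, -1.0, 1.0)))

t = np.linspace(0.0, 1.0, 201)
print(geodesic_distance(t, t, t))     # identical warpings: distance near 0
print(geodesic_distance(t, t**2, t))  # distinct warpings: positive distance
```

Since $\int_0^1 \dot{\gamma}(t)\,dt = 1$ for any warping function, each SRSF has unit norm, so the inner product lies in $[0, 1]$ and the arccosine is the geodesic arc length on the unit sphere.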

Analysis of Pollution Concentration Trajectories
The SP method was applied to predict air quality trajectories (De Vito et al. [23]). The data are available at the UCI Machine Learning Repository. The dataset contains hourly averaged observations collected from 5 metal oxide chemical sensors embedded in an air quality chemical multi-sensor device. The device was placed at road level in a significantly polluted area of an Italian city. The pollution concentration was recorded from March 2004 to February 2005 (one year). Here we analyzed the Non Metanic Hydrocarbons (NMHC) concentration data since it contains fewer missing values.
The pollution concentration is highly influenced by the traffic flow, so the trajectories share a common two-peak pattern (one peak in the morning and one in the afternoon). However, humidity, wind speed, temperature, and other environmental factors can also influence the concentration, so the trajectories display phase variation. Figure 5 displays the smoothed NMHC concentration trajectories and the registered trajectories. The trajectories of the weekdays are marked in black, those of Saturdays in blue, and those of Sundays in red. Figure 6 displays the (prototype) warping functions. As the weekday trajectories have a different mean from the weekend trajectories, the amplitude functions were centered using the mean of each day of the week. After removing the days with too many missing values, 357 trajectories remain in total. We found that the SP method produced the overall best prediction when the amplitude and warping functions were predicted separately. The first 300 curves were used to train the model for predicting the remaining 57 curves, and FAR(1) models were fitted to predict the amplitude functions of the trajectories to be predicted. The SP method was compared with the prediction method without functional registration (Aue et al. [2]). Table 5 displays the average $L^2$ prediction error and amplitude distance between the predicted curves and the corresponding smoothed curves under different numbers of states $g$ and dimensions of the eigen-space $d$. It is noted that the SP method sacrifices marginal prediction accuracy to preserve the common two-peak pattern of the functional time series. The best SP prediction is achieved when $d = 3$, $g = 10$, while the competitor method reaches its best performance when $d = 7$. This is because the functional registration step ensures that fewer fPCs are needed to capture most of the vertical variation.

Conclusions
In this paper, we develop a new prediction method for stationary functional time series that display a common pattern. To the best of our knowledge, our SP method is the first to incorporate functional registration into prediction of functional time series. The prediction algorithm jointly predicts the amplitude and phase components. These two predicted components are then combined to form the final prediction.
The SP method has two main advantages. First, if the curves display a common pattern and significant phase variation, considering vertical variation only would lead to the loss of the main features; by separating the amplitude and phase components first, the SP method preserves the pattern better. Second, the SP method is "natural" in the following sense: (1.) $S(\cdot)$ is a bijective transformation, so no further adjustments are needed to transform an SRSF back to a warping function, which avoids bias; (2.) the method does not directly apply linear models to non-linear objects, making the prediction natural and avoiding the extremely small values resulting from the "logarithm". The simulation study and the real data analysis of the Non Metanic Hydrocarbons concentration data show that the SP method is superior to prediction methods without registration in capturing the common pattern of trajectories, while producing predictions with competitive accuracy. In this paper, it is assumed that the pattern (shape) repeats with a fixed period. However, in some cases, such as certain biomedical or physical signals, a signal may be composed of multiple components, each repeating itself at a different rate. The extension of the SP method to such cases is a research topic that will be pursued in the future.