A general set-up is proposed to study stochastic volatility models. We consider here a two-dimensional diffusion process and assume that only the first coordinate is observed at discrete times with a regular sampling interval. The unobserved coordinate is an ergodic diffusion which rules the diffusion coefficient (or volatility) of the observed one. The following asymptotic framework is used: the sampling interval tends to zero, while the number of observations and the length of the observation time tend to infinity. We study the empirical distribution associated with the observed increments. We prove that it converges in probability to a variance mixture of Gaussian laws and obtain a central limit theorem. Examples of models widely used in finance, and included in this framework, are given.
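As a concrete illustration of this framework, the following sketch simulates a Heston-type model (a hypothetical choice; the parameters kappa, theta, xi are illustrative, not from the paper) by an Euler scheme and examines the normalized increments, whose empirical law should approximate a variance mixture of Gaussians with mixing variance given by the volatility process:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Heston-type example: dY = sqrt(V) dW,  dV = kappa*(theta - V) dt + xi*sqrt(V) dB
kappa, theta, xi = 2.0, 0.04, 0.2
n, delta = 100_000, 1e-3              # sampling interval small, n*delta large
dW = rng.normal(0.0, np.sqrt(delta), n)
dB = rng.normal(0.0, np.sqrt(delta), n)
V = np.empty(n + 1); V[0] = theta     # start the volatility at its stationary mean
Y = np.empty(n + 1); Y[0] = 0.0
for i in range(n):
    V[i + 1] = max(V[i] + kappa * (theta - V[i]) * delta + xi * np.sqrt(V[i]) * dB[i], 1e-8)
    Y[i + 1] = Y[i] + np.sqrt(V[i]) * dW[i]

# Normalized increments: each is approximately N(0, V_i), so their empirical
# distribution is close to a variance mixture of Gaussian laws.
Z = np.diff(Y) / np.sqrt(delta)
print(Z.var())   # close to theta, the stationary mean of V
```

The empirical variance of the normalized increments approximates the stationary mean of the volatility process, consistent with the mixture limit described in the abstract.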
The block bootstrap for time series consists in randomly resampling blocks of consecutive values of the given data and aligning these blocks into a bootstrap sample. Here we suggest improving the performance of this method by aligning with higher likelihood those blocks which match at their ends. This is achieved by resampling the blocks according to a Markov chain whose transitions depend on the data. The matching algorithms that we propose take some of the dependence structure of the data into account. They are based on a kernel estimate of the conditional lag one distribution or on a fitted autoregression of small order. Numerical and theoretical analysis in the case of estimating the variance of the sample mean shows that matching reduces bias and, perhaps unexpectedly, has relatively little effect on variance. Our theory extends to the case of smooth functions of a vector mean.
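The matching idea can be sketched as follows. This is a simplified illustration, not the authors' exact algorithm: the next block is drawn with Gaussian-kernel weights that favour blocks whose first value is close to the last value of the block just placed (the function name and the bandwidth choice are ours):

```python
import numpy as np

def matched_block_bootstrap(x, block_len, n_blocks, bandwidth, rng):
    """Resample blocks of x, preferring blocks whose first value matches
    the previous block's last value: a kernel-weighted Markov transition."""
    starts = np.arange(len(x) - block_len + 1)
    s0 = rng.integers(len(starts))
    sample = list(x[s0:s0 + block_len])
    for _ in range(n_blocks - 1):
        last = sample[-1]
        w = np.exp(-0.5 * ((x[starts] - last) / bandwidth) ** 2)  # kernel weights
        w /= w.sum()
        s = rng.choice(starts, p=w)           # data-driven transition
        sample.extend(x[s:s + block_len])
    return np.array(sample)

rng = np.random.default_rng(1)
# Illustrative AR(1) data
e = rng.normal(size=500)
x = np.empty(500); x[0] = e[0]
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + e[t]

boot = matched_block_bootstrap(x, block_len=10, n_blocks=50, bandwidth=0.3, rng=rng)
print(len(boot))   # 500: n_blocks * block_len
```

The ordinary block bootstrap is the special case of uniform weights; concentrating the weights near the previous endpoint is what reduces the mismatch at block joins.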
This paper, which we dedicate to Lucien Le Cam for his seventieth birthday, has been written in the spirit of his pioneering works on the relationships between the metric structure of the parameter space and the rate of convergence of optimal estimators. It has been written in his honour as a contribution to his theory. It contains further developments of the theory of minimum contrast estimators elaborated in a previous paper. We focus on minimum contrast estimators on sieves. By a `sieve' we mean some approximating space of the set of parameters. The sieves which are commonly used in practice are D-dimensional linear spaces generated by some basis: piecewise polynomials, wavelets, Fourier, etc. It was recently pointed out that nonlinear sieves should also be considered since they provide better spatial adaptation (think of histograms built from any partition of D subintervals of [0,1] as a typical example). We introduce some metric assumptions which are closely related to the notion of finite-dimensional metric space in the sense of Le Cam. These assumptions are satisfied by the examples of practical interest and allow us to compute sharp rates of convergence for minimum contrast estimators.
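To make the notion of a sieve concrete, the following sketch (our illustration, not from the paper) fits a regression function by least squares over the D-dimensional linear sieve of histograms on D equal subintervals of [0,1]; the minimum contrast estimator on this sieve is simply the bin-wise mean of the responses:

```python
import numpy as np

rng = np.random.default_rng(2)

def sieve_histogram_fit(x, y, D):
    """Least-squares projection of the data onto the sieve of piecewise
    constant functions on D equal subintervals of [0, 1]."""
    bins = np.minimum((x * D).astype(int), D - 1)
    return np.array([y[bins == j].mean() if np.any(bins == j) else 0.0
                     for j in range(D)])

# Noisy observations of f(t) = sin(2*pi*t) at uniform design points
n = 2000
x = rng.uniform(size=n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=n)

coef = sieve_histogram_fit(x, y, D=16)
print(coef[int(0.25 * 16)])   # fitted level on [0.25, 0.3125), near sin(pi/2) = 1
```

Choosing D to balance the approximation error of the sieve against the estimation error of its D coefficients is exactly the trade-off that drives the rates of convergence discussed above.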
For a d-variate measure, a convex, compact set in (d+1)-dimensional Euclidean space, its lift zonoid, is constructed. This yields an embedding of the class of d-variate measures having finite absolute first moments into the space of convex, compact sets in (d+1)-dimensional Euclidean space. The embedding is continuous, positive homogeneous and additive and has useful applications to the analysis and comparison of random vectors. The lift zonoid is related to random convex sets and to the convex hull of a multivariate random sample. For an arbitrary sampling distribution, bounds are derived on the expected volume of the random convex hull. The set inclusion of lift zonoids defines an ordering of random vectors that reflects their variability. The ordering is investigated in detail and, as an application, inequalities for random determinants are given.
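For an empirical measure on n points, the lift zonoid is the zonotope generated by the lifted, rescaled sample points, so its support function has a closed form. The following sketch (our illustration; the function name is ours) evaluates that support function:

```python
import numpy as np

def lift_zonoid_support(sample, u):
    """Support function of the lift zonoid of the empirical measure of `sample`:
    the sum over i of the segments [0, (1/n)*(1, x_i)] in (d+1)-space, whose
    support function at u is the average of max(0, u . (1, x_i))."""
    n = len(sample)
    lifted = np.hstack([np.ones((n, 1)), np.asarray(sample).reshape(n, -1)])
    return np.maximum(lifted @ u, 0.0).sum() / n

sample = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
# In the lifting direction (1, 0, 0) the support is always 1,
# since every lifted point has first coordinate 1.
print(lift_zonoid_support(sample, np.array([1.0, 0.0, 0.0])))   # 1.0
```

In a coordinate direction such as (0, 1, 0) the support function returns the mean of the corresponding coordinate of the sample, which is one way to see that the lift zonoid encodes the first moments of the measure.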
A multinomial rule of succession is derived with conditioning on partial past information. The probability computation is done with a Markov chain on tables of non-negative integers using Gröbner bases as described by Diaconis and Sturmfels.
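For two-way tables with fixed row and column sums, the Gröbner basis of Diaconis and Sturmfels reduces to the familiar +1/-1 "basic moves", so the Markov chain can be sketched directly (a minimal illustration of the walk, not the paper's full rule-of-succession computation):

```python
import numpy as np

rng = np.random.default_rng(3)

def step(table, rng):
    """One move of the Diaconis-Sturmfels walk on non-negative integer
    tables: pick two rows and two columns, add the +1/-1 basic move
    (a Markov basis element for fixed margins), and reject the move
    if it would make an entry negative."""
    I, J = table.shape
    i, ip = rng.choice(I, size=2, replace=False)
    j, jp = rng.choice(J, size=2, replace=False)
    sign = rng.choice([-1, 1])
    move = np.zeros_like(table)
    move[i, j] = move[ip, jp] = sign
    move[i, jp] = move[ip, j] = -sign
    new = table + move
    return new if (new >= 0).all() else table

t = np.array([[3, 1], [2, 4]])
rows, cols = t.sum(axis=1), t.sum(axis=0)
for _ in range(100):
    t = step(t, rng)
print(t)   # a table with the same row sums (4, 6) and column sums (5, 5)
```

Every state visited by the walk has the same margins as the starting table, which is what makes the chain usable for conditional probability computations of the kind described in the abstract.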