In this paper we solve the hedge fund manager's optimization problem in a model that allows investors to enter and leave the fund over time depending on its performance. The manager's payoff at the end of the year then depends not just on the terminal value of the fund, but also on the lowest and the highest values reached over that time. We establish equivalence to an optimal stopping problem for Brownian motion; by approximating this problem with the corresponding optimal stopping problem for a random walk we are led to a simple and efficient numerical scheme to find the solution, which we then illustrate with some examples.
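The random-walk approximation behind such a numerical scheme can be sketched by backward induction. The payoff function and horizon below are illustrative placeholders, not the model of the paper:

```python
# Backward induction for optimal stopping of a symmetric +/-1 random walk.
# The payoff g and the horizon are illustrative, not taken from the paper.

def optimal_stopping_value(g, horizon):
    """Value of optimally stopping a symmetric random walk started at 0,
    within `horizon` steps, receiving payoff g(position) on stopping."""
    # At the horizon one must stop, so the value equals the payoff.
    value = {x: g(x) for x in range(-horizon, horizon + 1)}
    # Step backwards: stop now, or continue and average the two successors.
    for n in range(horizon - 1, -1, -1):
        value = {
            x: max(g(x), 0.5 * value[x - 1] + 0.5 * value[x + 1])
            for x in range(-n, n + 1)
        }
    return value[0]
```

The same dynamic-programming recursion, run on a finer and finer walk, approximates the value of the continuous optimal stopping problem for Brownian motion.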
We consider a continuum percolation model consisting of two types of nodes, namely legitimate and eavesdropper nodes, distributed according to independent Poisson point processes in R2 of intensities λ and λE, respectively. A directed edge from one legitimate node A to another legitimate node B exists provided that the strength of the signal transmitted from node A that is received at node B is higher than that received at any eavesdropper node. The strength of the signal received at a node from a legitimate node depends not only on the distance between these nodes, but also on the locations of the other legitimate nodes and an interference suppression parameter γ. The graph is said to percolate when there exists an infinite connected component. We show that for any finite intensity λE of eavesdropper nodes, there exists a critical intensity λc < ∞ such that for all λ > λc the graph percolates for sufficiently small values of the interference parameter. Furthermore, for the subcritical regime, we show that there exists a λ0 such that for all λ < λ0 ≤ λc a suitable graph defined over eavesdropper node connections percolates, which precludes percolation in the graphs formed by the legitimate nodes.
Identifiability of evolutionary tree models has been a recent topic of discussion and some models have been shown to be nonidentifiable. A coalescent-based rooted population tree model, originally proposed by Nielsen et al. (1998), has been used by many authors in the last few years and is a simple tool to accurately model the changes in allele frequencies in the tree. However, the identifiability of this model has never been proven. Here we prove this model to be identifiable by showing that the model parameters can be expressed as functions of the probability distributions of subsamples, assuming that there are at least two (haploid) individuals sampled from each population. This is a step toward proving the consistency of the maximum likelihood estimator of the population tree based on this model.
We derive multivariate moment generating functions for the conditional and stationary distributions of a discrete sample path of n observations of a square-root diffusion (CIR) process, X(t). For any fixed vector of observation times t1,...,tn, we find that the conditional joint distribution of (X(t1),...,X(tn)) is a multivariate noncentral chi-squared distribution and that the stationary joint distribution is a Krishnamoorthy-Parthasarathy multivariate gamma distribution. Multivariate cumulants of the stationary distribution have a simple and computationally tractable expression. We also obtain the moment generating function for the increment X(t + δ) - X(t), and show that the increment is equivalent in distribution to a scaled difference of two independent draws from a gamma distribution.
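The noncentral chi-squared conditional law makes the discrete sample path exactly simulable. A minimal sketch, assuming the usual parametrization dX = κ(θ − X)dt + σ√X dW (the parameter names are conventions, not taken from the paper), using the standard Poisson-mixture representation of the noncentral chi-squared distribution:

```python
import math
import random

def cir_step(x, kappa, theta, sigma, dt, rng):
    """Exact one-step sample of a CIR process: X(t+dt) given X(t)=x is a
    scaled noncentral chi-squared variate, drawn here as a Poisson mixture
    of gamma variates (standard-library only)."""
    c = sigma**2 * (1.0 - math.exp(-kappa * dt)) / (4.0 * kappa)  # scale
    d = 4.0 * kappa * theta / sigma**2        # degrees of freedom
    lam = x * math.exp(-kappa * dt) / c       # noncentrality
    # N ~ Poisson(lam / 2) by Knuth's method (fine for moderate lam).
    limit, n, p = math.exp(-lam / 2.0), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            break
        n += 1
    # ncx2(d, lam) is distributed as Gamma(shape = d/2 + N, scale = 2).
    return c * rng.gammavariate(d / 2.0 + n, 2.0)
```

Started from x = θ, the conditional mean θ + (x − θ)e^{−κδ} equals θ, which gives a quick sanity check on the sampler.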
The stochastic sequential assignment problem assigns distinct workers to sequentially arriving tasks with stochastic parameters. In this paper the assignments are performed so as to minimize the threshold probability, which is the probability of the long-run reward per task failing to achieve a target value (threshold). As the number of tasks approaches infinity, the problem is studied for independent and identically distributed (i.i.d.) tasks with a known distribution function and also for tasks that are derived from r distinct unobservable distributions (governed by a Markov chain). Stationary optimal policies are presented, which simultaneously minimize the threshold probability and achieve the optimal long-run expected reward per task.
This paper considers the average optimality for a continuous-time Markov decision process in Borel state and action spaces, and with an arbitrarily unbounded nonnegative cost rate. The existence of a deterministic stationary optimal policy is proved under conditions that allow the following: the controlled process can be explosive, the transition rates are weakly continuous, and the multifunction defining the admissible action spaces can be neither compact-valued nor upper semicontinuous.
This paper studies a special type of binomial splitting process. Such a process can be used to model a high-dimensional corner parking problem as well as the depth of random PATRICIA (practical algorithm to retrieve information coded in alphanumeric) tries, which are a special class of digital tree data structures. The latter also has natural interpretations in terms of the number of distinct values among independent and identically distributed geometric random variables and the occupancy problem in urn models. The corresponding distribution has a logarithmic mean and a bounded variance, which oscillates if the binomial parameter p is not equal to ½ and is asymptotic to one in the unbiased case. Also, the limiting distribution does not exist as a result of the periodic fluctuations.
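The "distinct values among i.i.d. geometric random variables" interpretation is easy to explore empirically. A small Monte Carlo sketch (the function name and parameters are illustrative, not from the paper); the count concentrates around log base 1/(1−p) of n, matching the logarithmic mean:

```python
import random

def distinct_geometric_values(n, p, rng):
    """Number of distinct values among n i.i.d. Geometric(p) variables,
    where each variable counts failures before the first success --
    one interpretation of the binomial splitting process's quantity."""
    seen = set()
    for _ in range(n):
        k = 0
        while rng.random() >= p:  # a failure occurs with probability 1 - p
            k += 1
        seen.add(k)
    return len(seen)
```

For p = ½ and n = 4096 the count typically sits near log2(4096) = 12, with the bounded, oscillating fluctuations the abstract describes.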
In this paper we derive mixture representations for the reliability functions of the conditional residual life and inactivity time of a coherent system with n independent and identically distributed components. Based on these mixture representations, we carry out stochastic comparisons of the conditional residual life and the inactivity time of two coherent systems with independent and identically distributed components.
This paper is an investigation into the reliability and stochastic properties of three-state networks. We consider a single-step network consisting of n links and we assume that the links are subject to failure. We assume that the network can be in three states: up (K = 2), partial performance (K = 1), and down (K = 0). Using the concept of the two-dimensional signature, we study the residual lifetimes of the networks under different scenarios on the states and the number of failed links of the network. In the process of doing so, we define variants of the concept of the dynamic signature in a bivariate setting. Then we obtain signature-based mixture representations of the reliability of the residual lifetimes of the network states under the condition that the network is in state K = 2 (or K = 1) and exactly k links in the network have failed. We prove preservation theorems showing that stochastic orderings and dependence between the elements of the dynamic signatures (which depend on the network structure) are preserved by the residual lifetimes of the states of the network (which depend on the network ageing). Various illustrative examples are also provided.
In this paper we consider a one-dimensional stochastic system described by an elliptic equation. A spatially varying random coefficient is introduced to account for uncertainty or imprecise measurements. We model the logarithm of this coefficient by a Gaussian process and provide asymptotic approximations of the tail probabilities of the derivative of the solution.
We construct random fields with Pólya-type autocorrelation function and dampened Pólya cross-correlation function. The marginal distribution of the random fields may be taken as any infinitely divisible distribution with finite variance, and the random fields are fully characterized in terms of their joint characteristic function. This makes available a new class of non-Gaussian random fields with flexible correlation structure for use in modeling and estimation.
The main aim of this paper is to prove the quenched central limit theorem for reversible random walks in a stationary random environment on Z without assuming an integrability condition on the conductance and without using any martingale. The method shown here is particularly simple and was introduced by Depauw and Derrien. More precisely, for a given realization ω of the environment, we consider the Poisson equation (Pω - I)g = f, and use the pointwise ergodic theorem to treat the limit of solutions; the central limit theorem is then established by the convergence of moments. In particular, there is an analogue for a Markov process with discrete space and for the diffusion in a stationary random environment.
Simple random walks on a partially directed version of Z2 are considered. More precisely, vertical edges between neighbouring vertices of Z2 can be traversed in both directions (they are undirected) while horizontal edges are one-way. The horizontal orientation is prescribed by a random perturbation of a periodic function; the perturbation probability decays according to a power law in the absolute value of the ordinate. We study whether the simple random walk is recurrent or transient, and show that there exists a critical value of the decay power, above which the walk is almost surely recurrent and below which it is almost surely transient.
This paper provides tools for the study of the Dirichlet random walk in Rd. We compute explicitly, for a number of cases, the distribution of the random variable W using a form of Stieltjes transform of W instead of the Laplace transform, replacing the Bessel functions with hypergeometric functions. This enables us to simplify some existing results, in particular some of the proofs of Le Caër (2010), (2011). We extend our results to the study of the limits of the Dirichlet random walk when the number of added terms goes to ∞, interpreting the results in terms of an integral with respect to a Dirichlet process. We introduce the ideas of Dirichlet semigroups and Dirichlet infinite divisibility and characterize these infinitely divisible distributions in the sense of Dirichlet when they are concentrated on the unit sphere of Rd.
We observe that the technique of Markov contraction can be used to establish measure concentration for a broad class of noncontracting chains. In particular, geometric ergodicity provides a simple and versatile framework. This leads to a short, elementary proof of a general concentration inequality for Markov and hidden Markov chains, which supersedes some of the known results and easily extends to other processes such as Markov trees. As applications, we provide a Dvoretzky-Kiefer-Wolfowitz-type inequality and a uniform Chernoff bound. All of our bounds are dimension-free and hold for countably infinite state spaces.
A lumping of a Markov chain is a coordinatewise projection of the chain. We characterise the entropy rate preservation of a lumping of an aperiodic and irreducible Markov chain on a finite state space by the random growth rate of the cardinality of the realisable preimage of a finite-length trajectory of the lumped chain and by the information needed to reconstruct original trajectories from their lumped images. Both are purely combinatorial criteria, depending only on the transition graph of the Markov chain and the lumping function. A lumping is strongly k-lumpable if and only if the lumped process is a kth-order Markov chain for each starting distribution of the original Markov chain. We characterise strong k-lumpability via tightness of stationary entropic bounds. In the sparse setting, we give sufficient conditions on the lumping to both preserve the entropy rate and be strongly k-lumpable.
The extremes of a univariate Markov chain with regularly varying stationary marginal distribution and asymptotically linear behavior are known to exhibit a multiplicative random walk structure called the tail chain. In this paper we extend this fact to Markov chains with multivariate regularly varying marginal distributions in Rd. We analyze both the forward and the backward tail process and show that they mutually determine each other through a kind of adjoint relation. In a broader setting, we will show that even for non-Markovian underlying processes a Markovian forward tail chain always implies that the backward tail chain is also Markovian. We analyze the resulting class of limiting processes in detail. Applications of the theory yield the asymptotic distribution of both the past and the future of univariate and multivariate stochastic difference equations conditioned on an extreme event.
Consider a one-sided Markov additive process with an upper and a lower barrier, where each can be either reflecting or terminating. For both defective and nondefective processes, and all possible scenarios, we identify the corresponding potential measures, which help to generalize a number of results for one-sided Lévy processes. The resulting rather neat formulae have various applications in risk and queueing theories, and, in particular, they lead to quasistationary distributions of the corresponding processes.
In this paper we introduce an insurance ruin model with an adaptive premium rate, henceforth referred to as restructuring/refraction, in which classical ruin and bankruptcy are distinguished. In this model the premium rate is increased as soon as the wealth process falls into the red zone and is brought back to its regular level when the wealth process recovers. The analysis focuses mainly on the time a refracted Lévy risk process spends in the red zone (analogous to the duration of the negative surplus). Building on known results, we identify the distribution of various functionals related to occupation times of refracted spectrally negative Lévy processes. For example, these results are used to compute both the probability of bankruptcy and the probability of Parisian ruin in this model with restructuring.
This short note investigates convergence of adaptive Markov chain Monte Carlo algorithms, i.e. algorithms which modify the Markov chain update probabilities on the fly. We focus on the containment condition introduced by Roberts and Rosenthal (2007). We show that if the containment condition is not satisfied, then the algorithm will perform very poorly. Specifically, with positive probability, the adaptive algorithm will be asymptotically less efficient than any nonadaptive ergodic MCMC algorithm. We call such algorithms AdapFail, and conclude that they should not be used.
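A toy example of the kind of algorithm under discussion, not the note's construction: a random-walk Metropolis sampler whose proposal scale is tuned on the fly, with a diminishing adaptation rate (diminishing adaptation is one standard way such schemes stay well behaved rather than becoming AdapFail):

```python
import math
import random

def adaptive_rwm(logpi, steps, rng):
    """Toy adaptive random-walk Metropolis: the proposal scale is adjusted
    after every step toward roughly 44% acceptance. The n**-0.6 decay makes
    the adaptation diminish over time; this sketch is purely illustrative."""
    x, log_scale = 0.0, 0.0
    samples = []
    for n in range(1, steps + 1):
        prop = x + math.exp(log_scale) * rng.gauss(0.0, 1.0)
        accept = math.log(rng.random()) < logpi(prop) - logpi(x)
        if accept:
            x = prop
        # Robbins-Monro style update of the log proposal scale.
        log_scale += n ** -0.6 * ((1.0 if accept else 0.0) - 0.44)
        samples.append(x)
    return samples
```

Because the update rule changes at every step, the sampler is not a homogeneous Markov chain, which is exactly why conditions such as containment are needed to guarantee that the adaptation does not destroy ergodicity.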