Brazilian Journal of Probability and Statistics Articles (Project Euclid)
http://projecteuclid.org/euclid.bjps
The latest articles from Brazilian Journal of Probability and Statistics on Project Euclid, a site for mathematics and statistics resources.
Language: en-us
Copyright 2010 Cornell University Library
Contact: Euclid-L@cornell.edu (Project Euclid Team)
Published: Thu, 05 Aug 2010 15:41 EDT
Last build: Thu, 31 Mar 2011 09:13 EDT
Logo: http://projecteuclid.org/collection/euclid/images/logo_linking_100.gif
Project Euclid
http://projecteuclid.org/
An estimation method for latent traits and population parameters in Nominal Response Model
http://projecteuclid.org/euclid.bjps/1280754493
<strong>Caio L. N. Azevedo</strong>, <strong>Dalton F. Andrade</strong><p><strong>Source: </strong>Braz. J. Probab. Stat., Volume 24, Number 3, 415--433.</p><p><strong>Abstract:</strong><br/>
The nominal response model (NRM) was proposed by Bock [Psychometrika 37 (1972) 29–51] in order to improve the latent trait (ability) estimation in multiple choice tests with nominal items. When the item parameters are known, expectation a posteriori or maximum a posteriori methods are commonly employed to estimate the latent traits, considering a standard symmetric normal distribution as the latent traits prior density. However, when this item set is presented to a new group of examinees, it is necessary to estimate not only their latent traits but also the population parameters of this group. This article has two main purposes: first, to develop a Markov chain Monte Carlo algorithm to estimate both latent traits and population parameters concurrently. This algorithm is based on the Metropolis–Hastings within Gibbs sampling algorithm (MHWGS) proposed by Patz and Junker [Journal of Educational and Behavioral Statistics 24 (1999b) 346–366]. Second, to compare the performance of this method in recovering the latent traits with three other methods: maximum likelihood, expectation a posteriori and maximum a posteriori. The comparisons were performed by varying the total number of items (NI), the number of categories and the values of the mean and the variance of the latent trait distribution. The results showed that MHWGS outperforms the other methods in estimating the latent traits and also recovers the population parameters properly. Furthermore, we found that NI accounts for the highest percentage of the variability in the accuracy of latent trait estimation.
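The alternating structure described above (a Metropolis–Hastings step for the latent traits inside a Gibbs cycle for the population parameters) can be sketched on a deliberately simplified stand-in model. The normal likelihood, the priors and all names below are illustrative assumptions, not the paper's NRM.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in: observations y_i | theta_i ~ N(theta_i, 1), latent traits
# theta_i ~ N(mu, sigma2), with population parameters (mu, sigma2) unknown.
n = 200
true_mu, true_sigma2 = 1.0, 2.0
theta_true = rng.normal(true_mu, np.sqrt(true_sigma2), n)
y = rng.normal(theta_true, 1.0)

theta = np.zeros(n)
mu, sigma2 = 0.0, 1.0
mu_draws = []

for it in range(3000):
    # MH step for each latent trait (random-walk proposal).
    def logpost(t):
        return -0.5 * (y - t) ** 2 - 0.5 * (t - mu) ** 2 / sigma2
    prop = theta + rng.normal(0, 0.8, n)
    accept = np.log(rng.uniform(size=n)) < logpost(prop) - logpost(theta)
    theta = np.where(accept, prop, theta)
    # Gibbs steps for the population parameters (flat prior on mu,
    # prior proportional to 1/sigma2 on sigma2).
    mu = rng.normal(theta.mean(), np.sqrt(sigma2 / n))
    sigma2 = 1.0 / rng.gamma(n / 2, 2.0 / np.sum((theta - mu) ** 2))
    if it >= 1000:
        mu_draws.append(mu)

print(round(float(np.mean(mu_draws)), 2))  # posterior mean of mu, typically near 1.0
```

The MH step plays the role of the latent-trait update; the conjugate draws for `mu` and `sigma2` are the "population parameter" part of the cycle.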
</p>
Published: Thu, 05 Aug 2010 15:41 EDT

The equivalence of dynamic and static asset allocations under the uncertainty caused by Poisson processes
https://projecteuclid.org/euclid.bjps/1547456492
<strong>Yong-Chao Zhang</strong>, <strong>Na Zhang</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 1, 184--191.</p><p><strong>Abstract:</strong><br/>
We investigate the equivalence of dynamic and static asset allocations in the case where the price process of a risky asset is driven by a Poisson process. Under some mild conditions, we obtain a necessary and sufficient condition for the equivalence of dynamic and static asset allocations. In addition, we provide a simple sufficient condition for the equivalence.
</p>
Published: Mon, 14 Jan 2019 04:01 EST

Simple tail index estimation for dependent and heterogeneous data with missing values
https://projecteuclid.org/euclid.bjps/1547456493
<strong>Ivana Ilić</strong>, <strong>Vladica M. Veličković</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 1, 192--203.</p><p><strong>Abstract:</strong><br/>
Financial returns are known to be nonnormal and tend to have fat-tailed distributions. The dependence of large values in a stochastic process is also an important topic in risk, insurance and finance. In the presence of missing values, we study the asymptotic properties of a simple “median” estimator of the tail index based on random variables with a heavy-tailed distribution function and certain dependence among the extremes. Weak consistency and asymptotic normality of the proposed estimator are established. The estimator is a special case of a well-known estimator defined in Bacro and Brito [Statistics & Decisions 3 (1993) 133–143]. The advantage of the estimator is its robustness against deviations: compared with Hill’s estimator, it is less affected by fluctuations related to the maximum of the sample or by the presence of outliers. Several examples are analyzed in order to support the proofs.
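For context, the classical Hill estimator that the proposed "median" estimator is compared against can be sketched in a few lines. The paper's own estimator is not reproduced here; this is only the standard Hill benchmark, on a synthetic Pareto sample.

```python
import numpy as np

def hill(sample, k):
    """Hill's estimator of the tail index gamma = 1/alpha,
    based on the k largest order statistics."""
    x = np.sort(sample)[::-1]            # descending order
    return float(np.mean(np.log(x[:k])) - np.log(x[k]))

rng = np.random.default_rng(0)
# Pareto tail with alpha = 2, i.e. true tail index gamma = 0.5.
u = rng.uniform(size=100_000)
sample = (1.0 - u) ** (-1 / 2.0)         # P(X > x) = x^{-2} for x >= 1
print(round(hill(sample, k=2000), 2))    # close to 0.5
```

The choice of `k` trades bias against variance; here the sample is exactly Pareto, so a large `k` is harmless.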
</p>
Published: Mon, 14 Jan 2019 04:01 EST

Bayesian robustness to outliers in linear regression and ratio estimation
https://projecteuclid.org/euclid.bjps/1551690032
<strong>Alain Desgagné</strong>, <strong>Philippe Gagnon</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 205--221.</p><p><strong>Abstract:</strong><br/>
Whole robustness is a nice property to have for statistical models. It implies that the impact of outliers gradually vanishes as they approach plus or minus infinity. So far, the Bayesian literature provides results that ensure whole robustness for the location-scale model. In this paper, we make two contributions. First, we generalise the results to attain whole robustness in simple linear regression through the origin, which is a necessary step towards results for general linear regression models. We allow the variance of the error term to depend on the explanatory variable. This flexibility leads to the second contribution: we provide a simple Bayesian approach to robustly estimate finite population means and ratios. The strategy to attain whole robustness is simple since it lies in replacing the traditional normal assumption on the error term by a super heavy-tailed distribution assumption. As a result, users can estimate the parameters as usual, using the posterior distribution.
</p>
Published: Mon, 04 Mar 2019 04:00 EST

A brief review of optimal scaling of the main MCMC approaches and optimal scaling of additive TMCMC under non-regular cases
https://projecteuclid.org/euclid.bjps/1551690033
<strong>Kushal K. Dey</strong>, <strong>Sourabh Bhattacharya</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 222--266.</p><p><strong>Abstract:</strong><br/>
Transformation-based Markov chain Monte Carlo (TMCMC) was proposed by Dutta and Bhattacharya (Statistical Methodology 16 (2014) 100–116) as an efficient alternative to the Metropolis–Hastings algorithm, especially in high dimensions. The main advantage of this algorithm is that it simultaneously updates all components of a high-dimensional parameter using appropriate move types defined by a deterministic transformation of a single random variable. This results in a reduction in time complexity at each step of the chain and enhances the acceptance rate.
In this paper, we first provide a brief review of the optimal scaling theory for various existing MCMC approaches, comparing and contrasting them with the corresponding TMCMC approaches. The optimal scaling of the simplest form of TMCMC, namely additive TMCMC, has been studied extensively for the Gaussian proposal density in Dey and Bhattacharya (2017a). Here, we discuss the diffusion-based optimal scaling behavior of additive TMCMC for non-Gaussian proposal densities, in particular uniform, Student's $t$ and Cauchy proposals. Although we could not formally prove our diffusion result for the Cauchy proposal, simulation-based results lead us to conjecture that at least the recipe for obtaining the general optimal scaling and optimal acceptance rate holds for the Cauchy case as well. We also consider diffusion-based optimal scaling of TMCMC when the target density is discontinuous. Such non-regular situations have been studied for the random walk Metropolis–Hastings (RWMH) algorithm by Neal and Roberts (Methodology and Computing in Applied Probability 13 (2011) 583–601) using the expected squared jumping distance (ESJD), but diffusion-theory-based scaling has not been considered.
We compare our diffusion-based optimally scaled TMCMC approach with the ESJD-based optimally scaled RWMH in simulation studies involving several target and proposal distributions, including the challenging Cauchy proposal case, showing that additive TMCMC outperforms RWMH in almost all cases considered.
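The additive TMCMC move described in the abstract, where all coordinates are perturbed by a single random epsilon with independent random signs, can be sketched as follows. The Gaussian target, the scale choice and the plain Metropolis acceptance form are illustrative assumptions reconstructed from the abstract, not the paper's tuned algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10                          # dimension of the target
scale = 2.4 / np.sqrt(d)        # illustrative scale; the optimal choice
                                # is what the scaling theory characterises

def logpi(x):                   # standard Gaussian target, up to a constant
    return -0.5 * np.dot(x, x)

x = np.zeros(d)
chain = []
for _ in range(50_000):
    eps = abs(rng.normal(0.0, scale))       # one epsilon for all coordinates
    b = rng.choice([-1.0, 1.0], size=d)     # one random sign per coordinate
    prop = x + b * eps                      # additive TMCMC move
    # The move has unit Jacobian and symmetric sign choices, so the
    # acceptance probability reduces to the plain target ratio.
    if np.log(rng.uniform()) < logpi(prop) - logpi(x):
        x = prop
    chain.append(x[0])

print(round(float(np.var(chain[10_000:])), 1))  # near 1.0 for a N(0,1) marginal
```

Only one scalar epsilon is drawn per step, which is the source of the time-complexity reduction mentioned above.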
</p>
Published: Mon, 04 Mar 2019 04:00 EST

The coreset variational Bayes (CVB) algorithm for mixture analysis
https://projecteuclid.org/euclid.bjps/1551690034
<strong>Qianying Liu</strong>, <strong>Clare A. McGrory</strong>, <strong>Peter W. J. Baxter</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 267--279.</p><p><strong>Abstract:</strong><br/>
The pressing need for improved methods for analysing and coping with big data has opened up a new area of research for statisticians. Image analysis is an area where there is typically a very large number of data points to be processed per image, and often multiple images are captured over time. These issues make it challenging to design methodology that is reliable and yet still efficient enough to be of practical use. One promising emerging approach for this problem is to reduce the amount of data that actually has to be processed by extracting what we call coresets from the full dataset; analysis is then based on the coreset rather than the whole dataset. Coresets are representative subsamples of data that are carefully selected via an adaptive sampling approach. We propose a new approach called coreset variational Bayes (CVB) for mixture modelling; this is an algorithm which can perform a variational Bayes analysis of a dataset based on just an extracted coreset of the data. We apply our algorithm to weed image analysis.
</p>
Published: Mon, 04 Mar 2019 04:00 EST

Modified information criterion for testing changes in skew normal model
https://projecteuclid.org/euclid.bjps/1551690035
<strong>Khamis K. Said</strong>, <strong>Wei Ning</strong>, <strong>Yubin Tian</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 280--300.</p><p><strong>Abstract:</strong><br/>
In this paper, we study the change point problem for the skew normal distribution model from the viewpoint of a model selection problem. A detection procedure based on the modified information criterion (MIC) for the change problem is proposed. By taking the complexity of the models into account, such a procedure has an advantage in detecting changes in the early and late stages of the data compared to a procedure based on the traditional Schwarz information criterion, widely known as the Bayesian information criterion (BIC). Due to the difficulty in deriving the analytic asymptotic distribution of the test statistic based on the MIC procedure, a bootstrap simulation is provided to obtain the critical values at different significance levels. Simulations are conducted to compare the performance of MIC, BIC and the likelihood ratio test (LRT). The approach is then applied to two stock market data sets to illustrate the detection procedure.
</p>
Published: Mon, 04 Mar 2019 04:00 EST

Failure rate of Birnbaum–Saunders distributions: Shape, change-point, estimation and robustness
https://projecteuclid.org/euclid.bjps/1551690036
<strong>Emilia Athayde</strong>, <strong>Assis Azevedo</strong>, <strong>Michelli Barros</strong>, <strong>Víctor Leiva</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 301--328.</p><p><strong>Abstract:</strong><br/>
The Birnbaum–Saunders (BS) distribution has been largely studied and applied. A random variable with a BS distribution is a transformation of another random variable with a standard normal distribution. Generalized BS distributions are obtained when the normally distributed random variable is replaced by another symmetrically distributed random variable. This allows us to obtain a wide class of positively skewed models with lighter and heavier tails than the BS model. Its failure rate admits several shapes, including the unimodal case, and its change-point can be used for different purposes, for example, to establish the reduction in a dose and, consequently, in the cost of a medical treatment. We analyze the failure rates of generalized BS distributions obtained by the logistic, normal and Student-t distributions, considering their shape and change-point, estimating them, evaluating their robustness, assessing their performance by simulations, and applying the results to real data from different areas.
</p>
Published: Mon, 04 Mar 2019 04:00 EST

A new log-linear bimodal Birnbaum–Saunders regression model with application to survival data
https://projecteuclid.org/euclid.bjps/1551690037
<strong>Francisco Cribari-Neto</strong>, <strong>Rodney V. Fonseca</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 329--355.</p><p><strong>Abstract:</strong><br/>
The log-linear Birnbaum–Saunders model has been widely used in empirical applications. We introduce an extension of this model based on a recently proposed version of the Birnbaum–Saunders distribution which is more flexible than the standard Birnbaum–Saunders law since its density may assume both unimodal and bimodal shapes. We show how to perform point estimation, interval estimation and hypothesis testing inferences on the parameters that index the regression model we propose. We also present a number of diagnostic tools, such as residual analysis, local influence, generalized leverage, generalized Cook’s distance and model misspecification tests. We investigate the usefulness of model selection criteria and the accuracy of prediction intervals for the proposed model. Results of Monte Carlo simulations are presented. Finally, we also present and discuss an empirical application.
</p>
Published: Mon, 04 Mar 2019 04:00 EST

Necessary and sufficient conditions for the convergence of the consistent maximal displacement of the branching random walk
https://projecteuclid.org/euclid.bjps/1551690038
<strong>Bastien Mallein</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 356--373.</p><p><strong>Abstract:</strong><br/>
Consider a supercritical branching random walk on the real line. The consistent maximal displacement is the smallest of the distances between the trajectories followed by individuals at the $n$th generation and the boundary of the process. Fang and Zeitouni, and Faraud, Hu and Shi proved that under some integrability conditions, the consistent maximal displacement grows almost surely at rate $\lambda^{*}n^{1/3}$ for some explicit constant $\lambda^{*}$. We obtain here a necessary and sufficient condition for this asymptotic behaviour to hold.
</p>
Published: Mon, 04 Mar 2019 04:00 EST

Hierarchical modelling of power law processes for the analysis of repairable systems with different truncation times: An empirical Bayes approach
https://projecteuclid.org/euclid.bjps/1551690039
<strong>Rodrigo Citton P. dos Reis</strong>, <strong>Enrico A. Colosimo</strong>, <strong>Gustavo L. Gilardoni</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 374--396.</p><p><strong>Abstract:</strong><br/>
In data analysis from multiple repairable systems, it is usual to observe both different truncation times and heterogeneity among the systems. Among other reasons, the latter is caused by different manufacturing lines and maintenance teams for the systems. In this paper, a hierarchical model is proposed for the statistical analysis of multiple repairable systems under different truncation times. A reparameterization of the power law process is proposed in order to obtain a quasi-conjugate Bayesian analysis. An empirical Bayes approach is used to estimate the model hyperparameters. The uncertainty in the estimates of these quantities is corrected by using a parametric bootstrap approach. The results are illustrated on a real data set of failure times of power transformers from an electric company in Brazil.
</p>
Published: Mon, 04 Mar 2019 04:00 EST

A temporal perspective on the rate of convergence in first-passage percolation under a moment condition
https://projecteuclid.org/euclid.bjps/1551690040
<strong>Daniel Ahlberg</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 397--401.</p><p><strong>Abstract:</strong><br/>
We study the rate of convergence in the celebrated Shape Theorem in first-passage percolation, obtaining the precise asymptotic rate of decay for the probability of linear order deviations under a moment condition. Our results are presented from a temporal perspective and complement previous work by the same author, in which the rate of convergence was studied from the standard spatial perspective.
</p>
Published: Mon, 04 Mar 2019 04:00 EST

Influence measures for the Waring regression model
https://projecteuclid.org/euclid.bjps/1551690041
<strong>Luisa Rivas</strong>, <strong>Manuel Galea</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 402--424.</p><p><strong>Abstract:</strong><br/>
In this paper, we present a regression model where the response variable is count data that follows a Waring distribution. The Waring regression model allows for analysis of phenomena where the Geometric regression model is inadequate, because the probability of success on each trial, $p$, is different for each individual and $p$ has an associated distribution. Estimation is performed by maximum likelihood, through maximization of the $Q$-function using the EM algorithm. Diagnostic measures are calculated for this model. To illustrate the results, an application to real data is presented. Some specific details are given in the Appendix of the paper.
</p>
Published: Mon, 04 Mar 2019 04:00 EST

A rank-based Cramér–von-Mises-type test for two samples
https://projecteuclid.org/euclid.bjps/1560153846
<strong>Jamye Curry</strong>, <strong>Xin Dang</strong>, <strong>Hailin Sang</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 425--454.</p><p><strong>Abstract:</strong><br/>
We study a rank-based univariate two-sample distribution-free test. The test statistic is the difference between the average of between-group rank distances and the average of within-group rank distances. This test statistic is closely related to the two-sample Cramér–von Mises criterion. They are different empirical versions of the same quantity for testing the equality of two population distributions. Although they may differ for finite samples, they share the same expected value, variance and asymptotic properties. The advantage of the new rank-based test over the classical one is the ease with which it generalizes to the multivariate case. Rather than using the empirical process approach, we provide a different, easier proof, bringing in a different perspective and insight. In particular, we apply the Hájek projection and orthogonal decomposition technique in deriving the asymptotics of the proposed rank-based statistic. A numerical study compares the power performance of the rank formulation test with other commonly used nonparametric tests, and recommendations on those tests are provided. Lastly, we propose a multivariate extension of the test based on the spatial rank.
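A direct numerical reading of the statistic described above (average between-group minus average within-group distance of pooled ranks) can be sketched as follows. The exact weighting and tie-handling conventions in the paper may differ, so treat this as an illustrative interpretation of the abstract, not the paper's definition.

```python
import numpy as np

def rank_stat(x, y):
    """Average between-group rank distance minus the average of the two
    within-group rank distances, using pooled ranks.  The within-group
    averages here include the zero diagonal; the paper's convention
    may differ."""
    pooled = np.concatenate([x, y])
    order = pooled.argsort()
    ranks = np.empty(len(pooled))
    ranks[order] = np.arange(1, len(pooled) + 1)
    rx, ry = ranks[: len(x)], ranks[len(x):]
    between = np.mean(np.abs(rx[:, None] - ry[None, :]))
    within_x = np.mean(np.abs(rx[:, None] - rx[None, :]))
    within_y = np.mean(np.abs(ry[:, None] - ry[None, :]))
    return float(between - 0.5 * (within_x + within_y))

x = np.arange(1.0, 11.0)           # 1..10
y = np.arange(11.0, 21.0)          # 11..20: fully separated samples
print(rank_stat(x, y) > rank_stat(x, x + 0.5))  # True: separation inflates it
```

For the fully separated samples the between-group distances dominate, while for the interleaved pair (`x` against `x + 0.5`) the statistic stays near zero, which is the behaviour a two-sample criterion should have.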
</p>
Published: Mon, 10 Jun 2019 04:04 EDT

L-Logistic regression models: Prior sensitivity analysis, robustness to outliers and applications
https://projecteuclid.org/euclid.bjps/1560153847
<strong>Rosineide F. da Paz</strong>, <strong>Narayanaswamy Balakrishnan</strong>, <strong>Jorge Luis Bazán</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 455--479.</p><p><strong>Abstract:</strong><br/>
Tadikamalla and Johnson [Biometrika 69 (1982) 461–465] developed the $L_{B}$ distribution for variables with bounded support by considering a transformation of the standard logistic distribution. In this manuscript, a convenient parametrization of this distribution is proposed in order to develop regression models. This distribution, referred to here as the L-Logistic distribution, provides great flexibility and includes the uniform distribution as a particular case. Several properties of this distribution are studied, and a Bayesian approach is adopted for parameter estimation. Simulation studies covering prior sensitivity analysis, recovery of parameters, comparison of algorithms, and robustness to outliers are discussed, showing that the results are insensitive to the choice of priors, that the adopted MCMC algorithm is efficient, and that the model is robust when compared with the beta distribution. Applications to estimate vulnerability to poverty and to explain anxiety are performed. The results of the applications show that the L-Logistic regression models provide a better fit than the corresponding beta regression models.
</p>
Published: Mon, 10 Jun 2019 04:04 EDT

Fractional backward stochastic variational inequalities with non-Lipschitz coefficient
https://projecteuclid.org/euclid.bjps/1560153848
<strong>Katarzyna Jańczak-Borkowska</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 480--497.</p><p><strong>Abstract:</strong><br/>
We prove the existence and uniqueness of the solution of backward stochastic variational inequalities with respect to fractional Brownian motion and with non-Lipschitz coefficient. We assume that $H>1/2$.
</p>
Published: Mon, 10 Jun 2019 04:04 EDT

Spatially adaptive Bayesian image reconstruction through locally-modulated Markov random field models
https://projecteuclid.org/euclid.bjps/1560153849
<strong>Salem M. Al-Gezeri</strong>, <strong>Robert G. Aykroyd</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 498--519.</p><p><strong>Abstract:</strong><br/>
The use of Markov random field (MRF) models has proven to be a fruitful approach in a wide range of image processing applications. It allows local texture information to be incorporated in a systematic and unified way and allows statistical inference theory to be applied giving rise to novel output summaries and enhanced image interpretation. A great advantage of such low-level approaches is that they lead to flexible models, which can be applied to a wide range of imaging problems without the need for significant modification.
This paper proposes and explores the use of conditional MRF models for situations where multiple images are to be processed simultaneously, or where only a single image is to be reconstructed and a sequential approach is taken. Although the coupling of image intensity values is a special case of our approach, the main extension over previous proposals is to allow the direct coupling of other properties, such as smoothness or texture. This is achieved using a local modulating function which adjusts the influence of global smoothing without the need for a fully inhomogeneous prior model. Several modulating functions are considered and a detailed simulation study, motivated by remote sensing applications in archaeological geophysics, of conditional reconstruction is presented. The results demonstrate that a substantial improvement in the quality of the image reconstruction, in terms of errors and residuals, can be achieved using this approach, especially at locations with rapid changes in the underlying intensity.
</p>
Published: Mon, 10 Jun 2019 04:04 EDT

Density for solutions to stochastic differential equations with unbounded drift
https://projecteuclid.org/euclid.bjps/1560153850
<strong>Christian Olivera</strong>, <strong>Ciprian Tudor</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 520--531.</p><p><strong>Abstract:</strong><br/>
Via a special transform and by using the techniques of the Malliavin calculus, we analyze the density of the solution to a stochastic differential equation with unbounded drift.
</p>
Published: Mon, 10 Jun 2019 04:04 EDT

A Jackson network under general regime
https://projecteuclid.org/euclid.bjps/1560153851
<strong>Yair Y. Shaki</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 532--548.</p><p><strong>Abstract:</strong><br/>
We consider a Jackson network in a general heavy traffic diffusion regime with the $\alpha$-parametrization. We also assume that each customer may abandon the system while waiting. We show that in this regime the queue-length process converges to a multi-dimensional regulated Ornstein–Uhlenbeck process.
</p>
Published: Mon, 10 Jun 2019 04:04 EDT

Fake uniformity in a shape inversion formula
https://projecteuclid.org/euclid.bjps/1560153852
<strong>Christian Rau</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 549--557.</p><p><strong>Abstract:</strong><br/>
We revisit a shape inversion formula derived by Panaretos in the context of a particle density estimation problem with unknown rotation of the particle. A distribution is presented which imitates, or “fakes”, the uniformity or Haar distribution that is part of that formula.
</p>
Published: Mon, 10 Jun 2019 04:04 EDT

Stochastic monotonicity from an Eulerian viewpoint
https://projecteuclid.org/euclid.bjps/1560153853
<strong>Davide Gabrielli</strong>, <strong>Ida Germana Minelli</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 558--585.</p><p><strong>Abstract:</strong><br/>
Stochastic monotonicity is a well-known partial order relation between probability measures defined on the same partially ordered set. Strassen's theorem establishes the equivalence between stochastic monotonicity and the existence of a coupling compatible with the partial order. We consider the case of a countable set and introduce the class of finitely decomposable flows on a directed acyclic graph associated with the partial order. We show that a probability measure stochastically dominates another probability measure if and only if there exists a finitely decomposable flow having divergence given by the difference of the two measures. We illustrate the result with some examples.
</p>
Published: Mon, 10 Jun 2019 04:04 EDT

Unions of random walk and percolation on infinite graphs
https://projecteuclid.org/euclid.bjps/1560153854
<strong>Kazuki Okamura</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 586--637.</p><p><strong>Abstract:</strong><br/>
We consider a random object that is associated with both random walks and random media, specifically, the superposition of a configuration of subcritical Bernoulli percolation on an infinite connected graph and the trace of the simple random walk on the same graph. We investigate asymptotics for the number of vertices of the enlargement of the trace of the walk until a fixed time, when the time tends to infinity. This process is more highly self-interacting than the range of random walk, which yields difficulties. We show a law of large numbers on vertex-transitive transient graphs. We compare the process on a vertex-transitive graph with the process on a finitely modified graph of the original vertex-transitive graph and show their behaviors are similar. We show that the process fluctuates almost surely on a certain non-vertex-transitive graph. On the two-dimensional integer lattice, by investigating the size of the boundary of the trace, we give an estimate for variances of the process implying a law of large numbers. We give an example of a graph with unbounded degrees on which the process behaves in a singular manner. As by-products, some results for the range and the boundary, which will be of independent interest, are obtained.
</p>
Published: Mon, 10 Jun 2019 04:04 EDT

Estimation of parameters in the $\operatorname{DDRCINAR}(p)$ model
https://projecteuclid.org/euclid.bjps/1560153855
<strong>Xiufang Liu</strong>, <strong>Dehui Wang</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 638--673.</p><p><strong>Abstract:</strong><br/>
This paper discusses a $p$th-order dependence-driven random coefficient integer-valued autoregressive time series model ($\operatorname{DDRCINAR}(p)$). Stationarity and ergodicity properties are proved. Conditional least squares, weighted least squares and maximum quasi-likelihood are used to estimate the model parameters. Asymptotic properties of the estimators are presented. The performances of these estimators are investigated and compared via simulations. In certain regions of the parameter space, the simulation analysis shows that maximum quasi-likelihood estimators perform better than the conditional least squares and weighted least squares estimators in terms of the proportion of within-$\Omega$ estimates. Finally, the model is applied to two real data sets.
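Models of this class build on binomial-thinning autoregressions. As background, the fixed-coefficient INAR(1) special case and its conditional least squares fit can be sketched as follows; the model and parameter values here are illustrative, not the paper's $p$th-order random coefficient setup.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha_true, lam = 0.5, 1.0        # thinning probability and innovation mean
T = 20_000

# Simulate INAR(1): X_t = alpha o X_{t-1} + eps_t, where binomial thinning
# gives alpha o X ~ Binomial(X, alpha) and eps_t ~ Poisson(lam).
x = np.empty(T, dtype=int)
x[0] = 1
for t in range(1, T):
    x[t] = rng.binomial(x[t - 1], alpha_true) + rng.poisson(lam)

# Conditional least squares: E[X_t | X_{t-1}] = alpha * X_{t-1} + lam,
# so a regression of X_t on X_{t-1} recovers (alpha, lam).
X = np.column_stack([x[:-1], np.ones(T - 1)])
coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
print(round(float(coef[0]), 2), round(float(coef[1]), 2))  # close to (0.5, 1.0)
```

The weighted least squares and quasi-likelihood estimators compared in the paper refine this same conditional-mean regression by reweighting the observations.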
</p>
Published: Mon, 10 Jun 2019 04:04 EDT

A note on monotonicity of spatial epidemic models
https://projecteuclid.org/euclid.bjps/1560153856
<strong>Achillefs Tzioufas</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 674--684.</p><p><strong>Abstract:</strong><br/>
The epidemic process on a graph is considered, for which infectious contacts occur at a rate that depends on whether a susceptible is being infected for the first time or not. We show that the Vasershtein coupling extends if and only if secondary infections occur at a rate greater than that of initial ones. Nonetheless, we show that, with respect to the probability of occurrence of an infinite epidemic, the said proviso may be dropped for the totally asymmetric process in one dimension, thus settling in the affirmative this special case of the conjecture for arbitrary graphs due to [Ann. Appl. Probab. 13 (2003) 669–690].
</p>
Published: Mon, 10 Jun 2019 04:04 EDT

Preface
https://projecteuclid.org/euclid.bjps/1566806426
<p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 685--685.</p>
Published: Mon, 26 Aug 2019 04:00 EDT

Spatiotemporal point processes: regression, model specifications and future directions
https://projecteuclid.org/euclid.bjps/1566806428
<strong>Dani Gamerman</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 686--705.</p><p><strong>Abstract:</strong><br/>
Point processes are one of the most commonly encountered observation processes in spatial statistics. Model-based inference for them depends on the likelihood function. In the most standard setting of Poisson processes, the likelihood depends on the intensity function and cannot be computed analytically. A number of approximating techniques have been proposed to handle this difficulty. In this paper, we review recent work on exact solutions that solve this problem without resorting to approximations. The presentation concentrates more heavily on discrete time but also considers continuous time. The solutions are based on model specifications that impose smoothness constraints on the intensity function. We also review approaches to include a regression component and different ways to accommodate it while accounting for additional heterogeneity. Applications are provided to illustrate the results. Finally, we discuss possible extensions to account for discontinuities and/or jumps in the intensity function.
</p>projecteuclid.org/euclid.bjps/1566806428_20190826040045Mon, 26 Aug 2019 04:00 EDTKeeping the balance—Bridge sampling for marginal likelihood estimation in finite mixture, mixture of experts and Markov mixture modelshttps://projecteuclid.org/euclid.bjps/1566806429<strong>Sylvia Frühwirth-Schnatter</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 706--733.</p><p><strong>Abstract:</strong><br/>
Finite mixture models and their extensions to Markov mixture and mixture of experts models are very popular in analysing data of various kinds. A challenge for these models is choosing the number of components based on marginal likelihoods. The present paper suggests two innovative, generic bridge sampling estimators of the marginal likelihood that are based on constructing balanced importance densities from the conditional densities arising during Gibbs sampling. The full permutation bridge sampling estimator is derived from considering all possible permutations of the mixture labels for a subset of these densities. For the double random permutation bridge sampling estimator, two levels of random permutations are applied, first to permute the labels of the MCMC draws and second to randomly permute the labels of the conditional densities arising during Gibbs sampling. Various applications show very good performance of these estimators in comparison to importance and to reciprocal importance sampling estimators derived from the same importance densities.
</p>projecteuclid.org/euclid.bjps/1566806429_20190826040045Mon, 26 Aug 2019 04:00 EDTThe limiting distribution of the Gibbs sampler for the intrinsic conditional autoregressive modelhttps://projecteuclid.org/euclid.bjps/1566806430<strong>Marco A. R. Ferreira</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 734--744.</p><p><strong>Abstract:</strong><br/>
We study the limiting behavior of the one-at-a-time Gibbs sampler for the intrinsic conditional autoregressive model with centering on the fly. The intrinsic conditional autoregressive model is widely used as a prior for random effects in hierarchical models for spatial modeling. This model is defined by full conditional distributions that imply an improper joint “density” with a multivariate Gaussian kernel and a singular precision matrix. To guarantee propriety of the posterior distribution, usually at the end of each iteration of the Gibbs sampler the random effects are centered to sum to zero in what is widely known as centering on the fly. While this works well in practice, this informal computational way to recenter the random effects obscures their implied prior distribution and prevents the development of formal Bayesian procedures. Here we show that the implied prior distribution, that is, the limiting distribution of the one-at-a-time Gibbs sampler for the intrinsic conditional autoregressive model with centering on the fly is a singular Gaussian distribution with a covariance matrix that is the Moore–Penrose inverse of the precision matrix. This result has important implications for the development of formal Bayesian procedures such as reference priors and Bayes-factor-based model selection for spatial models.
</p>projecteuclid.org/euclid.bjps/1566806430_20190826040045Mon, 26 Aug 2019 04:00 EDTBayesian hypothesis testing: Reduxhttps://projecteuclid.org/euclid.bjps/1566806431<strong>Hedibert F. Lopes</strong>, <strong>Nicholas G. Polson</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 745--755.</p><p><strong>Abstract:</strong><br/>
Bayesian hypothesis testing is re-examined from the perspective of an a priori assessment of the test statistic distribution under the alternative. By assessing the distribution of an observable test statistic, rather than prior parameter values, we revisit the seminal paper of Edwards, Lindman and Savage ( Psychol. Rev. 70 (1963) 193–242). There are a number of important take-aways from comparing the Bayesian paradigm via Bayes factors to frequentist ones. We provide examples where evidence for a Bayesian strikingly supports the null, but leads to rejection under a classical test. Finally, we conclude with directions for future research.
</p>projecteuclid.org/euclid.bjps/1566806431_20190826040045Mon, 26 Aug 2019 04:00 EDTTime series of count data: A review, empirical comparisons and data analysishttps://projecteuclid.org/euclid.bjps/1566806432<strong>Glaura C. Franco</strong>, <strong>Helio S. Migon</strong>, <strong>Marcos O. Prates</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 756--781.</p><p><strong>Abstract:</strong><br/>
Observation and parameter driven models are commonly used in the literature to analyse time series of counts. In this paper, we study the characteristics of a variety of models and point out the main differences and similarities among these procedures, concerning parameter estimation, model fitting and forecasting. Unlike much of the literature, all inference was performed under the Bayesian paradigm. The models are fitted with a latent AR($p$) process in the mean, which accounts for autocorrelation in the data. An extensive simulation study shows that the estimates for the covariate parameters are remarkably similar across the different models. However, estimates for autoregressive coefficients and forecasts of future values depend heavily on the underlying process which generates the data. A real data set of bankruptcy in the United States is also analysed.
</p>projecteuclid.org/euclid.bjps/1566806432_20190826040045Mon, 26 Aug 2019 04:00 EDTBayesian modelling of the abilities in dichotomous IRT models via regression with missing values in the covariateshttps://projecteuclid.org/euclid.bjps/1566806433<strong>Flávio B. Gonçalves</strong>, <strong>Bárbara C. C. Dias</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 782--800.</p><p><strong>Abstract:</strong><br/>
Educational assessment usually considers a contextual questionnaire to extract relevant information from the applicants. This may include items related to socio-economical profile as well as items to extract other characteristics potentially related to the applicant’s performance in the test. A careful analysis of the questionnaires jointly with the test’s results may evidence important relations between profiles and test performance. The most coherent way to perform this task in a statistical context is to use the information from the questionnaire to help explain the variability of the abilities in a joint model-based approach. Nevertheless, the responses to the questionnaire typically present missing values which, in some cases, may be missing not at random. This paper proposes a statistical methodology to model the abilities in dichotomous IRT models using the information of the contextual questionnaires via linear regression. The proposed methodology models the missing data jointly with all the observed data, which allows for the estimation of the former. The missing data modelling is flexible enough to allow the specification of missing not at random structures. Furthermore, even if those structures are not assumed a priori, they can be estimated from the posterior results when assuming missing (completely) at random structures a priori. Statistical inference is performed under the Bayesian paradigm via an efficient MCMC algorithm. Simulated and real examples are presented to investigate the efficiency and applicability of the proposed methodology.
</p>projecteuclid.org/euclid.bjps/1566806433_20190826040045Mon, 26 Aug 2019 04:00 EDTOption pricing with bivariate risk-neutral density via copula and heteroscedastic model: A Bayesian approachhttps://projecteuclid.org/euclid.bjps/1566806434<strong>Lucas Pereira Lopes</strong>, <strong>Vicente Garibay Cancho</strong>, <strong>Francisco Louzada</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 801--825.</p><p><strong>Abstract:</strong><br/>
Multivariate options are adequate tools for multi-asset risk management. The pricing models derived from the pioneering Black and Scholes method under the multivariate case assume that the asset-object prices follow geometric Brownian motion. However, the construction of such methods imposes some unrealistic constraints on the process of fair option calculation, such as constant volatility over the maturity time and linear correlation between the assets. Therefore, this paper aims to price and analyze the fair price behavior of the call-on-max (bivariate) option considering marginal heteroscedastic models with dependence structure modeled via copulas. Concerning inference, we adopt a Bayesian perspective and computationally intensive methods based on Markov chain Monte Carlo (MCMC) simulations. A simulation study examines the bias and the root mean squared errors of the posterior means for the parameters. Real stock prices of Brazilian banks illustrate the approach. For the proposed method, the effects of the strike and the dependence structure on the fair price of the option are verified. The results show that the prices obtained by our approach, based on heteroscedastic models and copulas, differ substantially from the prices obtained by the model derived from Black and Scholes. Empirical results are presented to demonstrate the advantages of our strategy.
</p>projecteuclid.org/euclid.bjps/1566806434_20190826040045Mon, 26 Aug 2019 04:00 EDTBayesian approach for the zero-modified Poisson–Lindley regression modelhttps://projecteuclid.org/euclid.bjps/1566806435<strong>Wesley Bertoli</strong>, <strong>Katiane S. Conceição</strong>, <strong>Marinho G. Andrade</strong>, <strong>Francisco Louzada</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 826--860.</p><p><strong>Abstract:</strong><br/>
The primary goal of this paper is to introduce the zero-modified Poisson–Lindley regression model as an alternative to model overdispersed count data exhibiting inflation or deflation of zeros in the presence of covariates. The zero-modification is incorporated by considering that a zero-truncated process produces positive observations and consequently, the proposed model can be fitted without any previous information about the zero-modification present in a given dataset. A fully Bayesian approach based on the g-prior method has been considered for inference purposes. An intensive Monte Carlo simulation study has been conducted to evaluate the performance of the developed methodology and the maximum likelihood estimators. The proposed model was considered for the analysis of a real dataset on the number of bids received by $126$ U.S. firms between 1978 and 1985, and the impact of choosing different prior distributions for the regression coefficients has been studied. A sensitivity analysis to detect influential points has been performed based on the Kullback–Leibler divergence. A general comparison with some well-known regression models for discrete data has been presented.
</p>projecteuclid.org/euclid.bjps/1566806435_20190826040045Mon, 26 Aug 2019 04:00 EDTSubjective Bayesian testing using calibrated prior probabilitieshttps://projecteuclid.org/euclid.bjps/1566806436<strong>Dan J. Spitzner</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 861--893.</p><p><strong>Abstract:</strong><br/>
This article proposes a calibration scheme for Bayesian testing that coordinates analytically-derived statistical performance considerations with expert opinion. In other words, the scheme is effective and meaningful for incorporating objective elements into subjective Bayesian inference. It explores a novel role for default priors as anchors for calibration rather than substitutes for prior knowledge. Ideas are developed for use with multiplicity adjustments in multiple-model contexts, and to address the issue of prior sensitivity of Bayes factors. Along the way, the performance properties of an existing multiplicity adjustment related to the Poisson distribution are clarified theoretically. Connections of the overall calibration scheme to the Schwarz criterion are also explored. The proposed framework is examined and illustrated on a number of existing data sets related to problems in clinical trials, forensic pattern matching, and log-linear models methodology.
</p>projecteuclid.org/euclid.bjps/1566806436_20190826040045Mon, 26 Aug 2019 04:00 EDTBayesian inference on power Lindley distribution based on different loss functionshttps://projecteuclid.org/euclid.bjps/1566806437<strong>Abbas Pak</strong>, <strong>M. E. Ghitany</strong>, <strong>Mohammad Reza Mahmoudi</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 894--914.</p><p><strong>Abstract:</strong><br/>
This paper focuses on Bayesian estimation of the parameters and reliability function of the power Lindley distribution by using various symmetric and asymmetric loss functions. Assuming suitable priors on the parameters, Bayes estimates are derived by using squared error, linear exponential (linex) and general entropy loss functions. Since, under these loss functions, Bayes estimates of the parameters do not have closed forms, we use Lindley’s approximation technique to calculate the Bayes estimates. Moreover, we obtain the Bayes estimates of the parameters using a Markov Chain Monte Carlo (MCMC) method. Simulation studies are conducted in order to evaluate the performances of the proposed estimators under the considered loss functions. Finally, analysis of a real data set is presented for illustrative purposes.
</p>projecteuclid.org/euclid.bjps/1566806437_20190826040045Mon, 26 Aug 2019 04:00 EDTA message from the editorial boardhttps://projecteuclid.org/euclid.bjps/1580720418<p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 1--1.</p>projecteuclid.org/euclid.bjps/1580720418_20200203040037Mon, 03 Feb 2020 04:00 ESTSimple step-stress models with a cure fractionhttps://projecteuclid.org/euclid.bjps/1580720419<strong>Nandini Kannan</strong>, <strong>Debasis Kundu</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 2--17.</p><p><strong>Abstract:</strong><br/>
In this article, we consider models for time-to-event data obtained from experiments in which stress levels are altered at intermediate stages during the observation period. These experiments, known as step-stress tests, belong to the larger class of accelerated tests used extensively in the reliability literature. The analysis of data from step-stress tests largely relies on the popular cumulative exposure model. However, despite its simple form, the utility of the model is limited, as it is assumed that the hazard function of the underlying distribution is discontinuous at the points at which the stress levels are changed, which may not be very reasonable. Due to this deficiency, Kannan et al. ( Journal of Applied Statistics 37 (2010b) 1625–1636) introduced the cumulative risk model, where the hazard function is continuous. In this paper, we propose a class of parametric models based on the cumulative risk model assuming the underlying population contains long-term survivors or ‘cured’ fraction. An EM algorithm to compute the maximum likelihood estimators of the unknown parameters is proposed. This research is motivated by a study on altitude decompression sickness. The performance of different parametric models will be evaluated using data from this study.
</p>projecteuclid.org/euclid.bjps/1580720419_20200203040037Mon, 03 Feb 2020 04:00 ESTBootstrap-based testing inference in beta regressionshttps://projecteuclid.org/euclid.bjps/1580720420<strong>Fábio P. Lima</strong>, <strong>Francisco Cribari-Neto</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 18--34.</p><p><strong>Abstract:</strong><br/>
We address the issue of performing testing inference in small samples in the class of beta regression models. We consider the likelihood ratio test and its standard bootstrap version. We also consider two alternative resampling-based tests. One of them uses the bootstrap test statistic replicates to numerically estimate a Bartlett correction factor that can be applied to the likelihood ratio test statistic. By doing so, we avoid estimation of quantities located in the tail of the likelihood ratio test statistic null distribution. The second alternative resampling-based test uses a fast double bootstrap scheme in which a single second level bootstrapping resample is performed for each first level bootstrap replication. It delivers accurate testing inferences at a computational cost that is considerably smaller than that of a standard double bootstrapping scheme. The Monte Carlo results we provide show that the standard likelihood ratio test tends to be quite liberal in small samples. They also show that the bootstrap tests deliver accurate testing inferences even when the sample size is quite small. An empirical application is also presented and discussed.
</p>projecteuclid.org/euclid.bjps/1580720420_20200203040037Mon, 03 Feb 2020 04:00 ESTA joint mean-correlation modeling approach for longitudinal zero-inflated count datahttps://projecteuclid.org/euclid.bjps/1580720421<strong>Weiping Zhang</strong>, <strong>Jiangli Wang</strong>, <strong>Fang Qian</strong>, <strong>Yu Chen</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 35--50.</p><p><strong>Abstract:</strong><br/>
Longitudinal zero-inflated count data are widely encountered in many fields, while modeling the correlation between measurements for the same subject is more challenging due to the lack of suitable multivariate joint distributions. This paper studies a novel mean-correlation modeling approach for the longitudinal zero-inflated regression model, solving both the problem of specifying a joint distribution and that of parsimoniously modeling correlations with no constraint. The joint distribution of zero-inflated discrete longitudinal responses is modeled by a copula model whose correlation parameters are innovatively represented in hyper-spherical coordinates. To overcome the computational intractability of maximizing the full likelihood function of the model, we further propose a computationally efficient pairwise likelihood approach. We then propose separate mean and correlation regression models to model these key quantities; this modeling approach can also handle irregular and possibly subject-specific time points. The resulting estimators are shown to be consistent and asymptotically normal. A data example and simulations support the effectiveness of the proposed approach.
</p>projecteuclid.org/euclid.bjps/1580720421_20200203040037Mon, 03 Feb 2020 04:00 ESTRobust Bayesian model selection for heavy-tailed linear regression using finite mixtureshttps://projecteuclid.org/euclid.bjps/1580720422<strong>Flávio B. Gonçalves</strong>, <strong>Marcos O. Prates</strong>, <strong>Victor Hugo Lachos</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 51--70.</p><p><strong>Abstract:</strong><br/>
In this paper, we present a novel methodology to perform Bayesian model selection in linear models with heavy-tailed distributions. We consider a finite mixture of distributions to model a latent variable where each component of the mixture corresponds to one possible model within the symmetrical class of normal independent distributions. Naturally, the Gaussian model is one of the possibilities. This allows for a simultaneous analysis based on the posterior probability of each model. Inference is performed via Markov chain Monte Carlo—a Gibbs sampler with Metropolis–Hastings steps for a class of parameters. Simulated examples highlight the advantages of this approach compared to a segregated analysis based on arbitrarily chosen model selection criteria. Examples with real data are presented and an extension to censored linear regression is introduced and discussed.
</p>projecteuclid.org/euclid.bjps/1580720422_20200203040037Mon, 03 Feb 2020 04:00 ESTEffects of gene–environment and gene–gene interactions in case-control studies: A novel Bayesian semiparametric approachhttps://projecteuclid.org/euclid.bjps/1580720423<strong>Durba Bhattacharya</strong>, <strong>Sourabh Bhattacharya</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 71--89.</p><p><strong>Abstract:</strong><br/>
Present day bio-medical research is pointing towards the fact that cognizance of gene–environment interactions, along with genetic interactions, may help prevent or delay the onset of many complex diseases like cardiovascular disease, cancer, type 2 diabetes, autism or asthma by adjustments to lifestyle.
In this regard, we propose a Bayesian semiparametric model to detect not only the roles of genes and their interactions, but also the possible influence of environmental variables on the genes in case-control studies. Our model also accounts for the unknown number of genetic sub-populations via finite mixtures composed of Dirichlet processes. An effective parallel computing methodology developed by us harnesses the power of parallel processing technology to increase the efficiency of our conditionally independent Gibbs sampling and Transformation-based MCMC (TMCMC) methods.
Applications of our model and methods to simulation studies with biologically realistic genotype datasets and a real, case-control based genotype dataset on early onset of myocardial infarction (MI) have yielded quite interesting results, besides providing some insights into the differential effect of gender on MI.
</p>projecteuclid.org/euclid.bjps/1580720423_20200203040037Mon, 03 Feb 2020 04:00 ESTOn the Nielsen distributionhttps://projecteuclid.org/euclid.bjps/1580720425<strong>Fredy Castellares</strong>, <strong>Artur J. Lemonte</strong>, <strong>Marcos A. C. Santos</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 90--111.</p><p><strong>Abstract:</strong><br/>
We introduce a two-parameter discrete distribution that may have a zero vertex and can be useful for modeling overdispersion. The discrete Nielsen distribution generalizes the Fisher logarithmic (i.e., logarithmic series) and Stirling type I distributions in the sense that both can be considered displacements of the Nielsen distribution. We provide a comprehensive account of the structural properties of the new discrete distribution. We also show that the Nielsen distribution is infinitely divisible. We discuss maximum likelihood estimation of the model parameters and provide a simple method to find them numerically. The usefulness of the proposed distribution is illustrated by means of three real data sets to prove its versatility in practical applications.
</p>projecteuclid.org/euclid.bjps/1580720425_20200203040037Mon, 03 Feb 2020 04:00 ESTNonparametric discrimination of areal functional datahttps://projecteuclid.org/euclid.bjps/1580720426<strong>Ahmad Younso</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 112--126.</p><p><strong>Abstract:</strong><br/>
We consider a new nonparametric rule of classification, inspired from the classical moving window rule, that allows for the classification of spatially dependent functional data containing some completely missing curves. We investigate the consistency of this classifier under mild conditions. The practical use of the classifier will be illustrated through simulation studies.
</p>projecteuclid.org/euclid.bjps/1580720426_20200203040037Mon, 03 Feb 2020 04:00 ESTA primer on the characterization of the exchangeable Marshall–Olkin copula via monotone sequenceshttps://projecteuclid.org/euclid.bjps/1580720427<strong>Natalia Shenkman</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 127--135.</p><p><strong>Abstract:</strong><br/>
While derivations of the characterization of the $d$-variate exchangeable Marshall–Olkin copula via $d$-monotone sequences relying on basic knowledge in probability theory exist in the literature, they contain a myriad of unnecessary, relatively complicated computations. We revisit this issue and provide proofs where all undesired artefacts are removed, thereby exposing the simplicity of the characterization. In particular, we give an insightful analytical derivation of the monotonicity conditions based on the monotonicity properties of the survival probabilities.
</p>projecteuclid.org/euclid.bjps/1580720427_20200203040037Mon, 03 Feb 2020 04:00 ESTMultivariate normal approximation of the maximum likelihood estimator via the delta methodhttps://projecteuclid.org/euclid.bjps/1580720428<strong>Andreas Anastasiou</strong>, <strong>Robert E. Gaunt</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 136--149.</p><p><strong>Abstract:</strong><br/>
We use the delta method and Stein’s method to derive, under regularity conditions, explicit upper bounds for the distributional distance between the distribution of the maximum likelihood estimator (MLE) of a $d$-dimensional parameter and its asymptotic multivariate normal distribution. Our bounds apply in situations in which the MLE can be written as a function of a sum of i.i.d. $t$-dimensional random vectors. We apply our general bound to establish a bound for the multivariate normal approximation of the MLE of the normal distribution with unknown mean and variance.
</p>projecteuclid.org/euclid.bjps/1580720428_20200203040037Mon, 03 Feb 2020 04:00 ESTApplication of weighted and unordered majorization orders in comparisons of parallel systems with exponentiated generalized gamma componentshttps://projecteuclid.org/euclid.bjps/1580720429<strong>Abedin Haidari</strong>, <strong>Amir T. Payandeh Najafabadi</strong>, <strong>Narayanaswamy Balakrishnan</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 150--166.</p><p><strong>Abstract:</strong><br/>
Consider two parallel systems, say $A$ and $B$, with respective lifetimes $T_{1}$ and $T_{2}$ wherein independent component lifetimes of each system follow exponentiated generalized gamma distribution with possibly different exponential shape and scale parameters. We show here that $T_{2}$ is smaller than $T_{1}$ with respect to the usual stochastic order (reversed hazard rate order) if the vector of logarithm (the main vector) of scale parameters of System $B$ is weakly weighted majorized by that of System $A$, and if the vector of exponential shape parameters of System $A$ is unordered majorized by that of System $B$. By means of some examples, we show that the above results cannot be extended to the hazard rate and likelihood ratio orders. However, when the scale parameters of each system divide into two homogeneous groups, we verify that the usual stochastic and reversed hazard rate orders can be extended, respectively, to the hazard rate and likelihood ratio orders. The established results complete and strengthen some of the known results in the literature.
</p>projecteuclid.org/euclid.bjps/1580720429_20200203040037Mon, 03 Feb 2020 04:00 ESTOn estimating the location parameter of the selected exponential population under the LINEX loss functionhttps://projecteuclid.org/euclid.bjps/1580720430<strong>Mohd Arshad</strong>, <strong>Omer Abdalghani</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 167--182.</p><p><strong>Abstract:</strong><br/>
Let $\pi_{1},\pi_{2},\ldots,\pi_{k}$ be $k(\geq2)$ independent exponential populations having unknown location parameters $\mu_{1},\mu_{2},\ldots,\mu_{k}$ and known scale parameters $\sigma_{1},\ldots,\sigma_{k}$. Let $\mu_{[k]}=\max \{\mu_{1},\ldots,\mu_{k}\}$. For selecting the population associated with $\mu_{[k]}$, a class of selection rules (proposed by Arshad and Misra [ Statistical Papers 57 (2016) 605–621]) is considered. We consider the problem of estimating the location parameter $\mu_{S}$ of the selected population under the criterion of the LINEX loss function. We consider three natural estimators $\delta_{N,1},\delta_{N,2}$ and $\delta_{N,3}$ of $\mu_{S}$, based on the maximum likelihood estimators, uniformly minimum variance unbiased estimator (UMVUE) and minimum risk equivariant estimator (MREE) of the $\mu_{i}$’s, respectively. The uniformly minimum risk unbiased estimator (UMRUE) and the generalized Bayes estimator of $\mu_{S}$ are derived. Under the LINEX loss function, a general result for improving a location-equivariant estimator of $\mu_{S}$ is derived. Using this result, an estimator better than the natural estimator $\delta_{N,1}$ is obtained. We also show that the estimator $\delta_{N,1}$ is dominated by the natural estimator $\delta_{N,3}$. Finally, we perform a simulation study to evaluate and compare risk functions among various competing estimators of $\mu_{S}$.
</p>projecteuclid.org/euclid.bjps/1580720430_20200203040037Mon, 03 Feb 2020 04:00 ESTA note on the “L-logistic regression models: Prior sensitivity analysis, robustness to outliers and applications”https://projecteuclid.org/euclid.bjps/1580720431<strong>Saralees Nadarajah</strong>, <strong>Yuancheng Si</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 183--187.</p><p><strong>Abstract:</strong><br/>
Da Paz, Balakrishnan and Bazan [Braz. J. Probab. Stat. 33 (2019), 455–479] introduced the L-logistic distribution, studied its properties including estimation issues and illustrated a data application. This note derives a closed form expression for moment properties of the distribution. Some computational issues are discussed.
</p>projecteuclid.org/euclid.bjps/1580720431_20200203040037Mon, 03 Feb 2020 04:00 EST$W^{1,p}$-Solutions of the transport equation by stochastic perturbationhttps://projecteuclid.org/euclid.bjps/1580720432<strong>David A. C. Mollinedo</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 188--201.</p><p><strong>Abstract:</strong><br/>
We consider the stochastic transport equation with a possibly unbounded Hölder continuous vector field. Well-posedness is proved, namely, we show existence, uniqueness and strong stability of $W^{1,p}$-weak solutions.
</p>projecteuclid.org/euclid.bjps/1580720432_20200203040037Mon, 03 Feb 2020 04:00 ESTA message from the editorial boardhttps://projecteuclid.org/euclid.bjps/1588579217<p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 2, 203--203.</p>projecteuclid.org/euclid.bjps/1588579217_20200504040031Mon, 04 May 2020 04:00 EDTRecent developments in complex and spatially correlated functional datahttps://projecteuclid.org/euclid.bjps/1588579218<strong>Israel Martínez-Hernández</strong>, <strong>Marc G. Genton</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 2, 204--229.</p><p><strong>Abstract:</strong><br/>
As high-dimensional and high-frequency data are being collected on a large scale, the development of new statistical models is being pushed forward. Functional data analysis provides the required statistical methods to deal with large-scale and complex data by assuming that data are continuous functions, for example, realizations of a continuous process (curves) or continuous random field (surfaces), and that each curve or surface is considered as a single observation. Here, we provide an overview of functional data analysis when data are complex and spatially correlated. We provide definitions and estimators of the first and second moments of the corresponding functional random variable. We present two main approaches: The first assumes that data are realizations of a functional random field, that is, each observation is a curve with a spatial component. We call them spatial functional data. The second approach assumes that data are continuous deterministic fields observed over time. In this case, one observation is a surface or manifold, and we call them surface time series. For these two approaches, we describe software available for the statistical analysis. We also present a data illustration, using a high-resolution wind speed simulated dataset, as an example of the two approaches. The functional data approach offers a new paradigm of data analysis, where the continuous processes or random fields are considered as a single entity. We consider this approach to be very valuable in the context of big data.
</p>projecteuclid.org/euclid.bjps/1588579218_20200504040031Mon, 04 May 2020 04:00 EDTAgnostic tests can control the type I and type II errors simultaneouslyhttps://projecteuclid.org/euclid.bjps/1588579219<strong>Victor Coscrato</strong>, <strong>Rafael Izbicki</strong>, <strong>Rafael B. Stern</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 2, 230--250.</p><p><strong>Abstract:</strong><br/>
Despite being common practice, statistical hypothesis testing presents challenges in interpretation. For instance, in the standard frequentist framework there is no control of the type II error. As a result, the non-rejection of the null hypothesis $(H_{0})$ cannot reasonably be interpreted as its acceptance. We propose that this dilemma can be overcome by using agnostic hypothesis tests, since they can control the type I and II errors simultaneously. In order to make this idea operational, we show how to obtain agnostic hypothesis tests in typical models. For instance, we show how to build (unbiased) uniformly most powerful agnostic tests and how to obtain agnostic tests from standard p-values. Also, we present conditions under which the above tests can be made logically coherent. Finally, we present examples of consistent agnostic hypothesis tests.
</p>projecteuclid.org/euclid.bjps/1588579219_20200504040031Mon, 04 May 2020 04:00 EDTRandom environment binomial thinning integer-valued autoregressive process with Poisson or geometric marginalhttps://projecteuclid.org/euclid.bjps/1588579220<strong>Zhengwei Liu</strong>, <strong>Qi Li</strong>, <strong>Fukang Zhu</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 2, 251--272.</p><p><strong>Abstract:</strong><br/>
To predict time series of counts with small values and remarkable fluctuations, an available model is the $r$ states random environment process based on the negative binomial thinning operator and the geometric marginal. However, we argue that this model may suffer from two drawbacks. First, in the absence of prior information, the overdispersion of the geometric distribution may cause the predictions to fluctuate greatly. Second, because of the constraints on the model parameters, some estimated parameters are close to zero in real-data examples, which may not objectively reveal the correlation structure. To address the first drawback, an $r$ states random environment process based on the binomial thinning operator and the Poisson marginal is introduced. For the second drawback, we propose a generalized $r$ states random environment integer-valued autoregressive model based on the binomial thinning operator to model fluctuations in the data. Yule–Walker and conditional maximum likelihood estimates are considered, and their performances are assessed via simulation studies. Two real data sets are analyzed to illustrate the better performance of the proposed models compared with some existing models.
</p>projecteuclid.org/euclid.bjps/1588579220_20200504040031Mon, 04 May 2020 04:00 EDTSymmetrical and asymmetrical mixture autoregressive processeshttps://projecteuclid.org/euclid.bjps/1588579221<strong>Mohsen Maleki</strong>, <strong>Arezo Hajrajabi</strong>, <strong>Reinaldo B. Arellano-Valle</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 2, 273--290.</p><p><strong>Abstract:</strong><br/>
In this paper, we study finite mixtures of autoregressive processes assuming that the distribution of innovations (errors) belongs to the class of scale mixtures of skew-normal (SMSN) distributions. The SMSN distributions allow simultaneous modeling of outliers, heavy tails and asymmetries in the distribution of innovations. Therefore, a statistical methodology based on the SMSN family allows us to model some non-linear time series robustly and with great flexibility, accommodating skewness, heavy tails and heterogeneity simultaneously. The existence of convenient hierarchical representations of the SMSN distributions also facilitates the implementation of an ECME-type algorithm to perform likelihood inference in the considered model. Simulation studies and an application to a real data set are finally presented to illustrate the usefulness of the proposed model.
</p>projecteuclid.org/euclid.bjps/1588579221_20200504040031Mon, 04 May 2020 04:00 EDTAdaptive two-treatment three-period crossover design for normal responseshttps://projecteuclid.org/euclid.bjps/1588579222<strong>Uttam Bandyopadhyay</strong>, <strong>Shirsendu Mukherjee</strong>, <strong>Atanu Biswas</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 2, 291--303.</p><p><strong>Abstract:</strong><br/>
In adaptive crossover design, our goal is to allocate more patients to a promising treatment sequence. The present work contains a very simple three-period crossover design for two competing treatments, where the allocation in period 3 is based on the data obtained from the first two periods. Assuming normality of the response variables, we use a reliability functional for the choice between the two treatments. We calculate the allocation proportions and their standard errors corresponding to the possible treatment combinations. We also derive some asymptotic results and provide solutions to related inferential problems. Moreover, the proposed procedure is compared with a possible competitor. Finally, we use a data set to illustrate the applicability of the proposed design.
</p>projecteuclid.org/euclid.bjps/1588579222_20200504040031Mon, 04 May 2020 04:00 EDTBayesian modeling and prior sensitivity analysis for zero–one augmented beta regression models with an application to psychometric datahttps://projecteuclid.org/euclid.bjps/1588579223<strong>Danilo Covaes Nogarotto</strong>, <strong>Caio Lucidius Naberezny Azevedo</strong>, <strong>Jorge Luis Bazán</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 2, 304--322.</p><p><strong>Abstract:</strong><br/>
Interest in the analysis of the zero–one augmented beta regression (ZOABR) model has been increasing over the last few years. In this work, we develop Bayesian inference for the ZOABR model, providing several contributions: we explore the use of Jeffreys-rule and independence Jeffreys priors for some of the parameters, perform a sensitivity study of the prior choice, compare the Bayesian estimates with the maximum likelihood (ML) ones and measure the accuracy of the estimates under several scenarios of interest. The results indicate, in general, that the Bayesian approach under the Jeffreys-rule prior is as accurate as the ML one. Also, unlike other approaches, we use the predictive distribution of the response to implement Bayesian residuals. To further illustrate the advantages of our approach, we analyze a real psychometric data set, including a Bayesian residual analysis, and show that misleading inference can be obtained when the data are transformed, that is, when the zeros and ones are transformed to suitable values and the usual beta regression model is considered instead of the ZOABR model. Finally, future developments are discussed.
</p>projecteuclid.org/euclid.bjps/1588579223_20200504040031Mon, 04 May 2020 04:00 EDTA Bayesian sparse finite mixture model for clustering data from a heterogeneous populationhttps://projecteuclid.org/euclid.bjps/1588579224<strong>Erlandson F. Saraiva</strong>, <strong>Adriano K. Suzuki</strong>, <strong>Luís A. Milan</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 2, 323--344.</p><p><strong>Abstract:</strong><br/>
In this paper, we introduce a Bayesian approach for clustering data using a sparse finite mixture model (SFMM). The SFMM is a finite mixture model with a large, previously fixed number of components $k$, many of which can be empty. In this model, the number of components $k$ can be interpreted as the maximum number of distinct mixture components. We then explore the use of a prior distribution for the weights of the mixture model that takes into account the possibility that the number of clusters $k_{\mathbf{c}}$ (i.e., nonempty components) can be random and smaller than the number of components $k$ of the finite mixture model. In order to determine clusters, we develop an MCMC algorithm called the split-merge allocation sampler. In this algorithm, the split-merge strategy is data-driven and was inserted within the algorithm in order to improve the mixing of the Markov chain with respect to the number of clusters. The performance of the method is verified using simulated datasets and three real datasets. The first real data set is the benchmark galaxy data, while the second and third are the publicly available Enzyme and Acidity data sets, respectively.
</p>projecteuclid.org/euclid.bjps/1588579224_20200504040031Mon, 04 May 2020 04:00 EDTReliability estimation in a multicomponent stress-strength model for Burr XII distribution under progressive censoringhttps://projecteuclid.org/euclid.bjps/1588579225<strong>Raj Kamal Maurya</strong>, <strong>Yogesh Mani Tripathi</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 2, 345--369.</p><p><strong>Abstract:</strong><br/>
We consider estimation of the multicomponent stress-strength reliability under progressive Type II censoring, assuming that the stress and strength variables follow Burr XII distributions with a common shape parameter. Maximum likelihood estimates of the reliability are obtained, along with asymptotic intervals, when the common shape parameter is known or unknown. Bayes estimates are also derived under the squared error loss function using different approximation methods. Further, we obtain exact Bayes and uniformly minimum variance unbiased estimates of the reliability for the case where the common shape parameter is known. The highest posterior density intervals are also obtained. We perform Monte Carlo simulations to compare the performance of the proposed estimates and present a discussion based on this study. Finally, two real data sets are analyzed for illustration purposes.
</p>projecteuclid.org/euclid.bjps/1588579225_20200504040031Mon, 04 May 2020 04:00 EDTMeasuring symmetry and asymmetry of multiplicative distortion measurement errors datahttps://projecteuclid.org/euclid.bjps/1588579226<strong>Jun Zhang</strong>, <strong>Yujie Gai</strong>, <strong>Xia Cui</strong>, <strong>Gaorong Li</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 2, 370--393.</p><p><strong>Abstract:</strong><br/>
This paper studies the measurement of symmetry or asymmetry of a continuous variable under a multiplicative distortion measurement errors setting. The unobservable variable is distorted in a multiplicative fashion by an observed confounding variable. First, two direct plug-in estimation procedures are proposed, and empirical likelihood based confidence intervals are constructed to measure the symmetry or asymmetry of the unobserved variable. Next, we propose four test statistics for testing whether the unobserved variable is symmetric. The asymptotic properties of the proposed estimators and test statistics are examined. We conduct Monte Carlo simulation experiments to examine the performance of the proposed estimators and test statistics. These methods are applied to a real dataset for illustration.
</p>projecteuclid.org/euclid.bjps/1588579226_20200504040031Mon, 04 May 2020 04:00 EDTStein characterizations for linear combinations of gamma random variableshttps://projecteuclid.org/euclid.bjps/1588579227<strong>Benjamin Arras</strong>, <strong>Ehsan Azmoodeh</strong>, <strong>Guillaume Poly</strong>, <strong>Yvik Swan</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 2, 394--413.</p><p><strong>Abstract:</strong><br/>
In this paper we propose a new, simple and explicit mechanism for deriving Stein operators for random variables whose characteristic function satisfies a simple ODE. We apply this to study random variables that can be represented as linear combinations of (not necessarily independent) gamma-distributed random variables. The connection with Malliavin calculus for random variables in the second Wiener chaos is detailed. An application to McKay Type I random variables is also outlined.
</p>projecteuclid.org/euclid.bjps/1588579227_20200504040031Mon, 04 May 2020 04:00 EDTOriented first passage percolation in the mean field limithttps://projecteuclid.org/euclid.bjps/1588579228<strong>Nicola Kistler</strong>, <strong>Adrien Schertzer</strong>, <strong>Marius A. Schmidt</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 2, 414--425.</p><p><strong>Abstract:</strong><br/>
The Poisson clumping heuristic led Aldous to conjecture the value of oriented first passage percolation on the hypercube in the limit of large dimensions. Aldous’ conjecture has been rigorously confirmed by Fill and Pemantle (Ann. Appl. Probab. 3 (1993) 593–629) by means of a variance reduction trick. We present here a streamlined and, we believe, more natural proof based on ideas that emerged in the study of Derrida’s random energy models.
</p>projecteuclid.org/euclid.bjps/1588579228_20200504040031Mon, 04 May 2020 04:00 EDTBranching random walks with uncountably many extinction probability vectorshttps://projecteuclid.org/euclid.bjps/1588579229<strong>Daniela Bertacchi</strong>, <strong>Fabio Zucca</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 34, Number 2, 426--438.</p><p><strong>Abstract:</strong><br/>
Given a branching random walk on a set $X$, we study its extinction probability vectors $\mathbf{q}(\cdot,A)$. Their components are the probabilities that the process goes extinct in a fixed $A\subseteq X$ when starting from a vertex $x\in X$. The set of extinction probability vectors (obtained by letting $A$ vary among all subsets of $X$) is a subset of the set of fixed points of the generating function of the branching random walk. In particular, here we are interested in the cardinality of the set of extinction probability vectors. We prove results that allow one to determine whether the probability of extinction in a set $A$ differs from that of extinction in another set $B$. In many cases there are only two possible extinction probability vectors, and so far, even in more complicated examples, only a finite number of distinct extinction probability vectors had been explicitly found. Whether a branching random walk could have an infinite number of distinct extinction probability vectors was not known. We apply our results to construct examples of branching random walks with uncountably many distinct extinction probability vectors.
</p>projecteuclid.org/euclid.bjps/1588579229_20200504040031Mon, 04 May 2020 04:00 EDT