Brazilian Journal of Probability and Statistics Articles (Project Euclid)
http://projecteuclid.org/euclid.bjps
The latest articles from Brazilian Journal of Probability and Statistics on Project Euclid, a site for mathematics and statistics resources.
Language: en-us
Copyright 2010 Cornell University Library
Euclid-L@cornell.edu (Project Euclid Team)
Thu, 05 Aug 2010 15:41 EDT / Thu, 31 Mar 2011 09:13 EDT
http://projecteuclid.org/collection/euclid/images/logo_linking_100.gif
Project Euclid
http://projecteuclid.org/
An estimation method for latent traits and population parameters in Nominal Response Model
http://projecteuclid.org/euclid.bjps/1280754493
<strong>Caio L. N. Azevedo</strong>, <strong>Dalton F. Andrade</strong><p><strong>Source: </strong>Braz. J. Probab. Stat., Volume 24, Number 3, 415--433.</p><p><strong>Abstract:</strong><br/>
The nominal response model (NRM) was proposed by Bock [ Psychometrika 37 (1972) 29–51] in order to improve the latent trait (ability) estimation in multiple choice tests with nominal items. When the item parameters are known, expectation a posteriori or maximum a posteriori methods are commonly employed to estimate the latent traits, considering a standard symmetric normal distribution as the prior density of the latent traits. However, when this item set is presented to a new group of examinees, it is necessary to estimate not only their latent traits but also the population parameters of this group. This article has two main purposes: first, to develop a Markov chain Monte Carlo algorithm to estimate both latent traits and population parameters concurrently. This algorithm builds on the Metropolis–Hastings within Gibbs sampling algorithm (MHWGS) proposed by Patz and Junker [ Journal of Educational and Behavioral Statistics 24 (1999b) 346–366]. Second, to compare the performance of this method in recovering the latent traits with three other methods: maximum likelihood, expectation a posteriori and maximum a posteriori. The comparisons were performed by varying the total number of items (NI), the number of categories and the values of the mean and the variance of the latent trait distribution. The results showed that MHWGS outperforms the other methods in latent trait estimation and properly recovers the population parameters. Furthermore, we found that NI accounts for the highest percentage of the variability in the accuracy of latent trait estimation.
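The Metropolis–Hastings within Gibbs idea behind MHWGS can be sketched on a toy item response model. The sketch below is not the authors' algorithm for the NRM: it uses a simplified Rasch-type likelihood with known item difficulties and a known unit latent variance, and only the population mean gets a conjugate Gibbs update; all names and settings are illustrative assumptions.

```python
import math
import random

random.seed(1)

# Toy Rasch-type data: binary responses, known item difficulties.
true_mu = 1.0
n_persons = 100
items = [-1.0, -0.5, 0.0, 0.5, 1.0]
sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
theta_true = [random.gauss(true_mu, 1.0) for _ in range(n_persons)]
y = [[1 if random.random() < sigmoid(t - b) else 0 for b in items]
     for t in theta_true]

def log_post_theta(t, yi, mu):
    # person's log-likelihood plus a N(mu, 1) population prior
    ll = 0.0
    for r, b in zip(yi, items):
        p = sigmoid(t - b)
        ll += math.log(p) if r else math.log(1.0 - p)
    return ll - 0.5 * (t - mu) ** 2

mu, theta, mu_draws = 0.0, [0.0] * n_persons, []
for it in range(800):
    for i in range(n_persons):                 # MH step for each latent trait
        prop = theta[i] + random.gauss(0.0, 0.8)
        if math.log(random.random()) < (log_post_theta(prop, y[i], mu)
                                        - log_post_theta(theta[i], y[i], mu)):
            theta[i] = prop
    # Gibbs step for the population mean (N(0, 100) prior, unit latent variance)
    prec = n_persons + 0.01
    mu = random.gauss(sum(theta) / prec, math.sqrt(1.0 / prec))
    if it >= 300:
        mu_draws.append(mu)

mu_hat = sum(mu_draws) / len(mu_draws)
```

In this hierarchical toy setting the posterior mean of the population mean should land near the value used to simulate the data.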
</p>
projecteuclid.org/euclid.bjps/1280754493 (Thu, 05 Aug 2010 15:41 EDT)

Parameter estimation for discretely observed non-ergodic fractional Ornstein–Uhlenbeck processes of the second kind
https://projecteuclid.org/euclid.bjps/1528444871
<strong>Brahim El Onsy</strong>, <strong>Khalifa Es-Sebaiy</strong>, <strong>Djibril Ndiaye</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 3, 545--558.</p><p><strong>Abstract:</strong><br/>
We use the least squares type estimation to estimate the drift parameter $\theta>0$ of a non-ergodic fractional Ornstein–Uhlenbeck process of the second kind defined as $dX_{t}=\theta X_{t}\,dt+dY_{t}^{(1)},X_{0}=0$, $t\geq0$, where $Y_{t}^{(1)}=\int_{0}^{t}e^{-s}\,dB_{a_{s}}$ with $a_{t}=He^{\frac{t}{H}}$, and $\{B_{t},t\geq0\}$ is a fractional Brownian motion of Hurst parameter $H\in(\frac{1}{2},1)$. We assume that the process $\{X_{t},t\geq0\}$ is observed at discrete time instants $t_{1}=\Delta_{n},\ldots,t_{n}=n\Delta_{n}$. We construct two estimators $\hat{\theta}_{n}$ and $\check{\theta}_{n}$ of $\theta$ which are strongly consistent and we prove that these estimators are $\sqrt{n\Delta_{n}}$-consistent, in the sense that the sequences $\sqrt{n\Delta_{n}}(\hat{\theta}_{n}-\theta)$ and $\sqrt{n\Delta_{n}}(\check{\theta}_{n}-\theta)$ are tight.
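A least squares-type drift estimator of this general form can be illustrated with a discretized simulation. The sketch below substitutes ordinary Brownian noise for the second-kind fractional noise $Y^{(1)}$ (an assumption made purely to keep the example self-contained) and computes $\hat{\theta}_{n}=\sum X_{t_{i}}(X_{t_{i+1}}-X_{t_{i}})/(\Delta_{n}\sum X_{t_{i}}^{2})$.

```python
import math
import random

random.seed(0)

theta, dt, n = 1.0, 0.005, 4000           # true drift, mesh, sample size
X = [0.0]
for _ in range(n):
    # Euler scheme for dX_t = theta * X_t dt + dB_t (ordinary BM noise here)
    X.append(X[-1] + theta * X[-1] * dt + random.gauss(0.0, math.sqrt(dt)))

# least squares-type estimator: sum X dX / (dt * sum X^2)
num = sum(X[i] * (X[i + 1] - X[i]) for i in range(n))
den = dt * sum(X[i] ** 2 for i in range(n))
theta_hat = num / den
```

Because the process is explosive for $\theta>0$, the denominator grows exponentially and the estimation error shrinks very fast.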
</p>
projecteuclid.org/euclid.bjps/1528444871_20180608040117 (Fri, 08 Jun 2018 04:01 EDT)

A Bayesian approach to errors-in-variables beta regression
https://projecteuclid.org/euclid.bjps/1528444872
<strong>Jorge Figueroa-Zúñiga</strong>, <strong>Jalmar M. F. Carrasco</strong>, <strong>Reinaldo Arellano-Valle</strong>, <strong>Silvia L. P. Ferrari</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 3, 559--582.</p><p><strong>Abstract:</strong><br/>
Beta regression models have been widely used for the analysis of limited-range continuous variables. Here, we consider an extension of the beta regression models that allows for explanatory variables to be measured with error. Then we propose a Bayesian treatment for errors-in-variables beta regression models. The specification of prior distributions is discussed, computational implementation via Gibbs sampling is provided, and two real data applications are presented. Additionally, Monte Carlo simulations are used to evaluate the performance of the proposed approach.
</p>
projecteuclid.org/euclid.bjps/1528444872_20180608040117 (Fri, 08 Jun 2018 04:01 EDT)

Sums of possibly associated multivariate indicator functions: The Conway–Maxwell-Multinomial distribution
https://projecteuclid.org/euclid.bjps/1528444873
<strong>Joseph B. Kadane</strong>, <strong>Zhi Wang</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 3, 583--596.</p><p><strong>Abstract:</strong><br/>
The Conway–Maxwell-Multinomial distribution is studied in this paper. Its properties are demonstrated, including sufficient statistics and conditions for the propriety of posterior distributions derived from it. An application is given using data from Mendel’s ground-breaking genetic studies.
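A common parametrization of Conway–Maxwell families raises the combinatorial coefficient to a power $\nu$, with $\nu=1$ recovering the ordinary multinomial; whether this matches the authors' exact definition is an assumption. A brute-force pmf by enumeration:

```python
import math

def compositions(n, k):
    # all k-tuples of nonnegative integers summing to n
    if k == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def cmm_pmf(x, n, p, nu):
    # weight(c) = (multinomial coefficient of c)^nu * prod p_i^c_i,
    # normalized over all count vectors c summing to n; nu = 1 is multinomial
    def weight(c):
        mult = math.factorial(n)
        for ci in c:
            mult //= math.factorial(ci)
        w = float(mult) ** nu
        for ci, pi in zip(c, p):
            w *= pi ** ci
        return w
    norm = sum(weight(c) for c in compositions(n, len(p)))
    return weight(x) / norm
```

For example, `cmm_pmf((1, 2), 3, (0.4, 0.6), 1.0)` reproduces the binomial probability $\binom{3}{1}0.4\cdot0.6^{2}=0.432$.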
</p>
projecteuclid.org/euclid.bjps/1528444873_20180608040117 (Fri, 08 Jun 2018 04:01 EDT)

A note on weak convergence results for infinite causal triangulations
https://projecteuclid.org/euclid.bjps/1528444874
<strong>Valentin Sisko</strong>, <strong>Anatoly Yambartsev</strong>, <strong>Stefan Zohren</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 3, 597--615.</p><p><strong>Abstract:</strong><br/>
We discuss infinite causal triangulations and their equivalence to the size-biased branching process measure, that is, the critical Galton–Watson branching process distribution conditioned on non-extinction. Using known results from the theory of branching processes, this relation is used to prove a novel weak convergence result for the joint length-area process of an infinite causal triangulation to a limiting diffusion. The diffusion equation enables us to determine the physical Hamiltonian and Green’s function from the Feynman–Kac procedure, providing a mathematically rigorous proof of certain scaling limits of causal dynamical triangulations.
</p>
projecteuclid.org/euclid.bjps/1528444874_20180608040117 (Fri, 08 Jun 2018 04:01 EDT)

Semiparametric quantile estimation for varying coefficient partially linear measurement errors models
https://projecteuclid.org/euclid.bjps/1528444875
<strong>Jun Zhang</strong>, <strong>Yan Zhou</strong>, <strong>Xia Cui</strong>, <strong>Wangli Xu</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 3, 616--656.</p><p><strong>Abstract:</strong><br/>
We study varying coefficient partially linear models when some linear covariates are error-prone, but their ancillary variables are available. After calibrating the error-prone covariates, we study quantile regression estimates for parametric coefficients and nonparametric varying coefficient functions, and we develop a semiparametric composite quantile estimation procedure. Asymptotic properties of the proposed estimators are established, and the estimators achieve their best convergence rate with proper bandwidth conditions. Simulation studies are conducted to evaluate the performance of the proposed method, and a real data set is analyzed as an illustration.
</p>
projecteuclid.org/euclid.bjps/1528444875_20180608040117 (Fri, 08 Jun 2018 04:01 EDT)

Weighted sampling without replacement
https://projecteuclid.org/euclid.bjps/1528444876
<strong>Anna Ben-Hamou</strong>, <strong>Yuval Peres</strong>, <strong>Justin Salez</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 3, 657--669.</p><p><strong>Abstract:</strong><br/>
Comparing concentration properties of uniform sampling with and without replacement has a long history which can be traced back to the pioneering work of Hoeffding (1963). The goal of this note is to extend this comparison to the case of non-uniform weights, using a coupling between samples drawn with and without replacement. When the items’ weights are arranged in the same order as their values, we show that the induced coupling for the cumulative values is a submartingale coupling. As a consequence, the powerful Chernoff-type upper-tail estimates known for sampling with replacement automatically transfer to the case of sampling without replacement. For general weights, we use the same coupling to establish a sub-Gaussian concentration inequality. As the sample size approaches the total number of items, the variance factor in this inequality displays the same kind of sharpening as Serfling (1974) identified in the case of uniform weights. We also construct another martingale coupling which allows us to answer a question raised by Luh and Pippenger (2014) on sampling in Polya urns with different replacement numbers.
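Weighted sampling without replacement, as studied here, can be realized by successive draws with probability proportional to the weights of the items still in the pool. A minimal sketch (this is the sampling scheme only, not the paper's coupling construction):

```python
import random

random.seed(7)

def weighted_sample_without_replacement(items, weights, k):
    # successive draws: each step picks a remaining item with probability
    # proportional to its weight, then removes it from the pool
    items, weights = list(items), list(weights)
    chosen = []
    for _ in range(k):
        r = random.random() * sum(weights)
        acc, j = 0.0, 0
        for j, w in enumerate(weights):
            acc += w
            if r < acc:
                break
        chosen.append(items.pop(j))
        weights.pop(j)
    return chosen

sample = weighted_sample_without_replacement(range(10), [i + 1 for i in range(10)], 4)
```

The returned sample contains k distinct items, with heavier items more likely to appear early.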
</p>
projecteuclid.org/euclid.bjps/1528444876_20180608040117 (Fri, 08 Jun 2018 04:01 EDT)

On Hilbert’s 8th problem
https://projecteuclid.org/euclid.bjps/1528444877
<strong>Nicholas G. Polson</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 3, 670--678.</p><p><strong>Abstract:</strong><br/>
A Hadamard factorisation of the Riemann $\xi$-function is constructed to characterize the zeros of the zeta function.
</p>
projecteuclid.org/euclid.bjps/1528444877_20180608040117 (Fri, 08 Jun 2018 04:01 EDT)

Maxima of branching random walks with piecewise constant variance
https://projecteuclid.org/euclid.bjps/1534492897
<strong>Frédéric Ouimet</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 4, 679--706.</p><p><strong>Abstract:</strong><br/>
This article extends the results of Fang and Zeitouni [ Electron. J. Probab. 17 (2012a) 18] on branching random walks (BRWs) with Gaussian increments in time inhomogeneous environments. We treat the case where the variance of the increments changes a finite number of times at different scales in $[0,1]$ under a slight restriction. We find the asymptotics of the maximum up to an $O_{\mathbb{P}}(1)$ error and show how the profile of the variance influences the leading order and the logarithmic correction term. A more general result was independently obtained by Mallein [ Electron. J. Probab. 20 (2015b) 40] when the law of the increments is not necessarily Gaussian. However, the proof we present here generalizes the approach of Fang and Zeitouni [ Electron. J. Probab. 17 (2012a) 18] instead of using the spinal decomposition of the BRW. As such, the proof is easier to understand and more robust in the presence of an approximate branching structure.
</p>
projecteuclid.org/euclid.bjps/1534492897_20180817040200 (Fri, 17 Aug 2018 04:02 EDT)

A survival model with Birnbaum–Saunders frailty for uncensored and censored cancer data
https://projecteuclid.org/euclid.bjps/1534492898
<strong>Jeremias Leão</strong>, <strong>Víctor Leiva</strong>, <strong>Helton Saulo</strong>, <strong>Vera Tomazella</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 4, 707--729.</p><p><strong>Abstract:</strong><br/>
Survival models with frailty are used when additional data that could explain the occurrence time of an event of interest are unavailable. This unavailability may be treated as a random effect related to explanatory variables that are unobserved or cannot be measured, often attributed to environmental or genetic factors. We propose a survival model with frailty based on the Birnbaum–Saunders distribution, which has been widely applied to lifetime data. The random effect is the frailty, which is assumed to follow the Birnbaum–Saunders distribution and is introduced on the baseline hazard rate to control the unobservable heterogeneity of the patients. We use the maximum likelihood method to estimate the model parameters and evaluate its performance under different censoring proportions by a Monte Carlo simulation study. Two types of residuals are considered to assess the adequacy of the proposed model. Examples with uncensored and censored real-world data sets illustrate the potential applications of the proposed model.
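The baseline Birnbaum–Saunders distribution can be simulated through its standard normal representation. The sketch below only illustrates the frailty's distribution, not the paper's full hazard model, and the parameter values are arbitrary.

```python
import math
import random

random.seed(3)

def rbs(alpha, beta):
    # Birnbaum-Saunders(alpha, beta) draw via the normal representation:
    # T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)^2 + 1))^2,  Z ~ N(0, 1)
    u = alpha * random.gauss(0.0, 1.0) / 2.0
    return beta * (u + math.sqrt(u * u + 1.0)) ** 2

alpha, beta, n = 0.5, 2.0, 20000
mean = sum(rbs(alpha, beta) for _ in range(n)) / n
# theoretical mean: beta * (1 + alpha^2 / 2) = 2.25
```

The Monte Carlo mean should be close to the closed-form mean $\beta(1+\alpha^{2}/2)$.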
</p>
projecteuclid.org/euclid.bjps/1534492898_20180817040200 (Fri, 17 Aug 2018 04:02 EDT)

Searching for the core variables in principal components analysis
https://projecteuclid.org/euclid.bjps/1534492899
<strong>Yanina Gimenez</strong>, <strong>Guido Giussani</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 4, 730--754.</p><p><strong>Abstract:</strong><br/>
In this article, we introduce a procedure for selecting variables in principal components analysis. It is developed to identify a small subset of the original variables that best explains the principal components through nonparametric relationships. There are usually some noisy uninformative variables in a dataset, as well as variables that are strongly related to one another because of their general dependence. The procedure is designed to be used after a satisfactory initial principal components analysis with all the variables, and its aim is to help interpret the underlying structures. We analyze the asymptotic behavior of the method and provide some examples.
</p>
projecteuclid.org/euclid.bjps/1534492899_20180817040200 (Fri, 17 Aug 2018 04:02 EDT)

Brazilian network of PhDs working with probability and statistics
https://projecteuclid.org/euclid.bjps/1534492900
<strong>Luciano Digiampietri</strong>, <strong>Leandro Rêgo</strong>, <strong>Filipe Costa de Souza</strong>, <strong>Raydonal Ospina</strong>, <strong>Jesús Mena-Chalco</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 4, 755--782.</p><p><strong>Abstract:</strong><br/>
Statistical and probabilistic reasoning informs our judgments about uncertainty and the chance of, or beliefs about, the occurrence of random events in everyday life. Scientists therefore work with Probability and Statistics in many fields of knowledge, which favors the formation of scientific collaboration networks among researchers with different backgrounds. Here, we describe the Brazilian PhDs who work with probability and statistics. In particular, we analyze national and state-level collaboration networks of such researchers by calculating different metrics. We show that there is a greater concentration of nodes in and around the cities that host Probability and Statistics graduate programs. Moreover, the states that host P&S doctoral programs are the most central. We also observe a disparity in the sizes of the state networks. The clustering coefficient of the national network suggests that it is not cohesive and is probably still in a maturing stage, with marked regional differences, especially between states from the Southeast and the North.
</p>
projecteuclid.org/euclid.bjps/1534492900_20180817040200 (Fri, 17 Aug 2018 04:02 EDT)

Exit time for a reaction diffusion model: Case of a one well potential
https://projecteuclid.org/euclid.bjps/1534492901
<strong>Adrian Hinojosa</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 4, 783--794.</p><p><strong>Abstract:</strong><br/>
We consider an interacting particle system, the Glauber $+$ Kawasaki model. This model combines a fast stirring process, the Kawasaki part, with a spin flip process, the Glauber part. This process has a reaction–diffusion equation as its hydrodynamic limit, as proven by De Masi and Presutti ( Mathematical Methods for Hydrodynamic Limits (1991) Springer). The ergodicity of these dynamics (one well potential) was proven in Brasseco et al. ( Amer. Math. Soc. Transl. Ser. 2 198 (2000) 37–49), for any dimension. In this article, we prove the asymptotic exponentiality of a certain exit time from a subset of the basin of attraction of the well.
</p>
projecteuclid.org/euclid.bjps/1534492901_20180817040200 (Fri, 17 Aug 2018 04:02 EDT)

On the time-dependent Fisher information of a density function
https://projecteuclid.org/euclid.bjps/1534492902
<strong>Omid Kharazmi</strong>, <strong>Majid Asadi</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 4, 795--814.</p><p><strong>Abstract:</strong><br/>
Fisher information is a very important and fundamental criterion in statistical inference, especially in optimality and large-sample studies in estimation theory. It also plays a key role in physics, thermodynamics, information theory and other applications. Two forms of Fisher information have been defined in the literature: one for the parameters of a distribution function and one for the density function of a distribution. In this paper, we consider a nonnegative continuous random (lifetime) variable $X$ and define a time-dependent Fisher information for the density function of the residual random variable associated with $X$. We also propose a time-dependent version of the Fisher information distance (relative Fisher information) between the densities of two nonnegative random variables. Several properties of the proposed measures and their relations to other statistical measures are investigated. Various examples are provided to illustrate the results.
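The density-based (nonparametric) Fisher information $J(f)=\int f'(x)^{2}/f(x)\,dx$ is easy to check numerically. For an exponential density the value is $\lambda^{2}$, and by memorylessness the residual density at any time $t$ gives the same value, a convenient sanity check. A midpoint-rule sketch:

```python
import math

def fisher_info_density(f, df, lo, hi, m=150000):
    # J(f) = integral of f'(x)^2 / f(x) dx, midpoint rule on [lo, hi]
    h = (hi - lo) / m
    total = 0.0
    for i in range(m):
        x = lo + (i + 0.5) * h
        total += df(x) ** 2 / f(x)
    return total * h

lam = 2.0
f = lambda x: lam * math.exp(-lam * x)
df = lambda x: -lam * lam * math.exp(-lam * x)
J = fisher_info_density(f, df, 0.0, 15.0)
# exact value for Exp(lam): lam^2 = 4
```

Truncating the integral at 15 is harmless here since the integrand decays like $e^{-\lambda x}$.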
</p>
projecteuclid.org/euclid.bjps/1534492902_20180817040200 (Fri, 17 Aug 2018 04:02 EDT)

Asymptotic predictive inference with exchangeable data
https://projecteuclid.org/euclid.bjps/1534492903
<strong>Patrizia Berti</strong>, <strong>Luca Pratelli</strong>, <strong>Pietro Rigo</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 4, 815--833.</p><p><strong>Abstract:</strong><br/>
Let $(X_{n})$ be a sequence of random variables, adapted to a filtration $(\mathcal{G}_{n})$, and let $\mu_{n}=(1/n)\sum_{i=1}^{n}\delta_{X_{i}}$ and $a_{n}(\cdot)=P(X_{n+1}\in\cdot|\mathcal{G}_{n})$ be the empirical and the predictive measures. We focus on \begin{equation*}\Vert \mu_{n}-a_{n}\Vert =\mathop{\mathrm{sup}}_{B\in\mathcal{D}}\vert\mu_{n}(B)-a_{n}(B)\vert,\end{equation*} where $\mathcal{D}$ is a class of measurable sets. Conditions for $\Vert \mu_{n}-a_{n}\Vert \rightarrow0$, almost surely or in probability, are given. Also, to determine the rate of convergence, the asymptotic behavior of $r_{n}\Vert \mu_{n}-a_{n}\Vert $ is investigated for suitable constants $r_{n}$. Special attention is paid to $r_{n}=\sqrt{n}$ and $r_{n}=\sqrt{\frac{n}{\log\log n}}$. The sequence $(X_{n})$ is exchangeable or, more generally, conditionally identically distributed.
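For a binary Polya urn sequence, a classical exchangeable example, both measures are explicit and $\Vert\mu_{n}-a_{n}\Vert$ can be computed directly (with $\mathcal{D}=\{\{0\},\{1\}\}$ the sup is attained at either set). A small simulation illustrating the $O(1/n)$ decay; the urn parameters are illustrative:

```python
import random

random.seed(5)

# Polya urn sequence: exchangeable 0/1 variables with predictive probability
# P(X_{n+1} = 1 | past) = (a + S_n) / (a + b + n)
a, b, n = 1.0, 1.0, 10000
S = 0
for i in range(n):
    if random.random() < (a + S) / (a + b + i):
        S += 1

empirical = S / n                      # mu_n({1})
predictive = (a + S) / (a + b + n)     # a_n({1})
dist = abs(empirical - predictive)     # sup over D = {{0}, {1}}
```

Here a direct bound gives $\Vert\mu_{n}-a_{n}\Vert\le\max(a,b)/(a+b+n)$, so the distance vanishes at rate $1/n$, much faster than the $\sqrt{n}$ scalings studied for general classes $\mathcal{D}$.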
</p>
projecteuclid.org/euclid.bjps/1534492903_20180817040200 (Fri, 17 Aug 2018 04:02 EDT)

Wavelet estimation for derivative of a density in the presence of additive noise
https://projecteuclid.org/euclid.bjps/1534492904
<strong>B. L. S. Prakasa Rao</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 4, 834--850.</p><p><strong>Abstract:</strong><br/>
We construct a wavelet estimator for the derivative of a probability density function in the presence of an additive noise and study its $L_{p}$-consistency property.
</p>
projecteuclid.org/euclid.bjps/1534492904_20180817040200 (Fri, 17 Aug 2018 04:02 EDT)

Dimension reduction based on conditional multiple index density function
https://projecteuclid.org/euclid.bjps/1534492905
<strong>Jun Zhang</strong>, <strong>Baohua He</strong>, <strong>Tao Lu</strong>, <strong>Songqiao Wen</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 4, 851--872.</p><p><strong>Abstract:</strong><br/>
In this paper, a dimension reduction method is proposed by using the first derivative of the conditional density function of the response given the predictors. To estimate the central subspace, we propose a direct methodology that takes the expectation of the product of the predictor vector and a kernel function of the response, which helps to capture the directions in the conditional density function. The consistency and asymptotic normality of the proposed estimation methodology are investigated. Furthermore, we conduct some simulations to evaluate the performance of our proposed method and compare it with existing methods, and a real data set is analyzed for illustration.
</p>
projecteuclid.org/euclid.bjps/1534492905_20180817040200 (Fri, 17 Aug 2018 04:02 EDT)

A weak version of bivariate lack of memory property
https://projecteuclid.org/euclid.bjps/1534492906
<strong>Nikolai Kolev</strong>, <strong>Jayme Pinto</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 32, Number 4, 873--906.</p><p><strong>Abstract:</strong><br/>
We suggest a modification of the classical Marshall–Olkin bivariate exponential distribution that allows a singularity contribution along an arbitrary line through the origin. It serves as the basis of a new, weaker version of the bivariate lack of memory property, which might be both “aging” and “non-aging” depending on an additional inclination parameter. The corresponding copula is obtained and we establish its disagreement with Lancaster’s phenomena. Characterizations and properties of the novel bivariate memory-less notion are obtained and its applications are discussed. We characterize the associated weak multivariate version. The weak bivariate lack of memory property implies restrictions on the marginal distributions. Starting from pre-specified marginals, we propose a procedure to build bivariate distributions possessing the weak bivariate lack of memory property and illustrate it with examples. We complement the methodology with closure properties of the new class. We finish with a discussion and suggest several related problems for future research.
</p>
projecteuclid.org/euclid.bjps/1534492906_20180817040200 (Fri, 17 Aug 2018 04:02 EDT)

Retraction: On Hilbert’s 8th problem
https://projecteuclid.org/euclid.bjps/1539361259
<strong>Nicholas G. Polson</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 1, 1--1.</p><p><strong>Abstract:</strong><br/>
Two errata in the paper are given.
</p>
projecteuclid.org/euclid.bjps/1539361259_20181012122106 (Fri, 12 Oct 2018 12:21 EDT)

Bimodal extension based on the skew-$t$-normal distribution
https://projecteuclid.org/euclid.bjps/1547456483
<strong>Mehdi Amiri</strong>, <strong>Héctor W. Gómez</strong>, <strong>Ahad Jamalizadeh</strong>, <strong>Mina Towhidi</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 1, 2--23.</p><p><strong>Abstract:</strong><br/>
In this paper, a skew and uni-/bi-modal extension of the Student-$t$ distribution is considered. This model is more flexible and has wider ranges of skewness and kurtosis than other skew distributions in the literature. The Fisher information matrix for the proposed model and some submodels is derived. The applicability of the proposed models is illustrated with a simulation study and some real data sets.
</p>
projecteuclid.org/euclid.bjps/1547456483_20190114040156 (Mon, 14 Jan 2019 04:01 EST)

Extreme-cum-median ranked set sampling
https://projecteuclid.org/euclid.bjps/1547456484
<strong>Shakeel Ahmed</strong>, <strong>Javid Shabbir</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 1, 24--38.</p><p><strong>Abstract:</strong><br/>
A mixture of Extreme Ranked Set Sampling (ERSS) and Median Ranked Set Sampling (MRSS) is introduced to obtain a more representative sample using three of the five-number summary statistics (i.e., minimum, median and maximum). The proposed sampling scheme provides an unbiased estimator of the mean for symmetric populations and gives moderate efficiency for both symmetric and asymmetric populations under perfect as well as imperfect rankings. Expressions for the bias and asymptotic variance are presented. A simulation study is also conducted to observe the performance of the proposed estimator. The application of the proposed sampling scheme is illustrated through a real-life example.
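One cycle of such a scheme can be sketched as follows. The exact allocation of minima, medians and maxima across the ranked sets is not specified in the abstract, so the split below (minimum from the first set, maximum from the last, medians in between) is an illustrative assumption.

```python
import random

random.seed(11)

def ecm_rss(draw, m):
    # one cycle of size m (m odd): rank m independent sets of size m, keep the
    # minimum of the first set, the median of each middle set, and the
    # maximum of the last set (illustrative allocation)
    sets = [sorted(draw() for _ in range(m)) for _ in range(m)]
    sample = [sets[0][0]]
    sample += [sets[r][m // 2] for r in range(1, m - 1)]
    sample.append(sets[m - 1][m - 1])
    return sample

s = ecm_rss(lambda: random.gauss(0.0, 1.0), 5)
```

For a symmetric population the extreme and median order statistics balance out, so the cycle mean is unbiased for the population mean, consistent with the unbiasedness claim above.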
</p>
projecteuclid.org/euclid.bjps/1547456484_20190114040156 (Mon, 14 Jan 2019 04:01 EST)

Inventory model of type $(s,S)$ under heavy tailed demand with infinite variance
https://projecteuclid.org/euclid.bjps/1547456486
<strong>Aslı Bektaş Kamışlık</strong>, <strong>Tülay Kesemen</strong>, <strong>Tahir Khaniyev</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 1, 39--56.</p><p><strong>Abstract:</strong><br/>
In this study, a stochastic process $X(t)$, which describes an inventory model of type $(s,S)$, is considered in the presence of heavy-tailed demands with infinite variance. The aim of this study is to observe the impact of regularly varying demand distributions with infinite variance on the stochastic process $X(t)$. The main motivation of this work is the publication by Geluk [ Proceedings of the American Mathematical Society 125 (1997) 3407–3413], where he provided a special asymptotic expansion for the renewal function generated by regularly varying random variables. A two-term asymptotic expansion for the ergodic distribution function of the process $X(t)$ is obtained based on the main results proposed by Geluk. Finally, a weak convergence theorem for the ergodic distribution of this process is proved by using Karamata theory.
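The process can be simulated directly: Pareto demands with index $\alpha\in(1,2)$ have finite mean but infinite variance, matching the regime studied here. A minimal sketch of an $(s,S)$ policy with instantaneous replenishment (a simplifying assumption):

```python
import random

random.seed(2)

def simulate_sS(s, S, alpha, steps):
    # serve one Pareto(alpha) demand per period (alpha < 2: infinite variance);
    # when the level falls to s or below, reorder up to S instantly
    level, path = S, []
    for _ in range(steps):
        demand = (1.0 - random.random()) ** (-1.0 / alpha)  # Pareto on [1, inf)
        level -= demand
        if level <= s:
            level = S
        path.append(level)
    return path

path = simulate_sS(s=5.0, S=50.0, alpha=1.5, steps=1000)
```

Under this policy the recorded inventory level always lies in $(s, S]$, and the heavy-tailed demands produce occasional very large downward jumps straight to a replenishment.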
</p>
projecteuclid.org/euclid.bjps/1547456486_20190114040156 (Mon, 14 Jan 2019 04:01 EST)

Exploring the constant coefficient of a single-index variation
https://projecteuclid.org/euclid.bjps/1547456487
<strong>Jun Zhang</strong>, <strong>Cuizhen Niu</strong>, <strong>Gaorong Li</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 1, 57--86.</p><p><strong>Abstract:</strong><br/>
We consider the problem of checking whether the coefficient of the scale and location functions is constant. Both the scale and location functions are modeled as single-index models. Two test statistics based on Kolmogorov–Smirnov and Cramér–von Mises type functionals of the difference of the empirical residual processes are proposed. The asymptotic distribution of the estimator of the single-index parameter is derived, and the empirical distribution function of the residuals is shown to converge to a Gaussian process. Moreover, the proposed test statistics can detect local alternatives that converge to zero at a parametric convergence rate. A bootstrap procedure is further proposed to calculate critical values. Simulation studies and a real data analysis are conducted to demonstrate the performance of the proposed methods.
</p>
projecteuclid.org/euclid.bjps/1547456487_20190114040156 (Mon, 14 Jan 2019 04:01 EST)

Transdimensional transformation based Markov chain Monte Carlo
https://projecteuclid.org/euclid.bjps/1547456488
<strong>Moumita Das</strong>, <strong>Sourabh Bhattacharya</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 1, 87--138.</p><p><strong>Abstract:</strong><br/>
Variable dimensional problems, where not only the parameters but also the number of parameters are random variables, pose a serious challenge to Bayesians. Although in principle the Reversible Jump Markov Chain Monte Carlo (RJMCMC) methodology is a response to such challenges, the dimension-hopping strategies need not always be convenient for practical implementation, particularly because efficient “move-types” having reasonable acceptance rates are often difficult to devise.
In this article, we propose and develop a novel and general dimension-hopping MCMC methodology that can update all the parameters as well as the number of parameters simultaneously using simple deterministic transformations of some low-dimensional (often one-dimensional) random variable. This methodology, which has been inspired by Transformation based MCMC (TMCMC) ( Stat. Methodol. (2014) 16 100–116), is fast in terms of computation time and provides reasonable acceptance rates and mixing properties. Quite importantly, our approach provides a natural way to automate the move-types in variable dimensional problems. We refer to this methodology as Transdimensional Transformation based Markov Chain Monte Carlo (TTMCMC). Comparisons with RJMCMC in gamma and normal mixture examples demonstrate far superior performance of TTMCMC in terms of mixing, acceptance rate, computational speed and automation. Furthermore, we demonstrate good performance of TTMCMC in multivariate normal mixtures, even for dimension as large as $20$. To our knowledge, there exists no application of RJMCMC for such high-dimensional mixtures.
As by-products of our effort on the development of TTMCMC, we propose a novel methodology to summarize the posterior distributions of the mixture densities, providing a way to obtain the mode of the posterior distribution of the densities and the associated highest posterior density credible regions. Based on our method, we also propose a criterion to assess convergence of variable-dimensional algorithms. These methods of summarization and convergence assessment are applicable to general problems, not just to mixtures.
</p>
projecteuclid.org/euclid.bjps/1547456488_20190114040156 (Mon, 14 Jan 2019 04:01 EST)

Bootstrap for correcting the mean square error of prediction and smoothed estimates in structural models
https://projecteuclid.org/euclid.bjps/1547456490
<strong>Thiago R. dos Santos</strong>, <strong>Glaura C. Franco</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 1, 139--160.</p><p><strong>Abstract:</strong><br/>
It is well known that the uncertainty in the estimation of parameters produces the underestimation of the mean square error (MSE) both for in-sample and out-of-sample estimation. In the state space framework, this problem can affect confidence intervals for smoothed estimates and forecasts, which are generally built by state vector predictors that use estimated model parameters. In order to correct this problem, this paper proposes and compares parametric and nonparametric bootstrap methods based on procedures usually employed to calculate the MSE in the context of forecasting and smoothing in state space models. The comparisons are performed through an extensive Monte Carlo study which illustrates, empirically, the bias reduction in the estimation of MSE for prediction and smoothed estimates using the bootstrap approaches. The finite sample properties of the bootstrap procedures are analyzed for Gaussian and non-Gaussian assumptions of the error term. The procedures are also applied to real time series, leading to satisfactory results.
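The idea of using the bootstrap to fold parameter uncertainty into the prediction MSE can be sketched in a model much simpler than a full state space form, namely an AR(1) with a parametric bootstrap; all settings below are illustrative.

```python
import random

random.seed(4)

def ar1_series(phi, n, x0=0.0):
    # simulate x_t = phi * x_{t-1} + e_t with standard normal innovations
    x = [x0]
    for _ in range(n):
        x.append(phi * x[-1] + random.gauss(0.0, 1.0))
    return x[1:]

def fit_phi(x):
    # least squares estimate of the AR(1) coefficient
    num = sum(x[i] * x[i + 1] for i in range(len(x) - 1))
    den = sum(xi * xi for xi in x[:-1])
    return num / den

n, phi_true = 200, 0.7
x = ar1_series(phi_true, n)
phi_hat = fit_phi(x)

# Parametric bootstrap: re-simulate under phi_hat, refit, and score each
# bootstrap predictor against a fresh innovation, so that parameter
# uncertainty (ignored by the plug-in MSE) enters the MSE estimate.
B, mse_boot = 500, 0.0
for _ in range(B):
    xb = ar1_series(phi_hat, n)
    phi_b = fit_phi(xb)
    next_true = phi_hat * xb[-1] + random.gauss(0.0, 1.0)
    mse_boot += (next_true - phi_b * xb[-1]) ** 2
mse_boot /= B
# the naive plug-in one-step MSE is just the innovation variance, 1.0
```

The bootstrap MSE exceeds the plug-in value by roughly the contribution of estimation error in the coefficient, which is the underestimation the abstract describes.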
</p>
projecteuclid.org/euclid.bjps/1547456490_20190114040156 (Mon, 14 Jan 2019 04:01 EST)

Fitting mixed models to messy longitudinal data: A case study involving estimation of post mortem intervals
https://projecteuclid.org/euclid.bjps/1547456491
<strong>Julio M. Singer</strong>, <strong>Francisco M. M. Rocha</strong>, <strong>Carmen D. S. André</strong>, <strong>Talita Zerbini</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 1, 161--183.</p><p><strong>Abstract:</strong><br/>
Non-linear mixed models are useful in many practical longitudinal data problems, especially when they are derived as solutions to differential equations generated by subject matter theoretical considerations. When this underlying rationale is not available, practitioners are faced with the dilemma of choosing a model from the numerous ones available in the literature. The situation is even worse for messy data where interpretation and computational problems are frequent. This is the case with a pilot observational study conducted at the School of Medicine of the University of São Paulo in which a new method to estimate the time since death (post-mortem interval—PMI) is proposed. In particular, the attenuation of the density of intra-cardiac hypostasis (concentration of red cells in the vascular system by gravity) obtained from a series of tomographic images was observed in the thoraces of 21 bodies of hospitalized patients with known time of death. The images were obtained at different instants and not always at the same conditions for each body, generating a set of messy data. In this context, we consider three ad hoc models to analyse the data, commenting on the advantages and caveats of each approach.
</p>projecteuclid.org/euclid.bjps/1547456491_20190114040156Mon, 14 Jan 2019 04:01 ESTThe equivalence of dynamic and static asset allocations under the uncertainty caused by Poisson processeshttps://projecteuclid.org/euclid.bjps/1547456492<strong>Yong-Chao Zhang</strong>, <strong>Na Zhang</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 1, 184--191.</p><p><strong>Abstract:</strong><br/>
We investigate the equivalence of dynamic and static asset allocations in the case where the price process of a risky asset is driven by a Poisson process. Under some mild conditions, we obtain a necessary and sufficient condition for the equivalence of dynamic and static asset allocations. In addition, we provide a simple sufficient condition for the equivalence.
</p>projecteuclid.org/euclid.bjps/1547456492_20190114040156Mon, 14 Jan 2019 04:01 ESTSimple tail index estimation for dependent and heterogeneous data with missing valueshttps://projecteuclid.org/euclid.bjps/1547456493<strong>Ivana Ilić</strong>, <strong>Vladica M. Veličković</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 1, 192--203.</p><p><strong>Abstract:</strong><br/>
Financial returns are known to be nonnormal and tend to have fat-tailed distributions. The dependence of large values in a stochastic process is also an important topic in risk, insurance and finance. In the presence of missing values, we deal with the asymptotic properties of a simple “median” estimator of the tail index based on random variables with a heavy-tailed distribution function and certain dependence among the extremes. Weak consistency and asymptotic normality of the proposed estimator are established. The estimator is a special case of a well-known estimator defined in Bacro and Brito [ Statistics & Decisions 3 (1993) 133–143]. The advantage of the estimator is its robustness against deviations; compared with Hill’s estimator, it is less affected by fluctuations related to the maximum of the sample or by the presence of outliers. Several examples are analyzed in order to support the proofs.
</p>projecteuclid.org/euclid.bjps/1547456493_20190114040156Mon, 14 Jan 2019 04:01 ESTBayesian robustness to outliers in linear regression and ratio estimationhttps://projecteuclid.org/euclid.bjps/1551690032<strong>Alain Desgagné</strong>, <strong>Philippe Gagnon</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 205--221.</p><p><strong>Abstract:</strong><br/>
Whole robustness is a nice property to have for statistical models. It implies that the impact of outliers gradually vanishes as they approach plus or minus infinity. So far, the Bayesian literature provides results that ensure whole robustness for the location-scale model. In this paper, we make two contributions. First, we generalise the results to attain whole robustness in simple linear regression through the origin, which is a necessary step towards results for general linear regression models. We allow the variance of the error term to depend on the explanatory variable. This flexibility leads to the second contribution: we provide a simple Bayesian approach to robustly estimate finite population means and ratios. The strategy to attain whole robustness is simple since it lies in replacing the traditional normal assumption on the error term by a super heavy-tailed distribution assumption. As a result, users can estimate the parameters as usual, using the posterior distribution.
</p>projecteuclid.org/euclid.bjps/1551690032_20190304040045Mon, 04 Mar 2019 04:00 ESTA brief review of optimal scaling of the main MCMC approaches and optimal scaling of additive TMCMC under non-regular caseshttps://projecteuclid.org/euclid.bjps/1551690033<strong>Kushal K. Dey</strong>, <strong>Sourabh Bhattacharya</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 222--266.</p><p><strong>Abstract:</strong><br/>
Transformation-based Markov Chain Monte Carlo (TMCMC) was proposed by Dutta and Bhattacharya ( Statistical Methodology 16 (2014) 100–116) as an efficient alternative to the Metropolis–Hastings algorithm, especially in high dimensions. The main advantage of this algorithm is that it simultaneously updates all components of a high dimensional parameter using appropriate move types defined by deterministic transformations of a single random variable. This reduces the time complexity at each step of the chain and enhances the acceptance rate.
In this paper, we first provide a brief review of the optimal scaling theory for various existing MCMC approaches, comparing and contrasting them with the corresponding TMCMC approaches. The optimal scaling of the simplest form of TMCMC, namely additive TMCMC, has been studied extensively for the Gaussian proposal density in Dey and Bhattacharya (2017a). Here, we discuss diffusion-based optimal scaling behavior of additive TMCMC for non-Gaussian proposal densities—in particular, uniform, Student’s $t$ and Cauchy proposals. Although we could not formally prove our diffusion result for the Cauchy proposal, simulation-based results lead us to conjecture that at least the recipe for obtaining the general optimal scaling and optimal acceptance rate holds for the Cauchy case as well. We also consider diffusion-based optimal scaling of TMCMC when the target density is discontinuous. Such non-regular situations have been studied in the case of the Random Walk Metropolis–Hastings (RWMH) algorithm by Neal and Roberts ( Methodology and Computing in Applied Probability 13 (2011) 583–601) using expected squared jumping distance (ESJD), but the diffusion-theory-based scaling has not been considered.
We compare our diffusion-based optimally scaled TMCMC approach with the ESJD-based optimally scaled RWMH through simulation studies involving several target and proposal distributions, including the challenging Cauchy proposal, showing that additive TMCMC outperforms RWMH in almost all cases considered.
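As an illustration of the move type described above, here is a minimal sketch of one additive TMCMC step, assuming a standard Gaussian target and equal probabilities for the plus/minus moves (both are illustrative choices, not the settings analysed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Illustrative target: a standard multivariate Gaussian
    # (an assumption for this sketch, not a target from the paper).
    return -0.5 * np.sum(x ** 2)

def additive_tmcmc_step(x, scale=0.7):
    # One additive TMCMC move: a single scalar draw eps updates every
    # coordinate, each moving by +eps or -eps.
    eps = abs(rng.normal(0.0, scale))
    b = rng.choice([-1.0, 1.0], size=len(x))
    prop = x + b * eps
    # With equal +/- probabilities the move has unit Jacobian, so the
    # acceptance ratio reduces to the plain target ratio.
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        return prop, True
    return x, False

x = np.zeros(10)
accepted = 0
for _ in range(2000):
    x, ok = additive_tmcmc_step(x)
    accepted += ok
```

Note that all ten coordinates move on each step while only one scalar is drawn, which is the source of the reduced per-step cost mentioned above.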
</p>projecteuclid.org/euclid.bjps/1551690033_20190304040045Mon, 04 Mar 2019 04:00 ESTThe coreset variational Bayes (CVB) algorithm for mixture analysishttps://projecteuclid.org/euclid.bjps/1551690034<strong>Qianying Liu</strong>, <strong>Clare A. McGrory</strong>, <strong>Peter W. J. Baxter</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 267--279.</p><p><strong>Abstract:</strong><br/>
The pressing need for improved methods for analysing and coping with big data has opened up a new area of research for statisticians. Image analysis is an area where there is typically a very large number of data points to be processed per image, and often multiple images are captured over time. These issues make it challenging to design methodology that is reliable and yet still efficient enough to be of practical use. One promising emerging approach for this problem is to reduce the amount of data that actually has to be processed by extracting what we call coresets from the full dataset; analysis is then based on the coreset rather than the whole dataset. Coresets are representative subsamples of data that are carefully selected via an adaptive sampling approach. We propose a new approach called coreset variational Bayes (CVB) for mixture modelling; this is an algorithm which can perform a variational Bayes analysis of a dataset based on just an extracted coreset of the data. We apply our algorithm to weed image analysis.
</p>projecteuclid.org/euclid.bjps/1551690034_20190304040045Mon, 04 Mar 2019 04:00 ESTModified information criterion for testing changes in skew normal modelhttps://projecteuclid.org/euclid.bjps/1551690035<strong>Khamis K. Said</strong>, <strong>Wei Ning</strong>, <strong>Yubin Tian</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 280--300.</p><p><strong>Abstract:</strong><br/>
In this paper, we study the change point problem for the skew normal distribution model from the viewpoint of model selection. A detection procedure based on the modified information criterion (MIC) is proposed for the change point problem. Compared with a procedure based on the traditional Schwarz information criterion, widely known as the Bayesian information criterion (BIC), this procedure has an advantage in detecting changes in the early and late stages of a data set because it takes the complexity of the models into account. Due to the difficulty of deriving the analytic asymptotic distribution of the test statistic based on the MIC procedure, a bootstrap simulation is provided to obtain the critical values at different significance levels. Simulations are conducted to compare the performance of MIC, BIC and the likelihood ratio test (LRT). The approach is then applied to two stock market data sets to illustrate the detection procedure.
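To illustrate the criterion, the following sketch scans candidate change points in a toy normal mean-change model using an MIC-style penalty with a $(2k/n-1)^{2}$ term; the skew normal likelihood, the bootstrap critical values and the exact penalty of the paper are not reproduced here:

```python
import numpy as np

def mic_change_point(x):
    """Pick the change point k minimizing a modified-information-criterion
    score. Toy version: change in the mean of a normal sample with unit
    variance, so -2 log L reduces to a residual sum of squares."""
    n = len(x)
    # No-change model for comparison: one mean parameter, BIC-style penalty.
    no_change = np.sum((x - x.mean()) ** 2) + 1 * np.log(n)
    best_k, best_mic = None, np.inf
    for k in range(2, n - 1):
        left, right = x[:k], x[k:]
        rss = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
        # MIC-style penalty: parameter count plus the (2k/n - 1)^2 term,
        # which helps detection of changes near the ends of the data.
        mic = rss + (2 + (2 * k / n - 1) ** 2) * np.log(n)
        if mic < best_mic:
            best_k, best_mic = k, mic
    return best_k, best_mic, no_change

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 60), rng.normal(2, 1, 40)])
k, mic, no_change = mic_change_point(x)
```

With the simulated change at observation 60, the minimizing `k` lands near the true change point and the two-segment score beats the no-change score.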
</p>projecteuclid.org/euclid.bjps/1551690035_20190304040045Mon, 04 Mar 2019 04:00 ESTFailure rate of Birnbaum–Saunders distributions: Shape, change-point, estimation and robustnesshttps://projecteuclid.org/euclid.bjps/1551690036<strong>Emilia Athayde</strong>, <strong>Assis Azevedo</strong>, <strong>Michelli Barros</strong>, <strong>Víctor Leiva</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 301--328.</p><p><strong>Abstract:</strong><br/>
The Birnbaum–Saunders (BS) distribution has been widely studied and applied. A random variable with a BS distribution is a transformation of another random variable with a standard normal distribution. Generalized BS distributions are obtained when the normally distributed random variable is replaced by another symmetrically distributed random variable. This allows us to obtain a wide class of positively skewed models with lighter and heavier tails than the BS model. Its failure rate admits several shapes, including the unimodal case, and its change-point can be used for different purposes, for example, to establish a reduction in a dose, and then in the cost, of a medical treatment. We analyze the failure rates of generalized BS distributions generated by the logistic, normal and Student-t distributions, considering their shape and change-point, estimating them, evaluating their robustness, assessing their performance by simulations, and applying the results to real data from different areas.
</p>projecteuclid.org/euclid.bjps/1551690036_20190304040045Mon, 04 Mar 2019 04:00 ESTA new log-linear bimodal Birnbaum–Saunders regression model with application to survival datahttps://projecteuclid.org/euclid.bjps/1551690037<strong>Francisco Cribari-Neto</strong>, <strong>Rodney V. Fonseca</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 329--355.</p><p><strong>Abstract:</strong><br/>
The log-linear Birnbaum–Saunders model has been widely used in empirical applications. We introduce an extension of this model based on a recently proposed version of the Birnbaum–Saunders distribution which is more flexible than the standard Birnbaum–Saunders law since its density may assume both unimodal and bimodal shapes. We show how to perform point estimation, interval estimation and hypothesis testing inferences on the parameters that index the regression model we propose. We also present a number of diagnostic tools, such as residual analysis, local influence, generalized leverage, generalized Cook’s distance and model misspecification tests. We investigate the usefulness of model selection criteria and the accuracy of prediction intervals for the proposed model. Results of Monte Carlo simulations are presented. Finally, we also present and discuss an empirical application.
</p>projecteuclid.org/euclid.bjps/1551690037_20190304040045Mon, 04 Mar 2019 04:00 ESTNecessary and sufficient conditions for the convergence of the consistent maximal displacement of the branching random walkhttps://projecteuclid.org/euclid.bjps/1551690038<strong>Bastien Mallein</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 356--373.</p><p><strong>Abstract:</strong><br/>
Consider a supercritical branching random walk on the real line. The consistent maximal displacement is the smallest of the distances between the trajectories followed by individuals at the $n$th generation and the boundary of the process. Fang and Zeitouni, and Faraud, Hu and Shi proved that under some integrability conditions, the consistent maximal displacement grows almost surely at rate $\lambda^{*}n^{1/3}$ for some explicit constant $\lambda^{*}$. We obtain here a necessary and sufficient condition for this asymptotic behaviour to hold.
</p>projecteuclid.org/euclid.bjps/1551690038_20190304040045Mon, 04 Mar 2019 04:00 ESTHierarchical modelling of power law processes for the analysis of repairable systems with different truncation times: An empirical Bayes approachhttps://projecteuclid.org/euclid.bjps/1551690039<strong>Rodrigo Citton P. dos Reis</strong>, <strong>Enrico A. Colosimo</strong>, <strong>Gustavo L. Gilardoni</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 374--396.</p><p><strong>Abstract:</strong><br/>
In the analysis of data from multiple repairable systems, it is usual to observe both different truncation times and heterogeneity among the systems. Among other reasons, the latter is caused by different manufacturing lines and maintenance teams for the systems. In this paper, a hierarchical model is proposed for the statistical analysis of multiple repairable systems under different truncation times. A reparameterization of the power law process is proposed in order to obtain a quasi-conjugate Bayesian analysis. An empirical Bayes approach is used to estimate the model hyperparameters, and the uncertainty in the estimates of these quantities is corrected by using a parametric bootstrap approach. The results are illustrated with a real data set of failure times of power transformers from an electric company in Brazil.
</p>projecteuclid.org/euclid.bjps/1551690039_20190304040045Mon, 04 Mar 2019 04:00 ESTA temporal perspective on the rate of convergence in first-passage percolation under a moment conditionhttps://projecteuclid.org/euclid.bjps/1551690040<strong>Daniel Ahlberg</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 397--401.</p><p><strong>Abstract:</strong><br/>
We study the rate of convergence in the celebrated Shape Theorem in first-passage percolation, obtaining the precise asymptotic rate of decay for the probability of linear order deviations under a moment condition. Our results are presented from a temporal perspective and complement previous work by the same author, in which the rate of convergence was studied from the standard spatial perspective.
</p>projecteuclid.org/euclid.bjps/1551690040_20190304040045Mon, 04 Mar 2019 04:00 ESTInfluence measures for the Waring regression modelhttps://projecteuclid.org/euclid.bjps/1551690041<strong>Luisa Rivas</strong>, <strong>Manuel Galea</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 402--424.</p><p><strong>Abstract:</strong><br/>
In this paper, we present a regression model where the response is a count variable that follows a Waring distribution. The Waring regression model allows for the analysis of phenomena where the Geometric regression model is inadequate, because the probability of success on each trial, $p$, is different for each individual and $p$ has an associated distribution. Estimation is performed by maximum likelihood, through the maximization of the $Q$-function using the EM algorithm. Diagnostic measures are calculated for this model. To illustrate the results, an application to real data is presented. Some specific details are given in the Appendix of the paper.
</p>projecteuclid.org/euclid.bjps/1551690041_20190304040045Mon, 04 Mar 2019 04:00 ESTA rank-based Cramér–von-Mises-type test for two sampleshttps://projecteuclid.org/euclid.bjps/1560153846<strong>Jamye Curry</strong>, <strong>Xin Dang</strong>, <strong>Hailin Sang</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 425--454.</p><p><strong>Abstract:</strong><br/>
We study a rank-based univariate two-sample distribution-free test. The test statistic is the difference between the average of between-group rank distances and the average of within-group rank distances. This test statistic is closely related to the two-sample Cramér–von Mises criterion: they are different empirical versions of the same quantity for testing the equality of two population distributions. Although they may differ for finite samples, they share the same expected value, variance and asymptotic properties. The advantage of the new rank-based test over the classical one is that it generalizes easily to the multivariate case. Rather than using the empirical process approach, we provide a different, easier proof, bringing in a different perspective and insight. In particular, we apply the Hájek projection and orthogonal decomposition technique in deriving the asymptotics of the proposed rank-based statistic. A numerical study compares the power performance of the rank formulation test with other commonly used nonparametric tests, and recommendations on those tests are provided. Lastly, we propose a multivariate extension of the test based on the spatial rank.
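The statistic can be transcribed directly from its verbal definition; the sketch below does exactly that for the univariate case, ignoring ties and any normalizing constants the paper may use:

```python
import numpy as np

def rank_distance_statistic(x, y):
    """Average between-group rank distance minus the average within-group
    rank distance, computed from the ranks of the pooled sample.
    Ties are assumed absent (continuous data)."""
    pooled = np.concatenate([x, y])
    ranks = pooled.argsort().argsort() + 1  # ranks 1..n+m of the pooled sample
    rx, ry = ranks[:len(x)], ranks[len(x):]
    between = np.abs(rx[:, None] - ry[None, :]).mean()
    within_x = np.abs(rx[:, None] - rx[None, :]).mean()
    within_y = np.abs(ry[:, None] - ry[None, :]).mean()
    return between - 0.5 * (within_x + within_y)

rng = np.random.default_rng(2)
same = rank_distance_statistic(rng.normal(size=100), rng.normal(size=100))
shifted = rank_distance_statistic(rng.normal(size=100), rng.normal(2, 1, size=100))
```

Under equal distributions the between- and within-group averages nearly cancel, while a location shift pushes the between-group rank distances up, so `shifted` is much larger than `same`.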
</p>projecteuclid.org/euclid.bjps/1560153846_20190610040413Mon, 10 Jun 2019 04:04 EDTL-Logistic regression models: Prior sensitivity analysis, robustness to outliers and applicationshttps://projecteuclid.org/euclid.bjps/1560153847<strong>Rosineide F. da Paz</strong>, <strong>Narayanaswamy Balakrishnan</strong>, <strong>Jorge Luis Bazán</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 455--479.</p><p><strong>Abstract:</strong><br/>
Tadikamalla and Johnson [ Biometrika 69 (1982) 461–465] developed the $L_{B}$ distribution for variables with bounded support by considering a transformation of the standard Logistic distribution. In this manuscript, a convenient parametrization of this distribution is proposed in order to develop regression models. This distribution, referred to here as the L-Logistic distribution, provides great flexibility and includes the uniform distribution as a particular case. Several properties of this distribution are studied, and a Bayesian approach is adopted for parameter estimation. Simulation studies covering prior sensitivity analysis, recovery of parameters, comparison of algorithms and robustness to outliers show that the results are insensitive to the choice of priors, that the adopted MCMC algorithm is efficient, and that the model is robust when compared with the beta distribution. Applications to the estimation of vulnerability to poverty and to the explanation of anxiety are presented. The results show that the L-Logistic regression models provide a better fit than the corresponding beta regression models.
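As a sketch of the underlying construction, one can draw from the $L_{B}$ family by pushing a standard logistic variate through the inverse of a Johnson-style transformation; the shape parameters `gamma` and `delta` below follow that convention and are not the reparametrization proposed in the manuscript:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_lb(gamma, delta, size):
    """Draw from an L_B-type distribution on (0, 1) by inverting the
    Johnson-style transformation Z = gamma + delta * logit(X), where Z
    follows a standard logistic law (a sketch of the construction only)."""
    z = rng.logistic(loc=0.0, scale=1.0, size=size)
    return 1.0 / (1.0 + np.exp(-(z - gamma) / delta))

x = sample_lb(gamma=0.0, delta=2.0, size=5000)
```

With `gamma = 0` the draws are symmetric about 0.5, and every draw lies strictly inside the unit interval, which is the bounded-support property the abstract refers to.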
</p>projecteuclid.org/euclid.bjps/1560153847_20190610040413Mon, 10 Jun 2019 04:04 EDTFractional backward stochastic variational inequalities with non-Lipschitz coefficienthttps://projecteuclid.org/euclid.bjps/1560153848<strong>Katarzyna Jańczak-Borkowska</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 480--497.</p><p><strong>Abstract:</strong><br/>
We prove the existence and uniqueness of the solution of backward stochastic variational inequalities with respect to fractional Brownian motion and with non-Lipschitz coefficient. We assume that $H>1/2$.
</p>projecteuclid.org/euclid.bjps/1560153848_20190610040413Mon, 10 Jun 2019 04:04 EDTSpatially adaptive Bayesian image reconstruction through locally-modulated Markov random field modelshttps://projecteuclid.org/euclid.bjps/1560153849<strong>Salem M. Al-Gezeri</strong>, <strong>Robert G. Aykroyd</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 498--519.</p><p><strong>Abstract:</strong><br/>
The use of Markov random field (MRF) models has proven to be a fruitful approach in a wide range of image processing applications. It allows local texture information to be incorporated in a systematic and unified way and allows statistical inference theory to be applied giving rise to novel output summaries and enhanced image interpretation. A great advantage of such low-level approaches is that they lead to flexible models, which can be applied to a wide range of imaging problems without the need for significant modification.
This paper proposes and explores the use of conditional MRF models for situations where multiple images are to be processed simultaneously, or where only a single image is to be reconstructed and a sequential approach is taken. Although the coupling of image intensity values is a special case of our approach, the main extension over previous proposals is to allow the direct coupling of other properties, such as smoothness or texture. This is achieved using a local modulating function which adjusts the influence of global smoothing without the need for a fully inhomogeneous prior model. Several modulating functions are considered and a detailed simulation study, motivated by remote sensing applications in archaeological geophysics, of conditional reconstruction is presented. The results demonstrate that a substantial improvement in the quality of the image reconstruction, in terms of errors and residuals, can be achieved using this approach, especially at locations with rapid changes in the underlying intensity.
</p>projecteuclid.org/euclid.bjps/1560153849_20190610040413Mon, 10 Jun 2019 04:04 EDTDensity for solutions to stochastic differential equations with unbounded drifthttps://projecteuclid.org/euclid.bjps/1560153850<strong>Christian Olivera</strong>, <strong>Ciprian Tudor</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 520--531.</p><p><strong>Abstract:</strong><br/>
Via a special transform and by using the techniques of the Malliavin calculus, we analyze the density of the solution to a stochastic differential equation with unbounded drift.
</p>projecteuclid.org/euclid.bjps/1560153850_20190610040413Mon, 10 Jun 2019 04:04 EDTA Jackson network under general regimehttps://projecteuclid.org/euclid.bjps/1560153851<strong>Yair Y. Shaki</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 532--548.</p><p><strong>Abstract:</strong><br/>
We consider a Jackson network in a general heavy traffic diffusion regime with the $\alpha$-parametrization. We also assume that each customer may abandon the system while waiting. We show that in this regime the queue-length process converges to a multi-dimensional regulated Ornstein–Uhlenbeck process.
</p>projecteuclid.org/euclid.bjps/1560153851_20190610040413Mon, 10 Jun 2019 04:04 EDTFake uniformity in a shape inversion formulahttps://projecteuclid.org/euclid.bjps/1560153852<strong>Christian Rau</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 549--557.</p><p><strong>Abstract:</strong><br/>
We revisit a shape inversion formula derived by Panaretos in the context of a particle density estimation problem with unknown rotation of the particle. A distribution is presented which imitates, or “fakes”, the uniformity or Haar distribution that is part of that formula.
</p>projecteuclid.org/euclid.bjps/1560153852_20190610040413Mon, 10 Jun 2019 04:04 EDTStochastic monotonicity from an Eulerian viewpointhttps://projecteuclid.org/euclid.bjps/1560153853<strong>Davide Gabrielli</strong>, <strong>Ida Germana Minelli</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 558--585.</p><p><strong>Abstract:</strong><br/>
Stochastic monotonicity is a well-known partial order relation between probability measures defined on the same partially ordered set. Strassen's theorem establishes the equivalence between stochastic monotonicity and the existence of a coupling compatible with the partial order. We consider the case of a countable set and introduce the class of finitely decomposable flows on a directed acyclic graph associated to the partial order. We show that a probability measure stochastically dominates another probability measure if and only if there exists a finitely decomposable flow having divergence given by the difference of the two measures. We illustrate the result with some examples.
</p>projecteuclid.org/euclid.bjps/1560153853_20190610040413Mon, 10 Jun 2019 04:04 EDTUnions of random walk and percolation on infinite graphshttps://projecteuclid.org/euclid.bjps/1560153854<strong>Kazuki Okamura</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 586--637.</p><p><strong>Abstract:</strong><br/>
We consider a random object that is associated with both random walks and random media, specifically, the superposition of a configuration of subcritical Bernoulli percolation on an infinite connected graph and the trace of the simple random walk on the same graph. We investigate asymptotics for the number of vertices of the enlargement of the trace of the walk until a fixed time, when the time tends to infinity. This process is more highly self-interacting than the range of random walk, which yields difficulties. We show a law of large numbers on vertex-transitive transient graphs. We compare the process on a vertex-transitive graph with the process on a finitely modified graph of the original vertex-transitive graph and show their behaviors are similar. We show that the process fluctuates almost surely on a certain non-vertex-transitive graph. On the two-dimensional integer lattice, by investigating the size of the boundary of the trace, we give an estimate for variances of the process implying a law of large numbers. We give an example of a graph with unbounded degrees on which the process behaves in a singular manner. As by-products, some results for the range and the boundary, which will be of independent interest, are obtained.
</p>projecteuclid.org/euclid.bjps/1560153854_20190610040413Mon, 10 Jun 2019 04:04 EDTEstimation of parameters in the $\operatorname{DDRCINAR}(p)$ modelhttps://projecteuclid.org/euclid.bjps/1560153855<strong>Xiufang Liu</strong>, <strong>Dehui Wang</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 638--673.</p><p><strong>Abstract:</strong><br/>
This paper discusses a $p$th-order dependence-driven random coefficient integer-valued autoregressive time series model ($\operatorname{DDRCINAR}(p)$). Stationarity and ergodicity properties are proved. Conditional least squares, weighted least squares and maximum quasi-likelihood are used to estimate the model parameters, and asymptotic properties of the estimators are presented. The performances of these estimators are investigated and compared via simulations. In certain regions of the parameter space, the simulation results show that maximum quasi-likelihood estimators perform better than the conditional least squares and weighted least squares estimators in terms of the proportion of within-$\Omega$ estimates. Finally, the model is applied to two real data sets.
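For readers unfamiliar with this model class, the following sketch simulates a first-order random coefficient INAR path built from binomial thinning; the uniform law for the thinning coefficient is an illustrative stand-in, not the dependence-driven coefficient structure of the $\operatorname{DDRCINAR}(p)$ model:

```python
import numpy as np

rng = np.random.default_rng(4)

def thin(alpha, x):
    """Binomial thinning alpha ∘ x: a sum of x independent Bernoulli(alpha)
    variables, the basic building block of INAR-type models."""
    return rng.binomial(x, alpha)

def simulate_rcinar1(n, alpha_low, alpha_high, lam):
    """Simulate a first-order random coefficient INAR path: at each step
    the thinning probability is drawn uniformly from [alpha_low, alpha_high]
    and Poisson(lam) innovations are added (illustrative choices only)."""
    x = np.zeros(n, dtype=int)
    for t in range(1, n):
        alpha_t = rng.uniform(alpha_low, alpha_high)
        x[t] = thin(alpha_t, x[t - 1]) + rng.poisson(lam)
    return x

path = simulate_rcinar1(500, 0.2, 0.4, lam=2.0)
```

The path stays integer-valued and non-negative by construction, with stationary mean roughly `lam / (1 - mean(alpha))`.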
</p>projecteuclid.org/euclid.bjps/1560153855_20190610040413Mon, 10 Jun 2019 04:04 EDTA note on monotonicity of spatial epidemic modelshttps://projecteuclid.org/euclid.bjps/1560153856<strong>Achillefs Tzioufas</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 674--684.</p><p><strong>Abstract:</strong><br/>
The epidemic process on a graph is considered, for which infectious contacts occur at a rate that depends on whether a susceptible is infected for the first time or not. We show that the Vasershtein coupling extends if and only if secondary infections occur at a rate greater than that of initial ones. Nonetheless we show that, with respect to the probability of occurrence of an infinite epidemic, the said proviso may be dropped for the totally asymmetric process in one dimension, thus settling in the affirmative this special case of the conjecture for arbitrary graphs due to [ Ann. Appl. Probab. 13 (2003) 669–690].
</p>projecteuclid.org/euclid.bjps/1560153856_20190610040413Mon, 10 Jun 2019 04:04 EDTPrefacehttps://projecteuclid.org/euclid.bjps/1566806426<p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 685--685.</p>projecteuclid.org/euclid.bjps/1566806426_20190826040045Mon, 26 Aug 2019 04:00 EDTSpatiotemporal point processes: regression, model specifications and future directionshttps://projecteuclid.org/euclid.bjps/1566806428<strong>Dani Gamerman</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 686--705.</p><p><strong>Abstract:</strong><br/>
Point processes are one of the most commonly encountered observation processes in Spatial Statistics. Model-based inference for them depends on the likelihood function. In the most standard setting of Poisson processes, the likelihood depends on the intensity function and cannot be computed analytically. A number of approximation techniques have been proposed to handle this difficulty. In this paper, we review recent work on exact solutions that solve this problem without resorting to approximations. The presentation concentrates more heavily on discrete time but also considers continuous time. The solutions are based on model specifications that impose smoothness constraints on the intensity function. We also review approaches to include a regression component and different ways to accommodate it while accounting for additional heterogeneity. Applications are provided to illustrate the results. Finally, we discuss possible extensions to account for discontinuities and/or jumps in the intensity function.
</p>projecteuclid.org/euclid.bjps/1566806428_20190826040045Mon, 26 Aug 2019 04:00 EDTKeeping the balance—Bridge sampling for marginal likelihood estimation in finite mixture, mixture of experts and Markov mixture modelshttps://projecteuclid.org/euclid.bjps/1566806429<strong>Sylvia Frühwirth-Schnatter</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 706--733.</p><p><strong>Abstract:</strong><br/>
Finite mixture models and their extensions to Markov mixture and mixture of experts models are very popular in analysing data of various kind. A challenge for these models is choosing the number of components based on marginal likelihoods. The present paper suggests two innovative, generic bridge sampling estimators of the marginal likelihood that are based on constructing balanced importance densities from the conditional densities arising during Gibbs sampling. The full permutation bridge sampling estimator is derived from considering all possible permutations of the mixture labels for a subset of these densities. For the double random permutation bridge sampling estimator, two levels of random permutations are applied, first to permute the labels of the MCMC draws and second to randomly permute the labels of the conditional densities arising during Gibbs sampling. Various applications show very good performance of these estimators in comparison to importance and to reciprocal importance sampling estimators derived from the same importance densities.
</p>projecteuclid.org/euclid.bjps/1566806429_20190826040045Mon, 26 Aug 2019 04:00 EDTThe limiting distribution of the Gibbs sampler for the intrinsic conditional autoregressive modelhttps://projecteuclid.org/euclid.bjps/1566806430<strong>Marco A. R. Ferreira</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 734--744.</p><p><strong>Abstract:</strong><br/>
We study the limiting behavior of the one-at-a-time Gibbs sampler for the intrinsic conditional autoregressive model with centering on the fly. The intrinsic conditional autoregressive model is widely used as a prior for random effects in hierarchical models for spatial modeling. This model is defined by full conditional distributions that imply an improper joint “density” with a multivariate Gaussian kernel and a singular precision matrix. To guarantee propriety of the posterior distribution, usually at the end of each iteration of the Gibbs sampler the random effects are centered to sum to zero in what is widely known as centering on the fly. While this works well in practice, this informal computational way to recenter the random effects obscures their implied prior distribution and prevents the development of formal Bayesian procedures. Here we show that the implied prior distribution, that is, the limiting distribution of the one-at-a-time Gibbs sampler for the intrinsic conditional autoregressive model with centering on the fly is a singular Gaussian distribution with a covariance matrix that is the Moore–Penrose inverse of the precision matrix. This result has important implications for the development of formal Bayesian procedures such as reference priors and Bayes-factor-based model selection for spatial models.
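The covariance result can be checked numerically on a toy neighbourhood graph. The sketch below (a hypothetical 4-node path graph) only verifies properties of the stated limiting distribution, namely that the ICAR precision is singular and that its Moore–Penrose inverse concentrates mass on the sum-to-zero subspace; it does not implement the Gibbs sampler itself:

```python
import numpy as np

# Adjacency matrix of a 4-node path graph 1-2-3-4 (a toy spatial neighbourhood)
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))
Q = D - W                      # ICAR precision: row sums are zero, so Q is singular
Sigma = np.linalg.pinv(Q)      # Moore-Penrose inverse = limiting covariance

rank = np.linalg.matrix_rank(Q)   # n - 1 for a connected graph
row_sums = Sigma.sum(axis=1)      # zero: the distribution lives on the sum-to-zero subspace
```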
</p>projecteuclid.org/euclid.bjps/1566806430_20190826040045Mon, 26 Aug 2019 04:00 EDTBayesian hypothesis testing: Reduxhttps://projecteuclid.org/euclid.bjps/1566806431<strong>Hedibert F. Lopes</strong>, <strong>Nicholas G. Polson</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 745--755.</p><p><strong>Abstract:</strong><br/>
Bayesian hypothesis testing is re-examined from the perspective of an a priori assessment of the test statistic distribution under the alternative. By assessing the distribution of an observable test statistic, rather than prior parameter values, we revisit the seminal paper of Edwards, Lindman and Savage ( Psychol. Rev. 70 (1963) 193–242). There are a number of important take-aways from comparing the Bayesian paradigm via Bayes factors to frequentist ones. We provide examples where evidence for a Bayesian strikingly supports the null, but leads to rejection under a classical test. Finally, we conclude with directions for future research.
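A minimal sketch of the kind of discrepancy described above, in the spirit of Edwards, Lindman and Savage: assume the observed test statistic is $z \sim N(0,1)$ under the null and, under the alternative, has the marginal $N(0, 1+\tau^2)$ induced by a normal prior on the mean. All numbers are illustrative, not taken from the paper:

```python
from math import exp, pi, sqrt

def normal_pdf(x, var):
    return exp(-x * x / (2.0 * var)) / sqrt(2.0 * pi * var)

def bf01(z, tau2):
    """Bayes factor (null over alternative) for an observed z-statistic when the
    alternative induces the marginal z ~ N(0, 1 + tau2)."""
    return normal_pdf(z, 1.0) / normal_pdf(z, 1.0 + tau2)

z = 1.96                 # borderline significant at the 5% level
bf_diffuse = bf01(z, 100.0)  # diffuse prior: the Bayes factor favours the null
bf_tight = bf01(z, 1.0)      # tight prior: the Bayes factor favours the alternative
```

With a diffuse prior ($\tau^2 = 100$) the Bayes factor exceeds 1 even though a classical 5%-level test rejects, which is exactly the kind of striking disagreement the abstract refers to.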
</p>projecteuclid.org/euclid.bjps/1566806431_20190826040045Mon, 26 Aug 2019 04:00 EDTTime series of count data: A review, empirical comparisons and data analysishttps://projecteuclid.org/euclid.bjps/1566806432<strong>Glaura C. Franco</strong>, <strong>Helio S. Migon</strong>, <strong>Marcos O. Prates</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 756--781.</p><p><strong>Abstract:</strong><br/>
Observation and parameter driven models are commonly used in the literature to analyse time series of counts. In this paper, we study the characteristics of a variety of models and point out the main differences and similarities among these procedures, concerning parameter estimation, model fitting and forecasting. Unlike much of the literature, all inference is performed under the Bayesian paradigm. The models are fitted with a latent AR($p$) process in the mean, which accounts for autocorrelation in the data. An extensive simulation study shows that the estimates for the covariate parameters are remarkably similar across the different models. However, estimates for autoregressive coefficients and forecasts of future values depend heavily on the underlying process which generates the data. A real data set of bankruptcy in the United States is also analysed.
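A parameter-driven count series of the kind studied here can be sketched as a Poisson observation equation with a latent AR(1) in the log-mean (the $p=1$ special case of the latent AR($p$) structure; all parameter values below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_counts(n, beta0, beta1, phi, sigma, covariate):
    """Parameter-driven Poisson counts with a latent AR(1) in the log-mean:
    alpha_t = phi * alpha_{t-1} + eps_t,  eps_t ~ N(0, sigma^2),
    y_t ~ Poisson(exp(beta0 + beta1 * x_t + alpha_t))."""
    alpha = np.zeros(n)
    for t in range(1, n):
        alpha[t] = phi * alpha[t - 1] + rng.normal(0.0, sigma)
    mean = np.exp(beta0 + beta1 * covariate + alpha)
    return rng.poisson(mean)

x = rng.normal(size=200)
y = simulate_counts(200, beta0=1.0, beta1=0.5, phi=0.6, sigma=0.3, covariate=x)
```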
</p>projecteuclid.org/euclid.bjps/1566806432_20190826040045Mon, 26 Aug 2019 04:00 EDTBayesian modelling of the abilities in dichotomous IRT models via regression with missing values in the covariateshttps://projecteuclid.org/euclid.bjps/1566806433<strong>Flávio B. Gonçalves</strong>, <strong>Bárbara C. C. Dias</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 782--800.</p><p><strong>Abstract:</strong><br/>
Educational assessment usually considers a contextual questionnaire to extract relevant information from the applicants. This may include items related to socio-economic profile as well as items to extract other characteristics potentially related to the applicant’s performance in the test. A careful analysis of the questionnaires jointly with the test’s results may reveal important relations between profiles and test performance. The most coherent way to perform this task in a statistical context is to use the information from the questionnaire to help explain the variability of the abilities in a joint model-based approach. Nevertheless, the responses to the questionnaire typically present missing values which, in some cases, may be missing not at random. This paper proposes a statistical methodology to model the abilities in dichotomous IRT models using the information of the contextual questionnaires via linear regression. The proposed methodology models the missing data jointly with all the observed data, which allows for the estimation of the former. The missing data modelling is flexible enough to allow the specification of missing not at random structures. Furthermore, even if those structures are not assumed a priori, they can be estimated from the posterior results when assuming missing (completely) at random structures a priori. Statistical inference is performed under the Bayesian paradigm via an efficient MCMC algorithm. Simulated and real examples are presented to investigate the efficiency and applicability of the proposed methodology.
</p>projecteuclid.org/euclid.bjps/1566806433_20190826040045Mon, 26 Aug 2019 04:00 EDTOption pricing with bivariate risk-neutral density via copula and heteroscedastic model: A Bayesian approachhttps://projecteuclid.org/euclid.bjps/1566806434<strong>Lucas Pereira Lopes</strong>, <strong>Vicente Garibay Cancho</strong>, <strong>Francisco Louzada</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 801--825.</p><p><strong>Abstract:</strong><br/>
Multivariate options are adequate tools for multi-asset risk management. The pricing models derived from the pioneering Black and Scholes method under the multivariate case consider that the asset-object prices follow a geometric Brownian motion. However, the construction of such methods imposes some unrealistic constraints on the process of fair option calculation, such as constant volatility over the maturity time and linear correlation between the assets. Therefore, this paper aims to price and analyze the fair price behavior of the call-on-max (bivariate) option considering marginal heteroscedastic models with dependence structure modeled via copulas. Concerning inference, we adopt a Bayesian perspective and computationally intensive methods based on Markov chain Monte Carlo (MCMC) simulations. A simulation study examines the bias and the root mean squared errors of the posterior means for the parameters. Real stock prices of Brazilian banks illustrate the approach. For the proposed method, we verify the effects of the strike and the dependence structure on the fair price of the option. The results show that the prices obtained by our heteroscedastic model approach and copulas differ substantially from the prices obtained by the model derived from Black and Scholes. Empirical results are presented to demonstrate the advantages of our strategy.
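For reference, the constant-volatility baseline that such papers compare against can be sketched as a Monte Carlo price of the call-on-max payoff $\max(\max(S_1, S_2) - K, 0)$ under lognormal marginals linked by a Gaussian copula (essentially the bivariate Black–Scholes case, not the heteroscedastic copula model itself; all parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

def call_on_max_price(s1, s2, k, r, sig1, sig2, rho, T, n=200_000):
    """Monte Carlo price of a call-on-max option, payoff max(max(S1, S2) - K, 0),
    under risk-neutral lognormal marginals with Gaussian-copula correlation rho."""
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
    S1 = s1 * np.exp((r - 0.5 * sig1**2) * T + sig1 * np.sqrt(T) * z1)
    S2 = s2 * np.exp((r - 0.5 * sig2**2) * T + sig2 * np.sqrt(T) * z2)
    payoff = np.maximum(np.maximum(S1, S2) - k, 0.0)
    return np.exp(-r * T) * payoff.mean()

p_low  = call_on_max_price(100, 100, 100, 0.05, 0.2, 0.2, rho=0.2, T=1.0)
p_high = call_on_max_price(100, 100, 100, 0.05, 0.2, 0.2, rho=0.9, T=1.0)
```

Note that stronger dependence lowers the call-on-max price: when the two assets move together, the maximum behaves more like a single asset and benefits less from diversification.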
</p>projecteuclid.org/euclid.bjps/1566806434_20190826040045Mon, 26 Aug 2019 04:00 EDTBayesian approach for the zero-modified Poisson–Lindley regression modelhttps://projecteuclid.org/euclid.bjps/1566806435<strong>Wesley Bertoli</strong>, <strong>Katiane S. Conceição</strong>, <strong>Marinho G. Andrade</strong>, <strong>Francisco Louzada</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 826--860.</p><p><strong>Abstract:</strong><br/>
The primary goal of this paper is to introduce the zero-modified Poisson–Lindley regression model as an alternative to model overdispersed count data exhibiting inflation or deflation of zeros in the presence of covariates. The zero-modification is incorporated by considering that a zero-truncated process produces positive observations and consequently, the proposed model can be fitted without any previous information about the zero-modification present in a given dataset. A fully Bayesian approach based on the g-prior method has been considered for inference purposes. An intensive Monte Carlo simulation study has been conducted to evaluate the performance of the developed methodology and the maximum likelihood estimators. The proposed model was considered for the analysis of a real dataset on the number of bids received by $126$ U.S. firms between 1978 and 1985, and the impact of choosing different prior distributions for the regression coefficients has been studied. A sensitivity analysis to detect influential points has been performed based on the Kullback–Leibler divergence. A general comparison with some well-known regression models for discrete data has been presented.
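The zero-modification described above can be sketched directly: an arbitrary probability mass at zero, with positive counts drawn from the zero-truncated Poisson–Lindley distribution. The sketch below uses Sankaran's Poisson–Lindley pmf, $P(Y=y) = \theta^2 (y + \theta + 2)/(\theta+1)^{y+3}$; the paper's exact parameterization may differ:

```python
def pl_pmf(y, theta):
    """Poisson-Lindley pmf: theta^2 * (y + theta + 2) / (theta + 1)^(y + 3)."""
    return theta**2 * (y + theta + 2) / (theta + 1) ** (y + 3)

def zmpl_pmf(y, theta, p0):
    """Zero-modified Poisson-Lindley: probability mass p0 at zero (inflated or
    deflated relative to pl_pmf(0, theta)), positives from the zero-truncated
    Poisson-Lindley distribution."""
    if y == 0:
        return p0
    return (1.0 - p0) * pl_pmf(y, theta) / (1.0 - pl_pmf(0, theta))
```

Because the mass at zero is a free parameter, the same formula covers both zero inflation ($p_0$ above the baseline probability of zero) and zero deflation ($p_0$ below it).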
</p>projecteuclid.org/euclid.bjps/1566806435_20190826040045Mon, 26 Aug 2019 04:00 EDTSubjective Bayesian testing using calibrated prior probabilitieshttps://projecteuclid.org/euclid.bjps/1566806436<strong>Dan J. Spitzner</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 861--893.</p><p><strong>Abstract:</strong><br/>
This article proposes a calibration scheme for Bayesian testing that coordinates analytically-derived statistical performance considerations with expert opinion. In other words, the scheme is effective and meaningful for incorporating objective elements into subjective Bayesian inference. It explores a novel role for default priors as anchors for calibration rather than substitutes for prior knowledge. Ideas are developed for use with multiplicity adjustments in multiple-model contexts, and to address the issue of prior sensitivity of Bayes factors. Along the way, the performance properties of an existing multiplicity adjustment related to the Poisson distribution are clarified theoretically. Connections of the overall calibration scheme to the Schwarz criterion are also explored. The proposed framework is examined and illustrated on a number of existing data sets related to problems in clinical trials, forensic pattern matching, and log-linear models methodology.
</p>projecteuclid.org/euclid.bjps/1566806436_20190826040045Mon, 26 Aug 2019 04:00 EDTBayesian inference on power Lindley distribution based on different loss functionshttps://projecteuclid.org/euclid.bjps/1566806437<strong>Abbas Pak</strong>, <strong>M. E. Ghitany</strong>, <strong>Mohammad Reza Mahmoudi</strong>. <p><strong>Source: </strong>Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 894--914.</p><p><strong>Abstract:</strong><br/>
This paper focuses on Bayesian estimation of the parameters and reliability function of the power Lindley distribution by using various symmetric and asymmetric loss functions. Assuming suitable priors on the parameters, Bayes estimates are derived by using squared error, linear exponential (linex) and general entropy loss functions. Since, under these loss functions, Bayes estimates of the parameters do not have closed forms, we use Lindley’s approximation technique to calculate the Bayes estimates. Moreover, we obtain the Bayes estimates of the parameters using a Markov Chain Monte Carlo (MCMC) method. Simulation studies are conducted in order to evaluate the performances of the proposed estimators under the considered loss functions. Finally, analysis of a real data set is presented for illustrative purposes.
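Given MCMC draws from a posterior, the Bayes estimates under these loss functions have standard forms: the posterior mean under squared error loss, $-\frac{1}{c}\log E[e^{-c\theta}]$ under linex loss, and $(E[\theta^{-q}])^{-1/q}$ under general entropy loss. The sketch below uses gamma draws as a stand-in for MCMC posterior output; the paper's loss-function parameterizations may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
draws = rng.gamma(shape=5.0, scale=0.4, size=100_000)  # stand-in posterior sample

def bayes_sel(theta):
    """Squared error loss -> posterior mean."""
    return theta.mean()

def bayes_linex(theta, c):
    """Linex loss -> -(1/c) * log E[exp(-c * theta)]; c > 0 penalizes
    overestimation, pulling the estimate below the posterior mean."""
    return -np.log(np.mean(np.exp(-c * theta))) / c

def bayes_ge(theta, q):
    """General entropy loss -> (E[theta^-q])^(-1/q)."""
    return np.mean(theta ** (-q)) ** (-1.0 / q)

est_sel   = bayes_sel(draws)
est_linex = bayes_linex(draws, c=1.0)
est_ge    = bayes_ge(draws, q=1.0)
```

For the gamma stand-in the three expectations are available in closed form, so the Monte Carlo estimates can be verified exactly.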
</p>projecteuclid.org/euclid.bjps/1566806437_20190826040045Mon, 26 Aug 2019 04:00 EDT