Let π denote the intractable posterior density that results when the likelihood from a multivariate linear regression model with errors from a scale mixture of normals is combined with the standard non-informative prior. There is a simple data augmentation algorithm (based on latent data from the mixing density) that can be used to explore π. Let h and d denote the mixing density and the dimension of the regression model, respectively. Hobert et al. (2018) have recently shown that, if h converges to 0 at the origin at an appropriate rate, and ∫_0^∞ u^{d/2} h(u) du < ∞, then the Markov chains underlying the data augmentation (DA) algorithm and an alternative Haar parameter expanded DA (PX-DA) algorithm are both geometrically ergodic. Their results are established using probabilistic techniques based on drift and minorization conditions. In this paper, spectral analytic techniques are used to establish that something much stronger than geometric ergodicity often holds. In particular, it is shown that, under simple conditions on h, the Markov operators defined by the DA and Haar PX-DA Markov chains are trace-class, i.e., compact with summable eigenvalues. Many standard mixing densities satisfy the conditions developed in this paper. Indeed, the new results imply that the DA and Haar PX-DA Markov operators are trace-class whenever the mixing density is generalized inverse Gaussian, log-normal, Fréchet (with shape parameter larger than d/2), or inverted Gamma (with shape parameter larger than d/2). (C) 2018 Elsevier Inc. All rights reserved.
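For several of the mixing families named in the abstract, the moment condition ∫_0^∞ u^{d/2} h(u) du < ∞ can be evaluated in closed form. As a minimal illustration (not part of the paper), the sketch below checks it by Monte Carlo for an inverted-Gamma mixing density with shape α and rate β, for which the moment equals β^{d/2} Γ(α − d/2)/Γ(α) and is finite exactly when α > d/2; the parameter values here are arbitrary choices for the demonstration.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(0)

# Moment condition from the abstract: int_0^inf u^{d/2} h(u) du < inf.
# For an inverted-Gamma(alpha, beta) mixing density this moment is
# E[U^{d/2}] = beta^{d/2} * Gamma(alpha - d/2) / Gamma(alpha),
# which is finite iff alpha > d/2.
d, alpha, beta = 3, 4.0, 1.0  # alpha > d/2, so the moment is finite

# inverted-Gamma draws: U = 1/G with G ~ Gamma(alpha, scale = 1/beta)
u = 1.0 / rng.gamma(alpha, 1.0 / beta, size=1_000_000)
mc = (u ** (d / 2)).mean()                               # Monte Carlo estimate
exact = beta ** (d / 2) * gamma(alpha - d / 2) / gamma(alpha)
print(mc, exact)  # the two values should agree to a few decimal places
```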
When Gaussian errors are inappropriate in a multivariate linear regression setting, it is often assumed that the errors are iid from a distribution that is a scale mixture of multivariate normals. Combining this robust regression model with a default prior on the unknown parameters results in a highly intractable posterior density. Fortunately, there is a simple data augmentation (DA) algorithm and a corresponding Haar PX-DA algorithm that can be used to explore this posterior. This paper provides conditions (on the mixing density) for geometric ergodicity of the Markov chains underlying these Markov chain Monte Carlo algorithms. Letting d denote the dimension of the response, the main result shows that the DA and Haar PX-DA Markov chains are geometrically ergodic whenever the mixing density is generalized inverse Gaussian, log-normal, inverted Gamma (with shape parameter larger than d/2) or Fréchet (with shape parameter larger than d/2). The results also apply to certain subsets of the Gamma, F and Weibull families.
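To make the DA algorithm concrete, the sketch below implements one special case not taken verbatim from the paper: a univariate response with Student-t errors (a scale mixture of normals with inverted-Gamma mixing), a flat prior on the coefficients, and unit error scale. Each iteration draws the latent mixing variables given the coefficients (I-step), then the coefficients given the latent data (P-step); the function and variable names are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

def da_step(beta, X, y, nu):
    # I-step: latent mixing variables given the current beta;
    # for t_nu errors, u_i | beta, y ~ Gamma((nu + 1)/2, rate = (nu + r_i^2)/2)
    resid = y - X @ beta
    u = rng.gamma((nu + 1) / 2, 2.0 / (nu + resid ** 2))
    # P-step: beta | u, y is Gaussian (a weighted least-squares form)
    cov = np.linalg.inv(X.T @ (u[:, None] * X))
    mean = cov @ (X.T @ (u * y))
    return rng.multivariate_normal(mean, cov)

# Toy data: linear regression with t_4 errors
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, -2.0])
y = X @ beta_true + rng.standard_t(df=4, size=n)

beta = np.zeros(2)
draws = np.empty((500, 2))
for t in range(500):
    beta = da_step(beta, X, y, nu=4.0)
    draws[t] = beta
```

After discarding a short burn-in, the posterior mean of the draws should sit close to beta_true; the geometric ergodicity results above are what justify trusting such MCMC averages and their standard errors.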
In this article, we consider Markov chain Monte Carlo (MCMC) algorithms for exploring the intractable posterior density associated with Bayesian probit linear mixed models under improper priors on the regression coefficients and variance components. In particular, we construct a two-block Gibbs sampler using data augmentation (DA) techniques. Furthermore, we prove geometric ergodicity of the Gibbs sampler, which is the foundation for building central limit theorems for MCMC-based estimators and subsequent inferences. The conditions for geometric convergence are similar to those guaranteeing posterior propriety. We also provide conditions for the propriety of posterior distributions with a general link function when the design matrices take commonly observed forms. In general, the Haar parameter expansion for DA (PX-DA) algorithm improves on the DA algorithm and has been shown to be theoretically at least as good. Here we construct a Haar PX-DA algorithm, which has essentially the same computational cost as the two-block Gibbs sampler.
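The two-block DA structure described above can be illustrated on the simpler fixed-effects probit model (the article's mixed model adds random effects and variance components, omitted here). This is the classic Albert-Chib sampler under a flat prior on the coefficients: draw truncated-normal latent variables given the coefficients, then draw the coefficients from a Gaussian given the latent data. All names below are mine, and the rejection sampler is a deliberately crude stand-in for a proper truncated-normal sampler.

```python
import numpy as np

rng = np.random.default_rng(1)

def trunc_normal(mu, positive):
    # crude rejection sampler for N(mu, 1) restricted to (0, inf) or (-inf, 0);
    # adequate for a toy example, not for production use
    while True:
        z = rng.normal(mu)
        if (z > 0) == positive:
            return z

def probit_da_step(beta, X, y, XtX_inv):
    # I-step: latent z_i ~ N(x_i' beta, 1), truncated to the half-line
    # implied by the observed binary response y_i
    mu = X @ beta
    z = np.array([trunc_normal(m, yi == 1) for m, yi in zip(mu, y)])
    # P-step: under the flat prior, beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1})
    mean = XtX_inv @ (X.T @ z)
    return rng.multivariate_normal(mean, XtX_inv)

# Toy data from a probit model
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 1.0])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
beta = np.zeros(2)
draws = np.empty((500, 2))
for t in range(500):
    beta = probit_da_step(beta, X, y, XtX_inv)
    draws[t] = beta
```

The two conditional draws above are exactly the two blocks of the Gibbs sampler; the Haar PX-DA variant inserts an extra low-dimensional move between them at essentially no additional cost per iteration.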