We apply the Kalman filter to the analysis of multi-unit variance components models in which each unit's response profile follows a state space model. We use mixed model results to obtain estimates of unit-specific random effects, state disturbance terms and residual noise terms, and we use the signal extraction approach to smooth individual profiles. We show how to use the Kalman filter to compute the restricted loglikelihood of the model efficiently. For the important special case where each unit's response profile follows a continuous structural time series model with known transition matrix, we derive an EM algorithm for the restricted maximum likelihood (REML) estimation of the variance components. We present details for the case where individual profiles are modeled as local polynomial trends or polynomial smoothing splines.
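The loglikelihood computation above rests on the Kalman filter's prediction-error decomposition. A minimal sketch for a scalar local level model (not the paper's multi-unit REML setup; all names are illustrative assumptions):

```python
# Sketch: prediction-error decomposition of the Gaussian loglikelihood
# for a local level model  y_t = mu_t + eps_t,  mu_t = mu_{t-1} + xi_t.
# sigma2_eps / sigma2_xi are assumed names for the two variance components.
import numpy as np

def kalman_loglik(y, sigma2_eps, sigma2_xi, a0=0.0, p0=1e7):
    """Approximately diffuse prior via large p0; returns the loglikelihood."""
    a, p, loglik = a0, p0, 0.0
    for yt in y:
        f = p + sigma2_eps                  # prediction error variance
        v = yt - a                          # one-step prediction error
        loglik += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p / f                           # Kalman gain
        a = a + k * v                       # filtered state mean
        p = p * (1 - k) + sigma2_xi         # next predicted state variance
    return loglik
```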
We show that a non-identifiability problem can occur when one attempts to estimate the degrees of freedom and the unstructured covariance matrix simultaneously in a Gaussian-inverted Wishart model using the maximum likelihood approach. However, the EM algorithm may falsely appear to converge in this case, yielding finite estimates simply because of the convergence criterion used. An alternative approach is proposed to overcome the problem. (C) 1998 Elsevier Science B.V. All rights reserved.
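One practical takeaway, sketched generically below (this is my own illustration, not the paper's proposed alternative): a stop rule based only on small loglikelihood changes should be paired with a look at the parameter path, since the loglikelihood can flatten while a parameter such as the Wishart degrees of freedom keeps climbing. `em_step` and the `theta` dictionary are assumed placeholders.

```python
# Generic EM driver with a loglikelihood-change stop rule; the caller
# should also inspect the parameter path (e.g. theta["nu"]) at the stop
# point, because "convergence" here can mask non-identifiability.
def run_em(em_step, theta, tol=1e-8, max_iter=5000):
    ll_old = -float("inf")
    for it in range(max_iter):
        theta, ll = em_step(theta)          # one E+M sweep; user-supplied
        if abs(ll - ll_old) < tol * (1 + abs(ll)):
            break
        ll_old = ll
    return theta, ll, it
```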
This paper presents techniques for parameter estimation in heteroskedastic mixed models having (i) heterogeneous log residual variances described by a linear model of explanatory covariates and (ii) log residual and log u-components that are linearly related, which makes the intraclass correlation a monotonic function of the residual variance. The cases of a homogeneous variance ratio and of a homogeneous u-component of variance are also included in this parameterization. Estimation and testing procedures for the corresponding dispersion parameters are based on restricted maximum likelihood. Estimating equations are derived using the standard and gradient EM algorithms. The analysis of a small example is outlined to illustrate the theory. (C) Inra/Elsevier, Paris.
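To make the parameterization concrete, a small numeric sketch (symbols are my own, not the paper's notation): with log sigma2_u = a + b * log sigma2_e, the slope b = 1 gives the homogeneous-variance-ratio case, so the intraclass correlation is constant.

```python
# Toy illustration: heterogeneous log residual variances from a linear
# model, log sigma2_e = p(x)' delta, with the log u-component linearly
# related to it. rho = s2u / (s2u + s2e) is monotone in s2e; it is
# constant exactly when b = 1 (homogeneous variance ratio).
import numpy as np

P = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # covariate rows p(x)'
delta = np.array([0.1, 0.4])                        # dispersion coefficients
a, b = -0.7, 1.0                                    # b = 1: constant ratio

log_s2e = P @ delta
s2e = np.exp(log_s2e)
s2u = np.exp(a + b * log_s2e)
rho = s2u / (s2u + s2e)
print(np.round(rho, 3))                             # identical entries here
```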
The Baum-Welch (EM) algorithm is a familiar tool for calculating the maximum likelihood estimate of the parameters in hidden Markov chain models. For the particular case of a binary Markov chain corrupted by binary channel noise, a detailed study is carried out of the influence that the initial conditions have on the results produced by the algorithm. (C) 1998 Elsevier Science B.V. All rights reserved.
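For reference, a compact Baum-Welch sketch for this exact setting: a two-state hidden Markov chain observed through a binary symmetric channel with flip probability e. The scaling scheme and names are mine; rerunning it from different initial (A, e, pi) exhibits the kind of sensitivity the paper studies.

```python
# Baum-Welch for a binary hidden chain with binary channel noise.
# y: int array of 0/1 observations; A: 2x2 transition matrix; e: channel
# flip probability; pi: initial state distribution.
import numpy as np

def baum_welch(y, A, e, pi, n_iter=100):
    y = np.asarray(y, dtype=int); T = len(y)
    for _ in range(n_iter):
        B = np.array([[1 - e, e], [e, 1 - e]])        # emission = channel
        # scaled forward pass
        alpha = np.zeros((T, 2)); c = np.zeros(T)
        alpha[0] = pi * B[:, y[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, y[t]]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        # scaled backward pass
        beta = np.zeros((T, 2)); beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, y[t + 1]] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta                          # P(x_t | y)
        xi = (alpha[:-1, :, None] * A[None] *
              (B[:, y[1:]].T * beta[1:])[:, None, :] / c[1:, None, None])
        # M-step: transition rows, flip probability, initial distribution
        A = xi.sum(0) / gamma[:-1].sum(0)[:, None]
        e = (gamma * (y[:, None] != np.arange(2))).sum() / gamma.sum()
        pi = gamma[0]
    return A, e, pi
```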
Latent class analysis assumes the existence of a categorical latent variable that explains the relations between a set of categorical manifest variables. Simultaneous latent class analysis deals with sets of multiway contingency tables simultaneously; in this way an explanatory categorical grouping variable is related to latent class results. In this article we discuss a tool called the concomitant-variable latent-class model, which generalizes this work to continuous explanatory variables. An EM estimation procedure for the model is worked out in detail, and the model is applied to an example on juvenile delinquency.
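The likelihood structure can be sketched as follows: class membership probabilities depend on a continuous covariate z through a multinomial logit, while the manifest items remain conditionally independent given the class. Names and shapes below are assumptions of this sketch, not the article's code.

```python
# Concomitant-variable latent class loglikelihood (binary items).
import numpy as np

def loglik(Y, z, beta, rho):
    """Y: (n, J) binary items; z: (n,) continuous covariate;
    beta: (C, 2) logit intercept/slope per class (class 0 as reference);
    rho: (C, J) item-response probabilities per class."""
    eta = beta[:, 0] + np.outer(z, beta[:, 1])        # (n, C) logits
    eta[:, 0] = 0.0                                   # reference class
    w = np.exp(eta - eta.max(1, keepdims=True))
    w /= w.sum(1, keepdims=True)                      # P(c | z_i)
    # P(y_i | c) under conditional independence of the J items
    py = np.prod(np.where(Y[:, None, :] == 1, rho[None], 1 - rho[None]), 2)
    return np.log((w * py).sum(1)).sum()
```

An EM fit would alternate posterior class probabilities with weighted logistic and item-probability updates.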
Health impact studies of air pollution often require estimates of pollutant concentrations at locations where monitoring data are not available, using the concentrations observed at other monitoring stations and possibly at different time periods. Recently, a Bayesian approach for such a temporal and spatial interpolation problem has been proposed by Le, Sun and Zidek (1997). One special feature of the method is that it does not require all sites to monitor the same set of pollutants. This feature is particularly relevant in environmental health studies where pollution data are often pooled together from several monitoring networks which may or may not monitor the same set of pollutants. The methodology is applied to data from the Province of Ontario, where monthly average concentrations for summer months of nitrogen dioxide (NO2, in µg/m³), ozone (O3, in ppb), sulphur dioxide (SO2, in µg/m³) and sulfate ion (SO4, in µg/m³) are available for the period from January 1, 1983 to December 31, 1988 at 31 ambient monitoring sites. Detailed descriptions of spatial interpolation for air pollutant concentrations at 37 approximate centroids of Public Health Units in Ontario using all available data are presented. The methodology is empirically assessed by a cross-validation study where each of the 31 sites is successively removed and the remaining sites are used to predict its concentration levels. The methodology seems to perform well. (C) 1998 John Wiley & Sons, Ltd.
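The cross-validation scheme is easy to express generically; `interpolate` below is an assumed placeholder standing in for the Le-Sun-Zidek predictive step, not their actual interface.

```python
# Leave-one-site-out assessment: drop each monitoring site, predict it
# from the rest, and summarize the held-out errors.
import numpy as np

def loo_assessment(sites, records, interpolate):
    errors = {}
    for s in sites:
        train = {k: v for k, v in records.items() if k != s}
        pred = interpolate(train, target=s)     # predict the held-out site
        errors[s] = np.asarray(records[s]) - np.asarray(pred)
    return {s: float(np.sqrt((e ** 2).mean())) for s, e in errors.items()}
```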
ISBN: 0819428221 (print)
The Probabilistic Multi-Hypothesis Tracker (PMHT) of Streit and Luginbuhl [1] uses the EM algorithm and a slight modification of the usual target-tracking assumptions to combine data association and filtering. The performance of the PMHT to date has been comparable to that of existing tracking algorithms; however, part of its appeal is a consistent and extensible statistical foundation, and it is the extension to the tracking of maneuvering targets that we explore in this paper. The basis, as with many algorithms designed for maneuvering targets, is an underlying hidden "model-switch" process controlled by a Markov probability structure. Performance of the modified PMHT is investigated for both maneuvering and non-maneuvering targets. The improved performance observed in the latter case is somewhat surprising.
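As a toy of the hidden "model-switch" layer, a two-state Markov chain picks the active motion model at each step. All values are illustrative; the two modes here share a constant-velocity transition and differ only in process noise, a common jump-Markov setup rather than the paper's exact configuration.

```python
# Simulate a Markov-switched motion model: mode 0 is quiescent, mode 1 is
# a "maneuver" mode with inflated process noise.
import numpy as np

rng = np.random.default_rng(0)
Pi = np.array([[0.95, 0.05],        # quiescent: small switch probability
               [0.10, 0.90]])       # maneuver: tends to persist
F = np.array([[1.0, 1.0],           # position += velocity
              [0.0, 1.0]])
q = [0.01, 0.50]                    # process noise std per mode

x, m, track = np.array([0.0, 1.0]), 0, []
for t in range(50):
    m = rng.choice(2, p=Pi[m])                   # hidden mode switch
    x = F @ x + rng.normal(0.0, q[m], size=2)    # state under active mode
    track.append((t, m, x.copy()))
```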
ISBN: 0780348605 (print)
Recently, a criterion based on the Bayesian Ying Yang Learning Theory and System has been proposed by Xu [8, 9, 10] for selecting the number of clusters in cluster analysis and the number of Gaussians in a finite mixture model. In this paper we compare the performance of this criterion with other existing cluster number selection criteria such as AIC and CAIC.
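A sketch of such a comparison for the number of Gaussians, on synthetic data: sklearn's GaussianMixture supplies AIC and BIC, and CAIC = -2 log L + p (ln n + 1) is computed by hand using the parameter count for full covariance matrices.

```python
# Information criteria over candidate numbers of components k.
import numpy as np
from sklearn.mixture import GaussianMixture

def n_params(k, d):
    # mixing weights + means + full covariance matrices
    return (k - 1) + k * d + k * d * (d + 1) // 2

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 1.0, (200, 2)) for c in (0.0, 4.0, 8.0)])
n, d = X.shape
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    logL = gm.score(X) * n                       # total loglikelihood
    caic = -2 * logL + n_params(k, d) * (np.log(n) + 1)
    print(k, round(gm.aic(X), 1), round(gm.bic(X), 1), round(caic, 1))
```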
Count-data models are used to analyze the relationship between patents and research and development spending at the firm level, accounting for overdispersion using a finite mixed Poisson regression model with covariates in both the Poisson rates and the mixing probabilities. Maximum likelihood estimation using the EM and quasi-Newton algorithms is discussed. Monte Carlo studies suggest that (a) penalized likelihood criteria are a reliable basis for model selection and can be used to determine whether continuous or finite support for the mixing distribution is more appropriate and (b) when the mixing distribution is incorrectly specified, parameter estimates remain unbiased but have inflated variances.
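A stripped-down sketch of the estimation core: EM for a two-component Poisson mixture without covariates (the paper places covariates in both the rates and the mixing probabilities; this keeps only the mixture machinery, and all names are illustrative).

```python
# EM for a two-component Poisson mixture on counts y.
import numpy as np
from scipy.stats import poisson

def em_poisson_mix(y, lam=(1.0, 5.0), w=0.5, n_iter=200):
    y = np.asarray(y); lam = np.array(lam, float)
    for _ in range(n_iter):
        # E-step: posterior probability each count came from component 1
        p1 = w * poisson.pmf(y, lam[0])
        p2 = (1 - w) * poisson.pmf(y, lam[1])
        r = p1 / (p1 + p2)
        # M-step: responsibility-weighted means and mixing proportion
        lam = np.array([np.average(y, weights=r),
                        np.average(y, weights=1 - r)])
        w = r.mean()
    return lam, w
```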
ISBN: 0818685123 (print)
We consider the fitting of normal mixture models to multivariate data, using maximum likelihood via the EM algorithm. This approach requires the specification of an initial estimate of the vector of unknown parameters, or equivalently, of an initial classification of the data with respect to the components of the mixture model being fitted. We describe an algorithm called MIXFIT that automatically undertakes this fitting, including the specification of suitable initial values if not supplied by the user. The MIXFIT algorithm has several options, including the provision to carry out a resampling-based test for the number of components in the mixture model.
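The automatic-starting-values idea in sketch form (this is not the MIXFIT code): run EM from several starts and keep the highest-loglikelihood solution, here with sklearn's GaussianMixture assumed as the EM engine.

```python
# Multi-start EM for a normal mixture: one k-means start plus random
# starts; return the fit with the best total loglikelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

def best_fit(X, k, n_starts=10):
    best, best_ll = None, -np.inf
    for seed in range(n_starts):
        init = "kmeans" if seed == 0 else "random"
        gm = GaussianMixture(n_components=k, init_params=init,
                             random_state=seed).fit(X)
        ll = gm.score(X) * X.shape[0]            # total loglikelihood
        if ll > best_ll:
            best, best_ll = gm, ll
    return best, best_ll
```

sklearn's own `n_init` argument performs the same restarting internally; the explicit loop just makes the selection visible.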