A multidimensional Rasch-type item response model, the multidimensional random coefficients multinomial logit model, is presented as an extension to the Adams & Wilson (1996) random coefficients multinomial logit model. The model is developed in a form that permits generalization to the multidimensional case of a wide class of Rasch models, including the simple logistic model, Masters' partial credit model, Wilson's ordered partition model, and Fischer's linear logistic model. Moreover, the model includes several existing multidimensional models as special cases, including Whitely's multicomponent latent trait model, Andersen's multidimensional Rasch model for repeated testing, and Embretson's multidimensional Rasch model for learning and change. Marginal maximum likelihood estimators for the model are derived, and the estimation is examined in a simulation study. Implications and applications of the model are discussed and an example is given.
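As a hedged sketch of the general form involved (the notation is ours and may differ from the article's), the category response probabilities in a multidimensional random coefficients multinomial logit model can be written with a scoring vector and a design vector for each response category:

$$
\Pr(X_{ik}=1 \mid \boldsymbol{\theta}) \;=\;
\frac{\exp\!\big(\mathbf{b}_{ik}'\boldsymbol{\theta} + \mathbf{a}_{ik}'\boldsymbol{\xi}\big)}
     {\sum_{j=1}^{K_i}\exp\!\big(\mathbf{b}_{ij}'\boldsymbol{\theta} + \mathbf{a}_{ij}'\boldsymbol{\xi}\big)},
$$

where $\boldsymbol{\theta}$ is the vector of latent dimensions, $\boldsymbol{\xi}$ the item parameters, $\mathbf{b}_{ik}$ a scoring vector, and $\mathbf{a}_{ik}$ a design vector; the special cases listed above correspond to particular choices of the scoring and design matrices.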
The parameters of the life length distribution of a given component are to be estimated. The observations on which inference is to be based are field data that are incomplete in some fashion. Thus, for example, the reported life length may include a period of unknown duration during which the component is not in use; the life length distribution may be affected by an unobserved environmental factor; or the component may be part of a larger system in which failure mode analysis reveals only the module containing the failed component, not its identity. It is shown how the EM algorithm can be used to calculate the maximum likelihood estimates of the parameters of interest in these instances. The methodology is applied to some data on the life lengths of electronic components used in the telecommunications industry, yielding values that are similar to those obtained from complete observations on comparable components.
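As a minimal sketch of the kind of EM computation involved (a toy setting, not the paper's actual models or data), consider exponential lifetimes where some observations are right-censored; the E-step replaces each censored lifetime by its conditional expectation and the M-step re-estimates the rate:

```python
import numpy as np

def em_exponential(times, censored, n_iter=100):
    """EM estimate of the rate of an exponential lifetime distribution with
    right-censored observations (illustrative sketch only).

    times    : observed values (failure times or censoring times)
    censored : boolean array, True where the observation is censored
    """
    times = np.asarray(times, dtype=float)
    censored = np.asarray(censored, dtype=bool)
    rate = 1.0 / times.mean()                          # crude starting value
    for _ in range(n_iter):
        # E-step: for an exponential lifetime, E[T | T > c] = c + 1/rate
        expected = np.where(censored, times + 1.0 / rate, times)
        # M-step: MLE of the rate given the completed data
        rate = len(times) / expected.sum()
    return rate

# Example: two of four units were still working when observation stopped
print(em_exponential([2.0, 5.0, 1.5, 4.0], [False, True, False, True]))
```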
Sample survey designs in which each participant is administered a subset of the items contained in a complete survey instrument are becoming an increasingly popular method of reducing respondent burden (Mislevy, Beaton, Kaplan, & Sheehan, 1992; Raghunathan & Grizzle, 1995; Wacholder, Carroll, Pee, & Gail, 1994). Data from these survey designs can be analyzed using multiple imputation methodology, which generates several imputed values for the missing data and thus yields several complete data sets. These data sets are then analyzed using complete-data estimators and their standard errors (Rubin, 1987b). Generating the imputed data sets, however, can be very difficult. We describe improvements to the methods currently used to generate the imputed data sets for item response models summarizing educational data collected by the National Assessment of Educational Progress (NAEP), an ongoing collection of samples of 4th, 8th, and 12th grade students in the United States. The improved approximations produce small to moderate changes in commonly reported estimates, with the larger changes associated with an increasing amount of missing data. The improved approximations produce larger standard errors.
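The combining step referred to (Rubin, 1987b) is standard; here is a minimal sketch of Rubin's rules, using hypothetical point estimates and within-imputation variances from m completed data sets:

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Rubin's rules for combining m completed-data analyses
    (illustrative; the input numbers below are hypothetical)."""
    q = np.asarray(estimates, dtype=float)   # point estimate from each imputed data set
    u = np.asarray(variances, dtype=float)   # its complete-data variance
    m = len(q)
    q_bar = q.mean()                         # combined point estimate
    w = u.mean()                             # within-imputation variance
    b = q.var(ddof=1)                        # between-imputation variance
    t = w + (1.0 + 1.0 / m) * b              # total variance
    return q_bar, np.sqrt(t)                 # estimate and its standard error

print(rubin_combine([0.52, 0.48, 0.55, 0.50, 0.49],
                    [0.010, 0.011, 0.009, 0.010, 0.012]))
```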
We study a nonparametric deconvolution density estimation problem. The estimator is obtained by an EM algorithm for a smoothed maximum likelihood estimation problem, which has a unique continuous solution. We present an implementation of the procedure incorporating a data-driven discrepancy principle for selecting the smoothing parameter. Simulations illustrate the good properties of the resulting estimator when the unknown distribution is smooth and has regularly varying thin tails. Comparisons with a Fourier kernel deconvolution method are made for the case of normal noise. We show that under mild smoothness conditions, the estimator based on the data-driven smoothing parameter is strongly consistent.
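A hedged sketch of the general idea of an EM iteration with a smoothing step for a discretized deconvolution problem (a generic EMS-style update; the smoothing kernel, discretization, and stopping rule are our assumptions, not necessarily the authors' procedure or discrepancy principle):

```python
import numpy as np

def ems_deconvolve(obs_hist, kernel, n_iter=200, smooth=np.array([0.25, 0.5, 0.25])):
    """Generic EM-with-smoothing update for a discretized deconvolution problem:
    obs_hist is (approximately) kernel @ f, with f the unknown discretized density."""
    f = np.full(kernel.shape[1], 1.0 / kernel.shape[1])             # uniform start
    for _ in range(n_iter):
        fitted = kernel @ f                                          # current fit to the data
        f = f * (kernel.T @ (obs_hist / np.maximum(fitted, 1e-12)))  # EM (multiplicative) step
        f = np.convolve(f, smooth, mode="same")                      # S-step: local smoothing
        f = f / f.sum()                                              # renormalize
    return f

# toy example: the noise kernel is a simple local blur over 5 bins (hypothetical numbers)
K = np.array([[0.6, 0.2, 0.0, 0.0, 0.0],
              [0.4, 0.6, 0.2, 0.0, 0.0],
              [0.0, 0.2, 0.6, 0.2, 0.0],
              [0.0, 0.0, 0.2, 0.6, 0.4],
              [0.0, 0.0, 0.0, 0.2, 0.6]])
y = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
print(ems_deconvolve(y, K))
```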
Rasch's Poisson counts model is a latent trait model for the situation in which K tests are administered to N examinees and the test score is a count [e.g., the repeated occurrence of some event, such as the number of items completed or the number of items answered (in)correctly]. The Rasch Poisson counts model assumes that the test scores are Poisson distributed random variables. In the approach presented here, the Poisson parameter is assumed to be a product of a fixed test difficulty and a gamma-distributed random examinee latent trait parameter. From these assumptions, marginal maximum likelihood estimators can be derived for the test difficulties and the parameters of the prior gamma distribution. For the examinee parameters, there are a number of options. The model can be applied in a situation in which observations result from an incomplete design. When examinees are assigned to different subsets of tests using background information, this information must be taken into account when using marginal maximum likelihood estimation. If the focus is on test calibration and there is no interest in the characteristics of the latent traits in relation to the background information, conditional maximum likelihood methods may be preferred because they are easier to implement and remain justified for test parameter estimation with incomplete data.
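Under the assumptions described, and writing $\sigma_k$ for the test difficulty parameter and $\theta_n$ for the examinee parameter (the symbols are ours), the marginal likelihood that the MML estimators maximize takes the form

$$
x_{nk}\mid\theta_n \sim \mathrm{Poisson}(\sigma_k\theta_n),\qquad
\theta_n \sim \mathrm{Gamma}(\alpha,\beta),
$$
$$
L(\boldsymbol{\sigma},\alpha,\beta)=\prod_{n=1}^{N}\int_0^\infty
\Big[\prod_{k=1}^{K}\frac{(\sigma_k\theta)^{x_{nk}}\,e^{-\sigma_k\theta}}{x_{nk}!}\Big]
\frac{\beta^{\alpha}\theta^{\alpha-1}e^{-\beta\theta}}{\Gamma(\alpha)}\,d\theta,
$$

and because the gamma prior is conjugate to the Poisson, each integral has a closed (negative-binomial-type) form, which is what makes direct marginal maximum likelihood estimation tractable.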
Many longitudinal studies of children are concerned with the modeling of monotonic responses such as growth and sexual maturation. The National Heart, Lung, and Blood Institute Growth and Health Study (NGHS) is a longitudinal study designed to examine the effect of growth and maturation on the development of obesity and related cardiovascular risk factors among black and white adolescent girls. Sexual maturation is measured with an ordinal outcome and is known to be measured with sizable diagnostic error. Of interest is examining the effects of race and age on the sexual maturation process. Here we propose a class of models for analyzing repeated monotonic ordinal responses with diagnostic misclassification in which we separately model the underlying monotonic response and misclassification processes. We develop an EM algorithm for maximum likelihood estimation that incorporates covariates and randomly missing data. We use the method to analyze the NGHS sexual maturation data.
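A minimal sketch of the observed-data likelihood for one subject under a model of this general type (the function name and the tiny transition and misclassification matrices below are hypothetical, chosen only to show the sum over monotone true-state paths):

```python
from itertools import product
import numpy as np

def observed_likelihood(obs, init, trans, misclass):
    """Likelihood of one subject's observed ordinal sequence when the true
    stages are monotone nondecreasing but observed with classification error.
    obs      : observed stage at each visit (0-based codes)
    init     : P(true stage at first visit)
    trans    : trans[a, b] = P(next true stage = b | current true stage = a)
    misclass : misclass[a, b] = P(observed stage = b | true stage = a)
    """
    n_states = len(init)
    total = 0.0
    # enumerate all monotone nondecreasing true-state paths (fine for small problems)
    for path in product(range(n_states), repeat=len(obs)):
        if any(path[t + 1] < path[t] for t in range(len(obs) - 1)):
            continue
        p = init[path[0]] * misclass[path[0], obs[0]]
        for t in range(1, len(obs)):
            p *= trans[path[t - 1], path[t]] * misclass[path[t], obs[t]]
        total += p
    return total

# toy 3-stage example with hypothetical probabilities
init = np.array([0.7, 0.2, 0.1])
trans = np.array([[0.6, 0.3, 0.1], [0.0, 0.7, 0.3], [0.0, 0.0, 1.0]])
misclass = np.array([[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]])
print(observed_likelihood([0, 1, 1, 2], init, trans, misclass))
```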
This article presents a multivariate hazard model for survival data that are clustered at two hierarchical levels. The model provides corrected parameter estimates and standard errors, as well as estimates of the intragroup correlation at both levels. The model is estimated using the expectation-maximization (EM) algorithm. We apply the model to an analysis of the covariates of child survival using survey data from northeast Brazil collected via a hierarchically clustered sampling scheme. We find that family and community frailty effects are fairly small in magnitude but are of importance because they alter the results in a systematic pattern.
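The general structure of such a two-level frailty model can be sketched as follows (our notation; the article's exact parameterization may differ):

$$
\lambda_{ijk}(t) \;=\; \lambda_0(t)\,\exp(\mathbf{x}_{ijk}'\boldsymbol{\beta})\,u_j\,v_{jk},
$$

where $\lambda_0(t)$ is a baseline hazard, $\mathbf{x}_{ijk}$ are covariates for child $i$ in family $k$ of community $j$, and $u_j$ and $v_{jk}$ are community-level and family-level frailties whose variances determine the intragroup correlation at each level; an EM algorithm treats the unobserved frailties as the missing data.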
We present a hybrid algorithm for nonparametric maximum likelihood estimation from censored data when the log-likelihood is concave. The hybrid algorithm uses a composite algorithmic mapping combining the expectation-maximization (EM) algorithm and the (modified) iterative convex minorant (ICM) algorithm. Global convergence of the hybrid algorithm is proven; the iterates generated by the hybrid algorithm are shown to converge to the nonparametric maximum likelihood estimator (NPMLE) unambiguously. Numerical simulations demonstrate that the hybrid algorithm converges more rapidly than either the EM or the naive ICM algorithm for doubly censored data. The speed of the hybrid algorithm makes it possible to accompany the NPMLE with bootstrap confidence bands.
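As a hedged sketch of just the EM (self-consistency) half of such a composite mapping, for data reduced to intervals known to contain each event time (the ICM half and the specifics of doubly censored data are omitted, and the setup below is a toy assumption):

```python
import numpy as np

def self_consistency_step(p, membership):
    """One EM (self-consistency) update for probability masses p placed on
    candidate support points, given interval-censored data.
    membership[i, j] = 1 if support point j lies in observation i's interval.
    Illustrates only the EM component, not the full hybrid EM/ICM algorithm."""
    denom = membership @ p                       # P(T falls in interval_i) under current p
    weights = membership * p / denom[:, None]    # E-step: posterior mass of each point per obs
    return weights.mean(axis=0)                  # M-step: average posterior masses

# toy example: 3 observations, 4 candidate support points
membership = np.array([[1, 1, 0, 0],
                       [0, 1, 1, 0],
                       [0, 0, 1, 1]], dtype=float)
p = np.full(4, 0.25)
for _ in range(200):
    p = self_consistency_step(p, membership)
print(p)
```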
The use of random-effects models for the analysis of longitudinal data with missing responses has been discussed by several authors. This article extends the random-effects model for a single characteristic to the case of multiple characteristics, allowing for arbitrary patterns of observed data. Two different structures for the covariance matrix of measurement error are considered: uncorrelated errors between responses, and correlated error terms at the same measurement times. Parameters for this model are estimated via the EM algorithm. The set of equations for this estimation procedure is derived; these equations are appropriately modified to deal with missing data. The methodology is illustrated with an example from clinical trials.
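In generic notation (ours, not necessarily the article's), such a multiple-characteristic random-effects model can be written as

$$
\mathbf{y}_{ij} \;=\; X_{ij}\boldsymbol{\beta} + Z_{ij}\mathbf{b}_i + \boldsymbol{\varepsilon}_{ij},
\qquad \mathbf{b}_i \sim N(\mathbf{0}, D),\qquad \boldsymbol{\varepsilon}_{ij}\sim N(\mathbf{0},\Sigma),
$$

where $\mathbf{y}_{ij}$ stacks the several characteristics measured on subject $i$ at occasion $j$; the two error structures considered correspond to taking $\Sigma$ diagonal (uncorrelated errors across responses) or allowing nonzero off-diagonal elements for error terms recorded at the same measurement time.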
The autoregressive random variance (ARV) model proposed by Taylor (Financial returns modelled by the product of two stochastic processes, a study of daily sugar prices 1961-79. In Time Series Analysis: Theory and Practice 1 (ed. O. D. Anderson). Amsterdam: North-Holland, 1982, pp. 203-26) is useful in modelling stochastic changes in the variance structure of a time series. In this paper we focus on a general multivariate ARV model. A traditional EM algorithm is derived as the estimation method. The proposed EM approach is simple to program, computationally efficient and numerically well behaved. The asymptotic variance-covariance matrix can be easily computed as a by-product using a well-known asymptotic result for extremum estimators. A result that is of interest in itself is that the dimension of the augmented state space form used in computing the variance-covariance matrix can be shown to be greatly reduced, resulting in greater computational efficiency. The multivariate ARV model considered here is useful in studying the lead-lag (causality) relationship of the variance structure across different time series. As an example, the leading effect of Thailand on Malaysia in terms of variance changes in the stock indices is demonstrated.
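The univariate building block of an ARV model of this kind is usually written as (our notation, offered as a sketch):

$$
y_t = \varepsilon_t \exp(h_t/2), \qquad
h_t = \mu + \phi\,(h_{t-1}-\mu) + \eta_t,
$$

with $\varepsilon_t$ and $\eta_t$ independent noise sequences. Squaring and taking logs gives $\log y_t^2 = h_t + \log\varepsilon_t^2$, a linear state-space form in which the latent log-variances $h_t$ play the role of the missing data for the EM algorithm; the multivariate version couples the $h_t$ processes across series, which is what allows lead-lag relationships in the variance structure to be examined.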