We consider additive mixed models for longitudinal data with a nonlinear time trend. As the random-effects distribution, an approximate Dirichlet process mixture is proposed that is based on the truncated version of the stick-breaking representation of the Dirichlet process and provides a Gaussian mixture with a data-driven choice of the number of mixture components. The main advantage of this specification is its ability to identify clusters of subjects with a similar random-effects structure. For the estimation of the trend curve, the mixed-model representation of penalized splines is used. An expectation-maximization (EM) algorithm is given that solves the estimation problem and exhibits advantages over the Markov chain Monte Carlo approaches typically used when modeling with Dirichlet processes. The method is evaluated in a simulation study and applied to theophylline data and to body mass index profiles of children.
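The truncated stick-breaking construction underlying such an approximate Dirichlet process mixture can be sketched as follows; this is a generic illustration under our own assumptions, not the paper's estimation procedure, and the function name and settings are ours:

```python
import random

def truncated_stick_breaking(alpha, K, rng):
    """Weights of a truncation-level-K stick-breaking construction:
    v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j)."""
    weights, remaining = [], 1.0
    for _ in range(K - 1):
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    weights.append(remaining)  # the truncated tail absorbs the leftover mass
    return weights

rng = random.Random(1)
w = truncated_stick_breaking(alpha=2.0, K=10, rng=rng)
```

Smaller values of the concentration parameter alpha push most of the mass onto the first few components, which is what makes the effective number of mixture components data driven in practice.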
ISBN:
(Print) 9781509041039
To address the real-time performance and robustness problems of visual tracking, a visual tracking algorithm based on the fusion of multiple cues within a particle filter was proposed. First, the weighting formula was reformulated in the particle filter framework so that it integrates multiple cues at the cue level rather than at the feature-point level, making it amenable to parallel computing. Second, the EM algorithm was used for multi-cue integration without fixed per-cue weights, which improves the robustness of the tracker by adapting the appearance model to changes in the object. Experimental results show that the algorithm can track targets in complex environments and under pose changes and partial occlusion.
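As one way to picture cue-level fusion in a particle filter, the sketch below runs a minimal bootstrap filter on a 1-D state and weights each particle by the product of Gaussian likelihoods of two cues; this is a generic illustration under our own assumptions, not the paper's algorithm, and all names and parameters are hypothetical:

```python
import math
import random

def pf_update(particles, cues, motion_std, rng):
    """One bootstrap particle-filter step: propagate 1-D particles with a
    random-walk motion model, weight each particle by the product of
    Gaussian likelihoods of all cues (cue-level fusion), then resample."""
    moved = [p + rng.gauss(0.0, motion_std) for p in particles]
    weights = []
    for p in moved:
        # fuse cues by summing per-cue Gaussian log-likelihoods
        logw = sum(-(p - z) ** 2 / (2.0 * s ** 2) for z, s in cues)
        weights.append(math.exp(logw))
    return rng.choices(moved, weights=weights, k=len(moved))

rng = random.Random(7)
particles = [rng.uniform(-10.0, 10.0) for _ in range(500)]
# two cues observe the true position 3.0 with different noise levels
for _ in range(20):
    particles = pf_update(particles, cues=[(3.0, 1.0), (3.0, 2.0)], motion_std=0.5, rng=rng)
estimate = sum(particles) / len(particles)
```

Because the per-cue likelihoods enter only as independent factors, each cue's weight contribution can be computed in parallel before the fusion step.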
Author:
Grim, Jiri (Czech Acad Sci, Inst Informat Theory & Automat, POB 18, Pod Vodarenskou Vezi 4, CZ-18208 Prague 8, Czech Republic)
In the literature, references to EM estimation of product mixtures are not very frequent. The simplifying assumption of product components, e.g. diagonal covariance matrices in the case of Gaussian mixtures, is usually considered only as a compromise forced by computational constraints or a limited dataset. We have found that product mixtures are rarely used intentionally as a preferable approximating tool. Probably, most practitioners do not "trust" the product components because of their formal similarity to "naive Bayes models." Another reason could be an unrecognized numerical instability of the EM algorithm in multidimensional spaces. In this paper we recall that the product mixture model does not imply the assumption of independence of variables. It is not even restrictive if the number of components is large enough. In addition, product components increase the numerical stability of the standard EM algorithm, simplify the EM iterations, and have other important advantages. We discuss and explain the implementation details of the EM algorithm and summarize our experience in estimating product mixtures. Finally, we illustrate the wide applicability of product mixtures in pattern recognition and other fields.
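A minimal EM implementation for a product-component (diagonal-covariance) Gaussian mixture can be sketched as follows; this is our own illustration of the standard algorithm, not the author's implementation. Note how the E-step density factorizes over dimensions, which is exactly the simplification product components buy:

```python
import math
import random

def em_product_gmm(data, K, n_iter):
    """EM for a Gaussian mixture with product (diagonal-covariance) components.
    data: list of d-dimensional tuples; returns (mixing weights, means, variances)."""
    d = len(data[0])
    srt = sorted(data)
    # deterministic init: spread initial means across the sorted data
    means = [list(srt[(2 * k + 1) * len(data) // (2 * K)]) for k in range(K)]
    variances = [[1.0] * d for _ in range(K)]
    pis = [1.0 / K] * K
    for _ in range(n_iter):
        # E-step: responsibilities from the product of per-dimension densities
        resp = []
        for x in data:
            comp = []
            for k in range(K):
                logp = math.log(pis[k])
                for j in range(d):
                    logp += (-0.5 * math.log(2 * math.pi * variances[k][j])
                             - (x[j] - means[k][j]) ** 2 / (2 * variances[k][j]))
                comp.append(logp)
            m = max(comp)  # log-sum-exp stabilization
            w = [math.exp(c - m) for c in comp]
            s = sum(w)
            resp.append([wi / s for wi in w])
        # M-step: closed-form updates, dimension by dimension
        for k in range(K):
            nk = sum(r[k] for r in resp)
            pis[k] = nk / len(data)
            for j in range(d):
                means[k][j] = sum(r[k] * x[j] for r, x in zip(resp, data)) / nk
                variances[k][j] = max(
                    sum(r[k] * (x[j] - means[k][j]) ** 2 for r, x in zip(resp, data)) / nk,
                    1e-6)
    return pis, means, variances

rng = random.Random(0)
data = ([(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(150)]
        + [(rng.gauss(6, 1), rng.gauss(6, 1)) for _ in range(150)])
pis, means, variances = em_product_gmm(data, K=2, n_iter=40)
```

The per-dimension variance floor (1e-6 here) is one simple guard against the numerical instability in multidimensional spaces that the abstract mentions.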
In this article, a non-iterative sampling algorithm is developed to obtain approximately independent and identically distributed samples from the posterior distribution of the parameters in a Laplace linear regression model. By combining the inverse Bayes formulae, sampling/importance resampling, and the expectation-maximization algorithm, the algorithm eliminates the convergence diagnosis required by iterative Gibbs sampling, and the samples it generates can be used for inference immediately. Simulations are conducted to illustrate the robustness and effectiveness of the algorithm. Finally, real data are studied to show the usefulness of the proposed methodology.
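The sampling/importance resampling ingredient can be sketched generically as follows; the target and proposal here are illustrative stand-ins, not the Laplace regression posterior of the article:

```python
import math
import random

def sir(log_target, draw_proposal, log_proposal, n_draw, n_keep, rng):
    """Sampling/importance resampling: draw from the proposal, weight by
    target/proposal density ratios, then resample in proportion to the weights."""
    draws = [draw_proposal(rng) for _ in range(n_draw)]
    logw = [log_target(x) - log_proposal(x) for x in draws]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]  # stabilized importance weights
    return rng.choices(draws, weights=w, k=n_keep)

# illustrative: a N(3, 1) "posterior" approximated through a broad N(0, 3) proposal
rng = random.Random(42)
log_t = lambda x: -0.5 * (x - 3.0) ** 2
log_p = lambda x: -0.5 * (x / 3.0) ** 2
sample = sir(log_t, lambda r: r.gauss(0.0, 3.0), log_p,
             n_draw=20000, n_keep=2000, rng=rng)
mean = sum(sample) / len(sample)
```

Because every draw comes from a single batch of proposal samples, there is no Markov chain and hence no burn-in or convergence diagnosis, which is the practical point the abstract makes.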
Prediction on the basis of censored data plays an important role in many fields. This article develops non-Bayesian two-sample prediction based on a progressive Type-II right censoring scheme. We obtain the maximum likelihood (ML) prediction in a general form for lifetime models including the Weibull distribution. The Weibull distribution is considered to obtain the ML predictor (MLP), the ML prediction estimate (MLPE), the asymptotic ML prediction interval (AMLPI), and the asymptotic predictive ML intervals of the sth-order statistic in a future random sample (Y_s) drawn independently from the parent population, for an arbitrary progressive censoring scheme. To this end, we present three ML prediction methods, namely numerical solution, the EM algorithm, and approximate ML prediction. We compare the performance of these ML prediction methods under asymptotic normality and bootstrap methods by Monte Carlo simulation, with respect to the biases and mean square prediction errors (MSPEs) of the MLPs of Y_s as well as the coverage probabilities (CP) and average lengths (AL) of the AMLPIs. Finally, we give a numerical example and a real data sample to assess the computational comparison of these ML prediction methods.
Interval-censored data arise when a sequence of random examinations determines only that the failure time of interest occurs within an interval. In some medical studies, there exist long-term survivors who can be considered permanently cured. We consider a mixture model in which the uncured group follows a linear transformation model and the cured group follows a logistic regression model. For the inference of parameters, an EM algorithm is developed for a full likelihood approach. To investigate the finite-sample properties of the proposed method, simulation studies are conducted. The approach is applied to the National Aeronautics and Space Administration's hypobaric decompression sickness data.
Previously, a method was proposed for calculating a reconstructed coefficient of determination in the case of right-censored regression using the expectation-maximization (EM) algorithm. This measure is assessed via a simulation study for the purpose of evaluating its utility for judging model fit. Further, several reconstructed adjusted coefficients of determination are proposed and compared via simulation study for the purpose of model selection. The application of these proposed measures is illustrated on a real dataset.
Parameters of a finite mixture model are often estimated by the expectation-maximization (EM) algorithm, which maximizes the observed-data log-likelihood function. This paper proposes an alternative approach for fitting finite mixture models. Our method, called iterative Monte Carlo classification (IMCC), is also an iterative fitting procedure. Within each iteration, it first estimates the membership probabilities for each data point, namely the conditional probability that a data point belongs to a particular mixing component given its observed value; it then classifies each data point into a component distribution using the estimated conditional probabilities and the Monte Carlo method; finally, it updates the parameters of each component distribution based on the classified data. Simulation studies were conducted to compare IMCC with some other algorithms for fitting mixture normal and mixture t densities.
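The three steps of an IMCC-style iteration can be sketched for a two-component 1-D normal mixture as follows; this is our own minimal reading of the described procedure, not the authors' code, and all names and settings are ours:

```python
import math
import random

def imcc_1d(data, n_iter, rng):
    """Sketch of the iteration the abstract describes: (1) estimate membership
    probabilities, (2) classify each point by a Monte Carlo draw from them,
    (3) refit each component on its classified points."""
    srt = sorted(data)
    mu = [srt[len(srt) // 4], srt[3 * len(srt) // 4]]
    sd = [1.0, 1.0]
    pis = [0.5, 0.5]
    for _ in range(n_iter):
        groups = ([], [])
        for x in data:
            # unnormalized membership probabilities (constant factor cancels)
            dens = [pis[k] / sd[k] * math.exp(-(x - mu[k]) ** 2 / (2 * sd[k] ** 2))
                    for k in range(2)]
            k = rng.choices([0, 1], weights=dens)[0]  # Monte Carlo classification
            groups[k].append(x)
        for k in range(2):
            g = groups[k] or [mu[k]]  # guard against an emptied component
            pis[k] = len(g) / len(data)
            mu[k] = sum(g) / len(g)
            sd[k] = max(math.sqrt(sum((x - mu[k]) ** 2 for x in g) / len(g)), 1e-3)
    return pis, mu, sd

rng = random.Random(3)
data = ([rng.gauss(0.0, 1.0) for _ in range(200)]
        + [rng.gauss(8.0, 1.0) for _ in range(200)])
pis, mu, sd = imcc_1d(data, n_iter=30, rng=rng)
```

Unlike EM's soft responsibilities, the Monte Carlo classification step gives hard assignments, so each M-step reduces to ordinary per-component parameter fitting.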
We compare the commonly used two-step methods and joint likelihood method for joint models of longitudinal and survival data via extensive simulations. The longitudinal models include LME, GLMM, and NLME models, and the survival models include Cox models and AFT models. We find that the full likelihood method outperforms the two-step methods for various joint models, but it can be computationally challenging when the dimension of the random effects in the longitudinal model is not small. We thus propose an approximate joint likelihood method which is computationally efficient. We find that the proposed approximation method performs well in the joint model context, and it performs better for more "continuous" longitudinal data. Finally, a real AIDS data example shows that patients with higher initial viral load or lower initial CD4 are more likely to drop out earlier during an anti-HIV treatment.
We consider causal inference in randomized studies for survival data with a cure fraction and all-or-none treatment noncompliance. To describe the causal effects, we consider the complier average causal effect (CACE) and the complier effect on survival probability beyond time t (CESP), where CACE and CESP are defined as the differences in the cure rate and in non-cured subjects' survival probability between the treatment and control groups within the complier class. These estimands depend on the distributions of survival times in the treatment and control groups. Given covariates and latent compliance type, we model these distributions with a transformation promotion time cure model whose parameters are estimated by maximum likelihood. Both the infinite-dimensional parameter in the model and the mixture structure of the problem create computational difficulties, which are overcome by an expectation-maximization (EM) algorithm. We show that the estimators are consistent and asymptotically normal. Simulation studies are conducted to assess the finite-sample performance of the proposed approach. We also illustrate our method by analyzing real data from the Health Insurance Plan of Greater New York.