Multi-state models have been widely used to analyse longitudinal event history data obtained in medical and epidemiological studies. The tools and methods developed recently in this area require completely observed da...
In this paper, a zero-and-one-inflated Poisson (ZOIP) model is studied. The maximum likelihood estimation and the Bayesian estimation of the model parameters are obtained based on the data augmentation method. A simulation study based on the proposed sampling algorithm is conducted to assess the performance of the proposed estimation for various sample sizes. Finally, two real data sets are analysed to illustrate the practicability of the proposed method.
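A minimal sketch of ZOIP estimation via EM with data augmentation, assuming the common three-component parametrisation (structural zero, structural one, Poisson); the latent variable is the component indicator, and all parameter values below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

def simulate_zoip(n, p0, p1, lam):
    """ZOIP draws: with probability p0 emit a structural 0, with
    probability p1 a structural 1, otherwise a Poisson(lam) count."""
    u = rng.random(n)
    x = rng.poisson(lam, n)
    x[u < p0] = 0
    x[(u >= p0) & (u < p0 + p1)] = 1
    return x

def fit_zoip_em(x, iters=300):
    """EM via data augmentation: the E step computes posterior
    responsibilities of the three latent components, the M step has
    closed-form updates for the weights and the Poisson rate."""
    p0, p1, lam = 0.1, 0.1, max(x.mean(), 0.5)
    for _ in range(iters):
        pois = (1.0 - p0 - p1) * poisson.pmf(x, lam)
        z0 = np.where(x == 0, p0, 0.0)
        z1 = np.where(x == 1, p1, 0.0)
        tot = z0 + z1 + pois
        z0, z1, zp = z0 / tot, z1 / tot, pois / tot
        p0, p1 = z0.mean(), z1.mean()
        lam = (zp * x).sum() / zp.sum()
    return p0, p1, lam

x = simulate_zoip(20_000, p0=0.3, p1=0.2, lam=4.0)
p0_hat, p1_hat, lam_hat = fit_zoip_em(x)
```

With a large sample the three components are well identified because the Poisson part contributes comparatively few zeros and ones when its rate is moderate.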
System identification is a data-driven input-output modeling approach increasingly used in biology and biomedicine. In this application context, several assays are repeated to estimate the response variability and reproducibility. Extending the modeling conclusions to the whole population requires accounting for the inter-individual variability within the modeling procedure. One solution consists of using mixed-effects models, but up to now no similar approach exists in the system identification literature. In this article, we propose a first solution based on an ARX (AutoRegressive model with eXternal inputs) structure, using the EM (Expectation-Maximisation) algorithm for the estimation of the model parameters. Using the Fisher information matrix, the parameter standard errors are estimated; this allows for group comparison tests. Simulations show the relevance of this solution compared with a classical procedure of system identification repeated on each subject. Taking into account all the information available in the population allows the parameters to be pooled across individuals. (C) 2017, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
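The paper's mixed-effects EM estimator is not reproduced here; the sketch below shows the classical baseline it is compared against — ARX least squares repeated on each subject — followed by a naive two-stage population summary. Model orders, coefficient distributions, and sample sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_arx(n, a, b, sd=0.05):
    """One subject's ARX(1,1) record: y[t] = a*y[t-1] + b*u[t-1] + e[t]."""
    u = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = a * y[t - 1] + b * u[t - 1] + sd * rng.standard_normal()
    return u, y

def fit_arx(u, y):
    """Per-subject least squares on the regressor matrix [y[t-1], u[t-1]]."""
    X = np.column_stack([y[:-1], u[:-1]])
    theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return theta

# population with random-effects coefficients, one record per subject
a_i = rng.normal(0.7, 0.05, 20)
b_i = rng.normal(1.0, 0.10, 20)
est = np.array([fit_arx(*simulate_arx(500, a, b)) for a, b in zip(a_i, b_i)])
a_pop, b_pop = est.mean(axis=0)   # two-stage population estimate
```

A mixed-effects EM would instead maximise the marginal likelihood over the random-effects distribution, which is more efficient when individual records are short or noisy.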
ISBN (Print): 9781538633229
Product reliability is steadily increasing. Because a product's performance degrades as it is used, degradation analysis has become an important method for evaluating product reliability. In this paper, a stochastic process is used to describe the degradation process of the product. To capture unit-to-unit differences between products, a random variable is introduced into the inverse Gaussian model. Because this model contains a latent variable, the EM algorithm is used to estimate its parameters. The correctness of the method is verified by simulation. Finally, the method is applied to degradation data from a heavy machine tool.
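The paper's random-effects model needs a latent-variable (EM-type) fit; the base inverse-Gaussian increment model alone, without the random effect, has closed-form maximum-likelihood estimates, sketched here with illustrative values (NumPy's "wald" sampler is the inverse Gaussian parameterised by mean and shape):

```python
import numpy as np

rng = np.random.default_rng(2)

# i.i.d. inverse-Gaussian degradation increments IG(mu, lam)
mu, lam = 2.0, 8.0
x = rng.wald(mu, lam, size=50_000)

# closed-form MLEs: mu_hat is the sample mean, and
# lam_hat = n / sum(1/x_i - 1/mu_hat)
mu_hat = x.mean()
lam_hat = len(x) / np.sum(1.0 / x - 1.0 / mu_hat)
```

Introducing a random effect (e.g. a unit-specific rate) breaks this closed form, which is what motivates the iterative latent-variable estimation in the paper.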
ISBN (Print): 9781509041176
An important challenge in speech processing involves extracting non-linguistic information from a fundamental frequency (F0) contour of speech. We propose a fast algorithm for estimating the model parameters of the Fujisaki model, namely, the timings and magnitudes of the phrase and accent commands. Although a powerful parameter estimation framework based on a stochastic counterpart of the Fujisaki model has recently been proposed, it still had room for improvement in terms of both computational efficiency and parameter estimation accuracy. This paper describes our two contributions. First, we propose a hard expectation-maximization (EM) algorithm for parameter inference in which the E step of the conventional EM algorithm is replaced with a point estimation procedure to accelerate the estimation process. Second, to improve the parameter estimation accuracy, we add a generative process of a spectral feature sequence to the generative model. This makes it possible to use linguistic or phonological information as an additional clue for estimating the timings of the accent commands. The experiments confirmed that the present algorithm was approximately 16 times faster and estimated parameters about 3% more accurately than the conventional algorithm.
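The Fujisaki model itself is beyond a short sketch, but the hard-EM idea the abstract describes — replacing the E step's full posterior with a point estimate (an argmax assignment) — can be illustrated on a toy two-component mixture; all data and initial values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])

def hard_em(x, mu, iters=50):
    """Hard EM for two component means: the E step is a point estimate
    (nearest-mean assignment) instead of posterior responsibilities,
    which makes each iteration cheaper, k-means style."""
    for _ in range(iters):
        z = np.abs(x[:, None] - mu[None, :]).argmin(axis=1)        # hard E step
        mu = np.array([x[z == k].mean() for k in range(len(mu))])  # M step
    return mu

mu_hat = np.sort(hard_em(x, np.array([0.0, 1.0])))
```

The trade-off is the usual one: hard assignments converge faster per iteration but can be slightly biased near component boundaries compared with soft EM.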
ISBN (Print): 9781538618295
With the rapid growth of crowdsourcing systems, class labels for supervised learning can be easily obtained from crowdsourcing platforms. To deal with the problem that labels obtained from crowds are usually noisy due to the imperfect reliability of non-expert workers, we let multiple workers provide labels for the same object. Then, the true labels of the labeled objects are estimated through ground truth inference algorithms. The inferred integrated labels are expected to be of high quality. In this paper, we propose a novel ground truth inference algorithm based on the EM algorithm, which not only infers the true labels of the instances but also simultaneously estimates the reliability of each worker and the difficulty of each instance. Experimental results on seven real-world crowdsourcing datasets show that our proposed algorithm outperforms eight state-of-the-art algorithms.
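The paper's algorithm additionally models instance difficulty; the sketch below is the simpler classical "one-coin" EM for ground truth inference (one accuracy parameter per worker, majority-vote initialisation), with synthetic workers and items as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic crowd: hidden binary truth, workers with hidden accuracies
n_items, n_workers = 300, 7
truth = rng.integers(0, 2, n_items)
acc_true = rng.uniform(0.55, 0.95, n_workers)
labels = np.where(rng.random((n_items, n_workers)) < acc_true,
                  truth[:, None], 1 - truth[:, None])

def em_infer(labels, iters=30):
    """One-coin EM: alternate between item posteriors (E step) and
    worker-accuracy estimates (M step), starting from majority vote."""
    p = labels.mean(axis=1)                                   # P(label_i = 1)
    for _ in range(iters):
        acc = (p[:, None] * labels
               + (1 - p[:, None]) * (1 - labels)).mean(axis=0)   # M step
        l1 = np.where(labels == 1, acc, 1 - acc).prod(axis=1)    # E step
        l0 = np.where(labels == 0, acc, 1 - acc).prod(axis=1)
        p = l1 / (l1 + l0)
    return (p > 0.5).astype(int), acc

inferred, acc_hat = em_infer(labels)
```

Because reliable workers get up-weighted in the E step, the integrated labels are typically more accurate than plain majority voting.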
ISBN (Digital): 9781510607101
ISBN (Print): 9781510607095; 9781510607101
Photon attenuation during SPECT imaging significantly degrades the diagnostic outcome and the quantitative accuracy of the final reconstructed images. It is well known that attenuation correction can be performed with iterative reconstruction methods if we have access to an attenuation map. Two methods have been used to calculate the attenuation map: transmission-based and transmissionless techniques. In this phantom study, we evaluated the importance of attenuation correction by quantitative evaluation of the errors associated with each method. For the transmissionless approach, the attenuation map was estimated from the emission data only. An EM algorithm with an attenuation model was developed and used for attenuation correction during image reconstruction. Finally, a comparison was made between images reconstructed using our OSEM code and the analytical FBP method, before and after attenuation correction. The measurements showed that our programs are capable of reconstructing SPECT images and correcting attenuation effects. Moreover, to evaluate reconstructed image quality before and after attenuation correction, we applied a novel approach using the Image Quality Index. Attenuation correction increases the quality and quantitative accuracy of both methods. This increase is independent of activity in the quantity factor and decreases with activity in the quality factor. In the EM algorithm, it is necessary to use regularization to obtain the true distribution of attenuation coefficients.
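A toy sketch of the MLEM update underlying this kind of emission reconstruction, with a random nonnegative matrix standing in for the (attenuation-aware) system matrix; dimensions, counts, and iteration budget are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(5)

# toy tomography stand-in: y = Poisson(A @ x), with A playing the role
# of the projection/system matrix (which would encode attenuation)
A = rng.random((200, 20))
x_true = rng.uniform(5.0, 10.0, 20)
y = rng.poisson(A @ x_true)

def mlem(A, y, iters=1000):
    """Classic MLEM multiplicative update:
    x <- x * [A^T (y / A x)] / (A^T 1); preserves nonnegativity."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                       # sensitivity term A^T 1
    for _ in range(iters):
        x *= A.T @ (y / np.maximum(A @ x, 1e-12)) / sens
    return x

x_hat = mlem(A, y)
```

The multiplicative form guarantees nonnegative activity estimates at every iteration, which is one reason EM-family updates dominate emission tomography.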
Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates depend on the initial starting values of the EM algorithm. Initial values have been shown to significantly affect the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performance is assessed in terms of four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
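A minimal illustration of the random-starts strategy this literature evaluates: run EM from many initialisations and keep the solution with the highest observed log-likelihood. The mixture is deliberately simplified (1-D, unit variances, equal weights held fixed); all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1, 400),
                    rng.normal(5, 1, 400),
                    rng.normal(10, 1, 400)])

def em_gmm_means(x, mu, iters=100):
    """Soft EM for the component means of a 1-D Gaussian mixture with
    unit variances and equal weights; returns (means, log-likelihood)."""
    for _ in range(iters):
        dens = np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2)
        r = dens / dens.sum(axis=1, keepdims=True)          # E step
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)   # M step
    dens = np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2)
    ll = np.log(dens.sum(axis=1) / (len(mu) * np.sqrt(2 * np.pi))).sum()
    return mu, ll

# random-starts strategy: initialise the means at randomly chosen data
# points, run EM from each start, keep the best log-likelihood
runs = [em_gmm_means(x, rng.choice(x, 3, replace=False)) for _ in range(25)]
best_mu, best_ll = max(runs, key=lambda t: t[1])
```

Single-start EM on this data frequently lands in a local optimum where two means share one cluster; ranking the 25 runs by log-likelihood reliably recovers all three components.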
ISBN (Print): 9781509041176
Many signal and image processing applications, including SAR polarimetry and texture analysis, require the classification of complex covariance matrices. The present paper introduces a geometric learning approach on the space of complex covariance matrices based on a new distribution called the Riemannian Gaussian distribution. The proposed distribution has two parameters: the centre of mass Ȳ and the dispersion parameter σ. After deriving its maximum likelihood estimator and its extension to mixture models, we propose an application to texture recognition on the VisTex database.
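The Riemannian Gaussian density has the form p(Y | Ȳ, σ) ∝ exp(−d²(Y, Ȳ)/(2σ²)), where d is the affine-invariant Riemannian distance on covariance matrices. A sketch of that distance for real SPD matrices (the complex Hermitian case is analogous), computed from the generalized eigenvalues of the pencil (B, A):

```python
import numpy as np
from scipy.linalg import eigvalsh

def riemannian_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F, obtained from the
    generalized eigenvalues of B v = w A v (all w > 0 for SPD pairs)."""
    w = eigvalsh(B, A)
    return np.sqrt(np.sum(np.log(w) ** 2))

A = np.eye(2)
B = np.diag([np.e ** 2, np.e ** 2])
d = riemannian_distance(A, B)   # log-eigenvalues are (2, 2), so d = sqrt(8)
```

The key property is congruence invariance: d(MAMᵀ, MBMᵀ) = d(A, B) for any invertible M, which is what makes the distance natural for covariance data.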
ISBN (Print): 9789897582226
Automatic trajectory classification has countless applications, ranging from the natural sciences, such as zoology and meteorology, to urban planning, sports analysis, and surveillance, and has generated great research interest. This paper proposes and evaluates three new methods for trajectory clustering, strictly based on the trajectory shapes, thus invariant under changes in spatial position and scale (and, optionally, orientation). To extract shape information, the trajectories are first uniformly resampled using splines and then described by the sequence of tangent angles at the resampled points. Dealing with angular data is challenging, namely due to its periodic nature, which needs to be taken into account when designing any clustering technique. In this context, we propose three methods: a variant of the k-means algorithm, based on a dissimilarity measure that is adequate for angular data; a finite mixture of multivariate von Mises distributions, which is fitted using an EM algorithm; and sparse nonnegative matrix factorization, using a complex representation of the angular data. Methods for the automatic selection of the number of clusters are also introduced. Finally, these techniques are tested and compared on both real and synthetic data, demonstrating their viability.
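A sketch in the spirit of the first of the three methods: a k-means variant on tangent-angle sequences using the periodicity-aware dissimilarity 1 − cos(Δθ) and per-coordinate circular-mean centroids. The dissimilarity choice, synthetic data, and restart scheme are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(7)

def angular_kmeans(theta, k, iters=50):
    """k-means variant for angle sequences: dissimilarity 1 - cos(dtheta)
    respects the 2*pi wrap-around, and each centroid coordinate is
    updated with the circular mean atan2(mean sin, mean cos)."""
    cent = theta[rng.choice(len(theta), k, replace=False)]
    for _ in range(iters):
        d = (1.0 - np.cos(theta[:, None, :] - cent[None, :, :])).sum(axis=2)
        z = d.argmin(axis=1)                                  # assignment
        cent = np.array([np.arctan2(np.sin(theta[z == j]).mean(axis=0),
                                    np.cos(theta[z == j]).mean(axis=0))
                         if np.any(z == j) else cent[j]       # keep empty clusters put
                         for j in range(k)])
    obj = (1.0 - np.cos(theta - cent[z])).sum()
    return z, cent, obj

# two synthetic clusters of 20-point tangent-angle sequences; restarts
# guard against unlucky initialisations, keeping the lowest objective
theta = np.vstack([rng.normal(0.0, 0.2, (50, 20)),
                   rng.normal(np.pi, 0.2, (50, 20))])
z, cent, _ = min((angular_kmeans(theta, 2) for _ in range(10)),
                 key=lambda t: t[2])
```

Using 1 − cos(Δθ) rather than raw angle differences means that angles near −π and π are correctly treated as close, which plain Euclidean k-means gets wrong.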