ISBN (Print): 9781538633229
Product reliability is becoming higher and higher. Because a product's performance degrades as it is used, degradation analysis has become an important method for evaluating product reliability. In this paper, a stochastic process is used to describe the degradation process of the product. To describe unit-to-unit differences between products, a random variable is introduced into the inverse Gaussian model. Because this model contains a latent variable, the EM algorithm is used to estimate its parameters. The correctness of the method is verified by simulation. Finally, the method is applied to degradation data from a heavy-duty machine tool.
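The following is a minimal sketch of the basic building block in this setting: simulating inverse Gaussian degradation increments and recovering the mean and shape parameters via their closed-form maximum likelihood estimates. The parameter values and sample size are hypothetical, and the paper's random-effects extension and EM step are not reproduced here.

```python
import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(0)

# Hypothetical true parameters of the IG(m, lam) increment distribution.
m_true, lam_true = 2.0, 5.0

# scipy's invgauss(mu, scale) with mu = m/lam and scale = lam has mean m.
x = invgauss.rvs(mu=m_true / lam_true, scale=lam_true, size=500,
                 random_state=rng)

# Closed-form maximum likelihood estimates for the inverse Gaussian law.
m_hat = x.mean()
lam_hat = 1.0 / np.mean(1.0 / x - 1.0 / m_hat)
print(f"m_hat = {m_hat:.3f}, lam_hat = {lam_hat:.3f}")
```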
ISBN (Print): 9781538618295
With the rapid growth of crowdsourcing systems, class labels for supervised learning can easily be obtained from crowdsourcing platforms. To deal with the problem that labels obtained from crowds are usually noisy due to the imperfect reliability of non-expert workers, we let multiple workers provide labels for the same object. Then, the true labels of the labeled objects are estimated through ground truth inference algorithms. The inferred integrated labels are expected to be of high quality. In this paper, we propose a novel ground truth inference algorithm based on the EM algorithm, which not only infers the true labels of the instances but also simultaneously estimates the reliability of each worker and the difficulty of each instance. Experimental results on seven real-world crowdsourcing datasets show that our proposed algorithm outperforms eight state-of-the-art algorithms.
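As an illustration of ground truth inference by EM, here is a minimal one-coin sketch (each worker is modeled by a single accuracy parameter; the paper's per-instance difficulty term is omitted). The function name and the missing-label convention are assumptions.

```python
import numpy as np

def em_truth_inference(L, n_iter=50):
    """One-coin EM for binary crowdsourced labels.
    L: (n_items, n_workers) array with labels in {0, 1}, -1 for missing.
    Returns P(true label = 1) per item and each worker's accuracy."""
    n, m = L.shape
    acc = np.full(m, 0.7)                      # initial reliabilities
    p1 = np.full(n, 0.5)
    for _ in range(n_iter):
        # E step: posterior of each item's true label (uniform prior);
        # missing labels (-1) contribute nothing to the logit.
        logit = np.zeros(n)
        for j in range(m):
            w = np.log(acc[j] / (1 - acc[j]))
            logit += w * ((L[:, j] == 1).astype(float)
                          - (L[:, j] == 0).astype(float))
        p1 = 1.0 / (1.0 + np.exp(-logit))
        # M step: accuracy = expected rate of agreement with the truth.
        for j in range(m):
            obs = L[:, j] >= 0
            agree = np.where(L[obs, j] == 1, p1[obs], 1 - p1[obs])
            acc[j] = np.clip(agree.mean(), 1e-3, 1 - 1e-3)
    return p1, acc
```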
ISBN (Print): 9781509041176
Many signal and image processing applications, including SAR polarimetry and texture analysis, require the classification of complex covariance matrices. The present paper introduces a geometric learning approach on the space of complex covariance matrices based on a new distribution called the Riemannian Gaussian distribution. The proposed distribution has two parameters: the centre of mass Ȳ and the dispersion parameter σ. After deriving its maximum likelihood estimator and its extension to mixture models, we propose an application to texture recognition on the VisTex database.
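A hedged sketch of the geometry involved: the Riemannian Gaussian density takes the form p(Y | Ȳ, σ) ∝ exp(−d(Y, Ȳ)²/2σ²), where d is the affine-invariant Riemannian distance on covariance matrices. The snippet below computes that distance; the function name and test matrices are illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def riemannian_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    d(A, B) = ||log(A^(-1/2) B A^(-1/2))||_F."""
    s = np.linalg.inv(sqrtm(A))
    return np.linalg.norm(logm(s @ B @ s).real, "fro")

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.eye(2)
print(riemannian_distance(A, B))
```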
ISBN (Print): 9781509041176
An important challenge in speech processing involves extracting non-linguistic information from a fundamental frequency (F0) contour of speech. We propose a fast algorithm for estimating the model parameters of the Fujisaki model, namely, the timings and magnitudes of the phrase and accent commands. Although a powerful parameter estimation framework based on a stochastic counterpart of the Fujisaki model has recently been proposed, it still had room for improvement in terms of both computational efficiency and parameter estimation accuracy. This paper describes our two contributions. First, we propose a hard expectation-maximization (EM) algorithm for parameter inference in which the E step of the conventional EM algorithm is replaced with a point estimation procedure to accelerate the estimation process. Second, to improve the parameter estimation accuracy, we add a generative process for a spectral feature sequence to the generative model. This makes it possible to use linguistic or phonological information as an additional clue for estimating the timings of the accent commands. Experiments confirmed that the present algorithm was approximately 16 times faster and estimated parameters about 3% more accurately than the conventional algorithm.
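For readers unfamiliar with the Fujisaki model being fitted, the sketch below synthesizes an F0 contour from phrase (impulse) and accent (step) commands; the command lists and the conventional time constants α and β are illustrative values, and the estimation algorithm itself is not shown.

```python
import numpy as np

def fujisaki_f0(t, fb, phrases, accents, alpha=3.0, beta=20.0):
    """Synthesize an F0 contour from Fujisaki commands.
    phrases: list of (onset, magnitude) impulse commands.
    accents: list of (onset, offset, amplitude) step commands."""
    def gp(s):                                   # phrase control mechanism
        s = np.clip(s, 0.0, None)
        return alpha ** 2 * s * np.exp(-alpha * s)
    def ga(s):                                   # accent control mechanism
        s = np.clip(s, 0.0, None)
        return np.minimum(1 - (1 + beta * s) * np.exp(-beta * s), 0.9)
    log_f0 = np.full(t.shape, np.log(fb))
    for t0, ap in phrases:
        log_f0 += ap * gp(t - t0)
    for t1, t2, aa in accents:
        log_f0 += aa * (ga(t - t1) - ga(t - t2))
    return np.exp(log_f0)

t = np.linspace(0.0, 3.0, 300)
f0 = fujisaki_f0(t, fb=120.0, phrases=[(0.1, 0.5)],
                 accents=[(0.4, 0.9, 0.4), (1.5, 2.1, 0.3)])
```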
ISBN (Print): 9783319687650; 9783319687643
Crowdsourcing has emerged as a cheap and fast form of distributed labor network. Since workers have various levels of expertise, several approaches to measuring annotator reliability have been proposed. There are settings in which annotators who answer at random are abundant and few experts are available. We therefore propose an iterative algorithm for crowdsourcing problems in which expert annotators are hard to find: expert annotators are selected using an EM-Bayesian algorithm, an entropy measure, and Condorcet's jury theorem. Experimental results on eight datasets show that our proposed algorithm outperforms previous approaches.
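A rough sketch of the selection idea under stated assumptions: starting from a weighted vote, each annotator is scored by agreement with the consensus, annotators at or below chance are dropped (the Condorcet condition), and the remainder are weighted by how far their agreement entropy is from that of a random guesser. This is a simplified stand-in, not the paper's exact EM-Bayesian procedure.

```python
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def iterative_expert_vote(L, n_iter=10):
    """L: (n_items, n_workers) binary labels. Iteratively estimate the
    consensus, score workers against it, and down-weight random ones."""
    n, m = L.shape
    w = np.ones(m)                               # worker weights
    for _ in range(n_iter):
        consensus = (L @ w) / w.sum() > 0.5      # weighted majority vote
        acc = (L == consensus[:, None]).mean(axis=0)
        # Condorcet's jury condition: only above-chance workers help;
        # near-random workers (entropy close to 1) get weight near 0.
        w = np.where(acc > 0.5, 1 - binary_entropy(acc), 0.0)
        if w.sum() == 0:                         # degenerate fallback
            w = np.ones(m)
    return consensus.astype(int), acc
```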
ISBN (Print): 9781538610091
To address the problem that moving target tracking performs poorly when the statistical characteristics of the process noise are unknown and the observation noise is non-Gaussian, a method combining the EM algorithm and particle filtering is proposed. The method applies the EM algorithm to estimate accurate process noise parameters, and a particle filter is then used to obtain a high-precision estimate of the target's motion state. Simulation results demonstrate that our method can effectively suppress filter divergence and significantly improve tracking accuracy.
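The filtering stage can be sketched as a bootstrap particle filter. Here the process noise scale q stands in for the quantity that the EM stage would supply, the observation noise is taken to be heavy-tailed Student-t as one possible "non-Gaussian" choice, and the scalar random-walk state is a deliberate simplification of a target motion model.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(y, q, n_p=1000):
    """Bootstrap particle filter for a scalar random-walk state with
    Student-t(3) observation noise. q is the process noise std that
    the EM stage would estimate from data."""
    x = rng.normal(0.0, 1.0, n_p)                # initial particle cloud
    est = []
    for yt in y:
        x = x + rng.normal(0.0, q, n_p)          # propagate particles
        w = (1 + (yt - x) ** 2 / 3) ** (-2.0)    # unnormalized t(3) likelihood
        w /= w.sum()
        est.append(w @ x)                        # posterior mean estimate
        x = rng.choice(x, size=n_p, p=w)         # multinomial resampling
    return np.array(est)

y = np.cumsum(rng.normal(0, 0.3, 100)) + rng.standard_t(3, 100)
x_hat = particle_filter(y, q=0.3)
```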
ISBN (Print): 9781509041176
Crowdsourcing approaches rely on a collection of individuals to solve problems that require the analysis of large data sets in a timely and accurate manner. The inexperience of participants, or annotators, motivates the use of robust techniques. Focusing on clustering setups, the data provided by all annotators are modeled here as a mixture of Gaussian components plus a uniformly distributed random variable that captures outliers. The proposed algorithm is based on the expectation-maximization algorithm and jointly allows for soft assignments of data to clusters, rates annotators according to their performance, and estimates the number of Gaussian components in the non-Gaussian/Gaussian mixture model.
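A minimal EM sketch for the outlier-robust mixture described above, in one dimension and with the number of Gaussian components fixed (the paper additionally estimates it): each point gets responsibilities over k Gaussians plus a uniform outlier component on the data range. Initial values and iteration counts are arbitrary choices.

```python
import numpy as np
from scipy.stats import norm

def em_gauss_uniform(x, k=2, n_iter=100):
    """EM for a k-component 1-D Gaussian mixture plus a uniform
    outlier component supported on the observed data range."""
    lo, hi = x.min(), x.max()
    mu = np.quantile(x, np.linspace(0.2, 0.8, k))
    sd = np.full(k, x.std())
    pi = np.full(k + 1, 1.0 / (k + 1))           # last slot = outliers
    for _ in range(n_iter):
        # E step: responsibilities over the k Gaussians and the uniform.
        dens = np.column_stack(
            [norm.pdf(x, mu[j], sd[j]) for j in range(k)]
            + [np.full_like(x, 1.0 / (hi - lo))])
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M step: mixing weights, means, and standard deviations.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        for j in range(k):
            mu[j] = (r[:, j] @ x) / nk[j]
            sd[j] = np.sqrt((r[:, j] @ (x - mu[j]) ** 2) / nk[j]) + 1e-6
    return mu, sd, pi
```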
Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates depend on the initial starting values of the EM algorithm. Initial values have been shown to significantly affect the quality of the solution, and researchers have proposed several approaches for selecting starting values. Here, five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performance is assessed on four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
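As a small illustration of how much initialization matters in practice, the sketch below fits the same simulated two-component data with two commonly implemented strategies (k-means-based and random starts) over several restarts and compares the attained log-likelihood bounds; the data and settings are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 300),
                    rng.normal(3.0, 0.5, 200)]).reshape(-1, 1)

# A higher final lower_bound_ means EM settled in a better local optimum.
for init in ("kmeans", "random"):
    gm = GaussianMixture(n_components=2, init_params=init,
                         n_init=10, random_state=0).fit(x)
    print(init, gm.lower_bound_)
```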
Proximal distance algorithms combine the classical penalty method of constrained minimization with distance majorization. If f(x) is the loss function and C is the constraint set in a constrained minimization problem, then the proximal distance principle mandates minimizing the penalized loss f(x) + (ρ/2) dist(x, C)² and following the solution x_ρ to its limit as ρ tends to ∞. At each iteration the squared Euclidean distance dist(x, C)² is majorized by the spherical quadratic ‖x − P_C(x_k)‖², where P_C(x_k) denotes the projection of the current iterate x_k onto C. The minimum of the surrogate function f(x) + (ρ/2)‖x − P_C(x_k)‖² is given by the proximal map prox_{ρ⁻¹f}[P_C(x_k)]. The next iterate x_{k+1} automatically decreases the original penalized loss for fixed ρ. Since many explicit projections and proximal maps are known, it is straightforward to derive and implement novel optimization algorithms in this setting. These algorithms can take hundreds if not thousands of iterations to converge, but the simple nature of each iteration makes proximal distance algorithms competitive with traditional algorithms. For convex problems, proximal distance algorithms reduce to proximal gradient algorithms and therefore enjoy well-understood convergence properties. For nonconvex problems, one can attack convergence by invoking Zangwill's theorem. Our numerical examples demonstrate the utility of proximal distance algorithms in various high-dimensional settings, including a) linear programming, b) constrained least squares, c) projection to the closest kinship matrix, d) projection onto a second-order cone constraint, e) calculation of Horn's copositive matrix index, f) linear complementarity programming, and g) sparse principal components analysis. The proximal distance algorithm in each case is competitive or superior in speed to traditional methods such as the interior point method and the alternating direction method of multipliers (ADMM). Source code for the numerical examples can be found
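A minimal sketch of the scheme on example (b), constrained least squares with a nonnegativity constraint, where the projection P_C is coordinatewise clipping and the proximal map has a closed form via a linear solve; the ρ schedule and problem sizes are arbitrary.

```python
import numpy as np

def proximal_distance_nnls(A, b, rho_schedule, n_iter=200):
    """Proximal distance iteration for min (1/2)||Ax - b||^2 s.t. x >= 0:
    x_{k+1} = prox_{f/rho}(P_C(x_k)), with P_C(x) = clip(x, 0, inf)."""
    n = A.shape[1]
    x = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for rho in rho_schedule:                     # ramp rho toward infinity
        for _ in range(n_iter):
            z = np.clip(x, 0.0, None)            # projection onto C
            # prox step: argmin (1/2)||Ax-b||^2 + (rho/2)||x - z||^2
            x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * z)
    return np.clip(x, 0.0, None)

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 10))
b = A @ np.abs(rng.normal(size=10))
x = proximal_distance_nnls(A, b, rho_schedule=[1.0, 10.0, 100.0, 1000.0])
```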
Operational Modal Analysis consists of estimating the modal parameters of civil/structural systems using only the response of the systems, while the unknown inputs are treated as realizations of white noise processes. Sometimes this last hypothesis is not correct; that is, the unknown input does not have a flat spectrum. The consequence is that the peaks of the input spectrum appear in the system response, and, since the input is unmeasured, it is not possible to separate the system poles (the modal parameters) from the input poles. The method proposed in this work is based on recording the response of the structure under different (unknown) excitations, including both white noise and non-white-noise excitations. A joint analysis of the records is then performed: the common poles correspond to system poles, and the record-specific poles correspond to input poles.
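A toy stand-in for the joint analysis, under loud assumptions: each record is summarized by the poles of a least-squares AR fit, and poles that recur (within a tolerance) across all records are kept as candidate system poles, while record-specific poles are attributed to the inputs. The paper's actual identification machinery is not reproduced here.

```python
import numpy as np

def ar_poles(y, order=8):
    """Poles of a least-squares AR(order) fit to one response record."""
    Y = np.column_stack([y[order - i - 1: len(y) - i - 1]
                         for i in range(order)])
    a = np.linalg.lstsq(Y, y[order:], rcond=None)[0]
    return np.roots(np.r_[1.0, -a])

def common_poles(records, tol=0.02):
    """Keep poles present (within tol) in every record: these are the
    candidate system poles; the rest are attributed to the inputs."""
    pole_sets = [ar_poles(y) for y in records]
    return np.array([p for p in pole_sets[0]
                     if all(np.min(np.abs(ps - p)) < tol
                            for ps in pole_sets[1:])])
```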