Pulsed eddy current (PEC) is a non-destructive testing method used to detect corrosion and cracks in multilayer aluminum structures, which are typically found in aircraft applications. Corrosion and metal loss in thin multi-layer structures are complex and variable phenomena that diminish the reliability of pulsed eddy current measurements. In this article, pulsed eddy current signals are processed to improve the accuracy and reliability of these measurements. The PEC results (time-domain data) are converted by time-frequency analysis (the Rihaczek distribution) into a three-dimensional representation. The time-frequency approach generates a large amount of data, so principal component analysis is applied as a feature-extraction step to reduce redundant data and provide new features for classifiers. K-means clustering and expectation-maximization are applied to classify the data and automatically determine the corrosion distribution in each layer. (C) 2011 Elsevier Ltd. All rights reserved.
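A minimal sketch of the processing chain outlined above, assuming the PEC time-domain traces are available as rows of a NumPy array (the array name pec_signals and all parameter values are illustrative, not from the paper): a discrete Rihaczek time-frequency map per trace, PCA feature reduction, and unsupervised classification with K-means and a Gaussian-mixture EM.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def rihaczek_magnitude(x):
    """Discrete Rihaczek distribution R[n, k] = x[n] * conj(X[k]) * exp(-j*2*pi*n*k/N);
    only its magnitude is kept as a time-frequency feature map."""
    N = len(x)
    X = np.fft.fft(x)
    n = np.arange(N)[:, None]
    k = np.arange(N)[None, :]
    R = x[:, None] * np.conj(X)[None, :] * np.exp(-2j * np.pi * n * k / N)
    return np.abs(R)

# pec_signals: (n_measurements, n_samples) array of PEC time-domain traces (toy data here)
rng = np.random.default_rng(0)
pec_signals = rng.standard_normal((60, 128))

# Time-frequency analysis followed by PCA feature extraction
features = np.array([rihaczek_magnitude(s).ravel() for s in pec_signals])
features_reduced = PCA(n_components=10).fit_transform(features)

# Unsupervised classification of corrosion states
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features_reduced)
em_labels = GaussianMixture(n_components=3, random_state=0).fit_predict(features_reduced)
```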
This paper introduces a generative model of voice fundamental frequency (F0) contours that allows us to extract prosodic features from raw speech data. The present F0 contour model is formulated by translating the Fujisaki model, a well-founded mathematical model representing the control mechanism of vocal fold vibration, into a probabilistic model described as a discrete-time stochastic process. There are two motivations behind this formulation. One is to derive a general parameter estimation framework for the Fujisaki model that allows the introduction of powerful statistical methods. The other is to construct an automatically trainable version of the Fujisaki model that we can incorporate into statistical-model-based text-to-speech synthesizers in such a way that the Fujisaki-model parameters can be learned from a speech corpus in a unified manner. It could also be useful for other speech applications such as emotion recognition, speaker identification, speech conversion and dialogue systems, in which prosodic information plays a significant role. We quantitatively evaluated the performance of the proposed Fujisaki model parameter extractor using real speech data. Experimental results revealed that our method was superior to a state-of-the-art Fujisaki model parameter extractor.
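To fix ideas, the following is a sketch of the deterministic Fujisaki model that the stochastic formulation above builds on: the log-F0 contour is a baseline plus phrase components (impulse responses) and accent components (step responses). The command timings, amplitudes and time constants below are illustrative values, not taken from the paper.

```python
import numpy as np

def phrase_response(t, alpha=3.0):
    """Phrase control mechanism: Gp(t) = alpha^2 * t * exp(-alpha * t) for t >= 0."""
    return np.where(t >= 0, alpha**2 * t * np.exp(-alpha * np.clip(t, 0, None)), 0.0)

def accent_response(t, beta=20.0, gamma=0.9):
    """Accent control mechanism: Ga(t) = min(1 - (1 + beta*t) * exp(-beta*t), gamma) for t >= 0."""
    g = 1.0 - (1.0 + beta * t) * np.exp(-beta * np.clip(t, 0, None))
    return np.where(t >= 0, np.minimum(g, gamma), 0.0)

def fujisaki_f0(t, fb, phrase_cmds, accent_cmds, alpha=3.0, beta=20.0):
    """ln F0(t) = ln Fb + sum_i Ap_i Gp(t - T0_i) + sum_j Aa_j [Ga(t - T1_j) - Ga(t - T2_j)]."""
    log_f0 = np.full_like(t, np.log(fb))
    for ap, t0 in phrase_cmds:
        log_f0 += ap * phrase_response(t - t0, alpha)
    for aa, t1, t2 in accent_cmds:
        log_f0 += aa * (accent_response(t - t1, beta) - accent_response(t - t2, beta))
    return np.exp(log_f0)

t = np.linspace(0.0, 2.0, 400)
f0 = fujisaki_f0(t, fb=120.0, phrase_cmds=[(0.5, 0.0)], accent_cmds=[(0.4, 0.3, 0.8)])
```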
A new model is proposed to represent a general vector nonstationary and nonlinear process by setting up a state-dependent vector hybrid linear and nonlinear autoregressive moving average (SVH-ARMA) model. The linear part of the process is represented by a vector ARMA model, the nonlinear part is represented by a vector nonlinear ARMA model employing a multilayer feedforward neural network, and the nonstationary characteristics are captured with a hidden Markov chain. Based on a unified Q-likelihood function, an expectation-maximization algorithm for model identification is derived, and the model parameters are estimated by iteratively applying a state-dependent training and nonlinear optimization technique, which finally yields the maximum likelihood estimate of the model parameters. This model can adaptively track the nonstationary variation of a vector linear and/or nonlinear process and represent a vector linear and/or nonlinear system with low order. Moreover, it is able to characterize and track the long-range, second-order correlation features of many time series and thus can be used for reliable multiple-step-ahead prediction. Some impressive applications of the SVH-ARMA model are presented in the companion paper by Zheng et al., pp. 575-597, this issue.
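The sketch below illustrates only the hybrid linear-plus-nonlinear autoregressive decomposition for a single regime (one hidden-Markov state), with no moving-average terms and no state switching: a vector linear AR fit followed by a feedforward network fitted to the linear residual. The data, lag order and network size are toy choices, not the paper's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

def build_lagged(y, p):
    """Stack p lagged vectors as regressors for vector autoregressive modelling."""
    X = np.hstack([y[p - i - 1 : len(y) - i - 1] for i in range(p)])
    return X, y[p:]

rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal((500, 2)), axis=0)   # toy bivariate time series

p = 3
X, target = build_lagged(y, p)

# Linear vector AR part
linear = LinearRegression().fit(X, target)
residual = target - linear.predict(X)

# Nonlinear part: feedforward network on the same lagged inputs, fitted to the linear residual
nonlinear = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1).fit(X, residual)

# One-step prediction combines both parts
prediction = linear.predict(X[-1:]) + nonlinear.predict(X[-1:])
```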
The aim of this paper is to propose an approach to constructing lower confidence limits for a reliability function and to investigate the effect of the sampling scheme on the performance of the proposed approach. This is accomplished by using a data-completion algorithm and certain Monte Carlo methods. The data-completion algorithm fills in censored observations with pseudo-complete data, while the Monte Carlo methods simulate observations for complicated pivotal quantities. The Birnbaum-Saunders distribution, the lognormal distribution and the Weibull distribution are employed for illustrative purposes. Three data-analysis cases are presented to validate the applicability and effectiveness of the proposed methods: the first is illustrated with simulated data, and the last two with real data sets. (C) 2014 Elsevier B.V. All rights reserved.
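As a much simpler stand-in for the paper's data-completion and pivotal-quantity machinery, the sketch below computes a Monte Carlo (parametric-bootstrap) lower confidence limit for the Weibull reliability function R(t) = exp(-(t/scale)^shape) from a complete toy sample; all numbers are illustrative.

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_reliability(t, shape, scale):
    """R(t) = exp(-(t / scale) ** shape) for the two-parameter Weibull distribution."""
    return np.exp(-(t / scale) ** shape)

rng = np.random.default_rng(2)
data = weibull_min.rvs(c=1.8, scale=100.0, size=40, random_state=rng)   # complete toy sample

# Maximum-likelihood point estimate of R(t0)
shape_hat, _, scale_hat = weibull_min.fit(data, floc=0)
t0 = 50.0
r_hat = weibull_reliability(t0, shape_hat, scale_hat)

# Parametric-bootstrap lower confidence limit (a simple Monte Carlo stand-in)
reps = []
for _ in range(500):
    resample = weibull_min.rvs(c=shape_hat, scale=scale_hat, size=len(data), random_state=rng)
    c_b, _, s_b = weibull_min.fit(resample, floc=0)
    reps.append(weibull_reliability(t0, c_b, s_b))
lcl_95 = np.quantile(reps, 0.05)
print(f"R({t0}) estimate {r_hat:.3f}, 95% lower confidence limit {lcl_95:.3f}")
```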
This paper considers the problem of lossless source coding with side information at the decoder when the correlation model between the source and the side information is uncertain. Four parametrized models representing the correlation between the source and the side information are introduced. The uncertainty about the correlation appears through the lack of knowledge of the parameter values. For each model, we propose a practical coding scheme based on non-binary Low-Density Parity-Check codes that is able to deal with the parameter uncertainty. At the encoder, the choice of the coding rate results from an information-theoretic analysis. We then propose decoding algorithms that jointly estimate the source vector and the parameters. As the proposed decoder is based on the expectation-maximization algorithm, which is very sensitive to initialization, we also propose a method to first produce a coarse estimate of the parameters.
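The sketch below illustrates only the rate-choice step for one possible correlation model, a binary symmetric channel X = Y xor Z with crossover probability known only to lie in an interval: the Slepian-Wolf bound H(X|Y) = H(p) is evaluated over the interval and the worst case is taken. It does not implement the non-binary LDPC coding or the EM decoder; the interval endpoints are illustrative.

```python
import numpy as np

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    p = np.asarray(p, dtype=float)
    out = np.zeros_like(p)
    mask = (p > 0) & (p < 1)
    out[mask] = -p[mask] * np.log2(p[mask]) - (1 - p[mask]) * np.log2(1 - p[mask])
    return out

# Binary symmetric correlation: X = Y xor Z, Z ~ Bernoulli(p), p known only to lie in a range
p_min, p_max = 0.05, 0.12
p_grid = np.linspace(p_min, p_max, 200)

# Slepian-Wolf: lossless coding of X with Y at the decoder needs R >= H(X|Y) = H(p);
# under parameter uncertainty the encoder covers the worst case over the interval
required_rate = binary_entropy(p_grid).max()
print(f"choose coding rate >= {required_rate:.3f} bit/symbol")
```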
We consider an orthogonal frequency-division multiplexing (OFDM) system and address the problem of carrier frequency estimation in the presence of narrowband interference (NBI) with unknown power. This scenario is encountered in emerging spectrum-sharing systems, where the coexistence of different wireless services over the same frequency band may result in considerable co-channel interference, and also in digital subscriber line transmissions as a consequence of the cross-talk phenomenon. A possible solution for frequency recovery in OFDM systems plagued by NBI has recently been derived using the maximum-likelihood criterion. Such a scheme exhibits good accuracy but involves a computationally demanding grid search over the frequency uncertainty range. In the present work, we derive an alternative method that provides frequency estimates in closed form by resorting to the expectation-maximization algorithm. This makes it possible to achieve some computational savings while maintaining remarkable robustness against NBI.
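For orientation, the sketch below shows a classical correlation-based (Moose-type) closed-form carrier-frequency-offset estimator built on a training symbol with two identical halves; it is a simpler stand-in that ignores the NBI modelling which the EM-based method above is designed to handle. All signal parameters are toy values.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64                      # number of subcarriers / samples per symbol
eps_true = 0.12             # carrier frequency offset in units of the subcarrier spacing

# Training symbol with two identical halves of length N/2
half = np.exp(1j * 2 * np.pi * rng.random(N // 2))
symbol = np.concatenate([half, half])

n = np.arange(N)
received = symbol * np.exp(1j * 2 * np.pi * eps_true * n / N)
received += 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# The offset appears as a common phase rotation between the two halves
corr = np.vdot(received[:N // 2], received[N // 2:])
eps_hat = np.angle(corr) * N / (2 * np.pi * (N // 2))
print(f"estimated CFO: {eps_hat:.3f} subcarrier spacings (true {eps_true})")
```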
The maximum-likelihood (ML) approach in emission tomography provides images with superior noise characteristics compared to conventional filtered backprojection (FBP) algorithms. The expectation-maximization (EM) algorithm is an iterative algorithm for maximizing the Poisson likelihood in emission computed tomography that became very popular for solving the ML problem because of its attractive theoretical and practical properties. Recently, block-sequential versions of the EM algorithm that take advantage of the scanner's geometry have been proposed in order to accelerate its convergence (Browne and DePierro, 1996; Hudson and Larkin, 1994). In Hudson and Larkin, 1994, the ordered-subsets EM (OS-EM) method was applied to the ML problem, together with a modification (OS-GP) for the maximum a posteriori (MAP) regularized approach, without showing convergence. In Browne and DePierro, 1996, we presented a relaxed version of OS-EM (RAMLA) that converges to an ML solution. In this paper, we present an extension of RAMLA for MAP reconstruction. We show that, if the sequence generated by this method converges, then it must converge to the true MAP solution. Experimental evidence of this convergence is also shown. To illustrate this behavior, we apply the algorithm to simulated positron emission tomography data, comparing its performance to OS-GP.
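The sketch below implements only the basic unregularized MLEM update for Poisson emission data, the starting point that OS-EM and RAMLA accelerate and extend; the system matrix and counts are simulated toy data, not a real scanner geometry.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Basic MLEM update for Poisson data y ~ Poisson(A @ lam):
    lam <- lam / (A.T @ 1) * (A.T @ (y / (A @ lam)))."""
    lam = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ lam, eps)
        lam = lam / np.maximum(sens, eps) * (A.T @ ratio)
    return lam

rng = np.random.default_rng(4)
A = rng.random((200, 50))                     # toy system matrix (detector bins x image voxels)
lam_true = rng.random(50) * 10
y = rng.poisson(A @ lam_true)                 # simulated emission counts
lam_hat = mlem(A, y)
```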
In this paper, we develop a Mean Field Games approach to Cluster Analysis. We consider a finite mixture model, given by a convex combination of probability density functions, to describe the given data set. We interpret a data point as an agent of one of the populations represented by the components of the mixture model, and we introduce a corresponding optimal control problem. In this way, we obtain a multi-population Mean Field Games system which characterizes the parameters of the finite mixture model. Our method can be interpreted as a continuous version of the classical expectation-maximization algorithm.
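For comparison, the sketch below is the classical (discrete) expectation-maximization algorithm for a one-dimensional Gaussian mixture, the algorithm of which the Mean Field Games system above is described as a continuous version; the data and component count are illustrative.

```python
import numpy as np

def gmm_em(x, K=2, n_iter=100):
    """Classical EM for a one-dimensional Gaussian mixture model."""
    rng = np.random.default_rng(0)
    pi = np.full(K, 1.0 / K)
    mu = rng.choice(x, K, replace=False)
    var = np.full(K, np.var(x))
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
weights, means, variances = gmm_em(x, K=2)
```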
Effective collaboration among multiple robots requires efficient exchange of map information. Since directly exchanging commonly used depth maps requires high communication bandwidth, it is practical to improve efficiency using map-compression techniques based on Gaussian mixture models. Currently, the parameters of the Gaussian mixture model are mostly computed using the expectation-maximization algorithm. This is time consuming, as the algorithm has to iteratively update the parameters by traversing all points in a point cloud converted from the depth map, so it is not suitable for real-time applications. Other methods directly segment the point cloud into grids and then perform a single-Gaussian parameter estimation for each grid; they achieve real-time compression but produce parameter-sensitive results. To tackle these issues, we improve compression methods with an integrated hierarchical approach. First, the points are clustered hierarchically and efficiently by K-means, generating coarse clusters. Then, each cluster is further hierarchically clustered by the expectation-maximization algorithm to enhance accuracy. After each clustering step, an evaluation index for ensuring accuracy and preventing over-fitting is calculated to determine whether the newly generated clusters should be pruned or retained. Finally, the parameters of each Gaussian distribution in the model are estimated from the points in the corresponding cluster. Experiments conducted in various environments demonstrate that our approach improves computing efficiency by over 79 times compared to the state-of-the-art approach.
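A rough sketch of the hierarchical idea described above, assuming the point cloud is a NumPy array: coarse K-means clustering followed by per-cluster Gaussian-mixture refinement, with the Bayesian information criterion used here as the evaluation index for accepting or pruning a finer split (an assumption; the paper's index may differ).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def compress_point_cloud(points, coarse_k=8, max_split=3):
    """Coarse K-means clustering, then per-cluster Gaussian mixture refinement;
    a finer split is kept only if it lowers the BIC on that cluster."""
    coarse_labels = KMeans(n_clusters=coarse_k, n_init=10, random_state=0).fit_predict(points)
    gaussians = []
    for c in range(coarse_k):
        cluster = points[coarse_labels == c]
        best = GaussianMixture(n_components=1, random_state=0).fit(cluster)
        for k in range(2, max_split + 1):
            if len(cluster) <= k:
                break
            cand = GaussianMixture(n_components=k, random_state=0).fit(cluster)
            if cand.bic(cluster) < best.bic(cluster):   # keep the split only if it is justified
                best = cand
            else:
                break
        gaussians.append(best)   # stores means, covariances and weights for this coarse cluster
    return gaussians

rng = np.random.default_rng(6)
points = rng.standard_normal((2000, 3)) * [5.0, 5.0, 1.0]   # toy point cloud from a depth map
model = compress_point_cloud(points)
```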
Authors: Zhou, Zhenhua; Liao, Bin
Shenzhen Univ, Coll Elect & Informat Engn, Guangdong Key Lab Intelligent Informat Proc, State Key Lab Radio Frequency Heterogeneous Integr, Shenzhen 518060, Peoples R China
In this article, we present a maximum a posteriori (MAP) based framework to deal with the challenging problem of joint fundamental frequency and order estimation for a harmonic signal corrupted by impulsive noise, which is modeled as Gaussian noise contaminated by outliers. In the proposed method, parameters including the fundamental frequency (subject to possible scaling), noise variance, signal waveform and precision parameters of the outliers are first jointly estimated by maximizing the posterior probability density function (PDF). To solve the resulting problem, the expectation-maximization (EM) algorithm is employed and an alternating optimization method is developed to handle the multi-variable optimization in the maximization step. Based on the estimated parameters, the order of the harmonic signal is determined according to the MAP criterion. Moreover, the scaling of the fundamental frequency is resolved using the order estimate and selected harmonic components. Simulation results demonstrate the superiority of the proposed approach in comparison with existing schemes.
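As a much simpler stand-in for the MAP/EM procedure above (it ignores the impulsive-noise modelling), the sketch below grid-searches the fundamental frequency, fits the harmonic amplitudes by least squares for each candidate order, and selects the order with a BIC-like penalty; all parameters are toy values.

```python
import numpy as np

def harmonic_fit_cost(x, f0, L, fs):
    """Least-squares residual of fitting L harmonics of f0 to the signal x."""
    n = np.arange(len(x)) / fs
    Z = np.column_stack([np.exp(2j * np.pi * f0 * (l + 1) * n) for l in range(L)])
    a, *_ = np.linalg.lstsq(Z, x, rcond=None)
    return np.sum(np.abs(x - Z @ a) ** 2)

rng = np.random.default_rng(7)
fs, N = 8000.0, 400
n = np.arange(N) / fs
x = sum(np.exp(2j * np.pi * 200.0 * (l + 1) * n) for l in range(3))   # 3 harmonics of 200 Hz
x += 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

f0_grid = np.arange(80.0, 400.0, 1.0)
best = None
for L in range(1, 6):
    costs = [harmonic_fit_cost(x, f0, L, fs) for f0 in f0_grid]
    i = int(np.argmin(costs))
    # BIC-like order penalty; a crude stand-in for the paper's MAP criterion
    score = N * np.log(costs[i] / N) + 2 * L * np.log(N)
    if best is None or score < best[0]:
        best = (score, f0_grid[i], L)
print(f"estimated f0 = {best[1]:.1f} Hz, order = {best[2]}")
```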