ISBN (print): 9781479928934
In this paper, the estimation of speech AR parameters under noisy conditions is revisited. The EM algorithm serving this purpose was first proposed by Gannot et al. We present an extensive experimental study along with a new approach to implementing the E-step of the algorithm. The new realization of the E-step uses matrix computations instead of a Kalman filter. By appropriate rearrangement of the E-step, the complexity O(P(p + q)^3) of the Kalman filter approach is reduced to O(P log P), where P is the frame length, p is the speech order and q is the noise order. In practice, a speed-up of the E-step of at least two orders of magnitude has been achieved. An extensive evaluation shows that the EM algorithm in its base form is unable to improve over a recent speech enhancement method proposed by Heusdens et al. or over the established Spectral Subtraction with Minimum Statistics method, as measured by various quality measures. However, with some modification it was possible to improve over these methods in terms of spectral distortion.
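A minimal sketch of the kind of frequency-domain E-step such a rearrangement enables is given below, assuming circulant (FFT-diagonalizable) approximations of the AR covariance matrices; the function name, interface, and the Wiener-gain form are illustrative assumptions, not the authors' exact derivation.

```python
# Hypothetical O(P log P) E-step sketch for noisy-speech AR estimation,
# assuming circulant (frequency-domain) approximations of the AR covariances.
import numpy as np

def e_step_fft(y, a_speech, g_speech, a_noise, g_noise):
    """Posterior statistics of the clean speech for one noisy frame y.

    a_speech, a_noise : AR coefficient vectors [1, -a_1, ..., -a_p]
    g_speech, g_noise : excitation variances of the speech and noise AR models
    """
    P = len(y)
    # AR power spectra evaluated on the length-P FFT grid (circulant approximation)
    S_s = g_speech / np.abs(np.fft.fft(a_speech, P)) ** 2
    S_n = g_noise / np.abs(np.fft.fft(a_noise, P)) ** 2
    Y = np.fft.fft(y)
    H = S_s / (S_s + S_n)                  # Wiener gain per frequency bin
    s_hat = np.real(np.fft.ifft(H * Y))    # posterior mean of the speech frame
    var = H * S_n                          # posterior variance per bin
    return s_hat, var
```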
ISBN (print): 9781479935123
Interference detection and suppression schemes are proposed for coded OFDM systems in the presence of narrow-band interference. Previously proposed interference detection schemes rely on recursive forward error correction (FEC) decoding, which makes their computation heavy. The proposed scheme is based on an expectation-maximization (EM) algorithm that detects the interference before the FEC decoding process. The complexity is reduced because FEC decoding is performed only once. Simulation results indicate that the proposed scheme achieves almost the same bit error rate performance as the conventional scheme while reducing the number of FEC decoding passes.
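As a rough illustration of detecting narrow-band interference with EM before FEC decoding, the sketch below fits a two-component mixture to per-subcarrier residual powers and returns the posterior probability that each subcarrier is interfered; the mixture form, initialization, and use of the posteriors are assumptions, not the paper's exact scheme.

```python
# Hedged sketch: EM-based narrow-band interference detection on OFDM subcarriers.
import numpy as np

def em_interference_detect(power, n_iter=30):
    """power: per-subcarrier residual power; returns posterior prob. of interference."""
    pi = 0.1                                                      # prior fraction of interfered subcarriers
    v0, v1 = np.percentile(power, 50), np.percentile(power, 95)   # noise / interference power levels
    for _ in range(n_iter):
        # E-step: responsibility that each subcarrier is interfered
        like0 = np.exp(-power / v0) / v0
        like1 = np.exp(-power / v1) / v1
        r = pi * like1 / (pi * like1 + (1 - pi) * like0 + 1e-12)
        # M-step: update mixing weight and the two power levels
        pi = r.mean()
        v1 = (r * power).sum() / (r.sum() + 1e-12)
        v0 = ((1 - r) * power).sum() / ((1 - r).sum() + 1e-12)
    return r  # subcarriers with high r can be erased or down-weighted before decoding
```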
The single-strategy deterministic inputs, noisy "and" gate (SS-DINA) model has previously been extended to the multiple-strategy deterministic inputs, noisy "and" gate (MS-DINA) model to address more complex situations in which examinees can use multiple problem-solving strategies during the test. The main purpose of this article is to adapt an efficient estimation algorithm, the expectation-maximization (EM) algorithm, to fit the MS-DINA model when the joint attribute distribution is most general (i.e., saturated). The article also examines, through a simulation study, the impact of sample size and test length on the fit of the SS-DINA and MS-DINA models, and the implications of misfit for item parameter recovery and attribute classification accuracy. In addition, an analysis of fraction subtraction data is presented to illustrate the use of the algorithm with real data. Finally, the article concludes by discussing several important issues associated with multiple-strategy models for cognitive diagnosis.
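A minimal EM sketch for a DINA-type model with a saturated attribute distribution is given below, simplified to the single-strategy case for brevity (the MS-DINA E-step is analogous, with ideal responses taken over strategies); all function and variable names are illustrative, not the article's implementation.

```python
# Minimal EM sketch for an SS-DINA model with a saturated attribute distribution.
import numpy as np
from itertools import product

def em_dina(X, Q, n_iter=50):
    """X: N x J binary responses, Q: J x K Q-matrix."""
    N, J = X.shape
    K = Q.shape[1]
    patterns = np.array(list(product([0, 1], repeat=K)))           # 2^K attribute profiles
    eta = (patterns @ Q.T == Q.sum(axis=1)).astype(float)          # ideal responses, 2^K x J
    g, s = np.full(J, 0.2), np.full(J, 0.2)                        # guess and slip parameters
    pi = np.full(len(patterns), 1.0 / len(patterns))               # saturated profile probabilities
    for _ in range(n_iter):
        # E-step: posterior over attribute profiles for each examinee
        p_correct = eta * (1 - s) + (1 - eta) * g                  # 2^K x J success probabilities
        logL = X @ np.log(p_correct.T) + (1 - X) @ np.log(1 - p_correct.T)   # N x 2^K
        post = np.exp(logL - logL.max(axis=1, keepdims=True)) * pi
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update profile probabilities, guess, and slip
        pi = post.mean(axis=0)
        R1 = post.T @ X                                            # expected correct counts, 2^K x J
        N_l = post.sum(axis=0)[:, None]                            # expected profile counts
        g = np.clip((R1 * (1 - eta)).sum(0) / ((N_l * (1 - eta)).sum(0) + 1e-12), 1e-3, 1 - 1e-3)
        s = np.clip(1 - (R1 * eta).sum(0) / ((N_l * eta).sum(0) + 1e-12), 1e-3, 1 - 1e-3)
    return g, s, pi, patterns
```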
ISBN (print): 9783038351153
Obtaining high-quality images is very important in many areas of applied science, but images are usually polluted by noise during generation, transmission, and acquisition. In recent years, wavelet analysis has achieved significant results in the field of image de-noising. However, most studies of noise-induced phenomena assume that the noise source is Gaussian. Studies of mixed Gaussian and impulse noise are rare, mainly because of the difficulties in handling it. In image de-noising, estimating the noise model's parameters is a key issue, because the accuracy of these parameters affects the de-noising quality. For mixed Gaussian noise, the EM algorithm provides an iterative procedure that simplifies the maximum-likelihood equations. Taking wavelet analysis and statistical theory as its tools, this thesis studies mixed-noise image de-noising and provides two classes of algorithms for a specific type of non-Gaussian noise: mixed Gaussian and salt-and-pepper noise.
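The following sketch illustrates one way such a noise-parameter EM could look on wavelet detail coefficients, using a narrow Gaussian for the background noise and a wide component standing in for impulse outliers; it is an illustrative simplification, not the thesis' exact formulation.

```python
# Hedged sketch: two-component EM on wavelet detail coefficients for mixed-noise parameters.
import numpy as np

def em_mixed_noise(d, n_iter=40):
    """d: 1-D array of wavelet detail coefficients."""
    w = 0.9                                    # weight of the Gaussian background component
    s0, s1 = d.std() * 0.5, d.std() * 3.0      # initial std devs (background, impulse-like)
    for _ in range(n_iter):
        # E-step: posterior probability that each coefficient comes from the background
        n0 = np.exp(-0.5 * (d / s0) ** 2) / s0
        n1 = np.exp(-0.5 * (d / s1) ** 2) / s1
        r = w * n0 / (w * n0 + (1 - w) * n1 + 1e-300)
        # M-step: update the mixing weight and the two standard deviations
        w = r.mean()
        s0 = np.sqrt((r * d ** 2).sum() / (r.sum() + 1e-12))
        s1 = np.sqrt(((1 - r) * d ** 2).sum() / ((1 - r).sum() + 1e-12))
    return w, s0, s1, r
```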
ISBN (print): 9781479927289
In this paper, the problem of estimating Gaussian mixture model parameters is considered. A shared-memory parallelization of the standard EM algorithm, based on data decomposition, is proposed. Our approach uses a row-wise block-striped decomposition of the large arrays storing feature vectors and posterior probabilities. Additionally, some NUMA optimizations are described that allow threads to use as much local memory as possible by exploiting the first-touch memory allocation policy of the Linux operating system. The proposed method was implemented in OpenMP and tested on a 64-core system based on four AMD Opteron 6272 (codenamed "Interlagos") processors. The experimental results indicate that, on large datasets, the algorithm scales very well with the number of cores, and that the NUMA optimizations significantly improve its performance.
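The row-wise block-striped decomposition can be pictured as below: each contiguous block of rows of the data and responsibility matrices is owned by one thread, which also matches NUMA first-touch placement when each thread initializes its own block. This serial Python stand-in only sketches the structure of the OpenMP E-step; names and the block count are illustrative.

```python
# Sketch of a row-wise block-striped E-step for a Gaussian mixture model.
import numpy as np
from scipy.stats import multivariate_normal

def e_step_blocked(X, weights, means, covs, n_blocks=4):
    N, K = X.shape[0], len(weights)
    resp = np.empty((N, K))
    block = -(-N // n_blocks)                      # rows per block (ceiling division)
    for b in range(n_blocks):                      # each iteration = one thread's share of rows
        rows = slice(b * block, min((b + 1) * block, N))
        for k in range(K):
            resp[rows, k] = weights[k] * multivariate_normal.pdf(X[rows], means[k], covs[k])
        resp[rows] /= resp[rows].sum(axis=1, keepdims=True)
    return resp
```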
Crowdsourcing has been a helpful mechanism for leveraging human intelligence to acquire useful knowledge. However, when we aggregate the crowd knowledge using currently developed voting algorithms, the result is often common knowledge that may not be specific enough. In this paper, we consider the problem of collecting specific knowledge via crowdsourcing. With the help of an external knowledge base such as WordNet, we incorporate the semantic relations between the alternative answers into a probabilistic model to determine which answer is more specific. We formulate the probabilistic model from basic assumptions, considering both the worker's ability and the task's difficulty, and solve it by the expectation-maximization (EM) algorithm. To increase the algorithm's compatibility, we also refine our method into a semi-supervised version. Experimental results show that our approach is robust to hyper-parameters and achieves better improvement than majority voting and other algorithms when more specific answers are expected, especially for sparse data.
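A much-simplified EM for this kind of aggregation, with per-worker ability only, might look as follows; the task-difficulty term and the WordNet-based semantic relations of the full model are omitted, and all names are illustrative rather than the paper's algorithm.

```python
# Simplified EM sketch for crowd answer aggregation with per-worker ability.
import numpy as np

def em_aggregate(labels, n_classes, n_iter=50):
    """labels: N x W matrix of worker answers in {0..n_classes-1}, -1 = missing."""
    N, W = labels.shape
    observed = labels >= 0
    # Initialize label posteriors from majority voting
    post = np.zeros((N, n_classes))
    for c in range(n_classes):
        post[:, c] = ((labels == c) & observed).sum(axis=1)
    post /= post.sum(axis=1, keepdims=True) + 1e-12
    ability = np.full(W, 0.8)
    for _ in range(n_iter):
        # M-step: ability = expected fraction of a worker's answers matching the true label
        match = np.zeros(W)
        total = observed.sum(axis=0) + 1e-12
        for c in range(n_classes):
            match += (post[:, c][:, None] * ((labels == c) & observed)).sum(axis=0)
        ability = np.clip(match / total, 1e-3, 1 - 1e-3)
        # E-step: posterior over true labels given worker abilities
        logp = np.zeros((N, n_classes))
        for c in range(n_classes):
            agree = (labels == c) & observed
            disagree = (labels != c) & observed
            logp[:, c] = (agree * np.log(ability) +
                          disagree * np.log((1 - ability) / (n_classes - 1))).sum(axis=1)
        post = np.exp(logp - logp.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
    return post, ability
```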
ISBN (print): 9783319093390; 9783319093383
The mixture of Gaussian processes (MGP) is a powerful framework for machine learning. However, its parameter learning or estimation is still a very challenging problem. In this paper, a precise hard-cut EM algorithm is proposed for learning the parameters of the MGP without any approximation in the derivation. Experimental results demonstrate that the proposed hard-cut EM algorithm for the MGP is feasible and even outperforms two existing hard-cut EM algorithms.
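A hedged sketch of a generic hard-cut EM loop for a mixture of Gaussian processes is shown below: each curve is hard-assigned to the component under which its marginal likelihood is highest, and each component's GP hyperparameters are then refit on its assigned curves. The zero-mean GPs, kernel choice, and scoring are simplifications, not the paper's exact algorithm.

```python
# Hedged sketch: hard-cut EM for a mixture of Gaussian processes over a set of curves.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def curve_loglik(gp, x, y):
    """Marginal log-likelihood of a curve under a component's fitted kernel (zero mean)."""
    K = gp.kernel_(x) + 1e-8 * np.eye(len(y))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(y) * np.log(2 * np.pi)

def hard_cut_em_mgp(curves, n_components=2, n_iter=5):
    """curves: list of (x, y) pairs with x of shape (n_i, 1) and y of shape (n_i,)."""
    z = np.arange(len(curves)) % n_components         # round-robin initial assignment
    gps = [GaussianProcessRegressor(kernel=RBF() + WhiteKernel()) for _ in range(n_components)]
    for _ in range(n_iter):
        # M-step: refit each component's GP hyperparameters on its assigned curves
        for k in range(n_components):
            idx = [i for i, zi in enumerate(z) if zi == k]
            if idx:
                Xk = np.vstack([curves[i][0] for i in idx])
                yk = np.concatenate([curves[i][1] for i in idx])
                gps[k].fit(Xk, yk)
        # Hard-cut E-step: reassign every curve to the component scoring it highest
        for i, (x, y) in enumerate(curves):
            z[i] = int(np.argmax([curve_loglik(gp, x, y) for gp in gps]))
    return z, gps
```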
ISBN (print): 9781479935901
We consider a parameter estimation problem for a Hidden Markov Model in the framework of particle filters. Using constructs from reinforcement learning for variance reduction in particle filters, a simulation-based scheme is developed for estimating the partially observed log-likelihood function. A Kiefer-Wolfowitz-like stochastic approximation scheme maximizes this function over the unknown parameter. The two procedures are performed on two different time scales, emulating the alternating 'expectation' and 'maximization' operations of the EM algorithm. Numerical experiments are presented in support of the proposed scheme.
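The overall loop can be sketched as follows: a bootstrap particle filter estimates the log-likelihood for a candidate parameter, and a Kiefer-Wolfowitz finite-difference scheme climbs that estimate. The reinforcement-learning-based variance reduction and the two-timescale coupling of the paper are omitted, and the toy AR(1)-plus-noise model, step sizes, and names are illustrative assumptions.

```python
# Hedged sketch: particle-filter likelihood estimation + Kiefer-Wolfowitz ascent.
import numpy as np

def pf_loglik(y, theta, n_particles=500, rng=None):
    """Log-likelihood estimate for x_t = theta*x_{t-1} + v_t, y_t = x_t + w_t, v,w ~ N(0,1)."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(size=n_particles)
    ll = 0.0
    for yt in y:
        x = theta * x + rng.normal(size=n_particles)             # propagate particles
        w = np.exp(-0.5 * (yt - x) ** 2) / np.sqrt(2 * np.pi)    # observation weights
        ll += np.log(w.mean() + 1e-300)
        p = (w + 1e-12) / (w + 1e-12).sum()
        x = rng.choice(x, size=n_particles, p=p)                 # resample
    return ll

def kiefer_wolfowitz(y, theta0=0.3, n_steps=200):
    theta = theta0
    for n in range(1, n_steps + 1):
        a, c = 0.05 / n, 0.5 / n ** 0.25                         # step and perturbation sizes
        grad = (pf_loglik(y, theta + c) - pf_loglik(y, theta - c)) / (2 * c)
        theta = float(np.clip(theta + a * grad, -0.99, 0.99))    # keep the AR(1) stable
    return theta
```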
Degradation of many products in practical applications is often subject to unit-to-unit heterogeneity. Such heterogeneity can be attributed to the heterogeneous quality of the raw materials and the fluctuation of the manufacturing process, as well as to heterogeneous usage conditions and environments. The heterogeneity leads to scattering of the degradation rates and diffusion intensities in the population. To model this phenomenon, this study proposes a general random-effects Wiener process model that accounts for the unit-to-unit heterogeneity in the degradation drift and the volatility simultaneously. In particular, the drift of the Wiener process is characterized by a normal distribution and the diffusion parameter by an independent inverse Gaussian distribution. The proposed model is flexible for characterizing heterogeneous degradation and permits analytically tractable inference. An EM algorithm incorporating a variational Bayesian method is developed to estimate the model parameters, and a parametric bootstrap approach is proposed to construct confidence intervals. The performance of the proposed model and the estimation algorithm is validated by Monte Carlo simulations. The degradation data of an infrared LED device and the wear data of the magnetic head of a hard disk drive are studied based on the proposed model. Comprehensive comparative studies validate the good performance of the proposed model in fitting the real degradation data.
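A short simulation sketch of the degradation model described above (unit-specific drift drawn from a normal distribution, unit-specific diffusion parameter from an independent inverse Gaussian distribution, Wiener-process increments) is given below; the parameter values and the function name are illustrative.

```python
# Sketch: simulate random-effects Wiener degradation paths with normal drift
# and inverse-Gaussian diffusion across units.
import numpy as np

def simulate_degradation(n_units, t, mu_drift=1.0, sd_drift=0.2,
                         ig_mean=0.5, ig_shape=2.0, seed=0):
    """t: increasing measurement times; returns an (n_units, len(t)) array of paths."""
    rng = np.random.default_rng(seed)
    dt = np.diff(np.concatenate(([0.0], np.asarray(t, dtype=float))))
    drift = rng.normal(mu_drift, sd_drift, size=n_units)          # unit-specific drift
    diffusion = rng.wald(ig_mean, ig_shape, size=n_units)         # unit-specific diffusion parameter
    increments = (drift[:, None] * dt +
                  np.sqrt(diffusion[:, None] * dt) * rng.normal(size=(n_units, len(dt))))
    return np.cumsum(increments, axis=1)
```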
The logistic regression model is one of the most powerful statistical methods for the analysis of binary data. Logistic regression uses a set of covariates to explain the binary responses. A mixture of logistic regression models is used to fit heterogeneous populations in an unsupervised learning approach. Multicollinearity, where covariates are highly correlated, is one of the most common problems in logistic regression and in mixtures of logistic regressions. It results in unreliable maximum likelihood estimates for the regression coefficients. This research develops shrinkage methods, including ridge and Liu-type estimators, to deal with multicollinearity in a mixture of logistic regression models. Through extensive numerical studies, we show that the developed methods provide more reliable estimates of the mixture's coefficients. Finally, we apply the shrinkage methods to analyze the status of bone disorders in women aged 50 and older.
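As an illustration of a ridge-type M-step inside such a mixture, the sketch below alternates responsibility updates with weighted L2-penalized logistic fits; sklearn's penalized logistic regression stands in for the ridge estimator, the Liu-type estimator is not implemented, and the initialization and names are assumptions rather than the paper's method.

```python
# Hedged sketch: EM for a mixture of logistic regressions with a ridge-penalized M-step.
import numpy as np
from sklearn.linear_model import LogisticRegression

def em_mixture_logistic_ridge(X, y, n_components=2, alpha=1.0, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    resp = rng.dirichlet(np.ones(n_components), size=len(y))     # soft initial assignments
    models = [LogisticRegression(penalty="l2", C=1.0 / alpha) for _ in range(n_components)]
    weights = np.full(n_components, 1.0 / n_components)
    for _ in range(n_iter):
        # M-step: weighted ridge-penalized logistic fits per component
        for k, m in enumerate(models):
            m.fit(X, y, sample_weight=resp[:, k] + 1e-6)
        weights = resp.mean(axis=0)
        # E-step: responsibilities proportional to weight * Bernoulli likelihood
        lik = np.column_stack([
            weights[k] * np.where(y == 1, m.predict_proba(X)[:, 1], m.predict_proba(X)[:, 0])
            for k, m in enumerate(models)])
        resp = lik / lik.sum(axis=1, keepdims=True)
    return models, weights, resp
```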