The expectation-maximization (EM) method can facilitate maximizing likelihood functions that arise in statistical estimation problems. In the classical EM paradigm, one iteratively maximizes the conditional log-likelihood of a single unobservable complete data space, rather than maximizing the intractable likelihood function for the measured or incomplete data. EM algorithms update all parameters simultaneously, which has two drawbacks: 1) slow convergence, and 2) difficult maximization steps due to coupling when smoothness penalties are used. This paper describes the space-alternating generalized EM (SAGE) method, which updates the parameters sequentially by alternating between several small hidden-data spaces defined by the algorithm designer. We prove that the sequence of estimates monotonically increases the penalized-likelihood objective, we derive asymptotic convergence rates, and we provide sufficient conditions for monotone convergence in norm. Two signal processing applications illustrate the method: estimation of superimposed signals in Gaussian noise, and image reconstruction from Poisson measurements. In both applications, our SAGE algorithms easily accommodate smoothness penalties and converge faster than the EM algorithms.
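For concreteness, here is a minimal numpy sketch of a SAGE-style sequential update for the superimposed-signals setting: complex sinusoids in white Gaussian noise, with each component refit in turn against the residual left by the other components. The signal model, the grid-search frequency fit, and all variable names are illustrative assumptions, not the algorithm spelled out in the paper.

```python
import numpy as np

def sage_sinusoids(y, K, freq_grid, n_iter=20):
    """SAGE-style sequential updates for K superimposed complex sinusoids
    y[n] = sum_k a_k * exp(2j*pi*f_k*n) + noise (simplified illustration)."""
    N = len(y)
    n = np.arange(N)
    amps = np.zeros(K, dtype=complex)
    freqs = np.zeros(K)
    for _ in range(n_iter):
        for k in range(K):                 # update one component at a time
            # "hidden data" for component k: strip off the other components
            others = sum(amps[j] * np.exp(2j * np.pi * freqs[j] * n)
                         for j in range(K) if j != k)
            r = y - others
            # maximize the component-k fit: periodogram peak over the grid
            E = np.exp(-2j * np.pi * np.outer(freq_grid, n))
            freqs[k] = freq_grid[int(np.argmax(np.abs(E @ r) ** 2))]
            s = np.exp(2j * np.pi * freqs[k] * n)
            amps[k] = (s.conj() @ r) / N   # least-squares amplitude
    return amps, freqs

# toy usage: two sinusoids plus complex white noise
rng = np.random.default_rng(0)
n = np.arange(128)
y = (1.0 * np.exp(2j * np.pi * 0.12 * n) + 0.7 * np.exp(2j * np.pi * 0.31 * n)
     + 0.1 * (rng.standard_normal(128) + 1j * rng.standard_normal(128)))
print(sage_sinusoids(y, K=2, freq_grid=np.linspace(0, 0.5, 2048))[1])
```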
The proposal of considering nonlinear principal component analysis as a kernel eigenvalue problem has provided an extremely powerful method of extracting nonlinear features for a number of classification and regression applications. Whereas the use of Mercer kernels makes the problem of computing principal components in possibly infinite-dimensional feature spaces tractable, there remain the attendant numerical problems of diagonalizing large matrices. In this contribution, we propose an expectation-maximization approach for performing kernel principal component analysis and show it to be a computationally efficient method, especially when the number of data points is large.
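A minimal sketch of how such an EM iteration can be written against the kernel matrix so that only small q x q systems are solved, rather than a full n x n eigendecomposition. The RBF kernel, the random initialization, and all variable names are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def em_kernel_pca(K, q, n_iter=50, seed=0):
    """EM-style iteration for kernel PCA: per iteration only q x q systems
    are solved, avoiding a full eigendecomposition of the n x n kernel matrix."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering in feature space
    Kc = J @ K @ J
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, q))               # expansion coefficients, W = Phi @ A
    for _ in range(n_iter):
        X = np.linalg.solve(A.T @ Kc @ A, A.T @ Kc)   # E-step (q x n)
        A = X.T @ np.linalg.inv(X @ X.T)              # M-step (n x q)
    # A now spans the leading q-dimensional eigenspace of Kc; a small q x q
    # eigenproblem would recover ordered, orthonormal components.
    return A

X = np.random.default_rng(1).standard_normal((500, 3))
A = em_kernel_pca(rbf_kernel(X, gamma=0.5), q=2)
```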
ISBN (Print): 0819440779
We explore a statistical view of radar imaging in which target reflectances are realizations of an underlying random process. For diffuse targets, this process is zero-mean complex Gaussian. The data consist of a realization of this process observed through a linear transformation and corrupted by additive noise. Image formation corresponds to estimating the elements of a diagonal covariance matrix. In general, maximum-likelihood estimates of these parameters cannot be computed in closed form. Snyder, O'Sullivan, and Miller proposed an expectation-maximization algorithm for computing these estimates iteratively. Straightforward implementations of the algorithm involve multiplication and inversion operations on extremely large matrices, which makes them computationally prohibitive. We present an implementation which exploits Strassen's recursive strategy for matrix multiplication and inversion, which may make the algorithm feasible for image sizes of interest in high-resolution radar applications.
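A small numpy sketch of the basic EM iteration for this model, with a diagonal covariance D estimated from independent looks observed through a linear operator plus noise. It uses plain dense linear algebra rather than the Strassen-based implementation discussed above; the system matrix, noise level, and number of looks are illustrative assumptions.

```python
import numpy as np

def em_diag_cov(Y, H, sigma2, n_iter=50):
    """EM estimate of diag(D) in y = H x + n, x ~ CN(0, D), n ~ CN(0, sigma2 I).
    Y holds one observed realization ('look') per column."""
    m, p = H.shape
    d = np.ones(p)                                     # initial reflectance variances
    for _ in range(n_iter):
        D = np.diag(d)
        S = H @ D @ H.conj().T + sigma2 * np.eye(m)    # data covariance
        G = D @ H.conj().T @ np.linalg.inv(S)          # Wiener gain
        mu = G @ Y                                     # posterior means, one column per look
        P = D - G @ H @ D                              # posterior covariance (same for all looks)
        # M-step: d_k = average posterior second moment of x_k
        d = np.real(np.diag(P)) + np.mean(np.abs(mu) ** 2, axis=1)
    return d

# toy usage with a random system matrix and 200 looks
rng = np.random.default_rng(0)
H = (rng.standard_normal((64, 32)) + 1j * rng.standard_normal((64, 32))) / np.sqrt(2)
d_true = rng.uniform(0.5, 2.0, 32)
X = np.sqrt(d_true)[:, None] * (rng.standard_normal((32, 200))
                                + 1j * rng.standard_normal((32, 200))) / np.sqrt(2)
Y = H @ X + np.sqrt(0.05) * (rng.standard_normal((64, 200))
                             + 1j * rng.standard_normal((64, 200)))
print(em_diag_cov(Y, H, sigma2=0.1)[:5], d_true[:5])
```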
Areal interpolation transforms data for a variable of interest from a set of source zones to estimate the same variable's distribution over a set of target zones. One common practice has been to guide interpolation by using ancillary control zones that are related to the variable of interest's spatial distribution. This guidance typically involves using source zone data to estimate the density of the variable of interest within each control zone. This article introduces a novel approach to density estimation, the geographically weighted expectation-maximization (GWEM), which combines features of two previously used techniques, the expectation-maximization (EM) algorithm and geographically weighted regression. The EM algorithm provides a framework for incorporating proper constraints on data distributions, and the geographical weighting allows estimated control-zone density ratios to vary spatially. We assess the accuracy of GWEM by applying it with land use/land cover (LULC) ancillary data to population counts from a nationwide sample of 1980 U.S. census tract pairs. We find that GWEM generally is more accurate in this setting than several previously studied methods. Because target-density weighting (TDW), which uses 1970 tract densities to guide interpolation, outperforms GWEM in many cases, we also consider two GWEM-TDW hybrid approaches and find that they substantially improve the estimates.
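As a point of reference, here is a compact sketch of the spatially constant EM density estimation that geographically weighted variants build on, assuming a simple Poisson count model over source-zone/control-class intersections. GWEM would instead re-estimate the densities with geographic weights so that they vary over space; array names and the toy example are assumptions.

```python
import numpy as np

def em_areal_density(counts, area, n_iter=100):
    """EM density estimation for areal interpolation with control zones.
    counts[s]  : known count in source zone s
    area[s, c] : area of the intersection of source zone s with control class c
    Returns one density per control class (global version)."""
    dens = np.ones(area.shape[1])
    for _ in range(n_iter):
        expected = area * dens                        # expected count in each (s, c) cell
        share = expected / expected.sum(axis=1, keepdims=True)
        alloc = counts[:, None] * share               # E-step: allocate observed counts
        dens = alloc.sum(axis=0) / area.sum(axis=0)   # M-step: density per control class
    return dens

# toy example: 3 source zones, 2 control classes (e.g. developed / undeveloped land)
area = np.array([[4.0, 6.0], [8.0, 2.0], [5.0, 5.0]])
counts = np.array([260.0, 420.0, 300.0])
print(em_areal_density(counts, area))
```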
ISBN (Print): 9781467300469
In this paper we deal with an unsupervised segmentation approach for images given by a synthetic aperture sonar (SAS). The images containing objects are segmented into highlight, background and shadow. Since shape features are extracted from these segmented images, high segmentation accuracy and precision are required. We improve the expectation-maximization (EM) method of Sanjay-Gopal et al. by using a gamma mixture model. Moreover, an intermediate step (I-step) based on Dempster-Shafer theory (DST) is introduced between the E- and M-steps of the EM algorithm to account for the spatial dependency of pixels. Finally, numerical tests are carried out on both synthetic images and SAS images. The results are compared to iterated conditional modes (ICM) and diffused EM (DEM). Our approach provides segmentations with fewer false alarms and better shape preservation.
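A hedged sketch of a gamma-mixture EM over pixel intensities with three classes. The exact gamma M-step has no closed form, so this version uses responsibility-weighted moment matching as an approximation, and the DST-based I-step of the paper is omitted; all names and the toy data are illustrative.

```python
import numpy as np
from scipy.stats import gamma

def gamma_mixture_em(x, n_comp=3, n_iter=100):
    """EM for a gamma mixture of pixel intensities (e.g. shadow / background /
    highlight).  The gamma M-step is approximated by responsibility-weighted
    moment matching; no spatial (I-step) term is included here."""
    x = np.asarray(x, dtype=float)
    pi = np.full(n_comp, 1.0 / n_comp)
    shape = np.full(n_comp, 2.0)
    scale = np.quantile(x, np.linspace(0.2, 0.8, n_comp)) / shape
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each pixel
        p = np.stack([pi[k] * gamma.pdf(x, a=shape[k], scale=scale[k])
                      for k in range(n_comp)], axis=1)
        r = p / p.sum(axis=1, keepdims=True)
        # approximate M-step: weighted moment matching per component
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mean = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mean) ** 2).sum(axis=0) / nk
        shape = mean ** 2 / var
        scale = var / mean
    return pi, shape, scale

# toy data: three gamma-distributed intensity populations
rng = np.random.default_rng(1)
x = np.concatenate([rng.gamma(2.0, 1.0, 3000),
                    rng.gamma(9.0, 1.0, 5000),
                    rng.gamma(40.0, 1.0, 2000)])
print(gamma_mixture_em(x))
```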
Model-based clustering using a family of Gaussian mixture models with parsimonious, factor-analysis-like covariance structures is described, and an efficient algorithm for its implementation is presented. This algorithm uses the alternating expectation-conditional maximization (AECM) variant of the expectation-maximization (EM) algorithm. Two central issues in the implementation of this family of models, namely model selection and convergence criteria, are discussed. These issues also have implications for other model-based clustering techniques and for the implementation of techniques like the EM algorithm in general. The Bayesian information criterion (BIC) is used for model selection, and Aitken's acceleration, which is shown to outperform the lack-of-progress criterion, is used to determine convergence. A brief introduction to parallel computing is then given before the algorithm is parallelized within the master-slave paradigm. A simulation study confirms the effectiveness of this parallelization. The resulting software is applied to two datasets to demonstrate its effectiveness when compared to existing software. (C) 2009 Elsevier B.V. All rights reserved.
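A short sketch of the Aitken-acceleration stopping rule referred to above, in the form commonly used for EM-type algorithms; the tolerance and the exact comparison used here are illustrative choices.

```python
def aitken_converged(loglik, eps=1e-3):
    """Aitken-acceleration stopping rule for EM-type algorithms.
    loglik is the sequence of log-likelihood values observed so far;
    stop when the Aitken estimate of the asymptotic log-likelihood
    is within eps of the current value."""
    if len(loglik) < 3:
        return False
    l0, l1, l2 = loglik[-3], loglik[-2], loglik[-1]
    if l1 - l0 == 0.0:
        return True                     # no progress in the log-likelihood
    a = (l2 - l1) / (l1 - l0)           # Aitken acceleration factor
    if a >= 1.0:
        return False                    # estimate not yet usable
    l_inf = l1 + (l2 - l1) / (1.0 - a)  # estimated asymptotic log-likelihood
    return abs(l_inf - l2) < eps
```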
We propose a robust 2D shape reconstruction and simplification algorithm which takes as input a defect-laden point set with noise and outliers. We introduce an optimal-transport driven approach where the input point set, considered as a sum of Dirac measures, is approximated by a simplicial complex considered as a sum of uniform measures on 0- and 1-simplices. A fine-to-coarse scheme is devised to construct the resulting simplicial complex through greedy decimation of a Delaunay triangulation of the input point set. Our method performs well on a variety of examples ranging from line drawings to grayscale images, with or without noise, features, and boundaries.
EM-type algorithms are popular tools for modal estimation and the most widely used parameter estimation procedures in statistical modeling. However, they are often criticized for their slow convergence. Despite the appearance of numerous acceleration techniques over the last decades, their use has been limited because they are either difficult to implement or not general. In the present paper, a new generation of fast, general and simple maximum likelihood estimation (MLE) algorithms is presented. In these cyclic iterative algorithms, extrapolation techniques are integrated with the iterations of gradient-based MLE algorithms, with the objective of accelerating the convergence of the base iterations. New complementary strategies such as cycling, squaring and alternating are added to these processes. The presented schemes generally exhibit either fast linear or superlinear convergence. Numerical illustrations allow us to compare a selection of these variants and generally confirm that this class of algorithms is both extremely simple and fast. (C) 2008 Elsevier B.V. All rights reserved.
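One widely used member of this family of extrapolation schemes is the squared-extrapolation (SQUAREM-type) step. The sketch below applies such a step to a generic EM fixed-point map, using the classical multinomial EM example of Dempster, Laird and Rubin as a toy base iteration; the steplength rule shown is one common choice, not necessarily the exact scheme of this paper.

```python
import numpy as np

def squarem_step(theta, em_map):
    """One squared-extrapolation step around an EM map F: two base EM updates
    are combined into a single extrapolated update, followed by a stabilizing
    base EM step."""
    theta1 = em_map(theta)
    theta2 = em_map(theta1)
    r = theta1 - theta
    v = (theta2 - theta1) - r
    if np.allclose(v, 0):
        return theta2
    alpha = -np.linalg.norm(r) / np.linalg.norm(v)   # common steplength choice
    theta_new = theta - 2 * alpha * r + alpha ** 2 * v
    return em_map(theta_new)

# toy base iteration: the classic multinomial EM example of Dempster et al.
y = np.array([125.0, 18.0, 20.0, 34.0])

def em_map(theta):
    t = theta[0]
    x12 = y[0] * (t / 4) / (0.5 + t / 4)     # E-step: expected split of the first cell
    return np.array([(x12 + y[3]) / (x12 + y[1] + y[2] + y[3])])  # M-step

theta = np.array([0.1])
for _ in range(5):
    theta = squarem_step(theta, em_map)
print(theta)   # approaches the MLE (about 0.6268) in a few accelerated steps
```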
Images produced in emission tomography with the expectation-maximization algorithm have been observed to become more noisy and to have large distortions near edges as iterations proceed and the images converge towards the maximum-likelihood estimate. It is our conclusion that these artifacts are fundamental to reconstructions based on maximum-likelihood estimation as it has usually been applied; they are not due to the use of the expectation-maximization algorithm, which is but one numerical approach for finding the maximum-likelihood estimate. In this paper, we develop a mathematical approach for suppressing both the noise and edge artifacts by modifying the maximum-likelihood approach to include constraints that the estimate must satisfy.
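The paper develops its own constrained formulation; as a related and widely used illustration of penalizing the Poisson maximum-likelihood reconstruction, the sketch below adds a quadratic first-difference smoothness penalty to the MLEM update via Green's one-step-late (OSL) modification. The system matrix, penalty weight and 1-D geometry are illustrative assumptions.

```python
import numpy as np

def osl_mlem(y, A, beta=0.1, n_iter=200):
    """Penalized Poisson reconstruction: MLEM update with a one-step-late (OSL)
    treatment of a quadratic first-difference smoothness penalty.
    y : measured counts, A : system matrix (detector x pixel), beta : penalty weight."""
    lam = np.full(A.shape[1], y.sum() / A.sum())      # flat initial image
    sens = A.sum(axis=0)                              # sensitivity image sum_i a_ij
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ lam, 1e-12)        # measured / predicted counts
        grad_U = np.zeros_like(lam)                   # gradient of 0.5 * sum (lam[j+1]-lam[j])^2
        d = np.diff(lam)
        grad_U[:-1] -= d
        grad_U[1:] += d
        denom = np.maximum(sens + beta * grad_U, 1e-12)
        lam = lam * (A.T @ ratio) / denom             # OSL-modified MLEM step
    return lam

# toy 1-D example with a random nonnegative system matrix
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(80, 40))
lam_true = np.concatenate([np.full(20, 2.0), np.full(20, 6.0)])
y = rng.poisson(A @ lam_true).astype(float)
print(osl_mlem(y, A, beta=0.5)[:5])
```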
In this study, we apply the expectation-maximisation (EM) algorithm to identify continuous-time state-space models from non-uniformly fast-sampled data. The sampling intervals are assumed to be small and uniformly bounded. We use a parameterisation of the sampled-data model in incremental form to modify the standard formulation of the EM algorithm for discrete-time models. The parameters of the incremental model converge to the parameters of the continuous-time system description as the sampling period goes to zero. The benefits of the proposed algorithm are demonstrated via simulation studies.
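A tiny sketch illustrating the convergence claim: with an exact zero-order-hold discretization, the incremental-form parameter (A_d - I)/Delta approaches the continuous-time A as the sampling period shrinks. The example system is an illustrative assumption.

```python
import numpy as np
from scipy.linalg import expm

# continuous-time system dx/dt = A x (illustrative example)
A = np.array([[0.0, 1.0], [-2.0, -0.5]])

for dt in (0.5, 0.1, 0.01, 0.001):
    Ad = expm(A * dt)                  # exact zero-order-hold state transition matrix
    A_delta = (Ad - np.eye(2)) / dt    # incremental-form (delta-operator) parameter
    print(dt, np.linalg.norm(A_delta - A))   # error shrinks as the sampling period -> 0
```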