The last two decades have seen an escalating interest in methods for large-scale unconstrained face recognition. While the promise of computer vision systems to efficiently and accurately verify and identify faces in naturally occurring circumstances still remains elusive, recent advances in deep learning are taking us closer to human-level recognition. In this study, the authors propose a new paradigm that employs a deep feature extractor together with intra-personal factor analysis as the recogniser. The proposed strategy represents the face changes of a person using identity-specific components and the intra-personal variation through a reinterpretation of a Bayesian generative factor analysis model. The authors employ the expectation-maximisation algorithm to estimate model parameters that cannot be observed directly. Recognition outcomes achieved through benchmarking on large-scale in-the-wild databases, Labeled Faces in the Wild (LFW) and YouTube Faces (YTF), show that the proposed approach provides a marked face verification improvement over state-of-the-art approaches.
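To make the factor-analysis recogniser concrete, the following is a minimal EM sketch for a single generative factor analysis model fitted to centred deep features. It only illustrates the basic E/M mechanics; the paper's model further splits the latent space into identity-specific and intra-personal components, and all names and dimensions here are illustrative.

```python
import numpy as np

def fa_em(X, k, n_iter=100, seed=0):
    """EM for a factor analysis model x = W z + eps, z ~ N(0, I),
    eps ~ N(0, diag(psi)). X: (n, d) matrix of (e.g. deep) features."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    X = X - X.mean(axis=0)              # centre the features
    W = rng.standard_normal((d, k)) * 0.01
    psi = X.var(axis=0) + 1e-6          # diagonal noise variances
    S = (X.T @ X) / n                   # sample covariance
    for _ in range(n_iter):
        # E-step: posterior of latent factors given current W, psi
        G = np.linalg.inv(np.eye(k) + (W / psi[:, None]).T @ W)   # (k, k)
        Ez = X @ (W / psi[:, None]) @ G                           # (n, k)
        Ezz = n * G + Ez.T @ Ez                                   # sum E[zz^T]
        # M-step: closed-form updates
        W = (X.T @ Ez) @ np.linalg.inv(Ezz)
        psi = np.diag(S) - np.einsum('ij,ij->i', W, (X.T @ Ez) / n)
        psi = np.maximum(psi, 1e-6)
    return W, psi
```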
A polynomial approach for maximum-likelihood (ML) estimation of superimposed signals in time-series problems and array processing was recently proposed. This technique was applied successfully to linear uniform arrays and to uniformly sampled complex exponential signals. However, uniformly spaced arrays are not optimal for minimum variance estimation of bearing, range or position, and uniform sampling of signals is not always possible in practice. The authors make use of the expectation-maximization algorithm to apply the polynomial approach to sublattice arrays and to missing samples in time-series problems.
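As a toy illustration of the missing-samples case, the sketch below runs an EM-style loop for a single complex exponential: the E-step imputes missing samples with the current model mean, and the M-step re-estimates frequency and amplitude from the completed record. The real method handles several superimposed signals via the polynomial parameterisation; this single-signal simplification and all names are assumptions.

```python
import numpy as np

def em_missing_exponential(y, observed, n_iter=20, nfft=4096):
    """EM-style frequency/amplitude estimation of one complex exponential
    from a record with missing samples. y: length-N complex array
    (entries where ~observed are ignored); observed: boolean mask."""
    N = len(y)
    n = np.arange(N)
    z = np.where(observed, y, 0.0 + 0.0j)     # initial completed record
    for _ in range(n_iter):
        # M-step: coarse frequency from the FFT peak, then LS amplitude
        spec = np.fft.fft(z, nfft)
        w = 2 * np.pi * np.argmax(np.abs(spec)) / nfft
        e = np.exp(1j * w * n)
        a = (e[observed].conj() @ y[observed]) / observed.sum()
        # E-step: impute missing samples with the model mean
        z = np.where(observed, y, a * e)
    return w, a
```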
The surface of various carbon black and silica grades is characterized via static gas adsorption using different gases. From decomposition of the adsorption isotherm into distinct energetic contributions, the adsorption energy distribution as well as the surface area are obtained. The decomposition is done by an iterative expectation-maximization algorithm specifically designed for this problem. It is found that the adsorption isotherms of the various gases differ significantly in the low-pressure regime, leading to characteristic energy distributions with distinct maxima. As expected, the mean adsorption energy generally increases with the cross section of the gases, and systematic deviations are found reflecting the polar and dispersive interaction characteristics of silica and carbon black, respectively. The surface fractal dimension of two different carbon black grades is estimated using the yardstick method. The obtained values of 2.6 and 2.7 agree with previous findings that the carbon black surface morphology is very rough. The adsorption of CO2 on both carbon blacks delivers unexpectedly low values of the monolayer coverage or specific surface area, indicating that mainly high-energy sites of the surface are covered. Consequently, compared with N2, a relatively high value of the mean adsorption energy is found. For both investigated silicas, the mean adsorption energy scales with the quadrupole moments of CO2 and N2, which is indicative of a large polar contribution to the interaction energy.
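The multiplicative update below sketches an EM-type decomposition of an isotherm into contributions from sites of different adsorption energy, assuming a local Langmuir kernel on an energy grid; the kernel choice, K0, and temperature are illustrative stand-ins for the problem-specific algorithm used in the paper.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def em_energy_distribution(p, v_meas, E_grid, T=77.0, K0=1e-7, n_iter=500):
    """Decompose an adsorption isotherm v(p) = sum_i f_i * theta(p, E_i)
    into site contributions via a multiplicative EM update.
    p: pressures, v_meas: measured amounts, E_grid: energies (J/mol)."""
    K = K0 * np.exp(E_grid / (R * T))                 # site affinity per energy
    theta = K[None, :] * p[:, None] / (1 + K[None, :] * p[:, None])
    f = np.full(len(E_grid), v_meas.max() / len(E_grid))   # flat start
    for _ in range(n_iter):
        v_model = theta @ f
        ratio = v_meas / np.maximum(v_model, 1e-30)   # data / model
        f *= (theta.T @ ratio) / np.maximum(theta.sum(axis=0), 1e-30)
    return f   # adsorption energy distribution on E_grid
```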
The ML-EM algorithm for emission tomography reconstruction is unstable owing to the ill-posed nature of the problem. Bayesian reconstruction methods overcome this instability by introducing prior information, often in the form of a spatial smoothness regularizer. More elaborate forms of smoothness constraints may be used to extend the role of the prior beyond that of a stabilizer in order to capture actual spatial information about the object. Previously proposed forms of such prior distributions were based on the assumption of a piecewise constant source distribution. Here, we propose an extension to a piecewise linear model, the weak plate, which is more expressive than the piecewise constant model. The weak plate prior not only preserves edges but also allows for piecewise ramp-like regions in the reconstruction. Indeed, for our application in SPECT, such ramp-like regions are observed in ground-truth source distributions in the form of primate autoradiographs of rCBF radionuclides. To incorporate the weak plate prior in a MAP approach, we model the prior as a Gibbs distribution and use a GEM formulation for the optimization. We compare the quantitative performance of the ML-EM algorithm, a GEM algorithm with a prior favoring piecewise constant regions, and a GEM algorithm with our weak plate prior. Pointwise and regional bias and variance of ensemble image reconstructions are used as indications of image quality. Our results show that the weak plate and membrane priors exhibit improved bias and variance relative to ML-EM techniques.
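For reference, the unregularized ML-EM iteration whose instability motivates these priors is compact enough to sketch below. The GEM variants discussed above modify the M-step with a Gibbs prior term (membrane or weak plate), which is omitted here.

```python
import numpy as np

def ml_em(A, y, n_iter=50):
    """Classic ML-EM iteration for emission tomography.
    A: (n_detectors, n_pixels) system matrix, y: measured counts."""
    sens = A.sum(axis=0)                          # per-pixel sensitivity
    lam = np.full(A.shape[1], y.mean() + 1e-9)    # uniform initial image
    for _ in range(n_iter):
        proj = A @ lam                            # forward projection
        ratio = y / np.maximum(proj, 1e-12)       # data / model
        lam *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return lam
```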
Breast cancer has become a significant public health problem worldwide. In the early detection of breast cancer, accurately classifying benign versus malignant patterns in digital mammograms is a very challenging task. This study proposes a new fully automated computer-aided diagnosis (CAD) system for breast cancer diagnosis with high accuracy and low computational requirements. The expectation-maximisation algorithm is investigated to extract regions of interest (ROIs) automatically within mammograms. The standard shape, statistical, and textural features of ROIs are extracted and combined with multi-resolution and multi-orientation features derived from a new feature extraction technique based on the wavelet-based contourlet transform. A hybrid feature selection approach is proposed that combines support vector machine recursive feature elimination with a correlation bias reduction algorithm. The authors also investigate a new similarity-based learning algorithm, called Q, for benign-malignant classification. The proposed CAD system is applied to real clinical mammograms, and the experimental results demonstrate its superior performance over other existing CAD systems in terms of accuracy (98.16%), sensitivity (98.63%), specificity (97.80%), and computational time (2.2 s). This reveals the effectiveness of the proposed CAD system in improving the accuracy of breast cancer diagnosis in real-time systems.
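A minimal sketch of EM-based ROI extraction: fit a Gaussian mixture to the pixel intensities and keep the brightest component as the candidate ROI mask. The component count and the brightest-component heuristic are assumptions; the paper's extraction step may differ in detail.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def extract_roi_mask(image, n_components=3, seed=0):
    """Segment a grayscale mammogram by fitting a Gaussian mixture with
    EM to the intensities and keeping the brightest component."""
    pixels = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    labels = gmm.fit_predict(pixels)
    bright = np.argmax(gmm.means_.ravel())       # highest-mean component
    return (labels == bright).reshape(image.shape)
```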
This paper extends the finite mixture of factor analyzers model to incorporate common factors and categorical variables, building on the multivariate generalized linear model and the principle of maximum random utility from probabilistic choice theory. The EM algorithm and the Newton-Raphson algorithm are used to estimate the model parameters, and the approach is illustrated with a simulation study and a real example.
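Since the categorical responses enter through a generalized linear model, the M-step needs an inner Newton-Raphson solver. The sketch below shows such a solver for a plain binary logit, which conveys the gradient/Hessian mechanics without reproducing the paper's full mixture-of-factor-analyzers estimator; all names are illustrative.

```python
import numpy as np

def newton_logistic(X, y, w=None, n_iter=25, ridge=1e-8):
    """Newton-Raphson for a binary logit: the kind of inner solver an
    M-step uses when categorical responses follow a GLM.
    X: (n, d) design matrix, y: 0/1 responses."""
    n, d = X.shape
    w = np.zeros(d) if w is None else w
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        grad = X.T @ (y - mu)                    # score vector
        weights = mu * (1 - mu)                  # IRLS weights
        H = (X * weights[:, None]).T @ X + ridge * np.eye(d)  # Hessian
        w += np.linalg.solve(H, grad)            # Newton step
    return w
```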
Recently, the third generation partnership standards bodies (3GPP/3GPP2) have defined a two-dimensional channel model for multiple-input multiple-output (MIMO) systems, in which the propagating plane waves are assumed to arrive only from the azimuthal direction and therefore do not include the elevation domain. As a result of this assumption, the derived angle-of-arrival (AoA) distribution is characterised only by the azimuth direction of these waves. Here, the AoA distribution of multipaths is modelled with a novel three-dimensional approach. The von Mises-Fisher (VMF) probability density function is used to describe their distribution within the propagation environment in both azimuth and co-latitude. More specifically, the proposed model uses a mixture of VMF distributions. A mixture can be composed of any number of clusters, and this number is clutter specific. The parameters of the individual clusters of scatterers within the mixture are derived, and those parameters are estimated using the spherical K-means algorithm together with the expectation-maximisation algorithm. Statistical tests are provided to measure the goodness of fit of the proposed model. The results indicate that the proposed model fits well with MIMO experimental data obtained from a measurement campaign in Germany.
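The spherical K-means initialisation is straightforward to sketch: cluster the unit direction vectors by cosine similarity and renormalise each centroid, giving starting mean directions for the subsequent EM fit of the VMF mixture. The fixed iteration count and names are illustrative.

```python
import numpy as np

def spherical_kmeans(X, k, n_iter=50, seed=0):
    """Spherical K-means on unit vectors (e.g. 3-D arrival directions).
    X: (n, d) rows; returns centroids on the unit sphere and labels."""
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = X[rng.choice(len(X), k, replace=False)]      # initial centroids
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        labels = np.argmax(X @ C.T, axis=1)          # max cosine similarity
        for j in range(k):
            members = X[labels == j]
            if len(members):
                m = members.sum(axis=0)
                C[j] = m / np.linalg.norm(m)         # renormalise centroid
    return C, labels
```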
Pharmacokinetic/pharmacodynamic phenotypes are identified using nonlinear random effect models with finite mixture structures. A maximum a posteriori probability estimation approach is presented using an EM algorithm with importance sampling. Parameters for the conjugate prior densities can be based on prior studies or set to represent vague knowledge about the model parameters. A detailed simulation study illustrates the feasibility of the approach and evaluates its performance, including selection of the number of mixture components and correct subject classification.
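The importance-sampling E-step can be sketched for one subject: draw candidate random effects from the prior, weight them by the Gaussian data likelihood, and take weighted moments. Using the prior as the proposal and a user-supplied prediction function `pred` are simplifying assumptions, not the paper's exact scheme.

```python
import numpy as np

def is_estep(y_i, t_i, pred, mu, omega2, sigma2, n_draws=5000, seed=0):
    """Importance-sampling E-step for one subject in a nonlinear
    random-effects model: y_ij = pred(t_ij, b_i) + eps,
    b_i ~ N(mu, omega2), eps ~ N(0, sigma2).
    Returns the posterior mean of b_i and the normalised weights."""
    rng = np.random.default_rng(seed)
    b = rng.normal(mu, np.sqrt(omega2), n_draws)       # draws from the prior
    resid = y_i[None, :] - np.array([pred(t_i, bi) for bi in b])
    loglik = -0.5 * (resid ** 2).sum(axis=1) / sigma2  # Gaussian log-likelihood
    w = np.exp(loglik - loglik.max())                  # stabilised weights
    w /= w.sum()
    return (w * b).sum(), w

# Usage with a hypothetical one-compartment decay curve:
# post_mean, w = is_estep(y_i, t_i, lambda t, b: np.exp(b) * np.exp(-0.1 * t),
#                         mu=0.0, omega2=0.25, sigma2=0.04)
```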
Initial ranging (IR) is a process of performing power adjustment, timing offset estimation, and synchronisation between the base station (BS) and all users. It plays a vital role in the mobile WiMAX standard, which establishes a communication link between the BS and ranging subscriber stations (RSSs). Since the RSS is located far from the BS and the uplink signal is corrupted by noise, the performance of the IR process degrades. Thus, the authors propose an improved dynamic threshold-based IR method comprising three phases: (i) a code detection phase, (ii) a filtering phase, and (iii) a parameter estimation phase. In the first two phases, the received signal is pre-processed to eliminate noise and unselected codes from the code set by employing an adaptive filter and a weight-based code detection method. The noise-free signal is then passed to the final phase, which improves estimation accuracy. The iterative expectation-maximisation algorithm is involved in the estimation of the channel coefficient and timing offset. All possible active paths are detected by setting a dynamic threshold value for each channel using the enhanced dynamic threshold method. For all active channels, the round-trip delay and power level are estimated from the corresponding channel coefficient and timing offset. Extensive simulations of the proposed method show considerable performance improvements.
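A minimal sketch of the code detection idea: correlate the received frame with a ranging code and flag correlation peaks above a data-driven threshold as active paths with their timing offsets. The mean-plus-k-sigma rule below is an illustrative stand-in for the paper's enhanced dynamic threshold method.

```python
import numpy as np

def detect_ranging_code(rx, code, k=4.0):
    """Correlate the received uplink frame with one ranging code and
    apply a dynamic threshold to flag active paths.
    rx: received samples, code: known ranging sequence."""
    corr = np.abs(np.correlate(rx, code, mode='valid'))
    thresh = corr.mean() + k * corr.std()       # data-driven threshold
    offsets = np.flatnonzero(corr > thresh)     # candidate timing offsets
    gains = corr[offsets] / len(code)           # rough path gains (unit-power code)
    return offsets, gains
```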
A maximum-likelihood approach to the blur identification problem is presented. The expectation-maximization algorithm is proposed to optimize the nonlinear likelihood function in an efficient way. In order to improve the performance of the identification algorithm, low-order parametric image and blur models are incorporated into the identification method. The resulting iterative technique simultaneously identifies and restores noisy blurred images.
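As a baseline illustration of EM mechanics in image restoration, here is Richardson-Lucy deconvolution, which is the EM iteration under a Poisson likelihood with a known point-spread function. Note that the method described above goes further and jointly identifies the blur by maximum likelihood; that joint identification step is not shown here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30):
    """Richardson-Lucy deconvolution (the EM iteration for image
    restoration under a Poisson likelihood).
    blurred: float image, psf: known point-spread function."""
    est = np.full_like(blurred, blurred.mean())     # flat initial estimate
    psf_flip = psf[::-1, ::-1]                      # adjoint of the blur
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode='same')   # forward blur
        ratio = blurred / np.maximum(conv, 1e-12)   # data / model
        est *= fftconvolve(ratio, psf_flip, mode='same')  # multiplicative update
    return est
```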