ISBN (Print): 0819455008
We aim to use EEG source localization in the framework of a Brain Computer Interface project. We propose here a new reconstruction procedure, targeting source (or, equivalently, mental task) differentiation. EEG data can be thought of as a collection of time-continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources throughout the brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its location and orientation. To overcome these drawbacks we propose a novel two-step inversion procedure based on a two-scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only to the retained regions and makes use of a fine discretization of the space, aiming at detailing the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum-energy and directional-consistency constraints.
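The abstract does not name the sparse approximation algorithm used in the coarse first step; as a minimal sketch, assuming a greedy, orthogonal-matching-pursuit-style selection over a hypothetical coarse lead-field (gain) matrix, the active-region identification could look like this:

```python
import numpy as np

def select_active_regions(leadfield, eeg, n_regions):
    """Greedy (OMP-style) selection of coarse source regions.

    leadfield : (n_electrodes, n_coarse_sources) gain matrix
    eeg       : (n_electrodes,) measured potentials at one time sample
    Returns the indices of the retained coarse regions.
    """
    residual = eeg.astype(float).copy()
    selected = []
    for _ in range(n_regions):
        # Pick the column most correlated with the current residual.
        scores = np.abs(leadfield.T @ residual)
        if selected:
            scores[selected] = -np.inf      # do not reselect a region
        selected.append(int(np.argmax(scores)))
        # Re-fit amplitudes on the selected columns and update the residual.
        A = leadfield[:, selected]
        amps, *_ = np.linalg.lstsq(A, eeg, rcond=None)
        residual = eeg - A @ amps
    return selected
```

The fine-scale second step would then restrict the forward model to sources inside the retained regions before running the stochastic estimator.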
ISBN (Print): 9781424489022
In this paper the performance of two wideband synthetic aperture radar (SAR) imaging methods from incomplete data sets is compared quantitatively and qualitatively. The first approach uses nonuniform fast Fourier transform (NUFFT) SAR to form images from nonuniform spatial and frequency data points. The second approach benefits from the emerging compressed sensing (CS) methodology to recover raw data from undersampled measurements. The results of our experimental tests show that CS has better performance in terms of error and image contrast, while NUFFT SAR has lower computational complexity.
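The abstract does not state which CS solver was used; as a minimal sketch, assuming the reflectivity is sparse in the reconstruction basis and the undersampled acquisition is modelled by a generic real-valued measurement matrix `A` (both assumptions ours), a basic iterative soft-thresholding (ISTA) recovery is:

```python
import numpy as np

def ista_recover(A, y, lam=0.1, n_iter=200):
    """Recover a sparse vector x from undersampled data y ≈ A x by iterative
    soft-thresholding (a basic l1-regularized / compressed-sensing solver).

    A : (m, n) measurement matrix with m < n (real-valued assumed here)
    y : (m,) measured samples
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the data-fit term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x
```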
ISBN (Print): 0819445592
We postulate that under anisoplanatic imaging conditions, involving imaging through turbulent media over a wide area, spatial frequency content that is normally lost outside the aperture of an imaging instrument under unperturbed viewing conditions can be aliased into the aperture. Simulation is presented that reinforces this premise. We apply restoration algorithms designed to correct non-uniform distortions to a real image sequence and observe the de-aliased super-frequency content. We claim this to be super-resolution, and that it is only possible under anisoplanatic imaging scenarios, where the point spread function of the image is position-dependent as a result of the atmospheric turbulence.
Background: Fingerprint biometrics play an essential role in authentication. It remains a challenge to match fingerprints with minutiae or ridges missing, and many fingerprints fail to match their targets due to this incompleteness. Results: In this work, we modeled fingerprints with Bezier curves and proposed a novel algorithm to detect and restore fragmented ridges in incomplete fingerprints. In the proposed model, the Bezier curves' control points represent the fingerprint fragments, reducing the data size by 89% compared to image representations. The representation is lossless, as the restoration from the control points fully recovers the image. Our algorithm can effectively restore incomplete fingerprints. On the SFinGe synthetic dataset, the fingerprint image matching score increased by an average of 39.54%, the EER (equal error rate) is 4.59%, and the FMR1000 (false match rate) is 2.83%; these are lower than the 6.56% (EER) and 5.93% (FMR1000) before restoration. On the FVC2004 DB1 real fingerprint dataset, the average matching score increased by 13.22%; the EER reduced from 8.46% before restoration to 7.23%, and the FMR1000 reduced from 20.58% to 18.01%. Moreover, we assessed the proposed algorithm against FDP-M-net and U-finger, both convolutional neural network models, on the SFinGe synthetic dataset. The results show that the average match-score improvement ratio is 1.39% for FDP-M-net and 14.62% for U-finger, both of which are lower than the 39.54% yielded by our algorithm. Conclusions: Experimental results show that the proposed algorithm can successfully repair and reconstruct ridges in single or multiple damaged regions of incomplete fingerprint images, and hence improve the accuracy of fingerprint matching.
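The abstract does not specify the degree of the Bezier segments or the fitting procedure; as a small sketch of the representation itself, assuming cubic segments, a ridge fragment stored as control points can be rendered back to pixel coordinates by de Casteljau evaluation:

```python
import numpy as np

def bezier_ridge(control_points, n_samples=100):
    """Evaluate a Bezier ridge fragment from its control points
    (de Casteljau's algorithm), returning sampled (x, y) coordinates.

    control_points : (k, 2) control points of one ridge fragment
    """
    pts = np.asarray(control_points, dtype=float)
    t = np.linspace(0.0, 1.0, n_samples)
    curve = np.empty((n_samples, 2))
    for i, ti in enumerate(t):
        p = pts.copy()
        # Repeated linear interpolation between consecutive control points.
        while len(p) > 1:
            p = (1.0 - ti) * p[:-1] + ti * p[1:]
        curve[i] = p[0]
    return curve

# A cubic fragment is stored as just four control points instead of pixels.
ridge_xy = bezier_ridge([(0, 0), (10, 25), (30, 25), (40, 0)])
```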
ISBN (Print): 0819455008
The possibility of obtaining spatial frequency information normally excluded by an aperture has been surmised, experimentally obtained in the laboratory, and observed in processed real-world imagery. This opportunity arises through the intervention of a turbulent mass between the stationary wide-area object of interest and the short-exposure imaging instrument, but the frequency information is aliased and must be de-aliased to render it useful. We present evidence of super-resolution in real-world surveillance imagery that is processed by hierarchical registration algorithms. These algorithms have been enhanced over those we previously reported. We discuss these enhancements and give examples of the use of the algorithm to gain information about the turbulence. To further reinforce the presence of super-resolution we present two methods for creating imagery warped by Kolmogorov turbulent phase screens, so that the results can be confirmed against true images.
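The two phase-screen generation methods are not detailed in the abstract; as an illustrative sketch (not necessarily either of the paper's methods), a common FFT-based generator filters complex white noise by the square root of the Kolmogorov phase power spectrum:

```python
import numpy as np

def kolmogorov_phase_screen(n=256, r0=0.1, dx=0.01, seed=None):
    """FFT-based Kolmogorov phase screen for warping test imagery.

    n  : grid size in pixels
    r0 : Fried parameter (m)
    dx : grid spacing (m)
    Note: normalisation conventions vary between implementations; this sketch
    aims at the correct -11/3 spectral shape rather than calibrated units.
    """
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.hypot(fxx, fyy)
    f[0, 0] = 1.0                              # avoid division by zero at DC
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    psd[0, 0] = 0.0                            # remove the piston term
    df = 1.0 / (n * dx)
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    screen = np.fft.ifft2(noise * np.sqrt(psd) * df) * n * n
    return screen.real
```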
ISBN (Print): 9780819482969
When imaging through the atmosphere, the resulting image contains not only the desired scene, but also the adverse effects of all the turbulent air mass between the camera and the scene. These effects are viewed as a combination of non-uniform blurring and random shifting of each point in the received short-exposure image. Corrections for both aspects of this combined distortion have been tackled reasonably successfully by previous efforts. We presented in an earlier paper a more robust method of restoring the geometry by redefining the place of the prototype frame and by reducing the adverse effect of averaging in the processing sequence. We present here a variant of this method using a Minimum Sum of Squared Differences (MSSD) cross-correlation registration algorithm implemented on a Graphics Processing Unit (GPU). The raw speed-up achieved using GPU code is on the order of 1000x. Two orders of magnitude speed-up on the complete algorithm will allow for better fine-tuning of this method and for experimentation with various registration algorithms.
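The registration criterion itself is simple; a plain NumPy sketch of MSSD shift estimation is given below (the paper's contribution is the GPU implementation of this search, which is not reproduced here; the tile size and search radius are illustrative):

```python
import numpy as np

def mssd_shift(reference, patch, search=8):
    """Estimate the (dy, dx) shift of `patch` within `reference` by brute-force
    Minimum Sum of Squared Differences over a small search window.

    reference : patch-sized region padded by `search` pixels on every side,
                i.e. shape (h + 2*search, w + 2*search)
    patch     : (h, w) tile from the frame being registered
    """
    h, w = patch.shape
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            window = reference[search + dy: search + dy + h,
                               search + dx: search + dx + w]
            ssd = np.sum((window - patch) ** 2)
            if ssd < best:                     # keep the lowest SSD so far
                best, best_shift = ssd, (dy, dx)
    return best_shift
```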
ISBN (Print): 0819437689
A likelihood-based approach to density modification for macromolecular crystallography is presented. The approach can be applied in many cases where some information is available about the electron density at various points in the unit cell. The most important aspect of the method consists of likelihood functions that represent the probability that a particular value of electron density is consistent with prior expectations for the electron density at that point in the unit cell. Such likelihood functions are combined with likelihood functions based on experiment and with any prior knowledge about structure factors to form a combined likelihood function for each structure factor. An approach for maximizing the combined likelihood function that is simple and rapid is developed. The density modification approach is applied to real and model data and is shown to result in substantially greater improvement in map quality than existing methods.
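In schematic form (notation ours, not the paper's), the per-structure-factor combination described above, and the maximization it enables, can be written as:

```latex
% LL_map: likelihood that the density implied by F_h matches the prior
%         expectations about electron density in the unit cell,
% LL_exp: experimental likelihood, LL_prior: prior structure-factor knowledge.
\begin{equation*}
  \log L(\mathbf{F}_h)
    = \log L_{\mathrm{map}}(\mathbf{F}_h)
    + \log L_{\mathrm{exp}}(\mathbf{F}_h)
    + \log L_{\mathrm{prior}}(\mathbf{F}_h),
  \qquad
  \mathbf{F}_h^{\mathrm{new}} = \arg\max_{\mathbf{F}_h} \log L(\mathbf{F}_h).
\end{equation*}
```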
ISBN (Print): 9780819492173
The maximum sub-array algorithm has been implemented within a field-programmable gate array as an efficient centroiding method for wavefront slope estimation. However, a convenient platform for this work is a graphics processing unit (GPU). Translation of the maximum subarray algorithm to a GPU has been performed and shows significant performance gains compared to a single-core CPU. Recently, this algorithm has been applied to radio telescope images acquired for the Australian Square Kilometre Array Pathfinder project. This paper provides an overview of the maximum subarray algorithm and shows how it can be utilized for optical and radio telescope applications.
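For reference, a plain single-threaded sketch of the 2-D maximum subarray (Kadane's algorithm run over every pair of rows) is shown below; the FPGA and GPU versions parallelize this search, and the input is assumed to be background-subtracted so that a bounded bright region yields a well-defined maximal rectangle:

```python
import numpy as np

def max_subarray_2d(image):
    """Find the axis-aligned rectangle with the largest pixel sum.

    image : 2-D array, background-subtracted (contains negative values)
    Returns (best_sum, (top, left, bottom, right)) with inclusive bounds.
    Complexity is O(rows^2 * cols) using Kadane's 1-D scan per row pair.
    """
    rows, cols = image.shape
    best_sum, best_rect = -np.inf, (0, 0, 0, 0)
    for top in range(rows):
        col_sums = np.zeros(cols)
        for bottom in range(top, rows):
            col_sums += image[bottom]          # column sums between the two rows
            cur, left = 0.0, 0                 # Kadane's scan over the collapsed row
            for right in range(cols):
                cur += col_sums[right]
                if cur > best_sum:
                    best_sum, best_rect = cur, (top, left, bottom, right)
                if cur < 0:                    # a negative prefix never helps; restart
                    cur, left = 0.0, right + 1
    return best_sum, best_rect
```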
The problem of tomographic image reconstruction from limited-view-angle projection data is dealt with. Based on the Helgason-Ludwig consistency condition, a procedure for completion of incomplete projection data is introduced to improve the reconstructed image quality. Using a regularization method, an algorithm robust to the presence of noise is presented. The performance of the algorithm was confirmed in numerical simulation. The reconstructed images were quantitatively improved by this procedure with a realistic amount of computation.
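For context, the Helgason-Ludwig consistency condition on parallel-beam projection data p(s, θ) states that its moments are low-order trigonometric polynomials in the view angle, which is what allows missing views to be constrained by measured ones (notation ours):

```latex
\begin{equation*}
  m_n(\theta) = \int_{-\infty}^{\infty} s^{\,n}\, p(s,\theta)\,\mathrm{d}s
  = \sum_{\substack{|k| \le n \\ n-k\ \mathrm{even}}} c_{n,k}\, e^{i k \theta},
  \qquad n = 0, 1, 2, \ldots
\end{equation*}
% Moments estimated over the measured angular range fix the coefficients
% c_{n,k}, which in turn constrain (with regularization against noise)
% the completion of p(s,\theta) over the unmeasured angles.
```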
ISBN (Print): 0819437689
Over the last ten years progress has been made in developing inverse scattering algorithms that go beyond the range of applicability of the first Born and Rytov approximations. Our efforts have focussed on a nonlinear filtering technique which appears at first glance to be straightforward to implement and to offer the possibility of recovering strongly scattering structures. Upon applying the method to various simulated and real data sets, its performance has been inconsistent. In this paper we discuss the various numerical concerns that have arisen from executing a nonlinear filter on limited, sampled, noisy data and clarify the potential advantages and limitations of this approach.