ISBN:
(Print) 0819442682
This paper presents two novel approaches to speckle reduction in SAR images. The first relies on the multiplicative speckle model and consists of MMSE filtering performed in the wavelet domain by means of an adaptive shrinkage of the detail coefficients of an undecimated decomposition. Each coefficient is shrunk by the variance ratio of the noise-free coefficient to the noisy one. All the above quantities are calculated analytically from the speckled image, the noise variance, and the wavelet filters only, without resorting to any model to describe the underlying backscatter. Estimation of the local statistics driving the filter is expedited, and layered processing allows adaptivity to be extended across spatial scales as well. The second approach is not model-based and provides a blind estimation of the backscatter underlying the speckled image, stated as a problem of matching pursuits. The locally adaptive MMSE estimator is obtained as a series expansion over a finite number of "prototype" estimators fitting the spatial features of the different statistical classes encountered, e.g., edges and textures. Such estimators are calculated in a fuzzy fashion through an automatic training procedure. The space-varying coefficients of the expansion are stated as degrees of fuzzy membership of a pixel to each of the estimators. A thorough performance comparison is carried out with the Gamma-MAP filter and with the Rational Laplacian Pyramid (RLP) filter, recently introduced by three of the authors. On simulated speckled images both of the proposed filters gain almost 3 dB in SNR with respect to conventional local-statistics (Lee/Kuan) filtering. Experiments carried out on widespread test SAR images and on a speckled mosaic image, comprising synthetic shapes, textures, and details from true SAR images, demonstrate that the visual quality of the results is excellent in terms of background smoothing and preservation of edge sharpness, textures, and point targets. The absence of decimation in the wave
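The shrinkage rule described in this abstract can be sketched as a local-statistics gain applied in a wavelet detail subband. The window radius, helper names, and edge padding below are illustrative assumptions, not the paper's exact procedure (which derives the variances analytically from the wavelet filters).

```python
import numpy as np

def box_mean(x, r=1):
    """Mean over a (2r+1)x(2r+1) sliding window (edge-padded)."""
    p = np.pad(x, r, mode="edge")
    h, w = x.shape
    out = np.zeros((h, w))
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def mmse_shrink(detail, noise_var, r=3):
    """Shrink each detail coefficient by the estimated ratio of the
    noise-free variance to the noisy variance (an MMSE gain in [0, 1])."""
    local_var = box_mean(detail ** 2, r) - box_mean(detail, r) ** 2
    signal_var = np.maximum(local_var - noise_var, 0.0)   # noise-free part
    gain = signal_var / np.maximum(local_var, 1e-12)      # shrinkage factor
    return gain * detail
```

Because the gain never exceeds one, flat (noise-only) regions are driven toward zero while strong edges, whose local variance dominates the noise variance, pass almost unchanged.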
Classified vector quantization (CVQ) is used for coding images; it achieves good perceptual results while reducing the computational load of the process. In this paper, the image is subdivided into 4×4 pixel blocks (vectors). Each vector is classified as either an edge vector or a shade vector. Both edge vectors and shade vectors are used to design the codebooks of CVQ by the Fuzzy C-Means (FCM) method. By doing so, the CVQ-FCM method can preserve the edges of the image, yield good image quality, and reduce the processing time of codebook construction.
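The codebook-design step can be sketched with a minimal fuzzy c-means over block vectors. The variance-threshold edge/shade split and all parameter values below are assumptions for illustration, not the paper's classifier.

```python
import numpy as np

def fcm(X, k, m=2.0, iters=30, seed=0):
    """Minimal fuzzy c-means: returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)[:, None]            # weighted centroids
        d = np.linalg.norm(X[:, None, :] - C[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)          # membership update
    return C, U

def split_edge_shade(blocks, thresh=100.0):
    """Classify each 16-dim block vector as edge (high variance) or shade."""
    return blocks.var(axis=1) > thresh
```

Running `fcm` separately on the edge vectors and the shade vectors yields the two class codebooks; a block is then encoded by the index of its nearest codeword in the codebook of its class.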
The following topics are dealt with: communication systems; electronic devices; control systems and applications; computer networks; signal processing; educational systems; image processing; artificial intelligence; electrical machines; fault detection and diagnosis; circuits and devices; mobile communications; power systems.
As an effort to improve the video quality of a digital TV (HDTV) system, robust motion adaptive deinterlacing algorithms based on motion decision feedback rules are proposed, which, in principle, utilize the motion decision information of the past and current pictures. With the proposed algorithms, visual artifacts due to failure in detecting a fast repetitive motion embedded in an interlaced video sequence can be significantly reduced. Hence, much more pleasing video quality can be realized for HDTV systems than with typical motion adaptive deinterlacing algorithms.
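The decision-feedback idea can be sketched per pixel as follows; the history window of past motion flags, the threshold, and the two interpolators (weave vs. bob) are generic assumptions, not the paper's exact rules.

```python
def motion_flag(cur, prev, y, x, thresh=8):
    """Simple per-pixel motion detector between same-parity fields."""
    return abs(int(cur[y][x]) - int(prev[y][x])) > thresh

def deinterlace_pixel(weave_val, bob_val, motion_now, past_motion):
    """Select field insertion (weave) only when neither the current nor any
    recent picture flagged motion at this position; otherwise fall back to
    intra-field interpolation (bob) to avoid feathering artifacts."""
    moving = motion_now or any(past_motion)
    return bob_val if moving else weave_val
```

Feeding back `past_motion` is what catches fast repetitive motion: even if the current-frame difference momentarily drops below the threshold, a recent positive decision still forces the safe intra-field path.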
We propose sliding-window multiedge detectors and reflectivity estimators for complex SAR images. The novel detectors and estimators allow additive observation noise and colored signal (speckle) and noise processes to be taken into account; furthermore, they employ an exponential data weighting to improve spatial resolution. In the multiedge case, simulation results demonstrate a substantial performance improvement over existing methods when the speckle is colored and additive noise is present.
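The exponential data weighting can be sketched as a sliding-window reflectivity estimate in which samples are discounted geometrically with distance from the window centre; the decay parameter and this simple weighted-mean formulation are illustrative assumptions rather than the paper's estimator.

```python
import numpy as np

def exp_weighted_reflectivity(intensity, lam=0.9):
    """Weighted mean of intensity samples in a 1-D window, with weights
    lam**|n - centre| decaying away from the centre sample."""
    n = len(intensity)
    w = lam ** np.abs(np.arange(n) - n // 2)   # exponential taper
    return float(np.sum(w * intensity) / np.sum(w))
```

Compared with a flat window of the same length, the taper concentrates the estimate on the central sample, which is why it sharpens the spatial resolution of the detector at the cost of higher estimation variance.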
We present our new low-complexity compression algorithm for lossless coding of video sequences. This new coder produces better compression ratios than lossless compression of individual images by exploiting temporal as well as spatial and spectral redundancy. Key features of the coder are a pixel-neighborhood backward-adaptive temporal predictor, an intra-frame spatial predictor and a differential coding scheme of the spectral components. The residual error is entropy coded by a context-based arithmetic encoder. This new lossless video encoder outperforms state-of-the-art lossless image compression techniques, enabling more efficient video storage and communications.
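A backward-adaptive predictor of this general kind can be sketched as follows: both encoder and decoder score a temporal and a spatial predictor on already-decoded causal neighbors and pick whichever performed better, so no side information is transmitted. The neighbor set and fallback value are illustrative assumptions, not the paper's exact predictor.

```python
def predict(cur, prev, y, x):
    """Choose between temporal (previous frame, same position) and spatial
    (left neighbor) prediction by comparing their errors on causal neighbors."""
    temporal = prev[y][x]
    spatial = cur[y][x - 1] if x > 0 else (cur[y - 1][x] if y > 0 else 128)
    err_t = err_s = 0
    for ny, nx in ((y, x - 1), (y - 1, x), (y - 1, x - 1)):
        if ny >= 0 and nx >= 0:
            # how each predictor would have done on decoded neighbors
            err_t += abs(cur[ny][nx] - prev[ny][nx])
            left = cur[ny][nx - 1] if nx > 0 else cur[ny][nx]
            err_s += abs(cur[ny][nx] - left)
    return spatial if err_s < err_t else temporal
```

The residual `cur[y][x] - predict(cur, prev, y, x)` is what the context-based arithmetic coder would then entropy-code; in static regions it collapses to zero.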
Natural images are characterized by high correlation between their RGB color components. Most representation and compression techniques reduce the redundancies between color components by transforming the color primaries into a decorrelated color space, such as YIQ or YUV. In this paper a different approach to color information analysis is considered. Since the high correlation of color channels implicitly suggests a localized functional relation between the components, it could be used in an alternative framework by approximating subordinate colors as functions of a base color. This way, only a reduced number of parameters is required for coding the color information. Compression results are presented and compared with JPEG, and the parameters that affect the coding quality are studied and discussed. The results show the advantages of the new correlation-based approach over the YIQ/YUV decorrelation techniques as used in JPEG and related applications.
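The "subordinate colors as functions of a base color" idea can be sketched with the simplest such function, an affine fit per block; choosing which channel is the base and using an affine model are assumptions made here for illustration.

```python
import numpy as np

def fit_affine(base, sub):
    """Least-squares fit sub ~ a*base + b over a block; only (a, b) and the
    base channel then need to be coded for this block."""
    A = np.stack([base.ravel(), np.ones(base.size)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, sub.ravel(), rcond=None)
    return a, b

def reconstruct(base, a, b):
    """Rebuild the subordinate channel from the base and two parameters."""
    return a * base + b
```

Where the local inter-channel correlation is high, two scalars per block replace an entire subordinate channel, which is the source of the compression gain over per-channel coding.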
In this paper we present a low-frequency image adaptive watermarking scheme. Using the GenLOT transform for image decomposition in the watermarking scheme, we obtain higher energy compaction in the low-frequency coefficients. As a result, we improve robustness against operations that remove high-frequency components. In order to embed the watermark with minimum loss in image fidelity, a visual mask based on local image characteristics, such as textures and edges, is incorporated in the watermarking algorithm. Experimental results show that the proposed scheme is robust against DCT- and DWT-based compression and common image processing operations.
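A masked additive embedding of this general type can be sketched as below; the embedding strength, the mask, and the correlation detector are generic assumptions, and the GenLOT decomposition itself is not implemented here.

```python
import numpy as np

def embed(coeffs, watermark, mask, alpha=0.05):
    """Add a +/-1 watermark to low-frequency transform coefficients, scaled
    by a perceptual mask so textured/edge areas carry more energy."""
    return coeffs + alpha * mask * watermark

def detect(coeffs, watermark):
    """Normalized correlation between received coefficients and watermark."""
    return float(np.dot(coeffs.ravel(), watermark.ravel()) / watermark.size)
```

Placing the energy in low-frequency coefficients is what survives low-pass attacks such as compression, while the mask keeps the distortion below visibility in smooth regions.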
This study addresses the problem of the objective performance evaluation of image retrieval systems. Considering the color feature, a tool for generating synthetic image databases is proposed. It allows control over the number and location of the dominant colors in the Lab space, as well as over their spatial coherency. It is then possible to sort the images exactly with respect to the color feature. We also propose a new similarity measure to compare images described by the color feature. Based on the assumption of a Gaussian distribution for each dominant color, each image is modeled by the sum of these Gaussian distributions. The similarity measure computes a Kullback distance between two modeled distributions. An objective performance evaluation based on the synthetic database is carried out to compare our image retrieval system (IMALBUM), which uses the new similarity measure, with the MPEG-7 approach. Experiments on the MPEG-7 database are also presented as a subjective evaluation and discussed.
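Assuming the Kullback distance here is the symmetrized Kullback-Leibler divergence, its closed form for two one-dimensional Gaussians (one per dominant color, per channel; the full measure would operate on the three-dimensional Lab mixtures) can be sketched as:

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """KL divergence KL(N(m1, s1^2) || N(m2, s2^2)), closed form."""
    return math.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5

def sym_kl(m1, s1, m2, s2):
    """Symmetrized Kullback distance between two Gaussians."""
    return kl_gauss(m1, s1, m2, s2) + kl_gauss(m2, s2, m1, s1)
```

Symmetrizing makes the measure order-independent, which is what a similarity ranking between a query image and database images requires.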
Unsupervised learning algorithms essentially transform input examples into neural representations aiming to reveal interesting aspects of data. Such useful representations may have different properties emphasizing distinct characteristics of the data considered. Despite their uniqueness, these algorithms share common ties. Our perspective on unsupervised learning is that of Helmholtz's approach to vision, considering learning as a minimization problem solved in the presence of a generative model inverting the process of creating representations. We elucidate this point of view by comparing and reviewing three models performing representational learning.