ISBN:
(Print) 0819416274; 9780819416278
Different unsupervised Bayesian classification algorithms can be combined with a multiscale image analysis procedure, leading to improvements in both computation time and classification performance. Two kinds of algorithms are used for the classification itself: (1) local methods operating on a pixel-by-pixel basis and (2) global methods, which require a Markov random field model for the whole class image. Unsupervised Bayesian classification requires two steps: one for the parameter estimation of each local or global model and one for the Bayesian classification itself. A Gaussian density with class-dependent parameters is assumed for the pixels. In a multiscale analysis scheme, the image is decomposed by successive filtering and downsampling, which separates homogeneous areas and edges according to a pyramidal structure. One scale pyramid containing smaller and smaller smoothed images and one wavelet pyramid holding the complementary detail information are built. Unsupervised Bayesian classification is performed at each level of the scale pyramid, from top to bottom, taking into account pixels assumed to be well classified at the previous level. The wavelet pyramid can be used to aid the classification by determining whether a classified pixel belongs to a homogeneous area or not. The homogeneity criterion consists of a variance comparison at each stage followed by a thresholding. A comparison has been made on very noisy synthetic images, which makes it possible to measure the improvements and drawbacks brought by the multiscale analysis in local and global classification.
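The decomposition step described above (successive filtering and downsampling into a scale pyramid plus a complementary detail pyramid) can be sketched as follows; the 2x2 box filter and the difference-based detail are illustrative assumptions, not the paper's actual wavelet filters:

```python
import numpy as np

def build_pyramids(image, levels=3):
    """Decompose an image into a scale pyramid (successively smoothed and
    downsampled images) and a complementary detail pyramid.  A 2x2 box
    filter stands in for the smoothing filter here -- an assumption, since
    the abstract does not specify the wavelet filters used."""
    scale = [np.asarray(image, float)]
    detail = []
    for _ in range(levels):
        cur = scale[-1]
        h, w = cur.shape[0] - cur.shape[0] % 2, cur.shape[1] - cur.shape[1] % 2
        cur = cur[:h, :w]
        # smooth + downsample: average non-overlapping 2x2 blocks
        small = cur.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        # detail: the information the smoothing removed at this level
        detail.append(cur - np.kron(small, np.ones((2, 2))))
        scale.append(small)
    return scale, detail

scale, detail = build_pyramids(np.arange(64.0).reshape(8, 8), levels=2)
print([s.shape for s in scale])   # smaller and smaller smoothed images
```

Each level is exactly recoverable from the next coarser level plus its detail image, which is the "complementary information" the wavelet pyramid carries.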
The purpose of this study is to apply a recently developed wavelet-based de-noising filter to the analysis of human electroencephalogram (EEG) signals and to measure its performance. The data used contained subject EEG responses to two different stimuli using the `odd-ball' paradigm. Electrical signals measured at standard locations on the scalp were processed to detect and identify the Evoked Response Potentials (ERPs). First, electrical artifacts emanating from the eyes were identified and removed. Second, the mean signature for each type of response was extracted and used as a matched filter to define baseline detector performance for the noisy data. Third, a nonlinear filtering procedure based on the wavelet extrema representation was used to de-noise the signals. Overall detection rates for the de-noised signals were then compared to the baseline performance. It was found that while the filtered signals have significantly lower noise than the raw signals, detector performance remains comparable. We therefore conclude that all of the information that is important to matched filter detection is preserved by the filter. The implication is that the wavelet-based filter eliminates much of the noise while retaining the ERPs.
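The baseline detector in step two (a mean-signature matched filter) can be sketched as below; the synthetic "ERP" signature and noise parameters are stand-ins, and the study's artifact removal and wavelet-extrema de-noising are not reproduced:

```python
import numpy as np

def matched_filter_lag(signal, template):
    """Slide a mean-response template over the signal and return the
    offset of maximum normalized correlation -- a sketch of the baseline
    matched-filter detector described in the abstract."""
    t = template - template.mean()
    t /= np.linalg.norm(t) + 1e-12
    m = len(template)
    scores = []
    for k in range(len(signal) - m + 1):
        w = signal[k:k + m] - signal[k:k + m].mean()
        scores.append(np.dot(w, t) / (np.linalg.norm(w) + 1e-12))
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
erp = np.hanning(32)                    # stand-in for a mean ERP signature
x = rng.normal(0.0, 0.2, 200)
x[80:112] += erp                        # embed a response at sample 80
print(matched_filter_lag(x, erp))
```

The normalized correlation peaks where the embedded response begins, which is what "detection rate" is measured against in the comparison above.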
We show how the separable two-dimensional wavelet representation leads naturally to an efficient multiresolution tomographic reconstruction algorithm. This algorithm is similar to the conventional filtered backproject...
Transform coding at low bit rates introduces artifacts associated with the basis functions of the transform. For example, decompressed images based on the DCT (discrete cosine transform), such as JPEG, exhibit blocking artifacts at low bit rates. This paper proposes a post-processing scheme to enhance decompressed images that is potentially applicable in several situations. In particular, the method works remarkably well in `deblocking' DCT-compressed images. The method is non-linear, computationally efficient, and spatially adaptive, and has the distinct feature that it removes artifacts while retaining sharp features in the images. An important implication of this result is that images coded using the JPEG standard can be efficiently post-processed to give significantly improved visual quality.
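A minimal sketch of what "non-linear, spatially adaptive deblocking" can mean: smooth pixel pairs straddling 8x8 block boundaries only when the step across the boundary is small enough to be a coding artifact. The threshold rule is an assumption; the paper's actual filter is not specified in the abstract.

```python
import numpy as np

def deblock(img, block=8, thresh=10.0):
    """Average the pixel pair straddling each block boundary, but only
    where the step across the boundary is small (likely a coding
    artifact) rather than a genuine image edge.  The threshold test is
    an assumption, not the paper's method."""
    out = np.asarray(img, float).copy()
    h, w = out.shape
    for c in range(block, w, block):            # vertical boundaries
        a, b = out[:, c - 1].copy(), out[:, c].copy()
        keep_edge = np.abs(a - b) >= thresh
        mid = (a + b) / 2.0
        out[:, c - 1] = np.where(keep_edge, a, mid)
        out[:, c] = np.where(keep_edge, b, mid)
    for r in range(block, h, block):            # horizontal boundaries
        a, b = out[r - 1, :].copy(), out[r, :].copy()
        keep_edge = np.abs(a - b) >= thresh
        mid = (a + b) / 2.0
        out[r - 1, :] = np.where(keep_edge, a, mid)
        out[r, :] = np.where(keep_edge, b, mid)
    return out
```

Small boundary steps are averaged away while large steps, taken to be real edges, pass through unchanged, which is the "removes artifacts while retaining sharp features" property in miniature.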
High quality image compression algorithms are capable of achieving transmission or storage rates of 0.5 to 1.0 bits/pixel with low degradation in image quality. In order to obtain even lower bit rates, the authors rel...
Wavelets of compact support are an important tool in many areas of signal analysis. A starting point for the construction of such wavelets is the scaling function, the solution of the dilation equation. We study the dilation equation φ(x) = Σ_K c_K φ(2x − K), where K ∈ {0, ..., m}^N, φ: R^N → R, and c_K ∈ R. This paper gives a set of sufficient conditions on the c_K under which the solution of the dilation equation has a specific degree of regularity. We construct φ through infinite products of 2^N associated matrices with entries in terms of the c_K. The conditions for regularity are based on certain sum rules that triangularize all of the associated matrices and on certain inequalities that control the eigenvalues of the matrices. The net effect of the sum rules is to specify the coefficients c_K in terms of a binomial interpolation of their values at the corners of the N-cube, {0, m}^N. The inequalities are based on sums of the coefficients at the corners of the various faces of the N-cube. The number of derivatives that φ possesses and the Hölder exponent of the last derivative can be determined from these sums.
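In one dimension (N = 1), the solution of the dilation equation can be approximated numerically by cascade iteration: start from a discrete delta and repeatedly upsample and convolve with the coefficient mask. The Daubechies D4 coefficients below, which sum to 2 as required for a unit-integral solution, are a standard illustrative choice and are not taken from the paper:

```python
import numpy as np

# Daubechies D4 coefficients c_0..c_3; their sum is 2, as the dilation
# equation requires for a solution of unit integral.
C = np.array([1 + np.sqrt(3), 3 + np.sqrt(3),
              3 - np.sqrt(3), 1 - np.sqrt(3)]) / 4.0

def cascade(c, iters=8):
    """Approximate phi(x) = sum_k c_k phi(2x - k) on a dyadic grid of
    spacing 2**-iters by repeated upsampling and convolution."""
    v = np.array([1.0])                 # start from a discrete delta
    for _ in range(iters):
        up = np.zeros(2 * len(v) - 1)
        up[::2] = v                     # refine the grid by a factor of 2
        v = np.convolve(up, c)
    return v

phi = cascade(C)
dx = 2.0 ** -8
print(abs(phi.sum() * dx - 1.0) < 1e-8)   # phi integrates to 1
```

Each iteration halves the grid spacing, so after a few iterations the samples trace out the compactly supported scaling function whose regularity the paper's sum rules and eigenvalue inequalities control.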
A methodology for synthesizing parallel computational structures has been applied to the Discrete Wavelet Transform (DWT) algorithm. It is based on linear space-time mapping with constraint-driven localization. The data dependence analysis, localization of global variables, and space-time mapping are presented, as well as one realization of a 3-octave systolic array. The DWT algorithm cannot be described by a set of Uniform or Affine Recurrence Equations (UREs, AREs), so it cannot be efficiently mapped onto a regular array directly. However, it is still possible to map the DWT algorithm to a systolic array with local communication links by first applying a non-linear index space transformation. The array derived here has a latency of 3M/2, where M is the input sequence length, and similar area requirements to solutions proposed elsewhere. In the general case of an arbitrary number of octaves, linear space-time mapping leads to inefficient arrays of long latency due to problems associated with multiprojection.
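The computation the systolic array realizes is the octave-recursive DWT (Mallat pyramid): per octave, convolve the running approximation with a low-pass and a high-pass filter and downsample by 2. A plain reference version, with Haar filters assumed purely for illustration:

```python
import numpy as np

def dwt_octaves(x, h, g, octaves=3):
    """Pyramid DWT: at each octave, convolve the running approximation
    with low-pass h and high-pass g, then downsample by 2.  This is the
    computation the systolic array implements in hardware."""
    a = np.asarray(x, float)
    details = []
    for _ in range(octaves):
        details.append(np.convolve(a, g)[1::2])   # detail coefficients
        a = np.convolve(a, h)[1::2]               # coarser approximation
    return a, details

h = np.array([1.0, 1.0]) / np.sqrt(2)   # Haar low-pass (assumption)
g = np.array([1.0, -1.0]) / np.sqrt(2)  # Haar high-pass (assumption)
a, d = dwt_octaves(np.ones(16), h, g, octaves=3)
```

The halving of the data rate per octave is exactly what makes a single linear projection awkward: each octave has a different index space, hence the non-linear index transformation used before mapping.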
The spectral radius of sets of matrices is a fundamental concept in studying the regularity of compactly supported wavelets. Here we review the basic properties of the spectral radius and describe how to increase the efficiency of estimating a lower bound for it. The spectral radius of a set of matrices can be defined by generalizing appropriate definitions of the spectral radius of a single matrix. One definition, referred to as the generalized spectral radius, is constructed as follows. Let Σ be a collection of m square matrices of the same size, and let L_n(Σ) be the set of products of length n of elements of Σ. Define ρ_n(Σ) = max_{A ∈ L_n(Σ)} [ρ(A)]^{1/n}, where ρ(A) is the usual spectral radius of a matrix. Then the generalized spectral radius of Σ is ρ(Σ) = lim sup_{n→∞} ρ_n(Σ). The standard method for estimating ρ(Σ) through ρ_n(Σ) involves m^n matrix calculations, one per element of L_n(Σ). We describe a method which reduces this cost to m^n/n or less.
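The brute-force estimate ρ_n(Σ) that the paper's method accelerates can be written down directly; the function and the diagonal test matrices below are illustrative:

```python
import numpy as np
from functools import reduce
from itertools import product

def rho_n(matrices, n):
    """Compute rho_n(Sigma): the maximum of [rho(A)]**(1/n) over all
    length-n products A of elements of Sigma, by enumerating all m**n
    products -- exactly the m**n cost the paper reduces to m**n/n or less."""
    best = 0.0
    for word in product(matrices, repeat=n):
        A = reduce(np.dot, word)
        best = max(best, max(abs(np.linalg.eigvals(A))))
    return best ** (1.0 / n)

A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 3.0]])
print(rho_n([A, B], 2))   # dominated by the product B @ B
```

Since every ρ_n(Σ) is a lower bound for ρ(Σ), cheapening this enumeration directly improves lower-bound estimation, which is the point of the paper.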
Wavelet analysis is currently being investigated as an image enhancement tool for use in mammography. Although this approach to image processing appears to have great promise, major uncertainties remain regarding an optimal form of wavelet-based algorithms. It is therefore desirable to have a quantitative method for evaluating a wavelet-based image processing algorithm, making it possible to optimize algorithms prior to evaluation with the standard Receiver Operating Characteristic (ROC) method. A mathematical method has been developed in which the input signal is a Gaussian with added random noise. An enhancement factor (EF) is obtained from the input and output signal-to-noise ratios, SNRi and SNRo (EF = SNRo/SNRi). The development and testing of this method are described, and a practical application is given showing the major features of a wavelet-based image processing algorithm based on the Frazier-Jawerth transform.
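The EF figure of merit can be computed once a concrete SNR definition is fixed. The residual-power SNR and the moving-average stand-in filter below are assumptions; the study's actual algorithm uses the Frazier-Jawerth transform, which is not reproduced here.

```python
import numpy as np

def enhancement_factor(clean, noisy, denoise):
    """EF = SNRo / SNRi, with SNR taken as signal power over
    residual-error power (one common convention -- an assumption,
    since the abstract does not give the exact definition)."""
    snr = lambda est: np.sum(clean ** 2) / np.sum((est - clean) ** 2)
    return snr(denoise(noisy)) / snr(noisy)

rng = np.random.default_rng(1)
t = np.linspace(-4.0, 4.0, 256)
clean = np.exp(-t ** 2 / 2.0)            # Gaussian input signal, as above
noisy = clean + rng.normal(0.0, 0.3, t.size)
smooth = lambda x: np.convolve(x, np.ones(9) / 9.0, mode="same")
print(enhancement_factor(clean, noisy, smooth) > 1.0)
```

An EF above 1 means the filter improved the signal-to-noise ratio, giving a single number by which candidate wavelet algorithms can be optimized before a full ROC study.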
ISBN:
(Print) 0819414778
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image that best fits a non-Gaussian Markov random field image model. Estimating the best reconstructed image leads to a convex constrained optimization problem which can be solved iteratively. Experimental results are shown for images compressed using scalar quantization of block DCT coefficients and vector quantization of subband wavelet transform coefficients. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
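The key constraint in the iterative optimization is that any estimate preferred by the image model must stay inside the decoder's quantization cell, so it remains consistent with the transmitted bitstream. For a uniform scalar quantizer with midpoint reconstruction (an assumption; the paper also covers vector quantization), that constraint is a simple clipping projection:

```python
import numpy as np

def project_to_cell(coeffs, reconstruction, step):
    """Project candidate transform coefficients back into the
    quantization cell around the decoder's reconstruction points.
    Uniform scalar quantization with midpoint reconstruction is
    assumed here for illustration."""
    lo = reconstruction - step / 2.0
    hi = reconstruction + step / 2.0
    return np.clip(coeffs, lo, hi)

# candidates adjusted by a smoothness prior, pulled back into cell [3, 5]
print(project_to_cell(np.array([2.0, 4.5, 6.0]), 4.0, 2.0))
```

Alternating a model-fitting step with this projection is one standard way to solve the resulting convex constrained problem iteratively.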