ISBN (print): 0780370414
This paper illustrates the use of "averaging" to improve the convergence rate of adaptive sign regressor and sign error multiuser detectors. The concept of averaging was introduced by Polyak in 1990. This paper analyses the performance of averaging in the sign error and sign regressor adaptive blind multiuser detection algorithms in DS/CDMA systems.
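As a rough illustration of the averaging idea (not the paper's exact algorithm), the sketch below runs a sign-error LMS update in a toy training-mode setting and keeps a running Polyak-Ruppert average of the iterates. The signature, step size, and noise level are arbitrary assumptions, and the paper's detectors are blind rather than trained.

```python
# Illustrative sketch only: sign-error LMS adaptation of a linear detector
# with Polyak-Ruppert iterate averaging. All data and parameters below are
# assumptions (toy stand-ins), not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
N, T, mu = 16, 20000, 2e-3
s = rng.standard_normal(N); s /= np.linalg.norm(s)   # desired user's signature (assumed)
w = np.zeros(N)                                      # adaptive detector weights
w_bar = np.zeros(N)                                  # Polyak average of the iterates

for k in range(1, T + 1):
    d = rng.choice([-1.0, 1.0])                      # desired user's symbol
    x = d * s + 0.5 * rng.standard_normal(N)         # received vector (toy model)
    e = d - w @ x                                    # output error
    w = w + mu * np.sign(e) * x                      # sign-error LMS update
    # sign-regressor variant: w = w + mu * e * np.sign(x)
    w_bar += (w - w_bar) / k                         # running mean of w_1 .. w_k

# w_bar is the averaged detector; averaging the noisy iterates smooths the
# trajectory of w and typically improves mean-square convergence.
```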
Iterated transformation theory (ITT) coding, also known as fractal coding, in its original form allows fast decoding but suffers from long encoding times. During the encoding step, a large number of block best-matching searches have to be performed, which leads to a computationally expensive process. Because of that, most of the research effort in this field has focused on speeding up the encoding algorithm. Many different methods and algorithms have been proposed, from simple classification methods to multi-dimensional nearest-key search. We present in this paper a new method that significantly reduces the computational load of ITT-based image coding. Both domain and range blocks of the image are transformed into the frequency domain (which has proven to be more appropriate for ITT coding). Domain blocks are then used to train a two-dimensional Kohonen neural network (KNN), forming a codebook similar to that of vector quantization coding. The property of the KNN (and of self-organizing feature maps in general) of preserving the topology of the input space (transformed domain blocks) allows a neighbourhood search to be performed to find the piecewise transformation between domain and range blocks. (C) 2001 Elsevier Science B.V. All rights reserved.
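The following is a minimal sketch of the general scheme described (DCT-domain domain blocks organized by a small Kohonen map so that range-block matching can be confined to a map neighbourhood). The grid size, learning schedule, and feature choice are assumptions, not the authors' settings.

```python
# Sketch: organize DCT-domain domain blocks on a 2-D Kohonen map (SOM) so
# that range-block matching can search a map neighbourhood instead of the
# full domain pool. Parameters are assumptions, not the authors' values.
import numpy as np
from scipy.fft import dctn

def dct_feature(block):
    """Flatten the 2-D DCT of an image block into a feature vector."""
    return dctn(block, norm="ortho").ravel()

def train_som(features, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 2-D Kohonen map on domain-block feature vectors."""
    rng = np.random.default_rng(seed)
    H, W = grid
    weights = rng.standard_normal((H, W, features.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"), -1)
    for t in range(iters):
        x = features[rng.integers(len(features))]
        dist = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dist), dist.shape)    # best-matching unit
        lr = lr0 * (1 - t / iters)
        sigma = sigma0 * (1 - t / iters) + 1e-3
        h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, -1) / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)           # pull neighbourhood toward x
    return weights

# Matching idea: map each range block's DCT feature to its BMU and compare it
# only against domain blocks that fell on nearby map units, exploiting the
# SOM's topology preservation.
```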
ISBN (print): 0780370414
A new signal processing method is developed for solving the multi-line fitting problem in a two-dimensional image. We first reformulate the problem in a special parameter-estimation framework such that a first-order or a second-order polynomial phase signal structure is obtained. Then, the recently developed algorithms in that formalism (and particularly the downsampling technique for high-resolution frequency estimation) can be exploited to produce accurate estimates for the line parameters. This method is able to estimate the parameters of parallel lines with different offsets and handles the quantization noise effect, which cannot be done by the sensor array processing technique introduced by Aghajan et al. Simulation results are presented to demonstrate the usefulness of the proposed method.
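A toy sketch of the reformulation idea: a line in a binary image is turned into a (roughly) constant-frequency signal along the row index, whose frequency encodes the slope. A plain FFT peak is used as the frequency estimator here instead of the paper's high-resolution/downsampling estimator, and the constant `mu` is an arbitrary assumption.

```python
# Toy sketch: a line x = a*y + b in a binary image becomes the signal
# z[y] ~ exp(-1j*mu*(a*y + b)), whose frequency along y is mu*a. A plain
# zero-padded FFT peak estimates the slope; mu is an assumed constant.
import numpy as np

H, W, mu = 128, 128, 0.1
img = np.zeros((H, W))
a, b = 0.5, 20.0                                     # true line: x = a*y + b
ys = np.arange(H)
img[ys, np.clip(np.round(a * ys + b).astype(int), 0, W - 1)] = 1.0

x = np.arange(W)
z = img @ np.exp(-1j * mu * x)                       # per-row measurement z[y]
spec = np.abs(np.fft.fft(z, 4096))
omegas = np.fft.fftfreq(4096) * 2 * np.pi            # angular frequency per row
a_hat = -omegas[np.argmax(spec)] / mu                # slope estimate
print(f"true slope {a}, estimated slope {a_hat:.3f}")
```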
ISBN (print): 0780370414
Signal processing algorithms and architectures can use dynamic reconfiguration to exploit variations in signal statistics, with the objectives of improved performance and reduced power consumption. Parameters provide a simple and formal way to characterize incremental changes to a computation and its computing mechanism. This paper examines five parameterized computations that are typically implemented in hardware for a wireless multimedia terminal: 1) motion estimation, 2) discrete cosine transform, 3) Lempel-Ziv lossless compression, 4) 3D graphics light rendering, and 5) Viterbi decoding. Each computation is examined for the capability of dynamically adapting the algorithm and architecture parameters to variations in its respective input signals. Dynamically reconfigurable low-power implementations of each computation are currently underway.
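As one concrete example of such a parameterization (an assumption-laden sketch, not the paper's implementation), the block matcher below exposes its search range R as a parameter and adapts it to the motion observed in the previous frame, trading operations for accuracy on low-motion input.

```python
# Sketch: full-search block matching with a search-range parameter R that is
# adapted frame to frame. Thresholds and frame data are assumptions.
import numpy as np

def block_match(prev, cur, bx, by, B=8, R=8):
    """Full-search SAD block matching; returns (dy, dx, cost)."""
    H, W = prev.shape
    blk = cur[by:by + B, bx:bx + B]
    best = (0, 0, np.inf)
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= H - B and 0 <= x <= W - B:
                cost = np.abs(prev[y:y + B, x:x + B] - blk).sum()
                if cost < best[2]:
                    best = (dy, dx, cost)
    return best

def adapt_range(prev_vectors, r_min=2, r_max=16):
    """Shrink or grow the search range based on the last frame's peak motion."""
    peak = max((max(abs(dy), abs(dx)) for dy, dx, _ in prev_vectors), default=r_max)
    return int(np.clip(2 * peak, r_min, r_max))
```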
ISBN (print): 0819440752
The aim of this investigation was to develop data fusion algorithms for aerial and satellite images taken in different seasons, from different viewpoints, or formed by different kinds of sensors (visible, IR, SAR). This task cannot be solved using traditional correlation-based approaches, so we chose structural juxtaposition of the stable characteristic details of the pictures as the general technique for image matching and fusion. Structural matching has usually been applied in expert systems, where fairly reliable results were obtained with target-specific algorithms. In contrast to such classifiers, our algorithm deals with aerial and satellite photographs of arbitrary content, for which application-specific algorithms cannot be used. To handle arbitrary images we chose a structural description alphabet based on simple contour components: arcs, angles, segments of straight lines, and line branchings. This alphabet is applicable to arbitrary images, and its elements, due to their simplicity, are stable under different image transformations and distortions. To distinguish between similar simple elements in the huge sets of image contours, we applied hierarchical contour descriptions: we grouped the contour elements belonging to uninterrupted lines or to separate image regions. Different types of structural matching were applied: one based on simulated annealing and one based on a restricted examination of all hypotheses. The matching results were reliable for both multi-season and multi-sensor images.
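A schematic sketch of the simulated-annealing flavour of structural matching described, under many simplifying assumptions: contour primitives are represented only by a type label and a position, and the cost rewards type agreement and a consistent translational offset. The hierarchical descriptions used in the actual algorithm are not modelled.

```python
# Schematic sketch of annealing over a correspondence between two sets of
# contour primitives; feature format, cost weights and schedule are assumptions.
import math, random

def match_cost(assign, feats_a, feats_b):
    """Lower is better: penalize type mismatches and inconsistent offsets."""
    cost, offsets = 0.0, []
    for i, j in assign.items():
        a, b = feats_a[i], feats_b[j]
        cost += 0.0 if a["type"] == b["type"] else 5.0
        offsets.append((b["x"] - a["x"], b["y"] - a["y"]))
    if offsets:                                   # matched pairs should share one shift
        mx = sum(o[0] for o in offsets) / len(offsets)
        my = sum(o[1] for o in offsets) / len(offsets)
        cost += sum((ox - mx) ** 2 + (oy - my) ** 2 for ox, oy in offsets) ** 0.5
    return cost

def anneal(feats_a, feats_b, iters=5000, t0=10.0, seed=0):
    """Simulated annealing over candidate assignments from feats_a to feats_b."""
    rng = random.Random(seed)
    assign = {i: rng.randrange(len(feats_b)) for i in range(len(feats_a))}
    cost = match_cost(assign, feats_a, feats_b)
    for k in range(iters):
        t = t0 * (1 - k / iters) + 1e-6
        i = rng.randrange(len(feats_a))
        trial = dict(assign); trial[i] = rng.randrange(len(feats_b))
        c = match_cost(trial, feats_a, feats_b)
        if c < cost or rng.random() < math.exp((cost - c) / t):
            assign, cost = trial, c               # accept better or, sometimes, worse moves
    return assign, cost
```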
ISBN (print): 0819441856
In this work we propose a new wavelet-transform-based speckle denoising algorithm for SAR images. The algorithm explicitly accounts for the signal-dependent nature of the noise by studying the variances of the detail wavelet coefficients. It uses the analysis of variance (ANOVA) technique to check whether the variances are due to means belonging to the same population or not. If neighboring variances indicate that they belong to the same population, the region is smooth and the coefficient should be smoothed. If neighboring variances indicate the presence of two different populations, the coefficient is due to an image feature and should be preserved. This approach provides the flexibility of adjusting to the region intensity level, so no fixed threshold is needed. The algorithm takes advantage of the fact that the wavelet transform creates three detail sub-images and one coarse sub-image; each detail sub-image is associated with frequency content due to a certain edge location and orientation. The algorithm also considers using cross-information from all three detail sub-images to decide whether coefficients are due to a feature and should be preserved, or are due to noise and should be smoothed. Simulations show that our algorithm provides better performance than currently existing techniques in terms of PSNR, ENL, and visual quality.
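One possible reading of the decision rule, as a hedged sketch: variances of neighbouring detail-coefficient windows are compared with an F-test, coefficients in windows that look like one population are shrunk, and the rest are preserved. The one-level Haar detail band, window size, and significance level are assumptions, not values from the paper.

```python
# Sketch of the variance-comparison rule: F-test between neighbouring windows
# of detail coefficients; "same population" windows are treated as smooth.
# Haar details, window size and alpha are assumptions for illustration.
import numpy as np
from scipy.stats import f as f_dist

def haar_details(img):
    """One-level horizontal Haar detail subband of an even-width image."""
    img = img[:, : img.shape[1] // 2 * 2]
    return (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)

def anova_smooth(detail, win=4, alpha=0.05):
    out = detail.copy()
    H, W = detail.shape
    for i in range(0, H - win, win):
        for j in range(0, W - 2 * win, win):
            a = detail[i:i + win, j:j + win].ravel()
            b = detail[i:i + win, j + win:j + 2 * win].ravel()
            F = (np.var(a, ddof=1) + 1e-12) / (np.var(b, ddof=1) + 1e-12)
            n = a.size - 1
            lo, hi = f_dist.ppf(alpha / 2, n, n), f_dist.ppf(1 - alpha / 2, n, n)
            if lo < F < hi:                        # same population: smooth region
                out[i:i + win, j:j + win] = 0.0    # crude stand-in for smoothing
    return out
```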
ISBN (print): 0780370414
We develop algorithms for computing block-recursive Zak transforms and Weyl-Heisenberg expansions, which achieve multiplicative complexity reductions of p/log L and (log M + p)/(log N + log L + 1), respectively, over direct computations, where p' = pM and N - p' is the number of overlapping samples in subsequent signal segments. For each transform we offer a choice of two algorithms, based on two different implementations of the Zak transform of the time-evolving signal. These two algorithm classes exhibit typical trade-offs between computational complexity and memory requirements.
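For reference, a minimal non-recursive discrete Zak transform computed with the FFT is sketched below; it does not reproduce the paper's block-recursive algorithms or their complexity gains, and the segment/period notation is an assumption consistent with the usual definition.

```python
# Sketch: plain discrete Zak transform of a length L = N*M signal via the FFT,
#   Z[n, k] = sum_m x[n + m*N] * exp(-2j*pi*m*k/M).
# Not the paper's block-recursive algorithm; notation is an assumption.
import numpy as np

def zak_transform(x, N):
    """Return the N x M Zak-domain array Z[n, k] of a length N*M signal."""
    L = len(x)
    assert L % N == 0
    M = L // N
    X = x.reshape(M, N)               # row m holds samples x[m*N : (m+1)*N]
    return np.fft.fft(X, axis=0).T    # FFT over m; result indexed [n, k]

x = np.random.default_rng(0).standard_normal(64)
Z = zak_transform(x, N=8)             # 8 x 8 Zak-domain array
```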
ISBN (print): 0780370414
A system for selecting a single best-view image chip from an IR video sequence and compressing the chip for transmission is presented. Moving object detection is performed using the algorithm described in [1]. Eigenspace classification has been implemented for best-view selection. Fast algorithms for image chip compression have been developed in the wavelet domain by combining a non-iterative zerotree coding method with 2D-DPCM for both the low- and high-frequency subbands, and have been compared against existing schemes.
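A small sketch of the 2D-DPCM component as it is commonly formulated (each subband coefficient predicted from its causal neighbours, with only the quantized residual kept); the predictor weights and quantizer step are assumptions, and the zerotree stage is not shown.

```python
# Sketch of 2-D DPCM on a wavelet subband: causal left/top prediction plus a
# uniformly quantized residual. Predictor weights and step are assumptions.
import numpy as np

def dpcm2d_encode(band, step=4.0):
    band = band.astype(float)
    H, W = band.shape
    recon = np.zeros_like(band)                   # decoder-side reconstruction
    residuals = np.zeros_like(band)               # symbols to entropy-code
    for i in range(H):
        for j in range(W):
            left = recon[i, j - 1] if j > 0 else 0.0
            top = recon[i - 1, j] if i > 0 else 0.0
            pred = 0.5 * (left + top)             # causal 2-D predictor
            q = np.round((band[i, j] - pred) / step)
            residuals[i, j] = q
            recon[i, j] = pred + q * step
    return residuals, recon
```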
ISBN (print): 0819441856
Tuberculosis (TB) and other mycobacterioses are serious illnesses whose control is mainly based on presumptive diagnosis. Besides clinical suspicion, the diagnosis of mycobacteriosis must be made through genus-specific smears of clinical specimens. However, these techniques lack sensitivity, and consequently clinicians must wait as long as two months for culture results. Computer analysis of digital images of these smears could improve the sensitivity of the test and, moreover, decrease the workload of the mycobacteriologist. Segmentation of bacteria of particular species entails a complex process. Bacterial shape is not enough as a discriminant feature, because many species share the same shape. Therefore the segmentation procedure needs to be improved using the color information of the image. In this paper we present two segmentation procedures, based on fuzzy rules and on phase-only correlation techniques respectively, that will provide the basis of a future automatic particle screening.
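As a toy illustration of a fuzzy-rule colour segmentation of the kind mentioned (not the paper's rules), the sketch below combines trapezoidal memberships on hue and saturation with a fuzzy AND; the reddish hue range and all thresholds are assumptions made purely for illustration.

```python
# Toy sketch of fuzzy-rule colour segmentation: trapezoidal memberships on
# hue and saturation combined with a fuzzy AND (min). The hue range and all
# thresholds are assumptions, not the paper's rules.
import numpy as np
from matplotlib.colors import rgb_to_hsv

def trap(x, a, b, c, d):
    """Trapezoidal membership function with corners a <= b <= c <= d."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (d - x) / (d - c + 1e-9)), 0.0, 1.0)

def segment(rgb, thr=0.5):
    hsv = rgb_to_hsv(rgb.astype(float) / 255.0)
    h, s = hsv[..., 0], hsv[..., 1]
    red_low = trap(h, -0.05, 0.0, 0.05, 0.10)     # hue near 0 (red, wraps at 1.0)
    red_high = trap(h, 0.90, 0.95, 1.0, 1.05)
    hue_red = np.maximum(red_low, red_high)       # fuzzy OR over the hue wrap
    saturated = trap(s, 0.2, 0.4, 1.0, 1.1)
    membership = np.minimum(hue_red, saturated)   # fuzzy AND: "red AND saturated"
    return membership > thr                       # crisp mask after defuzzification
```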
ISBN (print): 0780370414
A major drawback of block-based still image or video compression methods at low rates is the visible block boundaries, also known as blocking artifacts. Several methods have been proposed in the literature to reduce these artifacts for video sequences. However, most are simply adaptations of still-image blocking artifact reduction methods, which do not exploit temporal information. In this paper, we propose a novel multi-frame blocking artifact reduction method that incorporates temporal information effectively. This method uses the spatial correlations that exist between successive frames to define constraint sets at multiple frames and provides a Projections Onto Convex Sets (POCS) solution. The proposed method operates solely on transform-domain (DCT) data, and hence provides a solution that is compatible with the observed video. It does not need to make any spatial smoothness assumptions, which are typical of blocking artifact reduction algorithms for still images.
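A compact sketch of the quantization-constraint projection commonly used in DCT-domain POCS deblocking, which is what keeps the estimate compatible with the observed data; the smoothing projection and the paper's multi-frame constraint sets are not shown, and the uniform quantizer model is an assumption.

```python
# Sketch of the quantization-constraint projection in POCS deblocking: each
# estimated DCT coefficient is clipped back into the quantization cell
# implied by the received level. Uniform quantizer model is an assumption.
import numpy as np

def project_quantization(dct_est, received_levels, qstep):
    """Clip estimated DCT coefficients into their quantization intervals."""
    lo = (received_levels - 0.5) * qstep
    hi = (received_levels + 0.5) * qstep
    return np.clip(dct_est, lo, hi)

# Typical POCS loop (schematic):
#   x = decoded frame
#   repeat: x     = smooth_across_block_boundaries(x)
#           x_dct = blockwise_dct(x)
#           x_dct = project_quantization(x_dct, received_levels, qstep)
#           x     = blockwise_idct(x_dct)
```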