ISBN: (Print) 0819422355
Lossy compression techniques provide far greater compression ratios than lossless ones and are therefore usually preferred in image processing applications. However, as more and more applications of digital image processing have to combine image compression with highly automated image analysis, it becomes critically important to study the interrelations between image compression and feature extraction. In this contribution we present a clear and systematic comparison of contemporary general-purpose lossy image compression techniques with respect to fundamental features, namely lines and edges detected in images. To this end, a representative set of benchmark edge detection and line extraction operators is applied to original and compressed images. The effects are studied in detail, delivering clear guidelines as to which combination of compression technique and edge detection algorithm is best suited for specific applications.
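The evaluation methodology described above (run the same edge detector on the original and on the compressed image, then compare the edge maps) can be sketched as follows. This is a minimal illustration, not the paper's benchmark set: the Sobel operator stands in for the edge detectors, and coarse gray-level quantization stands in for a real lossy codec.

```python
import numpy as np

def sobel_edges(img, thresh=0.25):
    """Binary edge map from the Sobel gradient magnitude (image in [0, 1])."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[1:-1, 1:-1] = (img[1:-1, 2:] - img[1:-1, :-2]) * 2 \
                   + (img[:-2, 2:] - img[:-2, :-2]) \
                   + (img[2:, 2:] - img[2:, :-2])
    gy[1:-1, 1:-1] = (img[2:, 1:-1] - img[:-2, 1:-1]) * 2 \
                   + (img[2:, :-2] - img[:-2, :-2]) \
                   + (img[2:, 2:] - img[:-2, 2:])
    return np.hypot(gx, gy) > thresh

# Synthetic test image: a smooth horizontal ramp (no true edges).
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))

# Stand-in for lossy compression: quantize to 4 gray levels.
# This introduces false contours, which the detector then reports as edges.
compressed = np.round(img * 3) / 3

edges_orig = sobel_edges(img)        # empty: the ramp is too gentle
edges_comp = sobel_edges(compressed) # spurious edges at quantization steps

# Agreement between the two edge maps (1.0 = identical).
agreement = np.mean(edges_orig == edges_comp)
```

The spurious contours in the quantized image illustrate the kind of compression-induced feature distortion the comparison quantifies.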
In astronomical imaging, the level of the zero-frequency component of an image is usually unknown relative to all other components. This problem arises because the overall object brightness cannot easily be separated from the background when viewing small, faint objects. It affects image interpretability and is therefore ubiquitous in high-resolution astronomical imaging. Potential solutions include various interpolation techniques and image constraint techniques. These approaches are described, and their performance is evaluated with an optimal interpolator that accounts for sample density, signal-to-noise ratio, and the object's overall shape. Novel analytic expressions are obtained that provide insight into the limitations of any restoration approach, along with practical means for achieving those limits.
Motivated by recent interest in intelligent transportation systems, this paper considers the problem of tracking diverse vehicles as they traverse a roadway instrumented with video cameras. From vehicle tracks it is straightforward to compute basic traffic parameters such as flow, speed, and concentration. The vehicles to be tracked can be dense, and we assume that computational resources are limited. Therefore, we cannot consider three-dimensional processing but must instead partition the problem as much as possible into one- or two-dimensional problems. The key simplifying aspect is that the vehicles follow known tracks (the lanes).
Linear prediction schemes, such as JPEG or BJPEG, are simple and normally result in a significant reduction in source entropy. Occasionally, however, the entropy of the prediction error becomes greater than that of the original image. Such situations frequently occur when the image data has discrete gray levels located within certain intervals. To alleviate this problem, various authors have suggested different preprocessing methods, but the techniques reported require two passes. In this paper, we extend the definition of Lehmer-type (1,2) inversions from permutations to multiset permutations and present a one-pass algorithm based on inversions of a multiset permutation. We obtain comparable results when applying JPEG, and even better results when applying BJPEG, to the preprocessed image, which is treated as a multiset permutation.
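To make the inversion idea concrete, the following sketch counts, in a single pass, how many earlier elements exceed each element of a multiset permutation (an image scanned as a sequence of gray levels). This is a standard inversion-table construction for illustration only; the paper's exact Lehmer-type (1,2) extension may differ.

```python
def multiset_inversions(seq, alphabet_size=256):
    """One-pass inversion vector for a multiset permutation.

    lehmer[i] counts earlier elements strictly greater than seq[i].
    When gray levels cluster within narrow intervals, these counts
    tend to be small, which is the kind of redundancy a subsequent
    predictive coder can exploit.
    """
    counts = [0] * alphabet_size   # occurrences of each symbol seen so far
    lehmer = []
    for v in seq:
        # number of previously seen symbols greater than v
        lehmer.append(sum(counts[v + 1:]))
        counts[v] += 1
    return lehmer

# Example with the multiset permutation [2, 0, 2, 1]:
# position 1: one earlier element (2) exceeds 0
# position 3: two earlier elements (2, 2) exceed 1
```

A production version would replace the inner sum with a Fenwick tree to keep the pass O(n log k), but the single left-to-right scan is the point here.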
This paper briefly describes the field of diagnostic radar imaging, and discusses how the techniques of mathematical morphology may be brought to bear on the various forms of data commonly used in it, to accomplish different types of tasks. These tasks include tests of data quality, analyses of clutter and radar parameters, and analyses of scattering centers and the mechanisms that give rise to them. Examples are given of morphological processing that can be performed on raw or reconstructed phase history data, on high resolution range profiles, on single magnitude images or sequences of images, and on images based on the image domain phase. New results are described which suggest that the image phase itself may carry useful information about scattering mechanism types. As this paper shows, mathematical morphology provides a little-utilized, yet rich set of tools for the analysis of shape-related phenomena in diagnostic imaging radar data.
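The morphological operations the abstract refers to are built from two primitives, erosion and dilation. The following numpy-only sketch (a generic illustration, not the paper's radar-specific pipeline) shows how an opening removes isolated clutter speckles from a thresholded magnitude image:

```python
import numpy as np

def dilate(mask):
    """Binary dilation by a 3x3 square structuring element,
    implemented as an OR of shifted copies of the mask."""
    h, w = mask.shape
    out = mask.copy()
    padded = np.pad(mask, 1)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out |= padded[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
    return out

def erode(mask):
    """Erosion via duality: erode = complement of dilated complement."""
    return ~dilate(~mask)

def opening(mask):
    """Opening (erode, then dilate) removes features smaller than the
    structuring element, e.g. single-pixel clutter in a thresholded
    radar magnitude image, while preserving larger scatterer regions."""
    return dilate(erode(mask))
```

Closings, top-hats, and granulometries, the usual tools for the shape analyses described above, are built from these same two primitives.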
Penumbral images of neutron distributions from laser fusion experiments provided by large-aperture imaging systems are degraded by signal-dependent noise. Most image processing algorithms assume that the signal and the noise are stationary. The purpose of this paper is to introduce a new approach using a locally adaptive non-stationary filter. This method modifies the stationary Wiener filter approach by trading off noise removal against resolution. We describe the method and show some results.
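A common way to realize such a locally adaptive Wiener filter is the Lee-style formulation below: smooth strongly where the local variance is close to the noise variance (flat regions) and weakly where it is much larger (edges, preserving resolution). This is a generic sketch of the trade-off, not the authors' specific filter.

```python
import numpy as np

def adaptive_wiener(img, noise_var, win=3):
    """Locally adaptive (non-stationary) Wiener filter.

    gain -> 0 where local variance ~ noise variance (aggressive smoothing),
    gain -> 1 where local variance dominates (structure preserved).
    """
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    # local mean and variance over a win x win sliding window
    windows = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    local_mean = windows.mean(axis=(-1, -2))
    local_var = windows.var(axis=(-1, -2))
    gain = np.maximum(local_var - noise_var, 0.0) / np.maximum(local_var, 1e-12)
    return local_mean + gain * (img - local_mean)
```

For signal-dependent noise, `noise_var` would itself be estimated per pixel from the local signal level rather than held constant as in this sketch.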
An iterative algorithm is presented that sends the information of a real, non-negative image to its Fourier phase. The result is a complex image with uniform amplitude in the Fourier domain: in the frequency domain all the information is in the phase, and the amplitude is constant across frequencies. In the space domain the image, although complex, has the desired property that its absolute value equals the original real image. Matched filtering is a common procedure for image recognition, with the conjugate of the Fourier transform of the model (desired image) serving as the frequency response of the filter. It is shown that using these new complex images instead of the ordinary real images makes the filter output sharply peaked in the case of a match and widely spread in the case of a mismatch. The new filter thus performs far better than conventional matched filtering. It is also shown that, owing to the above properties, the new filter performs very well when its input is highly corrupted by additive noise.
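One plausible reading of such an iteration is a Gerchberg-Saxton-style loop that alternates between the two constraints: flat amplitude in the Fourier domain, original magnitude in the space domain. The sketch below is an assumption about the algorithm's form, not the paper's exact procedure; the matched filter itself is the classical conjugate-spectrum correlator.

```python
import numpy as np

def phase_encode(img, n_iter=50):
    """Iteratively push a real, non-negative image's information into
    its Fourier phase. By construction the returned complex image g
    satisfies |g| == img in the space domain, and its Fourier amplitude
    is approximately flat after enough iterations."""
    g = img.astype(complex)
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = np.exp(1j * np.angle(G))          # enforce flat Fourier amplitude
        g = np.fft.ifft2(G)
        g = img * np.exp(1j * np.angle(g))    # restore space-domain magnitude
    return g

def matched_filter(scene, model):
    """Classical matched filter: correlate via the conjugate spectrum."""
    return np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(model)))
```

With a near-flat Fourier amplitude, the matched-filter output for a matching input approaches a delta-like correlation peak, which is the sharpness property the abstract claims.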
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
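The two ingredients named above, a wavelet subband decomposition followed by uniform scalar quantization, can be illustrated in miniature. This toy uses a single-level Haar split and a fixed step size; the actual WSQ specification uses a deeper biorthogonal decomposition with per-subband adaptive step sizes.

```python
import numpy as np

def haar_subbands(img):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH subbands
    (a toy stand-in for WSQ's deeper wavelet decomposition)."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def quantize(band, step):
    """Uniform scalar quantization: map coefficients to integer bins."""
    return np.round(band / step).astype(int)

def dequantize(q, step):
    """Reconstruct coefficients from bin indices (error <= step/2)."""
    return q * step
```

In WSQ proper, the detail subbands of a fingerprint image concentrate near zero, so this quantize step is what creates the long zero runs the entropy coder exploits.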
A structural codebook design algorithm is proposed in this paper, aimed at considerably reducing the computational burden of the vector quantization (VQ) coding process while incurring almost no extra storage cost. Kohonen's self-organizing feature map (SOFM) algorithm is improved to tackle two problems in codebook design. Since the address of the currently coded block is often the same as that of previously coded neighboring blocks, especially in flat regions, an address-dependent vector quantization (ADVQ) scheme is proposed to further reduce the bit rate and computational complexity. Simulation results show that the ADVQ scheme achieves a bit-rate reduction of 37% for the standard test image "Lena" while reducing computational complexity by a factor of 20.
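The address-dependence exploited above can be sketched as follows: before running a full codebook search for each block, first test the previously used codeword, and reuse its address when it already fits. This is a hypothetical reading of the ADVQ idea for illustration; the paper's actual scheme (and its interaction with the SOFM-trained codebook) may differ.

```python
import numpy as np

def advq_encode(blocks, codebook, thresh=1e-3):
    """Address-dependent VQ encoding sketch.

    For each block, try the previous block's codeword first; if its
    squared error is within thresh, reuse that address (in a real coder
    this costs only a short flag, saving both bits and a full search).
    Otherwise fall back to an exhaustive nearest-neighbour search.
    """
    addresses, reused = [], 0
    prev = 0
    for b in blocks:
        if addresses and np.sum((b - codebook[prev]) ** 2) <= thresh:
            addresses.append(prev)
            reused += 1
        else:
            prev = int(np.argmin(np.sum((codebook - b) ** 2, axis=1)))
            addresses.append(prev)
    return addresses, reused
```

In flat image regions, runs of identical addresses make the reuse branch dominate, which is where the reported bit-rate and complexity savings come from.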