A new compression algorithm for fingerprint images is introduced. A modified wavelet packet scheme that uses a fixed decomposition structure, matched to the statistics of fingerprint images, is presented. A technique for determining the most important coefficients is introduced. The algorithm uses both hard and soft thresholding to make the procedure fast and efficient. The bit allocation for each subimage of the modified coefficients is determined, and each subimage uses a different quantization technique based on its entropy. Huffman coding is then applied to obtain further lossless compression. The algorithm achieves a high compression ratio and high reconstructed image quality at low computational cost compared to other existing algorithms. The performance of the proposed algorithm is compared to that of other decomposition techniques: the ordinary wavelet transform (OWT), entropy-based best basis selection (E-BBB), wavelet/scalar quantization (WSQ), and JPEG.
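The abstract does not spell out its thresholding rules, but the standard hard and soft thresholding operators it refers to are simple to state. A minimal sketch (function names are illustrative, operating on a flat list of wavelet coefficients):

```python
def hard_threshold(coeffs, t):
    # keep coefficients whose magnitude reaches the threshold, zero the rest
    return [c if abs(c) >= t else 0.0 for c in coeffs]

def soft_threshold(coeffs, t):
    # additionally shrink the surviving coefficients toward zero by t
    return [(abs(c) - t) * (1.0 if c > 0 else -1.0) if abs(c) > t else 0.0
            for c in coeffs]
```

Hard thresholding preserves the magnitudes of the retained coefficients, while soft thresholding trades some coefficient energy for smoother reconstructions; a combined scheme such as the one described can apply each where it is cheapest.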
Finding the right wavelet is a crucial problem in wavelet-based image compression. Several criteria for choosing the right wavelet have been proposed in recent years, but they have never been evaluated together under equivalent conditions. We review some of them, present one additional criterion, the shift variance of compressed impulses, and show the results of an extensive evaluation using parameterized orthonormal wavelet filters of lengths 4, 6, and 8. We show that, under these conditions, only a few of the proposed criteria prove relevant.
Fractal image coding has been successfully applied to encode digital images at low bit rates. The usual technique is to partition a given image into a number of blocks (range blocks); each block of the partition is expressed as a contractive transformation of another part of the image. However, such a scheme does not take into account the local smoothness present in most images. We propose a novel fractal image coding scheme that estimates the parameters of the current block from those of the adjacent blocks. Each block is examined against a criterion called the minimum edge difference (MED). If the MED criterion is fulfilled, either a joint optimization of the adjacent and current block parameters or a predictive coding of the scaling and offset parameters of the current block is performed. Experiments show that a reduction of about 20% in bit rate can be achieved with nearly no loss in PSNR.
This paper presents a design of high-performance Distributed Arithmetic Processor (DAP) for the efficient implementation of HD video compression algorithms. A Multi-Bit Pipelined Carry-Save DAP (MPCS-DAP) structure ha...
详细信息
This paper presents a design of high-performance Distributed Arithmetic Processor (DAP) for the efficient implementation of HD video compression algorithms. A Multi-Bit Pipelined Carry-Save DAP (MPCS-DAP) structure have been proposed where two pipelined carry-save accumulators implement the divided look-up table (LUT) DA structure with multi-bit processing in parallel. This structure enables flexible high speed operations at low power consumption based on the pipelined bit-sequential operations. A novel two-bit adder circuitry has been used extensively and HSPICE simulation of the critical path shows that it operates at 200 MHz clock speed at 3.3 volts with 0.8 /spl mu/m double metal CMOS technology. It has also shown that an array of MPCS-DAP can implements 2D DCT/IDCT at 200 M samples per second.
There is great interest in the storage and browsing of large collections of video clips, due to the current explosion in demand for digital video. Many new algorithms and standards (MPEG, H.261, H.263, etc.) have been...
详细信息
There is great interest in the storage and browsing of large collections of video clips, due to the current explosion in demand for digital video. Many new algorithms and standards (MPEG, H.261, H.263, etc.) have been developed in recent years for the coding of video. However, most of these techniques lose performance when there exists significant camera motion, a common element of some classes of video such as surveillance and sporting events. A mosaic-based approach to video compression, initially presented by Irani, Hsu, and Anandan (1995), creates a world-oriented panoramic representation of the scene for compression. This paper proposes and evaluates a specific mosaic-based algorithm for video compression that uses a static mosaic with a wavelet-based coding scheme for the mosaic and the temporal residual information. The mosaic-based approach is compared to a global motion compensation approach and the algorithms detailed in this paper are compared to the H.263 TMN5 coder.
This study presents a comparison of the results of three different compression schemes when applied to remotely sensed Arctic radiance images. Images obtained via the Special Sensor Microwave/Imager (SSM/I) are compre...
详细信息
This study presents a comparison of the results of three different compression schemes when applied to remotely sensed Arctic radiance images. Images obtained via the Special Sensor Microwave/Imager (SSM/I) are compressed using three different compression schemes at varying levels of compression and are evaluated using the ice concentration and ice type derived from the radiance values as metrics. The results show that the DRIQ compression scheme consistently performed better than the MRVQ and DCAVQ algorithms.
Abstract only given. Presents an algorithm for text compression that exploits the properties of the words in a dictionary to produce an encryption of given text. The basic idea is to define a unique encryption or sign...
详细信息
Abstract only given. Presents an algorithm for text compression that exploits the properties of the words in a dictionary to produce an encryption of given text. The basic idea is to define a unique encryption or signature of each word in the dictionary by replacing certain characters in the words by a special character "*" and retain a few characters so that the word is still retrievable. The question is whether we can develop a better signature of the text before compression so that the compressed signature uses less storage than the original compressed text. This indeed is possible as our experimental results confirm. For any cryptic text the most frequently used character is "*" and the standard compression algorithms can effectively exploit this redundancy in an effective way. Our algorithm produces the best lossless compression rate reported to date in the literature. One basic assumption of our algorithm is that the system has access to a dictionary of words used in all the texts along with a corresponding "cryptic" dictionary. The cost of this dictionary is amortized over the compression savings for all the text files handled by the organization. If two organizations wish to exchange information using our compression algorithm, they must share a common dictionary. We used ten text files from the English text domain to test our algorithm.
We construct a new adaptive basis that provides precise frequency localization and good spatial localization. We develop a compression algorithm that exploits this basis to obtain the most economical representation of...
详细信息
We construct a new adaptive basis that provides precise frequency localization and good spatial localization. We develop a compression algorithm that exploits this basis to obtain the most economical representation of an image in terms of textured patterns with different orientations, frequencies, sizes, and positions. The technique directly works in the Fourier domain and has potential applications for compression of richly textured images.
In this paper, a method for a parallel pipelined implementation of baseline JPEG compression and decompression is introduced for use on 4 Texas Instruments' TMS320C40 digital signal processors. A PC host with 2 So...
详细信息
In this paper, a method for a parallel pipelined implementation of baseline JPEG compression and decompression is introduced for use on 4 Texas Instruments' TMS320C40 digital signal processors. A PC host with 2 Sonitech dual 'C40 boards is used for this implementation. The DCT is implemented using 2 'C40s, while quantization and Huffman encoding are each performed using a single 'C40. This novel implementation provides significant speedup over a sequential implementation by pipelining 8/spl times/8 blocks, through the parallel implementation.
暂无评论