We describe a color image compression and decompression scheme suitable for high-resolution printers. The proposed scheme requires only two image rows in memory at any time, and hence is suitable for low-cost, high-resolution printing systems. The compression ratio can be specified and is achieved exactly. Compound document images consisting of continuous-tone, natural regions mixed with synthetic graphics or text are handled with uniformly high quality. While the target compression ratios are moderate, the quality requirements are extremely high: the compressed and decompressed printed image needs to be virtually indistinguishable from the original printed image. The scheme combines a lossless block coding technique with a wavelet block codec. The wavelet block codec uses a new, simple entropy coding technique better suited to the specific block structure, compression target, and discrete wavelet transform used.
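As a hedged illustration of how a specified compression ratio can be met exactly, the sketch below gives every block the same byte budget and truncates (or pads) a toy embedded bit-plane stream to fit. The bit-plane encoder is a stand-in for the paper's codec, not a reconstruction of it.

import numpy as np

def encode_block_bitplanes(block, n_planes=8):
    # Toy embedded encoder: emit bit-planes MSB-first, so any prefix of
    # the stream still decodes to a coarser version of the block.
    flat = block.astype(np.uint8).ravel()
    return b"".join(np.packbits((flat >> p) & 1).tobytes()
                    for p in range(n_planes - 1, -1, -1))

def compress_exact_ratio(blocks, ratio):
    # Fixed byte budget per block: the overall ratio is met exactly,
    # by truncating long streams and zero-padding short ones.
    budget = int(blocks[0].size / ratio)
    out = bytearray()
    for block in blocks:
        out += encode_block_bitplanes(block)[:budget].ljust(budget, b"\x00")
    return bytes(out)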
A new image compression algorithm is proposed, based on independent embedded block coding with optimized truncation of the embedded bit-streams (EBCOT). The algorithm exhibits state-of-the-art compression performance while producing a bit-stream with a rich feature set, including resolution and SNR scalability together with a random access property. The algorithm has modest complexity and is extremely well suited to applications involving remote browsing of large compressed images. The algorithm lends itself to explicit optimization with respect to MSE as well as more realistic psychovisual metrics, capable of modeling the spatially varying visual masking phenomenon.
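The rate-distortion optimized truncation at the heart of EBCOT can be sketched compactly: given candidate truncation points with rates and distortions for each code-block, a single Lagrange multiplier is searched so that the per-block choices meet a global byte budget. This is a minimal sketch of the idea, not Taubman's implementation, and it assumes each block's (rate, distortion) list has already been reduced to its convex hull.

def best_truncation(R, D, lam):
    # Per-block minimization of the Lagrangian cost D + lam * R.
    return [min(range(len(r)), key=lambda k: d[k] + lam * r[k])
            for r, d in zip(R, D)]

def pcrd_opt(R, D, budget, iters=50):
    # Bisection on lam: a larger multiplier penalizes rate more heavily.
    lo, hi = 0.0, 1e12
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        picks = best_truncation(R, D, lam)
        total = sum(r[k] for r, k in zip(R, picks))
        if total > budget:
            lo = lam          # over budget: push toward lower rates
        else:
            hi = lam
    return best_truncation(R, D, hi)   # feasible side of the search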
Portable consumer products need low-cost, low-power chips for realization of signal processing and image compression functions. In this paper, a modified version of such a chip for a digital video camera is presented. The chip contains an image sensing array composed of photodiodes, a two-dimensional DCT processor, and an entropy coding section. The processor is designed using a switched-current (SI) technique and contains current mirrors and memory cells. In order to examine these elementary cells, an experimental chip was fabricated in AMS 0.8 μm technology.
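For reference, the two-dimensional DCT such a processor computes is separable: an 8x8 block is transformed by multiplying with an orthonormal DCT-II matrix on both sides. The numpy sketch below shows the arithmetic only, not the switched-current realization.

import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix; rows are basis vectors.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)
    return C

def dct2(block):
    # Separable 2D DCT: row transform followed by column transform.
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T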
It is well known that the band-splitting method strongly influences the efficiency of subband image coding. Furthermore, coding efficiency depends heavily on the entropy-coding model. In particular, the embedded zerotree wavelet (EZW), set partitioning in hierarchical trees (SPIHT), and space-frequency quantization (SFQ) algorithms exploit the spatial dependency defined by the quadtree set of coefficients and achieve good performance. In this work, we describe an adaptive decomposition method, based on a rate-distortion characteristic of the quadtree set of coefficients, that is independent of the band splitting. Experimental results for several images show that our coder achieves better coding efficiency; octave decomposition tends to be effective at lower bit rates, while uniform decomposition of the high-frequency subbands is preferable at higher rates.
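The split-or-not decision driving such an adaptive decomposition can be sketched with a Lagrangian rate-distortion cost: a band is split further only when its children are cheaper to code than the band itself. This is a hedged toy; rd_cost is a crude stand-in for the coder's actual RD estimate, and the spatial quartering stands in for a real wavelet analysis step.

import numpy as np

def rd_cost(band, lam):
    # Toy cost: distortion of a unit-step quantizer plus a rate proxy.
    q = np.round(band)
    dist = float(np.sum((band - q) ** 2))
    rate = float(np.count_nonzero(q))
    return dist + lam * rate

def adaptive_split(band, lam, depth=0, max_depth=3):
    # Returns the band itself, or a nested tuple of further splits.
    if depth == max_depth:
        return band
    h, w = band.shape
    kids = [band[:h//2, :w//2], band[:h//2, w//2:],
            band[h//2:, :w//2], band[h//2:, w//2:]]
    if sum(rd_cost(k, lam) for k in kids) < rd_cost(band, lam):
        return tuple(adaptive_split(k, lam, depth + 1, max_depth) for k in kids)
    return band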
Emerging wireless networks and multimedia developments are making compressed image transmission over noisy channels more widespread. However, most image compression algorithms have been developed without considering error robustness. While they are usually efficient in terms of compression, they are very sensitive to channel errors. In this paper, we propose a robust image compression algorithm based on lattice vector quantization, where the dimension of the vector quantizer is matched to each processed subband in a wavelet-based coder. The method also employs vector indexing in order to reduce or even eliminate the entropy coding stage, which is usually responsible for the poor performance of image coders in noisy environments. The proposed method yields compression performance levels similar to those achieved by the current JPEG-2000 standard verification model, but performs substantially better in terms of error resilience.
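The lattice quantization step itself is inexpensive. As one concrete example (our illustration; the paper does not commit to this particular lattice here), the nearest point of the D_n lattice (integer vectors with even coordinate sum; D4 is a common choice for 4-D subband vectors) follows the classic Conway-Sloane rounding rule:

import numpy as np

def closest_Dn(x):
    # Round every coordinate; if the coordinate sum is odd, re-round
    # the coordinate with the largest rounding error the other way.
    f = np.rint(x)
    if int(f.sum()) % 2 != 0:
        i = int(np.argmax(np.abs(x - f)))
        f[i] += 1.0 if x[i] > f[i] else -1.0
    return f

# Example: quantize a 4-D vector of subband coefficients to the D4 lattice.
print(closest_Dn(np.array([0.6, 0.4, -1.2, 0.1])))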
SAR raw data compression is necessary to reduce the huge amount of data for downlink and the required memory on board. In view of interferometric and polarimetric applications of SAR data, it becomes increasingly important to pay attention to phase errors caused by data compression. Here, a detailed comparison of block adaptive quantization in the time domain (BAQ) and in the frequency domain (FFT-BAQ) is given. Including raw data compression in the processing chain allows efficient use of the FFT-BAQ and makes its implementation for on-board data compression feasible. The FFT-BAQ outperforms the BAQ in terms of signal-to-quantization-noise ratio and phase error, and allows a direct decimation of the oversampled data equivalent to FIR filtering in the time domain. Impacts on interferometric phase and coherence are also presented.
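A hedged sketch of the BAQ idea: each block of raw samples is normalized by its estimated standard deviation and quantized to a few bits, with the scale sent as side information. Real BAQ uses Lloyd-Max levels optimized for Gaussian statistics; the uniform quantizer below keeps the sketch short.

import numpy as np

def baq_encode(raw, block=128, bits=2):
    # Per block: estimate the std, normalize, quantize to 2**bits levels.
    levels = 2 ** bits
    codes, scales = [], []
    for i in range(0, len(raw), block):
        b = raw[i:i + block].astype(float)
        s = b.std() + 1e-12
        q = np.clip(np.floor(b / s) + levels // 2, 0, levels - 1)
        codes.append(q.astype(np.uint8))
        scales.append(s)
    return codes, scales

def baq_decode(codes, scales, bits=2):
    # Midpoint reconstruction, rescaled by each block's side information.
    levels = 2 ** bits
    return np.concatenate([(q.astype(float) - levels // 2 + 0.5) * s
                           for q, s in zip(codes, scales)])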
Joint source-channel decoding based on residual source redundancy is an effective paradigm for error-resilient data compression. While previous work considered only fixed-rate systems, the extension of these techniques to variable-length encoded data was previously and independently proposed by the authors, Park and Miller (see Proc. Conf. on Information Sciences and Systems, Princeton, NJ, 1998), and by Demir and Sayood (see Proc. Data Compression Conf., Snowbird, UT, pp. 139-148, 1998). In this paper, we describe and compare the performance of a computationally complex exact maximum a posteriori (MAP) decoder, its efficient approximation, an alternative approximate MAP decoder, and an improved version of this decoder suggested here. Moreover, we evaluate several source and channel coding configurations. Our results show that the approximate MAP technique of Park and Miller outperforms the other approximate methods and provides substantial error protection to variable-length encoded data.
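To make the MAP formulation concrete, the toy below performs exact MAP decoding of a variable-length code over a binary symmetric channel by brute-force enumeration; its exponential cost is precisely why the approximate decoders compared in the paper matter. The code table, priors, and crossover probability are illustrative, not taken from the paper.

from itertools import product

CODE = {"a": "0", "b": "10", "c": "11"}      # toy Huffman code
PRIOR = {"a": 0.5, "b": 0.25, "c": 0.25}

def map_decode(received, eps=0.1):
    # Enumerate every symbol sequence whose encoding has the received
    # length and maximize P(received | bits) * P(sequence).
    best, best_p = None, 0.0
    for n in range(1, len(received) + 1):
        for seq in product(CODE, repeat=n):
            bits = "".join(CODE[s] for s in seq)
            if len(bits) != len(received):
                continue
            p = 1.0
            for t, r in zip(bits, received):
                p *= (1 - eps) if t == r else eps   # BSC likelihood
            for s in seq:
                p *= PRIOR[s]                        # source prior
            if p > best_p:
                best, best_p = seq, p
    return best

print(map_decode("1100"))   # most probable source sequence for these bits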
An error resilient technique is proposed for bit-plane based image coding in this research. First, fast resynchronization is achieved with the suffix-rich Huffman code to minimize the length of error propagation. After regaining synchronization, the decoder may still misalign symbols, because the number of symbols decoded within the error region may be wrong. Thus, a scheme to identify probable locations of correctly decoded symbols is investigated so that these symbols can be shifted to their proper positions. This scheme exploits the correlation between decoded symbols of the current subband and those of its parent subband. It is demonstrated by experiments that the proposed error resilient technique results in a more robust codec with little sacrifice in coding efficiency.
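The resynchronization behavior is easy to observe with a toy Huffman code: flip one bit, decode, and inspect how many symbols are lost before the decoder falls back into step. A suffix-rich code is designed so that this error-propagation span is short; the code below is an ordinary prefix code, for illustration only.

CODE = {"a": "0", "b": "10", "c": "110", "d": "111"}
DECODE = {v: k for k, v in CODE.items()}

def decode(bits):
    # Greedy prefix decoding; garbage after a bit error until resync.
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in DECODE:
            out.append(DECODE[cur])
            cur = ""
    return out

msg = "abcdabcdabcd" * 4
bits = "".join(CODE[s] for s in msg)
corrupt = bits[:5] + ("1" if bits[5] == "0" else "0") + bits[6:]
# The tails agree once the decoder has resynchronized, even though the
# symbol positions in between may have shifted.
print(decode(bits)[-8:], decode(corrupt)[-8:])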
Recent progress in context modeling and adaptive entropy coding of wavelet coefficients has probably been the most important catalyst for the rapidly maturing area of wavelet image compression technology. In this paper we identify statistical context modeling of wavelet coefficients as the determining factor of rate-distortion performance of wavelet codecs. We propose a new context quantization algorithm for minimum conditional entropy. The algorithm is a dynamic programming process guided by Fisher's linear discriminant. It facilitates high-order context modeling and adaptive entropy coding of embedded wavelet bit streams, and leads to superb compression performance in both lossy and lossless cases.
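A minimal sketch of the context quantization step: assuming the contexts have already been ordered along a 1-D projection (the paper uses Fisher's linear discriminant for this), dynamic programming finds the partition into M groups that minimizes the conditional entropy of a binary symbol.

import math

def cond_entropy(counts):
    # Binary entropy in bits, weighted by the group's total mass.
    n0, n1 = counts
    n = n0 + n1
    if n == 0 or n0 == 0 or n1 == 0:
        return 0.0
    p = n0 / n
    return n * (-p * math.log2(p) - (1 - p) * math.log2(1 - p))

def quantize_contexts(hist, M):
    # hist[i] = (count of 0s, count of 1s) for the i-th ordered context.
    # Returns the minimal total weighted entropy over splits into M groups.
    K = len(hist)
    INF = float("inf")
    # cost[i][j]: weighted entropy of merging contexts i..j-1 into one group.
    cost = [[0.0] * (K + 1) for _ in range(K)]
    for i in range(K):
        c0 = c1 = 0
        for j in range(i, K):
            c0 += hist[j][0]; c1 += hist[j][1]
            cost[i][j + 1] = cond_entropy((c0, c1))
    dp = [[INF] * (K + 1) for _ in range(M + 1)]
    dp[0][0] = 0.0
    for m in range(1, M + 1):
        for j in range(1, K + 1):
            dp[m][j] = min(dp[m - 1][i] + cost[i][j] for i in range(j))
    return dp[M][K]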
Summary form only given. We present an efficient algorithm for compressing the data necessary to represent an arbitrary cutting plane extracted from a three-dimensional curvilinear data set. The cutting plane technique is an important visualization method for time-varying 3D simulation results, since the full data sets are often very large. An efficient compression algorithm for these cutting planes is especially important when a simulation running on a remote server is being tracked, or when the data set is stored on a remote server. Various aspects of the visualization process are considered in the algorithm design, such as the inherent data reduction in going from 3D to 2D when generating a cutting plane, the numerical accuracy required in the cutting plane, and the potential to decimate the triangle mesh. After separating each floating point number into mantissa and exponent, a block sorting algorithm and an entropy coding algorithm are used to perform lossless compression.
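A hedged sketch of that lossless back end: math.frexp splits each double into mantissa and exponent, and bz2 (a Burrows-Wheeler block-sorting compressor with entropy coding) stands in for the paper's unspecified block sorting and entropy coding stages. Packing the mantissa as a full double is wasteful but keeps the round trip exactly lossless.

import bz2
import math
import struct

def compress_plane(values):
    # Separate streams compress better because exponents are highly
    # repetitive while mantissas carry most of the randomness.
    mantissas, exponents = [], []
    for v in values:
        m, e = math.frexp(v)             # v == m * 2**e, 0.5 <= |m| < 1
        mantissas.append(struct.pack("<d", m))
        exponents.append(struct.pack("<h", e))
    return bz2.compress(b"".join(mantissas)), bz2.compress(b"".join(exponents))

data = [0.125 * i for i in range(1000)]   # stand-in for cutting-plane scalars
m_c, e_c = compress_plane(data)
print(len(m_c) + len(e_c), "bytes vs", 8 * len(data), "raw")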