Efficient low-complexity block entropy coding requires careful exploitation of specific data characteristics to circumvent the practical difficulties associated with large alphabets. Two recent image coding methods, alphabet and group partitioning (AGP) and set partitioning in hierarchical trees (SPIHT), can be viewed as block entropy coders that succeed because of the manner in which they partition the alphabet into sets and encode these sets very efficiently. Here we present analysis and numerical results showing that AGP and SPIHT are indeed efficient block entropy coders.
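The two-stage structure this abstract alludes to can be illustrated with a small sketch (not the AGP or SPIHT algorithms themselves): partition the alphabet into sets, entropy-code the set index, and send the position within a set with raw bits. When symbols inside each set are nearly equiprobable, the total cost stays close to the source entropy. The geometric source and the particular partition below are illustrative assumptions.

```python
import math

def entropy(probs):
    """Shannon entropy in bits/symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def alphabet_partition_cost(probs, sets):
    """Average code length (bits/symbol) of a two-stage code: the set
    index is entropy-coded ideally and the position inside the set is
    sent with ceil(log2(|set|)) raw bits."""
    cost = 0.0
    for s in sets:
        p_set = sum(probs[i] for i in s)
        if p_set == 0:
            continue
        inset_bits = math.ceil(math.log2(len(s)))
        cost += p_set * (-math.log2(p_set) + inset_bits)
    return cost

# A geometric-like source over a 16-symbol alphabet (illustrative).
probs = [0.5 * 0.7 ** i for i in range(16)]
z = sum(probs)
probs = [p / z for p in probs]

# Partition by magnitude: singletons for the most probable symbols,
# then doubling-size sets that are roughly uniform internally.
sets = [[0], [1], [2, 3], [4, 5, 6, 7], list(range(8, 16))]

h = entropy(probs)
c = alphabet_partition_cost(probs, sets)
# c exceeds h by only a small redundancy for this partition.
```

The redundancy of the two-stage code over the source entropy is what a good partition minimizes; here it is a few hundredths of a bit per symbol.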
In this paper a context-based lossless image compression algorithm is presented. It consists of an adaptive median-FIR predictor, a conditional context-based error-feedback process, and a new error representation. The prediction error is encoded by a context-based arithmetic encoder. Experimental results show that for a set of 18 images of different kinds, the compression performance of the proposed algorithm is very close to that of CALIC and better than LOCO and S+P. This paper also presents an algorithmic study of the proposed algorithm: the contribution of each building block to the compression performance is examined. These building blocks can be incorporated into the further development of lossless image compression algorithms.
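The paper's adaptive median-FIR predictor is not reproduced here; as a point of reference for median-based prediction, the median edge detector (MED) used by LOCO-I/JPEG-LS can be sketched as:

```python
def med_predict(w, n, nw):
    """Median edge detector (MED) predictor from LOCO-I/JPEG-LS:
    picks the north neighbour near a horizontal edge, the west
    neighbour near a vertical edge, and the planar value w + n - nw
    in smooth regions."""
    if nw >= max(w, n):
        return min(w, n)
    if nw <= min(w, n):
        return max(w, n)
    return w + n - nw

# w = west, n = north, nw = north-west causal neighbours of a pixel.
p_edge = med_predict(10, 20, 25)    # nw above both: predict min(w, n)
p_smooth = med_predict(10, 20, 15)  # in between: planar prediction
```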
Matching pursuit is a powerful and flexible optimisation technique that has found applications in different areas, including image and video compression. This paper proposes an underlying probabilistic model for the entropy coding of the parameters generated by the matching pursuit algorithm in the context of image coding. It also distinguishes between orthogonal and fully-orthogonal matching pursuit, and provides a formulation for rate-distortion optimized matching pursuit.
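A minimal sketch of plain (non-orthogonal) matching pursuit makes the parameter stream concrete: the (atom index, coefficient) pairs it emits are the parameters that an entropy coder of the kind proposed here would model. The standard-basis dictionary below is an illustrative assumption; atoms are assumed unit-norm.

```python
def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: at each step pick the dictionary atom
    with the largest inner product with the residual, record its index
    and coefficient, and subtract its contribution."""
    residual = list(signal)
    params = []
    for _ in range(n_atoms):
        best_i, best_c = None, 0.0
        for i, atom in enumerate(dictionary):
            c = sum(r * a for r, a in zip(residual, atom))
            if abs(c) > abs(best_c):
                best_i, best_c = i, c
        params.append((best_i, best_c))
        residual = [r - best_c * a
                    for r, a in zip(residual, dictionary[best_i])]
    return params, residual

# Toy example: a 3-sample signal over the standard basis.
dictionary = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
params, residual = matching_pursuit([3.0, 0.0, 4.0], dictionary, 2)
```

With an orthonormal dictionary two iterations recover the signal exactly; with overcomplete dictionaries the residual shrinks greedily, which is where the orthogonal and fully-orthogonal variants discussed in the paper differ.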
This paper describes a method to randomly generate vectors of symbol probabilities so that the corresponding discrete memoryless source has a prescribed entropy. One application is to Monte Carlo simulation of the performance of noiseless variable length source coding. (C) 2000 The Franklin Institute. Published by Elsevier Science Ltd. All rights reserved.
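One simple construction in the spirit of this abstract, though not necessarily the paper's method: raise a random positive vector to an exponent t and normalize, then bisect on t. At t = 0 the pmf is uniform (entropy log2 k); as t grows, mass concentrates on one symbol and the entropy decreases monotonically toward 0, so bisection hits any prescribed entropy in between.

```python
import math
import random

def entropy(p):
    """Shannon entropy in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def random_pmf_with_entropy(k, target_h, seed=0):
    """Draw a random probability vector over k symbols whose entropy
    is (numerically) target_h bits, by bisecting on the exponent t
    applied to a random positive vector."""
    assert 0 < target_h < math.log2(k)
    rng = random.Random(seed)
    u = [rng.random() for _ in range(k)]

    def h_at(t):
        w = [ui ** t for ui in u]
        z = sum(w)
        return entropy([wi / z for wi in w])

    lo, hi = 0.0, 1.0
    while h_at(hi) > target_h:   # grow hi until entropy dips below target
        hi *= 2.0
    for _ in range(200):         # bisect: h_at is decreasing in t
        mid = 0.5 * (lo + hi)
        if h_at(mid) > target_h:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    w = [ui ** t for ui in u]
    z = sum(w)
    return [wi / z for wi in w]

p = random_pmf_with_entropy(8, 2.0)   # 8-symbol source, 2.0 bits
```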
We combine a context classification scheme with adaptive prediction and entropy coding to produce an adaptive lossless image coder. In this coder, we maximize the benefits of adaptivity using both adaptive prediction and entropy coding. The adaptive prediction is closely tied with the classification of contexts within the image. These contexts are defined with respect to the local edge, texture or gradient characteristics as well as local activity within small blocks of the image. For each context an optimal predictor is found which is used for the prediction of all pixels belonging to that particular context. Once the predicted values have been removed from the original image, a clustering algorithm is used to design a separate, optimal entropy coding scheme for encoding the prediction residual. Blocks of residual pixels are classified into a finite number of classes and members of each class are encoded using the entropy coder designed for that particular class. The combination of these two powerful techniques produces some of the best lossless coding results reported so far.
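The paper's own context definitions are not reproduced here; as an illustration of gradient-based context classification, a JPEG-LS-style scheme that maps a pixel's causal neighbourhood to one of 9^3 classes might look like the following (the quantization thresholds are the illustrative JPEG-LS defaults):

```python
def quantize_gradient(d):
    """Quantize a local difference into one of 9 levels (0, +/-1..4),
    with thresholds in the spirit of JPEG-LS gradient quantization."""
    s = -1 if d < 0 else 1
    a = abs(d)
    if a == 0:
        q = 0
    elif a < 3:
        q = 1
    elif a < 7:
        q = 2
    elif a < 21:
        q = 3
    else:
        q = 4
    return s * q

def classify_context(w, n, nw, ne):
    """Map the causal neighbours (west, north, north-west, north-east)
    to one of 9^3 gradient-based context classes."""
    return tuple(quantize_gradient(d) for d in (ne - n, n - nw, nw - w))
```

A flat neighbourhood falls in the all-zero class; strong edges land in high-gradient classes, so a separate predictor (and residual coder) can be trained per class, as the coder above does.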
Arithmetic coding in H.263 is based on models that assign a fixed probability to each possible value of some syntax element. In this paper, the effect of adapting the models according to the dynamically changing statistics is analyzed. Simulation results show improvements in all studied cases.
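A minimal sketch of the kind of adaptive model analyzed here: symbol counts start uniform, are updated after every coded symbol, and are periodically halved so that recent statistics dominate. The rescaling threshold below is an arbitrary illustrative choice, not a value from the paper.

```python
class AdaptiveModel:
    """Adaptive symbol-frequency model of the kind an arithmetic coder
    consults for its interval subdivision."""

    def __init__(self, n_symbols, max_total=16384):
        self.counts = [1] * n_symbols   # start from a uniform prior
        self.max_total = max_total

    def prob(self, s):
        """Current model probability of symbol s."""
        return self.counts[s] / sum(self.counts)

    def update(self, s):
        """Bump the count of the symbol just coded; halve all counts
        (keeping them >= 1) once the total grows too large, so the
        model tracks dynamically changing statistics."""
        self.counts[s] += 1
        if sum(self.counts) >= self.max_total:
            self.counts = [(c + 1) // 2 for c in self.counts]

# After a skewed stream, the model assigns the frequent symbol a
# probability near 1, shortening its ideal code length -log2 p.
m = AdaptiveModel(2)
for _ in range(100):
    m.update(0)
```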
We explore the transform coefficients of various fractal-based schemes for statistical dependence and exploit correlations to improve the compression capabilities of these schemes. In most of the standard fractal-based schemes, the transform coefficients exhibit a degree of linear dependence that can be exploited by using an appropriate vector quantizer such as the LBG algorithm. Additional compression is achieved by lossless Huffman coding of the quantized coefficients.
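The LBG quantizer design is not reproduced here; the final lossless Huffman stage applied to the quantized coefficients can be sketched with the classic two-least-frequent merge (computing code lengths only):

```python
import heapq

def huffman_code_lengths(freqs):
    """Huffman code lengths for symbol frequencies: repeatedly merge
    the two least frequent subtrees, incrementing the depth of every
    symbol they contain."""
    heap = [(f, [i]) for i, f in enumerate(freqs) if f > 0]
    heapq.heapify(heap)
    lengths = [0] * len(freqs)
    if len(heap) == 1:                 # degenerate one-symbol alphabet
        lengths[heap[0][1][0]] = 1
        return lengths
    while len(heap) > 1:
        f1, s1 = heapq.heappop(heap)
        f2, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1
        heapq.heappush(heap, (f1 + f2, s1 + s2))
    return lengths

# Skewed quantizer-output histogram (illustrative counts).
lengths = huffman_code_lengths([5, 1, 1, 1])
```

The resulting lengths satisfy the Kraft equality, so a valid prefix code with those lengths always exists.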
In this paper, a new and compact improved quadtree (compact-IQ) representation for binary images is presented, along with efficient set operations on compact-IQs. Experiments are carried out to demonstrate the memory and computational advantages of the proposed method. The experimental results reveal that the proposed representation achieves an 11.04-18.80% compression improvement and a 24.39-36.94% computation-time improvement for set operations over the recently published method based on constant bit-length linear quadtrees (CBLQ) (T.W. Lin, Set operations on constant bit-length linear quadtrees, Pattern Recognition 30(7) (1997) 1239-1249). In addition, geometric operations (area and centroid) on the compact-IQ and a performance comparison with JBIG are also investigated. (C) 2000 Elsevier Science Publishers B.V. All rights reserved.
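The compact-IQ encoding itself is not reproduced here; as background, a plain quadtree for a binary image, of the kind such representations compress, can be built recursively, with uniform blocks collapsing to leaves:

```python
def build_quadtree(img, x, y, size):
    """Recursively build a quadtree for the size x size block of a
    binary image whose top-left corner is (x, y): a uniform block
    becomes a leaf 0/1, otherwise an internal node holding its four
    child quadrants in NW, NE, SW, SE order."""
    vals = {img[y + j][x + i] for j in range(size) for i in range(size)}
    if len(vals) == 1:
        return vals.pop()              # leaf: the block's uniform value
    h = size // 2
    return [build_quadtree(img, x,     y,     h),
            build_quadtree(img, x + h, y,     h),
            build_quadtree(img, x,     y + h, h),
            build_quadtree(img, x + h, y + h, h)]

# A 4x4 image whose top-left quadrant is set collapses to one level.
img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
tree = build_quadtree(img, 0, 0, 4)
```

Set operations such as union and intersection can then be performed by walking two trees in lockstep, which is what the CBLQ and compact-IQ schemes accelerate.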
Recently, a wealth of algorithms for the efficient coding of 3D triangle meshes have been published. All these focus on achieving the most compact code for the connectivity data. The geometric data, i.e. the vertex co...
We study high-fidelity image compression with a given tight L∞ bound. We propose practical adaptive context modeling techniques to correct the prediction biases caused by quantizing prediction residues, a problem common to existing DPCM-type predictive near-lossless image coders. By incorporating the proposed techniques into the near-lossless version of CALIC, considered by many to be the state-of-the-art algorithm, we were able to increase its PSNR by 1 dB or more and/or reduce its bit rate by 10% or more. More encouragingly, at bit rates around 1.25 bpp or higher, our method obtained competitive PSNR results against the best L2-based wavelet coders while guaranteeing a much smaller L∞ bound.
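The proposed context-based bias correction is not reproduced here; the underlying near-lossless mechanism, a uniform quantizer on prediction residues that guarantees a per-pixel error of at most delta (the L∞ bound), can be sketched as:

```python
def quantize_residual(e, delta):
    """Uniform residual quantizer used in near-lossless DPCM coding:
    maps residual e to a bin index whose reconstruction differs from
    e by at most delta, so the pixel error is L-infinity bounded."""
    step = 2 * delta + 1
    if e >= 0:
        return (e + delta) // step
    return -((-e + delta) // step)

def dequantize_residual(q, delta):
    """Reconstruct the residual from its bin index (bin centre)."""
    return q * (2 * delta + 1)

# With delta = 2, every residual reconstructs within +/-2, and the
# residue alphabet shrinks by a factor of 2*delta + 1 = 5.
q = quantize_residual(7, 2)
r = dequantize_residual(q, 2)
```

The bias problem the paper addresses arises because the decoder predicts from these reconstructed (not original) pixels; its adaptive contexts estimate and subtract the resulting systematic error.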