ISBN (print): 9780780397538
Chain coding is widely used in a variety of image processing applications. In this paper, we present a new chain coding scheme: context-based relative-directional chain coding (CRCC). It applies novel context modeling with adaptive arithmetic coding to encode contour images. The proposed context modeling is based on the relative-directional chain representation. It provides a favorable conditional probability for the next pixel to be encoded, so that arithmetic coding is performed efficiently. The experimental results show that the CRCC scheme overall outperforms chain coding (CC), differential chain coding (DCC), and differential chain coding mode-8 (DCC-8). More importantly, the proposed CRCC scheme completely avoids the computational cost of eight-directional chain codes and the associated sequence of differences.
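For orientation, the sketch below only illustrates the conventional absolute (Freeman) chain codes and the relative-directional form the abstract refers to; it is not the CRCC context model itself, and all names are illustrative rather than taken from the paper.

```python
# Minimal sketch: absolute 8-directional chain codes and their relative
# (turn-based) form, assuming an 8-connected contour given as a list of
# (row, col) pixel coordinates.

# Freeman directions: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
STEP_TO_CODE = {
    (0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
    (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7,
}

def absolute_chain(contour):
    """Absolute 8-directional chain codes between consecutive pixels."""
    codes = []
    for (r0, c0), (r1, c1) in zip(contour, contour[1:]):
        codes.append(STEP_TO_CODE[(r1 - r0, c1 - c0)])
    return codes

def relative_chain(abs_codes):
    """Relative-directional codes: each code expressed as a turn (mod 8)
    with respect to the previous direction."""
    rel = [abs_codes[0]]                     # first code kept as-is
    for prev, cur in zip(abs_codes, abs_codes[1:]):
        rel.append((cur - prev) % 8)
    return rel

# Example: a short L-shaped contour
contour = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
abs_codes = absolute_chain(contour)          # [0, 0, 6, 6]
rel_codes = relative_chain(abs_codes)        # [0, 0, 6, 0]
print(abs_codes, rel_codes)
```

The relative codes concentrate probability mass on "keep going straight" and small turns, which is the kind of skew an adaptive arithmetic coder can exploit when the relative code is also used as the conditioning context.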
This paper proposes a new algorithm based on the Context-Tree Weighting (CTW) method for universal compression of a finite-alphabet sequence x_1^n with side information y_1^n available to both the encoder and decoder. We prove that, with probability one, the compression ratio converges to the conditional entropy rate for jointly stationary ergodic sources. Experimental results with Markov chains and English texts show the effectiveness of the algorithm.
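The following sketch is not the paper's full context-tree weighting construction; it only shows the basic ingredient, a Krichevsky-Trofimov estimator whose context combines the previous source symbol with the co-located side-information symbol, and accumulates the ideal code length. The context choice and names are our own assumptions.

```python
# Illustrative sketch: KT-estimated ideal code length for x given side
# information y, with context = (previous x symbol, current y symbol).
import math
from collections import defaultdict

def kt_codelength(x, y, alphabet_size=2):
    counts = defaultdict(lambda: [0] * alphabet_size)
    bits = 0.0
    prev = 0                                   # arbitrary initial context
    for xi, yi in zip(x, y):
        ctx = (prev, yi)                       # context = (x_{t-1}, y_t)
        c = counts[ctx]
        total = sum(c)
        # KT estimate: (count + 1/2) / (total + alphabet/2)
        p = (c[xi] + 0.5) / (total + alphabet_size / 2)
        bits += -math.log2(p)
        c[xi] += 1
        prev = xi
    return bits

x = [0, 1, 1, 0, 1, 1, 0, 1]
y = [0, 1, 1, 0, 1, 1, 0, 1]                   # strongly correlated side info
print(kt_codelength(x, y), "bits")
```

With strongly correlated side information the per-symbol cost drops quickly below one bit, which is the behaviour the conditional entropy rate result formalizes.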
This paper investigates the algorithmic complexity of rate-distortion optimization (RDO) and arithmetic coding in the H.264 video coding standard and proposes a hardware accelerator that reduces it by more than an order of magnitude. The accelerator incorporates arithmetic coding and decoding engines and efficiently handles all the context information required by RDO and CABAC in H.264. The bit stream generated by the hardware is equivalent to that generated by the JM 9.4 reference implementation. The ISA of a controlling scalar 32-bit RISC CPU has been extended with custom RDO/CABAC instructions, and the accelerator has been prototyped in state-of-the-art FPGA technology.
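As a reminder of the computation being accelerated, the tiny sketch below shows the Lagrangian mode decision at the heart of RDO; the candidate modes, costs, and lambda value are made up for illustration and are not from the paper.

```python
# Minimal sketch of Lagrangian mode decision: pick the mode minimizing
# J = D + lambda * R, which RDO evaluates for every candidate mode.

def rd_best_mode(candidates, lam):
    """candidates: iterable of (mode, distortion, rate_bits)."""
    return min(candidates, key=lambda m: m[1] + lam * m[2])

modes = [("intra16", 120.0, 48), ("intra4", 95.0, 110), ("skip", 180.0, 2)]
print(rd_best_mode(modes, lam=0.85))
```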
This paper details a scheme for lossless compression of short data series larger than 50 bytes. The method uses arithmetic coding and context modeling with a low-complexity data model. A data model that takes 32 kB of RAM already cuts the data size in half. The compression scheme takes only a few pages of source code, is scalable in memory size, and may be useful in sensor or cellular networks to save bandwidth. As we demonstrate, the method allows for battery savings when applied to mobile phones.
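How a context model could fit in roughly 32 kB is sketched below with one possible layout (128 hashed order-1 contexts, 256 one-byte counts each); the paper's actual model layout may differ, and the code only reports the ideal arithmetic-coding cost rather than producing a bitstream.

```python
# Hedged sketch of a low-complexity order-1 context model with a small
# count table (128 contexts x 256 symbols x 1-byte counts ~ 32 kB).
import math

N_CTX, ALPHABET = 128, 256
counts = [[1] * ALPHABET for _ in range(N_CTX)]    # Laplace-initialized counts
totals = [ALPHABET] * N_CTX

def estimated_bits(data: bytes) -> float:
    """Ideal arithmetic-coding cost: -sum log2 p(symbol | context)."""
    bits, ctx = 0.0, 0
    for b in data:
        p = counts[ctx][b] / totals[ctx]
        bits += -math.log2(p)
        counts[ctx][b] += 1
        totals[ctx] += 1
        if counts[ctx][b] >= 255:                  # halve so counts fit a byte
            counts[ctx] = [max(1, c >> 1) for c in counts[ctx]]
            totals[ctx] = sum(counts[ctx])
        ctx = b % N_CTX                            # hashed order-1 context
    return bits

sample = b"sensor reading: 23.4; sensor reading: 23.5; " * 4
print(len(sample) * 8, "->", round(estimated_bits(sample)), "bits (ideal)")
```

Even this crude model roughly halves the repetitive sample, which matches the order of magnitude the abstract reports for its 32 kB configuration.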
ISBN (print): 0819461172
In this paper, we present a joint source-channel decoding technique for CABAC-encoded data based on sequential decoding. The proposed method is designed to be compatible with CABAC features, i.e., binary arithmetic coding, adaptive probabilities, and context modeling. Redundancy due to the binarization step is exploited, and no extra redundancy is required. A hypothesis test that adjusts the trade-off between decoding complexity and error-resilience performance is proposed. Simulation results show that our method outperforms those using classical sequential algorithms.
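The sketch below shows only the generic sequential (stack) search over bit hypotheses with a Fano-like metric, assuming a memoryless bit prior and a binary symmetric channel; the paper's method would instead use CABAC's adaptive, context-dependent probabilities and the constraints left by binarization, and would add the proposed hypothesis test for pruning. All parameters and names here are illustrative.

```python
# Simplified stack (sequential) decoding sketch over a BSC.
import heapq, math

def stack_decode(received, p_flip=0.1, p_one=0.5, bias=0.05, max_pops=5000):
    n = len(received)
    heap = [(0.0, 0, ())]                     # (-metric, depth, path)
    for _ in range(max_pops):
        neg_m, depth, path = heapq.heappop(heap)
        if depth == n:
            return list(path)                 # best full-length path so far
        y = received[depth]
        for bit in (0, 1):
            p_src = p_one if bit else 1.0 - p_one
            p_ch = (1.0 - p_flip) if bit == y else p_flip
            # Fano-like branch metric: channel + source likelihood - bias
            m = -neg_m + math.log2(p_ch) + math.log2(p_src) - bias
            heapq.heappush(heap, (-m, depth + 1, path + (bit,)))
    return None                               # search budget exceeded

sent = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = sent.copy(); noisy[3] ^= 1            # one channel error
# With a flat source prior the best path simply tracks the received bits;
# in the paper, the redundancy left by binarization lets corrected paths win.
print(stack_decode(noisy))
```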
ISBN (print): 0819462896
Ultraspectral sounder data features strong correlations in disjoint spectral regions due to the same type of absorbing gases. This paper compares the compression performance of two robust data-preprocessing schemes, namely bias-adjusted reordering (BAR) and minimum spanning tree (MST) reordering, in the context of entropy coding. Both schemes can take advantage of the strong correlations to achieve higher compression gains. The compression methods consist of the BAR or MST preprocessing schemes followed by linear prediction with context-free or context-based arithmetic coding (AC). Compression experiments on the NASA AIRS ultraspectral sounder data set show that MST without bias adjustment produces lower compression ratios than BAR and bias-adjusted MST for both context-free and context-based AC. Bias-adjusted MST outperforms BAR for context-free arithmetic coding, whereas BAR outperforms MST for context-based arithmetic coding. BAR with context-based AC yields the highest average compression ratios in comparison with MST with context-free or context-based AC.
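A rough sketch of the bias-adjusted reordering idea, as we read it from the abstract, follows: channels are greedily ordered so that each is close, after removing a per-pair mean bias, to the previously selected channel, which keeps the prediction residuals small for the entropy coder. The exact criterion used in the paper may differ.

```python
# Hedged sketch of greedy bias-adjusted reordering of spectral channels.
import numpy as np

def bar_order(channels):
    """channels: 2-D array, one spectral channel per row."""
    n = len(channels)
    remaining = set(range(1, n))
    order = [0]                                # start from channel 0
    while remaining:
        ref = channels[order[-1]]
        best, best_cost = None, None
        for j in remaining:
            bias = np.mean(channels[j] - ref)  # bias adjustment
            cost = np.mean(np.abs(channels[j] - ref - bias))
            if best_cost is None or cost < best_cost:
                best, best_cost = j, cost
        order.append(best)
        remaining.remove(best)
    return order

rng = np.random.default_rng(0)
base = rng.normal(size=64)
chans = np.stack([base * (1 + 0.01 * k) + k for k in range(6)])
print(bar_order(chans))
```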
ISBN (print): 9781424400379
Binary image compression is desirable for a wide range of applications, such as digital libraries, map archives, fingerprint databases, and facsimile. In this paper, we present a new, highly efficient algorithm for lossless binary image compression. The proposed algorithm introduces a new method, Direct Redundancy Elimination, to efficiently exploit the two-dimensional redundancy of an image, as well as a novel Dynamic Context Model to improve the efficiency of arithmetic coding. Simulation results show that the proposed algorithm achieves compression ratios comparable to the JBIG standard, and in many cases it outperforms JBIG.
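For context, the sketch below shows plain template-based context modeling for a binary image, in the spirit of the JBIG baseline the paper compares against; the paper's Direct Redundancy Elimination and Dynamic Context Model are not reproduced here, and the 4-pixel template is our own choice.

```python
# Rough sketch: 4-pixel causal template context model for a binary image,
# reporting the ideal arithmetic-coding cost.
import math
import numpy as np

def ideal_bits(img):
    """img: 2-D numpy array of 0/1 values."""
    h, w = img.shape
    counts = np.ones((16, 2))                  # Laplace-smoothed counts
    bits = 0.0
    for r in range(h):
        for c in range(w):
            left = img[r, c - 1] if c > 0 else 0
            up   = img[r - 1, c] if r > 0 else 0
            upl  = img[r - 1, c - 1] if r > 0 and c > 0 else 0
            upr  = img[r - 1, c + 1] if r > 0 and c + 1 < w else 0
            ctx = left | (up << 1) | (upl << 2) | (upr << 3)
            x = img[r, c]
            p = counts[ctx, x] / counts[ctx].sum()
            bits += -math.log2(p)
            counts[ctx, x] += 1
    return bits

img = np.zeros((32, 32), dtype=int)
img[8:24, 8:24] = 1                            # a simple filled square
print(img.size, "pixels ->", round(ideal_bits(img)), "bits (ideal)")
```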
In this paper, a novel maximum a posteriori (MAP) estimation approach is employed for error correction of arithmetic codes with a forbidden symbol. The system is founded on the principle of joint source-channel coding, which unifies the arithmetic decoding and error correction tasks into a single process, with superior performance compared to traditional separated techniques. The proposed system improves error-correction performance with respect to a separate source and channel coding approach based on convolutional codes, with the additional advantage of complete flexibility in adjusting the coding rate. The proposed MAP decoder is tested on image transmission over the additive white Gaussian noise channel and compared against standard forward error correction techniques in terms of performance and complexity. Both hard and soft decoding are considered, and excellent results in terms of packet error rate and decoded image quality are obtained.
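The forbidden-symbol mechanism itself can be sketched briefly: a slice of width eps of every coding interval is never used by the encoder, so a decoder that lands in that slice knows an error has occurred. Only this detection mechanism is shown below, with float precision and a binary memoryless source for simplicity; the paper combines it with MAP sequential estimation.

```python
# Minimal float-precision sketch of the forbidden symbol in arithmetic coding.
def encode(bits, p0=0.6, eps=0.05):
    lo, hi = 0.0, 1.0
    for b in bits:
        span = (hi - lo) * (1.0 - eps)          # reserve the forbidden gap
        if b == 0:
            hi = lo + span * p0
        else:
            lo, hi = lo + span * p0, lo + span
    return (lo + hi) / 2.0                      # any value in the final interval

def decode(value, n, p0=0.6, eps=0.05):
    lo, hi, out = 0.0, 1.0, []
    for _ in range(n):
        span = (hi - lo) * (1.0 - eps)
        if value >= lo + span:                  # landed in the forbidden gap
            return out, True                    # error detected
        if value < lo + span * p0:
            out.append(0); hi = lo + span * p0
        else:
            out.append(1); lo, hi = lo + span * p0, lo + span
    return out, False

v = encode([0, 1, 0, 0, 1])
print(decode(v, 5))                             # ([0, 1, 0, 0, 1], False)
print(decode(0.97, 5))                          # value in forbidden gap -> ([], True)
```

Larger eps detects errors earlier at the cost of added redundancy, which is exactly the rate-flexibility knob the abstract highlights.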
An improvement scheme, named the Two-Pass Improved Encoding Scheme (TIES), is proposed in this paper for image compression. It extends the existing concept of fractal image compression (FIC), which capitalizes on the self-similarity within a given image to be compressed. We first briefly review the existing image compression technology based on FIC before establishing the concept behind the TIES scheme. We then devise an effective encoding and decoding algorithm for the implementation of TIES by considering the domain pool of an image, domain block transformation, scaling and intensity variation, range block approximation using linear combinations, and finally the use of an arithmetic compression algorithm to store the final data as close to the source entropy as possible. We conclude by explicitly comparing the performance of this implementation of the TIES algorithm against that of FIC under the same conditions.
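A very reduced sketch of the fractal-coding step both FIC and TIES build on follows: each range block is matched against a pool of spatially contracted domain blocks by finding the best scale/offset fit in the least-squares sense. The TIES refinements (linear combinations of domain blocks, the two-pass structure, arithmetic coding of the parameters) are not reproduced here, and block sizes and data are illustrative.

```python
# Sketch of range/domain block matching in fractal image compression.
import numpy as np

def contract(block):
    """Average 2x2 neighbourhoods to halve the domain block size."""
    return block.reshape(block.shape[0] // 2, 2, block.shape[1] // 2, 2).mean(axis=(1, 3))

def best_match(range_block, domain_blocks):
    best = None
    for idx, dom in enumerate(domain_blocks):
        d = contract(dom).ravel()
        r = range_block.ravel()
        s, o = np.polyfit(d, r, 1)              # least-squares r ~ s*d + o
        err = np.mean((s * d + o - r) ** 2)
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best                                 # (error, domain index, scale, offset)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(16, 16)).astype(float)
ranges = [img[r:r+4, c:c+4] for r in range(0, 16, 4) for c in range(0, 16, 4)]
domains = [img[r:r+8, c:c+8] for r in range(0, 9, 4) for c in range(0, 9, 4)]
print(best_match(ranges[0], domains))
```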
In this paper, a large-alphabet-oriented scheme is proposed for both Chinese and English text compression. Our scheme parses Chinese text with the alphabet defined by the Big-5 code, and parses English text with rules designed here; thus the alphabet used for English is not a word alphabet. After a token is parsed out of the input text, zero-, first-, and second-order Markov models are used to estimate the occurrence probability of this token. The estimated probabilities are then blended and accumulated in order to perform arithmetic coding. To implement arithmetic coding under a large alphabet and probability-blending conditions, a way to partition the count-value range is studied. Our scheme has been implemented as a software package. Typical Chinese and English text files were compressed to study the influence of alphabet size and prediction order. On average, our compression scheme reduces a text file to 33.9% of its size for Chinese and to 23.3% for English. These rates are comparable with or better than those obtained by popular data compression packages. Copyright (C) 2005 John Wiley & Sons, Ltd.
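The blending step described in the abstract can be sketched as follows: zero-, first-, and second-order estimates of the next token's probability are mixed into a single distribution that an arithmetic coder could then use. The mixing weights and the smoothing below are our own assumptions, not the paper's.

```python
# Hedged sketch of blending zero/first/second-order Markov estimates.
import math
from collections import defaultdict

class BlendedModel:
    def __init__(self, weights=(0.2, 0.3, 0.5)):
        self.w = weights
        self.counts = [defaultdict(lambda: defaultdict(int)) for _ in range(3)]
        self.vocab = set()

    def _p(self, order, ctx, tok):
        table = self.counts[order][ctx]
        total = sum(table.values())
        v = max(len(self.vocab), 1)
        return (table[tok] + 1) / (total + v)   # Laplace smoothing

    def prob(self, history, tok):
        ctxs = ((), tuple(history[-1:]), tuple(history[-2:]))
        return sum(w * self._p(k, c, tok)
                   for k, (w, c) in enumerate(zip(self.w, ctxs)))

    def update(self, history, tok):
        self.vocab.add(tok)
        for k, c in enumerate(((), tuple(history[-1:]), tuple(history[-2:]))):
            self.counts[k][c][tok] += 1

tokens = "the cat sat on the mat the cat sat".split()
model, bits, hist = BlendedModel(), 0.0, []
for t in tokens:
    bits += -math.log2(model.prob(hist, t))     # ideal arithmetic-coding cost
    model.update(hist, t)
    hist.append(t)
print(round(bits, 1), "bits for", len(tokens), "tokens")
```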