This paper proposes a fitting-by-splitting (FS) algorithm, composed of an index-splitting (IS) algorithm and a probability-fitting (PF) algorithm, to efficiently realize a finite-precision arithmetic coding (AC) code, named the FS-AC code. The FS algorithm generates FS-AC codes of arbitrarily specified length without post-appended sentinel symbols or pre-affixed side-information bits. The IS process splits the input symbols into paired indices so that the residual information space can be reused as effectively as possible at the end of arithmetically encoding a fixed-precision code. The PF process runs after each IS operation to enhance the reuse efficiency via a fast adaptation of the probability table. By integrating the IS and PF processes, the coding efficiency of the proposed finite-precision AC codec approaches that of an unlimited-precision AC codec, especially for the proposed binary AC codecs. Moreover, consecutive FS-AC codes are mutually independent, so error propagation is almost entirely confined to the corrupted AC code. Hence, the new AC codecs are well suited to generating finite-precision AC codes in high-speed networks. (c) 2004 Elsevier Inc. All rights reserved.
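As a point of reference for the finite-precision problem the FS algorithm addresses, the sketch below implements an unlimited-precision binary arithmetic coder using exact rational arithmetic (the probability `P0` and the message are illustrative values, not taken from the paper). A fixed-precision implementation must instead track the interval with finite-width integers, which is what creates the residual information space the IS/PF processes try to reuse:

```python
from fractions import Fraction

# Static binary source model (assumed for illustration)
P0 = Fraction(3, 4)  # probability of symbol 0

def encode(bits):
    """Narrow [low, low+width) once per symbol: symbol 0 takes the
    lower P0 fraction of the interval, symbol 1 the rest."""
    low, width = Fraction(0), Fraction(1)
    for b in bits:
        if b == 0:
            width *= P0
        else:
            low += width * P0
            width *= (1 - P0)
    return low, width

def decode(point, n):
    """Recover n symbols from any point inside the final interval."""
    out, low, width = [], Fraction(0), Fraction(1)
    for _ in range(n):
        split = low + width * P0
        if point < split:
            out.append(0)
            width *= P0
        else:
            out.append(1)
            low = split
            width *= (1 - P0)
    return out

msg = [0, 0, 1, 0, 1, 0, 0, 0]
low, width = encode(msg)
tag = low + width / 2  # any point in the final interval identifies msg
assert decode(tag, len(msg)) == msg
```

With Python's exact `Fraction` type the interval never loses precision; a real codec replaces it with fixed-width integers plus renormalization, and must then solve the termination problem that the FS algorithm targets.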
In video compression, the throughput of entropy encoders based on arithmetic coding is limited. This article presents the architecture of an entropy coder able to process many more binary symbols per clock cycle than previous works. The architecture takes advantage of the multisymbol implementation of the binary arithmetic coder (BAC) developed earlier. To keep pace with the high throughput of the BAC, fast implementations of binarization, context modeling, and probability model (PM) update are developed. The main improvement in symbol rate stems from decomposing the processing path into many parallel ones. Critical paths associated with state transitions are shortened, since each path updates the PM for only one context selected in each clock cycle. The negative impact on the symbol rate is compensated by context-based symbol reordering. Although the paths have variable bin/symbol rates, the applied buffering strategy improves the continuity of the two data streams directed to the BAC, separately for context-coded and bypass-mode symbols. The entropy coder, synthesized for the 90-nm TSMC technology, consumes 273 k gates and operates at 570 MHz. It achieves an average symbol rate of 13.08 bins per clock cycle and a throughput of 7455 Mbins/s for high-quality H.265/HEVC compression.
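The stream-separation idea behind the buffering strategy can be shown with a toy model (the bin records and queue names are assumptions for illustration, not the paper's hardware): context-coded and bypass bins are routed to independent queues, so a burst of one kind does not stall the other path feeding the BAC:

```python
from collections import deque

# Toy bin records: (is_bypass, bin_value); illustrative data only
bins = [(False, 1), (True, 0), (False, 0), (True, 1), (True, 1), (False, 1)]

context_q, bypass_q = deque(), deque()
for is_bypass, b in bins:
    # Route each bin to its own buffer; the BAC drains the two
    # streams independently, smoothing variable per-path bin rates.
    (bypass_q if is_bypass else context_q).append(b)

assert list(context_q) == [1, 0, 1]
assert list(bypass_q) == [0, 1, 1]
```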
In JPEG image compression algorithms, an entropy coder encodes an end-of-block (EOB) marker for each discrete cosine transform (DCT) block. The well-known Huffman coder encodes one such marker per block. However, the binary arithmetic coder, which provides greater compression but is not as well known, encodes several EOB tests, one after each nonzero AC coefficient of the block. We present a modification to the JPEG arithmetic coding algorithm that reduces the number of EOB tests. Our algorithm encodes two extra zero AC coefficients for a small percentage of blocks, but it encodes fewer EOB tests. As a result, we reduce the total image code size by about 1.4% on average. In the process, we also study the effectiveness of the DCT and the statistical models of this coder. (C) 2004 Society of Photo-Optical Instrumentation Engineers.
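The EOB-test count that the paper reduces can be tallied in a few lines. The coefficient data is toy input, and the counting rule follows the abstract's description (one test per nonzero AC coefficient) rather than the full JPEG Annex F procedure:

```python
# Zigzag-ordered AC coefficients of one toy 8x8 DCT block (63 values)
ac = [5, -2, 0, 1, 0, 0, 3] + [0] * 56

# Huffman coding: a single EOB codeword terminates the block
huffman_eob_markers = 1

# Arithmetic coding (per the abstract): one EOB decision after each
# nonzero AC coefficient; the last decision fires "true"
arith_eob_tests = sum(1 for c in ac if c != 0)

assert huffman_eob_markers == 1
assert arith_eob_tests == 4
```

The paper's trade-off is visible here: coding a couple of extra zero coefficients in some blocks is cheap if it lets the coder skip some of these per-coefficient EOB decisions.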
ISBN (print): 9781424418343
The context modeling algorithm for motion vectors presented in this paper improves on the one currently used in Context-Based Adaptive Binary Arithmetic Coding (CABAC). In our algorithm, when coding the vertical motion vector difference (MVD) component, we take advantage of the already coded horizontal MVD component as one of the references for context modeling. Moreover, we adopt different schemes according to the encoding partition sizes. For small block sizes, we consider only the correlation among neighboring blocks, whereas for large block sizes, we also exploit the inter-correlation between the two MVD components of the current block to improve the probability estimation of symbols. These strategies enhance the accuracy of context model selection in motion vector coding and thus improve the efficiency of the context-based arithmetic coder. Experimental results show that the proposed algorithm improves compression performance compared with the original CABAC scheme.
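For reference, the baseline rule the paper refines selects the context of an MVD component's first bin from the magnitudes of the neighboring blocks' MVDs. A sketch of the H.264/AVC CABAC version (thresholds 3 and 32, per the standard) is:

```python
def mvd_ctx(mvd_left, mvd_above):
    """Context index increment for the first bin of one MVD component,
    following the H.264/AVC CABAC neighbor-based rule."""
    e = abs(mvd_left) + abs(mvd_above)
    if e < 3:
        return 0      # neighbors nearly static: expect a small MVD
    if e <= 32:
        return 1      # moderate neighboring motion
    return 2          # large neighboring motion: expect a large MVD

assert mvd_ctx(0, 1) == 0
assert mvd_ctx(4, 10) == 1
assert mvd_ctx(40, 5) == 2
```

The paper's extension would add the already coded horizontal MVD of the same block as a further input when selecting the context for the vertical component; that variant is not reproduced here.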
ISBN (print): 0780391284
To combat the effects of errors in arithmetic code streams, a forbidden symbol is added to the symbol set to give the arithmetic codes error detection capability. Specifically, by splitting the subinterval occupied by the forbidden symbol into two parts and placing one at each end of the encoding interval, the error detection capability is improved without any loss of compression efficiency. Simulation tests show that this algorithm can detect errors more quickly than previously proposed methods.
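The basic forbidden-symbol mechanism (before the paper's interval-splitting refinement) can be sketched with exact rational arithmetic. `EPS` and `P0` are illustrative values: a slice `EPS` of every interval is reserved for the forbidden symbol, which a valid encoder never emits, so a decoded point landing in that slice signals corruption:

```python
from fractions import Fraction

EPS = Fraction(1, 16)  # interval fraction reserved for the forbidden symbol
P0 = Fraction(1, 2)    # P(symbol 0) within the usable part

def encode(bits):
    """Real symbols share only (1-EPS) of each interval; the top EPS
    slice is the forbidden region, never produced by a valid encoder."""
    low, width = Fraction(0), Fraction(1)
    for b in bits:
        usable = width * (1 - EPS)
        if b == 0:
            width = usable * P0
        else:
            low += usable * P0
            width = usable * (1 - P0)
    return low + width / 2  # a point inside the final valid interval

def decode(tag, n):
    low, width, out = Fraction(0), Fraction(1), []
    for _ in range(n):
        usable = width * (1 - EPS)
        if tag >= low + usable:
            # The point fell into the forbidden slice: corruption detected
            raise ValueError("forbidden symbol decoded: error detected")
        split = low + usable * P0
        if tag < split:
            out.append(0)
            width = usable * P0
        else:
            out.append(1)
            low = split
            width = usable * (1 - P0)
    return out

msg = [1, 0, 1, 1, 0]
assert decode(encode(msg), len(msg)) == msg  # clean stream round-trips
```

Each decoded symbol gives a corrupted stream another chance (probability roughly `EPS` per step) to hit the forbidden region, so detection latency falls as `EPS` grows, at the cost of rate; the paper's two-sided placement shortens that latency without changing the rate.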
Authors: Pranitha, K.; Kavya, G.
Anna Univ, SA Engn Coll, Dept Informat & Commun Engn, Chennai 600077, India
Anna Univ, SA Engn Coll, Dept Elect & Commun Engn, Chennai 600077, India
Satellite communication is popular due to advances in storing and transmitting satellite images. Image compression can be defined as the science of lowering the number of bits necessary to represent an image. In satellite image processing, image degradation is considered a challenging problem. The Consultative Committee for Space Data Systems (CCSDS) Image Data Compression (IDC) standard CCSDS-122.0-B-1 is a transform-based image compression method developed especially for use on on-board space platforms. It comprises de-correlation, quantization, and entropy encoding phases. This paper introduces a new optimized memory organization for the Discrete Wavelet Transform (DWT) to perform spatial de-correlation with lower memory requirements on an FPGA device. The proposed optimized DWT is also integrated with a hybrid post-processing and entropy encoder module to reduce the spatial redundancies between the wavelet coefficients and compress the de-correlated data with high compression performance. A high-throughput hardware implementation of the Binary Arithmetic Entropy Coder (BAEC) is also provided to perform lossless compression with low implementation complexity. For convenience and compactness, the proposed system is implemented in the Xilinx environment using Verilog code. The proposed model is then evaluated on the Arty Z7-20 development board and analyzed on performance parameters such as throughput and frequency. The proposed design achieves a maximum operating frequency of 250 MHz, leading to a throughput of 156.25 Msamples/s on the Zynq device. In addition, the proposed structure outperforms the conventional design in terms of memory requirements and area.
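The spatial de-correlation step can be illustrated with one level of an integer lifting DWT. The sketch below uses the reversible LeGall 5/3 wavelet (the JPEG 2000 integer filter) as a simpler stand-in for the integer transform specified by CCSDS-122; the input signal is toy data:

```python
def dwt53(x):
    """One level of the reversible LeGall 5/3 lifting transform.
    Even-length input, symmetric boundary extension."""
    n = len(x)
    # Predict step: highpass d[i] from each odd sample and its even neighbors
    d = [x[2 * i + 1] - (x[2 * i] + x[min(2 * i + 2, n - 2)]) // 2
         for i in range(n // 2)]
    # Update step: lowpass s[i] from each even sample and highpass neighbors
    s = [x[2 * i] + (d[max(i - 1, 0)] + d[i] + 2) // 4
         for i in range(n // 2)]
    return s, d

def idwt53(s, d):
    """Exact inverse: undo the update step, then the predict step."""
    n = 2 * len(s)
    x = [0] * n
    for i in range(n // 2):
        x[2 * i] = s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4
    for i in range(n // 2):
        x[2 * i + 1] = d[i] + (x[2 * i] + x[min(2 * i + 2, n - 2)]) // 2
    return x

sig = [10, 12, 14, 13, 9, 8, 7, 11]
s, d = dwt53(sig)
assert idwt53(s, d) == sig  # integer lifting is perfectly reversible
```

Because each lifting step is undone exactly in integer arithmetic, the transform is lossless, which is the property the lossless CCSDS coding path relies on; a hardware design like the paper's additionally reorganizes the line buffers these neighbor accesses imply.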