"Virtual Sliding Window" algorithm presented in this paper is an adaptive mechanism for estimating the probability of ones at the output of binary non-stationary sources. It is based on "Imaginary slidi...
详细信息
ISBN (print): 1424402158
"Virtual Sliding Window" algorithm presented in this paper is an adaptive mechanism for estimating the probability of ones at the output of binary non-stationary sources. It is based on "Imaginary sliding window" idea proposed by ***. The proposed algorithm was used as an alternative adaptation mechanism in Context-Based Adaptive binary arithmetic coding (CABAC) - an entropy coding scheme of H.264/AVC standard for video compression. The "virtual sliding window" algorithm was integrated into an open-source codec supporting H.264/AVC standard. Comparison of the "virtual sliding window" algorithm with the original adaptation mechanism from CABAC is presented. Test results for standard video sequences are included. These results indicate that using the proposed algorithm improves rate-distortion performance compared to the original CABAC adaptation mechanism. Besides improvement in rate-distortion performances the "Virtual Sliding Window" algorithm has one more advantage. CABAC uses a finite state machine (FSM) for estimation of the probability of ones at the output of a binary source. Transitions for FSM are defined by a table stored in memory. The disadvantage of CABAC consists infrequent reference to this table (one time for every binary symbol encoding), which is critical for DSP implementation. The "Virtual Sliding Window" algorithm allows to avoid using the table of transitions.
ISBN (print): 9781424418343
The context modeling algorithm for motion vectors presented in this paper is an improvement on the one currently used in Context-Based Adaptive Binary Arithmetic Coding (CABAC). In our algorithm, when coding the vertical motion vector difference (MVD) component, we exploit the already coded horizontal MVD component as one of the references for context modeling. Moreover, we adopt different schemes according to the encoding partition size. For small block sizes, we consider only the correlation among neighboring blocks, whereas for large block sizes we also employ the correlation between the two MVD components of the current block to improve the probability estimation of symbols. These strategies enhance the accuracy of context model selection in motion vector coding and thus improve the efficiency of the context-based arithmetic coder. Experimental results show that the proposed algorithm improves compression performance compared to the original CABAC scheme.
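Neighbor-based context selection of the baseline kind being improved here can be sketched as follows. This is a generic illustration in the spirit of CABAC's context derivation for MVD, not the paper's proposed scheme; the thresholds 3 and 32 follow the values commonly cited for the standard.

```python
def mvd_context(abs_mvd_left: int, abs_mvd_top: int) -> int:
    """Pick one of three contexts from the neighbors' coded |MVD| values."""
    e = abs_mvd_left + abs_mvd_top
    if e < 3:
        return 0   # neighbors nearly still: expect a small MVD
    if e > 32:
        return 2   # large neighbor motion: expect a large MVD
    return 1       # intermediate case

print(mvd_context(0, 1), mvd_context(10, 5), mvd_context(40, 2))  # 0 1 2
```

The paper's contribution is to extend the reference set: for large partitions the already coded horizontal MVD of the same block would enter this decision alongside the neighbors.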
Context-Based Adaptive Binary Arithmetic Coding (CABAC), a normative part of the new ITU-T/ISO/IEC standard H.264/AVC for video compression, is presented. By combining an adaptive binary arithmetic coding technique with context modeling, a high degree of adaptation and redundancy reduction is achieved. The CABAC framework also includes a novel low-complexity method for binary arithmetic coding and probability estimation that is well suited for efficient hardware and software implementations. CABAC significantly outperforms the baseline entropy coding method of H.264/AVC for the typical range of envisaged target applications. For a set of test sequences representing typical broadcast material and for a range of acceptable video quality of about 30 to 38 dB, average bit-rate savings of 9%-14% are achieved.
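The core of any binary arithmetic coder of this family is the interval subdivision step. The following is a generic fixed-point sketch, not the bit-exact H.264/AVC M-coder: the current range is split according to the estimated probability of the least probable symbol (LPS).

```python
def encode_step(low: int, rng: int, p_lps_scaled: int, bit: int, lps_bit: int):
    """One interval-subdivision step; p_lps_scaled is P(LPS) * 2**15."""
    r_lps = max(1, (rng * p_lps_scaled) >> 15)  # LPS share of the range
    if bit == lps_bit:
        low += rng - r_lps   # LPS takes the upper part of the interval
        rng = r_lps
    else:
        rng -= r_lps         # MPS keeps the lower part
    return low, rng

low, rng = 0, 1 << 16
low, rng = encode_step(low, rng, 1 << 13, bit=0, lps_bit=1)  # MPS coded
print(low, rng)  # 0 49152
```

CABAC's low-complexity contribution is to replace the multiplication in `r_lps` with a small table lookup indexed by the probability state and two bits of the range.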
This paper proposes a fitting-by-splitting (FS) algorithm, composed of an index-splitting (IS) algorithm and a probability-fitting (PF) algorithm, to effectively achieve a finite-precision arithmetic coding code named the FS-AC code. The FS algorithm generates FS-AC codes of arbitrarily specified length without the need for a post-appended sentinel symbol or pre-affixed side-information bits. The IS process splits the input symbols into paired indices so that the residual information space is reused as effectively as possible at the end of arithmetically encoding a fixed-precision code, and the PF process runs after each IS operation to enhance this reuse efficiency via a fast adaptation of the probability table. Through the integration of the IS and PF processes, the coding efficiency of the proposed finite-precision AC codec comes close to that of an unlimited-precision AC codec, especially for our proposed binary AC codecs; moreover, consecutive FS-AC codes are mutually independent, so error propagation is largely confined to the affected AC code. Hence, the new AC codecs are well suited to generating finite-precision AC codes in high-speed networks. (c) 2004 Elsevier Inc. All rights reserved.
ISBN (print): 0780391284
To combat the effects of errors in arithmetic code streams, a forbidden symbol is added to the symbol set to give the arithmetic code error detection capability. Specifically, by splitting the subinterval occupied by the forbidden symbol into two parts and placing one at each end of the encoding interval, the error detecting capability is improved without loss of compression efficiency. Simulation tests show that this algorithm detects errors more quickly than previously proposed methods.
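The detection idea can be sketched on the decoder side: a small probability mass is reserved for the forbidden symbol, and here (following the abstract's split-interval idea) that mass sits in two halves at both ends of the unit interval. This is a floating-point illustration; `EPS` is an assumed value, not one from the paper.

```python
EPS = 0.02   # total mass reserved for the forbidden symbol (assumed)

def classify(v: float) -> str:
    """Map a decoder value in [0, 1) to a symbol or an error flag."""
    half = EPS / 2
    if v < half or v >= 1.0 - half:
        return "ERROR"   # value fell into a forbidden end region
    # the remaining mass (1 - EPS) is shared equally by the two symbols
    return "0" if v < half + (1.0 - EPS) * 0.5 else "1"

print(classify(0.005), classify(0.3), classify(0.7), classify(0.995))
# ERROR 0 1 ERROR
```

Since a valid encoder never emits the forbidden symbol, any decoder value landing in an end region proves the stream was corrupted; placing the forbidden mass at both ends is what shortens the expected detection delay.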
In JPEG image compression, an entropy coder encodes an end-of-block (EOB) marker for each discrete cosine transform (DCT) block. The well-known Huffman coder encodes one such marker per block. The binary arithmetic coder, which provides greater compression but is not as well known, instead encodes several EOB tests, one after each nonzero AC coefficient of the block. We present a modification to the JPEG arithmetic coding algorithm that reduces the number of EOB tests. Our algorithm encodes two extra zero AC coefficients for a small percentage of blocks, but encodes fewer EOB tests; as a result, the total image code size is reduced by about 1.4% on average. In the process, we also study the effectiveness of the DCT and the statistical models of this coder. (C) 2004 Society of Photo-Optical Instrumentation Engineers.
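The cost the paper targets can be counted directly: per the abstract, the arithmetic coder emits one EOB decision after each nonzero AC coefficient, and nothing is coded past the last nonzero one. A minimal counting sketch (illustrative helper, not from the paper):

```python
def eob_tests(ac):
    """Count EOB decisions for one block's zig-zag AC coefficients:
    one decision follows each nonzero coefficient (per the abstract)."""
    last = max((i for i, c in enumerate(ac) if c != 0), default=-1)
    return sum(1 for c in ac[: last + 1] if c != 0)

print(eob_tests([5, 0, -2, 0, 0, 1, 0, 0]))  # 3
```

A block with many scattered nonzero coefficients therefore pays many EOB decisions, which is why trading two extra zero coefficients for fewer tests can shrink the total code size.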
We describe an alternative mechanism for approximate binary arithmetic coding. The quantity that is approximated is the ratio between the probabilities of the two symbols. Analysis shows that the inefficiency so introduced is less than 0.7% on average, and in practice the compression loss is negligible.
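One common way to approximate a probability ratio is by a power of two, so the coder's range split becomes a shift instead of a multiply. This is an assumed variant for illustration only; the paper's exact approximation may differ.

```python
import math

def shift_for_ratio(p_lps: float) -> int:
    """Pick k so that 2**-k is closest (in log scale) to p_lps / p_mps."""
    ratio = p_lps / (1.0 - p_lps)
    return max(1, round(-math.log2(ratio)))

def split_range(rng: int, k: int):
    """Split the coder's range using the shift approximation."""
    r_lps = rng >> k
    return rng - r_lps, r_lps   # (MPS part, LPS part)

k = shift_for_ratio(0.2)        # ratio 0.25 maps to k = 2
print(k, split_range(1 << 16, k))  # 2 (49152, 16384)
```

The cited sub-0.7% inefficiency measures how far such a quantized ratio can sit from the true one before the code-length penalty becomes noticeable.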