Arithmetic coding is the most powerful technique for statistical lossless encoding and has attracted much attention in recent years. In this paper, we present a new implementation of bit-level arithmetic coding that uses only integer additions and shifts. The new algorithm has lower computational complexity and is more flexible to use, and is therefore well suited to both software and hardware design. We also discuss the application of the algorithm to data encryption.
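The general idea of shift-and-add arithmetic coding can be illustrated with a minimal sketch (this is an illustrative simplification with assumed parameter names, not the paper's algorithm): the coding interval is held in fixed-width integers, the probability split is approximated by a right shift, and renormalization emits matching leading bits.

```python
# Minimal integer-only binary arithmetic encoder (illustrative sketch).
# The interval [low, high] lives in PRECISION-bit integers; the split
# point is computed with a shift, so no multiplications are needed.
# Underflow (E3) handling is omitted for brevity.

PRECISION = 16
FULL = (1 << PRECISION) - 1
HALF = 1 << (PRECISION - 1)

def encode(bits, p0_shift=1):
    """Encode a bit sequence; P(0) is approximated as 2**-p0_shift."""
    low, high, out = 0, FULL, []
    for b in bits:
        # Split the interval with a shift instead of a multiplication.
        split = low + ((high - low) >> p0_shift)
        if b == 0:
            high = split
        else:
            low = split + 1
        # Renormalize: while the leading bits agree, emit and shift.
        while (low & HALF) == (high & HALF):
            out.append((low & HALF) >> (PRECISION - 1))
            low = (low << 1) & FULL
            high = ((high << 1) & FULL) | 1
    # Flush two bits to disambiguate the final interval.
    out.append((low & HALF) >> (PRECISION - 1))
    out.append(1 - out[-1])
    return out
```

A matching decoder would mirror the same shift-based split, which is what makes this style attractive for hardware.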
In this article, we present a comparative study of a new compression approach based on the discrete cosine transform (DCT) and the discrete wavelet transform (DWT). We seek the transform best suited to vector quantization for compressing EMG signals. To do this, we first combined vector quantization with the DCT, then vector quantization with the DWT. The coding phase uses SPIHT (set partitioning in hierarchical trees) coding combined with arithmetic coding. The method is demonstrated and evaluated on actual EMG data. Objective performance metrics are presented: compression factor, percentage root mean square difference and signal-to-noise ratio. The results show that the DWT-based method is more efficient than the DCT-based method.
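The two distortion metrics named in the abstract have standard definitions; a small sketch of both (assuming equal-length sequences of samples, pure Python for clarity):

```python
import math

def prd(x, y):
    """Percentage root-mean-square difference between the original
    signal x and its reconstruction y (lower is better)."""
    num = sum((a - b) ** 2 for a, b in zip(x, y))
    den = sum(a * a for a in x)
    return 100.0 * math.sqrt(num / den)

def snr_db(x, y):
    """Signal-to-noise ratio in dB of reconstruction y against x
    (higher is better)."""
    num = sum(a * a for a in x)
    den = sum((a - b) ** 2 for a, b in zip(x, y))
    return 10.0 * math.log10(num / den)
```

Comparing compression pipelines at a fixed compression factor then reduces to comparing these two numbers on the reconstructed EMG signals.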
In this paper, a novel state-based dynamic multi-alphabet arithmetic coding algorithm that adapts efficiently to locally occurring symbol statistics is presented. The proposed algorithm is applicable to sources such as raw or transformed images that locally produce a small subset of symbols from a large alphabet, or to any other source characterized by a very large alphabet and highly skewed distributions. Its performance is compared with two standard entropy coding schemes that have recently appeared in the literature, CA-2D-VLC and CABAC, in terms of compression ratio, peak signal-to-noise ratio and subjective quality. The simulation results are encouraging, paving the way for further research and a hardware implementation of the algorithm. The proposed algorithm achieves an increase in compression ratio of about 45% over the compared standards at similar peak signal-to-noise ratio and subjective quality.
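The kind of adaptive symbol model a multi-alphabet arithmetic coder consumes can be sketched as follows (a hypothetical simplification, not the paper's state-based algorithm): frequency counts adapt so that the symbols occurring locally quickly come to dominate the cumulative-frequency table the coder uses to split its interval.

```python
class AdaptiveModel:
    """Adaptive frequency model over a fixed alphabet. Every symbol
    starts with count 1 so that unseen symbols remain encodable."""

    def __init__(self, alphabet_size):
        self.counts = [1] * alphabet_size
        self.total = alphabet_size

    def interval(self, symbol):
        """Return (low, high, total): the cumulative-frequency slice an
        arithmetic coder would use to narrow its range for this symbol."""
        low = sum(self.counts[:symbol])
        return low, low + self.counts[symbol], self.total

    def update(self, symbol):
        """Adapt after coding one symbol."""
        self.counts[symbol] += 1
        self.total += 1

# Usage: after three local occurrences of symbol 65 out of a 256-symbol
# alphabet, its slice of the range has grown from 1/256 to 4/260.
m = AdaptiveModel(256)
for s in [65, 65, 66, 65]:
    m.update(s)
```

Real coders replace the linear cumulative sum with a Fenwick tree or similar structure to keep per-symbol cost logarithmic in the alphabet size.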
In the arithmetic coding of multilevel gray-scale images (128 levels), a fixed model for the gray-scale image has until now been applied. This paper proposes a method to improve the compression ratio by introducing the minimum description length (MDL) principle into the encoder, describing the local properties of the image more precisely and adaptively selecting the probabilistic model. In the course of this development, a practical form of the MDL criterion is presented that can be applied adaptively during encoding as an approximation to the criterion proposed by Rissanen. A problem in the arithmetic coding of multilevel gray-scale images with a probabilistic model having a large number of parameters is deterioration of the compression ratio. To solve this problem, this paper proposes a convergence acceleration method for the average code length in the arithmetic encoding of high-level gray-scale images. Based on the proposed acceleration method, the MDL criterion is then introduced into the arithmetic coding of the multilevel gray-scale image. Finally, the effectiveness of the proposed coding method is demonstrated by computer simulation.
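The model-selection step can be illustrated with a textbook two-part code length (an illustrative sketch of the general MDL idea, not the paper's practical criterion): each candidate model is charged its empirical-entropy data cost plus a parameter cost, and the model with the smallest total is selected.

```python
import math

def two_part_code_length(counts, k_params, n):
    """Two-part MDL score for one candidate model: empirical-entropy
    data cost in bits plus the standard (k/2) * log2(n) parameter cost.

    counts   -- symbol occurrence counts in the region under this model
    k_params -- number of free parameters of the model
    n        -- number of samples in the region
    """
    total = sum(counts)
    data_bits = -sum(c * math.log2(c / total) for c in counts if c > 0)
    return data_bits + 0.5 * k_params * math.log2(n)

# Usage: score each candidate model on the same region and keep the
# one with the minimum score.
```

In the paper's setting the candidates would be per-region probabilistic models of the gray-scale image, with richer models paying a larger parameter penalty.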
In the AVS series of coding standards, a logarithmic-domain-based adaptive binary arithmetic coder (LBAC) is used, offering a high data compression ratio and a hardware-efficient structure. However, the existing software implementation of LBAC is of high complexity, especially on the decoder side. Based on an analysis of the mapping between the logarithmic domain and the original domain, this paper proposes an original-domain-based acceleration algorithm for LBAC (ODB-LBAC). This algorithm simplifies the existing LBAC decoding process and is friendly to software implementation. Experimental results show that ODB-LBAC achieves speedups of 5.65% and 18.28% in a fully optimized AVS2 decoder for the random-access and all-intra configurations, respectively. The algorithm has been adopted into the AVS3 reference software as the reference implementation of LBAC.
Arithmetic coding is a widely applied compression tool with coding efficiency superior to other entropy coding methods. However, it suffers from poor error resilience and high complexity. In this paper, the integer implementation of binary arithmetic coding with a forbidden symbol for error resilience is studied. The coding redundancies incurred by different quantization coefficients in the probability representation, and the cost-effective backtracking distance in bits for maximum a posteriori (MAP) decoding, are studied in depth. We observe that the optimal quantization coefficients are independent of the forbidden symbol and the source probabilities, while the cost-effective backtracking distance is related to the source entropy and the given forbidden symbol probability. These observations are confirmed by extensive experiments.
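The forbidden-symbol mechanism has a well-known cost that is easy to state (an illustrative sketch; the parameter name eps is an assumption): a fraction eps of the coding interval is reserved for a symbol that is never encoded, so a decoder whose path enters that region has detected a channel error, at the price of a fixed redundancy per coded symbol.

```python
import math

def forbidden_symbol_redundancy(eps):
    """Redundancy in bits per source symbol introduced by reserving a
    forbidden region of probability eps: the usable interval shrinks by
    the factor (1 - eps), costing -log2(1 - eps) bits each step."""
    if not 0.0 <= eps < 1.0:
        raise ValueError("eps must lie in [0, 1)")
    return -math.log2(1.0 - eps)

# A 1% forbidden region costs roughly 0.0145 bits per symbol, which is
# the error-resilience/compression trade-off the paper quantifies.
```

This makes the trade-off explicit: larger eps detects errors sooner (shorter backtracking distance for MAP decoding) but inflates the bitstream.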
In the fast algorithm of arithmetic coding proposed by Jiang, the normalization is controlled by the width of the coding range and the output codes are in bits style. But the bit-stuffing technique to
ISBN: 9781479957521 (print)
This paper presents an arithmetic coding scheme for DCT coefficients in video compression, in which the number of non-zero coefficients, the significance map and the level information of a DCT block are used as coding elements. To exploit their statistical correlations, a hierarchical dependency context model (HDCM) is proposed, in which the number of non-zero coefficients and the scan position are used to capture the magnitude-varying tendency of the DCT coefficients. A new binary arithmetic coding scheme using HDCM (HDCMBAC) is then proposed to code these elements. Experimental results demonstrate that HDCMBAC achieves coding performance similar to CABAC at low and high QPs. Meanwhile, context modeling and arithmetic decoding in HDCMBAC can be carried out in parallel, since the context dependency exists only among the different parts of the basic coding elements in HDCM.
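Context selection driven by the two quantities the abstract names can be sketched as follows (a hypothetical illustration with assumed bucket sizes; the actual HDCM context derivation is not specified here): the pair (number of non-zero coefficients, scan position) is quantized into a small context index that selects the probability model for the binary arithmetic coder.

```python
def context_index(num_nonzero, scan_pos, nnz_buckets=4, pos_buckets=4):
    """Map (non-zero count, scan position) of an 8x8 DCT block to one
    of nnz_buckets * pos_buckets contexts. Bucketing keeps the number
    of adaptive probability models small while still separating blocks
    with different magnitude-decay behavior along the scan."""
    nnz_bin = min(num_nonzero, nnz_buckets - 1)
    pos_bin = min(scan_pos // 16, pos_buckets - 1)  # 64-coefficient scan
    return nnz_bin * pos_buckets + pos_bin
```

Because the index depends only on already-decoded quantities, several such context derivations can run in parallel with the arithmetic decoding itself, which is the parallelism the abstract highlights.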
ISBN: 9781617388767 (print)
Joint source-channel (JSC) coding is an important alternative to classic separate coding in wireless applications that require robustness without feedback or under stringent delay constraints. JSC schemes based on arithmetic coding can be implemented with finite-state encoders (FSE) generating finite-state codes (FSC). The performance of an FSC is primarily characterized by its free distance, which can be computed with efficient algorithms. This work shows how all FSEs corresponding to a set of initial parameters (source probabilities, arithmetic precision, design rate) can be ordered in a tree data structure. Since an exhaustive search of the code with the largest free distance is very difficult in most cases, a criterion for optimized exploration of the tree of FSEs is provided. Three methods for exploring the tree are proposed and compared with respect to the speed of finding a code with the largest free distance.
ISBN: 9781479923427 (print)
Medical imaging in hospitals requires fast and efficient image compression to support the clinical workflow and to save costs. Least-squares autoregressive pixel prediction combined with arithmetic coding constitutes the state of the art in lossless image compression. However, the high computational complexity of both prevents the use of CPU implementations in practice. We present a massively parallel compression system for medical volume images that runs on graphics cards. Image blocks are processed independently by separate processing threads. After pixel prediction with specialized border treatment, prediction errors are entropy coded with an adaptive binary arithmetic coder. Both steps are designed to match the particular demands of the parallel hardware architecture. Comparisons with current image and video coders show efficiency gains of 3.3-13.6%, while compression times are reduced to a few seconds.