Classical lossless compression algorithms rely heavily on hand-designed encoding and quantization strategies intended for general-purpose use. With the rapid development of deep learning, data-driven methods based on neural networks can learn features automatically and perform better on specific data domains. We propose an efficient deep lossless compression algorithm that uses arithmetic coding to quantize the network output. The scheme compares the training performance of Bi-directional Long Short-Term Memory (Bi-LSTM) and Transformer models on minute-level power data, which are not sparse in the time-frequency domain. The model automatically extracts features and adapts the quantization to the probability distribution. Results on minute-level power data show an average compression ratio (CR) of 4.06, higher than that of classical entropy coding methods.
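The abstract gives no code; the following is a minimal sketch of the general scheme it describes, in which a predictive model supplies a per-step probability distribution and an arithmetic coder narrows its interval by that symbol's probability mass. Here `predict_probs` is a hypothetical stand-in for the trained Bi-LSTM/Transformer, and the toy float-interval coder omits the precision handling a production coder needs.

```python
def predict_probs(context, alphabet_size):
    # Stand-in model: uniform distribution. A real model would condition
    # on `context` (the previously seen power readings).
    return [1.0 / alphabet_size] * alphabet_size

def arithmetic_encode(symbols, alphabet_size):
    """Narrow the interval [low, high) by each symbol's predicted mass."""
    low, high = 0.0, 1.0
    for i, s in enumerate(symbols):
        probs = predict_probs(symbols[:i], alphabet_size)
        width = high - low
        cum_below = sum(probs[:s])      # cumulative mass below symbol s
        high = low + width * (cum_below + probs[s])
        low = low + width * cum_below
    return (low + high) / 2             # any point in [low, high) identifies the data

# Toy usage; a production coder would use integer arithmetic to avoid
# the precision loss of Python floats on long sequences.
print(f"{arithmetic_encode([3, 1, 3, 2], alphabet_size=4):.6f}")
```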
The development of CT (Computed Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), EBCT (Electron Beam Computed Tomography), SMRI (Stereotactic Magnetic Resonance Imaging), and related modalities has enhanced the resolution and scanning rate of imaging equipment. Diagnosis, and the extraction of useful information from the image, are achieved by processing medical images with wavelet techniques, and the wavelet transform has increased achievable compression rates. Improving compression performance by minimizing the amount of image data in medical images is a critical task. Crucial medical information, such as disease diagnoses and treatments, is obtained through modern radiology techniques; the Medical Imaging (MI) process is used to acquire that information. Several techniques have been developed for lossy and lossless image compression. Separable extensions of the 1-D wavelet transform are limited in capturing image edges, because the wavelet transform cannot efficiently represent straight-line discontinuities, and geometric lines in natural images cannot be reconstructed properly with a 1-D transform. The Curvelet Transform codes differently oriented image textures well and is therefore suitable for compressing medical images, which contain many curved structures. This paper describes a method for compressing various medical images using the Fast Discrete Curvelet Transform based on the wrapping technique. After transformation, the coefficients are quantized using vector quantization and coded with arithmetic encoding. The proposed method is tested on various medical images, and the results demonstrate significant improvement in performance parameters such as Peak Signal-to-Noise Ratio (PSNR) and Compression Ratio (CR).
Arithmetic coding is one of the most powerful techniques for statistical lossless encoding and has attracted much attention in recent years. In this paper, we present a new implementation of bit-level arithmetic coding that uses only integer additions and shifts. The new algorithm has lower computational complexity and is more flexible to use, and is thus well suited to both software and hardware design. We also discuss the application of the algorithm to data encryption.
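As a rough illustration of the addition-and-shift style of coder the abstract refers to (not the paper's exact algorithm), here is a minimal binary arithmetic encoder: the interval lives in fixed-width integer registers and renormalization is done with shifts. The 32-bit register width and the static model P(bit = 0) = 3/4 are illustrative choices.

```python
PRECISION = 32
FULL    = (1 << PRECISION) - 1
HALF    = 1 << (PRECISION - 1)
QUARTER = 1 << (PRECISION - 2)

def encode_bits(bits, p0_num=3, p0_den=4):
    low, high, pending, out = 0, FULL, 0, []

    def emit(bit):
        nonlocal pending
        out.append(bit)
        out.extend([bit ^ 1] * pending)     # flush pending underflow bits
        pending = 0

    for b in bits:
        span = high - low + 1
        split = low + span * p0_num // p0_den - 1   # only integer ops
        if b == 0:
            high = split
        else:
            low = split + 1
        while True:                          # renormalize with shifts
            if high < HALF:
                emit(0)
            elif low >= HALF:
                emit(1); low -= HALF; high -= HALF
            elif low >= QUARTER and high < HALF + QUARTER:
                pending += 1; low -= QUARTER; high -= QUARTER   # underflow case
            else:
                break
            low <<= 1
            high = (high << 1) | 1
    pending += 1                             # terminate: pin down the final interval
    emit(0 if low < QUARTER else 1)
    return out

print(encode_bits([0, 0, 1, 0, 0, 0, 1, 0]))
```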
In this article, we present a comparative study of a new compression approach based on the discrete cosine transform (DCT) versus the discrete wavelet transform (DWT). We seek the transform best suited to vector quantization for compressing EMG signals. To do this, we first combine vector quantization with the DCT, and then vector quantization with the DWT. The coding phase uses SPIHT (set partitioning in hierarchical trees) coding combined with arithmetic coding. The method is demonstrated and evaluated on real EMG data. Objective performance metrics are reported: compression factor, percentage root-mean-square difference, and signal-to-noise ratio. The results show that the DWT-based method is more efficient than the DCT-based method.
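To make the vector-quantization stage concrete, here is a minimal sketch: each block of transform coefficients is mapped to the index of its nearest codeword, and only the indices go on to the SPIHT/arithmetic coding stage. The hand-picked codebook is illustrative; real codebooks are trained (e.g. with the LBG algorithm).

```python
def quantize(vectors, codebook):
    """Return the index of the nearest codeword (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: dist2(v, codebook[i]))
            for v in vectors]

codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
coeff_blocks = [(0.1, 0.2), (0.9, 0.1), (0.8, 0.7)]
print(quantize(coeff_blocks, codebook))      # -> [0, 1, 3]
```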
In this paper, a novel state-based dynamic multi-alphabet arithmetic coding algorithm that adapts efficiently to locally occurring symbol statistics is presented. The proposed algorithm is applicable to sources such as raw or transformed images that locally produce a small subset of symbols from a large alphabet, or to any other source characterized by a very large alphabet and highly skewed distributions. Its performance is compared with two standard entropy coding schemes that have recently appeared in the literature, CA-2D-VLC and CABAC, in terms of compression ratio, peak signal-to-noise ratio, and subjective quality. The simulation results are encouraging, paving the way for further research and a hardware implementation of the algorithm. The proposed algorithm achieves an increase in compression ratio of about 45% over the compared standards at similar peak signal-to-noise ratio and subjective quality.
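The paper's state-based scheme is not reproduced here, but the following sketch illustrates the underlying idea of adapting to locally occurring symbols: an adaptive frequency model whose counts are periodically halved (an illustrative aging rule, with an illustrative window size) concentrates probability mass on the small working subset of a large alphabet.

```python
import math
from collections import Counter

class AdaptiveModel:
    def __init__(self, alphabet_size, window=256):
        self.alphabet_size = alphabet_size
        self.window = window
        self.counts = Counter()
        self.total = 0

    def prob(self, symbol):
        # Laplace smoothing keeps never-seen symbols codable.
        return (self.counts[symbol] + 1) / (self.total + self.alphabet_size)

    def update(self, symbol):
        self.counts[symbol] += 1
        self.total += 1
        if self.total > self.window:         # age old statistics: favor locality
            for s in list(self.counts):
                self.counts[s] //= 2
            self.total = sum(self.counts.values())

model = AdaptiveModel(alphabet_size=4096)    # large alphabet, few active symbols
bits = 0.0
for s in [7, 7, 7, 9, 7, 9, 7]:
    bits += -math.log2(model.prob(s))        # ideal code length under the model
    model.update(s)
print(f"total ideal code length: {bits:.2f} bits")
```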
In the arithmetic coding of multilevel gray-scale images (128 levels), a fixed model for the gray-scale image has been applied up to now. This paper proposes a method to improve the compression ratio by introducing the minimum description length (MDL) principle into the encoder to describe the local properties of the image more precisely, and by adaptively selecting the probabilistic model. To this end, a practical form of the MDL criterion is presented that can be applied adaptively during encoding as an approximation to the MDL criterion proposed by Rissanen. A problem in the arithmetic coding of multilevel gray-scale images with a probabilistic model that has a large number of parameters is the deterioration of the compression ratio. To solve this problem, this paper proposes a convergence acceleration method for the average code length in the arithmetic encoding of high-level gray-scale images. Based on the proposed acceleration method, the MDL criterion is then introduced into the arithmetic coding of the multilevel gray-scale image. Finally, the effectiveness of the proposed coding method is demonstrated by computer simulation.
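As a toy illustration of MDL-based model selection (the candidate models and data below are illustrative, not the paper's), the encoder can score each candidate probabilistic model by the two-part description length $L = -\log_2 P(\text{data} \mid \text{model}) + \tfrac{k}{2}\log_2 n$, where the second term is Rissanen's asymptotic cost of describing $k$ parameters, and switch to the model with the smaller $L$.

```python
import math
from collections import Counter

def description_length(block, model_prob, k):
    n = len(block)
    data_bits = sum(-math.log2(model_prob(s)) for s in block)
    param_bits = 0.5 * k * math.log2(n)      # cost of transmitting k parameters
    return data_bits + param_bits

block = [3, 3, 4, 3, 3, 3, 4, 3, 2, 3] * 10  # one local image block (toy data)

counts = Counter(block)
candidates = [
    ("uniform (k=0)", lambda s: 1 / 8, 0),   # 8 gray levels, no parameters sent
    ("histogram (k=7)", lambda s: counts[s] / len(block), 7),
]
for name, prob, k in candidates:
    print(name, round(description_length(block, prob, k), 1), "bits")
# The encoder would pick whichever model yields the shorter total description.
```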
In the AVS series of coding standards, the logarithmic-domain-based adaptive binary arithmetic coder (LBAC) is used for its high data compression ratio and hardware-efficient structure. However, the existing software implementation of LBAC has high complexity, especially on the decoder side. Based on an analysis of the mapping between the logarithmic domain and the original domain, this paper proposes an original-domain-based acceleration algorithm for LBAC (ODB-LBAC). The algorithm simplifies the existing LBAC decoding process and is friendly to software implementation. Experimental results show that ODB-LBAC achieves speedups of 5.65% and 18.28% in a fully optimized AVS2 decoder for the random-access and all-intra configurations, respectively. The algorithm has been adopted into the AVS3 reference software as the reference implementation of LBAC.
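The mapping the abstract analyzes can be illustrated in miniature, though the snippet below is not the AVS LBAC specification: in a logarithmic-domain coder, the range update r' = r * p becomes an addition of fixed-point logarithms, so no multiplier is needed. The 5-bit fixed-point format is an illustrative choice.

```python
import math

FRAC = 5                                     # fractional bits of the fixed-point log

def to_log(x):
    return round(math.log2(x) * (1 << FRAC))

def from_log(lx):
    return 2.0 ** (lx / (1 << FRAC))

r, p = 0.75, 0.4
approx = from_log(to_log(r) + to_log(p))     # multiplication via log-domain addition
print(f"exact r*p = {r * p:.4f}, log-domain approximation = {approx:.4f}")
```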
Arithmetic coding is a widely applied compression tool with coding efficiency superior to other entropy coding methods. However, it suffers from poor error resilience and high complexity. In this paper, the integer implementation of binary arithmetic coding with a forbidden symbol for error resilience is studied. The coding redundancies incurred by different quantization coefficients in the probability representation, and the cost-effective backtracking distance in bits for maximum a posteriori (MAP) decoding, are studied in depth. We observe that the optimal quantization coefficients are independent of the forbidden symbol and the source probabilities, and that the cost-effective backtracking distance is related to the source entropy and the given forbidden-symbol probability. These observations are confirmed by extensive experiments.
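The forbidden-symbol mechanism is easy to state concretely: a slice of probability mass eps is reserved for a symbol the encoder never emits, so a decoder whose path enters that slice has detected a channel error, at a redundancy cost of -log2(1 - eps) bits per coded symbol. A minimal sketch, with illustrative numbers:

```python
import math

def interval_layout(p0, eps):
    """Partition [0, 1) for a binary source plus a forbidden gap of width eps."""
    scale = 1.0 - eps                        # mass left for the real symbols
    return {
        0: (0.0, p0 * scale),
        1: (p0 * scale, scale),
        "forbidden": (scale, 1.0),           # decoding into here flags an error
    }

for eps in (0.01, 0.05, 0.10):
    redundancy = -math.log2(1 - eps)
    print(f"eps={eps:.2f}: {interval_layout(0.6, eps)}, "
          f"redundancy={redundancy:.4f} bits/symbol")
```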
In the fast algorithm of arithmetic coding proposed by Jiang, normalization is controlled by the width of the coding range and the output codes are emitted bit by bit. But the bit-stuffing technique to
ISBN (print): 9781617388767
Joint source-channel (JSC) coding is an important alternative to classic separate coding in wireless applications that require robustness without feedback or under stringent delay constraints. JSC schemes based on arithmetic coding can be implemented with finite-state encoders (FSEs) generating finite-state codes (FSCs). The performance of an FSC is primarily characterized by its free distance, which can be computed with efficient algorithms. This work shows how all FSEs corresponding to a given set of initial parameters (source probabilities, arithmetic precision, design rate) can be ordered in a tree data structure. Since an exhaustive search for the code with the largest free distance is very difficult in most cases, a criterion for optimized exploration of the tree of FSEs is provided. Three methods for exploring the tree are proposed and compared with respect to how quickly they find a code with the largest free distance.
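The paper's exploration criterion is not reproduced here, but the sketch below shows the general best-first pattern such a tree search can follow, assuming an `upper_bound` that is optimistic on partial encoders and exact on complete ones. `expand`, `upper_bound`, and the toy tree are illustrative stand-ins, not the paper's construction.

```python
import heapq

def best_first_search(root, expand, upper_bound, is_complete):
    """Return the first complete candidate popped; under the bound assumption
    above it maximizes the objective (standing in for free distance here)."""
    heap = [(-upper_bound(root), 0, root)]
    tie = 1                                  # tie-breaker keeps the heap orderable
    while heap:
        neg_b, _, node = heapq.heappop(heap)
        if is_complete(node):
            return node, -neg_b
        for child in expand(node):
            heapq.heappush(heap, (-upper_bound(child), tie, child))
            tie += 1
    return None, float("-inf")

# Toy tree: candidates are bit tuples of length 3, objective = sum of bits.
expand = lambda n: [n + (0,), n + (1,)] if len(n) < 3 else []
bound  = lambda n: sum(n) + (3 - len(n))     # optimistic completion of a partial node
done   = lambda n: len(n) == 3
print(best_first_search((), expand, bound, done))   # -> ((1, 1, 1), 3)
```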