The adaptive dependency source model, as described in this article, provides a simple way to exploit the interdependency between successive characters in a text. For most files, when compared with either the adaptive arithmetic coding model of Witten et al. or Gallager's adaptive Huffman coding model, the adaptive dependency model produces a 12 to 15 percent increase in compression efficiency.
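The dependency idea above — conditioning each character's statistics on its predecessor — can be sketched as an adaptive order-1 frequency model. This is a minimal illustration, not the article's exact algorithm; the class name and the add-one smoothing scheme are assumptions for the sketch:

```python
from collections import defaultdict

class Order1Model:
    """Adaptive model: symbol frequencies conditioned on the previous character."""

    def __init__(self):
        # one frequency table per context (the preceding character)
        self.tables = defaultdict(lambda: defaultdict(int))

    def probability(self, prev, ch):
        table = self.tables[prev]
        total = sum(table.values()) + 256   # add-one smoothing over a byte alphabet
        return (table[ch] + 1) / total

    def update(self, prev, ch):
        self.tables[prev][ch] += 1

model = Order1Model()
prev = ""
for ch in "abababab":
    model.update(prev, ch)
    prev = ch
# after training, 'b' following 'a' is far more probable than 'c' following 'a'
```

An arithmetic coder driven by such conditional probabilities spends fewer bits on characters that are predictable from their predecessor, which is exactly where the reported gain comes from.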
Arithmetic coding is a data compression technique that encodes data by generating a code string representing a fractional value between 0 and 1. In this paper, we propose a Modified Adaptive Binary Range Coder (M-ABRC). Our algorithm minimizes the multiplication bit capacity through a dedicated VLSI architecture, and uses a LUT (Look-Up Table) based VSW (Virtual Sliding Window) for probability estimation. To achieve a higher compression rate, M-ABRC provides better probability adaptation in the encoding phase and gives an accurate estimate for low-entropy binary sources (low-EBS). To evaluate the algorithm, we compared it with several existing techniques on two parameters: device utilization and power dissipation (static and dynamic). (C) 2019 Elsevier B.V. All rights reserved.
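The virtual-sliding-window estimator mentioned in the abstract can be sketched as a shift-only exponential update. This is a generic VSW illustration under assumed parameters (16-bit precision, window exponent w = 5), not the paper's LUT-based hardware design:

```python
PRECISION = 1 << 16          # probabilities scaled to 16-bit fixed point

def vsw_update(state, bit, w):
    """One step of a virtual-sliding-window estimate of P(bit = 1).

    The update is an exponential decay over a virtual window of 2**w symbols,
    implemented with shifts only (no multiplication), which is what makes
    the approach attractive for VLSI implementation.
    """
    if bit:
        return state + ((PRECISION - state) >> w)
    return state - (state >> w)

state = PRECISION // 2               # start at p = 0.5
for bit in ([0] * 9 + [1]) * 10:     # biased binary source: 10% ones
    state = vsw_update(state, bit, w=5)
p_one = state / PRECISION            # estimate drifts toward the true bias
```

Because the update needs only adds and shifts, the multiplier width in the coder datapath can be kept small, which is the device-utilization angle the paper targets.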
In this paper, the problems arising in modelling digital gray-level images for noiseless compression are discussed. An alphabet reduction model for compressing gray-level images using arithmetic coding is proposed. The byte image source is divided into eight bitplanes, and a finite state machine model is generated for each bitplane. The results are compared with traditional compression methods. The proposed algorithm improves coding efficiency with lower complexity.
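The bitplane decomposition step can be sketched in a few lines. This is a generic illustration of splitting a byte image into eight binary planes; the paper's per-plane finite-state-machine modelling is not shown:

```python
def to_bitplanes(image):
    """Split an 8-bit gray-level image (list of rows) into eight binary bitplanes."""
    planes = []
    for b in range(7, -1, -1):           # most significant plane first
        planes.append([[(px >> b) & 1 for px in row] for row in image])
    return planes

img = [[0, 255], [128, 1]]
planes = to_bitplanes(img)
# planes[0] is the MSB plane: only 255 and 128 have the top bit set
# planes[7] is the LSB plane: only 255 and 1 have the bottom bit set
```

Each binary plane has a two-symbol alphabet, which is the "alphabet reduction" that lets a binary arithmetic coder with a simple context model handle what was originally a 256-symbol source.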
This paper proposes the Two-Pass Improved Encoding Scheme (TIES), an image compression scheme that extends the existing concept of fractal image compression (FIC), which capitalizes on the self-similarity within the image to be compressed. The paper first briefly reviews existing FIC-based image compression technology, then identifies the areas that can be improved and establishes the concept behind the TIES algorithm. An effective encoding and decoding algorithm for TIES is developed, considering the domain pool, block scaling and transformation, range block approximation using linear combinations, and arithmetic encoding for storing data as close to the source entropy as possible. The performance of TIES is then explicitly compared against that of FIC under the same conditions. Finally, because of the long encoding time required by TIES, the paper proposes parallelized versions of the two TIES algorithms and concludes with an empirical analysis of their speedup and scalability, as well as a comparison of the effect of parallelization between the two.
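The range-block approximation at the heart of FIC-style encoders reduces, in its simplest form, to a least-squares fit of a scaling and offset per block. This is a minimal single-block sketch; TIES uses linear combinations of blocks, which this only hints at:

```python
def fit_scale_offset(domain, rng):
    """Least-squares fit of rng ≈ s * domain + o (flattened pixel blocks)."""
    n = len(domain)
    mean_d = sum(domain) / n
    mean_r = sum(rng) / n
    var_d = sum((d - mean_d) ** 2 for d in domain)
    if var_d == 0:
        return 0.0, mean_r                # flat domain block: offset only
    s = sum((d - mean_d) * (r - mean_r)
            for d, r in zip(domain, rng)) / var_d
    o = mean_r - s * mean_d
    return s, o

# rng is exactly 2 * domain + 5, so the fit recovers s = 2, o = 5
s, o = fit_scale_offset([10, 20, 30, 40], [25, 45, 65, 85])
```

The encoder searches the domain pool for the block (and transformation) whose fitted approximation minimizes the residual error, then stores only the block index and the quantized (s, o) pair.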
A new method based on a Modified Discrete Wavelet Packet Transform (MDWPT) for the compression of surface EMG (s-EMG) signal data is presented. The MDWPT is applied to the digitized s-EMG signal, and a Discrete Cosine Transform (DCT) is applied to the detail coefficients of the MDWPT. The MDWPT+DCT coefficients are quantized with a Uniform Scalar Dead-Zone Quantizer (USDZQ), and an arithmetic coder is employed for entropy coding of the symbol streams. The proposed approach was tested on more than 35 actual s-EMG signals divided into three categories, and was evaluated with the following parameters: Compression Factor (CF), Signal to Noise Ratio (SNR), Percent Root mean square Difference (PRD), Mean Frequency Distortion (MFD), and Mean Square Error (MSE). Simulation results show that the proposed coding algorithm outperforms some recently developed s-EMG compression algorithms.
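The dead-zone quantization step can be illustrated directly. This is a generic USDZQ sketch; the step size and midpoint reconstruction rule are illustrative choices, not the paper's tuned values:

```python
import math

def usdzq(x, step):
    """Uniform scalar dead-zone quantizer: values in (-step, step) map to 0."""
    return int(math.copysign(math.floor(abs(x) / step), x))

def dequantize(q, step):
    """Reconstruct at the midpoint of the quantization bin (0 stays 0)."""
    return math.copysign((abs(q) + 0.5) * step, q) if q else 0.0

coeffs = [0.3, -0.8, 2.6, -4.1]
q = [usdzq(c, 1.0) for c in coeffs]
# the small coefficients 0.3 and -0.8 fall into the dead zone and become 0
```

Zeroing the many near-zero transform coefficients produces long runs of a single symbol, which the arithmetic coder in the final stage compresses very cheaply.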
ISBN (print): 9781509049677
Data compression and encryption are the two fundamental steps needed for the secure transmission of large amounts of data. Compression is needed because of limitations in storage capacity and bandwidth. Normally, the compressed data is encrypted and then transmitted, but this sequential approach is time consuming and computationally expensive. In this paper, a simultaneous compression and encryption scheme is studied, in terms of compression ratio and time, for input sets of different sizes. In the simultaneous scheme, compression and encryption are done in a single step, which reduces the time for the entire operation. Compression is done using arithmetic coding and encryption is done using XOR encryption. The XOR encryption introduces randomness through a pseudorandom number generator, making the scheme more secure against attacks. The integrity of the seed value for the random number generator is also preserved using the DSA algorithm.
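The combined compress-and-XOR idea can be sketched with standard-library pieces. Here zlib stands in for the paper's arithmetic coder, and Python's non-cryptographic `random.Random` stands in for its PRNG; both substitutions are for illustration only, and the seed handling (DSA-signed in the paper) is omitted:

```python
import random
import zlib

def compress_encrypt(data: bytes, seed: int) -> bytes:
    """Compress, then XOR the output with a keystream from a seeded PRNG."""
    compressed = zlib.compress(data)
    rng = random.Random(seed)            # NOT cryptographically secure
    keystream = bytes(rng.randrange(256) for _ in range(len(compressed)))
    return bytes(c ^ k for c, k in zip(compressed, keystream))

def decrypt_decompress(blob: bytes, seed: int) -> bytes:
    rng = random.Random(seed)
    keystream = bytes(rng.randrange(256) for _ in range(len(blob)))
    return zlib.decompress(bytes(c ^ k for c, k in zip(blob, keystream)))

msg = b"compress me " * 50
blob = compress_encrypt(msg, seed=42)
assert decrypt_decompress(blob, seed=42) == msg
assert len(blob) < len(msg)              # compression is still effective
```

In the paper's truly simultaneous variant, the XOR is folded into the arithmetic coding loop itself rather than applied as a second pass, which is where the time saving comes from.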
Data compression plays a key role in optimizing the use of memory storage space and in reducing latency in data transmission. In this paper, we are interested in lossless compression techniques because their performance is also exploited within lossy compression of images and videos, which generally uses a mixed approach. To study the performance of lossless compression methods, we first carried out a literature review, from which we selected the most relevant techniques: arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we implemented the compression algorithms as Matlab programs (scripts) in order to test their performance. Finally, the tests conducted on this deliberately constructed data give very satisfactory results for the following methods, listed in order of performance: (1) LZW, (2) arithmetic coding, (3) Tunstall's algorithm, (4) BWT + RLE. It also appears that the performance of certain techniques relative to others is strongly linked, on the one hand, to the sequencing and/or recurrence of the symbols that make up the message and, on the other hand, to the cumulative time of encoding and decoding.
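The shape of the experiment — feed a deliberately repetitive text to several lossless codecs and compare ratios — is easy to reproduce with standard-library codecs. These stand in for the paper's Matlab implementations; the pattern and sizes are illustrative:

```python
import bz2
import lzma
import zlib

# a purposive text with a repeating pattern, in the spirit of the study's dataset
text = b"ABABABCD" * 200

for name, compress in [("zlib (LZ77 + Huffman)", zlib.compress),
                       ("bz2 (BWT-based)", bz2.compress),
                       ("lzma (LZ-based)", lzma.compress)]:
    out = compress(text)
    print(f"{name}: {len(text)} -> {len(out)} bytes, "
          f"ratio {len(text) / len(out):.1f}")
```

On such input every codec compresses heavily, but the relative ranking shifts with the period and recurrence of the pattern, which matches the paper's observation that symbol sequencing drives the relative performance.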
It seems reasonable to expect from a good compression method that its output should not be further compressible, because it should behave essentially like random data. We investigate this premise for a variety of known lossless compression techniques and find that, surprisingly, there is much variability in the randomness of the output, depending on the chosen method. Arithmetic coding seems to produce perfectly random output, whereas that of Huffman or Ziv-Lempel coding still contains many dependencies. In particular, the output of Huffman coding has already been proven to be random under certain conditions, and we present evidence here that arithmetic coding may produce output identical to that of Huffman.
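The premise is easy to probe with an off-the-shelf codec. Here zlib stands in for the codecs studied, and the thresholds are illustrative, not the paper's measurements:

```python
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 200
once = zlib.compress(data, 9)
twice = zlib.compress(once, 9)

# the first pass shrinks the highly redundant input substantially...
assert len(once) < len(data) / 2
# ...but its output is close to incompressible: a second pass gains nothing
assert len(twice) > 0.9 * len(once)
```

A second compression pass typically adds container overhead rather than saving bits, which is the everyday face of the paper's observation that good codec output behaves like random data.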
ISBN (print): 9781479957521
Arithmetic coding is employed in image and video coding schemes to reduce the statistical redundancy of the symbols emitted by coding engines. Most arithmetic coders proposed in the literature generate variable-length codes, i.e., they produce one long codeword of variable size. This requires renormalization operations to control the internal registers of the coder and the propagation of carry bits. This paper introduces an arithmetic coder that generates fixed-length codewords. The main advantage of the proposed coder is that it avoids renormalization procedures, which reduces computational complexity. It also uses a variable-size sliding window mechanism to estimate the probability of the emitted symbols with high precision. Experimental results indicate that the proposed coder achieves coding efficiency superior to that of the coders employed in JPEG2000 and HEVC while having lower computational costs. When integrated into a JPEG2000 implementation, the proposed coder achieves coding gains between 0.5 and 1 dB at medium and high rates, and speedups between 1.1 and 1.3 in the bitplane coding stage.
ISBN (print): 9781479934331
In this paper, an improved soft-in soft-out (SISO) iterative decoding scheme for joint source-channel coding is presented. It is realized as iterative soft decoding of an arithmetic code, based on sequential decoding that successively prunes the decoding tree. Making use of forecasted forbidden symbols, an error-resistant arithmetic code with an improved a posteriori probability (APP) metric is adopted to further enhance the error-correction performance. Simulation results validate the superiority of our scheme in terms of packet error rate over the AWGN channel.