In today's world, data compression is used in every field. Data compression reduces the number of bits required to represent a message. By compressing data, we save storage capacity, transfer files at higher speed, and reduce both the amount (and hence cost) of storage hardware and the required storage bandwidth. Many methods exist for compressing data, but this paper discusses Huffman coding and arithmetic coding. We compare adaptive Huffman coding and arithmetic coding on various input streams and observe which technique compresses the data more efficiently.
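The trade-off being compared can be illustrated with a minimal sketch (not the paper's implementation): a static Huffman code must spend an integer number of bits per symbol, while an arithmetic coder using the same symbol frequencies can approach the Shannon entropy of the model.

```python
import heapq
import math
from collections import Counter

def huffman_lengths(freqs):
    """Code length per symbol for a static Huffman code."""
    # Heap entries: (weight, tiebreak, {symbol: depth_so_far}).
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def compare(text):
    """Bits used by static Huffman vs. the bound arithmetic coding nears."""
    freqs = Counter(text)
    n = len(text)
    lengths = huffman_lengths(freqs)
    huffman_bits = sum(freqs[s] * lengths[s] for s in freqs)
    # An arithmetic coder with the same static model emits roughly
    # -log2(P(message)) bits in total, i.e. the Shannon bound below.
    entropy_bits = -sum(w * math.log2(w / n) for w in freqs.values())
    return huffman_bits, entropy_bits

h, e = compare("aaabbc")
# Huffman spends 9 bits here; the entropy bound is about 8.75 bits.
```

The gap widens when one symbol dominates: Huffman can never spend less than one bit per symbol, while an arithmetic coder can spend a fraction of a bit.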
ISBN:
(Print) 9781479970698
Recent years have seen a tremendous increase in the production, transmission, and storage of color images. A new method for lossless compression of color images is presented in this paper. The method is based on the wavelet transform and support vector machines (SVMs). The wavelet transform is applied to the luminance (Y) and chrominance (Cb, Cr) components of the original color image. SVM regression then learns dependences from the training data, and compression is achieved by using fewer training points (support vectors) to represent the original data and eliminate redundancy. In addition, an effective entropy coder based on run-length and arithmetic encoders is used to encode the support vectors and their weights. The compression algorithm is applied to a test set of 1024 × 1024 images encoded on 24 bits. To evaluate the results, we calculate the peak signal-to-noise ratio (PSNR) and the compression ratio (CR). Experimental results show that the compression algorithm achieves a substantial improvement.
ISBN:
(Print) 9783642240423; 9783642240430
In recent years, many wavelet coders have used various spatially adaptive coding techniques to compress images. Flexibility and coding efficiency are two crucial issues in spatially adaptive methods, so this paper introduces the "spherical coder". The objective of this paper is to combine the spherical tree with the wavelet lifting technique and to compare the performance of the spherical tree across different coding techniques such as arithmetic, Huffman, and run-length coding. The comparison is made using PSNR and compression ratio (CR). The spherical tree in wavelet lifting with the arithmetic coder is shown to give the highest CR value.
ISBN:
(Print) 9781595935755
Path profiles provide a more accurate characterization of a program's dynamic behavior than basic block or edge profiles, but are relatively more expensive to collect. This has limited their use in practice despite demonstrations of their advantages over edge profiles for a wide variety of applications. We present a new algorithm called preferential path profiling (PPP) that reduces the overhead of path profiling. PPP leverages the observation that most consumers of path profiles are only interested in a subset of all program paths. PPP achieves low overhead by separating interesting paths from other paths and assigning a set of unique and compact numbers to these interesting paths. We draw a parallel between arithmetic coding and path numbering, and use this connection to prove an optimality result for the compactness of the path numbering produced by PPP. This compact path numbering enables our PPP implementation to record path information in an array instead of a hash table. Our experimental results indicate that PPP reduces the runtime overhead of profiling paths exercised by the largest (ref) inputs of the SPEC CPU2000 benchmarks from 50% on average (maximum of 132%) to 15% on average (maximum of 26%) as compared to a state-of-the-art path profiler.
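The path-numbering idea that PPP builds on can be illustrated with the classic Ball-Larus scheme, in which edge increments are chosen so that every acyclic entry-to-exit path sums to a unique index in a dense range. The sketch below uses a hypothetical four-node diamond CFG; PPP refines this style of numbering so that only the interesting paths receive compact numbers.

```python
def ball_larus(succ, entry):
    """Assign edge increments so each entry->exit path sums to a unique
    number in [0, num_paths); returns (paths_per_node, edge_increments)."""
    num = {}   # node -> number of paths from that node to the exit
    val = {}   # (node, successor) -> increment added when the edge is taken
    def count(v):
        if v in num:
            return num[v]
        if not succ[v]:
            num[v] = 1           # the exit node terminates exactly one path
            return 1
        total = 0
        for w in succ[v]:
            val[(v, w)] = total  # paths through earlier siblings come first
            total += count(w)
        num[v] = total
        return total
    count(entry)
    return num, val

# Hypothetical diamond-shaped CFG: A branches to B or C, both reach exit D.
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
num, val = ball_larus(succ, "A")
path_id = lambda path: sum(val[e] for e in zip(path, path[1:]))
# num["A"] == 2; path A-B-D gets id 0 and path A-C-D gets id 1.
```

Because the ids are dense in [0, num_paths), a profiler can index a plain array with them, which is the property the abstract's array-instead-of-hash-table claim relies on.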
ISBN:
(Print) 9781450311779
This paper considers large-scale OneMax and RoyalRoad optimization problems with up to 10^7 binary variables within a compact Estimation of Distribution Algorithm (EDA) framework. Building upon the compact Genetic Algorithm (cGA), the continuous-domain Population-Based Incremental Learning algorithm (PBILc), and the arithmetic-coding EDA, we define a novel method that is able to compactly solve regular and noisy versions of these problems with minimal memory requirements, regardless of problem or population size. This feature allows the algorithm to be run on a conventional desktop machine. Issues regarding probability-model sampling, arbitrary precision of the arithmetic-coding decompression scheme, incremental fitness-function evaluation, and updating rules for compact learning are presented and discussed.
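A minimal sketch of the cGA building block the paper extends (the actual method adds PBILc-style continuous learning and arithmetic-coded models; the names and parameters below are illustrative): the entire "population" is represented by one probability vector, so memory stays O(n) no matter how large the simulated population is.

```python
import random

def cga_onemax(n_bits, pop_size, seed=0):
    """Compact GA on OneMax: a probability vector replaces the population."""
    rng = random.Random(seed)
    p = [0.5] * n_bits  # p[i] = current probability that bit i is 1
    while any(0.0 < pi < 1.0 for pi in p):
        # Sample two individuals from the model and let them compete.
        a = [int(rng.random() < pi) for pi in p]
        b = [int(rng.random() < pi) for pi in p]
        winner, loser = (a, b) if sum(a) >= sum(b) else (b, a)
        # Shift the model toward the winner wherever the two disagree,
        # by 1/pop_size: the virtual population size sets the step.
        for i in range(n_bits):
            if winner[i] != loser[i]:
                step = 1.0 / pop_size if winner[i] else -1.0 / pop_size
                p[i] = min(1.0, max(0.0, p[i] + step))
    return [int(pi) for pi in p]  # p has converged to a bit string
```

Note that `pop_size` only changes the update step, not the memory footprint, which is the property that lets compact EDAs scale to millions of variables.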
ISBN:
(Print) 9781467310499; 9781467310475
In this paper, the performance of an embedded zerotree wavelet (EZW) based codec is compared under Huffman and arithmetic coding. The sub-band decomposition coefficients are coded into a multilayer bit stream, which is then entropy coded with Huffman and arithmetic codes. The comparison results show that the arithmetic-coded bit stream achieves a better bit rate than Huffman coding at the same threshold.
ISBN:
(Print) 9780819486370
Methods for embedding stream-cipher rules into compressive Elias-type entropy coders are presented. Such systems have the ability to simultaneously compress and encrypt input data, thereby providing a fast and secure means of data compression. Procedures for maintaining compressive performance are articulated, with a focus on further compression optimizations. Furthermore, a novel method is proposed which exploits the compression process to hide cipherstream information in the case of a known-plaintext attack. Simulations were performed on images from a variety of classes in order to grade and compare the compressive and computational costs of the novel system relative to traditional compression-followed-by-encryption methods.
ISBN:
(Print) 9781450342322
Relational datasets are being generated at an alarmingly rapid rate across organizations and industries. Compressing these datasets could significantly reduce storage and archival costs. Traditional compression algorithms, e.g., gzip, are suboptimal for compressing relational datasets since they ignore the table structure and relationships between attributes. We study compression algorithms that leverage the relational structure to compress datasets to a much greater extent. We develop SQUISH, a system that uses a combination of Bayesian Networks and arithmetic coding to capture multiple kinds of dependencies among attributes and achieve near-entropy compression rate. SQUISH also supports user-defined attributes: users can instantiate new data types by simply implementing five functions for a new class interface. We prove the asymptotic optimality of our compression algorithm and conduct experiments to show the effectiveness of our system: SQUISH achieves a reduction of over 50% in storage size relative to systems developed in prior work on a variety of real datasets.
ISBN:
(Print) 0819462896
Ultraspectral sounder data features strong correlations in disjoint spectral regions due to the same types of absorbing gases. This paper compares the compression performance of two robust data-preprocessing schemes, namely bias-adjusted reordering (BAR) and minimum spanning tree (MST) reordering, in the context of entropy coding. Both schemes can take advantage of the strong correlations to achieve higher compression gains. The compression methods consist of the BAR or MST preprocessing schemes followed by linear prediction with context-free or context-based arithmetic coding (AC). Compression experiments on the NASA AIRS ultraspectral sounder dataset show that MST without bias adjustment produces lower compression ratios than BAR and bias-adjusted MST for both context-free and context-based AC. Bias-adjusted MST outperforms BAR for context-free arithmetic coding, whereas BAR outperforms MST for context-based arithmetic coding. BAR with context-based AC yields the highest average compression ratios in comparison to MST with context-free or context-based AC.
ISBN:
(Print) 9783642178771
In this paper, we assign a unique range both to a set of characters and to groups formed from those characters. The long textual message to be encrypted is subdivided into groups of n characters. Each group of characters is then encrypted into two floating-point numbers using arithmetic coding, by which the groups are automatically compressed. Depending on the key, the data bits of the text are placed at suitable nonlinear pixel and bit positions in the image. In the proposed technique, the key characters are alphanumeric and the key length is variable. Using the same symmetric-key technique, the original text message is decrypted from the watermarked image.
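The pixel-embedding step can be sketched as follows. This is a simplification with illustrative names: the paper derives the positions from its alphanumeric key through a more elaborate nonlinear rule, whereas here a key-seeded PRNG stands in.

```python
import random

def positions_from_key(key, n_pixels, n_bits):
    """Derive distinct embedding positions from the secret key (a stand-in
    for the paper's key-dependent nonlinear placement rule)."""
    return random.Random(key).sample(range(n_pixels), n_bits)

def embed(pixels, bits, key):
    """Hide each data bit in the least significant bit of a chosen pixel."""
    out = list(pixels)
    for bit, pos in zip(bits, positions_from_key(key, len(pixels), len(bits))):
        out[pos] = (out[pos] & ~1) | bit
    return out

def extract(pixels, n_bits, key):
    """Recover the hidden bits using the same key."""
    return [pixels[p] & 1 for p in positions_from_key(key, len(pixels), n_bits)]

cover = list(range(64))                    # toy 8x8 grayscale image
stego = embed(cover, [1, 0, 1, 1], "k3y")  # hypothetical key
# extract(stego, 4, "k3y") returns [1, 0, 1, 1]
```

Each embedded bit alters at most the least significant bit of one pixel, so the watermarked image is visually indistinguishable from the cover while remaining recoverable only with the key.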