Embedded zerotree wavelet (EZW) coding, introduced by J. M. Shapiro, is a very effective and computationally simple technique for image compression. Here we offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by arithmetic code.
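As an illustration of the ordered bit-plane principle, here is a minimal sketch (our own illustrative names and symbol format, not the authors' SPIHT set-partitioning coder): coefficients are tested for significance against a threshold that halves each pass, and previously significant coefficients are refined by one magnitude bit per pass.

    import math

    def bitplane_passes(coeffs, num_passes=6):
        """Emit significance/sign/refinement decisions in decreasing
        bit-plane order; SPIHT additionally sorts whole sets of
        coefficients via zerotree partitioning, which is omitted here."""
        T = 2.0 ** math.floor(math.log2(max(abs(c) for c in coeffs)))
        significant = [False] * len(coeffs)
        symbols = []
        for _ in range(num_passes):
            previously = list(significant)
            # Sorting pass: flag coefficients that become significant at T.
            for i, c in enumerate(coeffs):
                if not significant[i]:
                    sig = abs(c) >= T
                    symbols.append(("sig", i, int(sig)))
                    if sig:
                        significant[i] = True
                        symbols.append(("sign", i, int(c < 0)))
            # Refinement pass: next magnitude bit of older coefficients.
            for i, c in enumerate(coeffs):
                if previously[i]:
                    symbols.append(("ref", i, int(abs(c) / T) & 1))
            T /= 2.0
        return symbols

Cutting the symbol stream off at any point still decodes, with every coefficient known to within the current threshold; this is the embedded property the abstract refers to.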
Context weighting procedures are presented for sources with models (structures) in four different classes. Although the procedures are designed for universal data compression purposes, their generality allows application in the area of classification.
Block Truncation Coding (BTC) is a simple and fast image compression algorithm which achieves a constant bit rate of 2.0 bits per pixel. The method is, however, suboptimal. In the present paper we propose a modification of BTC in which the compression ratio is improved by coding the quantization data and the bit plane by arithmetic coding with an adaptive modelling scheme. The results compare favorably with other BTC variants. The bit rate for the test image Lena is 1.53 bits per pixel with a mean square error of 16.51.
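For context, here is a sketch of the classical BTC step the paper builds on: a two-level quantizer that preserves each block's mean and variance. The helper names are ours, and the arithmetic coding of the bit plane and quantization data, which is the paper's contribution, is not shown.

    import math

    def btc_encode_block(block):
        """Classical BTC for one block (e.g. the 16 pixels of a 4x4 tile):
        threshold at the mean, then pick two output levels that preserve
        the block's sample mean and variance."""
        n = len(block)
        mean = sum(block) / n
        sigma = math.sqrt(sum((x - mean) ** 2 for x in block) / n)
        bits = [1 if x >= mean else 0 for x in block]
        q = sum(bits)                    # pixels at or above the mean
        if q in (0, n):                  # flat block: one level suffices
            return bits, mean, mean
        low = mean - sigma * math.sqrt(q / (n - q))
        high = mean + sigma * math.sqrt((n - q) / q)
        return bits, low, high

    def btc_decode_block(bits, low, high):
        return [high if b else low for b in bits]

With 4x4 blocks and two 8-bit levels this gives exactly the constant 2.0 bits per pixel cited above (16 bit-plane bits plus 16 bits of quantization data per 16 pixels); entropy coding both parts with an adaptive arithmetic coder is what lowers the rate further.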
We describe a sequential universal data compression procedure for binary tree sources that performs the "double mixture." Using a context tree, this method weights in an efficient recursive way the coding distributions corresponding to all bounded memory tree sources, and achieves a desirable coding distribution for tree sources with an unknown model and unknown parameters. Computational and storage complexity of the proposed procedure are both linear in the source sequence length. We derive a natural upper bound on the cumulative redundancy of our method for individual sequences. The three terms in this bound can be identified as coding, parameter, and model redundancy. The bound holds for all source sequence lengths, not only for asymptotically large lengths. The analysis that leads to this bound is based on standard techniques and turns out to be extremely simple. Our upper bound on the redundancy shows that the proposed context-tree weighting procedure is optimal in the sense that it achieves the Rissanen (1984) lower bound.
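The two formulas at the procedure's core can be sketched as follows (a toy rendering with our own node type; the paper's sequential, linear-time update along the current context path is omitted): each node keeps Krichevsky-Trofimov (KT) counts, and the weighted probability mixes the node's own estimate with the product of its children's.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        zeros: int = 0                 # KT counts gathered in this context
        ones: int = 0
        child0: Optional["Node"] = None
        child1: Optional["Node"] = None

    def kt_probability(zeros, ones):
        """KT block probability of a binary string with the given counts:
        sequentially, Pe(next is 1) = (ones + 1/2) / (zeros + ones + 1)."""
        p = 1.0
        a = b = 0
        for _ in range(zeros):
            p *= (a + 0.5) / (a + b + 1)
            a += 1
        for _ in range(ones):
            p *= (b + 0.5) / (a + b + 1)
            b += 1
        return p

    def weighted(node):
        """CTW mixture: Pw = Pe at a leaf, otherwise
        Pw = 1/2 * Pe + 1/2 * Pw(child0) * Pw(child1)."""
        pe = kt_probability(node.zeros, node.ones)
        if node.child0 is None and node.child1 is None:
            return pe
        p0 = weighted(node.child0) if node.child0 else 1.0
        p1 = weighted(node.child1) if node.child1 else 1.0
        return 0.5 * pe + 0.5 * p0 * p1

Arithmetic coding with the resulting distribution realizes the "double mixture" over all tree models and all parameter values; the 1/2 weights at each node are the source of the model-redundancy term in the bound.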
A threshold scheme for secret sharing can protect a secret with high reliability and flexibility. These advantages are achieved only when all the participants are honest, i.e., all participants willing to pool their shadows always present the true ones. Cheating detection is an important issue in secret sharing schemes, but cheater identification is more useful than mere detection in realistic applications: if some dishonest participants exist, the honest participants will obtain a false secret, while the cheaters may individually obtain the true one. This paper presents a method to enhance the security of any threshold scheme with the ability to detect cheating and identify the cheaters. By applying a one-way hashing function along with arithmetic coding, the proposed method can deterministically detect cheating and identify the cheaters, no matter how many cheaters are involved in the secret reconstruction.
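The general shape of such a construction can be sketched as follows, using Shamir's threshold scheme with per-shadow hash commitments. This is our simplified illustration of the one-way-hash verification idea only; it is not the paper's exact construction and omits its use of arithmetic coding to pack the verification data.

    import hashlib
    import secrets

    PRIME = 2**127 - 1   # illustrative field size for Shamir shares

    def deal_shares(secret, k, n):
        """Shamir (k, n) shadows plus public one-way commitments:
        publishing H(id, shadow) lets reconstructors verify each
        presented shadow without revealing it in advance."""
        coeffs = [secret % PRIME] + [secrets.randbelow(PRIME)
                                     for _ in range(k - 1)]
        shadows = {x: sum(c * pow(x, j, PRIME)
                          for j, c in enumerate(coeffs)) % PRIME
                   for x in range(1, n + 1)}
        commitments = {x: hashlib.sha256(f"{x}:{y}".encode()).hexdigest()
                       for x, y in shadows.items()}
        return shadows, commitments

    def identify_cheaters(presented, commitments):
        """Deterministically name every participant whose presented
        shadow fails its hash check, however many cheaters there are."""
        return [x for x, y in presented.items()
                if hashlib.sha256(f"{x}:{y}".encode()).hexdigest()
                   != commitments[x]]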
In this paper, the problems arising in modelling digital gray-level images for noiseless compression are discussed. An alphabet reduction model for compressing gray-level images using arithmetic coding is proposed. The byte image source is divided into eight bitplanes. For each bitplane, a finite state machine model is generated. The results are compared to traditional compression methods. The proposed algorithm improves the coding efficiency with lower complexity.
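The decomposition step is straightforward to sketch (the function name is ours; the finite state machine context models built per plane are not shown):

    def bitplanes(image):
        """Split an 8-bit gray-level image (a list of pixel rows) into
        eight binary planes, most significant bit first; each plane is
        then modelled and arithmetic-coded separately."""
        return [[[(pixel >> bit) & 1 for pixel in row] for row in image]
                for bit in range(7, -1, -1)]

Reducing the 256-symbol byte alphabet to eight binary sources is the alphabet reduction that keeps the per-symbol modelling cost low.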
A new method (the ‘binary indexed tree’) is presented for maintaining the cumulative frequencies which are needed to support dynamic arithmetic data compression. It is based on a decomposition of the cumulative frequencies into portions which parallel the binary representation of the index of the table element (or symbol). The operations to traverse the data structure are based on the binary coding of the index. In comparison with previous methods, the binary indexed tree is faster, using more compact data and simpler code. The access time for all operations is either constant or proportional to the logarithm of the table size. In conjunction with the compact data structure, this makes the new method particularly suitable for large symbol alphabets.
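In today's terms this is the Fenwick tree; a compact rendering of the two operations follows (identifier names are ours). Both walks manipulate the lowest set bit of the index, index & -index, which is the binary decomposition the abstract describes.

    class BinaryIndexedTree:
        """Cumulative frequency table over symbols 1..size, stored in a
        single array of size + 1 counters."""
        def __init__(self, size):
            self.size = size
            self.tree = [0] * (size + 1)

        def update(self, index, delta):
            """Add delta to the frequency of symbol `index` (1-based)."""
            while index <= self.size:
                self.tree[index] += delta
                index += index & -index      # move to the next covering node

        def cumulative(self, index):
            """Total frequency of symbols 1..index, in O(log size) steps."""
            total = 0
            while index > 0:
                total += self.tree[index]
                index -= index & -index      # strip the lowest set bit
            return total

An adaptive arithmetic coder over bytes would create BinaryIndexedTree(256), call update(symbol, 1) after coding each symbol, and take cumulative(symbol - 1) and cumulative(symbol) as the coding interval bounds.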
The use of arithmetic coding for binary image compression achieves a high compression ratio, but the running time remains rather slow. The composite modelling method presented in this paper reduces the amount of data that must be coded by arithmetic coding: uniform areas are coded with little computation, while arithmetic coding is reserved for the areas with more variation.
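A sketch of the composite idea under our own assumptions (the tile size and uniformity test are illustrative, not the paper's exact partitioning):

    def split_for_coding(image, block=8):
        """Partition a binary image into block x block tiles: uniform
        tiles need only a cheap class symbol, while mixed tiles are
        queued for the slower context-based arithmetic coder."""
        uniform, mixed = [], []
        for r in range(0, len(image), block):
            for c in range(0, len(image[0]), block):
                tile = [row[c:c + block] for row in image[r:r + block]]
                values = {bit for row in tile for bit in row}
                if len(values) == 1:
                    uniform.append((r, c, values.pop()))   # all 0s or all 1s
                else:
                    mixed.append((r, c, tile))
        return uniform, mixed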
Arithmetic coding, in conjunction with a suitable probabilistic model, can provide nearly optimal data compression. In this article we analyze the effect that the model and the particular implementation of arithmetic coding have on the code length obtained. Periodic scaling is often used in arithmetic coding implementations to reduce time and storage requirements; it also introduces a recency effect which can further affect compression. Our main contribution is introducing the concept of weighted entropy and using it to characterize in an elegant way the effect that periodic scaling has on the code length. We explain why and by how much scaling increases the code length for files with a homogeneous distribution of symbols, and we characterize the reduction in code length due to scaling for files exhibiting locality of reference. We also give a rigorous proof that the coding effects of rounding scaled weights, using integer arithmetic, and encoding end-of-file are negligible.
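The scaling operation under study is simple to state; a sketch with illustrative names and count limit:

    def update_model(counts, symbol, limit=8192):
        """Adaptive frequency bookkeeping with periodic scaling: once the
        total count reaches `limit`, halve every count, rounding up so no
        symbol's probability falls to zero. Halving discounts old symbols
        relative to recent ones, producing the recency effect that the
        article's weighted entropy quantifies."""
        counts[symbol] = counts.get(symbol, 0) + 1
        if sum(counts.values()) >= limit:
            for s in counts:
                counts[s] = (counts[s] + 1) // 2
        return counts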
We give a new paradigm for lossless image compression, with four modular components: pixel sequence, prediction, error modeling, and coding. We present two new methods (called MLP and PPPM) for lossless compression, both involving linear prediction, modeling prediction errors by estimating the variance of a Laplace distribution, and coding using arithmetic coding applied to precomputed distributions. The MLP method is both progressive and parallelizable. We give results showing that our methods perform significantly better than other currently used methods for lossless compression of high resolution images, including the proposed JPEG standard. We express our results both in terms of the compression ratio and in terms of a useful new measure of compression efficiency, which we call compression gain.
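A sketch of the first three stages of that paradigm under our own assumptions (the planar predictor and the variance estimate are illustrative stand-ins, not the MLP or PPPM coefficients; the coding stage is omitted):

    def predict_and_model(image):
        """Raster-scan prediction over causal neighbours, then a Laplace
        error model: p(e) = exp(-|e|/b) / (2b), with b estimated from the
        mean absolute prediction error."""
        errors = []
        for r in range(1, len(image)):
            for c in range(1, len(image[0])):
                west = image[r][c - 1]
                north = image[r - 1][c]
                northwest = image[r - 1][c - 1]
                prediction = west + north - northwest  # simple planar predictor
                errors.append(image[r][c] - prediction)
        b = max(sum(abs(e) for e in errors) / len(errors), 1e-9)
        return errors, b

An arithmetic coder would then assign each error a probability from the Laplace distribution with the estimated parameter, using precomputed distributions as the abstract describes.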