This paper considers lossless image compression and presents a learned compression system that achieves state-of-the-art lossless compression performance while using only 59K parameters, one to two orders of magnitude fewer than other learned systems recently proposed in the literature. The system is based on a learned pixel-by-pixel lossless image compression method, where each pixel's probability distribution parameters are obtained by processing the pixel's causal neighborhood (i.e., previously encoded/decoded pixels) with a simple neural network comprising 59K parameters. This causality forces the decoder to operate sequentially, i.e., the neural network has to be evaluated for each pixel in turn, which increases decoding time significantly with common GPU software and hardware. To reduce the decoding time, parallel decoding algorithms are proposed and implemented. The resulting lossless image compression system is compared to traditional and learned systems in the literature in terms of compression performance, encoding/decoding times, and computational complexity.
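As a rough illustration of such a pixel-by-pixel scheme, the sketch below maps a causal neighborhood to the parameters of a discretized logistic distribution with a tiny MLP. The context size, hidden width, and distribution family are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)
NEIGHBORS = 12   # causal context size (assumed, not the paper's exact value)
HIDDEN = 32      # hidden width (assumed)

# Tiny MLP weights; a real system would train these end-to-end.
W1 = rng.normal(0.0, 0.1, (HIDDEN, NEIGHBORS))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (2, HIDDEN))   # outputs: (mu, log_scale)
b2 = np.zeros(2)

def predict_params(context):
    """Map previously decoded neighbor pixels to logistic (mu, scale)."""
    h = np.maximum(0.0, W1 @ context + b1)   # ReLU hidden layer
    mu, log_s = W2 @ h + b2
    return mu, np.exp(log_s)

def pixel_probability(x, mu, scale):
    """P(X = x) for an 8-bit pixel under a discretized logistic."""
    cdf = lambda v: 1.0 / (1.0 + np.exp(-(v - mu) / scale))
    return cdf(x + 0.5) - cdf(x - 0.5)

ctx = rng.uniform(0, 255, NEIGHBORS)       # stand-in causal neighborhood
mu, scale = predict_params(ctx)
p = pixel_probability(128, mu, scale)      # probability fed to the coder
```

Because the decoder must rebuild each context from already-decoded pixels, the network is re-evaluated per pixel, which is exactly the sequential bottleneck the paper's parallel decoding algorithms target.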
Context-based Adaptive Binary Arithmetic Coding (CABAC) is adopted as the entropy coding tool for the main profile of the video coding standard H.264/AVC. CABAC achieves a higher degree of redundancy reduction by estimating the conditional probability of each binary symbol fed to the arithmetic coder. This paper presents an entropy coding method based on CABAC. In the proposed method, each binary symbol is coded using a more precisely estimated conditional probability, leading to a performance improvement. We apply our method to the standard and evaluate its performance for different video sources and various quantization parameters (QP). Experimental results show that our method outperforms the original CABAC in terms of coding efficiency, with average bit-rate savings of up to 1.2%.
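To make the core CABAC idea concrete, here is a minimal sketch of context-based binary probability estimation driving one interval-subdivision step of arithmetic coding. The exponential-decay update is a generic stand-in, not H.264/AVC's exact state-machine tables.

```python
class BinaryContext:
    """Per-context estimate of P(bin = 1), adapted after each symbol."""
    def __init__(self, p_one=0.5, rate=0.05):
        self.p_one = p_one
        self.rate = rate   # adaptation speed

    def update(self, bit):
        target = 1.0 if bit else 0.0
        self.p_one += self.rate * (target - self.p_one)

def encode_bin(low, high, bit, ctx):
    """One interval-subdivision step of binary arithmetic coding."""
    split = low + (high - low) * (1.0 - ctx.p_one)   # '0' gets the lower part
    low, high = (split, high) if bit else (low, split)
    ctx.update(bit)
    return low, high

low, high, ctx = 0.0, 1.0, BinaryContext()
for b in [1, 1, 0, 1]:
    low, high = encode_bin(low, high, b, ctx)
# Any number in [low, high) now identifies the bin string "1101".
```

The better the conditional probability estimate, the larger the subinterval assigned to the actual symbol, which is where the bit-rate savings come from.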
This paper describes a new approach to fixed-rate entropy-constrained vector quantization (FEVQ) for stationary memoryless sources, where the structure of the codewords is derived from a variable-length scalar quantizer. We formulate the quantization search operation as a zero-one integer optimization problem and show that the resulting integer program can be closely approximated by solving a simple linear program. The result is a Lagrange formulation which adjoins the constraint on the entropy (codeword length) to the distortion. Unlike previously known methods with a fixed Lagrange multiplier, we use an iterative algorithm to optimize the underlying objective function, updating the Lagrange multiplier until the constraint on the overall rate is satisfied. The key feature of the new method is the substantial reduction in the number of iterations in comparison with previous related methods. To achieve some packing gain, we combine the process of trellis-coded quantization with that of FEVQ. This results in an iterative application of the Viterbi algorithm on the underlying trellis for selecting the Lagrange multiplier. Numerical results are presented which demonstrate substantial improvement over the alternative methods reported in the literature.
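The outer loop described in the abstract, adjusting the Lagrange multiplier until the rate constraint is met, can be sketched as a bisection over the multiplier. The inner solver below is a stand-in scalar codeword search; the paper instead uses a linear-program relaxation and a Viterbi search on the trellis.

```python
def solve_inner(lmbda, codewords, lengths, x):
    """Pick the codeword minimizing distortion + lmbda * codeword length."""
    costs = [(x - c) ** 2 + lmbda * l for c, l in zip(codewords, lengths)]
    return min(range(len(costs)), key=costs.__getitem__)

def fit_lambda(samples, codewords, lengths, r_target, iters=50):
    lo, hi = 0.0, 1e6                      # bisection bracket (assumed)
    for _ in range(iters):
        lmbda = 0.5 * (lo + hi)
        idx = [solve_inner(lmbda, codewords, lengths, x) for x in samples]
        rate = sum(lengths[i] for i in idx) / len(samples)
        # Rate too high -> penalize length more (raise lambda), and vice versa.
        lo, hi = (lmbda, hi) if rate > r_target else (lo, lmbda)
    return 0.5 * (lo + hi)

codewords = [-2.0, -1.0, -0.3, 0.0, 0.3, 1.0, 2.0]   # toy reproduction levels
lengths   = [5, 4, 3, 1, 3, 4, 5]                     # variable-length code
samples   = [0.1, -0.4, 1.3, 0.0, 2.1, -1.7]
lam = fit_lambda(samples, codewords, lengths, r_target=3.0)
```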
In the absence of channel noise, variable-length quantizers perform better than fixed-rate Lloyd-Max quantizers for any source with a non-uniform density function. However, channel errors can lead to a loss of synchronization, resulting in error propagation. To avoid a variable rate, one can use a vector quantizer selected as a subset of high-probability points in the Cartesian product of a set of scalar quantizers and represent its elements with binary codewords of the same length (quantizer shaping). We choose these elements from a lattice, resulting in a higher quantization gain than simply using the Cartesian product of a set of scalar quantizers. We introduce a class of lattices which have low encoding complexity and, at the same time, yield a noticeable quantization gain. We combine the procedure of lattice encoding with that of quantizer shaping using hierarchical dynamic programming. In addition, by devising appropriate partitioning and merging rules, we obtain sub-optimal schemes of low complexity and small performance degradation. The proposed methods show a substantial improvement in performance and/or a reduction in complexity with respect to the best known results.
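A minimal sketch of the quantizer-shaping step, under simplifying assumptions: from the Cartesian product of a scalar quantizer's points, keep the most probable vectors under an i.i.d. Gaussian model so each kept vector gets a fixed-length index. The paper additionally draws the points from a lattice and uses hierarchical dynamic programming rather than the explicit enumeration shown here.

```python
import itertools

points = [-1.5, -0.5, 0.5, 1.5]     # scalar quantizer levels (assumed)

def log_prob(v):
    """Log-probability under an i.i.d. unit Gaussian source model."""
    return sum(-0.5 * x * x for x in v)

dim, bits = 3, 5
product = itertools.product(points, repeat=dim)       # 4**3 = 64 candidates
codebook = sorted(product, key=log_prob, reverse=True)[:2 ** bits]
# The 2**5 = 32 kept vectors are each addressed by a fixed 5-bit codeword,
# so a channel bit error corrupts one vector without losing synchronization.
```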
In general, text compression techniques cannot be used directly for image compression, because the models of text and images differ. Recently, a new class of text compression, the block-sorting algorithm built on the Burrows-Wheeler transform (BWT), has given excellent results in text compression. However, applying it directly to image compression yields poor results. Surprisingly, good results can be obtained if a prediction model, such as the one defined in the JPEG standard, is employed before the BWT stage. The predictive model thus plays a critical role in the compression process. To further improve compression efficiency, we use Gradient Adjusted Prediction (GAP). Experimental results show that the proposed method outperforms lossless JPEG and some LZ-based compression methods.
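For reference, below is the Gradient Adjusted Prediction rule as commonly stated for CALIC, which is the predictor the abstract adopts before the BWT stage; W, N, NE, NW, NN, WW, and NNE are the causal neighbors of the current pixel, and the 80/32/8 thresholds are CALIC's.

```python
def gap_predict(W, N, NE, NW, NN, WW, NNE):
    """GAP prediction from the causal neighbors of the current pixel."""
    dh = abs(W - WW) + abs(N - NW) + abs(N - NE)     # horizontal gradient
    dv = abs(W - NW) + abs(N - NN) + abs(NE - NNE)   # vertical gradient
    if dv - dh > 80:      # sharp horizontal edge: copy the west neighbor
        return W
    if dh - dv > 80:      # sharp vertical edge: copy the north neighbor
        return N
    pred = (W + N) / 2 + (NE - NW) / 4
    if dv - dh > 32:
        pred = (pred + W) / 2
    elif dv - dh > 8:
        pred = (3 * pred + W) / 4
    elif dh - dv > 32:
        pred = (pred + N) / 2
    elif dh - dv > 8:
        pred = (3 * pred + N) / 4
    return pred

print(gap_predict(W=100, N=110, NE=108, NW=105, NN=112, WW=98, NNE=110))
```

The residuals left after GAP are far more repetitive than raw pixels, which is what lets the block-sorting stage work well on images.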
This paper presents an audio compression method based on wavelet sub-band quantization and coding, and proposes a coder built on that method. The proposed coder uses the wavelet packet transform to obtain the critical bands of the human auditory system. Some results of the MPEG layer-2 psychoacoustic model are used in coding the wavelet coefficients: the MPEG results are transformed to the wavelet domain to determine the quantizer type and the number of quantization levels for each wavelet sub-band. A method for transforming these results is also proposed. The coder uses scalar and vector quantization methods according to the sensitivity of the human auditory system in each wavelet sub-band, and entropy coding is used to further improve its performance. Subjective evaluation results demonstrate that the proposed coder achieves transparent coding of monophonic CD signals at bit rates of 80-96 kbit/s.
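A hedged sketch of the subband stage: a wavelet packet decomposition (here via PyWavelets) splits a frame into frequency-ordered subbands, and each band is uniformly quantized with a step size derived from a masking threshold. The flat threshold and the wavelet/level choices are placeholders; the paper derives per-band values from the MPEG layer-2 psychoacoustic model.

```python
import numpy as np
import pywt

frame = np.random.randn(2048)                    # one audio frame (dummy)
wp = pywt.WaveletPacket(data=frame, wavelet='db8', maxlevel=5)
bands = wp.get_level(5, order='freq')            # 32 frequency-ordered bands

masking_db = np.full(len(bands), -30.0)          # placeholder masking threshold
for node, thr in zip(bands, masking_db):
    step = 10.0 ** (thr / 20.0)                  # step from allowed noise level
    wp[node.path] = np.round(node.data / step) * step   # uniform quantizer

reconstructed = wp.reconstruct(update=False)
```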
Context modeling is widely used in image coding to improve compression performance. However, with no special treatment, the expected compression gain will be cancelled out by the model cost introduced by high-order context models. Context quantization is an efficient method to deal with this problem. In this paper, we analyze the general context quantization problem in detail and show that context quantization is similar to a common vector quantization problem. If a suitable distortion measure is defined, the optimal context quantizer can be designed by a Lloyd-style iterative algorithm. This context quantization strategy is applied to an embedded wavelet coding scheme in which the significance-map symbols and sign symbols are coded directly by arithmetic coding with context models designed by the proposed quantization algorithm. Good coding performance is achieved.
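A sketch of such a Lloyd-style iteration, under the assumption that the distortion measure is the count-weighted KL divergence between a raw context's conditional symbol distribution and its group's distribution; shapes and data are illustrative.

```python
import numpy as np

def kl(p, q):
    """KL divergence D(p || q); q is clipped to stay strictly positive."""
    q = np.maximum(q, 1e-12)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def lloyd_context_quantize(cond, counts, k, iters=20, seed=0):
    """cond: (n_contexts, alphabet) conditional distributions,
    counts: occurrence count of each raw context."""
    rng = np.random.default_rng(seed)
    centers = cond[rng.choice(len(cond), k, replace=False)].copy()
    for _ in range(iters):
        # Assignment step: nearest center under the KL distortion.
        assign = np.array([min(range(k), key=lambda j: kl(p, centers[j]))
                           for p in cond])
        # Centroid step: count-weighted mean of the member distributions.
        for j in range(k):
            sel = assign == j
            if sel.any():
                w = counts[sel][:, None]
                centers[j] = (cond[sel] * w).sum(0) / w.sum()
    return assign, centers

rng = np.random.default_rng(1)
raw = rng.random((64, 2)) + 0.05          # 64 raw contexts, binary alphabet
cond = raw / raw.sum(1, keepdims=True)    # conditional distributions P(s|ctx)
counts = rng.integers(1, 100, 64).astype(float)
assign, centers = lloyd_context_quantize(cond, counts, k=8)
```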
This article introduces the key technologies involved in four hypothetical probability estimators for Context-based Adaptive Binary Arithmetic Coding (CABAC). The focus is on the adaptation rates used in these estimators, which are selected based on coding efficiency and memory considerations as well as on the size of the current coding block. The proposed scheme realizes a linear quantitative representation of the probability estimate and offers scalability toward higher accuracy. Besides a description of the design concept, this work also discusses motivation and implementation aspects, which rely on simple operations such as bitwise operations and single subsampling for subinterval updates. The experimental results verify the effectiveness of the proposed CABAC method in Versatile Video Coding (VVC).
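A hedged sketch in the spirit of the multi-hypothesis, shift-based estimators discussed above: two probability states adapt at different rates using only shifts and adds, and their average drives the coder. The 15-bit precision and shift values are illustrative, not the exact VVC constants.

```python
PROB_BITS = 15
ONE = 1 << PROB_BITS

class TwoRateEstimator:
    """Two probability hypotheses adapting at different rates; their
    average is the estimate handed to the arithmetic coder."""
    def __init__(self, fast_shift=4, slow_shift=8):
        self.p_fast = ONE >> 1
        self.p_slow = ONE >> 1
        self.fast_shift = fast_shift   # rate choice could depend on block size
        self.slow_shift = slow_shift

    def prob_one(self):
        return (self.p_fast + self.p_slow) >> 1

    def update(self, bit):
        target = ONE if bit else 0
        # p += (target - p) >> shift : shifts and adds only, no multiplies.
        self.p_fast += (target - self.p_fast) >> self.fast_shift
        self.p_slow += (target - self.p_slow) >> self.slow_shift

est = TwoRateEstimator()
for b in [1, 0, 1, 1, 1]:
    est.update(b)
print(est.prob_one() / ONE)   # running estimate of P(bin = 1)
```

The fast hypothesis tracks local statistics while the slow one suppresses noise; averaging them trades adaptation speed against estimation variance.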
Many modern chemoinformatics systems for small molecules rely on large fingerprint vector representations, where the components of the vector record the presence or number of occurrences in the molecular graph of particular combinatorial features, such as labeled paths or labeled trees. These large fingerprint vectors are often compressed to much shorter fingerprint vectors using a lossy compression scheme based on a simple modulo procedure. Here, we combine statistical models of fingerprints with integer entropy codes, such as Golomb and Elias codes, to encode the indices or the run lengths of the fingerprints. After reordering the fingerprint components in decreasing frequency order, the indices are monotone increasing and the run lengths are quasi-monotone increasing, and both exhibit power-law distribution trends. We take advantage of these statistical properties to derive new efficient, lossless compression algorithms for monotone integer sequences: monotone value (MOV) coding and monotone length (MOL) coding. In contrast to lossy systems that use 1024 or more bits of storage per molecule, we achieve lossless compression of long chemical fingerprints based on circular substructures in slightly over 300 bits per molecule, close to the Shannon entropy limit, using a MOL Elias Gamma code for run lengths. The improvement in storage comes at a modest computational cost. Furthermore, because the compression is lossless, uncompressed similarity (e.g., Tanimoto) between molecules can be computed exactly from their compressed representations, leading to significant improvements in retrieval performance, as shown on six benchmark data sets of drug-like molecules.
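A small sketch of the run-length route: after frequency reordering, a sparse fingerprint becomes a sequence of gaps between set bits, and each gap is written with a standard Elias Gamma code (unary length prefix plus binary remainder). The MOV/MOL refinements from the paper are not reproduced here.

```python
def elias_gamma(n):
    """Elias Gamma code for n >= 1: (len-1) zeros, then n in binary."""
    b = bin(n)[2:]
    return '0' * (len(b) - 1) + b

def encode_fingerprint(set_bits):
    """set_bits: sorted indices of 1-bits (after frequency reordering)."""
    prev, out = -1, []
    for i in set_bits:
        out.append(elias_gamma(i - prev))   # gap between set bits, always >= 1
        prev = i
    return ''.join(out)

code = encode_fingerprint([3, 7, 8, 42, 1000])
print(code, len(code))   # gaps decode back to the exact indices: lossless
```

Because decoding recovers the exact set-bit indices, set-based similarities such as Tanimoto can be computed from the compressed form with no approximation error.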
Context-based adaptive arithmetic coding (CAAC) has high coding efficiency and is adopted by the majority of advanced compression algorithms. In this paper, five new techniques are proposed to further improve the performance of CAAC. They make the frequency table (the table used to estimate the probability distribution of the data from past inputs) of CAAC converge to the true probability distribution rapidly and hence improve coding efficiency. Instead of varying only one entry of the frequency table, the proposed range-adjusting scheme adjusts the entries near the current input value together. With the proposed mutual-learning scheme, the frequency tables of the contexts highly correlated with the current context are also adjusted. The proposed increasingly-adjusting-step scheme applies a larger adjusting step to recent data. The proposed adaptive initialization scheme uses a proper model to initialize the frequency table. Moreover, a local frequency table is generated according to local information. We perform several simulations on edge-directed prediction-based lossless image compression, coefficient encoding in JPEG, bit-plane coding in JPEG 2000, and motion vector residue coding in video compression. All simulations confirm that the proposed techniques reduce the bit rate and are beneficial for data compression.
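A hedged sketch of the range-adjusting idea: instead of incrementing only the entry of the current value, nearby entries are boosted with a kernel that decays with distance, so the frequency table converges faster for smooth sources. The kernel width and weights are illustrative, not the paper's schedule.

```python
import numpy as np

ALPHABET = 256
freq = np.ones(ALPHABET, dtype=np.int64)      # adaptive frequency table

def update(x, radius=2, peak=8):
    """Boost freq[x] and its neighbors with a kernel decaying by distance."""
    for d in range(-radius, radius + 1):
        i = x + d
        if 0 <= i < ALPHABET:
            freq[i] += peak >> abs(d)         # weights 8, 4, 2 by distance

def prob(x):
    return freq[x] / freq.sum()               # model fed to an arithmetic coder

for symbol in [100, 101, 99, 100]:
    update(symbol)
print(prob(100))
```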