ISBN (Print): 0818684062
The compression of matrices where the majority of the entries are a fixed constant (most typically zero), usually referred to as sparse matrices, has received much attention. We evaluate the performance of existing methods, and consider how arithmetic coding can be applied to the problem to achieve better compression. The result is a method that gives better compression than existing methods, and still allows constant-time access to individual elements if required. Although for concreteness we express our method in terms of two-dimensional matrices where the majority of the values are zero, it is equally applicable to matrices of any number of dimensions and where the fixed known constant is any value. We assume that the number of dimensions and their ranges are known, but will not assume that any information is available externally regarding the number of non-zero entries.
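The abstract gives no implementation details, so the following is only a minimal sketch of the front end such a method assumes: the matrix is separated into a zero/non-zero occupancy stream and the list of remaining values, the two streams an arithmetic coder would then model, plus a per-row cumulative index of the kind that helps provide constant-time element access. Function names and the NumPy representation are illustrative assumptions, not the authors' data structures.

import numpy as np

def split_sparse(matrix, fixed_value=0):
    """Separate a mostly-constant matrix into (a) a binary occupancy
    stream marking which entries differ from the fixed value and
    (b) the list of those differing values, in row-major order.
    Both streams would then be fed to an arithmetic coder."""
    flat = matrix.ravel()
    occupancy = (flat != fixed_value).astype(np.uint8)   # 0/1 stream
    values = flat[occupancy == 1]                        # non-constant entries
    return occupancy, values

def row_index(occupancy, shape):
    """Cumulative count of non-constant entries at each row start: the
    kind of auxiliary table that lets an individual element be located
    without decoding the whole matrix."""
    rows = occupancy.reshape(shape).sum(axis=1)
    return np.concatenate(([0], np.cumsum(rows)))

# toy usage
m = np.array([[0, 0, 3], [0, 7, 0], [0, 0, 0]])
occ, vals = split_sparse(m)
print(occ.tolist(), vals.tolist(), row_index(occ, m.shape).tolist())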
ISBN (Print): 0818684062
By asking afresh exactly what it is the arithmetic coder must do, we show how much of the complexity of current coders can be dispensed with. In particular, we eliminate all multiplicative operations in both the encoder and decoder, replacing them by comparisons and additions. The essence of the proposal is a simple piecewise integer mapping. Graf (1997) has made use of a similar integer mapping in his proposal for a fast entropy coder. Our work is related to but independent of his. As in all non-exact coders, some inefficiency is introduced. We give an analysis that shows the average loss caused by the revised coder to be bounded in an expected sense by 0.0861 bits per symbol, which for most compression applications is just one or two percent. As an additional modification, we discuss a mechanism that allows multi-bit output of codewords without compromising the precision of the probability estimates that may be employed. Finally, we give performance results that show that in combination the two improvements yield a coder as much as 40% faster than previous benchmark arithmetic coders.
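The piecewise mapping itself is not reproduced in this listing. The sketch below shows one piecewise integer mapping of this flavour (two linear pieces, built from a comparison and an addition) and how it replaces the usual range*count/total products in the interval-narrowing step; it assumes the range R has already been renormalised so that T <= R < 2T, and it illustrates the idea rather than the authors' exact function. The small mismatch between the mapped values and the exact products is the kind of approximation loss the 0.0861 bits-per-symbol bound refers to.

def piecewise_map(c, T, R):
    """Map a cumulative count c in [0, T] onto [0, R] using only a
    comparison and an addition, assuming T <= R < 2T.  The map has
    slope 2 while c < R - T and slope 1 afterwards, so it is monotone
    and maps 0 -> 0 and T -> R exactly."""
    d = R - T                       # 0 <= d < T
    return c + (c if c < d else d)  # i.e. c + min(c, d)

def narrow_interval(low, R, c_lo, c_hi, T):
    """Multiplication-free replacement for the usual step
    low += R*c_lo//T ; R = R*c_hi//T - R*c_lo//T."""
    f_lo = piecewise_map(c_lo, T, R)
    f_hi = piecewise_map(c_hi, T, R)
    return low + f_lo, f_hi - f_lo

# toy usage: symbol with cumulative counts [3, 5) out of T = 8, range R = 13
print(narrow_interval(0, 13, 3, 5, 8))   # -> (6, 4)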
ISBN (Print): 0819424358
We address efficient context modeling in arithmetic coding for wavelet image compression. Quantized highpass wavelet coefficients are first mapped into a binary source, followed by high order context modeling in arithmetic coding. A blending technique is used to combine results of context modeling of different orders into a single probability estimate. Experiments show that an arithmetic coder with efficient context modeling is capable of achieving a 10% bitrate saving (or 0.5 dB gain in PSNR) over a zeroth order adaptive arithmetic coder in high performance wavelet image coders.
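As a rough illustration of order blending (the orders, weights and initial counts below are assumptions, not the paper's blending rule), the sketch keeps bit counts for contexts of several orders and mixes their estimates into the single probability handed to the arithmetic coder.

from collections import defaultdict

class BlendedBitModel:
    """Mix probability estimates of a binary source from contexts of
    several orders into a single estimate (illustrative weights)."""
    def __init__(self, orders=(0, 1, 2), weights=(1.0, 2.0, 4.0)):
        self.orders = orders
        self.weights = weights
        # counts[k][context] = [zeros, ones] for the order-k contexts
        self.counts = [defaultdict(lambda: [1, 1]) for _ in orders]

    def p_one(self, history):
        """Blend the per-order estimates of P(next bit = 1)."""
        num = den = 0.0
        for k, w, table in zip(self.orders, self.weights, self.counts):
            ctx = tuple(history[-k:]) if k else ()
            zeros, ones = table[ctx]
            num += w * ones / (zeros + ones)
            den += w
        return num / den

    def update(self, history, bit):
        for k, table in zip(self.orders, self.counts):
            ctx = tuple(history[-k:]) if k else ()
            table[ctx][bit] += 1

# toy usage on a short binary string
model, hist = BlendedBitModel(), []
for b in [1, 1, 0, 1, 1, 0, 1]:
    p = model.p_one(hist)      # probability handed to the arithmetic coder
    model.update(hist, b)
    hist.append(b)
print(round(model.p_one(hist), 3))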
ISBN (Print): 9781479970612
In the state-of-the-art video coding standard, High Efficiency Video Coding (HEVC), context-adaptive binary arithmetic coding (CABAC) is adopted as the entropy coding tool. In CABAC, the binarization processes are manually designed and the context models are empirically crafted, so the probability distributions of the syntax elements may not be estimated accurately, which restricts coding efficiency. In this paper, we adopt a convolutional neural network-based arithmetic coding (CNNAC) strategy and study the coding of the DC coefficients for HEVC intra coding. Instead of manually designing a binarization process and context model, we propose to directly estimate the probability distribution of the value of the DC coefficient using densely connected convolutional networks. The estimated distribution, together with the actual DC coefficient, is then fed into a multi-level arithmetic codec to perform entropy coding. Simulation results show that the proposed CNNAC yields on average 22.47% bit savings over CABAC for the DC-coefficient bits, corresponding to a 1.6% BD-rate reduction relative to the HEVC anchor.
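The paper's network architecture is not reproduced in this listing; the PyTorch sketch below only illustrates the general shape of such an approach: a small densely connected convolutional block maps a patch of already-decoded context to a softmax distribution over quantized DC values, which a multi-level arithmetic coder would then consume. Patch size, growth rate and number of levels are illustrative assumptions.

import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Small densely connected convolutional block: every layer sees the
    concatenation of all earlier feature maps (DenseNet-style)."""
    def __init__(self, in_ch=1, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class DCProbabilityNet(nn.Module):
    """Estimate a distribution over quantized DC-coefficient values from a
    patch of already-decoded neighbouring samples (sizes are assumptions)."""
    def __init__(self, num_levels=64, patch=16):
        super().__init__()
        self.features = DenseBlock(in_ch=1)
        self.head = nn.Linear(self.features.out_ch * patch * patch, num_levels)

    def forward(self, context_patch):
        f = self.features(context_patch)
        logits = self.head(f.flatten(start_dim=1))
        return torch.softmax(logits, dim=-1)   # handed to a multi-level AC

# toy usage: one 16x16 causal context patch
net = DCProbabilityNet()
p = net(torch.randn(1, 1, 16, 16))
print(p.shape, float(p.sum()))                 # (1, 64), sums to 1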
ISBN (Print): 9780819483065
To address the problems of context dilution and complicated context quantization in high-order context modeling, this paper proposes a new context-based arithmetic coding scheme that uses weighted context modeling. By classifying the weights with non-uniform quantization, a conventional high-order context-based arithmetic coding method can be approximated by low-order arithmetic coding. Compared with existing high-order context modeling, the proposed method not only reduces computational complexity but also effectively improves entropy coding performance. Experimental results show that an algorithm using the proposed weighted arithmetic coding method outperforms SPECK, SPIHT and JPEG2000.
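A minimal sketch of the idea, with illustrative neighbour weights and thresholds (assumptions, not the paper's values): a high-order causal neighbourhood is collapsed into a single low-order context index by weighting neighbour magnitudes and quantizing the weighted sum non-uniformly.

import numpy as np

# Illustrative weights for causal neighbours (W, N, NW, NE) and
# non-uniform thresholds; both are assumptions, not the paper's values.
NEIGHBOUR_WEIGHTS = {"W": 4, "N": 4, "NW": 2, "NE": 2}
THRESHOLDS = [0, 2, 6, 14, 30]          # non-uniform quantization bins

def context_label(img, y, x):
    """Collapse a high-order neighbourhood into one low-order context
    index: weight the causal neighbour magnitudes, then quantize the
    weighted sum with the non-uniform thresholds."""
    h, w = img.shape
    def val(dy, dx):
        yy, xx = y + dy, x + dx
        return abs(int(img[yy, xx])) if 0 <= yy < h and 0 <= xx < w else 0
    s = (NEIGHBOUR_WEIGHTS["W"]  * val(0, -1) +
         NEIGHBOUR_WEIGHTS["N"]  * val(-1, 0) +
         NEIGHBOUR_WEIGHTS["NW"] * val(-1, -1) +
         NEIGHBOUR_WEIGHTS["NE"] * val(-1, 1))
    return sum(s > t for t in THRESHOLDS)   # small context index in 0..5

# toy usage on a block of quantized wavelet coefficients
coeffs = np.array([[0, 1, 0], [2, 0, 3], [0, 0, 0]])
print([[context_label(coeffs, y, x) for x in range(3)] for y in range(3)])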
ISBN (Print): 9781424417650
This paper studies the joint security and performance enhancement of secure arithmetic coding (AC) for digital rights management applications. The proposed cryptosystem combines interval-splitting AC with a simple bit-wise XOR step. Security analysis shows that the proposed scheme provides a satisfactory level of security against ciphertext-only, chosen-plaintext and chosen-ciphertext attacks. Because the input symbol-wise permutation step is eliminated, the scheme extends conveniently to any context-based coding scenario. In addition, the implementation complexity of the proposed scheme is lower than that of the original secure AC. Finally, we suggest a selective-encryption version of the scheme, which further reduces implementation complexity.
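The interval-splitting coder itself is not shown here; the sketch below covers only the bit-wise XOR step applied to an already arithmetic-coded bitstream, with the keystream derived from a secret key by hashing a counter (an illustrative choice, since the abstract does not specify the keystream generator).

import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key by hashing a counter
    (illustrative; any keyed stream generator could be substituted)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_layer(ac_bitstream: bytes, key: bytes) -> bytes:
    """Bit-wise XOR of the arithmetic-coded bitstream with the keystream.
    The same call decrypts, since XOR is its own inverse."""
    ks = keystream(key, len(ac_bitstream))
    return bytes(a ^ b for a, b in zip(ac_bitstream, ks))

# toy usage on a dummy codeword (stands in for interval-splitting AC output)
codeword = b"\x5a\x13\xf0\x07"
key = b"secret key"
cipher = xor_layer(codeword, key)
assert xor_layer(cipher, key) == codeword
print(cipher.hex())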
ISBN (Print): 9781450361750
The Slepian-Wolf problem concerns the compression of multiple correlated information sources that do not communicate with each other. In real applications, information sources such as video sequences usually have memory, i.e., successive symbols are interdependent. Existing research mainly considers memoryless sources and employs channel coding to solve the Slepian-Wolf problem. In this paper, we use distributed arithmetic coding instead of channel coding, exploiting the advantage that source coding can remove the redundancy between symbols of sources with memory. The proposed scheme is very competitive with existing schemes when applied to sources with memory. Simulation results show performance gains for first-order memory sources with different overlapping factors, and we also analyze the performance on second-order memory sources with different block lengths.
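As a hedged illustration of what distinguishes distributed arithmetic coding from ordinary AC, the sketch below uses the common binary formulation in which each symbol's sub-interval is enlarged so that the two intervals overlap; the code rate then drops below the source entropy, and the decoder must use correlated side information to resolve symbols that fall in the overlap. The enlargement rule and the floating-point interval arithmetic are simplifications, not necessarily the formulation used in this paper.

def binary_dac_intervals(p0, gamma=1.2):
    """Enlarged sub-intervals for a binary source: symbol 0 gets [0, q0),
    symbol 1 gets [1-q1, 1), with q_i = min(1, gamma * p_i).  When
    q0 + q1 > 1 the two intervals overlap; gamma = 1 recovers ordinary
    arithmetic coding."""
    p1 = 1.0 - p0
    q0 = min(1.0, gamma * p0)
    q1 = min(1.0, gamma * p1)
    return (0.0, q0), (1.0 - q1, 1.0)

def encode_step(low, high, bit, p0, gamma=1.2):
    """Narrow the coding interval [low, high) to the enlarged sub-interval
    of the chosen bit (floating point, for illustration only; a practical
    coder works with renormalised integers)."""
    (s0, e0), (s1, e1) = binary_dac_intervals(p0, gamma)
    s, e = (s0, e0) if bit == 0 else (s1, e1)
    width = high - low
    return low + s * width, low + e * width

# toy usage: encode the bits 0, 1, 1 with P(0) = 0.7
low, high = 0.0, 1.0
for b in [0, 1, 1]:
    low, high = encode_step(low, high, b, p0=0.7)
print(round(low, 4), round(high, 4))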
ISBN (Print): 9783540874744
The popularity of parallel platforms such as general-purpose graphics processing units (GPGPUs) for large-scale simulations is rapidly increasing; however, the I/O bandwidth and storage capacity of these massively parallel cards remain the major bottlenecks. We propose a novel approach for post-processing simulation data directly on GPGPUs through efficient data size reduction immediately after simulation, which can considerably reduce the influence of these bottlenecks on overall simulation performance, and we present current performance results.
ISBN (Print): 9789380544199
This paper presents speech feature extraction for the Telugu language via appropriate compression. The speech is compressed using digital arithmetic coding, features are extracted with MFCC, and classification is performed by an ANN. Speech feature extraction and feature classification are the major steps in ASR. This paper presents a technique for extracting speech features after speech compression: combining arithmetic coding with MFCC reduces the average number of bits, and the pair stands out in terms of elegance and efficiency. A text-dependent Telugu ASR system is designed. Feature extraction is carried out at 140 bits/frame and 80 bits/frame, and the extracted features are LSP, pitch prediction filter, codebook indexes, gain, synchronization, FEC and future expansion. The proposed technique, AC with MFCC, has been compared with existing techniques such as ADPCM, LD-CELP, CS-ACELP, CELP, LPC and MFCC, and its performance proves better in terms of bit rate, word error rate and compression ratio.
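A rough sketch of the extract-then-classify part of such a pipeline, assuming the speech has already been decoded from its arithmetic-coded representation: librosa computes the MFCCs and scikit-learn's MLP stands in for the ANN classifier. The sampling rate, feature averaging and toy training data are all illustrative assumptions, not the system described in the paper.

import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_features(waveform, sr=8000, n_mfcc=13):
    """Extract MFCCs from the decoded speech and average them over time
    to obtain one fixed-length feature vector per utterance."""
    mfcc = librosa.feature.mfcc(y=waveform, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# toy usage: train the ANN classifier on synthetic one-second utterances
rng = np.random.default_rng(0)
X = np.stack([mfcc_features(rng.standard_normal(8000).astype(np.float32))
              for _ in range(10)])
y = np.arange(10) % 2                      # two dummy word classes
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
print(clf.predict(X[:2]))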
ISBN (Digital): 9781510612501
ISBN (Print): 9781510612501; 9781510612495
A contextual lightweight arithmetic coder is proposed for lossless compression of medical imagery. Context definition uses causal data from previous symbols coded, an inexpensive yet efficient approach. To further reduce the computational cost, a binary arithmetic coder with fixed-length codewords is adopted, thus avoiding the normalization procedure common in most implementations, and the probability of each context is estimated through bitwise operations. Experimental results are provided for several medical images and compared against state-of-the-art coding techniques, yielding on average improvements between nearly 0.1 and 0.2 bps.
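A sketch of the two lightweight ingredients the abstract mentions, with an illustrative neighbourhood and shift amounts (assumptions, not the paper's parameters): a context index formed from causal symbols already coded, and a per-context probability register updated using only bitwise shifts and additions, in the spirit of the probability estimation through bitwise operations described above.

PROB_BITS = 12           # probabilities stored as integers in [0, 4096)
ADAPT_SHIFT = 5          # larger shift = slower, smoother adaptation

def context_index(left_bit, up_bit, upleft_bit):
    """Causal context from previously coded symbols (3-bit index)."""
    return (left_bit << 2) | (up_bit << 1) | upleft_bit

def update_probability(p_one, bit):
    """Update P(bit = 1), kept as an integer, using only shifts and adds:
    move a fraction 2**-ADAPT_SHIFT of the way towards 0 or 2**PROB_BITS."""
    if bit:
        return p_one + (((1 << PROB_BITS) - p_one) >> ADAPT_SHIFT)
    return p_one - (p_one >> ADAPT_SHIFT)

# toy usage: one probability register per context, driven by a bit sequence
probs = [1 << (PROB_BITS - 1)] * 8            # start every context at 0.5
stream = [(0, 0, 0, 1), (0, 0, 0, 1), (1, 0, 0, 0)]   # (left, up, upleft, bit)
for left, up, upleft, bit in stream:
    ctx = context_index(left, up, upleft)
    # probs[ctx] would be handed to the fixed-length binary coder here
    probs[ctx] = update_probability(probs[ctx], bit)
print([p / (1 << PROB_BITS) for p in probs])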