Data-compression implementations are particularly sensitive to internal faults because most inherent redundancy in the input data is minimised by the source-coding process. Fault-tolerance techniques are presented for protecting a lossless compression algorithm, arithmetic coding, that is vulnerable to temporary hardware failures. The fundamental arithmetic operations are protected by low-cost residue codes, employing new fault-tolerance methods for multiplications and additions, as recently reported. However, additional fault-tolerant design techniques are developed to protect critical steps such as normalisation and rounding, bit stuffing and index selection. These approaches integrate well with residue codes. Normalisation and rounding after multiplication are protected by efficiently modifying the multiplier to produce residue segments. The decoding step that selects the next symbol is checked by comparing local values with estimates already calculated in other parts of the decoding structure, whereas bit stuffing, a procedure for limiting very long carry propagations, is checked by modified residue values. Overhead complexity issues are discussed as rough estimates.
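The residue-code protection described above can be illustrated with a minimal sketch: the residue of a product modulo a small check base must equal the product of the operand residues, so any mismatch flags a transient fault. The check base m=3 and the software form are illustrative assumptions, not the paper's hardware design:

```python
def residue(x, m=3):
    # Low-cost residue of x modulo a small check base m
    return x % m

def checked_multiply(a, b, m=3):
    """Multiply a*b and verify the result with a residue code.

    The residue of the product must equal the product of the operand
    residues (mod m); a mismatch signals a transient fault.
    """
    product = a * b  # main datapath result
    predicted = (residue(a, m) * residue(b, m)) % m
    if residue(product, m) != predicted:
        raise RuntimeError("residue check failed: transient fault detected")
    return product
```

In hardware, the residue channel is far cheaper than duplicating the multiplier, which is the appeal of residue codes for arithmetic datapaths.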
Entropy coding is a fundamental technology in video coding that removes statistical redundancy among syntax elements. In high efficiency video coding (HEVC), context-adaptive binary arithmetic coding (CABAC) is adopted as the primary entropy coding method. CABAC consists of three steps: binarization, context modeling, and binary arithmetic coding. Because the binarization processes and context models in CABAC are both manually designed, the probabilities of the syntax elements may not be estimated accurately, which restricts the coding efficiency of CABAC. To address this problem, we propose a convolutional neural network-based arithmetic coding (CNNAC) method and apply it to compress the syntax elements of the intra-predicted residues in HEVC. Instead of manually designing binarization processes and context models, we propose directly estimating the probability distribution of the syntax elements with a convolutional neural network (CNN), since CNNs can adaptively build complex relationships between inputs and outputs when trained on large amounts of data. The values of the syntax elements, together with their estimated probability distributions, are then fed into a multi-level arithmetic codec to perform entropy coding. In this paper, we have applied CNNAC to code the syntax elements of the DC coefficient; the lowest-frequency AC coefficient; the second, third, fourth, and fifth lowest-frequency AC coefficients; and the position of the last non-zero coefficient in the HEVC intra-predicted residues. The experimental results show that our proposed method achieves up to 6.7% BD-rate reduction, and an average of 4.7% BD-rate reduction, compared to the HEVC anchor under the all-intra (AI) configuration.
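The final step above, feeding symbol values and their estimated probability distributions into a multi-level arithmetic codec, amounts to successive interval narrowing. A minimal sketch, with a fixed toy pmf standing in for the CNN's estimated distribution (the names and the pmf are assumptions, not the paper's trained network):

```python
def encode_symbols(symbols, pmf):
    """Narrow the arithmetic-coding interval for a symbol sequence.

    pmf is a list of (symbol, probability) pairs; here it is fixed,
    standing in for a model-estimated distribution.  Each symbol
    scales [low, high) down to its cumulative-probability slice.
    """
    # Build cumulative intervals: symbol -> (cdf_low, cdf_high)
    cdf, c = {}, 0.0
    for s, p in pmf:
        cdf[s] = (c, c + p)
        c += p
    low, high = 0.0, 1.0
    for s in symbols:
        lo_f, hi_f = cdf[s]
        span = high - low
        low, high = low + span * lo_f, low + span * hi_f
    return low, high  # any number in [low, high) identifies the sequence
```

The better the model's probabilities match the true statistics, the narrower the final interval shrinks per symbol and the fewer bits are needed to identify it, which is exactly why improved probability estimation improves coding efficiency.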
Past research in the field of cryptography has not given much consideration to arithmetic coding as a feasible encryption technique, with studies showing compression-oriented arithmetic coding to be largely unsuitable for encryption. Nevertheless, adaptive modeling, which offers a large model that is variable in structure and, as far as possible, a function of the entire text transmitted since the model was initialized, is a suitable candidate for a combined encryption-compression scheme. The focus of the work presented in this paper is to incorporate recent results from chaos theory, proven to be cryptographically secure, into arithmetic coding, so as to devise a convenient method of making the structure of the model unpredictable and variable in nature while retaining, as far as possible, the statistical harmony that makes compression possible. A chaos-based adaptive arithmetic coding-encryption technique has been designed, developed, and tested, and its implementation is discussed. For typical text files, the proposed encoder gives compression between 67.5% and 70.5%, the zeroth-order compression suffering by about 6% due to encryption, and is not susceptible to previously reported attacks on arithmetic coding algorithms.
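A common chaotic primitive used in such schemes is an iterated one-dimensional map whose trajectory is extremely sensitive to the initial value, which then serves as the secret key. The logistic map below is an illustrative stand-in only; the paper's exact map and the way its output perturbs the adaptive model are not reproduced here:

```python
def logistic_keystream(x0, n, r=3.99):
    """Generate a chaotic sequence with the logistic map
    x_{k+1} = r * x_k * (1 - x_k).

    x0 in (0, 1) acts as the secret key: a tiny change in x0
    quickly produces a completely different trajectory, which is
    the property exploited to randomize the coder's model.
    """
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs
```

For r close to 4 the iterates stay inside (0, 1) (the map's maximum is r/4), so each value can be quantized to drive decisions inside the adaptive model.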
Nagaraj et al. [1,2] present a skewed non-linear generalized Lüroth series (s-nGLS) framework. s-nGLS uses non-linear maps for GLS to introduce a security parameter a, which is used to build a keyspace for image or data encryption. The map introduces non-linearity into the system to add an "encryption key parameter", while the skew is added to achieve optimal compression efficiency. As explained in this communication, s-nGLS used as such for joint encryption and compression is a weak candidate. First, we show how the framework is vulnerable to known-plaintext attacks and that a key of size 256 bits can be broken within 1000 trials. Next, we demonstrate that the proposed non-linearity exponentially increases the hardware complexity of the design. We also find that s-nGLS cannot be implemented as such for large bitstreams. Finally, we demonstrate how correlation of the key parameter with compression performance leads to further key vulnerabilities. (C) 2012 Elsevier B.V. All rights reserved.
In this paper, an application of wavelet packet-enhanced arithmetic coding to compress electric power disturbance data is proposed. In the proposed method, the wavelet packet transform is first applied so that the disturbance signal can be optimally decomposed into higher- and lower-frequency components on a best wavelet basis. Then, arithmetic coding is used to reduce the redundancy of the encoded data, thereby lowering the costs of data storage and transmission. This integrated method has been tested on different scenarios, and the results are compared with other published techniques.
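The decomposition step above splits the signal into low- and high-frequency coefficient bands before entropy coding. A minimal sketch of one level of a Haar wavelet split (an illustrative stand-in: the paper's wavelet-packet method searches for a best basis over a full decomposition tree, which this sketch does not do):

```python
def haar_step(signal):
    """One Haar wavelet level: split a signal into approximation
    (low-frequency averages) and detail (high-frequency differences)
    coefficients over non-overlapping pairs of samples.
    """
    assert len(signal) % 2 == 0, "need an even number of samples"
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail
```

For smooth disturbance waveforms the detail band is mostly near zero, so the subsequent arithmetic coder sees a highly skewed distribution and compresses it well.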
Enhanced aacPlus is an audio codec composed of advanced audio coding (AAC), spectral band replication (SBR), and parametric stereo (PS) for efficient audio coding at low bit rates. We propose a new coding scheme for lossless bit-rate reduction of PS in enhanced aacPlus. We first determine the optimal contexts for context-based coding of the quantized stereo parameter indexes in PS. We then propose a coding scheme based on context-based coding that uses the determined optimal contexts in conjunction with arithmetic coding. The proposed scheme has normal and memory-reduced versions, both of which are shown to guarantee significant and consistent lossless bit-rate reduction of PS. The bits saved by the proposed scheme can be used for the other modules of enhanced aacPlus, AAC and SBR.
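Context-based coding of this kind conditions each index's probability model on previously coded values. A minimal sketch of the bookkeeping, using the previous index as the context (an illustrative assumption; the paper derives its optimal contexts empirically):

```python
from collections import Counter, defaultdict

def context_of(prev_index):
    # Illustrative context rule: condition on the previously coded
    # index.  (An assumption, not the paper's optimal contexts.)
    return prev_index

def collect_context_counts(indexes, initial_context=0):
    """Accumulate per-context symbol counts, as a context-based
    coder would before driving an arithmetic coder with the
    resulting conditional probabilities."""
    tables = defaultdict(Counter)
    prev = initial_context
    for idx in indexes:
        tables[context_of(prev)][idx] += 1
        prev = idx
    return tables
```

Because stereo parameters vary slowly across frames, conditioning on the previous index concentrates probability mass and lets the arithmetic coder spend fewer bits per index.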
Arithmetic coding is an attractive technique for lossless data compression. The most important element of arithmetic coding is a good modeler that always provides accurate probability estimates for incoming data. However, the characteristics of various types of source data carry a great deal of uncertainty and are hard to extract, so we integrate fuzzy logic and grey theory to develop a smart fuzzy-grey-tuning modeler that handles the problem of probability estimation. The average compression efficiency of the proposed method is better than that of other lossless compression methods, such as Huffman coding, approximate arithmetic coding, and Lempel-Ziv, for three types of source data: text files, image files, and binary files. In addition, the design is simple, fast, and suitable for VLSI implementation since an efficient table-lookup approach is adopted. (C) 2000 Elsevier Science B.V. All rights reserved.
This paper proposes an efficient lossless compression scheme for still images based on an adaptive arithmetic coding algorithm. The algorithm combines an adaptive probability model with predictive coding to increase the compression rate while ensuring the quality of the decoded image. An adaptive model for each encoded image block dynamically estimates that block's symbol probabilities, and the decoded image block can accurately recover the encoded image according to the codebook information. Adopting adaptive arithmetic coding for image compression greatly improves the compression rate, and the results show that it is an effective compression technology.
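The core of the adaptive approach is a probability model that updates its symbol counts after every observation, so the estimates track each block's local statistics. A minimal sketch of that idea (a generic adaptive frequency model, not the paper's exact scheme):

```python
from collections import Counter

class AdaptiveModel:
    """Adaptive probability model for arithmetic coding: counts are
    updated after each coded symbol, so the estimated distribution
    tracks the local statistics of the data being coded."""

    def __init__(self, alphabet):
        # Start every symbol at count 1 (Laplace smoothing) so no
        # symbol ever has zero probability.
        self.counts = Counter({s: 1 for s in alphabet})

    def prob(self, symbol):
        return self.counts[symbol] / sum(self.counts.values())

    def update(self, symbol):
        self.counts[symbol] += 1
```

Encoder and decoder apply the same updates in the same order, so their models stay synchronized without transmitting any side information.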
ISBN (print): 9781538676462
A distribution matcher (DM) encodes a binary input data sequence into a sequence of symbols (a codeword) with a desired target probability distribution. The set of output codewords constitutes the codebook (or code) of a DM. A constant-composition DM (CCDM) uses arithmetic coding to efficiently encode data into codewords from a constant-composition (CC) codebook. The CC constraint limits the size of the codebook, and hence the coding rate of the CCDM, and the performance of the CCDM degrades with decreasing output length. To improve the performance for short transmission blocks, we present a class of multi-composition (MC) codes and an efficient arithmetic coding scheme for encoding and decoding. The resulting multi-composition DM (MCDM) is able to encode more data into distribution-matched codewords than the CCDM and achieves lower KL divergence, especially for short block messages.
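The rate limit imposed by the CC constraint can be made concrete: the number of length-n codewords with a fixed composition is the multinomial coefficient, and a DM can index at most the floor of its base-2 logarithm in data bits. A small sketch of that count (standard combinatorics, not the paper's MC construction):

```python
from math import factorial, floor, log2

def cc_codebook_size(composition):
    """Number of sequences with a fixed composition {symbol: count}:
    the multinomial coefficient n! / prod(n_a!)."""
    n = sum(composition.values())
    size = factorial(n)
    for c in composition.values():
        size //= factorial(c)
    return size

def ccdm_input_bits(composition):
    # A CCDM over this codebook can index at most
    # floor(log2(|codebook|)) input data bits.
    return floor(log2(cc_codebook_size(composition)))
```

For short n the floor operation and the single-composition restriction both cost rate, which is why pooling several compositions into one codebook, as the MC codes above do, recovers performance for short blocks.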
We show that high-resolution images can be encoded and decoded efficiently in parallel. We present an algorithm based on the hierarchical MLP method, used either with Huffman coding or with a new variant of arithmetic coding called quasi-arithmetic coding. The coding step can be parallelized, even though the codes for different pixels have different lengths; parallelization of the prediction and error-modeling components is straightforward.