ISBN: 0818684062 (print)
Block Cyclic Redundancy Check (CRC) codes represent a popular and powerful class of error detection techniques in modern data communication systems. Though efficient, CRCs can detect errors only after an entire block of data has been received and processed. In this work, we propose a new "continuous" error detection scheme using arithmetic coding that provides a novel tradeoff between the amount of added redundancy and the amount of time needed to detect an error once it occurs. We demonstrate how the new error detection framework improves the overall performance of transmission systems, and show how sizeable performance gains can be attained. We focus on two popular scenarios: (i) Automatic Repeat ReQuest (ARQ) based transmission; and (ii) Forward Error Correction frameworks based on (serially) concatenated coding systems involving an inner error-correction code and an outer error-detection code.
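Continuous error detection with arithmetic coding is commonly realized by reserving a small "forbidden" region of the coding interval that the encoder never uses; a decoder that lands in it has detected an error. Whether this matches the paper's exact construction is an assumption, but the redundancy/detection-delay tradeoff it describes can be sketched as:

```python
import math

def forbidden_symbol_tradeoff(eps):
    """Trade-off when probability eps of the arithmetic coder's interval
    is reserved as a forbidden region the encoder never enters.

    - Each encoded symbol shrinks the usable interval by (1 - eps),
      costing -log2(1 - eps) extra bits of redundancy per symbol.
    - After a channel error the decoder's interval behaves essentially
      randomly, so it falls into the forbidden region with probability
      about eps per symbol: the detection delay is geometric, mean 1/eps.
    """
    redundancy_bits = -math.log2(1.0 - eps)
    expected_delay_symbols = 1.0 / eps
    return redundancy_bits, expected_delay_symbols
```

A small eps adds almost no redundancy but detects errors slowly; a large eps reverses the tradeoff, which is the knob a block CRC does not offer.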
ISBN: 0819431249 (print)
In the case of encoding a multi-alphabet source, the multi-alphabet symbol sequence can be encoded directly by a multi-alphabet arithmetic encoder, or the sequence can first be converted into several binary sequences, each of which is then encoded by a binary arithmetic encoder such as the L-R arithmetic coder. Arithmetic coding, however, requires arithmetic operations for each symbol and is computationally heavy. In this paper, a binary representation method using a Huffman tree is introduced to reduce the number of arithmetic operations, and a new probability approximation for L-R arithmetic coding is further proposed to improve the coding efficiency when the probability of the LPS (least probable symbol) is near 0.5. Simulation results show that our proposed scheme has high coding efficiency and can reduce the number of coding symbols.
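The binarization step can be sketched as follows: each multi-alphabet symbol is mapped to its path in a Huffman tree, so frequent symbols produce few bits for the binary coder to process. This is a minimal stand-in, not the paper's exact construction; the function names are illustrative.

```python
import heapq
from itertools import count

def huffman_codes(probs):
    """Map each symbol to its root-to-leaf bit path in a Huffman tree."""
    tiebreak = count()  # keeps heap comparisons away from the dicts
    heap = [[p, next(tiebreak), {s: ""}] for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for s in lo[2]:
            lo[2][s] = "0" + lo[2][s]
        for s in hi[2]:
            hi[2][s] = "1" + hi[2][s]
        heapq.heappush(heap, [lo[0] + hi[0], next(tiebreak), {**lo[2], **hi[2]}])
    return heap[0][2]

def binarize(sequence, codes):
    """Convert a multi-alphabet sequence into the binary stream that a
    binary arithmetic coder would then encode bit by bit."""
    return "".join(codes[s] for s in sequence)
```

Because Huffman paths are short for probable symbols, fewer binary coding operations are needed than with a fixed-length binarization, which is the operation-count saving the abstract refers to.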
ISBN: 0780362934 (print)
In this paper we describe a lossless coding scheme for the encoding of MPEG-1 Layer III encoded audio bitstreams. Commonly known as MP3, the MPEG-1 Layer III standard has proved widely popular for the transmission of encoded audio files (MP3s) over the Internet. However, the MPEG-1 Layer III standard has been designed with a wide range of applications in mind. As such, the frame sizes are kept small and redundancies between samples in neighboring frames are not exploited. We propose a design which uses a combination of linear predictive coding and arithmetic coding to exploit such redundancies. The proposed coder was tested on a number of Layer III encoded audio (MP3) files and shown to produce an average coding gain of 12.2% over the original Layer III encoded files.
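The principle behind the gain can be illustrated with a toy first-order predictor (the paper's LPC order and arithmetic coder are not specified here, so this is only a sketch): predicting each sample from its neighbor concentrates the residual distribution, lowering the entropy an arithmetic coder must pay for.

```python
import math
from collections import Counter

def empirical_entropy(seq):
    """Zeroth-order empirical entropy in bits/sample: a rough proxy for
    the rate an arithmetic coder would achieve on the sequence."""
    n = len(seq)
    counts = Counter(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def order1_residuals(samples):
    """First-order prediction residuals e[n] = x[n] - x[n-1]
    (the first sample is passed through unchanged)."""
    return [samples[0]] + [samples[i] - samples[i - 1]
                           for i in range(1, len(samples))]
```

On correlated data the residual entropy is far below that of the raw samples, which is exactly the redundancy between neighboring frames that the proposed coder exploits.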
A grammar transform is a transformation that converts any data sequence to be compressed into a grammar from which the original data sequence can be fully reconstructed. In a grammar-based code, a data sequence is first converted into a grammar by a grammar transform and then losslessly encoded. In this paper, a greedy grammar transform is first presented; this grammar transform constructs sequentially a sequence of irreducible grammars from which the original data sequence can be recovered incrementally. Based on this grammar transform, three universal lossless data compression algorithms, a sequential algorithm, an improved sequential algorithm, and a hierarchical algorithm, are then developed. These algorithms combine the power of arithmetic coding with that of string matching. It is shown that these algorithms are all universal in the sense that they can achieve asymptotically the entropy rate of any stationary, ergodic source. Moreover, it is proved that their worst case redundancies among all individual sequences of length n are upper-bounded by c log log n / log n, where c is a constant. Simulation results show that the proposed algorithms outperform the Unix Compress and Gzip algorithms, which are based on LZ78 and LZ77, respectively.
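The paper's irreducible grammar construction is more involved than can be shown here, but its flavor, replacing repeated structure with grammar rules until nothing repeats, can be conveyed by a simpler greedy digram substitution (Re-Pair style), offered purely as an illustrative sketch:

```python
def digram_grammar(seq):
    """Greedily replace the most frequent adjacent pair with a fresh
    nonterminal until no pair repeats. A simplified stand-in for an
    irreducible grammar transform: the final grammar has no repeated
    digram in its right-hand sides or start string."""
    rules = {}
    seq = list(seq)
    while True:
        counts = {}
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
        if not counts:
            break
        best = max(counts, key=counts.get)
        if counts[best] < 2:
            break
        nt = ("N", len(rules))  # fresh nonterminal symbol
        rules[nt] = best
        out, i = [], 0
        while i < len(seq):  # left-to-right nonoverlapping replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == best:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand(sym, rules):
    """Recover the original substring derived from a grammar symbol."""
    if sym in rules:
        a, b = rules[sym]
        return expand(a, rules) + expand(b, rules)
    return [sym]
```

In a grammar-based code, the start string and rules produced by such a transform would then be serialized and fed to an arithmetic coder.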
A technique for lossless compression of seismic signals is proposed. The algorithm employed is based on the equation-error structure, which approximates the signal by minimizing the error in the least-squares sense and estimates the transfer characteristic as a rational function or, equivalently, as an autoregressive moving average process. The algorithm is implemented in the frequency domain. The performance of the proposed technique is compared with the lossless linear predictor and the differentiator approaches for compressing seismic signals. The residual sequence of these schemes is coded using arithmetic coding. The suggested approach yields compression measures (in terms of bits per sample) lower than the lossless linear predictor and the differentiator for compressing different classes of seismic signals.
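The two baselines named in the abstract can be compared on synthetic data; the second-order predictor below is only a crude stand-in for the paper's least-squares equation-error model, but it shows why a better model lowers the bits/sample of the residual stream.

```python
import math
from collections import Counter

def bits_per_sample(seq):
    """Zeroth-order empirical entropy: a rough lower bound on the rate an
    arithmetic coder achieves on the residual sequence."""
    n = len(seq)
    counts = Counter(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def differentiator(x):
    """First-difference residuals, the simplest baseline."""
    return [x[0]] + [x[i] - x[i - 1] for i in range(1, len(x))]

def order2_predictor(x):
    """Residuals of the linear prediction 2*x[n-1] - x[n-2] (a crude
    fixed-coefficient stand-in for a fitted ARMA/equation-error model)."""
    return x[:2] + [x[i] - (2 * x[i - 1] - x[i - 2])
                    for i in range(2, len(x))]
```

On smooth oscillatory signals, each step up in model order shrinks the residual alphabet and hence the coded rate, mirroring the bits-per-sample comparison reported in the paper.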
A universal lossless data compression code called the multilevel pattern matching code (MPM code) is introduced. In processing a finite-alphabet data string of length n, the MPM code operates at O(log log n) levels sequentially. At each level, the MPM code detects matching patterns in the input data string (substrings of the data appearing in two or more nonoverlapping positions). The matching patterns detected at each level are of a fixed length which decreases by a constant factor from level to level, until this fixed length becomes one at the final level. The MPM code represents information about the matching patterns at each level as a string of tokens, with each token string encoded by an arithmetic encoder. From the concatenated encoded token strings, the decoder can reconstruct the data string via several rounds of parallel substitutions. An O(1/log n) maximal redundancy/sample upper bound is established for the MPM code with respect to any class of finite state sources of uniformly bounded complexity. We also show that the MPM code is of linear complexity in terms of time and space requirements. The results of some MPM code compression experiments are reported.
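A single MPM level can be sketched as follows (simplified: real MPM only passes previously unseen blocks down to the next level, where the block length shrinks by a constant factor; the token alphabet and arithmetic coding of the token string are omitted here):

```python
def mpm_level(data, block_len):
    """One MPM level: split the data into fixed-length blocks and emit,
    for each block, the index of its first occurrence (the token).
    The distinct blocks are what the next level would process with a
    smaller block length."""
    blocks = [data[i:i + block_len] for i in range(0, len(data), block_len)]
    first_seen = {}
    tokens, distinct = [], []
    for b in blocks:
        if b not in first_seen:
            first_seen[b] = len(distinct)  # new pattern gets next index
            distinct.append(b)
        tokens.append(first_seen[b])
    return tokens, distinct
```

Matching patterns collapse to repeated tokens, which an arithmetic coder then encodes cheaply; the decoder's "parallel substitution" step is the reverse mapping from tokens back to blocks.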
Advanced broadcast manipulation of TV sequences and enhanced user interfaces for TV systems have resulted in an increased amount of pre- and post-editing of video sequences, where graphical information is inserted. However, in the current broadcasting chain, there are no provisions for enabling an efficient transmission/storage of these mixed video and graphics signals and, at this emerging stage of DTV systems, introducing new standards is not desired. Nevertheless, in the professional video communication chain between content provider and broadcaster and locally, in the DTV receiver, proprietary video-graphics compression schemes can be used to enable more efficient transmission/storage of mixed video and graphics signals. For example, in the DTV receiver case, this will lead to a significant memory-cost reduction. To preserve a high overall image quality, the video and graphics data require independent coding systems, matched with their specific visual and statistical properties. In this paper, we introduce various efficient algorithms that support both the lossless (contour, runlength and arithmetic coding) and the lossy (block predictive coding) compression of graphics data. If the graphics data are a priori mixed with video and the graphics position is unknown at compression time, an accurate detection mechanism is applied to distinguish the two signals, such that independent coding algorithms can be employed for each data type. In the DTV memory-reduction scenario, an overall bit-rate control completes the system, ensuring a fixed compression factor of 2-3 per frame without sacrificing the quality of the graphics.
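Of the lossless tools listed, runlength coding is the simplest to illustrate: graphics overlays contain long constant runs that video does not, which is why graphics-specific coding pays off. A minimal sketch (the paper's actual scan-line format is not specified here):

```python
def run_length_encode(row):
    """Encode one scan line of graphics data as (value, count) pairs;
    the pair stream would then be entropy-coded, e.g. arithmetically."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [tuple(r) for r in runs]

def run_length_decode(runs):
    """Exact inverse: expand each (value, count) pair back to samples."""
    return [v for v, n in runs for _ in range(n)]
```

On a typical overlay row the pair list is an order of magnitude shorter than the raw samples, while natural video, with few constant runs, would see almost no gain, motivating the separate coding paths.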
In this paper, we present a very large scale integration (VLSI) design of adaptive binary arithmetic coding for lossless data compression and decompression. Its main modules consist of an adaptive probability estimation modeler (APEM), an arithmetic operation unit (AOU), and a normalization unit (NU). A new bit-stuffing technique, which simultaneously solves both the carry-over and source-termination problems efficiently, is proposed and designed in the NU. The APEM estimates the conditional probabilities of input symbols efficiently using a table lookup approach with 1.28 kbytes of memory. A new formula which efficiently reflects changes in the symbols' occurrence probabilities is proposed, and a complete binary tree is used to set up the values in the probability table of the APEM. In the AOU, a simplified parallel multiplier, which requires approximately half the area of a standard parallel multiplier while maintaining a good compression ratio, is proposed. Owing to these novel designs, the designed chip can compress any type of data with an efficient compression ratio. An asynchronous interface circuit with an 8-b first-in first-out (FIFO) buffer for input/output (I/O) communication of the chip is also designed. Thus, both I/O and compression operations in the chip can be done simultaneously. Moreover, the concept of design for testability is used and a scan path is implemented in the chip. A prototype 0.8-μm chip has been designed and fabricated in a reasonable die size. This chip can yield a processing rate of 3 Mb/s with a clock rate of 25 MHz.
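The adaptive estimation idea that a lookup-table modeler like the APEM approximates in hardware can be shown in a few lines; the count-based Laplace estimator below is a conceptual stand-in, not the chip's actual table or update formula.

```python
def adaptive_p1(bits, c0=1, c1=1):
    """Laplace-style adaptive estimate of P(bit = 1).

    For each bit, the estimate emitted *before* seeing it is what a
    binary arithmetic coder would use to split its interval; the counts
    are then updated so the model tracks the source statistics."""
    estimates = []
    for b in bits:
        estimates.append(c1 / (c0 + c1))
        if b:
            c1 += 1
        else:
            c0 += 1
    return estimates
```

A hardware modeler replaces the division with a precomputed state table (here, the cited 1.28-kbyte memory), trading a little estimation accuracy for one table lookup per symbol.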
ISBN: 0819435929 (print)
This paper deals with the reversible intraframe compression of grayscale images. With reference to a spatial DPCM scheme, prediction may be accomplished in a space-varying fashion following two main strategies: adaptive, i.e., with predictors recalculated at each pixel position, and classified, in which image blocks or pixels are preliminarily labeled into a number of statistical classes, for which minimum MSE (MMSE) predictors are calculated. In this paper, a trade-off between the above two strategies is proposed, which relies on a classified linear-regression prediction obtained through fuzzy techniques, and is followed by context-based statistical modeling of the resulting prediction errors, to enhance entropy coding. A thorough performance comparison with the most advanced methods in the literature highlights the advantages of the fuzzy approach.
ISBN: 0819435805 (print)
This paper deals with the application of fuzzy and neural techniques to the reversible intraframe compression of grayscale images. With reference to a spatial DPCM scheme, prediction may be accomplished in a space-varying fashion following two main strategies: adaptive, i.e., with predictors recalculated at each pixel position, and classified, in which image blocks or pixels are preliminarily labeled into a number of statistical classes, for which minimum MSE predictors are calculated. Here, a trade-off between the above two strategies is proposed, which relies on a space-varying linear-regression prediction obtained through fuzzy techniques, and is followed by context-based statistical modeling of prediction errors, to enhance entropy coding. A thorough comparison with the most advanced methods in the literature, as well as an investigation of performance trends with respect to the working parameters, highlight the advantages of the fuzzy approach.
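The classified-prediction idea common to both papers can be sketched with a hard two-class switch; the fuzzy approach instead blends per-class MMSE predictors with soft memberships, so the code below is only a crude, non-fuzzy stand-in with a made-up gradient rule.

```python
def classified_dpcm_errors(img):
    """DPCM prediction errors from a two-class, gradient-switched
    predictor: each pixel is predicted from its west or north neighbour,
    the class being chosen from the local gradient in the causal
    neighbourhood (a hard-decision stand-in for fuzzy classification)."""
    h, w = len(img), len(img[0])
    errors = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if y == 0 and x == 0:
                pred = 0                      # no causal neighbours
            elif y == 0:
                pred = img[y][x - 1]          # first row: west only
            elif x == 0:
                pred = img[y - 1][x]          # first column: north only
            else:
                west, north, nw = img[y][x - 1], img[y - 1][x], img[y - 1][x - 1]
                # a strong horizontal gradient in the row above suggests a
                # vertical edge: predict along it from the north neighbour
                pred = north if abs(west - nw) < abs(north - nw) else west
            errors[y][x] = img[y][x] - pred
    return errors
```

Whatever the classification rule, the point is the same: errors from a well-matched per-class predictor are concentrated near zero, and context-based modeling of those errors is what makes the subsequent entropy coding effective.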