This paper presents an improved sequential MAP estimator to be used as a joint source-channel decoding technique for CABAC-encoded data. The decoding process is compatible with realistic implementations of CABAC in standards like H.264, i.e., it handles adaptive probabilities, context modeling, and integer arithmetic coding. Soft-input decoding is obtained using an improved sequential decoding technique, which allows a tradeoff between complexity and efficiency. The algorithms are simulated in a context reminiscent of H.264. Error detection is realized by exploiting, on one side, the properties of the binarization scheme and, on the other side, the redundancy left in the code string. As a result, the CABAC compression efficiency is preserved and no additional redundancy is introduced in the bit stream. Simulation results outline the efficiency of the proposed techniques for encoded data sent over AWGN and UMTS-OFDM channels.
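The abstract does not spell out the estimator itself, but the flavor of sequential soft-input decoding can be sketched: hypothesized bit paths are ranked by a cumulative MAP metric that combines channel soft values with a source prior, and the path list is pruned to trade complexity against efficiency. A minimal Python sketch under our own simplifications (BPSK-style LLRs, a memoryless source prior, invented function names); the paper's decoder additionally uses CABAC's adaptive context models and discards paths that violate the binarization scheme, which is where its error detection comes from:

```python
import heapq
import math

def stack_decode(llrs, prior_one=0.5, max_paths=256):
    """Sequential (stack) MAP bit estimation over soft channel values.

    llrs: per-bit channel LLRs, log P(y|b=0) / P(y|b=1).
    prior_one: source prior P(b=1); the prior term is the residual
    redundancy that lets the decoder rank competing paths.
    """
    log_p0 = math.log(1.0 - prior_one)
    log_p1 = math.log(prior_one)
    heap = [(0.0, ())]                      # (negated path metric, bit path)
    while heap:
        neg_metric, path = heapq.heappop(heap)   # best-first expansion
        if len(path) == len(llrs):
            return list(path)
        llr = llrs[len(path)]
        for bit, prior in ((0, log_p0), (1, log_p1)):
            # Channel term: +llr/2 favours b=0, -llr/2 favours b=1.
            chan = 0.5 * llr if bit == 0 else -0.5 * llr
            heapq.heappush(heap, (neg_metric - (chan + prior), path + (bit,)))
        if len(heap) > max_paths:           # prune: complexity/efficiency knob
            heap = heapq.nsmallest(max_paths, heap)
            heapq.heapify(heap)
    return []
```

Shrinking max_paths lowers complexity at the cost of more decoding failures, which is the tradeoff the abstract refers to.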
Lossless compression is still a challenging task in the case of microarray images. This research proposes two algorithms that aim to improve lossless compression efficiency for high-spatial-resolution microarray images using general entropy codecs, namely Huffman and arithmetic coders, and the image compression standard JPEG 2000. Using the standards ensures that decoders remain available to reassess the images in future applications. Typically, microarray images have a bit depth of 16. In proposed algorithm 1, each image's per-bit-plane entropy profile is calculated to automatically determine a better threshold T at which to split the bit planes into foreground and background sub-images. T is initially set to 8; algorithm 1 then updates T so as to balance the average per-bit-plane entropies of the two segmented sub-images, improving the lossless compression results. Codecs are applied individually to the produced sub-images. Proposed algorithm 2 is designed to increase the lossless compression efficiency of any unmodified JPEG 2000-compliant encoder while reducing side-information overhead. Here, pixel-intensity reindexing, which reshapes the histograms of the sub-images segmented by algorithm 1, is implemented and confirmed to give better lossless JPEG 2000 results than applying the codec to the original image. The lossless JPEG 2000 compression performance on microarray images is also compared to JPEG-LS in particular. Experiments validating the methods are carried out on seven benchmark datasets, namely ApoA1, ISREC, Stanford, MicroZip, GEO, Arizona, and IBB. The average first-order entropy of these datasets is calculated and compared across codecs, and the results improve on competing efforts in the literature.
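A simplified reading of algorithm 1's thresholding step can be sketched directly: compute the binary entropy of each of the 16 bit planes, then pick the split point T whose two plane groups have the most balanced average entropies. The function names and the exact balancing criterion below are our assumptions; the paper's algorithm may differ in detail:

```python
import numpy as np

def bitplane_entropies(img16):
    """Binary entropy of each of the 16 bit planes (plane 0 = LSB)."""
    ents = []
    for b in range(16):
        p = float(((img16 >> b) & 1).mean())
        if p in (0.0, 1.0):
            ents.append(0.0)
        else:
            ents.append(-p * np.log2(p) - (1 - p) * np.log2(1 - p))
    return np.array(ents)

def choose_threshold(ents):
    """Pick the split T that best balances the average entropies of the
    two plane groups (our reading of the balancing criterion)."""
    best_t, best_gap = 8, float("inf")        # T starts at 8
    for t in range(1, 16):
        gap = abs(ents[:t].mean() - ents[t:].mean())
        if gap < best_gap:
            best_t, best_gap = t, gap
    return best_t

def split_image(img16, T):
    """High and low plane groups as the two sub-images coded separately."""
    return img16 >> T, img16 & ((1 << T) - 1)
```

The two sub-images returned by split_image are then handed individually to the Huffman, arithmetic, or JPEG 2000 codec.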
JPEG2000 is an upcoming compression standard for still images that has a feature set well tuned for diverse data dissemination. These features are possible due to the adoption of the discrete wavelet transform, intra-subband bit-plane coding, and binary arithmetic coding in the standard. In this paper, we propose a system-level architecture capable of encoding and decoding the JPEG2000 core algorithm that has been defined in Part I of the standard. The key components include dedicated architectures for the wavelet, bit-plane, and arithmetic coders and memory interfacing between the coders. The system architecture has been implemented in VHDL and its performance evaluated for a set of images. The estimated area of the architecture, in 0.18-μm technology, is 3-mm square, and the estimated frequency of operation is 200 MHz.
In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run-length coding, adaptive arithmetic coding, and a variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression, making it particularly suitable for networked multimedia applications.
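The core matching step can be illustrated independently of the k-d tree acceleration the authors use: for each image block, search a database (for example, the previously decoded region) for a patch whose maximum per-pixel deviation stays within the current distortion level; if none exists, the block would be emitted as a literal. A brute-force Python sketch under those assumptions:

```python
import numpy as np

def match_block(block, database, dmax):
    """Approximate 2D pattern matching: return the (row, col) of the first
    database patch within max per-pixel distortion dmax, else None."""
    bh, bw = block.shape
    H, W = database.shape
    blk = block.astype(np.int64)
    for r in range(H - bh + 1):
        for c in range(W - bw + 1):
            patch = database[r:r + bh, c:c + bw].astype(np.int64)
            if np.abs(patch - blk).max() <= dmax:
                return (r, c)
    return None
```

A match is encoded as a (row, col) pointer into the database, which is where the Lempel-Ziv-style compression gain comes from; the adaptive distortion level dmax controls the rate-quality tradeoff.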
The concept of context-free grammar (CFG)-based coding is extended to the case of countable-context models, yielding context-dependent grammar (CDG)-based coding. Given a countable-context model, a greedy CDG transform is proposed. Based on this greedy CDG transform, two universal lossless data compression algorithms, an improved sequential context-dependent algorithm and a hierarchical context-dependent algorithm, are then developed. It is shown that these algorithms are all universal in the sense that they can achieve asymptotically the entropy rate of any stationary, ergodic source with a finite alphabet. Moreover, it is proved that these algorithms' worst-case redundancies among all individual sequences of length n from a finite alphabet are upper-bounded by d log log n / log n, as long as the number of distinct contexts grows with the sequence length n in the order of O(n^α), where 0 < α < 1 and d are positive constants. It is further shown that for some nonstationary sources, the proposed context-dependent algorithms can achieve better expected redundancies than any existing CFG-based codes, including the Lempel-Ziv algorithm, the multilevel pattern matching algorithm, and the context-free algorithms in Part I of this series of papers.
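The paper's greedy CDG transform is defined relative to a countable-context model, but its CFG ancestor is easy to illustrate: repeatedly replace the most frequent adjacent symbol pair with a fresh nonterminal until no pair repeats. A short Python sketch of that greedy (Re-Pair-style) transform, for intuition only; the context-dependent version additionally conditions the production rules on contexts:

```python
from collections import Counter

def greedy_grammar(seq):
    """Greedy CFG transform in the Re-Pair style: replace the most frequent
    adjacent pair with a fresh nonterminal until no pair occurs twice.
    Symbols are assumed to be non-negative integers."""
    seq = list(seq)
    rules = {}
    next_sym = max(seq) + 1
    while len(seq) > 1:
        pairs = Counter(zip(seq, seq[1:]))
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break
        rules[next_sym] = pair              # new rule: next_sym -> pair
        out, i = [], 0
        while i < len(seq):                 # left-to-right replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(next_sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        next_sym += 1
    return seq, rules                       # final string plus grammar
```

The compressed representation is the final string together with the rules, both of which are then entropy coded; universality results of the kind quoted above bound how far this representation can fall short of the source entropy rate.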
In this letter, we propose a new image coding technique that combines space-frequency quantization with context-based modeling using fuzzy logic. Compression results show that the proposed coder outperforms state-of-the-art coders in the rate-distortion sense for compression of processed synthetic aperture radar amplitude data.
Recently, we proposed a coding algorithm for point cloud geometry based on a rather different approach from the popular octree representation. In our algorithm, the point cloud is decomposed into silhouettes, hence the name Silhouette Coder, and context-adaptive arithmetic coding is used to exploit redundancies within the point cloud (intra-frame coding) and with respect to a reference point cloud (inter-frame coding). In this letter, we build on our previous work and propose a context selection algorithm as a pre-processing stage. With this algorithm, the point cloud is first parsed to test a large number of candidate context locations; the algorithm selects a small number of these contexts that best reflect the current point cloud, and the point cloud is then encoded with this choice. The proposed method further improves the results of our previous coder, Silhouette 4D, by 10% on average on a JPEG Pleno dynamic point cloud dataset, and achieves bitrates competitive with some high-quality lossy coders such as MPEG G-PCC.
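The selection stage can be caricatured in a few lines: score each candidate context location by how well the bit found there predicts the current bit, for example by the conditional entropy it induces, and keep the few best-scoring offsets. A single-offset 2D Python sketch; the actual coder works on silhouette images with multi-bit contexts, and the scoring rule here is our assumption:

```python
import numpy as np

def select_contexts(bitmap, candidates, k):
    """Rank candidate context offsets by the conditional entropy of the
    current bit given the neighbour bit at that offset; keep the k best.

    bitmap: 2D array of 0/1 values. candidates: list of (dy, dx) offsets.
    """
    H, W = bitmap.shape
    scored = []
    for dy, dx in candidates:
        counts = np.ones((2, 2))            # counts[c, b], Laplace smoothed
        for y in range(H):
            for x in range(W):
                cy, cx = y + dy, x + dx
                c = bitmap[cy, cx] if 0 <= cy < H and 0 <= cx < W else 0
                counts[c, bitmap[y, x]] += 1
        cond = counts / counts.sum(axis=1, keepdims=True)   # P(bit | ctx)
        joint = counts / counts.sum()                       # P(ctx, bit)
        h = -(joint * np.log2(cond)).sum()                  # H(bit | ctx)
        scored.append((h, (dy, dx)))
    scored.sort()
    return [off for _, off in scored[:k]]
```

Lower conditional entropy means the arithmetic coder spends fewer bits on the current symbol, which is why selecting contexts that "better reflect the current point cloud" improves the rate.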
A technique for lossless compression of seismic signals is proposed. The algorithm employed is based on the equation-error structure, which approximates the signal by minimizing the error in the least-squares sense and estimates the transfer characteristic as a rational function or, equivalently, as an autoregressive moving-average process. The algorithm is implemented in the frequency domain. The performance of the proposed technique is compared with the lossless linear predictor and differentiator approaches for compressing seismic signals; the residual sequence of these schemes is coded using arithmetic coding. The suggested approach yields compression measures (in bits per sample) lower than the lossless linear predictor and the differentiator for different classes of seismic signals.
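The baselines in this comparison are easy to reproduce: form integer prediction residuals with a differentiator or a short linear predictor and measure their first-order entropy, which approximates the rate an arithmetic coder would achieve on them. A sketch of that measurement (the paper's own equation-error ARMA estimator is not reproduced here):

```python
import numpy as np

def residual_bits_per_sample(x, order=2):
    """First-order entropy (bits/sample) of prediction residuals, a proxy
    for the arithmetic-coded rate. order=1 is the differentiator; order=2
    a simple second-order integer predictor."""
    x = np.asarray(x, dtype=np.int64)
    if order == 1:
        res = np.diff(x)                       # e[n] = x[n] - x[n-1]
    else:
        res = x[2:] - 2 * x[1:-1] + x[:-2]     # e[n] = x[n] - 2x[n-1] + x[n-2]
    _, counts = np.unique(res, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

A better model concentrates the residual distribution around zero, lowering this entropy; the paper's claim is that the frequency-domain equation-error fit achieves a lower figure than either baseline.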
A VLSI architecture of a context-adaptive binary arithmetic coding (CABAC) entropy codec for high-efficiency video coding (HEVC), the next-generation video coding standard, is presented, and an analysis of its performance in terms of effective processing throughput, i.e. bins/s, is provided. For high throughput, the architecture is designed to process up to two regular bins per unit time while minimising pipeline stalls due to the inherent dependencies of CABAC, through various optimisations including context forwarding and speculative decoding. Experiments show an effective throughput of up to 1.60 bins, or 1.41 bits, per cycle, which corresponds to 469.5 Mbit/s at an operating frequency of 333 MHz under practical video coding environments.
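The quoted figures are internally consistent, which is a quick check worth making: bits per cycle multiplied by the clock frequency gives the bit rate.

```python
# Consistency check on the reported throughput figures.
bits_per_cycle = 1.41
clock_mhz = 333
print(f"{bits_per_cycle * clock_mhz:.1f} Mbit/s")   # -> 469.5 Mbit/s
```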
Context weighting procedures are presented for sources with models (structures) in four different classes. Although the procedures are designed for universal data compression purposes, their generality allows application in the area of classification.