ISBN:
(Print) 9781424413973
This paper introduces a family of integer-to-integer approximations to the Cartesian-to-polar coordinate transformation and analyzes its application to lossy compression. A high-rate analysis is provided for an encoder that first uniformly scalar quantizes, then transforms to "integer polar coordinates," and finally separately entropy codes angle and radius. For sources separable in polar coordinates, the performance (at high rate) is shown to match that of entropy-constrained unconstrained polar quantization-where the angular quantization is allowed to depend on the radius. Thus, for sources separable in polar coordinates but not separable in rectangular coordinates-including certain Gaussian scale mixtures-the proposed system performs better than any transform code. Furthermore, unlike unconstrained polar quantization, integer polar coordinates are appropriate for lossless compression of integer-valued vectors. Combination of integer polar coordinates with integer-to-integer transform coding is also discussed.
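To give a concrete (and deliberately simplified) feel for the idea, the Python sketch below maps an integer point to an integer radius and angle index, with the number of angle bins growing in proportion to the radius so that angular resolution is allowed to depend on the radius. All names are ours; note that this illustrative forward map is not exactly invertible, whereas the paper's construction is a true integer-to-integer approximation.

```python
import math

def to_integer_polar(x: int, y: int):
    """Map an integer point (x, y) to an integer (radius, angle-index) pair.

    Illustrative only: the angle step shrinks like 1/r, so the number of
    angle bins at radius r is about round(2*pi*r), mimicking uniform
    arc-length resolution.  (Unlike the paper's construction, this sketch
    is not an exact integer-to-integer bijection.)
    """
    r = round(math.hypot(x, y))
    if r == 0:
        return 0, 0
    n_bins = max(1, round(2 * math.pi * r))      # angle bins at this radius
    theta = math.atan2(y, x) % (2 * math.pi)
    k = round(theta / (2 * math.pi / n_bins)) % n_bins
    return r, k

def from_integer_polar(r: int, k: int):
    """Approximate inverse: reconstruct the nearest integer point."""
    if r == 0:
        return 0, 0
    n_bins = max(1, round(2 * math.pi * r))
    theta = 2 * math.pi * k / n_bins
    return round(r * math.cos(theta)), round(r * math.sin(theta))
```

For points not too close to the origin, the round trip recovers the original point, which is the property that makes radius and angle separately entropy-codable without a large distortion penalty.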
This paper addresses the problem of error-resilient decoding of bitstreams produced by the CABAC (context-based adaptive binary arithmetic coding) algorithm used in the H.264 video coding standard. The paper describes...
ISBN:
(Print) 9781424412211
In this paper, we propose a high-speed CAVLC (context-based adaptive variable-length coding) decoder for H.264/AVC. Previous hardware architectures perform five steps in series to obtain both the syntax elements needed to restore the residual and the codeword length needed to locate the next input bits (which we call the 'valid bits'). Since obtaining the valid bits takes several cycles and must be iterated many times during CAVLC decoding, it increases the decoding time. This paper proposes two techniques to reduce the computational cycles spent on the valid bits. The first reduces the decoding steps from five to four by combining the total_coeff step and the trailing_ones step into one. The second obtains the valid bits directly by shifting an additional shift register instead of using a controller and accumulator. By adopting these two techniques, the required processing time was reduced by 26% compared with previous architectures. The design was described in a hardware description language, and the total logic gate count was 14.2k using a 0.18 µm standard-cell library.
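The valid-bits idea can be illustrated in software as a bit buffer that is simply shifted by each decoded codeword's length, rather than recomputed through a controller/accumulator loop. The class below is a behavioral sketch only; the names and structure are ours and abstract away the hardware details.

```python
class BitShifter:
    """Toy software model of the 'valid bits' shift-register idea:
    after a codeword of known length is decoded, the buffer is shifted
    by that length so the next codeword is immediately aligned."""

    def __init__(self, data: bytes):
        self.buf = int.from_bytes(data, 'big')
        self.nbits = 8 * len(data)          # bits still valid in buf

    def peek(self, n: int) -> int:
        """Look at the next n bits without consuming them."""
        return (self.buf >> (self.nbits - n)) & ((1 << n) - 1)

    def consume(self, n: int) -> int:
        """Return the next n bits and shift them out of the buffer."""
        v = self.peek(n)
        self.nbits -= n
        self.buf &= (1 << self.nbits) - 1
        return v
```

A decoder loop would call `consume(length)` once per parsed codeword, which is the software analogue of shifting the extra register instead of iterating a controller.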
ISBN:
(Print) 9780819467188
As today's video applications are requested on many portable end-user devices, which have limited capacity for holding and processing large amounts of video data, there is a need for bit-rate improvement in compression algorithms. The objective of this paper is to propose a hardware-based post-compression enhancer situated between the Video Coding Layer and the Network Abstraction Layer of H.264. Our research analyzes the bitstreams produced by the emerging H.264 standard. The goal is to enhance compression rates through simple post-compression techniques based on symbol statistics. The CABAC and CAVLC entropy coders used in H.264 work optimally for 1-bit symbols, for which the statistical distribution is nearly optimal. Our studies reveal that the bitstreams show similar behavior for 8-bit symbols, so post-compression using well-known byte-based mechanisms will not yield better results; furthermore, our studies also show that such mechanisms even degrade the original compression rate. Nevertheless, a non-uniform distribution is found for 6-bit symbols in 2046-bit discrete data packets, which can be exploited to boost compression. On average, this distribution varies between 5.4% for the most probable symbol and 0.98% for the least probable symbol. Thus, simply coding a few of the most probable symbols results in bit-rate reduction. A 1-bit "compression enhancer used" flag penalty must be introduced for each discrete packet, increasing its size by 0.049%.
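A toy software model of the packet-level decision can make the trade-off concrete. This is our simplification, not the paper's exact code: recode only the single most probable 6-bit symbol with a 1-bit code, escape every other symbol with a prefix bit plus its raw 6 bits, and spend one flag bit per packet to signal whether the enhancement is used at all.

```python
from collections import Counter

def packet_bits_enhanced(symbols):
    """Estimated coded size (bits) of one packet of 6-bit symbols under a
    toy post-compression scheme: the most probable symbol (MPS) is coded
    as '1', every other symbol as '0' + its 6 raw bits, and a 1-bit
    per-packet flag signals whether the scheme is used at all."""
    raw = 6 * len(symbols)                       # size without enhancement
    _, mps_count = Counter(symbols).most_common(1)[0]
    enhanced = mps_count * 1 + (len(symbols) - mps_count) * 7
    # The flag bit is paid either way; pick the cheaper representation.
    return 1 + min(raw, enhanced)
```

With a heavily skewed packet the enhanced form wins; with a near-uniform packet the scheme falls back to raw coding and costs only the one flag bit, which mirrors the 0.049% per-packet penalty noted above.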
ISBN:
(Print) 9781424414369
Fidelity scalability involves the refinement of residual texture information. The entropy coding of texture refinement information in the scalable video coding (SVC) extension of H.264/AVC relies on a simple statistical model that is tuned to an encoder-specific way of quantization for generating a single fidelity enhancement layer on top of the backward compatible base layer. For fidelity enhancement layers above the first layer, we demonstrate how and why the current model fails to properly reflect the statistics of texture refinement information. By analyzing the specific properties of the typical quantization process in fidelity scalable coding of SVC, we are able to derive a generic modeling approach for coding of refinement symbols, independent of the specific choice of deadzone parameters and classification rules. Experimental results for a broad range of quantization parameters show averaged bit-rate savings of around 5% (relative to the total bit rate) by using our proposed context modeling approach for a representative set of video sequences in a test scenario including up to three fidelity enhancement layers.
ISBN:
(Print) 0769530508
Widely used digital cameras nowadays use a single-sensor color filter array (CFA), which captures only one of the three components Red, Green, and Blue (RGB) per pixel. These interleaved RGB-valued images are called mosaic images. A mosaic image is one third of the full RGB image's size, and it can be compressed further to reduce storage. In this paper we propose a new, efficient method of lossless compression for color mosaic images. First the mosaic image is transformed by a 5/3 forward wavelet transform, which is well suited to color mosaic data. We also propose a low-complexity adaptive context-based modified Golomb-Rice coding technique to compress the coefficients of the transform. The lossless compression performance of the proposed method on color mosaic images is arguably the best so far among existing lossless image codecs.
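The basic building blocks of such a coder can be sketched briefly. This is a generic adaptive Rice coder in the spirit of, e.g., LOCO-I/JPEG-LS; the paper's exact context rules and modifications are not reproduced, and all names here are ours.

```python
def zigzag(v: int) -> int:
    """Map a signed coefficient to a non-negative integer:
    0, -1, 1, -2, 2  ->  0, 1, 2, 3, 4."""
    return (v << 1) if v >= 0 else (-(v << 1)) - 1

def rice_encode(value: int, k: int) -> str:
    """Rice code: quotient in unary (q ones + '0'), remainder in k bits."""
    q = value >> k
    bits = '1' * q + '0'
    if k:
        bits += format(value & ((1 << k) - 1), f'0{k}b')
    return bits

def adaptive_k(recent_values) -> int:
    """Pick the Rice parameter from a running mean of recent magnitudes
    (context adaptation in the spirit of JPEG-LS; the paper's exact rule
    is not reproduced here)."""
    if not recent_values:
        return 0
    mean = sum(recent_values) / len(recent_values)
    k = 0
    while (1 << k) < mean:
        k += 1
    return k
```

In use, each wavelet coefficient would be zigzag-mapped and Rice-coded with a `k` chosen per context from previously seen magnitudes in that context.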
ISBN:
(Print) 9781424414369
In this paper an algorithm is proposed that performs near-lossless image compression. For each pixel in a row of the image, a group of value-states is considered whose values are close to that of the pixel. A trellis is constructed for every row of the image, where the nodes of the trellis are the states of the pixels of that row. The goal of the algorithm is to find a path on this trellis that creates a sequence which can be efficiently coded using run-length encoding (RLE). For sections of the row where suitable RLE cannot be achieved, entropy minimization is employed to complete a path on the trellis. Application of the algorithm to a wide range of standard images shows that the scheme, while having low computational complexity, is competitive with other near-lossless image compression methods.
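The run-extending part of the idea can be illustrated with a greedy one-pass stand-in (our simplification; the paper's trellis search also covers the entropy-minimization fallback): each pixel is replaced by the previous output value whenever that value lies within the near-lossless tolerance.

```python
def rle_friendly_row(row, delta):
    """Greedily pick, for each pixel, a value within +/-delta of the
    original that extends the current run when possible.  A simplified
    stand-in for the paper's trellis search over per-pixel value-states."""
    out = []
    for p in row:
        if out and abs(out[-1] - p) <= delta:
            out.append(out[-1])          # extend the current run
        else:
            out.append(p)                # start a new run at the pixel value
    return out
```

Every output value stays within ±delta of its pixel (the near-lossless guarantee), while slowly varying regions collapse into long runs that RLE codes cheaply.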
ISBN:
(Print) 9780819466211
Recently, a new class of distributed source coding (DSC) based video coders has been proposed to enable low-complexity encoding. However, to date, these low-complexity DSC-based video encoders have been unable to compress as efficiently as motion-compensated predictive coding based video codecs, such as H.264/AVC, due to insufficiently accurate modeling of video data. In this work, we examine achieving H.264-like high compression efficiency with a DSC-based approach without the encoding complexity constraint. The success of H.264/AVC highlights the importance of accurately modeling the highly non-stationary video data through fine-granularity motion estimation. This motivates us to deviate from the popular approach of approaching the Wyner-Ziv bound with sophisticated capacity-achieving channel codes that require long block lengths and high decoding complexity, and instead focus on accurately modeling video data. Such a DSC-based, compression-centric encoder is an important first step towards building a robust DSC-based video coding framework.
ISBN:
(Print) 9781424414369
Accurate probability estimation is key to efficient compression in the entropy coding phase of state-of-the-art video coding systems. Probability estimation can be enhanced if the contexts in which symbols occur are used during the estimation phase. However, these contexts have to be carefully designed to avoid negative effects. Methods that use tree structures to model the contexts of various syntax elements have proven efficient in image and video coding. In this paper we use such structures to build optimised contexts for scalable wavelet-based video coding. With the proposed approach, contexts are designed separately for intra-coded frames and motion-compensated frames, taking into account the varying statistics across different spatio-temporal subbands. Moreover, contexts are designed separately for different bit-planes. Comparison with compression using the fixed contexts of Embedded ZeroBlock Coding (EZBC) shows improvements when context modelling on tree structures is applied.
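The core of context-based probability estimation is simple to sketch: keep a separate adaptive estimate per context. The class below uses plain counting with Laplace smoothing as an illustrative stand-in for the tree-structured context design described above (all names are ours).

```python
from collections import defaultdict

class ContextModel:
    """Per-context adaptive binary probability estimates via counting
    with Laplace smoothing.  Each distinct context key (e.g. a tuple of
    neighbouring symbol values, subband, and bit-plane) gets its own
    running estimate."""

    def __init__(self):
        self.counts = defaultdict(lambda: [1, 1])   # [count of 0s, count of 1s]

    def p_one(self, ctx) -> float:
        """Estimated probability that the next bit in context ctx is 1."""
        zeros, ones = self.counts[ctx]
        return ones / (zeros + ones)

    def update(self, ctx, bit: int) -> None:
        """Record an observed bit for context ctx."""
        self.counts[ctx][bit] += 1
```

An arithmetic coder would query `p_one(ctx)` before coding each bit and call `update(ctx, bit)` afterwards; the design question the abstract addresses is how to choose the context keys, e.g. separately per frame type, subband, and bit-plane.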