In this paper, we present an integrated rate control and entropy coding (IREC) scheme for JPEG 2000. Based on an efficient heap-based rate allocation algorithm, the characteristics of the image content, the subband coefficient and bitplane weighting models, and the user-specified optimization criteria, the proposed IREC scheme selectively performs entropy coding only on those parts of the image data that are likely to be included in the final bitstream. Since entropy coding is the most time- and power-consuming part, the IREC scheme reduces the overall computation and power consumption of the JPEG 2000 encoding procedure. Both theoretical analysis and empirical results validate the advantages of the IREC scheme. For example, when encoding the Lenna color image at a target compression ratio of 128:1, about 93% of the computation and power consumption of the entropy coding procedure can be saved compared with separate rate control and entropy coding schemes.
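To make the selection mechanism concrete, here is a minimal Python sketch of heap-based pass selection under a byte budget. The slope and size estimates, the greedy steepest-slope-first rule, and all identifiers are illustrative assumptions, not the paper's exact algorithm.

# Coding passes are prioritized by an estimated R-D slope; only the passes
# that fit the byte budget are sent to the (expensive) entropy coder.
import heapq

def select_passes(passes, byte_budget):
    """passes: list of (est_slope, est_bytes, pass_id). Returns ids to encode."""
    heap = [(-slope, size, pid) for slope, size, pid in passes]
    heapq.heapify(heap)                      # max-heap via negated slopes
    selected, spent = [], 0
    while heap and spent < byte_budget:
        neg_slope, size, pid = heapq.heappop(heap)
        if spent + size <= byte_budget:      # greedy budget fill
            selected.append(pid)
            spent += size
    return selected

# Example: three hypothetical coding passes (slope, bytes, id).
print(select_passes([(5.0, 300, "b0p0"), (2.5, 500, "b1p0"), (0.7, 400, "b0p1")], 700))
# -> ['b0p0', 'b0p1']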
This paper describes a new class of variable length codes (VLCs) that make it possible to exploit first-order source statistics while remaining resilient to transmission errors. It extends the work of [H. Jegou et al., 2003] to take the source conditional probabilities into account. Theoretical performance in terms of compression efficiency and error resilience is analyzed.
Arithmetic coding achieves a superior coding rate when encoding a binary source, but its lack of speed makes it an inferior choice when true high-performance encoding is needed. This paper presents a practical implementation of fast entropy coders for binary messages that use only bit shifts and table lookups. To limit code table size, the code lengths are constrained with a type of variable-to-variable (VV) length code created by merging source strings, referred to as "merged codes". With merged codes, a desired level of speed can be achieved by adjusting the number of bits read from the source at each step. The most efficient merged codes yield a coder with a worst-case inefficiency of 0.4% relative to the Shannon entropy. Using a hybrid Golomb-VV bin coder, a compression ratio competitive with other state-of-the-art coders is achieved at superior throughput.
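As a toy illustration of the table-lookup coding style described above, the sketch below consumes a fixed number of source bits per step and emits a codeword by a single table lookup, with no arithmetic. The tiny code table is made up for the example; real merged-code tables are derived from the source statistics by merging strings.

# 2 source bits -> codeword; table skewed toward a source with p(0) >> p(1).
CODE = {
    "00": "0",
    "01": "10",
    "10": "110",
    "11": "111",
}

def encode(bits, k=2):
    out = []
    for i in range(0, len(bits) - len(bits) % k, k):
        out.append(CODE[bits[i:i + k]])     # one lookup per k input bits
    return "".join(out)

print(encode("0000011000"))  # chunks 00,00,01,10,00 -> "00101100"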
We present a pattern recognizer that classifies a variety of objects and their pose on a table from real-world images. The weights of a linear discriminant are learned by estimating the relative information contributed by a set of features to the final decision. Evaluation of the discriminant is very fast, allowing about three decisions per second on datasets without segmentation difficulties, such as the COIL-100 database. Experiments on that database yield high recognition rates and good generalisation over pose.
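As a rough illustration of why per-decision evaluation is cheap, here is a hedged sketch of classifying with a precomputed linear discriminant; the weights and features are toy numbers, and the information-based weight learning itself is not reproduced.

# Each decision is one matrix-vector product plus an argmax.
import numpy as np

weights = np.array([[0.8, -0.2, 0.1],     # class 0
                    [0.1,  0.7, 0.3]])    # class 1
bias = np.array([0.0, -0.1])

def classify(features):
    scores = weights @ features + bias
    return int(np.argmax(scores))

print(classify(np.array([1.0, 0.2, 0.5])))  # -> 0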
ISBN (Print): 0780385543
This paper proposes a variable block-size transform and context-based entropy coding techniques for the enhancement layer of FGS (fine granularity scalable) video coding. First, the variable block-size transform is introduced into the enhancement layer to improve the performance of FGS in terms of both visual quality and PSNR. Unlike in traditional single-layer coding, an R-D selection algorithm is proposed to optimally decide the transform size of each block while maintaining consistent performance across a range of bit rates. Furthermore, to fully exploit the characteristics and correlations of the symbols coded in the FGS enhancement layer, different context models are designed for the arithmetic coding according to symbol type and transform size. Experimental results show that the coding efficiency of FGS can be increased by 0.2-0.9 dB with the proposed techniques.
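The R-D selection step can be sketched as picking, per block, the transform size that minimizes the Lagrangian cost J = D + lambda * R. The cost numbers below are placeholders, and the selection rule is a generic R-D chooser, not the paper's bit-rate-range-aware algorithm.

def pick_transform_size(candidates, lam):
    """candidates: dict mapping transform size -> (distortion, rate_bits)."""
    return min(candidates, key=lambda s: candidates[s][0] + lam * candidates[s][1])

block_costs = {4: (120.0, 96), 8: (150.0, 60)}   # toy (D, R) per size
print(pick_transform_size(block_costs, lam=1.0))  # J(4)=216 vs J(8)=210 -> 8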
The modified differential pulse coded modulation (DPCM) codec with multi-rate processing has been shown to be able to efficiently code sources with monotonically decreasing spectra at low bit rates [A.N. Kim and T.A. Ramstad]. A practical image coder is designed based on this approach. Two-dimensional DPCM is used along with decimation and interpolation to reduce the number of transmitted samples; the decimation rate depends on the signal spectrum and the bit rate. Further bit-rate reduction is achieved through adaptive entropy coding. A Wiener filter is appended to the decoder to minimize the distortion caused by quantization noise. The decimation filter can be implemented using simple IIR filters, and the necessary side information is low. Simulation results show that the coder gives good compression performance at low bit rates, superior to the conventional DPCM codec and JPEG. Subjective quality can be as good as that of JPEG2000, while at very low bit rates the proposed codec retains certain image characteristics better than JPEG2000.
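For orientation, a minimal 1-D DPCM predict-quantize loop is sketched below; the paper's codec uses 2-D prediction plus decimation, interpolation, and a Wiener post-filter, none of which is shown, and the uniform quantizer step q is chosen arbitrarily.

def dpcm_encode(samples, q=4):
    pred, codes = 0, []
    for x in samples:
        e = round((x - pred) / q)           # quantized prediction error
        codes.append(e)
        pred = pred + e * q                 # track decoder-side reconstruction
    return codes

def dpcm_decode(codes, q=4):
    pred, out = 0, []
    for e in codes:
        pred = pred + e * q
        out.append(pred)
    return out

print(dpcm_decode(dpcm_encode([10, 12, 15, 14])))  # -> [8, 12, 16, 16]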
A novel near-lossless compression approach to perceptual audio coding is presented in this paper. The algorithm is based on the lossless compression approach and uses a Base-2 Logarithm Algorithm to improve […]. The error is less than 1/256, and compression is faster than the TTAENC encoder.
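A heavily hedged sketch of a base-2-logarithm near-lossless step: store a sample's log2 magnitude at fixed precision and reconstruct by exponentiation. The 1/256 step is chosen only to echo the error bound quoted above; the paper's actual mapping is not reproduced here.

import math

STEP = 1 / 256

def log_quantize(x):
    return round(math.log2(x) / STEP)       # integer log-domain code

def log_dequantize(code):
    return 2 ** (code * STEP)

x = 1000.0
print(abs(log_dequantize(log_quantize(x)) - x) / x)  # small relative error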
In this paper, we propose a new adaptive vector quantization (AVQ) algorithm based on rate-distortion optimization. The algorithm employs a new partial codeword updating (PCU) scheme that achieves rate-distortion performance superior to that of conventional AVQ algorithms using the full codeword updating (FCU) scheme. Instead of replacing the whole codeword, PCU-AVQ updates only the codeword components whose quantization error is higher than an optimal threshold. Additionally, a mathematical relation between the Lagrangian multiplier and the approximately optimal threshold is derived to reduce the rate-distortion cost computation. Experimental results show that the proposed PCU-AVQ algorithm indeed improves rate-distortion performance without much computational-complexity penalty. PCU-AVQ can be combined with transform coding and entropy coding for a higher compression ratio, and it can be widely applied in specific AVQ algorithms for image, video, and speech coding.
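The partial-update rule itself is simple to sketch: replace only the components whose error exceeds a threshold, so fewer update bits are spent than with full codeword updating. The threshold here is a free parameter; the paper derives it from the Lagrangian multiplier.

def pcu_update(codeword, vector, threshold):
    updated, flags = [], []
    for c, v in zip(codeword, vector):
        big_error = abs(v - c) > threshold
        flags.append(big_error)              # sent as side information
        updated.append(v if big_error else c)
    return updated, flags

print(pcu_update([10, 20, 30], [11, 27, 31], threshold=2))
# -> ([10, 27, 30], [False, True, False])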
ISBN (Print): 9781424413973; 1424414296; 1424413974
Context quantization is a technique for dealing with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols. A fast context quantizer design algorithm for conditioning binary symbols is presented and its complexity is analyzed; it is conjectured that this algorithm is optimal. The context quantization is performed in what may be perceived as a probability simplex space rather than in the space of context instances.
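A rough sketch of the idea for a binary source: raw contexts are mapped to a small number of conditioning states by grouping them on their estimated P(symbol = 1), which is the simplex-space view mentioned above. Uniform bins are used purely for illustration; the paper designs the quantizer to minimize adaptive code length.

def quantize_context(p_one, n_bins=4):
    return min(int(p_one * n_bins), n_bins - 1)

for p in (0.05, 0.30, 0.55, 0.95):
    print(p, "->", quantize_context(p))     # bins 0, 1, 2, 3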
Hybrid variable length coding (HVLC) was recently proposed as a novel entropy coding scheme for block-based image and video compression, motivated by the inefficiency of the conventional run-level variable length coding scheme in coding consecutive nonzero transform coefficients. In HVLC, each transform block is partitioned into low-frequency and high-frequency regions, and the coefficients in the two regions are coded separately by different schemes. The partition of the transform block was previously performed with a predefined, constant breakpoint. In this paper, we propose to partition the transform block using a variable breakpoint that adapts to the local context. We present a method to efficiently find one optimal breakpoint per transform block or per multi-block partition, and we show that using a variable breakpoint improves the efficiency of HVLC considerably compared with the constant-breakpoint case.
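Per-block breakpoint selection can be sketched as trying each candidate breakpoint, estimating the bits to code the low-frequency region coefficient-by-coefficient and the high-frequency region by run-level coding, and keeping the smallest total. Both bit estimators below are stand-ins; a real HVLC encoder would use its actual VLC tables.

def best_breakpoint(coeffs, candidates, low_bits, high_bits):
    return min(candidates,
               key=lambda b: low_bits(coeffs[:b]) + high_bits(coeffs[b:]))

# Toy estimators: dense coding ~4 bits/coeff; run-level ~6 bits per nonzero.
low = lambda cs: 4 * len(cs)
high = lambda cs: 6 * sum(1 for c in cs if c != 0)
print(best_breakpoint([9, 7, 5, 0, 2, 0, 0, 1], [2, 4, 6], low, high))  # -> 2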