Such codes are described using dual leaf-linked trees: one specifying the parsing of the source symbols into source words, and the other specifying the formation of code words from code symbols. Compression exceeds entropy by the informational divergence between source words and code words, divided by the expected source-word length. The asymptotic optimality of Tunstall or Huffman codes derives from bounding this divergence while the expected source-word length is made arbitrarily large. A heuristic extension scheme is not only asymptotically optimal but also acts to reduce the divergence by retaining those source words that are well matched to their corresponding code words.
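The redundancy relation above can be written out explicitly. In the notation below (ours, not the abstract's), P is the distribution over source words w, ℓ(c(w)) is the length of the code word assigned to w, and Q(w) = 2^{-ℓ(c(w))} is the distribution implied by the code-word lengths:

```latex
\bar{R} - H \;=\; \frac{D(P \,\|\, Q)}{\mathbb{E}_P[L]},
\qquad Q(w) = 2^{-\ell(c(w))},
```

where \bar{R} is the code rate per source symbol, H the source entropy rate, and \mathbb{E}_P[L] the expected source-word length. Driving \mathbb{E}_P[L] up while keeping D(P\|Q) bounded is exactly the mechanism behind the asymptotic-optimality argument described above.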
ISBN (print): 9781424425983
This paper presents a unified transform unit that can efficiently perform the forward integer transform, quantization, dequantization, inverse integer transform, and Hadamard transform in H.264/AVC. To reduce hardware cost, the proposed architecture uses shifters and adder/subtractors instead of multipliers for quantization and dequantization, and reuses the 1-D transform unit for all supported transforms. Hardware utilization is maximized to achieve the required performance at low cost. It takes about 250 cycles to perform the forward integer transform, quantization, dequantization, and inverse integer transform for a macroblock, plus an additional 35 cycles for the forward and inverse Hadamard transforms. The architecture can process about 6,000 QCIF frames at 150 MHz. The unified transform unit is well suited to mobile devices.
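The multiplier-free datapath described above can be illustrated with the standard H.264/AVC 4x4 forward integer transform, whose 1-D stage needs only additions, subtractions, and a one-bit shift. The sketch below is ours, not the paper's RTL; it also shows how a single 1-D unit can be reused for rows and then columns:

```python
def core_1d(x0, x1, x2, x3):
    # H.264/AVC 4-point forward integer transform butterfly:
    # additions, subtractions, and one-bit shifts only -- no multipliers.
    a, b = x0 + x3, x1 + x2
    c, d = x1 - x2, x0 - x3
    return a + b, (d << 1) + c, a - b, d - (c << 1)

def forward_4x4(block):
    # The same 1-D unit is reused: first on every row, then on every column.
    rows = [core_1d(*r) for r in block]
    return [list(t) for t in zip(*[core_1d(*c) for c in zip(*rows)])]
```

In the standard, the transform's post-scaling is folded into the quantization step, which is why the transform core itself stays multiplier-free.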
ISBN (print): 0780385780
We describe a novel multimedia security framework based on a modification of the arithmetic coder, which is used by most international image and video coding standards as the entropy coding stage. In particular, we propose a randomized arithmetic coding paradigm that achieves encryption by randomly swapping the intervals of the least and most probable symbols in arithmetic coding; moreover, we describe an implementation tailored to the JPEG 2000 standard. The proposed approach proves robust against attempts to discover the key, and supports very flexible procedures for inserting redundancy at the codeblock level, making it possible to perform total and selective encryption, conditional access, and encryption of regions of interest.
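The interval-swapping idea can be sketched for a binary memoryless source: a key seeds a pseudo-random swap decision per symbol, and an encoder/decoder pair sharing the key stays synchronized. This is an illustrative toy with floating-point intervals, not the paper's JPEG 2000 implementation (which randomizes an integer-arithmetic coder):

```python
import random

def _step(low, high, p0, bit, swap):
    # Split [low, high) in proportion p0 : 1-p0; `swap` exchanges which
    # sub-interval belongs to '0' and which to '1' (the keyed randomization).
    r = high - low
    if not swap:
        mid = low + r * p0
        return (low, mid) if bit == 0 else (mid, high)
    mid = low + r * (1.0 - p0)
    return (mid, high) if bit == 0 else (low, mid)

def rac_encode(bits, p0, key):
    rng = random.Random(key)            # keyed swap sequence
    low, high = 0.0, 1.0
    for b in bits:
        low, high = _step(low, high, p0, b, rng.random() < 0.5)
    return (low + high) / 2             # any point of the final interval

def rac_decode(x, n, p0, key):
    rng = random.Random(key)            # same key => same swap sequence
    low, high = 0.0, 1.0
    out = []
    for _ in range(n):
        swap = rng.random() < 0.5
        l0, h0 = _step(low, high, p0, 0, swap)
        bit = 0 if l0 <= x < h0 else 1
        out.append(bit)
        low, high = _step(low, high, p0, bit, swap)
    return out
```

A decoder with the wrong key regenerates a different swap sequence and desynchronizes immediately, which is the source of the encryption effect.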
In 1985, Maxted et al. conjectured that, for a geometric source, the stable code has the best error-recovery performance under bit inversion among all Huffman codes for the source, while the unstable code has the worst. Ten years later, Swaszek et al. extended this conjecture, without proof, to sources with certain probability mass functions. In this paper, we prove the correctness of the extended conjecture. Our proof provides a novel mathematical technique for establishing the optimality of a variable-length code in the sense of error-recovery capability. Furthermore, our result offers some insight into the working mechanism of the suffix condition, which has been widely used by heuristic algorithms to find error-resilient codes.
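Error-recovery performance of a Huffman code can be made concrete by flipping one encoded bit and counting how many decoded symbols are disturbed before the decoder falls back into synchronization. The code table and metric below are our illustration, not the construction analyzed in the paper:

```python
CODE = {0: "0", 1: "10", 2: "110", 3: "111"}   # a Huffman code for a geometric-like source
INV = {v: k for k, v in CODE.items()}

def huff_encode(symbols):
    return "".join(CODE[s] for s in symbols)

def huff_decode(bits):
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in INV:
            out.append(INV[cur])
            cur = ""
    return out

def error_span(symbols, flip_pos):
    # Flip one bit, decode, and report how many leading symbols are
    # disturbed before the corrupted decoding re-aligns with the clean
    # one (smaller = better error recovery).
    bits = huff_encode(symbols)
    bad = bits[:flip_pos] + ("1" if bits[flip_pos] == "0" else "0") + bits[flip_pos + 1:]
    good, corr = huff_decode(bits), huff_decode(bad)
    k = 0
    while k < min(len(good), len(corr)) and good[-1 - k] == corr[-1 - k]:
        k += 1
    return len(good) - k
```

Averaging this span over symbol sequences and flip positions gives the kind of error-recovery figure of merit the conjecture compares across Huffman codes for the same source.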
This paper presents a high-throughput hardware architecture for the forward transforms module of the H.264/AVC video coding standard. The designed architecture reaches 303 MHz when mapped to a Xilinx Virtex II Pro FPGA and is able to process 4.9 billion samples per second. This throughput allows the architecture to be used in H.264/AVC codecs targeting real-time processing of high-resolution video. The high throughput is also important for reducing intra-prediction coding time, as explained in this paper. The architecture is able to process 1,561 HDTV 1080p frames per second. Comparing this design with previously published work, we conclude that it offers the best throughput among all reported solutions.
Scalable shape encoding is an important step toward highly scalable object-based video coding. In this paper, a new scalable vertex-based shape intra-coding scheme is described. To improve encoding performance, we propose a new vertex selection scheme that reduces the number of approximation vertices. We also propose a new vertex encoding method in which information from the coarser layers and statistical entropy coding are exploited for high encoding efficiency. Experimental results show that the proposed scheme provides a 25-60% gain over the scalable encoding method of Buhan Jordan et al. (1998). For some sequences, it achieves a 5-10% bit-rate gain over the conventional non-scalable vertex-based coding method (O'Connell (1997)), at the price of additional complexity.
We present a scheme for simple oversampled analog-to-digital conversion with single-bit quantization and exponential error decay in the bit rate. The scheme is based on recording the positions of zero-crossings of the input signal added to a deterministic dither function. This information can be represented in a manner that requires only a logarithmic increase of the bit rate with the oversampling factor r. The bandlimited input signal can be reconstructed from this information locally, with a mean squared error inversely proportional to the square of the oversampling factor, MSE = O(1/r^2). Consequently, the mean squared error of this scheme exhibits exponential decay in the bit rate.
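The recording step can be sketched numerically: add a strong deterministic dither to the bandlimited input and note where the sum changes sign. The sawtooth dither, toy signal, and grid below are our illustrative choices, not the paper's construction:

```python
import math

def sign_changes(f, t0, t1, n):
    # Indices of the uniform grid cells in [t0, t1] where f changes sign --
    # the zero-crossing-position information the scheme records.
    ts = [t0 + (t1 - t0) * k / n for k in range(n + 1)]
    vals = [f(t) for t in ts]
    return [k for k in range(n) if (vals[k] < 0) != (vals[k + 1] < 0)]

x = lambda t: 0.3 * math.sin(2 * math.pi * t)    # toy bandlimited input, |x| <= 0.3
d = lambda t: 2.0 * ((8 * t) % 1.0) - 1.0        # sawtooth dither, 8 periods, swing +/-1
crossings = sign_changes(lambda t: x(t) + d(t), 0.0, 1.0, 1024)
```

Because the dither's swing exceeds the signal's amplitude, every dither period is guaranteed a crossing, and the crossing positions shift with x(t); those shifts are what carry the signal information.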
ISBN (print): 0780388747
A classified context quantization (CCQ) technique is proposed to code basic image VQ indexes in the setting of high-order context models. The context model of an index is first classified into one of three classes according to the smoothness of the image area it represents. The index is then coded with a context quantizer designed for that class. Experimental results show that CCQ achieves about a three percent improvement over the previous best results of image VQ by conditional entropy coding of VQ indexes (CECOVI), and does so at lower computational cost.
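A minimal sketch of the two-stage idea, with hypothetical thresholds and quantizers (the paper's class boundaries are not given): first classify the causal context of a VQ index by local smoothness, then condition the entropy coder on a class-specific quantized context state.

```python
def classify(context):
    # Stage 1: class by smoothness of the causal neighbourhood of VQ
    # indexes (thresholds are hypothetical).
    spread = max(context) - min(context)
    if spread == 0:
        return 0          # flat region
    if spread <= 2:
        return 1          # smooth region
    return 2              # textured region

# Stage 2: one context quantizer per class, mapping the high-order
# context to a small number of conditioning states for entropy coding
# (the mappings below are placeholders, not the designed quantizers).
QUANTIZERS = {
    0: lambda ctx: 0,                            # flat: one state suffices
    1: lambda ctx: min(ctx) % 4,                 # smooth: a few states
    2: lambda ctx: (sum(ctx) // len(ctx)) % 16,  # textured: more states
}

def conditioning_state(context):
    c = classify(context)
    return c, QUANTIZERS[c](context)
```

Splitting by class first keeps each quantizer small, which is where the lower computational cost relative to a single monolithic high-order model comes from.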
A new progressive transmission scheme using spline biorthogonal wavelet bases is proposed in this paper. First, several wavelet bases are compared with the spline biorthogonal wavelet bases. By exploiting the properties of this set of wavelet bases, a fast algorithm involving only additions and subtractions is developed. Due to the multiresolutional nature of the wavelet transform, this scheme is compatible with hierarchical-structured rendering algorithms. The formula for reconstructing the functional values in a continuous volume space is given in a simple polynomial form. Lossless compression is possible, even when using floating-point numbers. When the algorithm is applied to data from a global ocean model, the lossless compression ratio is about 1.5:1. Even with a compression ratio of 50:1, the reconstructed data is still of good quality. Finally, the reconstructed data is rendered using various visualization algorithms and the results are demonstrated.
JPEG2000, an international standard for still image compression, has three main features: 1) high coding performance; 2) unified lossless/lossy compression; and 3) resolution and SNR scalability. Resolution scalability is especially promising given the popularity of super high definition (SHD) images such as digital cinema. Unfortunately, the resolution scalability of its current implementation is restricted to powers of two. In this paper, we introduce non-octave scalable coding with a motion-compensated interframe wavelet transform. Using the proposed algorithm, images at rational scales can be decoded from a compressed code stream. Experiments on SHD digital cinema test sequences show the effectiveness of the proposed algorithm.