ISBN (print): 1424401607
In this paper, the authors present a high-performance, low-power hardware architecture of an entropy coder for the H.264/AVC baseline profile. The authors implemented the architecture with the Synopsys Design Compiler and an SMIC 0.13 μm cell library. The results show that the design needs less area than prior work and can operate at a frequency of 250 MHz. In the worst case, it needs 1095 cycles to code a macroblock and can process 2306 QCIF (176×144) frames per second.
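As a rough consistency check on the figures above: a QCIF frame (176×144) contains 11 × 9 = 99 macroblocks, so at a worst case of 1095 cycles per macroblock one frame takes about 99 × 1095 = 108,405 cycles; a 250 MHz clock then yields roughly 250,000,000 / 108,405 ≈ 2306 frames per second, matching the quoted throughput.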
Variable length coding (VLC) is a widely used technique for data compression. Exp-Golomb VLC is adopted for entropy coding in H.264, the latest video coding standard. In this paper, we propose an efficient architecture for an Exp-Golomb VLC encoder in which the codewords are constructed using a modified coding number technique. As a result, our proposed encoder can be implemented with less area than other VLC encoders. A 0.18 μm TSMC cell library is used to synthesize the encoder. The area of the proposed architecture is 615 gates and the critical path delay is less than 6 ns. Therefore, it is suitable for low-cost multimedia applications.
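For reference, the standard order-0 Exp-Golomb mapping used in H.264 codes an unsigned value v as (b - 1) zeros followed by the b-bit binary representation of v + 1, where b is the bit length of v + 1. The short Python sketch below shows only this standard mapping; the paper's modified coding number technique is not detailed in the abstract and is not reproduced here.

    def exp_golomb_encode(v: int) -> str:
        """Order-0 Exp-Golomb codeword for an unsigned value v >= 0 (illustrative sketch)."""
        x = v + 1                      # shift so that v = 0 maps to the codeword "1"
        b = x.bit_length()             # number of significant bits of v + 1
        return "0" * (b - 1) + format(x, "b")   # (b - 1) leading zeros, then v + 1 in binary

    # Examples: 0 -> "1", 1 -> "010", 2 -> "011", 3 -> "00100", 4 -> "00101"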
Fine-grain scalable audio coding approaches, such as PLEAC, are flexible, can scale from low rates to lossless, and can allow the perceptual model to be updated in the decoder as additional bits are provided. They provide good perceptual quality at mid to high bit rates; however, they cannot establish the perceptual model rapidly enough to provide good quality at very low bit rates. In this work, we combine a fine-grain scalable audio codec with a good base layer audio codec. We use Windows Media Audio 10 Professional (WMA 10 Pro) as the base layer coder, and use the PLEAC codec as the fine-grain enhancement codec. The key to this method is to use a base layer codec that provides a very good approximation to the necessary perceptual codec, which then allows the fine-grain scalable audio coder to use a simple loudness model on the difference signal. The combined codec can provide continuous scalability from the base layer codec rate all the way to lossless.
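The layered structure described above reduces, in its simplest form, to coding the base layer, reconstructing it, and passing only the difference signal to the fine-grain scalable coder. A minimal Python sketch under that reading, with base_encode, base_decode and fgs_encode as hypothetical placeholders for the actual WMA 10 Pro and PLEAC codecs:

    def encode_layered(signal, base_encode, base_decode, fgs_encode):
        """Two-layer encoding: base layer plus fine-grain scalable residual (illustrative only)."""
        base_bits = base_encode(signal)          # base layer bitstream (e.g. WMA 10 Pro)
        base_approx = base_decode(base_bits)     # decoder-side reconstruction of the base layer
        residual = [s - a for s, a in zip(signal, base_approx)]
        enh_bits = fgs_encode(residual)          # fine-grain enhancement (e.g. PLEAC), truncatable
        return base_bits, enh_bits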
This paper proposes a method of lossless image coding with the aid of lossy image coding, aiming at improved compression efficiency. We apply a form of embedded coding to the large-magnitude coefficients in the wavelet transform domain; the remaining wavelet coefficients are encoded by a context-based entropy coder. The resulting compression efficiency slightly outperforms that of JPEG-LS.
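The split described above can be pictured as a simple magnitude partition of the wavelet coefficients: coefficients at or above a threshold go to the embedded coder, the rest to the context-based entropy coder. The Python sketch below is illustrative only; the threshold and the two coder back-ends are assumptions, not the paper's actual parameters.

    def partition_coefficients(coeffs, threshold):
        """Split wavelet coefficients by magnitude (rough illustration of the scheme above)."""
        large, small = [], []
        for idx, c in enumerate(coeffs):
            (large if abs(c) >= threshold else small).append((idx, c))
        return large, small   # 'large' -> embedded coding, 'small' -> context-based entropy coding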
By grouping the common prefixes of a Huffman tree, instead of using the commonly adopted single-side growing Huffman tree (SGH-tree), we construct a memory-efficient Huffman table on the basis of an arbitrary-side growing Huffman tree (AGH-tree) to speed up Huffman decoding. Simulation results show that, in Huffman decoding, an AGH-tree based Huffman table is 2.35 times faster than Hashemian's method (an SGH-tree based one) and needs only one-fifth of the corresponding memory size. In summary, a novel Huffman table construction scheme is proposed in this paper that outperforms existing construction schemes in both decoding speed and memory usage.
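The general idea of grouping a common prefix so that several code bits are consumed per table lookup can be sketched as below. This is a generic grouped-lookup decoder in Python; the specific AGH-tree construction is not detailed in the abstract and is not reproduced here.

    def decode_grouped(bits, root):
        """Table-driven Huffman decoding that reads a fixed-width group of bits per lookup.
        A table entry is either ('leaf', symbol, used_bits) or ('node', subtable, subwidth).
        Assumes the bit string is padded to full groups. Generic sketch, not the AGH-tree itself."""
        out, pos = [], 0
        table, width = root
        while pos + width <= len(bits):
            kind, payload, extra = table[bits[pos:pos + width]]
            if kind == "leaf":
                out.append(payload)          # emit the symbol
                pos += extra                 # consume only the bits belonging to the codeword
                table, width = root          # restart at the root table
            else:
                pos += width                 # the whole group was a codeword prefix
                table, width = payload, extra
        return out

    # Example with codes a -> "0", b -> "10", c -> "11", grouped two bits at a time:
    root = ({"00": ("leaf", "a", 1), "01": ("leaf", "a", 1),
             "10": ("leaf", "b", 2), "11": ("leaf", "c", 2)}, 2)
    # decode_grouped("01011", root) returns ['a', 'b', 'c']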
ISBN (print): 1424408989
Context-based adaptive variable length coding (CAVLC) is the entropy coding method of the H.264/AVC video codec. Since CAVLC is highly context-adaptive and uses a block-based context formation, high coding efficiency is achieved. However, its high complexity causes various difficulties for a full-hardware implementation. This paper presents a high-performance hardware architecture for CAVLC. The proposed architecture is implemented in an FPGA device and verified by RTL simulations. The implementation results show that the proposed architecture encodes a 4×4 block per 16 clock cycles, and achieves real-time processing of 1920×1088 video at 30 fps with a 100 MHz clock.
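A quick check of the quoted throughput: a 1920×1088 frame contains 1920 × 1088 / 16 = 130,560 4×4 blocks, so at 16 cycles per block one frame takes about 2.09 million cycles, and 30 frames per second require roughly 62.7 million cycles per second, comfortably within the stated 100 MHz clock.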
The paper introduces a new method for coding the spectral modified discrete cosine transform (MDCT) coefficients of an audio signal. A lattice quantizer is used for each spectral sub-band, with dimension equal to the size of the respective sub-band. The information that needs to be encoded consists of the lattice codevector indexes, side information giving the number of bits on which the indexes are represented, and the integer exponents of the sub-band scaling factors. The nature of the side information, together with the parameterization of the quantization resolution, allows the method to be used over a large range of bit rates, e.g. for 44.1 kHz sampled mono files, from 128 kbit/s down to 16 kbit/s. Subjective listening tests show that the proposed method performs similarly to the advanced audio coding (AAC) codec at high bit rates (128 kbit/s down to 64 kbit/s) and clearly better at lower bit rates.
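The abstract does not specify which lattice is used. As a purely illustrative stand-in, the Python sketch below quantizes one sub-band on the integer lattice Z^n after scaling by a power-of-two exponent, mirroring the kind of side information described (a per-sub-band integer exponent plus the codevector index); the actual lattice and index coding of the paper are not reproduced.

    def quantize_subband(coeffs, exponent):
        """Quantize one MDCT sub-band on the integer lattice Z^n after scaling (illustrative only)."""
        scale = 2.0 ** exponent
        return [round(c / scale) for c in coeffs]   # nearest point of Z^n is the codevector

    def dequantize_subband(lattice_point, exponent):
        scale = 2.0 ** exponent
        return [p * scale for p in lattice_point]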
In this paper, a low-power architecture for realizing the CAVLC decoder is proposed. Traditional VLC decoding algorithms search one level of the Huffman coding tree per operation, so the throughput is limited by the tree depth. The CAVLC algorithm takes advantage of the trend among the AC coefficients in each block to predict the next codeword; this prediction mechanism can significantly improve the decoding efficiency. Hence, we suggest two efficient approaches, table partitioning and prefix predecoding, to reduce the power consumption of VLC decoding. The proposed low-power CAVLC decoder architecture achieves the real-time requirement for the 720p HD (1280×720) format with a clock of 125 MHz. In simulations, the proposed architecture reduces power consumption by about 25% compared with a counterpart without the low-power design.
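Neither technique is described in detail in the abstract. A generic illustration of prefix predecoding with a partitioned table is sketched below in Python: the leading-zero count of the codeword is decoded first and used to select a small sub-table, so only one partition of the full VLC table is consulted per symbol. The "leading zeros, terminating one, suffix" code structure is an assumption made for the sketch, not the paper's exact tables.

    def decode_one(bits, subtables):
        """Prefix-predecoded VLC lookup: the leading-zero count selects a small sub-table,
        which is then indexed by the suffix bits. Returns (symbol, bits consumed)."""
        zeros = 0
        while zeros < len(bits) and bits[zeros] == "0":
            zeros += 1                               # prefix predecoding: count the leading zeros
        table, suffix_len = subtables[zeros]         # table partitioning: one small partition per prefix
        start = zeros + 1                            # skip the terminating '1' bit
        suffix = bits[start:start + suffix_len]
        return table[suffix], start + suffix_len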
Hierarchical mesh representation and mesh simplification have been addressed in computer graphics for adaptive level-of-detail rendering of 3D objects. In this paper, by using a new simplification method to design hierarchical 3D meshes such that each mesh level has Delaunay topology, we obtain not only meshes with desired geometric properties but also efficient compression of the mesh data. The hierarchical compression technique is based on a nearest-neighbor ordering of the mesh node points. The baseline is entropy coding of linear predictions between nearest-neighbor node coordinates. Vector quantization is also employed to efficiently exploit the statistical dependences between the prediction error vectors of a node. The compression method allows progressive transmission and quality scalability.
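The baseline described above, entropy coding of linear prediction residuals between nearest-neighbor node coordinates, amounts in its simplest form to visiting the nodes in nearest-neighbor order and coding the coordinate differences. A minimal Python sketch of that residual computation; the ordering and the entropy-coder back-end are placeholders, and the paper's full hierarchical scheme is not reproduced.

    def prediction_residuals(ordered_nodes):
        """Coordinate residuals between consecutive nodes in a nearest-neighbor ordering.
        Each node is an (x, y, z) tuple; the residuals would be fed to the entropy coder."""
        first = ordered_nodes[0]                     # the first node is transmitted as-is
        residuals = []
        prev = first
        for node in ordered_nodes[1:]:
            residuals.append(tuple(c - p for c, p in zip(node, prev)))
            prev = node
        return first, residuals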
This paper presents a novel context-based adaptive variable length coding (CAVLC) architecture based on a split and shared VLC lookup table technique. The architecture is prototyped in Verilog HDL, simulated, and synthesized for a Xilinx Virtex-II FPGA. The experimental results show that the proposed architecture is capable of processing CIF frame sequences in real time and is smaller than any of the real-time architectures proposed so far. The maximum speed of the core is around 60 MHz.