High Efficiency Video Coding (HEVC) is the most recent jointly developed video coding standard of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). Although its basic architecture is built along the conventional hybrid block-based approach of combining prediction with transform coding, HEVC includes a number of coding tools with greatly enhanced coding-efficiency capabilities relative to those of prior video coding standards. Among these tools are new transform coding techniques that include support for dyadically increasing transform block sizes ranging from 4 x 4 to 32 x 32, the partitioning of residual blocks into variable-block-size transforms using a quadtree-based partitioning dubbed the residual quadtree (RQT), as well as properly designed entropy coding techniques for quantized transform coefficients of variable transform block sizes. In this paper, we describe these HEVC techniques for transform coding, with a particular focus on the RQT structure and the entropy coding stage, and demonstrate their benefit in terms of improved coding efficiency through experimental results.
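The RQT described above can be pictured as a recursive rate-distortion decision: at each node, either code the residual block with a single transform or split it into four quadrants and recurse. The following Python sketch is only illustrative; the cost function `rd_cost` and the size limits are hypothetical stand-ins for the actual HEVC mode decision, not the standard's procedure.

```python
def best_rqt(residual, size, min_size=4, rd_cost=None):
    """Illustrative residual quadtree (RQT) decision: return (cost, partition tree).

    `rd_cost(block)` is a hypothetical callable estimating the rate-distortion
    cost of coding `block` with a single size x size transform.
    """
    leaf_cost = rd_cost(residual)          # cost of one transform over the whole block
    if size // 2 < min_size:
        return leaf_cost, ('leaf', size)

    half = size // 2
    split_cost, children = 0.0, []
    for y in (0, half):
        for x in (0, half):
            sub = [row[x:x + half] for row in residual[y:y + half]]
            cost, tree = best_rqt(sub, half, min_size, rd_cost)
            split_cost += cost
            children.append(tree)

    if split_cost < leaf_cost:             # split only if four smaller transforms are cheaper
        return split_cost, ('split', children)
    return leaf_cost, ('leaf', size)
```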
Modern GPUs excel at parallel computations, so they are an interesting target for matrix transformations such as the DCT, a fundamental part of MPEG video coding algorithms. For a system that encodes synthetic video (e.g., computer-generated frames), this approach becomes even more appealing, since the images to encode are already in the GPU, eliminating the cost of transferring raw video from the CPU to the GPU. However, after a raw frame has been transformed and quantized by the GPU, the resulting coefficients must be reordered, entropy encoded, and framed into the resulting MPEG bitstream. These last steps are essentially sequential, and their straightforward GPU implementation is inefficient compared to CPU-based implementations. We present different approaches to implement part of these steps on the GPU, aiming for better usage of the memory bus and compensating for the suboptimal use of the GPU with gains in transfer time. We analyze three approaches to performing the zigzag scan and Huffman coding by combining GPU and CPU, and two approaches to assembling the results into the actual output bitstream, in both GPU and CPU memory. Our experiments show that optimising the amount of data transferred from GPU to CPU, implementing the last sequential compression steps on the GPU, and using a parallel fast-scan implementation of the zigzag scanning improve the overall performance of the system. Savings in transfer time outweigh the extra cost incurred on the GPU.
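As a concrete reference for the reordering step mentioned above, the zigzag scan of an 8x8 coefficient block can be written sequentially as follows. This is only the baseline CPU formulation, not the parallel GPU scan variant the paper evaluates.

```python
def zigzag_indices(n=8):
    """Return the (row, col) visiting order of the classic MPEG/JPEG zigzag scan."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  # odd anti-diagonals run top-right to bottom-left,
                                  # even ones run bottom-left to top-right
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def zigzag_scan(block):
    """Flatten a 2-D coefficient block into the 1-D zigzag order."""
    return [block[r][c] for r, c in zigzag_indices(len(block))]
```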
ISBN (print): 9780769535630
The aim of this study is to investigate the neuronal ensemble coding mechanism via entropy under fear conditioning. Entropy is a measure of uncertainty and of the amount of information in a sequence, which can quantify information as well as describe the ensemble coding characteristics. The original data were recorded at the anterior cingulate cortex (ACC) of rats using an implanted 16-channel array under two conditions: pre-training and testing. After high-pass filtering (300 Hz-7.5 kHz) and spike detection, spike trains were obtained via spike sorting. The entropy values were computed in a selected window (500 ms) with a moving step (1/4 of the window size). The results showed that the entropy values were lower in pre-training, with no ensemble activity; in the testing condition, the entropy values were higher and obvious ensemble activity was present. Conclusion: in pre-training, no ensemble activity was present, which indicated that memory had not been formed; in the testing condition, obvious ensemble activity was manifested, which showed that the fear memory had been built up. The effectiveness of dynamic entropy coding in characterizing the firing information of the neuronal ensemble was thus demonstrated.
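The windowed entropy measure described above can be made explicit. Below is a small, hedged sketch assuming the spike train has already been binned into a discrete symbol sequence; the 500 ms window and quarter-window step follow the abstract, while the 1 ms binning is a hypothetical choice.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits) of a discrete symbol sequence."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def sliding_entropy(symbols, window=500, step=125):
    """Entropy in a moving window; window/step are in bins (e.g., 1 ms bins)."""
    return [shannon_entropy(symbols[i:i + window])
            for i in range(0, len(symbols) - window + 1, step)]
```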
An efficient fine-granular scalable coding algorithm of 3-D mesh sequences for low-latency streaming applications is proposed in this work. First, we decompose a mesh sequence into spatial and temporal layers to support scalable decoding. To support the finest-granular spatial scalability, we decimate only a single vertex at each layer to obtain the next layer. Then, we predict the coordinates of decimated vertices spatially and temporally based on a hierarchical prediction structure. Finally, we quantize and transmit the spatio-temporal prediction residuals using an arithmetic coder. We propose an efficient context model for the arithmetic coding. Experimental results show that the proposed algorithm provides significantly better compression performance than the conventional algorithms, while supporting finer-granular spatial scalability.
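To make the prediction step concrete, the sketch below shows one plausible way a decimated vertex could be predicted spatially (from the average of its retained neighbors) and temporally (from its position in the previous frame), with the residual quantized for the arithmetic coder. The paper's exact hierarchical predictor is not specified here, so the blend weight and quantization step are assumptions.

```python
import numpy as np

def predict_vertex(neighbors_cur, vertex_prev, alpha=0.5):
    """Blend a spatial prediction (neighbor average in the current frame)
    with a temporal prediction (same vertex in the previous frame).
    `alpha` is a hypothetical blending weight."""
    spatial = np.mean(neighbors_cur, axis=0)
    return alpha * spatial + (1.0 - alpha) * vertex_prev

def quantize_residual(vertex, prediction, qstep=0.001):
    """Uniformly quantize the prediction residual; the decoder adds it back."""
    return np.round((np.asarray(vertex) - prediction) / qstep).astype(int)
```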
This letter presents a new lossless electron-beam layout data compression and decompression algorithm named LineDiff entropy. The algorithm is designed to facilitate high volume data transfer and massively-parallel decompression in electron-beam direct-write lithography systems. LineDiff entropy first compares consecutive electron-beam data scanlines and encodes the data based on change/no-change of pixel values and length of corresponding sequences. Then LineDiff entropy utilizes the entropy encoding technique to assign unique short codes to data of frequent occurrence. Because the code format is simple and effective, LineDiff entropy decompression can be achieved with limited computing resources. The benchmark results show that LineDiff entropy is capable of achieving excellent compression factors and very fast decompression speed.
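The scanline comparison step lends itself to a simple run-length formulation: mark each pixel as changed or unchanged relative to the previous scanline and emit (flag, run-length) pairs. The sketch below captures only that first stage under stated assumptions; the assignment of short codes to frequently occurring patterns is omitted.

```python
def scanline_runs(prev_line, cur_line):
    """Encode `cur_line` against `prev_line` as (changed?, run_length) pairs.
    Changed runs would additionally carry the new pixel values in a full codec."""
    runs = []
    run_flag, run_len = None, 0
    for p, c in zip(prev_line, cur_line):
        changed = (p != c)
        if changed == run_flag:
            run_len += 1
        else:
            if run_flag is not None:
                runs.append((run_flag, run_len))
            run_flag, run_len = changed, 1
    if run_flag is not None:
        runs.append((run_flag, run_len))
    return runs
```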
Adaptive predictor combination (APC) is a framework for combining multiple predictors for lossless image compression and is often at the core of state-of-the-art algorithms. In this paper, a Bayesian parameter estimation scheme is proposed for APC. Extensive experiments using natural, medical, and remote sensing images of 8-16 bit/pixel have confirmed that the predictive performance is consistently better than that of APC for any combination of fixed predictors and with only a marginal increase in computational complexity. The predictive performance improves with every additional fixed predictor, a property that is not found in other predictor combination schemes studied in this paper. Analysis and simulation show that the performance of the proposed algorithm is not sensitive to the choice of hyper-parameters of the prior distributions. Furthermore, the proposed prediction scheme provides a theoretical justification for the error correction stage that is often included as part of a prediction process.
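As background for the combination framework, the generic APC idea can be sketched as a weighted mixture of fixed predictors, with weights adapted from each predictor's recent error. This is not the paper's Bayesian estimator, only a minimal stand-in showing where such an estimator would plug in.

```python
import numpy as np

def combine_predictions(preds, past_errors, eps=1e-6):
    """Weight each fixed predictor inversely to its accumulated squared error.
    The Bayesian scheme in the paper would replace this heuristic weighting."""
    weights = 1.0 / (np.asarray(past_errors, dtype=float) + eps)
    weights /= weights.sum()
    return float(np.dot(weights, preds))
```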
This paper describes a low-complexity, high-efficiency, lossy-to-lossless 3D image coding system. The proposed system is based on a novel probability model for the symbols that are emitted by bitplane coding engines. This probability model uses partially reconstructed coefficients from previous components together with a mathematical framework that captures the statistical behavior of the image. An important aspect of this mathematical framework is its generality, which makes the proposed scheme suitable for different types of 3D images. The main advantages of the proposed scheme are competitive coding performance, low computational load, very low memory requirements, straightforward implementation, and simple adaptation to most sensors. (C) 2013 Elsevier Inc. All rights reserved.
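For readers unfamiliar with bitplane coding engines, the sketch below shows how a block of quantized coefficients decomposes into bitplanes from most to least significant. The cross-component probability model the paper proposes would additionally condition the coder on partially reconstructed coefficients of the previous component, which is not reproduced here.

```python
import numpy as np

def bitplanes(coeffs, num_planes=None):
    """Split |coeffs| into bitplanes, most significant first; signs are kept aside."""
    mags = np.abs(np.asarray(coeffs, dtype=np.int64))
    if num_planes is None:
        num_planes = max(int(mags.max()).bit_length(), 1)
    return [((mags >> p) & 1) for p in range(num_planes - 1, -1, -1)]
```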
ISBN (print): 9781479902880
Entropy coding is a key part of all advanced video compression schemes. Context-adaptive binary arithmetic coding (CABAC) is the entropy coding used in the H.264/MPEG-4 AVC and H.265/HEVC standards. Probability estimation is the key factor in CABAC performance efficiency. In this paper, a high-accuracy probability estimation for CABAC is presented. This technique is based on multiple estimations using different models. The proposed method was efficiently realized in integer arithmetic. High-precision probability estimation for CABAC provides up to 1.4% BD-rate gain.
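The multiple-estimator idea can be illustrated with two exponential estimators updated at different rates and averaged, using integer arithmetic as the abstract suggests. The state width and shift constants below are assumptions for illustration, not the paper's actual parameters.

```python
def update_estimate(p_state, bin_val, shift):
    """One integer exponential update of a probability state in [0, 2**15)."""
    target = (1 << 15) if bin_val else 0
    return p_state + ((target - p_state) >> shift)

def dual_rate_probability(p_fast, p_slow, bin_val, fast_shift=4, slow_shift=7):
    """Two estimators with different adaptation rates; their average drives the coder."""
    p_fast = update_estimate(p_fast, bin_val, fast_shift)
    p_slow = update_estimate(p_slow, bin_val, slow_shift)
    return p_fast, p_slow, (p_fast + p_slow) >> 1
```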
ISBN (print): 9781479900152
A scalable audio coding method is proposed using Quantization Index Modulation, a technique borrowed from watermarking. Some of the information of each layer's output is embedded (watermarked) in the previous layer. This approach leads to a saving in bitrate while keeping the distortion almost unchanged, making the scalable coding system more efficient in terms of rate-distortion. The results show that the proposed method outperforms scalable audio coding based on reconstruction-error quantization, which is used in practical systems such as MPEG-4 AAC.
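Quantization Index Modulation, mentioned above, embeds a bit by choosing between two interleaved quantizers. A minimal scalar sketch with a hypothetical step size looks like this; the paper applies the idea to embed part of an enhancement layer into the coarser layer's quantization indices.

```python
def qim_embed(x, bit, delta=0.1):
    """Quantize x onto the lattice selected by `bit` (offset by delta/2 for bit=1)."""
    offset = 0.0 if bit == 0 else delta / 2.0
    return delta * round((x - offset) / delta) + offset

def qim_extract(y, delta=0.1):
    """Recover the embedded bit as the lattice closest to y."""
    d0 = abs(y - qim_embed(y, 0, delta))
    d1 = abs(y - qim_embed(y, 1, delta))
    return 0 if d0 <= d1 else 1
```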
ISBN (print): 9781479920846
While prefetching schemes have been used at different levels of computing, research has not gone far beyond assuming a Markovian model and exploiting localities in various applications. In this paper, we derive two lower bounds on information gain for prefetch systems and approximately visualize them in terms of decision-tree learning concepts. With the lower bounds on information gain, we can outline the minimum capacity required for a prefetch system to improve performance in response to the probability model of a data set. By visualizing the analysis of information gain, we also conclude that performing entropy coding on the attributes of a data set and making prefetching decisions based on the encoded attributes can help lower the required information-tracking capacity.
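Information gain here is the standard decision-tree quantity: the reduction in outcome entropy after splitting on an attribute. A small self-contained computation, with a hypothetical access trace as data, is shown below.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(records, attr_index, label_index=-1):
    """Label entropy minus the attribute-conditioned (weighted) entropy."""
    labels = [r[label_index] for r in records]
    base = entropy(labels)
    groups = {}
    for r in records:
        groups.setdefault(r[attr_index], []).append(r[label_index])
    cond = sum(len(g) / len(records) * entropy(g) for g in groups.values())
    return base - cond

# Hypothetical access trace: (last_block, next_block) pairs
trace = [("A", "B"), ("A", "B"), ("A", "C"), ("D", "C"), ("D", "C")]
print(information_gain(trace, attr_index=0))
```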