In this paper, a Context-based 2D Variable Length Coding (C2DVLC) method for coding the transformed residuals in the AVS video coding standard is presented. C2DVLC has two distinguishing features: the use of multiple 2D-VLC tables and the use of simple Exponential-Golomb codes. It employs context-based adaptive multiple-table coding to exploit the statistical correlation between the DCT coefficients of a block for higher coding efficiency, and applies Exp-Golomb codes to the pairs of zero-run-length and nonzero coefficient for a lower storage requirement. C2DVLC is a low-complexity coder in terms of both computational time and memory requirement. Experimental results show that C2DVLC gains 0.34 dB on average for the tested videos compared with a traditional 2D-VLC coding method such as that used in MPEG-2, and achieves coding efficiency similar to CAVLC in H.264/AVC.
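To illustrate the Exp-Golomb codes the abstract refers to, here is a minimal sketch of a 0th-order Exp-Golomb encoder for non-negative integers; the function name is ours, and the mapping of (run, level) pairs onto these integers as done in C2DVLC is not reproduced.

```python
def exp_golomb_encode(n: int) -> str:
    """Encode a non-negative integer with a 0th-order Exp-Golomb code.

    The codeword is [M leading zeros][binary of n + 1], where
    M = floor(log2(n + 1)), so the total length is 2*M + 1 bits.
    """
    assert n >= 0
    binary = bin(n + 1)[2:]           # binary representation of n + 1
    prefix = "0" * (len(binary) - 1)  # M leading zeros
    return prefix + binary

# First few codewords of the 0th-order code:
# 0 -> '1', 1 -> '010', 2 -> '011', 3 -> '00100', 4 -> '00101'
```

Because the codeword structure is regular, no stored table is needed, which is exactly the storage advantage the abstract claims over conventional 2D-VLC tables.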
ISBN:
(print) 9781538681671
The Exponential-Golomb code and Context Adaptive Variable Length Coding are the entropy coding tools of the H.264/AVC baseline profile. Since Exp-Golomb is based on variable-length codes with a regular construction, strong error resilience is achieved. However, its hardware implementation poses a challenge due to the logarithmic operation. This paper presents a low-cost hardware architecture for Exp-Golomb coding. Furthermore, a "shift number counting method" is proposed to solve the key problem of the logarithmic operation. This method significantly simplifies the hardware architecture and reduces the area cost. The proposed design has been synthesized with Xilinx ISE on Virtex IV and functionally verified by RTL simulations. The results show that the proposed design occupies 126 LUT slices and reaches a clock frequency of about 250 MHz.
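The paper's hardware details are not given in the abstract, but the idea of replacing the logarithm with shift counting can be sketched in software: counting right-shifts until a value reaches zero yields its bit length, from which the Exp-Golomb prefix length follows without any `log` operation. The function names below are ours.

```python
def shift_count(value: int) -> int:
    """Count right-shifts until the value reaches zero.

    For v >= 1 this equals floor(log2(v)) + 1 (the bit length), so the
    Exp-Golomb prefix length M = floor(log2(n + 1)) is obtained without
    a logarithm as shift_count(n + 1) - 1.
    """
    shifts = 0
    while value > 0:
        value >>= 1
        shifts += 1
    return shifts

def exp_golomb_length(n: int) -> int:
    """Total 0th-order Exp-Golomb codeword length 2*M + 1, no math.log."""
    m = shift_count(n + 1) - 1
    return 2 * m + 1
```

In hardware the same quantity maps naturally onto a priority encoder or leading-zero counter, which is presumably why shift counting is cheaper than a logarithm circuit.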
ISBN:
(print) 9780819492791
Ultraspectral sounder data consist of two-dimensional arrays of pixels, each containing thousands of channels. In the retrieval of geophysical parameters, the sounder data are sensitive to noise, so lossless compression is highly desirable for storing and transmitting the huge data volume. The prediction-based lower triangular transform (PLT) features the same de-correlation and coding-gain properties as the Karhunen-Loeve transform (KLT), but at a lower design and implementation cost. In previous work, we showed that PLT has the perfect-reconstruction property, which allows its direct use for lossless compression of sounder data. However, PLT compression is time-consuming. To speed up the PLT encoding scheme, we recently exploited the parallel compute power of modern graphics processing units (GPUs) and implemented several important transform stages on the GPU. In this work, we further incorporate a GPU-based zero-order entropy coder for the last stage of compression. Experimental results show that our full GPU implementation of the PLT encoding scheme achieves a speedup of 88x over the original full CPU implementation.
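As a point of reference for the "zero-order" coder in the last stage, the rate such a coder approaches is the zero-order (memoryless) entropy of the coefficient stream, i.e. each symbol is coded using only its marginal probability. A minimal sketch (the function name is ours, and the actual GPU coder is of course far more involved):

```python
import math
from collections import Counter

def zero_order_entropy(symbols) -> float:
    """Zero-order (memoryless) entropy of a symbol stream, in bits per
    symbol: H = -sum(p_i * log2(p_i)) over the empirical frequencies.
    This is the lower bound a zero-order entropy coder targets."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A balanced two-symbol source has entropy 1 bit/symbol;
# a constant source has entropy 0.
```

Because each symbol is handled independently, this stage parallelizes well across GPU threads, which fits the speedup the paper reports.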
Entropy coding is the essential block of transform coders that losslessly converts the quantized transform coefficients into a bit-stream suitable for transmission or storage. Entropy coders usually exhibit less compression capability than lossy coding techniques, and over the past decade several efforts have been made to improve their compression capability. Recently, a symbol reduction technique (SRT) based Huffman coder was developed to achieve higher compression than existing entropy coders at a complexity similar to that of the regular Huffman coder. However, SRT-based Huffman coding is not popular for real-time applications due to improper negative-symbol handling and additional indexing issues, which restrict its compression gain to at most 10-20% over the regular Huffman coder. Hence, in this paper, an improved SRT (ISRT) based Huffman coder is proposed to alleviate the deficiencies of the recent SRT-based Huffman coder and to achieve higher compression gains. The proposed entropy coder is extensively evaluated in terms of compression gain and time complexity. The results show that the proposed ISRT-based Huffman coder provides significant compression gain over existing entropy coders with lower time consumption.
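For context, the "regular Huffman coder" that SRT and ISRT build on can be sketched compactly; this is only the baseline code-book construction (the symbol-reduction step itself is not described in enough detail in the abstract to reproduce), and the function name is ours.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code book from a symbol stream: repeatedly merge
    the two least-frequent subtrees, prefixing '0'/'1' to their codes.
    This is the regular Huffman baseline the SRT/ISRT variants extend."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate single-symbol source
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]
```

The 10-20% figure quoted above is measured relative to the output length of exactly this kind of baseline coder.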
ISBN:
(print) 0819437603
Entropy coders, as a noiseless compression method, are widely used as the final compression step for images, and many contributions have aimed at increasing entropy coder performance and reducing entropy coder complexity. In this paper, we propose entropy coders based on binary forward classification (BFC). The BFC requires classification overhead, but the total amount of classified output information equals the amount of input information, a property we prove in this paper. Using this property, we propose entropy coders consisting of the BFC followed by Golomb-Rice coders (BFC+GR) and the BFC followed by arithmetic coders (BFC+A). The proposed entropy coders introduce negligible additional complexity due to the BFC. Simulation results show better performance than other entropy coders of similar complexity.
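The Golomb-Rice stage in BFC+GR is standard enough to sketch: a Rice code with parameter k (Golomb code with divisor 2^k) emits the quotient in unary followed by a k-bit remainder. The function name is ours, and the BFC classification that would select k per class is not modelled.

```python
def rice_encode(n: int, k: int) -> str:
    """Rice code (Golomb with M = 2**k): unary quotient, '0' terminator,
    then the k-bit binary remainder. Efficient for geometrically
    distributed values when k matches the class statistics."""
    assert n >= 0 and k >= 0
    q, r = n >> k, n & ((1 << k) - 1)
    rem = format(r, f"0{k}b") if k > 0 else ""
    return "1" * q + "0" + rem

# k = 2: 0 -> '000', 5 -> '1001', 9 -> '11001'
```

Choosing k per BFC class is presumably where the scheme gains over a single global parameter, since each class sees a narrower value distribution.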
ISBN:
(print) 9798350338935
Over the past years, the ever-growing demand for data storage, specifically for "cold" (rarely accessed) data, has motivated research into alternative storage systems. Because of their biochemical characteristics, synthetic DNA molecules are now considered serious candidates for this new kind of storage. This paper presents results on lossy image compression methods based on convolutional autoencoders adapted to DNA data storage, with synthetic-DNA-adapted entropy and fixed-length codes. The model architectures presented here are designed to efficiently compress images, encode them into a quaternary code, and finally store them in synthetic DNA molecules. This work also aims to make the compression models better fit the constraints encountered when storing data in DNA, namely that DNA writing, storage, and reading are error-prone processes. The main takeaways of this compressive autoencoder are our latent-space quantization and the different DNA-adapted entropy coders used to encode the quantized latent space, which improve on the fixed-length DNA-adapted coders used previously.
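As a toy illustration of the fixed-length quaternary mapping mentioned above, two bits can be mapped to one of the four nucleotides; this sketch is ours and deliberately ignores the homopolymer-run and GC-content constraints that real DNA-adapted codes must enforce.

```python
NUCLEOTIDES = "ACGT"

def bits_to_dna(bits: str) -> str:
    """Fixed-length mapping of bit pairs to nucleotides (2 bits/nt).

    NOTE: a toy code for illustration only. Practical DNA storage codes
    additionally constrain homopolymer runs and GC balance, because long
    repeats and skewed GC content raise synthesis/sequencing error rates.
    """
    assert len(bits) % 2 == 0 and set(bits) <= {"0", "1"}
    return "".join(NUCLEOTIDES[int(bits[i:i + 2], 2)]
                   for i in range(0, len(bits), 2))
```

The DNA-adapted entropy coders the paper proposes replace this fixed 2 bits/nucleotide rate with variable-length quaternary codewords matched to the latent-space statistics.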
Linear prediction is a mathematical operation that estimates future values of a discrete-time signal as a linear function of previous samples. When applied to predictive coding of waveforms such as speech and audio, a common issue that plagues compression performance is the non-stationary behaviour of the prediction residuals around the starting point of random access frames, because the dependencies between the prediction residuals and the historical waveform are interrupted to satisfy the random access requirement. In such frames the dynamic range of the prediction residuals fluctuates dramatically, leading to substantially poorer coding performance in the subsequent entropy coder. In this study, the authors address this long-standing issue by establishing a theoretical relationship between the energy envelope of the linear prediction residuals in random access frames and the prediction coefficients. Using the established relationship, an adaptive normalisation method is formulated as a preprocessor to the entropy coder to mitigate the poor coding performance in random access frames. Simulation results confirm the superiority of the proposed method over existing solutions in terms of coding efficiency.
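The normalisation idea can be sketched in simplified form: scale the residual frame by an energy estimate before entropy coding, and carry the gain so the decoder can invert the scaling. The paper derives the energy envelope analytically from the prediction coefficients; here we merely estimate it from the frame itself as a stand-in, and the function name is ours.

```python
import math

def normalise_residuals(residuals, eps=1e-12):
    """Scale a residual frame to roughly unit RMS before entropy coding.

    Stand-in for the paper's method: the gain here is the empirical RMS
    of the frame (plus eps to avoid division by zero), whereas the paper
    computes the envelope from the LPC coefficients. The gain is returned
    so the decoder can undo the normalisation.
    """
    gain = math.sqrt(sum(r * r for r in residuals) / len(residuals)) + eps
    return [r / gain for r in residuals], gain
```

After this preprocessing, the entropy coder sees residuals with a stable dynamic range across random access frames, which is the stated goal of the adaptive normalisation.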
Real-time, high-quality video coding is attracting wide interest in the research and industrial communities for different applications. H.264/AVC, a recent standard for high-performance video coding, can be successfully exploited in several scenarios, including digital video broadcasting, high-definition TV, and DVD-based systems, which require sustaining up to tens of Mbit/s. To that end, this paper proposes optimized architectures for the most critical tasks in H.264/AVC: motion estimation and context-adaptive binary arithmetic coding. Post-synthesis results on sub-micron CMOS standard-cell technologies show that the proposed architectures can process 720 x 480 video sequences at 30 frames/s in real time and sustain more than 50 Mbit/s. The achieved circuit complexity and power consumption budgets are suitable for integration in complex VLSI multimedia systems based either on an AHB bus-centric on-chip communication system or on novel Network-on-Chip (NoC) infrastructures for MPSoC (Multi-Processor System-on-Chip). (C) 2010 Elsevier B.V. All rights reserved.
Two approaches for integrating encryption with multimedia compression systems are studied in this research: selective encryption and modified entropy coders with multiple statistical models. First, we examine the limitations of selective encryption using cryptanalysis and provide examples that use selective encryption successfully. Two rules for determining whether selective encryption is suitable for a compression system are derived. Next, we propose another approach that turns entropy coders into encryption ciphers using multiple statistical models. Two specific encryption schemes are obtained by applying this approach to the Huffman coder and the QM coder. It is shown that security is achieved without sacrificing compression performance or computational speed. This modified entropy coding methodology can be applied to most modern compressed audio/video and image formats, such as MPEG audio, MPEG video, and JPEG/JPEG2000.
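The multiple-statistical-model idea can be illustrated with a toy sketch: a PRNG seeded with a secret key selects one of several prefix code tables for each symbol, so the bitstream cannot be parsed without the key. Everything below is our illustrative stand-in (toy fixed code books rather than the paper's trained Huffman/QM models, and a non-cryptographic PRNG where a real scheme would use a keyed generator).

```python
import random

def multi_table_encode(symbols, tables, key):
    """Keyed model switching: the PRNG picks the active code table for
    every symbol, entangling encryption with entropy coding."""
    rng = random.Random(key)
    return "".join(tables[rng.randrange(len(tables))][s] for s in symbols)

def multi_table_decode(bits, length, tables, key):
    """Decoder mirrors the PRNG, so it knows which table applies next."""
    rng = random.Random(key)
    out, pos = [], 0
    for _ in range(length):
        inverse = {c: s for s, c in tables[rng.randrange(len(tables))].items()}
        # Consume bits until they match a codeword of the active table;
        # each table is prefix-free, so the shortest match is correct.
        for end in range(pos + 1, len(bits) + 1):
            if bits[pos:end] in inverse:
                out.append(inverse[bits[pos:end]])
                pos = end
                break
    return "".join(out)

# Two prefix-free code books for {a, b, c} with swapped code lengths.
TABLES = [{"a": "0", "b": "10", "c": "11"},
          {"a": "11", "b": "0", "c": "10"}]
```

Because every table is a valid prefix code for the same alphabet, the average code length (and thus compression) is essentially preserved, which is the property the abstract highlights.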
This paper presents a compression scheme for digital still images that uses Kohonen's neural network algorithm, not only for its vector quantization capability but also for its topological property. This property allows an increase of about 80% in the compression rate. Compared with the JPEG standard, this compression scheme shows better performance (in terms of PSNR) for compression rates higher than 30.