In most digital cameras, Bayer color filter array (CFA) images are captured, and demosaicing is generally carried out before compression. Recently, it was found that compression-first schemes outperform the conventional demosaicing-first schemes in terms of output image quality. An efficient prediction-based lossless compression scheme for Bayer CFA images is proposed in this paper. It exploits a context matching technique to rank the neighboring pixels when predicting a pixel, an adaptive color difference estimation scheme to remove the color spectral redundancy when handling red and blue samples, and an adaptive codeword generation technique to adjust the divisor of the Rice code used to encode the prediction residues. Simulation results show that the proposed scheme achieves better compression performance than conventional lossless CFA image coding schemes.
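As an illustration of the final stage described above, here is a minimal sketch of Rice coding with an adaptively chosen divisor 2**k; the mean-based parameter rule is a generic heuristic for illustration, not necessarily the paper's exact adaptation scheme.

```python
def rice_encode(values, k):
    """Encode non-negative residues with Rice parameter k (divisor 2**k)."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits += [1] * q + [0]                                 # quotient in unary, 0-terminated
        bits += [(r >> j) & 1 for j in range(k - 1, -1, -1)]  # remainder in k bits, MSB first
    return bits

def rice_decode(bits, count, k):
    values, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:        # read the unary quotient
            q += 1
            pos += 1
        pos += 1                     # skip the terminating 0
        r = 0
        for _ in range(k):           # read the k-bit remainder
            r = (r << 1) | bits[pos]
            pos += 1
        values.append((q << k) | r)
    return values

def adapt_k(values):
    """Generic heuristic: pick k so 2**k roughly matches the mean residue magnitude."""
    mean = max(1, sum(values) // max(1, len(values)))
    return max(0, mean.bit_length() - 1)
```

Small residues then cost mostly the short unary part, while the divisor tracks the residue statistics.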
We study encodings that give the best-known thresholds for the nonzero capacity of quantum channels, i.e., the upper bound on correctable noise, using an entropic approach to the calculation of the threshold values. Our results show that Pauli noise is correctable up to the hashing bound. For a depolarizing channel, this approach allows one to achieve a nonzero capacity for a fidelity (probability of no error) of f = 0.80870.
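The hashing bound mentioned above can be computed directly from the entropy of the Pauli error distribution. A sketch, assuming the standard depolarizing-channel error probabilities (f, (1-f)/3, (1-f)/3, (1-f)/3), locates the fidelity at which the rate 1 - H crosses zero; this gives the plain random-hashing threshold near 0.8107, which the paper's encodings improve to f = 0.80870.

```python
from math import log2

def hashing_rate(f):
    """Hashing-bound rate 1 - H(p) for a depolarizing channel with fidelity f,
    i.e. Pauli error probabilities (f, (1-f)/3, (1-f)/3, (1-f)/3)."""
    p = (1 - f) / 3
    return 1 - (-f * log2(f) - 3 * p * log2(p))

# Bisect for the fidelity at which the hashing rate crosses zero.
lo, hi = 0.75, 0.90
for _ in range(60):
    mid = (lo + hi) / 2
    if hashing_rate(mid) < 0:
        lo = mid
    else:
        hi = mid
print(round(hi, 4))  # ~ 0.8107, the standard hashing-bound threshold
```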
In this paper, a low-complexity algorithm is proposed for near-lossless compression of images. The reconstructed near-lossless image may differ from the original one within a pixelwise error tolerance. The proposed algorithm uses this property to convert the histogram of the original image into a new histogram that is proved to have minimum entropy. Hence, a new image is formed that has minimum entropy and high spatial correlation among its pixels, and can be compressed efficiently. Simulation results show the effectiveness of this compression algorithm.
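A common way to realize such a histogram reduction (a generic sketch, not necessarily the paper's exact minimum-entropy mapping) is uniform quantization with bin width 2*delta + 1, which guarantees a pixelwise error of at most delta while collapsing the histogram onto far fewer values:

```python
def near_lossless_map(pixels, delta):
    """Map each 8-bit pixel to the representative of a bin of width
    2*delta + 1, so the reconstruction error is at most delta and the
    histogram collapses onto far fewer distinct values (lower entropy)."""
    step = 2 * delta + 1
    return [min(255, ((x + delta) // step) * step) for x in pixels]
```

The clipping at 255 keeps the topmost bin inside the 8-bit range without exceeding the error bound.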
This paper proposes a new still-image codec. The encoder has the following structure: a set of pixels of the image is selected and transmitted, together with their positions. The value of the image at the remaining locations is then obtained by a prediction algorithm at the decoder. A useful theoretical covariance model adapted to the image to be encoded is proposed, avoiding the transmission of additional side information. The selected pixels and their corresponding positions in the image are encoded using lossless coding algorithms. The computational time of the decoding process is significantly reduced thanks to an efficient structured memory organization of (i) the image covariance values and (ii) the ordered distances used in the search for the nearest pixels. Experimental results on a set of test images show that the rate-distortion results are competitive with the best coders, JPEG 2000 and SPIHT with arithmetic coding.
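The decoder-side prediction from scattered known pixels can be sketched as simple kriging under an exponential covariance model; both the model form C(d) = exp(-d/rho) and the range parameter rho here are assumptions for illustration, not the paper's fitted model:

```python
from math import exp, hypot

def solve(A, b):
    """Tiny Gauss-Jordan elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def predict(known, target, rho=8.0):
    """Predict the value at `target` from scattered known pixels by simple
    kriging with an assumed exponential covariance C(d) = exp(-d / rho)."""
    pts = [p for p, _ in known]
    vals = [v for _, v in known]
    C = [[exp(-hypot(px - qx, py - qy) / rho) for (qx, qy) in pts]
         for (px, py) in pts]
    c0 = [exp(-hypot(px - target[0], py - target[1]) / rho) for (px, py) in pts]
    w = solve(C, c0)                 # optimal weights for this covariance model
    return sum(wi * vi for wi, vi in zip(w, vals))
```

When the target coincides with a transmitted pixel, the weights reduce to a unit vector and the prediction reproduces that pixel exactly.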
We present a high-throughput and low-cost context-adaptive binary arithmetic coding (CABAC) decoder for H.264/AVC. Since the CABAC decoder has strong data dependencies while decoding a plurality of bins, we propose a novel pipeline architecture to speed up this operation. Based on the different types of syntax elements, two approaches to improving throughput are proposed. In addition, we rearrange the context models in memory by applying two principles in order to reduce memory usage and to lower the frequency of memory accesses. The proposed CABAC decoder has been integrated into an H.264 decoder and achieves real-time decoding for H.264/AVC High Profile at HD Level 4.1. The implemented design can operate at 250 MHz with a 35.6 k gate count in 0.18 μm silicon technology.
In this paper, an energy-efficient wavelet transform integrated with EBCOT is proposed. The entropy coder in JPEG 2000 is the embedded block coder with optimized truncation (EBCOT). The embedded block coding (EBC) stage is the most complex part of JPEG 2000, consuming more than 50% of the total computation. EBCOT tier 2 performs post-compression rate-distortion optimization (PCRDO), which truncates the embedded bit stream at a target bit rate to provide optimal image quality. The two major drawbacks of the PCRDO scheme are the high computational cost of EBC and the high memory requirement, since all coding passes must be produced before the truncation points are determined. To overcome these drawbacks, an energy-efficient wavelet transform combined with EBCOT is proposed, in which the truncation points are chosen before actual coding using a randomness and propagation property, reducing the number of computations. The energy-efficient wavelet transform algorithm (EEWTA) further reduces the computation required to compress an image and also enhances image quality, even at low bit rates.
The 5/3 wavelet transform in JPEG 2000, with its two lifting steps, can reconstruct a signal without any loss and has been utilized for lossless coding. The 9/7 wavelet transform contains two more lifting steps and scaling operations to improve performance for lossy coding. Its loss is due to (1) quantization of the band signals, (2) rounding of signals after scaling, and (3) finite-word-length expression of the scaling coefficients. This paper analyzes conditions on the word length of the coefficients and the bit depth of the rounded signals for no loss. It also proposes a new lifting wavelet structure obtained by changing the order of the lifting step and the scaling. As a result, the rounding error is not scattered by the lifting steps, and the error is minimized in the minimax sense.
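The reversible 5/3 lifting structure referred to above can be sketched in a few lines; the integer predict and update steps, with symmetric boundary extension, are exactly invertible (assuming an even-length signal for brevity):

```python
def fwd_5_3(x):
    """Forward integer 5/3 lifting (JPEG 2000 reversible transform),
    even-length signal with symmetric boundary extension."""
    n = len(x)
    # Predict step: high-pass d[i] from each odd sample and its even neighbours.
    d = [x[2*i+1] - ((x[2*i] + x[2*i+2 if 2*i+2 < n else 2*i]) >> 1)
         for i in range(n // 2)]
    # Update step: low-pass s[i] from each even sample and adjacent details.
    s = [x[2*i] + ((d[i-1 if i > 0 else 0] + d[i] + 2) >> 2)
         for i in range(n // 2)]
    return s, d

def inv_5_3(s, d):
    """Inverse lifting: undo update, then undo predict -- exact reconstruction."""
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):
        x[2*i] = s[i] - ((d[i-1 if i > 0 else 0] + d[i] + 2) >> 2)
    for i in range(len(d)):
        x[2*i+1] = d[i] + ((x[2*i] + x[2*i+2 if 2*i+2 < n else 2*i]) >> 1)
    return x
```

Because each inverse step subtracts exactly what the forward step added (the same rounded integer), the round trip is lossless regardless of the rounding.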
In order to increase the computational efficiency of compression methods, we have to consider that new hardware architectures increasingly rely more on wider data paths and parallel processing (e.g., SIMD and multi-core) than on faster clocks. Higher data throughputs are achieved with entropy coding methods that process larger amounts of information at a time and use context dependencies that are less complicated and can be updated quickly. We propose a coding method with properties better suited to the new processors, which achieves better compression by exploiting patterns in data magnitude. We present experimental results on image coding implementations that take advantage of the fast decay of transform coefficient variance with frequency.
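One simple instance of exploiting magnitude patterns in a parallel-friendly way (an illustrative sketch, not the paper's method) is to code fixed-size groups of coefficients with a shared bit length: after reading the 4-bit group header, decoding within a group reduces to branch-free fixed-width extraction, which vectorizes well.

```python
def pack(coeffs, group=8):
    """Encode non-negative coefficients in groups: a 4-bit bit-length shared
    by the group, then each value in exactly that many bits (MSB first)."""
    bits = []
    for i in range(0, len(coeffs), group):
        block = coeffs[i:i + group]
        k = max(v.bit_length() for v in block)
        bits += [(k >> j) & 1 for j in (3, 2, 1, 0)]              # group header
        for v in block:
            bits += [(v >> j) & 1 for j in range(k - 1, -1, -1)]  # fixed-width payload
    return bits

def unpack(bits, count, group=8):
    vals, pos = [], 0
    while len(vals) < count:
        k = (bits[pos] << 3) | (bits[pos+1] << 2) | (bits[pos+2] << 1) | bits[pos+3]
        pos += 4
        for _ in range(min(group, count - len(vals))):
            v = 0
            for _ in range(k):       # fixed-width read: no per-symbol branching
                v = (v << 1) | bits[pos]
                pos += 1
            vals.append(v)
    return vals
```

High-frequency groups of a transform, being mostly small in magnitude, get short shared bit lengths, which is where the compression comes from.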
Entropy coding is a fundamental stage in all video compression algorithms in terms of compression efficiency and error resilience. In this paper, we propose and optimize a digital signal processor (DSP)-based implementation of the CAVLC tools for the H.264 Baseline encoder. As a result, we have been able to generate the bit stream and provide bit-rate results: the LETI encoder achieves high compression performance while offering good video quality.
Video halftoning is a key technology for the innovative display known as electronic paper (e-paper). Since e-paper is power-limited, halftone video compression is an emerging issue but remains relatively unexplored. In this paper, this issue is addressed and a novel halftone video compression scheme is proposed. Our scheme is composed of three main components: block decomposition, block-based halftone quantization, and source coding. We evaluate the proposed method via a lossless halftone video compression comparison with the well-known JBIG2 standard. In addition, we demonstrate the rate-distortion performance of the proposed lossy halftone video compression method.