In a resolution scalable image coding algorithm, a multiresolution representation of the data is often obtained using a linear filter bank. Reversible cellular automata have been recently proposed as simpler, nonlinear filter banks that produce a similar representation. The original image is decomposed into four subbands, such that one of them retains most of the features of the original image at a reduced scale. In this paper, we discuss the use of reversible cellular automata and arithmetic coding for scalable compression of binary and grayscale images. In the binary case, the proposed algorithm, which uses simple local rules, compares well with the JBIG compression standard, in particular for images where the foreground is made of a simple connected region. For complex images, more efficient local rules based upon the lifting principle have been designed. They provide compression performance very close to or even better than JBIG, depending upon the image characteristics. In the grayscale case, and in particular for smooth images such as depth maps, the proposed algorithm outperforms both the JBIG and JPEG2000 standards under most coding conditions.
Block truncation coding (BTC) is a simple and fast image compression algorithm, since no complicated transforms are used. The principle of the BTC algorithm is to use a two-level quantiser that adapts to the local properties of the image while preserving the first-order, or first- and second-order, statistical moments. The parameters transmitted or stored by the BTC algorithm are the statistical moments and the bitplane, yielding good-quality images at a bitrate of 2 bits per pixel (bpp). In this paper, two algorithms for modified BTC (MBTC) are proposed to reduce the bitrate below 2 bpp. The principle used in the proposed algorithms is to code the ratio of the moments, which is a smaller value than the absolute moments. The ratio values are then entropy coded. The bitplane is also coded to remove the correlation among the bits. The proposed algorithms are compared with MBTC and with algorithms obtained by combining the JPEG standard with MBTC, in terms of bitrate, peak signal-to-noise ratio (PSNR) and subjective quality. It is found that the reconstructed images obtained using the proposed algorithms yield better results.
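The two-level, moment-preserving quantiser described above can be sketched as follows. This is a minimal illustration of classic BTC, not the authors' modified (MBTC) algorithm, and the block values are made up for the example:

```python
import numpy as np

def btc_encode_block(block):
    """Classic BTC: quantise a block to two levels chosen so that the
    block mean and standard deviation (first two moments) are preserved."""
    mean, std = block.mean(), block.std()
    bitplane = block >= mean                     # 1 bit per pixel
    q = int(bitplane.sum())                      # pixels at or above the mean
    m = block.size
    if q == 0 or q == m:                         # flat block: one level suffices
        return mean, mean, bitplane
    a = mean - std * np.sqrt(q / (m - q))        # low output level
    b = mean + std * np.sqrt((m - q) / q)        # high output level
    return a, b, bitplane

def btc_decode_block(a, b, bitplane):
    """Reconstruct the block from the two levels and the bitplane."""
    return np.where(bitplane, b, a)

block = np.array([[2.0, 9.0], [3.0, 10.0]])
a, b, bp = btc_encode_block(block)
rec = btc_decode_block(a, b, bp)
```

Only `a`, `b` and the bitplane need to be stored, which for an m-pixel block is how the 2 bpp figure arises.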
When the embedded zerotree wavelet (EZW) algorithm was first introduced by Shapiro, four types of symbols (zerotree (ZTR), isolated zero (IZ), positive (POS), and negative (NEG)) were used to represent the tree structure. An improved version of EZW, the set partitioning in hierarchical trees (SPIHT) algorithm, was later proposed by Said and Pearlman. SPIHT removed the ZTR symbol, while keeping the other three symbols in a slightly different form. In the SPIHT algorithm, the coding of the parent node is isolated from the coding of its descendants in the tree structure. Therefore, it is no longer possible to encode the parent and its descendants with a single symbol. When both the parent and its descendants are insignificant (forming a degree-0 zerotree), they cannot be represented using a ZTR symbol. From our observation, degree-0 zerotrees occur very frequently not only in natural and synthetic images, but also in video sequences. Hence, the ZTR symbol is reintroduced into SPIHT in our proposed SPIHT-ZTR algorithm. In order to achieve this, the order of sending the output bits was modified to accommodate the use of the ZTR symbol. Moreover, the significant offspring were also encoded using a slightly different method to further enhance the performance. The SPIHT-ZTR algorithm was evaluated on images and video sequences. From the simulation results, the performance of binary-uncoded SPIHT-ZTR is higher than binary-uncoded SPIHT and close to SPIHT with adaptive arithmetic coding.
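The degree-0 zerotree condition discussed above can be illustrated with a toy recursive check. The nested-dict tree layout and the `is_zerotree` helper are hypothetical illustrations, not SPIHT's actual list-based data structures:

```python
def is_zerotree(node, threshold):
    """A node roots a degree-0 zerotree when it and every descendant
    are insignificant, i.e. all magnitudes fall below the threshold."""
    if abs(node['value']) >= threshold:
        return False
    return all(is_zerotree(child, threshold) for child in node['children'])

# Toy one-level tree: a parent coefficient with four offspring.
tree = {'value': 3,
        'children': [{'value': c, 'children': []} for c in (1, -2, 0, 2)]}
```

At threshold 4 the whole tree is insignificant, so a single ZTR symbol would suffice for five coefficients; at threshold 2 the parent itself is significant and must be coded separately.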
In this paper, we analyse a new chaos-based cryptosystem with an embedded adaptive arithmetic coder, which was proposed by Li Heng-Jian and Zhang J S (Li H J and Zhang J S 2010 Chin. Phys. B 19 050508). Although this new method has better compression performance than its original version, it is found that there are some problems with its security and decryption processes. In this paper, it is shown how to obtain a great deal of plaintext from the ciphertext without prior knowledge of the secret key. After discussing the security and decryption problems of the Li Heng-Jian et al. algorithm, we propose an improved chaos-based cryptosystem with an embedded adaptive arithmetic coder that is more secure.
In the field of lossless compression, most traditional software has shortcomings when facing mass data: its compression ability is limited by the data window size and by the design of the compression format. This paper presents a new compression format named 'CZ format', which supports data window sizes up to 4 GB and has advantages for mass data compression. Using this format, a compression shareware named 'ComZip' is designed. The experimental results support that ComZip achieves a better compression ratio than WinZip, Bzip2 and WinRAR in most cases, especially when GBs or TBs of mass data are compressed, and that ComZip has the potential to beat 7-zip in the future as the data window size exceeds 128 MB.
In this paper, we study various lossless compression techniques for electroencephalograph (EEG) signals. We discuss a computationally simple pre-processing technique, where the EEG signal is arranged in the form of a matrix (2-D) before compression. We discuss a two-stage coder to compress the EEG matrix, with a lossy coding layer (SPIHT) and a residual coding layer (arithmetic coding). This coder is optimally tuned to utilize the source memory and the i.i.d. nature of the residual. We also investigate and compare EEG compression with other schemes such as the JPEG2000 image compression standard, the predictive-coding-based Shorten, and simple entropy coding. The compression algorithms are tested with the University of Bonn database and the PhysioBank Motor/Mental Imagery database. 2-D based compression schemes yielded higher lossless compression compared to the standard vector-based compression, predictive and entropy coding schemes. The use of the pre-processing technique resulted in a 6% improvement, and the two-stage coder yielded a further improvement of 3% in compression performance. (C) 2011 Elsevier Ltd. All rights reserved.
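The 2-D pre-processing step can be sketched as a simple reshape. The zero-padding policy, row width and the `eeg_to_matrix` helper name are assumptions for illustration, not necessarily the arrangement used in the paper:

```python
import numpy as np

def eeg_to_matrix(signal, width):
    """Arrange a 1-D EEG record row by row into a 2-D matrix so that a
    2-D coder such as SPIHT can exploit correlation across rows.
    The tail of the record is zero-padded to complete the last row."""
    n_rows = -(-len(signal) // width)          # ceiling division
    padded = np.zeros(n_rows * width, dtype=signal.dtype)
    padded[:len(signal)] = signal
    return padded.reshape(n_rows, width)

x = np.arange(10)
m = eeg_to_matrix(x, width=4)                  # 3 rows of 4 samples
```

Successive rows of the matrix cover adjacent time segments, so the quasi-periodic structure of the signal appears as inter-row correlation that a 2-D transform can exploit.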
An improvement of a discrete cosine transform (DCT)-based method for electrocardiogram (ECG) compression is presented. The appropriate use of a block-based DCT combined with a uniform scalar dead-zone quantiser and arithmetic coding shows very good results, confirming that the proposed strategy exhibits competitive performance compared with the most popular compressors used for ECG compression.
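A uniform scalar dead-zone quantiser of the kind mentioned above can be sketched as follows. The step size and the helper names are illustrative; the paper's exact parameters are not given in the abstract:

```python
import numpy as np

def deadzone_quantise(coeffs, step):
    """Uniform scalar quantiser with a dead zone: coefficients whose
    magnitude is below one step map to index 0, producing long zero
    runs that the arithmetic coder then compresses efficiently."""
    signs = np.sign(coeffs)
    idx = np.floor(np.abs(coeffs) / step)      # 0 inside the dead zone
    return (signs * idx).astype(int)

def deadzone_reconstruct(indices, step):
    """Mid-point reconstruction of the non-zero quantiser bins."""
    return np.where(indices == 0, 0.0,
                    np.sign(indices) * (np.abs(indices) + 0.5) * step)

c = np.array([0.3, -0.7, 2.4, -5.1])
q = deadzone_quantise(c, step=1.0)
```

The dead zone around zero is twice as wide as the other bins, which suppresses low-amplitude DCT coefficients (mostly noise) at little cost to reconstruction quality.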
ISBN (print): 9781457705397
To address the increasing demand for higher resolution and frame rates, processing speed (i.e. performance) and area cost need to be considered in the development of next-generation video coding. Accordingly, both algorithm and architecture should be taken into account during video codec design. This paper proposes joint optimization of both the algorithm and the architecture to ensure that high coding efficiency can be achieved in conjunction with high processing speed and low area cost. Specifically, it presents two optimizations that can be performed on Context-based Adaptive Binary Arithmetic Coding (CABAC), a form of entropy coding in H.264/AVC. First, subinterval reordering is proposed for the arithmetic decoder to increase the processing speed by 14 to 22% at no cost to coding efficiency. Second, modification of the motion vector difference (mvd) context selection is proposed to reduce memory requirements (i.e. area cost) by 50% with negligible coding efficiency impact (<= 0.02%). These joint algorithm and architecture optimizations are non-standard compliant and thus are well suited to be used in High Efficiency Video Coding (HEVC), the successor to H.264/AVC.
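The interval subdivision that subinterval reordering operates on can be illustrated with one idealised coder step. Real CABAC works on integer ranges with table-driven LPS estimates; this floating-point `binary_arith_step` helper is a simplification, not the standard's algorithm:

```python
def binary_arith_step(low, rng, p_lps, bit_is_lps):
    """One subdivision step of an idealised binary arithmetic coder:
    the current interval [low, low + rng) is split in proportion to
    the LPS probability, with the MPS subinterval in the lower part
    and the LPS subinterval in the upper part."""
    r_lps = rng * p_lps
    if bit_is_lps:
        return low + (rng - r_lps), r_lps      # take the LPS subinterval
    return low, rng - r_lps                    # take the MPS subinterval

low, rng = binary_arith_step(0.0, 1.0, 0.2, bit_is_lps=False)
```

Because each step serially depends on the previous interval, the order in which the two subintervals are computed sits on the decoder's critical path, which is what a reordering optimization targets.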
ISBN (print): 9783642240423; 9783642240430
In recent years, many wavelet coders have used various spatially adaptive coding techniques to compress images. Flexibility and coding efficiency are two crucial issues in spatially adaptive methods, so in this paper the "spherical coder" is introduced. The objective of this paper is to combine the spherical tree with the wavelet lifting technique and to compare the performance of the spherical tree under different coding techniques such as arithmetic, Huffman and run-length coding. The comparison is made using PSNR and compression ratio (CR). It is shown that the spherical tree in wavelet lifting with the arithmetic coder gives a high CR value.
ISBN (print): 9780819486370
Methods for embedding stream cipher rules into compressive Elias-type entropy coders are presented. Such systems can simultaneously compress and encrypt input data, thereby providing a fast and secure means of data compression. Procedures for maintaining compression performance are articulated, with a focus on further compression optimizations. Furthermore, a novel method is proposed that exploits the compression process to hide cipherstream information in the case of a known-plaintext attack. Simulations were performed on images from a variety of classes in order to grade and compare the compressive and computational costs of the novel system relative to traditional compression-followed-by-encryption methods.