In this paper, we propose a new geometric finite mixture model-based adaptive arithmetic coding (AAC) scheme for lossless image compression. When AAC is applied to image compression, large compression gains can be achieved only through sophisticated models that provide more accurate probabilistic descriptions of the image. In this work, we divide the residual image into non-overlapping blocks and model the statistics of each block by a mixture of geometric distributions whose parameters are estimated by maximum likelihood using the expectation-maximization (EM) algorithm. Moreover, a histogram tail-truncation method is applied within each prediction-error block to reduce the number of symbols in the arithmetic coding and thereby mitigate the effect of zero-occurrence symbols. Experimentally, we show that, using a suitable block size and number of mixture components in conjunction with the median edge detector prediction technique, the proposed method outperforms well-known lossless image compressors.
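As a rough illustration of the per-block estimation step described in this abstract, the following sketch fits a mixture of geometric distributions with EM. The function names, the initialization, and the two-component setup are illustrative assumptions, not the paper's implementation; the M-step uses the closed-form geometric MLE p_j = (Σ_i r_ij) / (Σ_i r_ij (x_i + 1)).

```python
import numpy as np

def geometric_pmf(k, p):
    # P(X = k) = (1 - p)^k * p for k = 0, 1, 2, ...
    return (1.0 - p) ** k * p

def em_geometric_mixture(x, n_components=2, n_iters=50, seed=0):
    """Fit a mixture of geometric distributions to non-negative integer
    data x with EM. Returns mixture weights w and parameters p."""
    rng = np.random.default_rng(seed)
    p = rng.uniform(0.1, 0.9, n_components)        # success probabilities (random init)
    w = np.full(n_components, 1.0 / n_components)  # mixture weights
    x = np.asarray(x, dtype=float)
    for _ in range(n_iters):
        # E-step: responsibilities r[i, j] proportional to w[j] * pmf(x[i]; p[j])
        lik = w * geometric_pmf(x[:, None], p[None, :])
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: closed-form updates for weights and geometric parameters
        nj = r.sum(axis=0)
        w = nj / len(x)
        p = nj / (r * (x[:, None] + 1.0)).sum(axis=0)
    return w, p
```

In a block-wise codec, the fitted mixture would supply the symbol probabilities driving the arithmetic coder for that block.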
ISBN:
(Print) 9781467372183
Contour compression encodes the boundaries of objects and is important for object-oriented image compression and cartoon-like image compression. In this paper, we propose an advanced algorithm for lossy compression of contours. First, we find that, among all curves, the 3rd-order polynomial is the most suitable for contour compression. Second, we suggest that, instead of encoding the coefficients of curves directly, it is more efficient to encode the heights of curves. Moreover, we apply an alternative definition of curvature to find the dominant points. Furthermore, we apply an improved version of adaptive arithmetic coding, which increases the probabilities of values with the same sign or with similar amplitudes, to encode the heights of curves. Simulations show that, at similar error levels, the proposed algorithm requires far fewer bits than other existing methods for contour compression.
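A minimal sketch of the "encode curve heights instead of coefficients" idea: fit a 3rd-order polynomial to one contour segment and emit integer samples of the fitted curve. The uniform parametrization, sample count, and function name are assumptions for illustration; the paper's exact segment parametrization is not specified here.

```python
import numpy as np

def fit_segment_heights(points, n_samples=8):
    """Fit a cubic to one contour segment's y-profile and return integer
    samples of the fitted curve (the 'heights' to be entropy-coded)."""
    pts = np.asarray(points, dtype=float)
    t = np.linspace(0.0, 1.0, len(pts))       # uniform segment parameter (assumption)
    coeffs = np.polyfit(t, pts[:, 1], deg=3)  # 3rd-order polynomial fit
    ts = np.linspace(0.0, 1.0, n_samples)
    heights = np.polyval(coeffs, ts)          # sampled curve heights
    return np.rint(heights).astype(int)       # integers for the arithmetic coder
```

Heights vary more smoothly and over a smaller range than raw polynomial coefficients, which is what makes them cheaper to entropy-code.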
We propose a new adaptive block-wise lossless image compression algorithm based on the so-called alphabet reduction scheme combined with adaptive arithmetic coding (AC). This new encoding algorithm is particularly efficient for lossless compression of images with sparse and locally sparse histograms. AC is a very efficient technique for lossless data compression and produces a rate close to the entropy; however, a compression performance loss occurs when encoding images or blocks whose number of active symbols is small compared with the number of symbols in the nominal alphabet, which amplifies the zero-frequency problem. Most methods add one to the frequency count of each symbol of the nominal alphabet, which distorts the statistical model and therefore reduces the efficiency of the AC. The aim of this work is to overcome this drawback by assigning to each image block the smallest possible set containing all the symbols that actually occur, called active symbols, as an alternative to using the nominal alphabet of conventional arithmetic encoders. We show experimentally that the proposed method outperforms several lossless image compression encoders and standards, including the conventional arithmetic encoders, JPEG2000, and JPEG-LS. (C) 2015 SPIE and IS&T
ISBN:
(Print) 9781479952304
In this paper, an effective algorithm for compressing simple images, such as cartoons and hand-drawn images, is proposed. Compared to existing methods, the proposed algorithm applies several new techniques. First, we classify the regions of an image into four classes (uniform, semi-uniform, multiple DCs, and non-uniform), and a different coding algorithm is applied to each class. Second, instead of calculating the average, we apply majority voting to determine the DC terms. Moreover, a dividing and 2nd-order polynomial approximation scheme is applied for boundary encoding. Simulations show that, when compressing simple images, the proposed algorithm substantially outperforms other state-of-the-art algorithms, especially in perceptual quality.
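The majority-voting DC selection mentioned above is simple enough to sketch directly; the function name is illustrative. The mode of the region's pixels represents a (semi-)uniform region better than the mean when a few outlier pixels are present.

```python
import numpy as np

def majority_vote_dc(region):
    """Choose a region's DC term by majority vote: the most frequent
    pixel value, rather than the average."""
    values, counts = np.unique(np.asarray(region).ravel(), return_counts=True)
    return int(values[np.argmax(counts)])
```

For example, for the pixels [10, 10, 10, 200] the mean would be 57.5, while the majority vote returns the representative value 10.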
Despite the approval of a new standard in 2009, JPEG-extended range lossless, current digital products still employ previous standards for lossless image compression, such as JPEG, JPEG2000, and JPEG-LS. Wavelet-based codecs can provide abundant functionality and excellent compression efficiency. Among them, the backward coding of wavelet trees (BCWT) algorithm offers lower complexity and consumes less internal buffer memory without sacrificing quality at similar compression ratios (CR) when compared to other wavelet-based codecs, such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). A line-based BCWT was previously developed to further reduce internal buffer memory. Here, a very efficient line-based lossless BCWT compression algorithm is presented. A lossless color transform and a lossless wavelet transform are employed, and the original BCWT algorithm is modified for lossless operation, including the incorporation of adaptive arithmetic coding. To eliminate coding redundancies, a set-to-zeros method and a zero-tree detection algorithm are proposed, which significantly enhance the boundary-condition CR performance while preserving the algorithm's advantages. Tests and analysis show that the lossless BCWT algorithm requires less memory and fewer computational resources than SPIHT and JPEG2000, while retaining image quality comparable to the standard image codecs; lossless BCWT is therefore quite suitable for implementation in modern digital technologies. (C) 2014 SPIE and IS&T
ISBN:
(Print) 9781467349970
Interband coding techniques are needed for effective compression of hyperspectral images, since high interband correlation cannot be exploited by intraband coding. In this letter, an interband version of GAP (gradient adjusted prediction) is proposed by combining a linear prediction with a gradient adjusted predictor. The corresponding prediction function is chosen by comparing the difference between the estimate of the horizontal gradients and that of the vertical gradients with a given threshold. After prediction, the difference is entropy-coded using an adaptive entropy coder. Experimental results on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data show that the proposed algorithm can exploit both interband and intraband statistical correlations and achieve better compression performance than existing classical algorithms. Moreover, its low encoder complexity makes it suitable for on-board compression of hyperspectral images.
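A minimal sketch of the gradient-thresholded predictor selection described above. The causal-neighbour gradient estimates, the specific interband linear predictor, and the fallback rule are assumptions for illustration; only the selection mechanism (compare |d_h - d_v| against a threshold) follows the abstract.

```python
import numpy as np

def interband_gap_predict(cur, ref, x, y, threshold=8):
    """Predict pixel (y, x) of band `cur` using the co-located,
    already-decoded band `ref` (assumed reference). Gradient estimates
    from causal neighbours select the prediction function."""
    W = int(cur[y, x - 1])  # west neighbour
    N = int(cur[y - 1, x])  # north neighbour
    dh = abs(int(cur[y, x - 1]) - int(cur[y, x - 2])) if x >= 2 else 0  # horizontal gradient
    dv = abs(int(cur[y - 1, x]) - int(cur[y - 2, x])) if y >= 2 else 0  # vertical gradient
    if abs(dh - dv) <= threshold:
        # gradients similar: interband linear prediction from the reference band
        return int(ref[y, x]) + (W - int(ref[y, x - 1]))
    # strong directional edge: simple intraband gradient-adjusted fallback
    return N if dh > dv else W
```

The residual `cur[y, x] - interband_gap_predict(...)` would then be entropy-coded adaptively.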
ISBN:
(Print) 9781424417650
We propose a distributed source coding system for data collected by sensor networks. It uses a feedback channel between the sensors and the gateway node (i.e., the joint decoder), but, unlike previous systems, the encoding process is driven by the decoder. Compression is performed using distributed arithmetic coding, which is extended to adaptively estimate the source probabilities. Specifically, the decoder estimates marginal and conditional probabilities and sends them back to the sensors to drive the distributed arithmetic coding process. This reduces the decoding delay and potentially eliminates the need for rate-compatible Slepian-Wolf codes.
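The decoder-side adaptive estimation can be sketched as a small counting model: the gateway tallies decoded symbols against their side information, re-estimates marginal and conditional probabilities, and would feed them back to the sensors. The class name, the Laplace smoothing, and the side-information interface are illustrative assumptions.

```python
from collections import Counter, defaultdict

class DecoderProbabilityModel:
    """Decoder-side model: counts decoded (symbol, side-info) pairs and
    returns smoothed marginal and conditional probability estimates."""
    def __init__(self, alphabet):
        self.alphabet = list(alphabet)
        self.marginal = Counter()
        self.joint = defaultdict(Counter)  # joint[side_info][symbol]

    def update(self, symbol, side_info):
        self.marginal[symbol] += 1
        self.joint[side_info][symbol] += 1

    def marginal_prob(self, symbol):
        n = sum(self.marginal.values())
        return (self.marginal[symbol] + 1) / (n + len(self.alphabet))  # add-one smoothing

    def conditional_prob(self, symbol, side_info):
        ctx = self.joint[side_info]
        return (ctx[symbol] + 1) / (sum(ctx.values()) + len(self.alphabet))
```

Smoothing keeps every probability strictly positive, which an arithmetic coder requires even for symbols not yet observed in a context.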
ISBN:
(Print) 9781424458479
In this article, we present an efficient connectivity compression algorithm for triangular meshes. It is a face-based, single-resolution, lossless connectivity compression method that improves on Edgebreaker. For mesh traversal, we use an adaptive traversal method that keeps the number of Split operations, which burden the compression ratio, as small as possible. For entropy coding, a variable code-mode is designed for every operator in the operator series produced by the traversal, yielding a binary strand that is finally encoded with adaptive arithmetic coding. The compression ratio of our algorithm is obtained once all the operators in the series are encoded. Compared with the previous best face-based encoding methods, our method significantly improves the compression ratio.
A lossless wavelet-based image compression method with adaptive prediction is proposed. First, we analyze the correlations between wavelet coefficients to identify a proper wavelet basis function; then the predictor variables are statistically tested to determine which related wavelet coefficients should be included in the prediction model. Finally, the prediction differences are encoded by an adaptive arithmetic encoder. Instead of relying on a fixed number of predictors at fixed locations, the proposed adaptive prediction approach overcomes the multicollinearity problem. Integrating correlation analysis for selecting the wavelet basis function with predictor variable selection achieves high prediction accuracy. Experimental results show that the proposed approach achieves a higher compression rate on CT, MRI, and ultrasound images compared with several state-of-the-art methods. (c) 2006 Elsevier Ltd. All rights reserved.
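The predictor-selection step can be sketched as a regression with a t-statistic screen: regress the target wavelet coefficients on candidate neighbours, keep only predictors whose t-statistics exceed a threshold, and refit with the survivors. The threshold, the least-squares formulation, and the function name are stand-ins for the paper's unspecified statistical test; dropping weak predictors is the simple guard against multicollinearity used here.

```python
import numpy as np

def select_and_fit(candidates, targets, t_threshold=2.0):
    """Least-squares fit with t-statistic predictor selection.
    candidates: (n_samples, n_candidates); targets: (n_samples,).
    Returns (coefficients over all candidates, boolean keep-mask)."""
    X = np.asarray(candidates, dtype=float)
    y = np.asarray(targets, dtype=float)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    dof = max(len(y) - X.shape[1], 1)
    s2 = float(resid @ resid) / dof                                     # residual variance
    se = np.sqrt(np.maximum(np.diag(s2 * np.linalg.pinv(X.T @ X)), 1e-30))
    keep = np.abs(beta) / se > t_threshold                              # crude t-test screen
    coeffs = np.zeros_like(beta)
    if keep.any():
        coeffs[keep] = np.linalg.lstsq(X[:, keep], y, rcond=None)[0]    # refit survivors
    return coeffs, keep
```

The prediction residual `y - X @ coeffs` is what would be passed to the adaptive arithmetic encoder.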
ISBN:
(Print) 9780780395091
Modern seismic exploration produces vast amounts of data that may exceed 100 Tbytes. In addition, as more data are processed and integrated on workstations, more data must be transferred among workstations over local area networks. High-ratio compression algorithms are therefore desirable to make seismic data processing more efficient in terms of storage and transmission bandwidth. Most current algorithms for seismic data compression are based on the wavelet or LCT (local cosine transform), which achieve only modest compression ratios and may cause visible degradation at high compression rates. In this paper, an adaptive seismic data compression method based on the wavelet packet transform is presented, which achieves higher compression ratios with no visible artifacts in the reconstructed data.
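The adaptivity of a wavelet-packet scheme comes from choosing, per tree node, whether further splitting is worthwhile. The following 1-D sketch uses a Haar filter and an entropy cost as stand-ins for the paper's unspecified filter and cost; only the best-basis recursion itself is standard.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform (even-length input)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail band
    return a, d

def entropy_cost(c, eps=1e-12):
    """Shannon-style cost on normalised coefficient energies."""
    p = np.abs(c) ** 2
    p = p / max(p.sum(), eps)
    p = p[p > eps]
    return float(-(p * np.log2(p)).sum())

def best_basis(x, max_level=3):
    """Recursively split a node only when the children's total entropy
    cost is lower; returns the leaf coefficient arrays of the chosen basis."""
    x = np.asarray(x, dtype=float)
    if max_level == 0 or len(x) < 2:
        return [x]
    a, d = haar_step(x)
    children = best_basis(a, max_level - 1) + best_basis(d, max_level - 1)
    if sum(entropy_cost(c) for c in children) < entropy_cost(x):
        return children
    return [x]
```

The chosen leaves (plus the tree shape) replace the fixed decomposition of a plain wavelet transform, which is what lets the packet scheme adapt to the oscillatory character of seismic traces.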