Recent progress in wavelet image coding has brought the field to maturity. Major developments in the process are rate-distortion (R-D) based wavelet packet transformation, zerotree quantization, subband classification and trellis-coded quantization, and sophisticated context modeling in entropy coding. Drawing from past experience and recent insights, we propose a new wavelet image coding technique with trellis-coded space-frequency quantization (TCSFQ). TCSFQ aims to explore space-frequency characterizations of wavelet image representations via R-D optimized zerotree pruning, trellis-coded quantization, and context modeling in entropy coding. Experiments indicate that the TCSFQ coder achieves twice as much compression as the baseline JPEG coder does at the same peak signal-to-noise ratio (PSNR), making it better than all other coders described in the literature [1].
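Since the comparison above is made at equal PSNR, it may help to recall the standard definition; a minimal sketch in pure Python, assuming 8-bit images (peak = 255) given as flattened pixel sequences:

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    pairs = list(zip(original, reconstructed))
    mse = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    if mse == 0:
        return math.inf          # identical images: distortion-free
    return 10.0 * math.log10(peak ** 2 / mse)
```

"Twice as much compression at the same PSNR" then means: at matched `psnr` values, the TCSFQ bitstream is half the size of the JPEG one.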
A new scheme for the construction of m-out-of-n codes based on the arithmetic coding technique is described. For appropriate values of n, k, and m, the scheme can be used to construct an (n, k) block code in which all the codewords are of weight m. Such codes are useful, for example, in providing perfect error detection capability in asymmetric channels such as optical communication links and laser disks. The encoding and decoding algorithms of the scheme perform simple arithmetic operations recursively, thereby facilitating the construction of codes with relatively long block sizes. The scheme also allows the construction of optimal or nearly optimal m-out-of-n codes for a wide range of block sizes limited only by the arithmetic precision used.
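The recursive arithmetic of such a scheme is closely related to enumerative (combinatorial-ranking) coding of fixed-weight words. The sketch below shows that idea, not the paper's exact recursion: lexicographic ranking and unranking of weight-m length-n binary words via binomial coefficients.

```python
from math import comb

def rank(bits):
    """Lexicographic index of a fixed-weight binary word among all words
    of the same length and weight (0 sorts before 1)."""
    n, m = len(bits), sum(bits)
    idx = 0
    for i, b in enumerate(bits):
        if b == 1:
            # skip every word that has a 0 here and places its m ones
            # in the remaining n - i - 1 positions
            idx += comb(n - i - 1, m)
            m -= 1
    return idx

def unrank(idx, n, m):
    """Inverse of rank: recover the idx-th weight-m word of length n."""
    bits = []
    for i in range(n):
        c = comb(n - i - 1, m)
        if idx < c:
            bits.append(0)
        else:
            bits.append(1)
            idx -= c
            m -= 1
    return bits
```

Encoding a weight-m codeword to an integer in [0, C(n, m)) in this way is optimal in the sense the abstract describes, with the usable block size limited only by integer precision.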
The block truncation coding (BTC) technique is a simple and fast image compression algorithm, since complicated transforms are not used. The principle used in the BTC algorithm is a two-level quantiser that adapts to local properties of the image while preserving the first-, or first- and second-order, statistical moments. The parameters transmitted or stored in the BTC algorithm are the statistical moments and the bitplane, yielding good-quality images at a bitrate of 2 bits per pixel (bpp). In this paper, two algorithms for modified BTC (MBTC) are proposed for reducing the bitrate below 2 bpp. The principle used in the proposed algorithms is to code the ratio of moments, which is smaller in value than the absolute moments. The ratio values are then entropy coded. The bitplane is also coded to remove the correlation among the bits. The proposed algorithms are compared with MBTC and with algorithms obtained by combining the JPEG standard with MBTC in terms of bitrate, peak signal-to-noise ratio (PSNR), and subjective quality. It is found that the reconstructed images obtained using the proposed algorithms yield better results.
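The moment-preserving two-level quantiser described above is the classical BTC construction: the two output levels are chosen so that the block mean and second moment survive quantisation. A minimal sketch, with `btc_block`/`btc_reconstruct` as illustrative names:

```python
import math

def btc_block(block):
    """Two-level BTC quantiser for one block: returns (bitplane, low, high).
    The levels are chosen to preserve the block's first and second sample moments."""
    m = len(block)
    mean = sum(block) / m
    var = sum(x * x for x in block) / m - mean * mean
    sigma = math.sqrt(max(var, 0.0))
    bitplane = [1 if x >= mean else 0 for x in block]
    q = sum(bitplane)                     # number of pixels at or above the mean
    if q == 0 or q == m:                  # flat block: both levels equal the mean
        return bitplane, mean, mean
    low = mean - sigma * math.sqrt(q / (m - q))
    high = mean + sigma * math.sqrt((m - q) / q)
    return bitplane, low, high

def btc_reconstruct(bitplane, low, high):
    """Decode: each bit selects the high or low level."""
    return [high if b else low for b in bitplane]
```

At one bit per pixel for the bitplane plus the two levels per block, this is where the 2 bpp figure for 4x4 blocks comes from; the proposed MBTC variants then code the moment ratio and the bitplane further to go below that rate.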
This letter proposes a method for lossless coding of the left disparity image, L, from a stereo disparity image pair (L, R), conditional on the right disparity image, R, by keeping track of the transformation of the constant patches from R to L. The disparities in R are used for predicting the disparities in L, and the locations of the pixels where the prediction is erroneous are encoded in a first stage, conditional on the patch labels of the R image, allowing the decoder to already reconstruct with certainty some elements of the L image, e.g., the disparity values at certain pixels and parts of the contours of left-image patches. Second, the contours of the patches in the L image that are still unknown after the first stage are conditionally encoded using a mixed conditioning context: the usual causal current context from the contours of L and a noncausal context extracted from the contours in the correctly estimated part of L obtained in the first stage. The depth values in the patches of the L image are finally encoded, if they are not already known from the prediction stage. The new algorithm, dubbed conditional crack-edge region value (C-CERV), is shown to perform significantly better than the non-conditional coding method CERV and than another existing conditional coding method over the Middlebury corpus. C-CERV is shown to reach lossless compression ratios of 100-250 times for those images that have a high-precision disparity map.
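The first-stage prediction of L from R can be pictured as warping each right-view disparity to its left-view position. The sketch below only illustrates that idea; the `x + d` mapping convention and the nearer-surface collision rule are assumptions, not the letter's actual patch-tracking procedure, and occluded positions are left as gaps.

```python
def predict_left_disparity(R):
    """Warp right-view disparities to the left view: a pixel at column x in R
    with disparity d is predicted to land at column x + d in L (illustrative
    convention). Unreached positions stay None (occlusions / prediction gaps)."""
    H, W = len(R), len(R[0])
    L_pred = [[None] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            d = R[y][x]
            xt = x + d
            if 0 <= xt < W:
                # on collision keep the larger disparity: the nearer surface wins
                if L_pred[y][xt] is None or d > L_pred[y][xt]:
                    L_pred[y][xt] = d
    return L_pred
```

The pixels where this prediction is wrong or missing are exactly what the first coding stage must signal to the decoder.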
Coding unit (CU) splitting and pruning for complexity reduction in high-efficiency video coding (HEVC) intra coding is dealt with. Adaptive determination of the threshold values for splitting and pruning in each CU is proposed based on the depth-level information of neighbouring CUs; in the previous method, these values were determined from a fixed parameter. Simple preconditions for splitting and pruning are also proposed to improve coding efficiency. Simulation results show that the proposed method gives a significant reduction in computational complexity with a much smaller loss of coding efficiency compared with the previous method.
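One way to picture a threshold driven by neighbour depth levels is the hypothetical function below; `base` and `scale` are illustrative tuning constants, not values from the paper, and the monotone form (deeper neighbours, i.e. more detailed surrounding content, lower the cost threshold that triggers further splitting) is an assumption.

```python
def split_threshold(neighbour_depths, base=2.0, scale=0.5):
    """Hypothetical adaptive splitting threshold for one CU.
    neighbour_depths: quadtree depth levels of already-coded neighbour CUs.
    Deeper neighbours suggest detailed content, so the threshold is lowered,
    making a further split more likely to be tried."""
    if not neighbour_depths:
        return base                       # no neighbours: fall back to the fixed value
    avg_depth = sum(neighbour_depths) / len(neighbour_depths)
    return base / (1.0 + scale * avg_depth)
```

The previous method's fixed parameter corresponds to always returning `base`; the adaptive version specialises it per CU from causally available depth information.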
When the embedded zerotree wavelet (EZW) algorithm was first introduced by Shapiro, four types of symbols (zerotree (ZTR), isolated zero (IZ), positive (POS), and negative (NEG)) were used to represent the tree structure. An improved version of EZW, the set partitioning in hierarchical trees (SPIHT) algorithm, was later proposed by Said and Pearlman. SPIHT removed the ZTR symbol while keeping the other three symbols in a slightly different form. In the SPIHT algorithm, the coding of the parent node is isolated from the coding of its descendants in the tree structure. Therefore, it is no longer possible to encode the parent and its descendants with a single symbol: when both the parent and its descendants are insignificant (forming a degree-0 zerotree), they cannot be represented using a ZTR symbol. From our observation, degree-0 zerotrees occur very frequently not only in natural and synthetic images but also in video sequences. Hence, the ZTR symbol is reintroduced into SPIHT in our proposed SPIHT-ZTR algorithm. To achieve this, the order of sending the output bits was modified to accommodate the use of the ZTR symbol. Moreover, the significant offspring are encoded using a slightly different method to further enhance performance. The SPIHT-ZTR algorithm was evaluated on images and video sequences. From the simulation results, the performance of binary-uncoded SPIHT-ZTR is higher than that of binary-uncoded SPIHT and close to SPIHT with adaptive arithmetic coding.
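A degree-0 zerotree, as defined above, is a node whose own coefficient and all of whose descendants' coefficients are insignificant against the current threshold, so the whole subtree can be signalled with one ZTR symbol. A minimal recursive test on nested `(coeff, children)` tuples:

```python
def is_zerotree(tree, threshold):
    """True iff every coefficient magnitude in the subtree is below the
    current significance threshold (a degree-0 zerotree).
    tree: nested (coeff, [child_trees]) tuples."""
    coeff, children = tree
    if abs(coeff) >= threshold:
        return False                       # this node itself is significant
    return all(is_zerotree(c, threshold) for c in children)
```

In bit-plane coding the threshold halves each pass, so a subtree that is a zerotree at one pass may split into significant parts at the next.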
This letter describes a context-based entropy coding suitable for any causal spatial differential pulse code modulation (DPCM) scheme performing lossless or near-lossless image coding. The proposed method is based on partitioning of prediction errors into homogeneous classes before arithmetic coding. A context function is measured on prediction errors lying within a two-dimensional (2-D) causal neighborhood, comprising the prediction support of the current pixel, as the root mean square (RMS) of residuals weighted by the reciprocal of their Euclidean distances. Its effectiveness is demonstrated in comparative experiments concerning both lossless and near-lossless coding. The proposed context coding/decoding is strictly real-time.
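One plausible reading of that context function is sketched below: the RMS of the residuals in a causal neighbourhood, each weighted by the reciprocal of its Euclidean distance to the current pixel. The neighbourhood radius and the exact weighted-RMS form are assumptions for illustration.

```python
import math

def context_value(errors, y, x, radius=2):
    """Distance-weighted RMS of prediction errors in the causal (already
    decoded) neighbourhood of pixel (y, x); usable identically at the decoder,
    so no side information is needed to select the coding class."""
    num = den = 0.0
    for dy in range(-radius, 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx >= 0:
                break                      # causal: only pixels before (y, x) in raster order
            yy, xx = y + dy, x + dx
            if 0 <= yy < len(errors) and 0 <= xx < len(errors[0]):
                w = 1.0 / math.hypot(dy, dx)
                num += w * errors[yy][xx] ** 2
                den += w
    return math.sqrt(num / den) if den > 0 else 0.0
```

Quantising this value into a few bins then partitions the residuals into the homogeneous classes that are arithmetic-coded separately.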
Two weighting procedures are presented for compaction of output sequences generated by binary independent sources whose unknown parameter may occasionally change. The resulting codes need no knowledge of the sequence length T, i.e., they are strongly sequential, and the number of parameter changes is unrestricted. The additional-transition redundancy of the first method was shown to achieve the Merhav lower bound, i.e., log T bits per transition. For the second method we could prove that the additional-transition redundancy is not more than 3/2 log T bits per transition, which is more than the Merhav bound; however, the storage and computational complexity of this method are also more attractive than those of the first method. Simulations show that the difference in redundancy performance between the two methods is negligible.
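The standard building block for such weighting procedures is the Krichevsky-Trofimov (KT) estimator for a single constant-parameter segment; the weighting over possible transition positions, which is the letter's actual contribution, is not reproduced here. A sketch of the KT ideal sequential code length:

```python
import math

def kt_sequential_codelength(bits):
    """Ideal code length (in bits) of the Krichevsky-Trofimov estimator for a
    binary i.i.d. source with unknown parameter: after seeing `ones` ones and
    `zeros` zeros, P(next bit = 1) = (ones + 1/2) / (ones + zeros + 1)."""
    ones = zeros = 0
    length = 0.0
    for b in bits:
        p1 = (ones + 0.5) / (ones + zeros + 1.0)
        length += -math.log2(p1 if b else 1.0 - p1)
        if b:
            ones += 1
        else:
            zeros += 1
    return length
```

The estimator is sequential (each probability depends only on past counts), which is what makes the strongly sequential, length-oblivious codes in the letter possible.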
A new lossless image coding method competitive with the best known image coding techniques in terms of efficiency and complexity is suggested. It is based on adaptive color space transform, adaptive context coding, and improved prediction of pixel values of image color components. Examples of application of the new algorithm to a set of standard images are given and comparison with known algorithms is performed.
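The abstract does not spell out the improved predictor; as a standard example of the kind of spatial prediction of pixel values such coders build on, here is the median edge detector (MED) used in LOCO-I/JPEG-LS, which switches between horizontal, vertical, and planar prediction based on the causal neighbours.

```python
def med_predict(a, b, c):
    """Median edge detector (LOCO-I / JPEG-LS style): predict the current pixel
    from its left (a), upper (b), and upper-left (c) causal neighbours.
    Picks min(a, b) or max(a, b) at an apparent edge, else the planar a + b - c."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c
```

The prediction residuals, formed per colour component after the adaptive colour transform, are then what the adaptive context coder compresses.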
ISBN: (print) 0819427802
To achieve both a high compression ratio and information preservation, it is efficient to combine segmentation with a lossy compression scheme. Microcalcification in mammograms is one of the most significant signs of early-stage breast cancer. Therefore, in coding, detection and segmentation of microcalcifications enable us to preserve them well by allocating more bits to them than to other regions. Segmentation of microcalcifications is performed both in the spatial domain and in the wavelet transform domain. A peak-error-controllable quantization step, designed off-line, is suitable for medical image compression. For region-adaptive quantization, block-based wavelet transform coding is adopted, and different peak-error-constrained quantizers are applied to blocks according to the segmentation result. In view of preservation of microcalcifications, the proposed coding scheme shows better performance than JPEG.
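A peak-error-constrained quantizer with bound delta can use a uniform step of 2*delta + 1, which guarantees a reconstruction error of at most delta. The region-adaptive sketch below uses illustrative delta values (not the paper's): microcalcification (ROI) blocks get a tight bound, background blocks a looser one.

```python
def quantize(x, delta):
    """Uniform quantiser index with guaranteed peak error <= delta (near-lossless)."""
    step = 2 * delta + 1
    return (x + delta) // step

def dequantize(q, delta):
    """Midpoint reconstruction for the index q."""
    return q * (2 * delta + 1)

def region_adaptive(block, is_roi, roi_delta=0, bg_delta=3):
    """Illustrative region-adaptive coding: ROI blocks (segmented
    microcalcifications) use a tighter peak-error bound than background.
    roi_delta = 0 makes the ROI exactly lossless."""
    d = roi_delta if is_roi else bg_delta
    return [dequantize(quantize(x, d), d) for x in block]
```

Since delta per block follows deterministically from the segmentation map, the decoder can apply the matching dequantiser without extra side information beyond that map.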