This paper presents a fully progressive image coder based on well-optimized M-channel linear-phase paraunitary filter banks, coupled with several levels of wavelet decomposition of the DC band if needed. The uniform-band transform can be implemented as an overlapped block transform in a parallel fashion with fast, robust, and efficient lattice structures. By working only on the transform stage, we are able to obtain the highest-performance embedded coder to date. This result shows that lapped transforms, when carefully designed, are capable of providing superior reconstructed image quality compared to wavelets, both objectively and subjectively.
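To make the "overlapped block transform" idea concrete, here is a minimal numpy sketch of how an M-channel lapped analysis could be applied to a 1-D signal: blocks longer than the subsampling stride are taken with overlap, so each output block of M coefficients depends on neighbouring samples. The analysis matrix P is an assumed placeholder; the paper's specific linear-phase paraunitary lattice factorization is not reproduced here.

```python
import numpy as np

def lapped_forward(x, P):
    """Apply an M-channel lapped transform as an overlapped block transform.

    P is an (M, L) analysis matrix with L > M; blocks of length L are taken
    with stride M, so neighbouring blocks overlap by L - M samples."""
    M, L = P.shape
    pad = (L - M) // 2
    xp = np.pad(np.asarray(x, float), pad, mode="reflect")  # extend the borders
    n_blocks = (len(xp) - L) // M + 1
    return np.stack([P @ xp[i * M : i * M + L] for i in range(n_blocks)])
```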
This paper presents our efforts on improving wavelet-transform-based image coding. In particular, by making use of efficient image edge extension, a high-performance significant-wavelet-coefficient-based image coding (SWCBIC) scheme is developed. Compared with the original significant-wavelet-coefficient-based scheme, namely embedded zerotree wavelet (EZW) coding, our algorithm shows improved image quality, especially at high compression ratios.
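The "image edge extension" refers to how signal borders are padded before wavelet filtering; a common choice is symmetric (reflective) extension, which avoids the artificial discontinuities that a zero or periodic border would introduce. A small illustration, not necessarily the paper's exact extension rule:

```python
import numpy as np

# Symmetric extension of an image row before filtering: border samples are
# mirrored so the wavelet filters never see an artificial jump at the edge.
row = np.array([10.0, 12.0, 15.0, 14.0])
extended = np.pad(row, 2, mode="reflect")
print(extended)   # [15. 12. 10. 12. 15. 14. 15. 12.]
```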
ISBN (print): 0818681837
In order to obtain high-fidelity compressed medical images, a new compression scheme is proposed. Based on the stringent requirements of lossy medical image compression, we refine the context modeling for a given class of medical images and utilize the conditional entropy coding of the VQ index (CECOVI) scheme to code MR head images. The experimental results show that the image-type-dependent CECOVI achieves better rate-distortion performance than the state-of-the-art wavelet image coder SPIHT. This also implies that incorporating a conditional entropy coding strategy into the VQ process is an appropriate way to achieve high-fidelity medical image compression.
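The gain of conditional entropy coding of VQ indices comes from neighbouring indices being highly correlated, so the conditional entropy of an index given its already-decoded neighbour is much lower than its marginal entropy. A toy numerical illustration follows; the index map and the single left-neighbour context are invented for the example, and CECOVI's actual context modeling is more elaborate.

```python
import numpy as np
from collections import Counter

def entropy(counts):
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

# toy map of VQ indices with strong spatial correlation
idx = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])

right = Counter(idx[:, 1:].ravel().tolist())    # symbol to code
left  = Counter(idx[:, :-1].ravel().tolist())   # its causal context
joint = Counter(zip(idx[:, :-1].ravel().tolist(), idx[:, 1:].ravel().tolist()))

H_marginal = entropy(right)
H_conditional = entropy(joint) - entropy(left)  # H(X|Y) = H(X,Y) - H(Y)
print(f"H(index) = {H_marginal:.2f} bits, H(index | left) = {H_conditional:.2f} bits")
```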
Arithmetic coding is an efficient data compression technique. This paper describes the VLSI implementation of an arithmetic coder for a multilevel alphabet (256 symbols). The proposed design is based on the use of redundant arithmetic and on new schemes for storing and updating the cumulative probabilities and for updating the range and left end point of the interval. The use of redundant arithmetic reduces the delays of the modules, so the speed of the design is improved. The resulting chip has an area of 31 mm² and an operating frequency of 39 MHz.
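For context, the recurrence such a coder implements in hardware is the standard interval-narrowing step of multilevel arithmetic coding: the current interval is split in proportion to the symbol's cumulative and individual frequencies. A plain software sketch of that single step (no renormalization or pipelining, and the names are my own):

```python
def encode_step(low, rng, cum_freq, freq, total, symbol):
    """Narrow the interval [low, low + rng) for one symbol of a multilevel
    alphabet.  cum_freq[s] is the sum of freq[0..s-1]; the new interval's
    width is proportional to the symbol probability freq[s] / total."""
    low = low + rng * cum_freq[symbol] // total
    rng = rng * freq[symbol] // total
    return low, rng
```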
The perceptual subband image coder (PIC) introduced by Safranek and Johnston (1989) selects a noise target level for each subband based on an empirically derived perceptual masking measure. These noise target levels are used to set the quantization level of the DPCM quantizer for each particular subband. It achieves high-quality output at bit rates from 0.1 to 0.9 bits/pixel (bpp), depending on the complexity of the image. In this paper, we present an algorithm that locally adapts the quantizer step size at each pixel according to an estimate of the masking measure. This estimate is based on the already coded pixels and on predictions of the not-yet-coded pixels. Compared to the PIC, the proposed method does not require any additional side information; in fact, it eliminates the need to transmit the quantizer step size for each subband. For comparable perceptual quality, the proposed method achieves compression gains of up to 40 percent; typical values are on the order of 20 to 30 percent, depending on the nature of the image. Our algorithm also performs better for supra-threshold image compression, since the perceptual error is distributed more evenly and is not concentrated in the most sensitive regions.
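The key point is that the step size is driven only by an activity (masking) estimate computed from pixels the decoder has already reconstructed, so encoder and decoder derive the same step without side information. A toy causal version of that idea; the masking model, neighbourhood, and alpha parameter are illustrative stand-ins, not the PIC masking measure:

```python
import numpy as np

def adaptive_quantize(band, base_step, alpha=0.5):
    """Quantize a subband with a per-pixel step that grows with the local
    activity of already-coded (reconstructed) pixels; the decoder repeats
    the same estimate from the same reconstructions, so no steps are sent."""
    rec = np.zeros_like(band, dtype=float)
    q = np.zeros(band.shape, dtype=int)
    for i in range(band.shape[0]):
        for j in range(band.shape[1]):
            # causal neighbourhood of reconstructed pixels only
            past = rec[max(i - 1, 0):i + 1, max(j - 1, 0):j + 1].ravel()[:-1]
            activity = np.abs(past).mean() if past.size else 0.0
            step = base_step * (1.0 + alpha * activity)
            q[i, j] = int(round(band[i, j] / step))
            rec[i, j] = q[i, j] * step
    return q, rec
```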
Two methods to overcome the problems of large vector quantization (VQ) codebooks are lattice VQ (LVQ) and product codes. The approach described in this paper takes advantage of both methods by applying residual VQ with LVQ at all stages. Using LVQ in conjunction with entropy coding is strongly motivated by the fact that entropy-constrained but structurally unconstrained VQ design leads to more equally sized VQ cells. The entropy code of the first LVQ stage should aim at exploiting the statistical properties of the source; the refinement LVQ stages quantize the residuals. Simulations show that certain scales of the refinement lattices yield extraordinary performance, and we focus on the search for these scales.
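A minimal sketch of multi-stage residual lattice quantization, using the trivial Z^n lattice (plain rounding) at each stage so the code stays short; the paper's point about searching for good refinement-lattice scales corresponds to choosing the scales list well. Names and values are illustrative:

```python
import numpy as np

def residual_lattice_vq(x, scales):
    """Each stage quantizes the residual of the previous stage on a scaled
    Z^n lattice; the resulting lattice points (indices) would then be
    entropy coded, the first stage carrying most of the source statistics."""
    indices, residual = [], np.asarray(x, dtype=float)
    for s in scales:
        k = np.round(residual / s)        # nearest point of the lattice s * Z^n
        indices.append(k.astype(int))
        residual = residual - s * k       # pass the residual on to the next stage
    return indices, residual

# coarse-to-fine refinement scales
idx, err = residual_lattice_vq([3.7, -1.2, 0.4, 2.9], scales=[2.0, 0.5, 0.125])
```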
A lattice-based vector quantizer (VQ) and noiseless code are proposed for transform and subband image coding. The quantization is simple to implement, and no vector codebooks need to be stored. The noiseless code enumerates lattice codevectors based on their (weighted) l1 norm. A software implementation is able to handle lattice codebooks of size 2^256. The image coding performance is shown to be comparable or superior to the best encoding methods reported in the literature.
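The reason no codebook has to be stored is that codevectors can be enumerated on the fly: if the code consists of lattice points grouped by their l1 norm, one only needs to count (and index) the points on each norm shell. A small recursion for the unweighted case on the integer lattice Z^n, as an illustration of the enumerative idea rather than the paper's exact weighted scheme:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def shell_size(n, m):
    """Number of integer vectors of dimension n whose l1 norm is exactly m.
    An enumerative (noiseless) code assigns each such vector an index in
    [0, shell_size(n, m)), so huge codebooks need no stored tables."""
    if n == 0:
        return 1 if m == 0 else 0
    # the first coordinate takes magnitude v (two sign choices unless v == 0)
    return sum((2 if v else 1) * shell_size(n - 1, m - v) for v in range(m + 1))

print(shell_size(16, 4))   # points of Z^16 on the l1 shell of radius 4
```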
In quantization of any source with a nonuniform probability density function, entropy coding of the quantizer output can result in a substantial decrease in bit rate. A straightforward entropy coding scheme, however, confronts us with the problem of a variable data rate. A solution in a space of dimensionality N is to select an appropriate subset of elements of the N-fold Cartesian product of a scalar quantizer and represent its elements with codewords of the same length. The drawback is that the search/addressing of this scheme can no longer be carried out independently along the one-dimensional subspaces. A reasonable rule is to select the N-fold symbols of highest probability; for a memoryless source, this is equivalent to selecting the N-fold symbols with the lowest additive self-information. In this case, due to the additivity of the self-information, the selected subset has a high degree of structure, which can be used to substantially decrease the search/addressing complexity. In this work, a dynamic programming approach is used to exploit this structure. We build the recursive structure required for the dynamic programming in a hierarchy of levels, which yields several benefits over conventional trellis-based approaches. Using this structure, we develop efficient rules (based on aggregating states) to substantially reduce the search/addressing complexities while keeping the degradation in performance negligible.
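The structure the dynamic program exploits can be seen in a much-simplified form: with per-symbol self-information discretized to a fixed resolution, the number of N-fold symbols whose additive self-information stays within a budget satisfies a simple recursion, and the same recursion can be used to index the selected tuples with fixed-length codewords. A toy sketch under assumed probabilities, discretization, and budget; the paper's hierarchy of levels and state aggregation are not shown:

```python
import numpy as np
from functools import lru_cache

# toy 4-level scalar quantizer with a nonuniform output distribution;
# per-symbol self-information is discretized to quarter-bit units
p = np.array([0.5, 0.25, 0.15, 0.10])
cost = tuple(np.round(-np.log2(p) * 4).astype(int))

@lru_cache(maxsize=None)
def count(n, budget):
    """Number of n-fold symbols whose additive (discretized) self-information
    does not exceed the budget; enumerating in the same order yields the
    fixed-length index of any admissible tuple without storing a table."""
    if n == 0:
        return 1
    return sum(count(n - 1, budget - c) for c in cost if c <= budget)

print(count(8, 8 * 6))   # admissible 8-fold symbols within a 12-bit budget
```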
This paper presents a new interframe coding method for medical images, in particular magnetic resonance (MR) images. Until now, attempts at using interframe redundancies for coding MR images have been unsuccessful. We believe that the main reason for this is twofold: unsuitable interframe estimation models and the thermal noise inherent in magnetic resonance imaging (MRI). The interframe model used in this paper is a continuous affine mapping based on (and optimized by) deforming triangles. The inherent noise of MRI is dealt with by using a median filter within the estimation loop. The residue frames are quantized with a zerotree wavelet coder, which includes arithmetic entropy coding. This particular method of quantization allows for progressive transmission, which, aside from avoiding buffer control problems, is very attractive in medical imaging applications.
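The deforming-triangle model means that each triangular patch of the reference frame is mapped into the current frame by the unique 2-D affine map fixed by the three vertex correspondences. A short sketch of recovering that map from the vertices (coordinates here are illustrative; the paper's optimization of the vertex positions is not shown):

```python
import numpy as np

def affine_from_triangles(src, dst):
    """Solve [x', y'] = A @ [x, y] + b from three vertex correspondences;
    inside the triangle the mapping is then fully determined."""
    src = np.asarray(src, float)              # (3, 2) source triangle vertices
    dst = np.asarray(dst, float)              # (3, 2) deformed triangle vertices
    M = np.hstack([src, np.ones((3, 1))])     # rows [x, y, 1]
    params = np.linalg.solve(M, dst)          # columns hold the map for x' and y'
    return params[:2].T, params[2]            # A (2x2), b (2,)

A, b = affine_from_triangles([(0, 0), (1, 0), (0, 1)],
                             [(0, 0), (1.1, 0.1), (-0.05, 0.9)])
```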
Embedded zerotree wavelet (EZW) coding, introduced by J. M. Shapiro, is a very effective and computationally simple technique for image compression. Here we offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set-partitioning sorting algorithm, ordered bit-plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW, which itself surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only a small loss in performance, by omitting entropy coding of the bit stream with an arithmetic code.
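For readers unfamiliar with embedded coding, the following toy sketch shows the bit-plane idea behind EZW/SPIHT: coefficients are transmitted most-significant bit first, so the bitstream can be cut anywhere and still decode to the best approximation available at that length. The zerotree / set-partitioning machinery that makes SPIHT efficient, and its separate sorting and refinement passes, are deliberately omitted:

```python
import numpy as np

def bitplane_bits(coeffs, n_passes=4):
    """Emit significance, sign, and refinement bits plane by plane.
    Assumes at least one nonzero coefficient."""
    c = np.asarray(coeffs, float).ravel()
    T = 2.0 ** np.floor(np.log2(np.abs(c).max()))   # start at the top bit plane
    significant = np.zeros(c.shape, bool)
    bits = []
    for _ in range(n_passes):
        for i, v in enumerate(c):
            if not significant[i]:
                bits.append(int(abs(v) >= T))            # significance bit
                if abs(v) >= T:
                    bits.append(int(v < 0))              # sign of a new coefficient
                    significant[i] = True
            else:
                bits.append(int(abs(v) % (2 * T) >= T))  # refinement bit
        T /= 2
    return bits

print(bitplane_bits([34.0, -21.0, 3.0, 0.5]))
```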