An optimization-based method is proposed for the design of high-performance separable wavelet filter banks for image coding. This method yields linear-phase perfect-reconstruction systems with high coding gain, good frequency selectivity, and certain prescribed vanishing-moment properties. Several filter banks designed with the proposed method are presented and shown to work extremely well for image coding, outperforming the well-known 9/7 filter bank from JPEG 2000 in most cases. With the proposed design method, the coding gain can be maximized with respect to the separable or isotropic image model, or jointly with respect to both models. The joint case, which is shown to be equivalent to the isotropic case, is experimentally demonstrated to lead to filter banks with better average coding performance than the separable case. During the development of the proposed design method, filter banks from a certain popular separable two-dimensional (2D) wavelet class (to which our optimal designs belong) were observed to always have a higher coding gain with respect to the separable image model than with respect to the isotropic one. This behavior is examined in detail, leading to the conclusion that, for filter banks belonging to the above class, it is highly improbable (if not impossible) for the isotropic coding gain to exceed the separable coding gain. (C) 2009 Elsevier B.V. All rights reserved.
Lifting-style implementations of wavelets are widely used in image coders. A two-dimensional (2-D) edge-adaptive lifting structure, which is similar to the Daubechies 5/3 wavelet, is presented. The 2-D prediction filter predicts the value of the next polyphase component according to an edge-orientation estimator of the image. Consequently, the prediction domain is allowed to rotate +/- 45 degrees in regions with a diagonal gradient. The gradient estimator is computationally inexpensive, with an additional cost of only six subtractions per lifting instruction, and no multiplications are required.
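The non-adaptive baseline this structure builds on can be sketched as a 1-D integer 5/3 lifting step (predict, then update). This is an illustrative sketch of standard 5/3 lifting, not the paper's 2-D structure; the paper additionally lets the prediction direction rotate +/- 45 degrees based on a gradient estimate.

```python
# One level of the LeGall 5/3 wavelet via lifting, integer-to-integer.
# Boundary handling (index clamping) is a simplifying assumption.

def forward_53(x):
    """Split x into smoothed even samples s and detail samples d."""
    even, odd = x[0::2], x[1::2]
    # Predict: each odd sample from the average of its two even neighbours.
    d = [odd[i] - ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
         for i in range(len(odd))]
    # Update: smooth the even samples using the prediction residuals.
    s = [even[i] + ((d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) >> 2)
         for i in range(len(even))]
    return s, d

def inverse_53(s, d):
    """Undo update, then undo predict; exact by construction."""
    even = [s[i] - ((d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) >> 2)
            for i in range(len(s))]
    odd = [d[i] + ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
           for i in range(len(d))]
    x = [0] * (len(even) + len(odd))
    x[0::2], x[1::2] = even, odd
    return x
```

Because the inverse subtracts exactly what the forward step added, the transform is perfectly reversible in integer arithmetic, which is why lifting is attractive for lossless and adaptive coders alike.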
Vector quantization of images raises problems of complexity in codebook search and of subjective image quality. The family of image vector quantization algorithms proposed in this paper addresses both of these problems. The Fuzzy Classified Vector Quantizer (FCVQ) is based on fuzzy set theory and consists essentially of a method for extracting a subcodebook from the original codebook, biased by the features of the block to be coded. The incidence of each feature on the blocks is represented by a fuzzy set that captures its (possibly subjective) nature. Unlike the Classified Vector Quantizer (CVQ), in the FCVQ a specific subcodebook is extracted for each block to be coded, allowing better adaptation to the block. The CVQ may be regarded as a special case of the FCVQ. In order to exploit the possible correlation between blocks, an estimator for the degree of incidence of features on the block to be coded is included. The estimate is based on previously coded blocks and is obtained by maximizing a possibility; a distribution intended to represent the subjective knowledge of the feature's possibility of occurrence, conditioned on the coded blocks, is used. Some examples of the application of an FCVQ coder to two test images are presented. A slight improvement in the subjective quality of the coded images is obtained, together with a significant reduction in the codebook search complexity and, when applying the estimator, a reduction in the bit rate.
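The subcodebook-extraction idea can be sketched as follows. The feature (a crude "edginess"), the triangular membership function, and all sizes are illustrative assumptions of this sketch, not the paper's actual fuzzy sets.

```python
# Hypothetical sketch of per-block subcodebook extraction in the FCVQ spirit:
# a fuzzy membership of a block feature biases which codewords are searched.

def edginess(block):
    """Crude feature: mean absolute difference of adjacent samples."""
    return sum(abs(a - b) for a, b in zip(block, block[1:])) / (len(block) - 1)

def membership(value, centre, width):
    """Triangular fuzzy set: 1 at the centre, falling to 0 at +/- width."""
    return max(0.0, 1.0 - abs(value - centre) / width)

def extract_subcodebook(codebook, block, width=8.0, size=4):
    """Rank codewords by how strongly their edginess matches the block's
    (via the fuzzy membership) and keep only the top `size` of them."""
    e = edginess(block)
    ranked = sorted(codebook,
                    key=lambda cw: -membership(edginess(cw), e, width))
    return ranked[:size]
```

The full codebook search is then replaced by a search over the much smaller per-block subcodebook, which is where the complexity reduction reported in the abstract comes from.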
The rapid growth of image resources on the Internet makes it possible to find highly correlated images on some Web sites when people plan to transmit an image over the Internet. This study proposes a low bit-rate cloud-based image coding scheme, which utilizes cloud resources to implement image coding. A multilevel discrete wavelet transform was adopted to decompose the input image into a low-frequency sub-band and several high-frequency sub-bands. The low-frequency sub-band image was used to retrieve highly correlated images (HCOIs) in the cloud. The highly correlated regions in the HCOIs were used to reconstruct the high-frequency sub-bands at the decoder to save bits. The final reconstructed image was generated by the multilevel inverse wavelet transform from the decompressed low-frequency sub-band and the reconstructed high-frequency sub-bands. The experimental results showed that the coding scheme performed well, especially at low bit rates. The peak signal-to-noise ratio of the reconstructed image gains up to 7 dB and 1.69 dB over JPEG and JPEG2000, respectively, under the same compression ratio. By utilizing cloud resources, our coding scheme shows an obvious advantage in terms of visual quality. The details in the image can be well reconstructed compared with JPEG, JPEG2000, and the intra coding of HEVC.
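The wavelet front end of such a scheme splits the image into the LL sub-band (used here for retrieval) and the LH/HL/HH detail sub-bands. A minimal one-level sketch is below; Haar is an illustrative stand-in, since the abstract does not fix the wavelet.

```python
# One-level 2-D Haar analysis: rows first, then columns.
# Even image dimensions are assumed for simplicity.

def haar_2d(img):
    """img: 2-D list with even dimensions -> (LL, LH, HL, HH) sub-bands."""
    rows_lo, rows_hi = [], []
    for row in img:
        rows_lo.append([(row[2*i] + row[2*i+1]) / 2 for i in range(len(row)//2)])
        rows_hi.append([(row[2*i] - row[2*i+1]) / 2 for i in range(len(row)//2)])

    def split_cols(mat):
        lo = [[(mat[2*r][c] + mat[2*r+1][c]) / 2 for c in range(len(mat[0]))]
              for r in range(len(mat)//2)]
        hi = [[(mat[2*r][c] - mat[2*r+1][c]) / 2 for c in range(len(mat[0]))]
              for r in range(len(mat)//2)]
        return lo, hi

    LL, HL = split_cols(rows_lo)   # low-pass rows -> LL and vertical detail
    LH, HH = split_cols(rows_hi)   # high-pass rows -> horizontal and diagonal detail
    return LL, LH, HL, HH
```

In the scheme above, only the LL sub-band needs to be compressed and sent; the detail sub-bands are cheap to signal because they are largely rebuilt from correlated regions retrieved in the cloud.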
Recently, the wavelet-based contourlet transform (WBCT) has been adopted for image coding because it better matches image textures of different orientations. However, its computational complexity is very high. In this paper, we propose three tools to enhance the WBCT coding scheme, in particular by reducing its computational complexity. First, we propose short-length 2-D filters for the directional transform. Second, the directional transform is applied to only a few selected subbands, and the selection is done by a mean-shift-based decision procedure. Third, we fine-tune the context tables used by the arithmetic coder in WBCT coding to improve coding efficiency and to reduce computation. Simulations show that, at comparable coded image quality, the proposed scheme saves over 92% of the computing time of the original WBCT scheme. Compared with conventional 2-D wavelet coding schemes, it produces clearly better subjective image quality. (c) 2012 Elsevier Inc. All rights reserved.
We investigate central issues such as invertibility, stability, synchronization, and frequency characteristics for nonlinear wavelet transforms built using the lifting framework. The nonlinearity comes from adaptively choosing between a class of linear predictors within the lifting framework. We also describe how earlier families of nonlinear filter banks can be extended through the use of prediction functions operating on a causal neighborhood of pixels. Preliminary compression results for model and real-world images demonstrate the promise of our techniques.
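The key invertibility property described here can be sketched concretely: the predictor choice depends only on the even (kept) samples, so the decoder can repeat the choice exactly without side information. The two-predictor class and the edge threshold are illustrative assumptions of this sketch.

```python
# Adaptive lifting sketch: per-sample choice between two linear predictors.
# The decision uses only even samples, so forward and inverse agree.

def choose_predictor(left, right):
    """Near a large jump, copy the left neighbour; otherwise average."""
    if abs(left - right) > 16:
        return lambda a, b: a
    return lambda a, b: (a + b) / 2

def forward(x):
    even, odd = x[0::2], x[1::2]
    detail = []
    for i in range(len(odd)):
        l, r = even[i], even[min(i + 1, len(even) - 1)]
        detail.append(odd[i] - choose_predictor(l, r)(l, r))
    return even, detail

def inverse(even, detail):
    odd = []
    for i in range(len(detail)):
        # Same even samples -> same predictor choice -> exact inversion.
        l, r = even[i], even[min(i + 1, len(even) - 1)]
        odd.append(detail[i] + choose_predictor(l, r)(l, r))
    x = [0] * (len(even) + len(odd))
    x[0::2], x[1::2] = even, odd
    return x
```

This is the synchronization issue the abstract refers to: adaptivity is free of side information exactly when the decision function sees only data the decoder will also have.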
Cellular neural/nonlinear networks (CNN) are considered here for efficient implementation of the most computationally intensive steps of dynamic image coding. Several analogic CNN algorithms are presented for the generation of binary image masks and for image decomposition. Measurement results for the first CNN universal chips executing an analogic algorithm for a reconstruction operator are also presented. Based on the measured execution times, the viability of the CNN implementation of efficient but computationally expensive compression algorithms such as dynamic image coding is assessed.
This paper presents an architecture suitable for real-time image coding using adaptive vector quantization. This architecture is based on the concept of content-addressable memory (CAM), where data is accessed simultaneously and in parallel on the basis of its content. Vector quantization (VQ) essentially involves, for each input vector, a search operation to obtain the best-match codeword. Traditionally, the search mechanism is implemented sequentially: each vector is compared with the codewords one at a time. For K input vectors of dimension L and a codebook of size N, the search complexity is O(KLN); this is heavily computation-intensive, and therefore real-time implementation of the VQ algorithm is difficult. The architectures reported thus far employ parallelism in the directions of the vector dimension L and the codebook size N. However, as K >> N for image coding, a greater degree of parallelism can be obtained by employing parallelism in the directions of L and K. Matching must therefore be performed from the perspective of the codewords: for a given codeword, all input vectors are evaluated in parallel. A speedup of order S_p(KL) results if a CAM-based implementation is employed. This speedup, coupled with the gains in execution time for the basic distortion operation, implies that even codebook generation is possible in real time (< 33 ms). In using the CAM, the conventional MSE measure is replaced by the absolute-difference measure. This measure results in little degradation and in fact limits large errors. The regular and iterable architecture is particularly well suited for VLSI implementation.
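The codeword-first search order can be sketched as below. Serial Python stands in for what the CAM evaluates in parallel; the function names are illustrative.

```python
# Codeword-perspective VQ assignment with the absolute-difference (SAD)
# measure that replaces MSE in the CAM-based architecture.

def sad(u, v):
    """Sum of absolute differences between two vectors of dimension L."""
    return sum(abs(a - b) for a, b in zip(u, v))

def vq_assign(vectors, codebook):
    """For each of the K input vectors, return the index of its best SAD
    codeword, iterating codeword-first as the architecture does."""
    best = [float("inf")] * len(vectors)
    index = [-1] * len(vectors)
    for n, codeword in enumerate(codebook):    # outer loop: N codewords
        for k, vec in enumerate(vectors):      # done in parallel by the CAM
            d = sad(vec, codeword)
            if d < best[k]:
                best[k], index[k] = d, n
    return index
```

Swapping the loop nesting changes nothing for a serial machine, but it is exactly what lets a CAM broadcast one codeword and compare all K vectors in a single parallel step, exploiting K >> N.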
In the traditional approach to block transform image coding, a large number of bits are allocated to the DC coefficients. A technique called DC coefficient restoration (DCCR) has been proposed to further improve the compression ability of block transform image coding by not transmitting the DC coefficients but estimating them from the transmitted AC coefficients. Images thus generated, however, have inherent errors that degrade the visual quality of the image. In this paper, a global-estimation DCCR scheme is proposed that can eliminate these inherent errors. The scheme estimates all the DC coefficients of the blocks simultaneously by minimising the sum of the energy of all the edge difference vectors of the image. The performance of the global-estimation DCCR is evaluated using a mathematical model and experiments. Fast algorithms are also developed for efficient implementation of the proposed scheme.
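The global-estimation idea can be illustrated on a toy case: the AC part of each block (zero-mean) is known at the decoder, and the unknown per-block DC values are chosen to minimise the total squared intensity jump across block boundaries. The 1-D block layout, the Gauss-Seidel iteration, and the sweep count are simplifying assumptions of this sketch, not the paper's fast algorithms.

```python
# Toy global DC restoration for a 1-D row of blocks: each DC value is
# repeatedly set to the average of the values that would zero out the jumps
# to its neighbours, which solves the boundary least-squares problem.

def restore_dc(blocks, sweeps=200):
    """blocks: list of zero-mean block signals. Returns DC estimates,
    up to a common offset (which DCCR cannot recover) fixed so dc[0] = 0."""
    dc = [0.0] * len(blocks)
    for _ in range(sweeps):
        for i in range(len(blocks)):
            targets = []
            if i > 0:   # match this block's left edge to the left neighbour
                targets.append(dc[i - 1] + blocks[i - 1][-1] - blocks[i][0])
            if i < len(blocks) - 1:   # ... and its right edge likewise
                targets.append(dc[i + 1] + blocks[i + 1][0] - blocks[i][-1])
            dc[i] = sum(targets) / len(targets)
        offset = dc[0]
        dc = [v - offset for v in dc]   # fix the undetermined global offset
    return dc
```

Because every DC value is updated from all of its boundaries simultaneously, the estimate is global rather than a left-to-right chain of local guesses, which is what removes the propagating errors of earlier DCCR schemes.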
The multiple description coding method proposed in this paper provides the least amount of degradation, caused by loss of descriptors, for those areas of the image which are of greater interest. This is achieved by employing a nonlinear geometrical transform to add redundancy mainly to the area of interest followed by a partitioning of the transformed image into subimages which are coded and transmitted separately. Simulations show that this approach yields acceptable performance even when only one descriptor is received.