ISBN: (Print) 0818677619
We present a technique to compress scalar functions defined on 2-manifolds. Our approach combines discrete wavelet transforms with zerotree compression, building on ideas from three previous developments: the lifting scheme, spherical wavelets, and embedded zerotree coding methods. Applications lie in the efficient storage and rapid transmission of complex data sets. Typical data sets are earth topography, satellite images, and surface parametrizations. Our contribution in this paper is the novel combination and application of these techniques to general 2-manifolds.
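The lifting scheme referenced above builds wavelet transforms from simple, invertible predict/update steps. As a rough illustration (a one-dimensional Haar lifting step, not the spherical-wavelet construction the paper actually uses):

```python
import numpy as np

def haar_lifting_forward(x):
    """One lifting step: split into even/odd samples, predict the odd
    samples from the even ones, then update the evens to keep the mean."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    detail = odd - even            # predict step
    approx = even + detail / 2.0   # update step
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Undo the lifting steps in reverse order with signs flipped."""
    even = approx - detail / 2.0
    odd = detail + even
    x = np.empty(even.size * 2)
    x[0::2], x[1::2] = even, odd
    return x
```

Because the inverse simply replays the steps in reverse, lifted transforms are invertible by construction, which is one reason the scheme adapts well to irregular domains such as 2-manifolds.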
ISBN: (Print) 0818677619
In [1], Effros and Chou introduce a two-stage universal transform code called the weighted universal transform code (WUTC). By replacing JPEG's single, non-optimal transform code with a collection of optimal transform codes, the WUTC achieves significant performance gains over JPEG. The computational and storage costs of that performance gain are effectively the computation and storage required to operate and store a collection of transform codes rather than a single transform code. We here consider two complexity- and storage-constrained variations of the WUTC. The complexity and storage of the algorithm are controlled by constraining the order of the bases. In the first algorithm, called the fast WUTC (FWUTC), complexity is controlled by limiting the maximum order of each transform. On a sequence of combined text and gray-scale images, the FWUTC achieves performance comparable to the WUTC at 1/32 the complexity for rates up to about 0.10 bits per pixel (bpp), 1/16 the complexity for rates up to about 0.15 bpp, 1/8 the complexity for rates up to about 0.20 bpp, and 1/4 the complexity for rates up to about 0.40 bpp. In the second algorithm, called the jointly optimized fast WUTC (JWUTC), complexity is controlled by constraining the average order of the transforms. On the same data set and at the same complexity, the performance of the JWUTC always exceeds that of the FWUTC. On the data set considered, the performance of the JWUTC is, at each rate, virtually indistinguishable from that of the WUTC at 1/8 the complexity. The JWUTC and FWUTC algorithms are interesting both for their complexity and storage savings in data compression and for the insights they lend into the choice of appropriate fixed- and variable-order bases for image representation.
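Constraining the order of the bases amounts to discarding high-order transform coefficients. A toy sketch of this trade-off (an order-truncated orthonormal DCT; the signal and orders are made up for illustration, and none of the WUTC codebook machinery is modeled):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis; rows are basis vectors in increasing order."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.cos(np.pi * k * (2 * x + 1) / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def coded_with_order(block, order):
    """Keep only the lowest `order` coefficients: a crude stand-in for the
    order constraint placed on each transform code."""
    C = dct_matrix(block.size)
    coeffs = C @ block
    coeffs[order:] = 0.0     # complexity/storage saved: fewer basis vectors
    return C.T @ coeffs      # orthonormal, so the inverse is the transpose

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(8))   # smooth-ish test signal
errs = [np.mean((signal - coded_with_order(signal, k)) ** 2) for k in (2, 4, 8)]
```

Because the basis is orthonormal, the squared error equals the energy of the discarded coefficients, so distortion can only drop as the allowed order grows; order 8 is lossless here.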
ISBN: (Print) 0819422134
We present examples of a new type of wavelet basis functions that are orthogonal across shifts, but not across scales. The analysis functions are low-order splines, while the synthesis functions are polynomial splines of higher degree n2. The approximation power of these representations is essentially as good as that of the corresponding Battle-Lemarié orthogonal wavelet transform, with the difference that the present wavelet synthesis filters have a much faster decay. This last property, together with the fact that these transformations are almost orthogonal, may be useful for image coding and data compression.
ISBN: (Print) 9780897918718
Multimedia data security is very important for multimedia commerce on the Internet, such as video-on-demand and real-time video multicast. However, traditional cryptographic algorithms for data secrecy such as DES are not fast enough to process the vast amount of data generated by multimedia applications under the real-time constraints those applications require. How to combine cryptographic technology with digital image processing technology to provide multimedia security does not seem to have been considered in the previous literature. The main contribution of this paper is the idea of incorporating cryptographic techniques (random algorithms) into digital image processing techniques (image compression algorithms) to achieve compression (decompression) and encryption (decryption) in one step. One of our methods is almost as efficient as the existing video encoding and decoding process while providing a considerable level of security without affecting the quality of the decrypted images. Our methods are also adjustable, providing different levels of security for the different requirements of multimedia applications. Our methods are based on the widely used JPEG and MPEG standards. We also conduct a series of experimental studies to test and evaluate our algorithms.
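One classic way to fold encryption into a JPEG-style coder is to replace the fixed zig-zag scan of DCT coefficients with a secret, key-derived permutation, so scrambling costs essentially nothing beyond ordinary encoding. A minimal sketch of that idea (the key derivation and per-block scan granularity here are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def keyed_scan_order(key, n=64):
    """A secret permutation of the 8x8 coefficient scan order, derived from
    the key. A keyed PRNG stands in for a proper key schedule here."""
    rng = np.random.default_rng(key)
    return rng.permutation(n)

def scramble(coeff_block, key):
    """Scan an 8x8 coefficient block in the keyed order instead of zig-zag."""
    order = keyed_scan_order(key)
    return coeff_block.ravel()[order]

def unscramble(scanned, key):
    """Invert the keyed scan to recover the coefficient block."""
    order = keyed_scan_order(key)
    block = np.empty_like(scanned)
    block[order] = scanned
    return block.reshape(8, 8)
```

Without the key, a decoder reads the coefficients in the wrong order, while a legitimate decoder pays no extra cost relative to the normal scan.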
ISBN: (Print) 9783540617488
The proceedings contain 21 papers. The special focus in this conference is on algebraic coding. The topics include: the algebraic structure of codes invariant under a permutation; a method of combining algebraic geometric Goppa codes; some best rate 1/p quasi-cyclic codes over GF(5); cryptography and secure communications; information leakage of a randomly selected Boolean function; on the use of periodic timebase companding in the scrambling of stationary processes; a novel approach to spread spectrum communication using linear periodic time-varying filters; on random-like codes (invited paper); an alternative approach to the design of interleavers for block “turbo” codes; improved VLSI design for decoding concatenated codes comprising an irreducible cyclic code and a Reed-Solomon code; non-minimal trellises for linear block codes; trellis complexity of linear block codes via atomic codewords; error corrections for channels with substitutions, insertions, and deletions; reduced complexity soft-output maximum likelihood sequence estimation of 4-ary CPM signals transmitted over Rayleigh flat-fading channels; a novel receiver structure for MPSK in the presence of rapidly changing phase; finite-field wavelet transforms (invited paper); choice of wavelets for image compression; information in Markov random fields and image redundancy; coding of image data via correlation filters for invariant pattern recognition; improving myoelectric signal classifier generalization by preprocessing with exploratory projections; Chinese character recognition via orthogonal moments.
In this paper, we present an artificial neural network model with some special neurons that are designed to function as the feature detectors found in the visual cortex, and apply it to the reconstruction of visual images. The model is a multilayer feedforward neural network. The neurons in the first hidden layer of the network are feature detectors of various scales and orientations. The connection strengths between the input and the first hidden layer are pre-set (and fixed) such that the outputs of this layer are visually important features of various scales. The rest of the connection strengths in the network are determined through learning via the backpropagation algorithm in a self-supervising manner. Computer simulations were conducted, and the results suggest that the artificial neural network model proposed in this paper is consistent with its biological counterpart (the visual cortex) in terms of the visual features detected by its feature detectors and its ability to reconstruct near-perfect input images from the output of these feature detectors. It is also shown that the system can be used for effective image data compression. Simulation results are presented which show the promising potential of the new system for image coding applications.
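The architecture can be caricatured in a few lines: a fixed bank of feature detectors followed by a trainable reconstruction map. In this sketch the kernels are simple one-dimensional stand-ins for the multi-scale, multi-orientation detectors, and plain least squares stands in for backpropagation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed first-layer "feature detectors": small derivative/averaging kernels,
# pre-set and never trained (illustrative stand-ins, not the paper's filters).
kernels = [np.array([1.0, -1.0]),        # first difference (edge)
           np.array([1.0, -2.0, 1.0]),   # second difference (bar)
           np.array([0.25, 0.5, 0.25])]  # local average (coarser scale)

def features(x):
    """Concatenated responses of all fixed detectors."""
    return np.concatenate([np.convolve(x, k, mode="same") for k in kernels])

# Self-supervised reconstruction: learn only the decoder that maps the
# feature responses back to the input signal.
X = rng.standard_normal((200, 16))          # training "images" (1-D toys)
F = np.stack([features(x) for x in X])      # (200, 48) feature matrix
W, *_ = np.linalg.lstsq(F, X, rcond=None)   # decoder weights

x = rng.standard_normal(16)
x_hat = features(x) @ W                     # reconstruction from features
err = np.mean((x - x_hat) ** 2)
```

Because the fixed feature map is linear and full rank here, the learned decoder recovers the input almost exactly; in the paper's nonlinear multilayer setting backpropagation plays that role.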
ISBN: (Print) 0819415421
Due to a reduced space requirement, the output of compressive transformations can generally be processed with fewer operations than are required to process uncompressed data. Since 1992, we have published theory that unifies the processing of compressed and/or encrypted imagery, and have demonstrated significant computational speedup in the case of compressive processing. In this paper, the third of a series, we extend our previously reported work on optical processor design based on image algebra to include the design of optical processors that compute over compressed data. Parts 1 and 2 describe optical architectures that are designed to produce compressed or encrypted imagery.
ISBN: (Print) 0819412015
We investigate the use of a Differential Vector Quantizer (DVQ) architecture for the coding of digital images. An Artificial Neural Network (ANN) is used to develop entropy-based codebooks which yield substantial data compression while retaining insensitivity to transmission channel errors. Two methods are presented for variable bit-rate coding using the described DVQ algorithm. In the first method, both the encoder and the decoder have multiple codebooks of different sizes. In the second, variable bit-rates are achieved by encoding using subsets of one fixed codebook. We compare the performance of these approaches under conditions of error-free and error-prone channels.
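The second method's appeal is that the decoder stores a single codebook, and a low-rate mode simply restricts the search to its leading entries. A bare-bones sketch (random placeholder codewords rather than the ANN-trained, entropy-based codebooks of the paper):

```python
import numpy as np

def nearest(codebook, v):
    """Index of the codeword closest to v (full-search VQ)."""
    return int(np.argmin(np.sum((codebook - v) ** 2, axis=1)))

rng = np.random.default_rng(2)
codebook = rng.standard_normal((16, 4))   # 16 codewords of dimension 4

v = rng.standard_normal(4)
idx_low = nearest(codebook[:4], v)        # 2-bit index: leading subset of 4
idx_high = nearest(codebook, v)           # 4-bit index: the full codebook

d_low = float(np.sum((codebook[idx_low] - v) ** 2))
d_high = float(np.sum((codebook[idx_high] - v) ** 2))
```

Enlarging the search to the full codebook can only reduce the quantization error for a given vector, at the cost of longer indices, which is exactly the rate/distortion dial the subset scheme exposes.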
In the coding of digital facsimile documents, a number of non-information-preserving codes have been proposed which make use of the repetition of binary patterns corresponding to printed or typewritten text characters...