ISBN (print): 0819454990
Multimedia data may be transmitted or stored either according to the classical Shannon information theory or according to the newer Autosophy information theory. Autosophy algorithms combine very high "lossless" data and image compression with virtually unbreakable "codebook" encryption. Shannon's theory treats all data items as "quantities", which are converted into binary digits (bits) for transmission in meaningless bit streams; only "lossy" data compression is possible. A new "Autosophy" theory was developed by Klaus Holtz in 1974 to explain the functioning of natural self-assembling structures, such as chemical crystals or living trees. The same processes can also be used for growing self-assembling data structures, which grow like data crystals or data trees in electronic memories. This provides true mathematical learning algorithms, according to a new Autosophy information theory. Information, in essence, is only that which can be perceived and which is not already known by the receiver. The transmission bit rates depend on the data content only. Applications already include the V.42bis compression standard in modems, the GIF and TIFF formats for lossless image compression, and Autosophy Internet television. A new 64-bit data format could make all future communications compatible and solve the Internet's Quality of Service (QoS) problems.
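As a rough illustration of the codebook-growing idea behind the V.42bis and GIF lossless coders cited above, the following is a minimal LZW-style sketch in Python; it is not the Autosophy algorithm itself, and the function name and test string are illustrative only.

```python
def lzw_compress(data: bytes) -> list:
    """Minimal LZW-style dictionary coder: the codebook grows like a tree of
    previously seen phrases, so only unseen extensions cost new codes."""
    codebook = {bytes([i]): i for i in range(256)}  # seed with single bytes
    next_code = 256
    phrase = b""
    output = []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in codebook:        # phrase already known: keep growing it
            phrase = candidate
        else:                            # emit known prefix, learn the extension
            output.append(codebook[phrase])
            codebook[candidate] = next_code
            next_code += 1
            phrase = bytes([byte])
    if phrase:
        output.append(codebook[phrase])
    return output

print(lzw_compress(b"abababababab"))     # repeated content yields fewer codes
```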
ISBN (print): 0819445606
Consideration of the geometry of spaces of textual images leads to a description of ensembles of text classes via representation of such image data using a multi-scale, multi-resolution method. A direct approach to the statistics of textual image data using 4 by 4 matrices of pixel values in the image of the document is introduced. The mathematical approach to the analysis of occurrences of such pieces of a text image is Matrix Frequency Analysis (MFA). It is shown that MFA provides effective information for the classification of files using small sample sizes.
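The abstract does not spell out the MFA computation; as a hedged sketch of the kind of statistics involved, the Python snippet below tallies the frequency of each 4 by 4 pattern of binarized pixels and compares two documents by histogram overlap. The function names, the binarization threshold, and the similarity measure are illustrative assumptions, not the authors' method.

```python
import numpy as np
from collections import Counter

def block_frequencies(image: np.ndarray, block: int = 4) -> Counter:
    """Count occurrences of each 4x4 pattern of binarized pixel values;
    the frequency table serves as a crude stand-in for an MFA feature vector."""
    binary = (image > 127).astype(np.uint8)          # assume 8-bit grayscale input
    h, w = binary.shape
    counts = Counter()
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = binary[y:y + block, x:x + block]
            counts[patch.tobytes()] += 1             # hashable key for the pattern
    return counts

def similarity(a: Counter, b: Counter) -> float:
    """Overlap of two block-frequency tables, as a toy classification score."""
    shared = sum(min(a[k], b[k]) for k in a.keys() & b.keys())
    return shared / max(1, min(sum(a.values()), sum(b.values())))
```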
ISBN (print): 0819454990
Image compression is increasingly employed in applications such as medical imaging, for reducing data storage requirements, and Internet video transmission, to effectively increase channel bandwidth. Similarly, military applications such as automated target recognition (ATR) often employ compression to achieve storage and communication efficiencies, particularly to enhance the effective bandwidth of communication channels whose throughput suffers, for example, from overhead due to error correction/detection or encryption. In the majority of cases, lossy compression is employed due to the resultant low bit rates (high compression ratios). However, lossy compression produces artifacts in decompressed imagery that can confound ATR processes applied to such imagery, thereby reducing the probability of detection (Pd) and possibly increasing the rate or number of false alarms (Rfa or Nfa). In this paper, the authors' previous research in performance measurement of compression transforms is extended to include (a) benchmarking algorithms and software tools, (b) a suite of error exemplars designed to elicit compression transform behavior in an operationally relevant context, and (c) a posteriori analysis of performance data. The following transforms are applied to a suite of 64 error exemplars: Visual Pattern Image Coding (VPIC) [1], Vector Quantization with a fast codebook search algorithm (VQ) [2,3], JPEG and a preliminary implementation of JPEG 2000 [4,5], and EBLAST [6-8]. Compression ratios range from 2:1 to 200:1, and various noise levels and types are added to the error exemplars to produce a database of 7,680 synthetic test images. Several global and local (e.g., featural) distortion measures are applied to the decompressed test imagery to provide a basis for rate-distortion and rate-performance analysis as a function of noise and compression transform type.
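As a hedged sketch of the kind of rate-distortion tabulation described (not the paper's actual benchmarking suite for VPIC, VQ, JPEG, JPEG 2000 and EBLAST), the Python below computes a global PSNR distortion measure and collects (transform, ratio, PSNR) rows; the codec callables and names are placeholders.

```python
import numpy as np

def psnr(reference: np.ndarray, decompressed: np.ndarray) -> float:
    """Peak signal-to-noise ratio, a typical global distortion measure for 8-bit imagery."""
    mse = np.mean((reference.astype(np.float64) - decompressed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def rate_distortion_table(exemplars, codecs):
    """Apply each (encode, decode, ratio) triple to each exemplar image and
    tabulate PSNR versus compression ratio for later a posteriori analysis."""
    rows = []
    for name, (encode, decode, ratio) in codecs.items():
        for image in exemplars:
            rows.append((name, ratio, psnr(image, decode(encode(image)))))
    return rows
```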
ISBN (print): 0819441899
The first part of the article gives a brief examination of the state of the art in terms of methodologies, available hardware, and algorithms used in space applications. Particular emphasis is given to the lossless algorithms used and their characterization. In the second part a more detailed analysis, in terms of data entropy, is presented. Finally, preliminary results on determining the compressibility of a file are presented.
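The abstract does not detail the entropy analysis; as a minimal sketch, a first-order byte-entropy estimate such as the following Python function gives a rough indication of how compressible a file is (the function name is illustrative).

```python
import math
from collections import Counter

def byte_entropy(path):
    """First-order Shannon entropy of a file in bits per byte; values close
    to 8 suggest little headroom for lossless compression."""
    with open(path, "rb") as handle:
        data = handle.read()
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```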
This Volume 4793 of the conference proceedings contains 34 papers. Topics discussed include the mathematics of data/image coding, compression, and encryption; security; remote sensing and communication; and compressive processing.
A number of methods have been recently proposed in the literature for the encryption of 2-D information using linear optical systems. In particular, the double random phase encoding system has received widespread attention. This system uses two Random Phase Keys (RPKs) positioned in the input spatial domain and the spatial frequency domain; if these random phases are described by statistically independent white noises, then the encrypted image can be shown to be a white noise. Decryption only requires knowledge of the RPK in the frequency domain. The RPKs may be implemented using Spatial Light Modulators (SLMs). In this paper we propose and investigate the use of SLMs for secure optical multiplexing. We show that in this case it is possible to encrypt multiple images in parallel and multiplex them for transmission or storage. The signal energy is effectively spread in the spatial frequency domain. As expected, the number of images that can be multiplexed together and recovered without loss is proportional to the ratio of the input image resolution to the SLM resolution. Many more images may be multiplexed with some loss in recovery. Furthermore, each individual encryption is more robust than traditional double random phase encoding, since decryption requires knowledge of both RPKs and a low-pass filter in order to despread the spectrum and decrypt the image. Numerical simulations are presented and discussed.
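As a minimal numerical sketch of the classical double random phase encoding step described above (not the multiplexing scheme proposed in the paper), the Python below encrypts with one phase key in the input plane and one in the Fourier plane, then decrypts with only the frequency-domain key; the image and key sizes are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_phase(shape):
    """Statistically independent white-noise phase key, uniform on [0, 2*pi)."""
    return np.exp(1j * 2 * np.pi * rng.random(shape))

def drpe_encrypt(image, key_spatial, key_frequency):
    """Double random phase encoding: one key in the input plane, one in the
    spatial-frequency plane; the output is white-noise-like."""
    return np.fft.ifft2(np.fft.fft2(image * key_spatial) * key_frequency)

def drpe_decrypt(cipher, key_frequency):
    """Decryption needs only the frequency-domain key; the amplitude of the
    result recovers a real, non-negative input image."""
    return np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(key_frequency)))

image = rng.random((64, 64))                         # placeholder input image
k1, k2 = random_phase(image.shape), random_phase(image.shape)
cipher = drpe_encrypt(image, k1, k2)
print(np.allclose(drpe_decrypt(cipher, k2), image))  # True up to numerical error
```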
ISBN (print): 0819445606
The suggested device includes a spatial light modulator (SLM) that allows varying the polarization state of an input illumination source. A subwavelength grating attached to the SLM is placed at each pixel of the device. This grating is used to encode the input information. The decoding is done by another subwavelength grating. When the two gratings are attached, a variation of the energetic efficiency between the two polarization states occurs. The polarization ratio encrypts the strewn spatial information. The subwavelength gratings are hard to copy, and thus a high encryption capability is obtained.
ISBN (print): 9780819468482
Lattice independence and strong lattice independence of a set of pattern vectors are fundamental mathematical properties that lie at the core of pattern recognition applications based on lattice theory. Specifically, the development of morphological associative memories robust to inputs corrupted by random noise is based on strongly lattice independent sets, and real-world problems, such as autonomous endmember detection in hyperspectral imagery, use auto-associative morphological memories as detectors of lattice independence. In this paper, we present a unified mathematical framework that develops the relationship between different notions of lattice independence currently used in the literature. Computational procedures are provided to test whether a given set of pattern vectors is lattice independent or strongly lattice independent; in addition, different techniques are fully described that can be used to generate sets of vectors with the aforementioned lattice properties.
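The paper's actual independence tests are not given in the abstract; as a hedged illustration of the auto-associative morphological memories that such tests operate on, the Python sketch below builds the standard min and max memories W_XX and M_XX, with entries w_ij = min_k(x_i^k - x_j^k) and m_ij = max_k(x_i^k - x_j^k), and checks that every stored pattern is recalled exactly. The example patterns are arbitrary.

```python
import numpy as np

def max_memory(X: np.ndarray) -> np.ndarray:
    """M_XX with m_ij = max_k (x_i^k - x_j^k); columns of X are the patterns."""
    return np.max(X[:, None, :] - X[None, :, :], axis=2)

def min_memory(X: np.ndarray) -> np.ndarray:
    """W_XX with w_ij = min_k (x_i^k - x_j^k)."""
    return np.min(X[:, None, :] - X[None, :, :], axis=2)

def recall_min_product(M: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Morphological min-of-sums recall: y_i = min_j (m_ij + x_j)."""
    return np.min(M + x[None, :], axis=1)

def recall_max_product(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Morphological max-of-sums recall: y_i = max_j (w_ij + x_j)."""
    return np.max(W + x[None, :], axis=1)

X = np.array([[0.0, 2.0, 1.0],      # three 2-D patterns stored as columns
              [1.0, 0.0, 3.0]])
M, W = max_memory(X), min_memory(X)
for k in range(X.shape[1]):         # every stored pattern is a fixed point
    assert np.allclose(recall_min_product(M, X[:, k]), X[:, k])
    assert np.allclose(recall_max_product(W, X[:, k]), X[:, k])
```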
ISBN (print): 0819429112
In this paper we propose a novel method for computing JPEG quantization matrices based on a desired mean square error, avoiding the classical trial and error procedure. First, we use a relationship between a Laplacian source and its quantization error under uniform quantization in order to find a model for the uniform quantization error. Then we apply this model to the coefficients obtained in the JPEG standard once the image to be compressed has been transformed by the discrete cosine transform. This allows us to compress an image using the JPEG standard under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Simulations show that our method generates better quantization matrices than the classical method of scaling the JPEG default quantization matrix, at a lower cost than the coding, decoding, and error-measuring procedure.
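As a hedged sketch of the idea, using the generic high-rate approximation MSE ≈ q²/12 for uniform quantization rather than the paper's exact Laplacian error model, the Python below derives an 8x8 quantization matrix from a target PSNR; the weighting scheme and function name are illustrative assumptions.

```python
import numpy as np

def quant_matrix_for_psnr(psnr_db, weights=None):
    """Derive an 8x8 quantization matrix whose steps q satisfy the high-rate
    approximation MSE_coeff ~= q**2 / 12 and whose summed error meets a global
    pixel-domain MSE budget (the 8x8 DCT is orthonormal, so the squared error
    per block is the same in both domains)."""
    target_mse = 255.0 ** 2 / 10.0 ** (psnr_db / 10.0)  # PSNR -> pixel-domain MSE
    if weights is None:
        weights = np.ones((8, 8))                       # equal weighting by default
    weights = weights / weights.sum()                   # split the error budget
    per_coeff_mse = 64.0 * target_mse * weights         # 64 coefficients per block
    steps = np.sqrt(12.0 * per_coeff_mse)
    return np.clip(np.round(steps), 1, 255).astype(np.uint8)

print(quant_matrix_for_psnr(38.0))   # roughly uniform steps near 11 for 38 dB
```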
ISBN (print): 0819441899
Earth observation missions have recently attracted growing interest from the scientific and industrial communities, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. Along with the increase in market potential, the need arises for protection of the image products from non-authorized use. This need is all the more crucial because the Internet and other public/private networks have become preferred means of data exchange. A crucial issue arising when dealing with digital image distribution is copyright protection. This problem has been largely addressed by resorting to watermarking technology. A question that obviously arises is whether the requirements imposed by remote sensing imagery are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: i) assessment of the requirements imposed by the characteristics of remotely sensed images on watermark-based copyright protection; and ii) analysis of the state of the art and performance evaluation of existing algorithms in terms of the requirements at the previous point.