We present a fast fractal image encoding algorithm based on refining the fractal code from an initial coarse level of the pyramid. The pyramid search algorithm is quasi-optimal in terms of minimizing the mean square error. Assuming that the distribution of the matching error is described by an independent, identically distributed (i.i.d.) Laplacian random process, we derive the threshold sequence for the objective function in each pyramidal level. The computational efficiency depends on the depth of the pyramid and the search step size, and can be improved by up to two orders of magnitude compared with a full search of the original image.
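The coarse-to-fine refinement idea can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the function and parameter names are invented, plain MSE block matching stands in for the full fractal transform, and the Laplacian-derived threshold sequence (which permits early termination) is omitted.

```python
import numpy as np

def build_pyramid(img, depth):
    """Build an image pyramid by repeated 2x2 block averaging."""
    levels = [img]
    for _ in range(depth - 1):
        h, w = levels[-1].shape
        levels.append(levels[-1][:h // 2 * 2, :w // 2 * 2]
                      .reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return levels

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def coarse_to_fine_match(img, range_pos, bs, depth=2, radius=1):
    """Locate the domain block best matching the range block at
    `range_pos` (finest-level coordinates, block size `bs`): search
    exhaustively only at the coarsest pyramid level, then refine
    locally around the projected winner at each finer level."""
    pyr = build_pyramid(img, depth)
    # exhaustive search at the coarsest level
    lvl = depth - 1
    s = 2 ** lvl
    b = bs // s
    ref = pyr[lvl][range_pos[0] // s:range_pos[0] // s + b,
                   range_pos[1] // s:range_pos[1] // s + b]
    best, best_err = (0, 0), float("inf")
    for y in range(pyr[lvl].shape[0] - b + 1):
        for x in range(pyr[lvl].shape[1] - b + 1):
            e = mse(pyr[lvl][y:y + b, x:x + b], ref)
            if e < best_err:
                best, best_err = (y, x), e
    # local refinement around the doubled coordinates at finer levels
    for lvl in range(depth - 2, -1, -1):
        s = 2 ** lvl
        b = bs // s
        ref = pyr[lvl][range_pos[0] // s:range_pos[0] // s + b,
                       range_pos[1] // s:range_pos[1] // s + b]
        cy, cx = best[0] * 2, best[1] * 2
        best_err = float("inf")
        for y in range(max(0, cy - radius),
                       min(pyr[lvl].shape[0] - b, cy + radius) + 1):
            for x in range(max(0, cx - radius),
                           min(pyr[lvl].shape[1] - b, cx + radius) + 1):
                e = mse(pyr[lvl][y:y + b, x:x + b], ref)
                if e < best_err:
                    best, best_err = (y, x), e
    return best, best_err
```

The speedup comes from the coarse level having a quarter of the positions per level of depth, while each refinement visits only (2·radius+1)² candidates.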
Several improvements to the Bugajski-Russo N-gram algorithm are proposed. When applied to English text these result in an algorithm with comparable complexity and approximately 10 to 30% less rate than the commonly used COMPRESS algorithm.
ISBN:
(Print) 0818670126
Summary form only given. We propose two new algorithms that are based on the 16-bit or 32-bit sampling character set and on the unique features of languages with a large number of distinct characters to improve the data compression ratios for multilingual text documents. We choose Chinese, with 16-bit character sampling, as the representative language in our study. The first approach, called the static Chinese Huffman coding, introduces the concept of a single Chinese character in the Huffman tree. Experimental results showed that an improvement in compression ratio was obtained. The second approach, called the dictionary-based Chinese Huffman coding, includes the concept of Chinese words in the Huffman coding.
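The first approach amounts to running Huffman coding over whole characters rather than bytes, so a 16-bit Chinese character is a single leaf of the tree. A minimal sketch (standard textbook Huffman, not the paper's exact static tables):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build Huffman codewords treating each character (e.g. a 16-bit
    CJK character) as one symbol rather than as two separate bytes."""
    freq = Counter(text)
    if len(freq) == 1:                     # degenerate one-symbol text
        return {next(iter(freq)): "0"}
    codes = {s: "" for s in freq}
    # heap entries: [weight, unique tiebreak, symbols in this subtree]
    heap = [[f, i, [s]] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for s in lo[2]:                    # lower-weight subtree gets a 0
            codes[s] = "0" + codes[s]
        for s in hi[2]:
            codes[s] = "1" + codes[s]
        tiebreak += 1
        heapq.heappush(heap, [lo[0] + hi[0], tiebreak, lo[2] + hi[2]])
    return codes

def encode(text, codes):
    return "".join(codes[c] for c in text)
```

Because frequent characters get short codewords, the bit cost per character falls well below the fixed 16 bits of the raw sampling.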
Summary form only given. Effective compression of a text-based information retrieval system involves compressing not only the text itself, but also the concordance by which one accesses that text and which occupies an amount of storage comparable to the text itself. The concordance can be a rather complicated data structure, especially if it permits hierarchical access to the database. But one or more components of the hierarchy can usually be conceptualized as a bit-map. We conceptualize our bit-map as being generated as follows. At any bit-map site we are in one of two states: a cluster state (C), or a between-cluster state (B). In a given state, we generate a bit-map-value of zero or one and, governed by the transition probabilities of the model, enter a new state as we move to the next bit-map site. Such a model has been referred to as a hidden Markov model in the literature. Unfortunately, this model is analytically difficult to use. To approximate it, we introduce several traditional Markov models with four states each, B and C as above, and two transitional states. We present the models, show how they are connected, and state the formal compression algorithm based on these models. We also include some experimental results.
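The generative model can be illustrated with a minimal two-state version (the paper's four-state models add two transitional states, which are omitted here; all probabilities below are invented for illustration):

```python
import random

def generate_bitmap(n, p_stay_c=0.9, p_stay_b=0.95,
                    p_one_c=0.8, p_one_b=0.02, seed=0):
    """Generate a clustered bit-map with a two-state Markov model:
    state C (cluster) emits 1s with high probability, state B
    (between clusters) emits mostly 0s.  All probabilities are
    illustrative, not taken from the paper."""
    rng = random.Random(seed)
    state, bits = "B", []
    for _ in range(n):
        if state == "C":
            bits.append(1 if rng.random() < p_one_c else 0)
            state = "C" if rng.random() < p_stay_c else "B"
        else:
            bits.append(1 if rng.random() < p_one_b else 0)
            state = "B" if rng.random() < p_stay_b else "C"
    return bits
```

High self-transition probabilities make the 1s arrive in runs, which is exactly the clustering a compressor for concordance bit-maps can exploit.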
Summary form only given. A typical seismic analysis scenario involves collection of data by an array of seismometers, transmission over a channel offering limited data rate, and storage of data for analysis. Seismic data analysis is performed for monitoring earthquakes and for planetary exploration as in the planned study of seismic events on Mars. Seismic data compression systems are required to cope with the transmission of vast amounts of data over constrained channels and must be able to accurately reproduce occasional high energy seismic events. We propose a compression algorithm that includes three stages: a decorrelation stage based on subband coding, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient block-adaptive arithmetic coding method. Adaptivity to the non-stationary behavior of the waveform is achieved by partitioning the data into blocks which are encoded separately. The compression ratio of the proposed scheme can be set to meet prescribed fidelity requirements, i.e. the waveform can be reproduced with sufficient fidelity for accurate interpretation and analysis. The distortions incurred by this compression scheme are currently being evaluated by several seismologists. Encoding is done with high efficiency due to the low overhead required to specify the parameters of the arithmetic encoder. Rate-distortion performance results on seismic waveforms are presented for various filter banks and numbers of levels of decomposition.
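The three-stage pipeline can be sketched with a one-level Haar filter bank standing in for the paper's subband filters and a uniform quantizer for the distortion-controlled stage; the entropy-coding stage is omitted here because it is lossless and does not affect fidelity. Block lengths are assumed even, and all names are ours, not the authors'.

```python
import numpy as np

def haar_analysis(x):
    """One-level Haar subband split (a stand-in for the paper's
    filter banks): orthonormal sums (low band) and differences
    (high band) of adjacent samples.  Assumes len(x) is even."""
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)
    return lo, hi

def haar_synthesis(lo, hi):
    """Exact inverse of haar_analysis."""
    x = np.empty(2 * len(lo))
    x[0::2] = (lo + hi) / np.sqrt(2)
    x[1::2] = (lo - hi) / np.sqrt(2)
    return x

def quantize(band, step):
    """Uniform mid-tread quantizer; `step` controls the distortion."""
    return np.round(band / step) * step

def compress_block(x, step):
    """Decorrelate one block, quantize both bands, and reconstruct;
    returns the waveform the decoder would recover."""
    lo, hi = haar_analysis(x)
    return haar_synthesis(quantize(lo, step), quantize(hi, step))
```

The quantizer step is the rate-distortion knob: a small step reproduces the waveform almost exactly (per-coefficient error is bounded by half the step), while a large step buys compression at the cost of distortion, mirroring how the scheme is tuned to prescribed fidelity requirements.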
ISBN:
(Print) 0780325796
This paper presents a data compression scheme for Chinese text files. Due to the skewness of the distribution of Chinese ideograms, the Huffman coding method is adopted. By storing the Huffman tree in the coding table and representing the Huffman tree using the Zaks sequence, the algorithm produces a significant improvement in compression results. The proposed method is evaluated by comparing its performance with three well-known compression algorithms and an algorithm specially designed to compress the coding table. This algorithm should also be applicable to other ideogram-based or oriental language texts. Also, it has the potential to reduce the dictionary size in a bigram or trigram-based semi-adaptive compression scheme for English texts.
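The Zaks-sequence idea stores the shape of the Huffman tree as one bit per node in pre-order (1 for an internal node, 0 for a leaf), so a tree with n leaves needs only 2n-1 shape bits plus the leaf symbols in pre-order. A sketch under these assumptions (helper names are ours, not the paper's):

```python
def zaks_sequence(tree):
    """Pre-order shape bits for a binary tree: 1 = internal node,
    0 = leaf.  A tree is either a leaf symbol or a (left, right) pair."""
    if isinstance(tree, tuple):
        return [1] + zaks_sequence(tree[0]) + zaks_sequence(tree[1])
    return [0]

def rebuild_shape(seq):
    """Invert the encoding: rebuild the tree shape from the sequence,
    with None standing in for each leaf."""
    it = iter(seq)
    def node():
        return (node(), node()) if next(it) == 1 else None
    return node()
```

Compared with storing explicit codeword lengths or pointers, the shape bits make the coding-table overhead nearly negligible even for the large alphabets of ideogram-based texts.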
Weighted finite automata (WFA) are a tool for specifying real functions and, in particular, grayscale images. The image compression software based on this algorithm is competitive with other methods in compressing typical grayscale images. It performs particularly well at high compression rates and for color images, and it has several additional advantages compared to other methods. This paper mainly deals with image manipulation. Weighted finite transducers (WFT) can be used to specify a wide variety of image transformations (linear operators on grayness functions). The authors briefly introduce WFA and WFT and give some examples of image transformations specified by WFT.
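A WFA assigns a value to each recursively addressed sub-square by multiplying weight matrices indexed by the address symbols. The sketch below uses a 1-D, two-letter address alphabet for clarity (images use a four-letter quadrant alphabet); the two-state automaton is the standard small example computing the linear grayness function f(x) = x, and the specific matrices are that textbook example rather than anything from this paper.

```python
import numpy as np

def wfa_value(initial, final, mats, address):
    """Average value the WFA assigns to the dyadic subinterval named
    by `address`: multiply the initial vector by one weight matrix per
    address symbol, then by the final vector."""
    v = initial
    for a in address:
        v = v @ mats[a]
    return float(v @ final)

# Two-state WFA for f(x) = x on [0,1): state 0 "computes x",
# state 1 is the constant-1 state.
mats = {
    "0": np.array([[0.5, 0.0], [0.0, 1.0]]),   # left half:  x -> x/2
    "1": np.array([[0.5, 0.5], [0.0, 1.0]]),   # right half: x -> x/2 + 1/2
}
initial = np.array([1.0, 0.0])
final = np.array([0.5, 1.0])   # averages of x and of 1 over [0,1)
```

Because the image is held as matrices rather than pixels, linear operators (the WFT transformations mentioned above) can be applied directly to the automaton without decompressing.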
ISBN:
(Print) 0780325168
A novel discrete cosine transform (DCT) and fractal transform coding (FTC) hybrid image compression algorithm is proposed which dramatically improves the speed of FTC coding and JPEG's ability to preserve image details at high compression ratios. The overall subjective quality of the whole JPEG-decoded image is also improved.
Today, in the digitized satellite image domain, the need for high-dimension images is increasing considerably. To transmit or to store such images (more than 6000×6000 pixels), we need to reduce their data volume, and so we have to use image compression techniques. In most cases, these operations have to be processed in real time. The large amount of computations required by classical image compression algorithms prohibits the use of common sequential processors. To solve this problem, CEA (in collaboration with CNES) has tried to define the best-suited architecture for image compression. In order to achieve this aim, we developed and evaluated a new parallel image compression algorithm for general-purpose parallel computers using data-parallelism. This paper presents this new parallel image compression algorithm. We present implementation results on several parallel computers. We also examine load balancing and data mapping problems. We end by defining optimal characteristics of the parallel machine for real-time image compression.
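The data-parallel decomposition can be illustrated by splitting an image into tiles that are compressed independently; here Python threads and zlib stand in for the parallel machine and compression kernel, which are of course not what the authors used.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_image_parallel(data, n_workers=4):
    """Data-parallel compression sketch: split raw image bytes into
    equal tiles and compress each tile independently, one worker per
    tile.  Independent tiles are what makes load balancing and data
    mapping the central design questions."""
    tile = -(-len(data) // n_workers)            # ceiling division
    tiles = [data[i:i + tile] for i in range(0, len(data), tile)]
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        return list(ex.map(zlib.compress, tiles))

def decompress_image(parts):
    """Reassemble the image by decompressing tiles in order."""
    return b"".join(zlib.decompress(p) for p in parts)
```

Equal-sized tiles give a simple static data mapping; since compression time varies with tile content, real-time use would need the dynamic load balancing the paper examines.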