Lempel-Ziv-Welch methods and their variations are all based on the principle of using a prescribed parsing rule to find duplicate occurrences of data and encoding the repeated strings with some sort of special code word identifying the data to be replaced. This paper includes a general presentation of five existing lossless compression methods used in any application of digital signal processing. The comparisons are made experimentally by computer simulation.
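As a minimal illustration of the parsing principle described above, here is a sketch of a greedy LZW encoder: it repeatedly emits the dictionary index of the longest known prefix and grows the dictionary by one new string. This is an illustrative sketch, not one of the five methods the paper compares.

```python
def lzw_encode(data: bytes) -> list[int]:
    """Greedy LZW parse: emit the dictionary index of the longest
    known prefix, then grow the dictionary by one new string."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    result = []
    current = b""
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate
        else:
            result.append(dictionary[current])  # longest known prefix
            dictionary[candidate] = next_code   # learn one new string
            next_code += 1
            current = bytes([byte])
    if current:
        result.append(dictionary[current])
    return result
```

On repetitive input the parse emits fewer codes than there are input bytes, which is where the compression comes from.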
This paper presents a transform coding algorithm devoted to high quality audio coding at a bit rate of 64 kbps per monophonic channel. It enables the transmission of a high quality stereo sound through the basic access (2B channels) of ISDN. Although a complete system including framing, synchronization and error correction has been developed, only the bit rate compression algorithm is described here. A detailed analysis of the signal processing techniques such as the time/frequency transformation, the pre-echo reduction by adaptive filtering, the fast algorithm computations, etc., is provided. The use of psychoacoustical properties is also precisely reported. Finally, some subjective evaluation results and one real time implementation of the coder using the AT&T DSP32C digital signal processor are presented.
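The time/frequency transformation step can be illustrated with a naive windowed DCT-II on one frame; the sine window and O(N^2) transform here are illustrative assumptions, not the coder's actual (likely MDCT-based, fast) transform.

```python
import math

def dct_ii(frame):
    """Naive O(N^2) DCT-II: maps N time samples to N frequency
    coefficients; for smooth (tonal) signals the energy concentrates
    in a few coefficients, which is what a transform coder exploits."""
    n = len(frame)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(frame))
            for k in range(n)]

def analysis_frame(samples):
    """Apply a sine window before the transform to reduce blocking
    artifacts at frame boundaries (a common choice; the paper's
    exact window is not specified here)."""
    n = len(samples)
    window = [math.sin(math.pi * (i + 0.5) / n) for i in range(n)]
    return dct_ii([s * w for s, w in zip(samples, window)])
```

A constant input concentrates all its energy in the DC coefficient, showing the decorrelating effect of the transform.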
This paper describes the algorithm and implementation of an experimental full-digital HDTV system we have recently developed. The video compression algorithm is based on motion-compensated DCT without B (bidirectional prediction) frames and the audio compression is based on subband analysis/synthesis technique. The algorithms have commonalities with international standards such as MPEG, with some additional features. The system has been implemented using off-the-shelf ICs and programmable logic devices, and provides a fairly good video/audio quality (28-38 dB peak-to-peak signal-to-noise ratio for video) despite relatively low complexity.
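Assuming the quoted 28-38 dB figure follows the standard peak signal-to-noise ratio definition for 8-bit video, it can be computed from original and reconstructed frames as follows (a sketch, not the paper's measurement code):

```python
import math

def psnr(original, reconstructed, peak=255):
    """Peak signal-to-noise ratio in dB for 8-bit samples:
    10*log10(peak^2 / MSE). Values of 28-38 dB correspond to a
    per-pixel RMS error of roughly 3-10 grey levels."""
    mse = sum((a - b) ** 2
              for a, b in zip(original, reconstructed)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)
```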
ISBN (print): 0780314972
Presents a spatial and spectral decorrelation scheme for lossless data compression of remotely sensed imagery. The statistical properties of Landsat-TM data are investigated. Measurements of the statistical information indicate that some bands have strong spectral correlation which can be used for interband prediction or spectral decorrelation, whereas other bands may be most suitable for spatial decorrelation within a single band. Experiments show that compression is improved by exploiting both the spatial and spectral nature of the remotely sensed data.
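Interband prediction of the kind described can be sketched as a least-squares affine fit of one band against a strongly correlated reference band, with only the (smaller) residual left to entropy-code. The affine predictor is an assumption for illustration; the paper's actual predictor is not specified here.

```python
def interband_residual(band_ref, band_tgt):
    """Predict a correlated band from a reference band with a
    least-squares affine fit (gain a, offset b) and return the
    prediction residual, which is what would be entropy-coded."""
    n = len(band_ref)
    mean_x = sum(band_ref) / n
    mean_y = sum(band_tgt) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(band_ref, band_tgt))
    var = sum((x - mean_x) ** 2 for x in band_ref)
    a = cov / var if var else 0.0
    b = mean_y - a * mean_x
    return [y - (a * x + b) for x, y in zip(band_ref, band_tgt)]
```

When two bands are perfectly affinely related the residual vanishes; in practice it is merely much smaller than the band itself.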
A new adaptive algorithm for lossless compression of digital audio is presented. The algorithm is derived from ideas from both dictionary coding and source-modeling. An adaptive Lempel-Ziv (1977) style fixed dictionary coder is used to build a source model that fuels an arithmetic coder. As a result, variable length strings drawn from the source alphabet are mapped onto variable length strings that are on average shorter. The authors show that this algorithm outperforms arithmetic coding or Lempel-Ziv coding working alone on the same source (in their experiments the source is an ADPCM quantizer). Adaptation heuristics for the Lempel-Ziv coder, relevant data structures, and a discussion of audio source modeling (entropy estimation) experiments are described. While the algorithm presented herein is designed to be used as a post-compressor in a lossy audio transform coding system, it is well suited for any instance where non-stationary source outputs must be compressed.
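The "source model fuels an arithmetic coder" idea can be sketched with a simple adaptive zero-order model: each symbol costs -log2 of its current estimated probability (exactly what an arithmetic coder would charge), and the counts are updated afterwards. This is a much simpler model than the paper's dictionary-driven one; the Laplace smoothing and byte alphabet are assumptions for illustration.

```python
import math
from collections import Counter

def modeled_code_length(symbols, alphabet=256):
    """Total bits an ideal arithmetic coder would spend using an
    adaptive, Laplace-smoothed zero-order symbol model."""
    counts = Counter()
    total = 0
    bits = 0.0
    for s in symbols:
        p = (counts[s] + 1) / (total + alphabet)  # current estimate
        bits += -math.log2(p)                     # ideal code length
        counts[s] += 1                            # adapt after coding
        total += 1
    return bits
```

A skewed source costs far fewer bits than a uniform one, which is why feeding a better model to the arithmetic coder beats either technique alone.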
This paper addresses the problem of collaborative video over "heterogeneous" networks. Current standards for video compression are not designed to deal with this problem. We define an additional set of metrics (i.e., in addition to the standard rate versus distortion measure) to evaluate compression algorithms for this application. We then present an efficient algorithm for video compression in such an environment. The algorithm is a novel combination of the discrete wavelet transform and hierarchical vector quantization. It is unique in that both the encoder and the decoder are implemented with only table lookups and are amenable to efficient software and hardware solutions.
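The table-lookup encoding idea can be sketched for 2-D vector quantization: precompute the nearest codeword for every possible input pair once, so that the encoding loop itself does no arithmetic at all. The tiny codebook and 16-level inputs are illustrative assumptions, not the paper's design.

```python
def build_vq_table(codebook, levels=16):
    """Precompute nearest-codeword indices for every possible
    input pair so encoding becomes a single table lookup."""
    table = {}
    for a in range(levels):
        for b in range(levels):
            table[(a, b)] = min(
                range(len(codebook)),
                key=lambda i: (codebook[i][0] - a) ** 2
                            + (codebook[i][1] - b) ** 2)
    return table

def vq_encode(pairs, table):
    """Encode a stream of 2-D vectors with lookups only."""
    return [table[p] for p in pairs]
```

Moving the distance computations into a one-time table build is what makes the approach attractive for cheap hardware and fast software decoders.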
This paper describes the research effort currently in progress to develop lossless data compression algorithms for seismic, speech, and image data sets. For many applications, such as transmitting and archiving research data bases, using lossy compression algorithms is not advisable. In situations where critical data (e.g. research instrumentation) is to be transmitted or archived, a real time lossless data compression algorithm is desirable. The paper presents a version of the algorithm using a recursive least squares a priori adaptive lattice structure followed by an arithmetic coding stage. The real time effectiveness of this algorithm is being verified by coding the technique to run on a TMS320C3x card custom developed for our applications.
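The predict-then-entropy-code structure can be sketched with a normalized-LMS linear predictor standing in for the paper's recursive-least-squares lattice (a deliberate simplification): each sample is predicted from its recent past and only the residual is passed to the arithmetic-coding stage. Because the decoder can run the identical adaptation, the scheme stays lossless.

```python
def adaptive_residuals(samples, order=4, mu=0.5):
    """Normalized-LMS linear prediction: output the residual
    e = x - prediction for each sample, adapting the weights
    as we go. The decoder repeats the same adaptation to
    reconstruct the samples exactly from the residuals."""
    w = [0.0] * order
    history = [0.0] * order
    residuals = []
    for x in samples:
        pred = sum(wi * hi for wi, hi in zip(w, history))
        e = x - pred
        norm = sum(h * h for h in history) + 1e-8  # avoid divide-by-zero
        w = [wi + mu * e * hi / norm for wi, hi in zip(w, history)]
        history = [x] + history[:-1]
        residuals.append(e)
    return residuals
```

On predictable signals the residual quickly shrinks toward zero, so the downstream arithmetic coder sees a far lower-entropy stream than the raw samples.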
A major difficulty in providing rapid transfer of whole body PET (WB-PET) images to referring physicians at remote hospitals is the large size of the data set. A WB-PET study typically contains over 400 images, or over 35 Mbyte of data. Using standard high speed modems (14.4 kbps) and fast communication protocols, a transfer time of 7-8 hrs/study is required. To reduce the transfer time, a means of compressing the data is necessary. Due to the relatively high noise levels in WB-PET images, conventional error-free compression algorithms provide little or no compression. The compression can be improved by processing the image data prior to compression. In this work, different pre-processing approaches are evaluated as well as different compression algorithms. Removing all negative values, which carry little or no diagnostic information, slightly improves the compression. A more significant improvement was achieved by masking the transaxial images such that all pixel elements located outside the body contour are set to zero. Three different masking schemes based on geometrical shapes (circular, rectangular and elliptical) were evaluated. In addition, a masking scheme based on transmission images was also investigated. Using the transmission image mask, with negative value removal, and an efficient error-free compression algorithm, compressions of up to 1:7 were obtained, with no loss in image information.
The author has extensively studied the benefits and problems associated with reordering the bands of a multispectral image for performing lossless compression. Under a reasonable model of inter-band prediction, he has shown how to efficiently compute the optimal compression band ordering. In addition, he has formalized the restrictions that arise when bands need to be extracted individually from a compressed archive, and has shown that computing the optimal ordering under these restrictions is NP-hard, except in the most simple case.
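When each band's coding cost depends only on which single band predicts it, the efficiently solvable ordering problem becomes a spanning-tree problem over a prediction-cost graph. The sketch below uses Prim's algorithm on an assumed symmetric cost matrix; the general directed case needs a minimum spanning arborescence instead, and the cost values are purely illustrative.

```python
def prediction_tree(cost):
    """Given symmetric cost[i][j] = bits to code band j from band i,
    grow a minimum spanning tree with Prim's algorithm, rooted at
    band 0; each returned edge (i, j) means band i predicts band j."""
    n = len(cost)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n)
                    if j not in in_tree),
                   key=lambda e: cost[e[0]][e[1]])
        edges.append((i, j))
        in_tree.add(j)
    return edges
```

Each edge in the tree names a prediction relationship, so the tree's total weight is the total coding cost being minimized.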