This paper presents a study of the statistical characteristics and multiplexing of variable-bit-rate (VBR) MPEG-coded video streams. Our results are based on 23 minutes of video obtained from the entertainment movie, The Wizard of Oz. The experimental setup which was used to capture, digitize, and compress the video stream is described. Although the study is conducted at the frame level (as opposed to the slice level), it is observed that the inter-frame correlation structure for the frame-size sequence involves complicated forms of pseudo-periodicity that are mainly affected by the compression pattern of the sequence. A simple model for an MPEG traffic source is developed in which frames are generated according to the compression pattern of the original captured video stream. The number of cells per frame is fitted by a lognormal distribution. Simulations are used to study the performance of an ATM multiplexer for MPEG streams.
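As a concrete illustration of such a traffic source, the sketch below generates a frame-size sequence (in cells) by drawing each frame from a lognormal distribution whose parameters depend on its type in the compression pattern; the 12-frame IBBPBBPBBPBB pattern and the numeric parameters are placeholders, not the values fitted in the paper.

```python
import numpy as np

# Illustrative lognormal parameters (mu, sigma of the log frame size in cells);
# the paper fits these from the captured trace, so the numbers here are
# placeholders, not the published values.
PARAMS = {"I": (7.0, 0.3), "P": (6.0, 0.4), "B": (5.0, 0.5)}
GOP = "IBBPBBPBBPBB"  # assumed 12-frame compression pattern

def generate_frame_sizes(n_frames, seed=0):
    """Draw a frame-size sequence (cells per frame) following the GOP pattern."""
    rng = np.random.default_rng(seed)
    sizes = []
    for i in range(n_frames):
        mu, sigma = PARAMS[GOP[i % len(GOP)]]
        sizes.append(int(rng.lognormal(mu, sigma)))
    return np.array(sizes)

trace = generate_frame_sizes(1000)
print(trace[:12], trace.mean())
```

A sequence generated this way can be fed to an ATM multiplexer simulation in place of the captured trace.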
Quadtree decomposition is a simple technique used to obtain an image representation at different resolution levels. This representation can be useful for a variety of image processing and image compression algorithms. This paper presents a simple way to obtain better compression performance (in the MSE sense) via quadtree decomposition, by using a near-optimal choice of the threshold for the quadtree decomposition together with a bit-allocation procedure based on equations derived from rate-distortion theory. The rate-distortion performance of the improved algorithm is calculated for a Gaussian field and examined via simulation over benchmark gray-level images. In both cases, a significant improvement in compression performance is shown.
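A minimal sketch of threshold-driven quadtree decomposition is shown below; the variance test and the threshold value are illustrative stand-ins for the near-optimal threshold and the rate-distortion-based bit allocation derived in the paper.

```python
import numpy as np

def quadtree(block, threshold, min_size=2):
    """Recursively split a square block while its variance exceeds the threshold.

    Returns a list of (top, left, size, mean) leaves; the homogeneity test and
    threshold are illustrative, not the paper's near-optimal choice.
    """
    def split(img, top, left, size, leaves):
        region = img[top:top + size, left:left + size]
        if size <= min_size or region.var() <= threshold:
            leaves.append((top, left, size, float(region.mean())))
            return
        half = size // 2
        for dt in (0, half):
            for dl in (0, half):
                split(img, top + dt, left + dl, half, leaves)

    leaves = []
    split(block, 0, 0, block.shape[0], leaves)
    return leaves

img = np.zeros((8, 8)); img[:4, :4] = 255.0   # toy image with one bright quadrant
print(quadtree(img, threshold=10.0))
```

Smooth regions stay as large leaves, so fewer bits need to be allocated to them.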
In this paper, we present scalable compression algorithms for image browsing. Recently, the International Standards Organization (ISO) has proposed the JPEG standard for still image compression. The JPEG standard not only provides the basic compression feature (the baseline algorithm) but also provides a framework for reconstructing images at different picture qualities and sizes. These features are referred to as SNR and spatial scalability, respectively, and can be implemented using the progressive and hierarchical modes of the JPEG standard. In this paper, we implement and investigate the performance of the progressive and hierarchical coding modes of the JPEG standard and compare their performance with that of the baseline algorithm.
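For reference, the snippet below contrasts baseline and progressive JPEG encodings using Pillow; Pillow does not expose JPEG's hierarchical mode, so spatial scalability is only approximated here by resizing before encoding, and "input.png" is a placeholder path.

```python
from PIL import Image
import os

# Compare file sizes of baseline vs. progressive JPEG encodings of one image.
# "input.png" is a placeholder; Pillow writes progressive scans but does not
# expose JPEG's hierarchical (pyramid) mode, so the lower resolution is
# produced here by simply resizing before encoding.
img = Image.open("input.png").convert("RGB")

img.save("baseline.jpg", quality=75)
img.save("progressive.jpg", quality=75, progressive=True)
img.resize((img.width // 2, img.height // 2)).save("half_res.jpg", quality=75)

for name in ("baseline.jpg", "progressive.jpg", "half_res.jpg"):
    print(name, os.path.getsize(name), "bytes")
```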
The paper addresses the problem of collaborative video over "heterogeneous" networks. Current standards for video compression are not designed to deal with this problem. We define an additional set of metrics (i.e., in addition to the standard rate-versus-distortion measure) to evaluate compression algorithms for this application. We also present an efficient algorithm and corresponding architectures for video compression in such an environment. The algorithm combines the discrete wavelet transform with hierarchical vector quantization and is unique in that both the encoder and the decoder are implemented with only table lookups. This makes both the software and hardware implementations very efficient and cheap.
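The sketch below illustrates the table-lookup idea on a single subband: a one-level DWT (using PyWavelets, assumed available) followed by vector quantization in which encoding is a single indexed load into a precomputed table. The toy codebook, the clipping range, and the single-level hierarchy are simplifications of the hierarchical scheme in the paper.

```python
import numpy as np
import pywt  # assumed available (PyWavelets); the codebook below is illustrative

# One-level 2-D DWT, then vector-quantize pairs of detail coefficients with a
# precomputed lookup table, mimicking the table-lookup encoder idea.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(float)
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")

codebook = np.array([[0, 0], [8, 8], [-8, -8], [16, 0]], dtype=float)  # toy 2-D codebook

# Encode table: for every quantized input pair (clipped to [-32, 31]) store the
# nearest codeword index, so encoding becomes one indexed load per vector.
levels = np.arange(-32, 32, dtype=float)
grid = np.stack(np.meshgrid(levels, levels, indexing="ij"), axis=-1)   # (64, 64, 2)
dist = ((grid[..., None, :] - codebook) ** 2).sum(axis=-1)             # (64, 64, K)
encode_table = dist.argmin(axis=-1)                                    # (64, 64)

pairs = (np.clip(np.round(cH.reshape(-1, 2)), -32, 31).astype(int) + 32)
indices = encode_table[pairs[:, 0], pairs[:, 1]]   # table-lookup "encoding"
reconstructed = codebook[indices]                  # table-lookup "decoding"
print(indices[:8], reconstructed[:2])
```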
Lempel-Ziv-Welch methods and their variations are all based on the principle of using a prescribed parsing rule to find duplicate occurrences of data and encoding the repeated strings with a special code word identifying the data to be replaced. This paper gives a general presentation of five existing lossless compression methods applicable to any digital signal processing application. The comparisons are made experimentally by computer simulation.
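As a point of reference, a plain LZW parser in the spirit of these methods can be sketched as follows; it is only an illustration of the parsing-and-dictionary principle, not one of the five methods compared in the paper.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Plain LZW: greedily parse the input against a growing dictionary and
    emit one code per longest matching string."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    codes = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            codes.append(dictionary[w])      # emit code for the longest match
            dictionary[wc] = next_code       # grow the dictionary
            next_code += 1
            w = bytes([byte])
    if w:
        codes.append(dictionary[w])
    return codes

print(lzw_compress(b"abababababab"))  # repeated strings collapse to a few codes
```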
This paper presents a transform coding algorithm devoted to high-quality audio coding at a bit rate of 64 kbps per monophonic channel. It enables the transmission of high-quality stereo sound through the basic access (2B channels) of ISDN. Although a complete system including framing, synchronization, and error correction has been developed, only the bit-rate compression algorithm is described here. A detailed analysis of the signal processing techniques is provided, including the time/frequency transformation, pre-echo reduction by adaptive filtering, and the fast algorithm computations. The use of psychoacoustic properties is also precisely reported. Finally, some subjective evaluation results and a real-time implementation of the coder using the AT&T DSP32C digital signal processor are presented.
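A schematic stand-in for such a transform coder is sketched below: a DCT (via SciPy, assumed available) replaces the paper's time/frequency transformation, and a crude level-relative threshold replaces its psychoacoustic model; the parameters are illustrative only.

```python
import numpy as np
from scipy.fft import dct, idct  # assumed available; a DCT stands in for the
                                 # paper's time/frequency transformation

def code_block(block, keep_db=-40.0):
    """Transform one audio block and zero coefficients below a crude threshold
    set relative to the strongest component; a schematic stand-in for the
    psychoacoustic bit allocation described in the paper."""
    spec = dct(block, norm="ortho")
    thresh = np.max(np.abs(spec)) * 10 ** (keep_db / 20)
    spec[np.abs(spec) < thresh] = 0.0   # bits are spent only where it matters
    return spec

def decode_block(spec):
    return idct(spec, norm="ortho")

fs, n = 32000, 512
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.001 * np.random.default_rng(0).standard_normal(n)
spec = code_block(x)
y = decode_block(spec)
snr = 10 * np.log10(np.sum(x ** 2) / np.sum((x - y) ** 2))
print("kept coefficients:", np.count_nonzero(spec), "reconstruction SNR (dB):", round(snr, 1))
```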
This paper describes the algorithm and implementation of an experimental full-digital HDTV system we have recently developed. The video compression algorithm is based on motion-compensated DCT without B (bidirectional prediction) frames, and the audio compression is based on a subband analysis/synthesis technique. The algorithms have commonalities with international standards such as MPEG, with some additional features. The system has been implemented using off-the-shelf ICs and programmable logic devices, and provides fairly good video/audio quality (28-38 dB peak-to-peak signal-to-noise ratio for video) despite relatively low complexity.
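The video side of such a scheme, prediction from the previous frame only (no B frames) followed by a DCT of the residual, can be sketched as follows; the full-search block matching, block size, and search range below are illustrative choices, not the system's actual parameters.

```python
import numpy as np
from scipy.fft import dctn  # 2-D DCT applied to each motion-compensated residual

def motion_compensate(prev, curr, block=8, search=4):
    """Full-search block matching against the previous frame (P frames only),
    returning per-block motion vectors and the DCT of each prediction residual."""
    h, w = curr.shape
    vectors, residual_dcts = [], []
    for y in range(0, h, block):
        for x in range(0, w, block):
            target = curr[y:y + block, x:x + block]
            best, best_err = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        err = np.abs(target - prev[yy:yy + block, xx:xx + block]).sum()
                        if err < best_err:
                            best, best_err = (dy, dx), err
            dy, dx = best
            pred = prev[y + dy:y + dy + block, x + dx:x + dx + block]
            vectors.append(best)
            residual_dcts.append(dctn(target - pred, norm="ortho"))
    return vectors, residual_dcts

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (32, 32)).astype(float)
curr = np.roll(prev, shift=2, axis=1)          # simulate a horizontal pan
vecs, dcts = motion_compensate(prev, curr)
print(vecs[:4])
```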
Presents a spatial and spectral decorrelation scheme for lossless data compression of remotely sensed imagery. The statistical properties of Landsat-TM data are investigated. Measurements of the statistical information indicate that some bands have strong spectral correlation that can be exploited for interband prediction or spectral decorrelation, whereas other bands may be better suited to spatial decorrelation within a single band. Experiments show that compression is improved by using both the spatial and spectral nature of the remotely sensed data.
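The comparison underlying such a scheme can be sketched as follows on synthetic data standing in for two correlated Landsat-TM bands: the empirical entropy of a band is measured raw, after spatial (previous-pixel) prediction, and after spectral (interband) prediction.

```python
import numpy as np

def entropy_bits(values):
    """Empirical zeroth-order entropy (bits/sample) of an integer array."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
# Two synthetic, smoothly varying and spectrally correlated "bands" standing in
# for Landsat-TM data; real bands would be read from the image file instead.
band1 = (xx + yy + rng.integers(0, 8, (64, 64))).astype(int)
band2 = band1 + rng.integers(-3, 4, (64, 64))

spatial_residual = np.diff(band2, axis=1)   # predict each pixel from its left neighbour
spectral_residual = band2 - band1           # predict band 2 from band 1 (interband)

print("raw band      :", round(entropy_bits(band2), 2), "bits/pixel")
print("spatial pred. :", round(entropy_bits(spatial_residual), 2), "bits/pixel")
print("spectral pred.:", round(entropy_bits(spectral_residual), 2), "bits/pixel")
```

The lower-entropy residual indicates which decorrelation direction is the better choice for a given band.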
A new adaptive algorithm for lossless compression of digital audio is presented. The algorithm draws on ideas from both dictionary coding and source modeling. An adaptive Lempel-Ziv (1977) style fixed-dictionary coder is used to build a source model that drives an arithmetic coder. As a result, variable-length strings drawn from the source alphabet are mapped onto variable-length strings that are, on average, shorter. The authors show that this algorithm outperforms arithmetic coding or Lempel-Ziv coding working alone on the same source (in their experiments the source is an ADPCM quantizer). Adaptation heuristics for the Lempel-Ziv coder, the relevant data structures, and audio source-modeling (entropy estimation) experiments are described. While the algorithm presented here is designed to be used as a post-compressor in a lossy audio transform coding system, it is well suited to any setting where non-stationary source outputs must be compressed.
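The core idea, an LZ-style dictionary whose phrase statistics form the model driving an arithmetic coder, can be sketched as follows; the actual arithmetic coder, data structures, and adaptation heuristics from the paper are omitted, and the ideal code length under the model is computed instead.

```python
import math

def dictionary_model_bits(data: bytes) -> float:
    """Greedy LZW-style parse whose phrase frequencies act as the source model;
    returns the ideal arithmetic-code length (bits) under that model, with the
    actual coder and the paper's adaptation heuristics omitted."""
    dictionary = {bytes([i]): 1 for i in range(256)}   # phrase -> count
    total_bits, w = 0.0, b""
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
            continue
        # Emit phrase w: its arithmetic-coded cost is -log2 of its model probability.
        total_count = sum(dictionary.values())
        total_bits += -math.log2(dictionary[w] / total_count)
        dictionary[w] += 1          # adapt the model
        dictionary[wc] = 1          # grow the dictionary (LZ-style)
        w = bytes([byte])
    if w:
        total_count = sum(dictionary.values())
        total_bits += -math.log2(dictionary[w] / total_count)
    return total_bits

sample = b"abcabcabcabcabc" * 20
print(round(dictionary_model_bits(sample)), "bits for", len(sample) * 8, "raw bits")
```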
This paper addresses the problem of collaborative video over "heterogeneous" networks. Current standards for video compression are not designed to deal with this problem. We define an additional set of metrics (i.e., in addition to the standard rate versus distortion measure) to evaluate compression algorithms for this application. We then present an efficient algorithm for video compression in such an environment. The algorithm is a novel combination of the discrete wavelet transform and hierarchical vector quantization. It is unique in that both the encoder and the decoder are implemented with only table lookups and are amenable to efficient software and hardware solutions.