Radarsat is an Earth-observing SAR mission led by the Canadian Space Agency (CSA). NASA is participating as a partner and is responsible for the planning, processing, archiving, and distribution of data collected at the Alaska SAR Facility (ASF). Among the five Radarsat operating modes, scan SAR represents a novel approach to data collection. The scanning-beam data-taking arrangement effectively acts like a burst-mode radar, allowing radar parameters to change from burst to burst, much as in the Venus-mapping radar Magellan. With successive bursts 'scanned' from near to far range, swath coverage of up to 500 km is achieved by properly combining images from these bursts. The paper presents a straw-man design for a scan SAR data processor that builds on the existing Magellan processing concept. Topics discussed include derivation of processing parameters, selection of the signal compression algorithm, the geometric rectification process, and the radiometric compensation strategy.
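As a rough illustration of the burst-combining step the abstract describes, the Python sketch below places per-burst sub-swath images at their near-range offsets and blends the overlaps into one wide swath. The beam geometry, the `combine_bursts` helper, and the simple averaging blend are invented for the demo, not the paper's design.

```python
# Hypothetical sketch: assembling a wide ScanSAR swath from per-burst
# sub-swath images. Offsets, overlap handling, and averaging weights
# are illustrative assumptions only.
import numpy as np

def combine_bursts(burst_images, range_offsets, swath_width):
    """Place each burst image at its near-range offset and average
    samples where adjacent beams overlap."""
    rows = burst_images[0].shape[0]
    swath = np.zeros((rows, swath_width))
    weight = np.zeros((rows, swath_width))
    for img, off in zip(burst_images, range_offsets):
        swath[:, off:off + img.shape[1]] += img
        weight[:, off:off + img.shape[1]] += 1.0
    return swath / np.maximum(weight, 1.0)  # avoid divide-by-zero in gaps

# Example: four beams scanned from near to far range with ~10% overlap.
beams = [np.random.rand(64, 128) for _ in range(4)]
offsets = [0, 115, 230, 345]
wide_swath = combine_bursts(beams, offsets, 345 + 128)
```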
The embedded wavelet hierarchical image coder is a simple and effective image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. An analysis of the algorithm is presented in which the embedded code is viewed as a sequence of binary decisions that distinguish an image from the 'null' image, i.e., the all-gray image. Thus the technique is similar in spirit to binary finite-precision representations of real numbers. An interesting application of very low bit-rate image coding, i.e., image database browsing, is discussed.
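The "bits in order of importance" property can be illustrated with a minimal bit-plane pass over toy coefficient magnitudes: truncating the stream anywhere yields a coarser approximation, much like truncating a binary expansion of a real number. This sketch shows only the embedding idea, not the zerotree coder itself; the function names and the four-coefficient example are invented.

```python
# Minimal embedded-coding sketch: emit magnitude bits plane by plane,
# most significant first, so any prefix of the stream decodes to a
# coarser version of the data.
import numpy as np

def embedded_bits(coeffs, num_planes):
    c = np.abs(coeffs).astype(int)
    bits = []
    for plane in range(num_planes - 1, -1, -1):   # MSB plane first
        bits.extend(((c >> plane) & 1).tolist())  # one pass per bit plane
    return bits

def reconstruct(bits, n, num_planes):
    c = np.zeros(n, dtype=int)
    for i, b in enumerate(bits):                  # decode as far as we got
        plane = num_planes - 1 - i // n
        c[i % n] |= b << plane
    return c

coeffs = np.array([13, 2, 7, 0])
stream = embedded_bits(coeffs, num_planes=4)
print(reconstruct(stream[:8], 4, 4))  # truncated stream -> coarse values
print(reconstruct(stream, 4, 4))      # full stream -> exact magnitudes
```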
Considerable improvement in satellite channel capacity utilization is achievable through the application of compression technology to military communications, especially secondary imagery systems. Standards activities are producing uniform and powerful compression algorithms. In addition, the NITFS standardizes the format of digital imagery and imagery-related data. NITFS incorporates JPEG and will incorporate other compression algorithms in the future.
Summary form only given. This paper investigates the performance of vector quantization (VQ) in encoding and decoding memoryless sources and sources with memory. By modeling an image as a Markov source, the authors suggest a lower-bound estimate of the rate-distortion function for an image with memory, which can be used to evaluate the performance of VQ and predictive VQ. For the latter, the residual image, the difference between the original and the predicted image, is used to generate codebooks. In the encoder, three previously encoded pixels are used to predict the current pixel. The residual vector is compared against the codevectors, and the codevector yielding the minimum error is selected.
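A minimal sketch of the predictive-VQ loop described above, assuming a fixed three-tap predictor over previously reconstructed pixels and a toy residual codebook (both invented for illustration; the paper's trained codebooks and predictor are not specified here):

```python
# Predictive VQ over one image row: predict each block from reconstructed
# history, then pick the residual codevector with minimum squared error.
import numpy as np

def encode_row(row, codebook, block=4, taps=(0.6, 0.3, 0.1)):
    """Quantize residual blocks of one row; only indices are 'sent'."""
    recon = [float(row[0])] * 3                 # warm-up prediction history
    indices = []
    for start in range(0, len(row), block):
        hist, pred = list(recon[-3:]), []
        for _ in range(block):                  # extrapolate inside block
            p = taps[0]*hist[-1] + taps[1]*hist[-2] + taps[2]*hist[-3]
            pred.append(p)
            hist.append(p)
        residual = np.asarray(row[start:start + block], float) - pred
        k = int(np.argmin(((codebook - residual) ** 2).sum(axis=1)))
        indices.append(k)                       # minimum squared-error match
        recon.extend((np.array(pred) + codebook[k]).tolist())
    return indices, np.array(recon[3:])

codebook = np.array([[0., 0, 0, 0], [3, 3, 3, 3], [-3, -3, -3, -3],
                     [6, 2, -2, -6]])
row = [130, 132, 131, 129, 127, 126, 125, 124]
idx, recon = encode_row(row, codebook)
print(idx, recon.round(1))
```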
In recent years, there have been a number of studies addressing both reversible and irreversible compression of medical images ranging from 256 x 256 to 2048 x 2048 pixels in spatial resolution. There is a need to address the high-resolution end of the image categories, namely mammograms and chest X-rays, which require resolution of the order of 4096 x 4096 pixels. Further, data compression schemes for most medical applications have to be information-preserving or reversible. In this paper, the performance of a number of block-based, reversible compression algorithms suitable for compression of very large-format images (4096 x 4096 pixels or more) is compared to that of a novel two-dimensional linear predictive coder developed by extending the multichannel version of the Burg algorithm to two dimensions. The compression schemes implemented are: Huffman coding, Lempel-Ziv coding, arithmetic coding, two-dimensional linear predictive coding (in addition to the aforementioned one), transform coding using the discrete Fourier, discrete cosine, and discrete Walsh transforms, linear interpolative coding, and combinations thereof. We discuss the performance of these coding techniques on a few mammograms and chest radiographs digitized to sizes up to 4096 x 4096 at 10 b/pixel. We have achieved compression from 10 b/pixel to 2.5-3.0 b/pixel on these images without any loss of information. The modified multichannel linear predictor outperforms the other methods while offering certain advantages in implementation.
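To make the prediction-plus-entropy-coding idea concrete, here is a hedged sketch using a standard fixed integer 2-D predictor (left + above - upper-left) rather than the paper's Burg-derived multichannel predictor; the zeroth-order entropy of the residuals approximates the bits/pixel an entropy coder could reach on the decorrelated data.

```python
# Reversible prediction sketch: integer residuals from a fixed 2-D
# predictor, plus a zeroth-order entropy estimate of the residuals.
import numpy as np

def residuals_2d(img):
    r = img.astype(np.int64)
    pred = np.zeros_like(r)
    pred[1:, 1:] = r[1:, :-1] + r[:-1, 1:] - r[:-1, :-1]
    pred[0, 1:] = r[0, :-1]          # first row: predict from left
    pred[1:, 0] = r[:-1, 0]          # first column: predict from above
    return r - pred                  # invertible given the same predictor

def entropy_bits_per_pixel(res):
    _, counts = np.unique(res, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

img = np.cumsum(np.random.randint(0, 3, (64, 64)), axis=1)  # toy 10-bit-ish image
print(entropy_bits_per_pixel(residuals_2d(img)), "b/pixel (zeroth-order)")
```

Because the predictor uses only integer arithmetic on previously decoded pixels, the decoder can rebuild the image exactly from the residuals, which is the reversibility requirement the abstract emphasizes.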
The author modified three electrocardiogram (ECG) data compression algorithms, average beat subtraction with residual differencing (ABSURD), SAPA compression, and TRIM compression, to produce specified average data rates. He tested the three algorithms on channel one of the MIT-BIH arrhythmia database sampled at 100 samples/s. With a target data rate of 193 b/s, average data rates for the SAPA and ABSURD algorithms varied from 99.7 to 100.1% of the target; the TRIM algorithm ranged from 87.3 to 101.7% of the target. With the average data rate controlled, ABSURD produced signal-to-compression-noise ratios (SNRs) ranging from 46 to 943, SAPA produced SNRs ranging from 47 to 436, and TRIM produced SNRs ranging from 54 to 806.
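The rate-control idea, holding the average data rate at a specified target by adapting an error tolerance, can be sketched as below. The toy tolerance-based coder, the feedback gain, and the 12-bit sample assumption stand in for the modified ABSURD/SAPA/TRIM algorithms, whose details the abstract does not give.

```python
# Rate-controlled compression sketch: a sample is "transmitted" only if
# it deviates from the last transmitted sample by more than `tol`; the
# tolerance is adapted per frame to steer the rate toward the target.
import math, random

def encode_frame(samples, tol):
    kept, last = [], None
    for s in samples:
        if last is None or abs(s - last) > tol:
            kept.append(s)           # transmitted sample
            last = s
    return kept

def rate_controlled(frames, target_bits, bits_per_sample=12, gain=0.001):
    tol, rates = 1.0, []
    for frame in frames:
        bits = len(encode_frame(frame, tol)) * bits_per_sample
        rates.append(bits)
        # feedback: too many bits -> raise tolerance, too few -> lower it
        tol = max(0.0, tol + gain * (bits - target_bits))
    return sum(rates) / len(rates)

frames = [[math.sin(i / 5.0) + random.gauss(0, 0.05) for i in range(100)]
          for _ in range(60)]        # 60 one-second frames at 100 samples/s
print(rate_controlled(frames, target_bits=193), "b/s average")
```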
This paper provides a technical overview of prioritization and transport in the Advanced Digital Television (ADTV) system. ADTV is the all-digital terrestrial simulcast system developed by the Advanced Television Research Consortium (ATRC): Thomson Consumer Electronics, Philips North America, NBC, Compression Labs, and the David Sarnoff Research Center. ADTV incorporates an efficient MPEG-compatible compression algorithm at its core, with application-specific data prioritization and transport features added as separable layers. The compression process is based on a 1440x960 (1050-line, 2:1 interlaced) HDTV format, producing a selectable bit rate in the region of 15-20 Mbps. The data prioritization layer of ADTV achieves robust delivery over an appropriate two-tier modem by separating compressed video data into high- and standard-priority bitstreams with appropriate bit rates. This prioritized data is then formatted into fixed-length "cells" (packets) with appropriate data-link-level and service-specific adaptation-level headers, designed to provide capabilities such as flexible service multiplexing, priority handling, efficient cell packing, error detection, and graceful recovery from errors. (Issues related to ADTV compression coding, receiver error recovery, and transmission are discussed in separate papers [1,2,3].)
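A hedged sketch of the cell-formatting step: payload bytes are split into fixed-length cells, each prefixed with a small header carrying a priority, a sequence number, and the payload length. The header layout, cell size, and field widths below are illustrative assumptions, not the ADTV specification.

```python
# Fixed-length cell packing with a priority header (layout is invented).
import struct

CELL_SIZE = 48
HEADER = struct.Struct(">BHB")       # priority (1B), seq (2B), length (1B)

def packetize(payload: bytes, priority: int):
    body = CELL_SIZE - HEADER.size
    cells = []
    for seq, off in enumerate(range(0, len(payload), body)):
        chunk = payload[off:off + body]
        cell = HEADER.pack(priority, seq, len(chunk)) + chunk
        cells.append(cell.ljust(CELL_SIZE, b"\x00"))  # pad the final cell
    return cells

hp_cells = packetize(b"motion vectors + headers" * 8, priority=0)
sp_cells = packetize(b"refinement coefficients" * 8, priority=1)
# A two-tier modem would map priority-0 cells onto the more robust tier.
print(len(hp_cells), len(sp_cells), len(hp_cells[0]))
```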
We consider constraints on the encoded bit rate of a video signal that are imposed by a channel and encoder and decoder buffers. We present conditions that ensure that the video encoder and decoder buffers do not overflow or underflow when the channel can transmit a variable bit rate. Using these conditions and a commonly proposed network-user contract, we examine the effect of a network policing function on the allowable variability in the encoded video bit rate. We describe how these ideas might be implemented in a system that controls both the encoded and transmitted bit rates. Finally, we present the performance of video that has been encoded using the derived constraints for the leaky bucket channel.
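A minimal sketch of the buffer constraint, assuming for illustration a fixed per-frame channel drain and an encoder buffer of fixed size (the paper's variable-bit-rate conditions and leaky-bucket parameters are more general):

```python
# Encoder-buffer overflow check: the channel removes up to
# drain_per_frame bits each frame interval; the per-frame encoded
# bits must never push the buffer past its capacity.
def buffers_ok(frame_bits, drain_per_frame, buffer_size):
    buf = 0.0
    for bits in frame_bits:
        buf += bits                            # encoder deposits a frame
        buf = max(0.0, buf - drain_per_frame)  # channel drains the buffer
        if buf > buffer_size:                  # overflow: constraint violated
            return False
    return True

# 30 frames: steady 0.4 Mb frames with one 2 Mb scene change,
# drained at 0.5 Mb per frame interval into a 1.6 Mb buffer.
frames = [400_000] * 15 + [2_000_000] + [400_000] * 14
print(buffers_ok(frames, drain_per_frame=500_000, buffer_size=1_600_000))
```

The same recursion run with the decoder's buffer, and with the drain replaced by the leaky-bucket arrival limits, gives the kind of conformance test a network policing function would apply.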
ISBN (print): 0852965478
The studies performed in the framework of CMTT/2 for the transmission of 4:2:2 video signals through digital networks have led to the definition of a bit-rate reduction algorithm. In order to implement this video algorithm on a single board, a considerable effort has been made to develop an ASIC chipset capable of processing 27 Msamples per second (4:2:2 picture). For HDTV signals, the same technique can be used to provide a bit-rate reduction from 1.152 Gbit/s to about 120 Mbit/s. This compression algorithm allows the transmission of one HDTV channel with audio and service channels on a 139.264 Mbit/s (or 2x34.368 Mbit/s) G.703/G.751 link or on an ATM network (155.5 Mbit/s). The authors describe the hardware implementation of the chipset in TV and HDTV codecs; emphasis is placed on the features and performance of the equipment.
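The quoted rates can be sanity-checked with simple arithmetic (illustrative only):

```python
# Compression ratio and link headroom implied by the quoted figures.
source_rate = 1.152e9    # HDTV source, bit/s
video_rate = 120e6       # compressed HDTV video, bit/s
link_rate = 139.264e6    # fourth-order PDH link, bit/s

print(f"compression ratio ~ {source_rate / video_rate:.1f}:1")      # ~9.6:1
print(f"headroom for audio/service: "
      f"{(link_rate - video_rate) / 1e6:.1f} Mbit/s")               # ~19.3
```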