ISBN (print): 0819441899
The efficient transmission and storage of digital imagery increasingly requires compression to maintain effective channel bandwidth and device capacity. Unfortunately, in applications where high compression ratios are required, lossy compression transforms tend to produce a wide variety of artifacts in decompressed images. Image quality measures (IQMs) have been published that detect global changes in image configuration resulting from the compression or decompression process. Examples include statistical and correlation-based procedures related to mean-squared error, diffusion of energy from features of interest, and spectral analysis. Additional but sparsely reported research involves local IQMs that quantify feature distortion in terms of objective or subjective models. In this paper, a suite of spatial exemplars and evaluation procedures is introduced that can elicit and measure a wide range of spatial, statistical, or spectral distortions from an image compression transform T. By applying the test suite to the input of T, performance deficits can be highlighted in the transform's design phase, versus discovery under adverse conditions in field practice. In this study, performance analysis is concerned primarily with the effect of compression artifacts on automated target recognition (ATR) algorithm performance. For example, featural distortion can be measured using linear, curvilinear, polygonal, or elliptical features interspersed with various textured or noise-perturbed backgrounds or objects. These simulated target blobs may themselves be perturbed with various types or levels of noise, thereby facilitating measurement of statistical target-background interactions. By varying target-background contrast, resolution, noise level, and target shape, compression transforms can be stressed to isolate performance deficits. Similar techniques can be employed to test spectral, phase, and boundary distortions due to decompression. Applicative examples are taken from A...
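As a rough illustration of this stress-testing idea, the sketch below (not the authors' suite) generates an elliptical target blob over a noise-perturbed background, pushes it through a stand-in lossy transform T (here, plain uniform quantization), and reports target-restricted mean-squared error while sweeping contrast and noise level. All function names and parameter values are hypothetical choices.

```python
"""Minimal sketch: stressing a compression transform T with synthetic
target/background exemplars. The transform and metric are stand-ins."""
import numpy as np

def make_exemplar(size=128, radius=20, contrast=0.5, noise_sigma=0.05, seed=0):
    """Elliptical target blob on a noise-perturbed background."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[:size, :size]
    target = (((x - size / 2) / (1.6 * radius)) ** 2 +
              ((y - size / 2) / radius) ** 2) <= 1.0
    img = 0.2 + contrast * target.astype(float)
    img += rng.normal(0.0, noise_sigma, img.shape)
    return np.clip(img, 0.0, 1.0), target

def toy_transform(img, levels=8):
    """Stand-in lossy transform T: uniform quantization of pixel values."""
    return np.round(img * (levels - 1)) / (levels - 1)

def featural_distortion(img, rec, target):
    """Mean-squared error restricted to the simulated target blob."""
    return float(np.mean((img[target] - rec[target]) ** 2))

# Sweep contrast and noise level to isolate performance deficits of T.
for contrast in (0.2, 0.5, 0.8):
    for sigma in (0.01, 0.05, 0.1):
        img, tgt = make_exemplar(contrast=contrast, noise_sigma=sigma)
        rec = toy_transform(img)
        print(f"contrast={contrast:.1f} sigma={sigma:.2f} "
              f"target MSE={featural_distortion(img, rec, tgt):.5f}")
```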
ISBN (print): 0819441899
A crucial deficiency of lossy blockwise image compression is the generation of local artifacts such as ringing defects, obscuration of fine detail, and blocking effect (BE). To date, few published reports of image quality measures (IQMs) have addressed the detection of such errors in a realistic, efficient manner. Exceptions are feature-based IQMs, perceptual IQMs, error detection templates, and quantification of BE that supports its reduction in JPEG- and wavelet-compressed imagery. In this paper, we present an enhanced suite of IQMs that emphasize detection of local, feature-specific errors that corrupt the visual appearance or numerical integrity of decompressed digital imagery. By the term visual appearance is meant subjective error, in contrast with objectively quantified effects of compression on individual pixel values and their spatial interrelationships. Subjective error is of key importance in human viewing applications, for example, Internet video. Objective error is primarily of interest in object recognition applications such as automated target recognition (ATR), where implementational concerns involve the effect of compression or decompression algorithms on probability of detection (Pd) and rate of false alarms (Rfa). Analysis of results presented herein emphasizes application-specific quantification of local compression errors. In particular, introduction of extraneous detail (e.g., ringing defects or BE) or obscuration of source detail (e.g., texture masking) adversely impacts both subjective and objective error of a decompressed image. Blocking effect is primarily a visual problem, but can confound ATR filters when a target spans a block boundary. Introduction of point or cluster errors primarily degrades ATR filter performance, but can also produce noticeable degradation of fine detail in human visual evaluation of decompressed imagery. In practice, error and performance analysis is supported by examples of ATR imagery including airborne and underwater...
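As one concrete example of a local, feature-specific measure, the sketch below scores blocking effect by comparing gradient magnitudes at assumed 8x8 block seams against interior gradients. This is an illustrative metric, not the paper's IQM suite; a ratio well above 1 suggests visible blocking.

```python
"""Minimal sketch of a blocking-effect (BE) measure, assuming 8x8 blocks."""
import numpy as np

def blocking_effect(img, block=8):
    """Ratio of mean gradient at block boundaries to mean interior gradient."""
    diffs = np.abs(np.diff(img.astype(float), axis=1))  # horizontal gradients
    cols = np.arange(diffs.shape[1])
    at_boundary = (cols % block) == (block - 1)         # seams between blocks
    return diffs[:, at_boundary].mean() / (diffs[:, ~at_boundary].mean() + 1e-12)

# Example: blockwise-averaged (crudely "compressed") noise shows elevated BE.
rng = np.random.default_rng(1)
img = rng.normal(128, 10, (64, 64))
blocky = img.copy()
for i in range(0, 64, 8):
    for j in range(0, 64, 8):
        blocky[i:i+8, j:j+8] = blocky[i:i+8, j:j+8].mean()
print("smooth:", round(blocking_effect(img), 3),
      "blocky:", round(blocking_effect(blocky), 3))
```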
This paper proposes a lossless (reversible) data compression method for bi-level images, particularly printing images. In this method, called Dispersed Reference compression (DRC), the coding scheme is changed accordi...
In this paper, we propose an efficient source encoding technique based on mapping a non-binary information source with a large alphabet onto an equivalent binary source using weighted fixed-length code assignments. The weighted codes are chosen such that the entropy of the resulting binary source, multiplied by the code length, is made as close as possible to that of the original non-binary source. It is found that a large saving in complexity, execution time, and memory size is achieved when the commonly used source encoding algorithms are applied to the nth-order extension of the resulting binary source. This saving is due to the large reduction in the number of symbols in the alphabet of the new extended binary source. As an example to validate the effectiveness of this approach, text compression using a Huffman encoder applied to the nth-order extended binary source is studied. It is found that the bit-wise Huffman encoder of the 4th-order extended binary source (16 symbols) achieves compression efficiency close to that of the conventional Huffman encoder (256 symbols).
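The sketch below illustrates the flavor of this comparison under a simplifying assumption: bytes are mapped to bits with a plain fixed-length 8-bit code rather than the paper's weighted assignment, and a Huffman code over the 4th-order extension of that binary source (16 nibble symbols) is compared against a conventional 256-symbol byte-wise Huffman code.

```python
"""Minimal sketch: Huffman coding on the 4th-order extension of a binary
mapping of a byte source, vs. conventional byte-wise Huffman coding."""
import heapq
from collections import Counter
from itertools import count

def huffman_lengths(freqs):
    """Return {symbol: code length} for a Huffman code over `freqs`."""
    tie = count()  # tiebreaker so heapq never compares symbol lists
    heap = [(f, next(tie), [s]) for s, f in freqs.items()]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        for s in a + b:          # every merge deepens both subtrees by 1
            lengths[s] += 1
        heapq.heappush(heap, (fa + fb, next(tie), a + b))
    return lengths

def coded_bits(symbols):
    freqs = Counter(symbols)
    lengths = huffman_lengths(freqs)
    return sum(freqs[s] * lengths[s] for s in freqs)

text = b"the quick brown fox jumps over the lazy dog " * 200
bits = "".join(f"{byte:08b}" for byte in text)           # binary mapping
nibbles = [bits[i:i+4] for i in range(0, len(bits), 4)]  # 4th-order extension

print("byte-wise Huffman  :", coded_bits(text), "bits")
print("nibble-wise Huffman:", coded_bits(nibbles), "bits")
```

The nibble-wise coder trades some coding efficiency for a 16-entry code table instead of a 256-entry one, which is the complexity/memory saving the abstract describes.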
A novel two-stage image compression scheme is proposed. In the first stage, differential pulse code modulation (DPCM) is used to decorrelate the raw image data. Next, an effective scheme based on the Huffman coding sc...
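The first stage is straightforward to sketch. Below, a previous-pixel predictor (an illustrative choice, not necessarily the paper's) produces residuals whose empirical entropy is markedly lower than that of the raw pixels, which is what makes the second-stage entropy coder effective.

```python
"""Minimal sketch of the two-stage idea: DPCM decorrelation, then an
entropy estimate standing in for the second-stage Huffman coder."""
import numpy as np

def dpcm_residuals(img):
    """First-stage DPCM: predict each pixel from its left neighbor."""
    img = img.astype(np.int16)        # avoid uint8 wraparound
    pred = np.empty_like(img)
    pred[:, 0] = 128                  # fixed predictor for the first column
    pred[:, 1:] = img[:, :-1]
    return img - pred                 # residuals, to be entropy coded

def entropy_bits_per_symbol(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# Smooth synthetic image: a ramp plus mild noise, so neighbors correlate.
img = (np.tile(np.arange(256), (256, 1)) * 0.7 +
       rng.normal(0, 3, (256, 256))).clip(0, 255).astype(np.uint8)
res = dpcm_residuals(img)
print("raw entropy     :", round(entropy_bits_per_symbol(img), 2), "bits/px")
print("residual entropy:", round(entropy_bits_per_symbol(res), 2), "bits/px")
```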
In this paper, an efficient embedded image compression algorithm is presented. It is based on the observation that the distribution of significant coefficients is intra-subband clustered and inter-subband self-similar...
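The clustering and self-similarity observation can be checked with a toy experiment. The sketch below uses a hand-rolled two-level Haar transform and an arbitrary significance threshold, both illustrative choices unrelated to the paper's algorithm, and compares the marginal probability that a coefficient is significant against the probability conditioned on a significant parent in the coarser subband.

```python
"""Minimal sketch: inter-subband self-similarity of significance maps."""
import numpy as np

def haar2d(img):
    """One 2-D Haar analysis step: returns LL and (LH, HL, HH)."""
    a = (img[0::2] + img[1::2]) / 2; d = (img[0::2] - img[1::2]) / 2
    ll = (a[:, 0::2] + a[:, 1::2]) / 2; lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2; hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, (lh, hl, hh)

rng = np.random.default_rng(0)
img = rng.normal(0, 1, (128, 128)).cumsum(0).cumsum(1)  # correlated image
ll1, fine = haar2d(img)
_, coarse = haar2d(ll1)

T = 2.0  # significance threshold (arbitrary)
for name, f, c in zip(("LH", "HL", "HH"), fine, coarse):
    child = np.abs(f) > T
    parent = np.kron(np.abs(c) > T, np.ones((2, 2), dtype=bool))  # upsample
    if parent.any():
        print(f"{name}: P(child sig) = {child.mean():.2f}, "
              f"P(child sig | parent sig) = {child[parent].mean():.2f}")
```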
ISBN (print): 0818677619
Lossless textual image compression based on pattern matching classically includes a 'residue' coding step that refines an initially lossy reconstructed image to its lossless original form. This step is typically accomplished by arithmetically coding the predicted value for each lossless image pixel, based on the values of previously reconstructed nearby pixels in both the lossless image and its precursor lossy image. Our contribution describes TPR-B, a fast method for residue coding based on 'typical prediction' which permits skipping of pixels that would otherwise be arithmetically encoded; and TPR-NS, an improved compression method for residue coding also based on 'typical prediction'. Experimental results are reported based on the residue coding method proposed in Howard's SPM algorithm and the lossy images it generates when applied to eight CCITT bi-level test images. These results demonstrate that after lossy image coding, 88% of the lossless image pixels in the test set can be predicted using TPR-B and need not be residue coded at all. In terms of saved SPM arithmetic coding operations during residue coding, TPR-B achieves an average coding speed increase of 8 times. Using TPR-NS together with TPR-B increases the SPM residue coding compression ratios by an average of 11%.
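The sketch below conveys the spirit of typical prediction on synthetic data: a pixel whose lossy 3x3 neighborhood is uniform is predicted to equal that neighborhood and can be skipped by the residue coder. The uniformity rule, the simulated lossy image, and the error rates are illustrative stand-ins for the paper's predictor and test set, not a reimplementation of TPR-B.

```python
"""Minimal sketch in the spirit of typical prediction for residue coding."""
import numpy as np

def typical_mask(lossy):
    """True where the 3x3 lossy neighborhood is all-0 or all-1."""
    p = np.pad(lossy, 1, mode="edge")
    shifts = np.stack([p[di:di + lossy.shape[0], dj:dj + lossy.shape[1]]
                       for di in range(3) for dj in range(3)])
    return shifts.min(axis=0) == shifts.max(axis=0)

rng = np.random.default_rng(0)
lossless = (rng.random((128, 128)) < 0.1).astype(np.uint8)
lossless[30:90, 30:90] = 1                  # a solid "textual" region
flips = rng.random(lossless.shape) < 0.02   # simulated lossy-coding error
lossy = lossless ^ flips

mask = typical_mask(lossy)
skipped = mask & (lossless == lossy)
mispred = mask & (lossless != lossy)
print(f"pixels skippable without residue coding: {skipped.mean():.1%}")
print(f"typical-prediction misses needing a flag: {mispred.mean():.3%}")
```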
ISBN (print): 0818677619
The ubiquity of networking and computational capacity associated with the new communications media unveil a universe of new requirements for image representations. Among such requirements is the ability of the representation used for coding to support higher-level tasks such as content-based retrieval. In this paper, we explore the relationships between probabilistic modeling and data compression to introduce a representation - library-based coding - which, by enabling retrieval in the compressed domain, satisfies this requirement. Because it contains an embedded probabilistic description of the source, this new representation allows the construction of good inference models without compromising compression efficiency, leads to very efficient procedures for query and retrieval, and provides a framework for higher-level tasks such as the analysis and classification of video shots.
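As a rough sketch of the underlying idea, the code below summarizes each source by a small block codebook (its "library") and retrieves a query by asking which stored library encodes it most cheaply, i.e., without decompressing. Plain vector quantization with a few k-means steps stands in for the paper's embedded probabilistic source description; all names and parameters are illustrative.

```python
"""Minimal sketch: retrieval in the compressed domain via per-source
codebooks ("libraries"); VQ distortion is a proxy for code length."""
import numpy as np

def to_blocks(img, b=4):
    h, w = img.shape
    return (img[:h - h % b, :w - w % b]
            .reshape(h // b, b, -1, b).swapaxes(1, 2).reshape(-1, b * b))

def build_library(blocks, k=8, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    code = blocks[rng.choice(len(blocks), k, replace=False)].copy()
    for _ in range(iters):                  # a few plain k-means steps
        near = ((blocks[:, None] - code[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (near == j).any():
                code[j] = blocks[near == j].mean(0)
    return code

def encode_cost(blocks, code):
    """Mean VQ distortion of `blocks` under library `code`."""
    return float(((blocks[:, None] - code[None]) ** 2).sum(-1).min(1).mean())

def stripes(vertical, seed, size=64, period=8, noise=0.3):
    rng = np.random.default_rng(seed)
    bar = ((np.arange(size) // period) % 2).astype(float)
    img = np.tile(bar, (size, 1)) if vertical else np.tile(bar[:, None], (1, size))
    return img + rng.normal(0, noise, (size, size))

libraries = {"vertical": build_library(to_blocks(stripes(True, seed=0))),
             "horizontal": build_library(to_blocks(stripes(False, seed=1)))}
query = to_blocks(stripes(True, seed=5))    # a vertically striped query
for name, code in libraries.items():
    print(f"{name} library cost: {encode_cost(query, code):.3f}")
```

The query is matched to the library that models it best, so retrieval reduces to comparing encoding costs rather than reconstructing pixels.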
ISBN (print): 0818677619
We propose a postfiltering algorithm which adapts to global image quality as well as (optionally) to semantic image content extracted from the video sequence. This approach is in contrast to traditional postfiltering techniques, which attempt to remove coding artifacts based on local signal characteristics only. Our postfilter is ideally suited to head-and-shoulders video coded at very low bit rates (less than 25.6 kbps), where coding artifacts are fairly strong and difficult to distinguish from fine image detail. Results are shown comparing head-and-shoulders sequences encoded at 16 kbps with an H.263-based codec to images postfiltered using the content-adaptive postfilter proposed here. The postfilter manages to remove most of the mosquito artifacts introduced by the low-bit-rate coder while preserving a good rendition of facial detail.
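A minimal sketch of the two adaptation mechanisms follows: smoothing strength is set globally from the coder's quantizer step and reduced inside a semantic region-of-interest mask (e.g., a detected face). The box blur, the mask, and the blending weights are illustrative stand-ins for the paper's filter and content extraction.

```python
"""Minimal sketch of a content-adaptive postfilter."""
import numpy as np

def box_blur(img, r=1):
    """Plain (2r+1)x(2r+1) mean filter via padding and shifted sums."""
    p = np.pad(img, r, mode="edge")
    k = 2 * r + 1
    out = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(k) for j in range(k))
    return out / (k * k)

def postfilter(decoded, quant_step, roi_mask):
    """Blend toward the blurred image; back off inside the ROI."""
    strength = np.clip(quant_step / 31.0, 0.0, 1.0)    # global quality term
    alpha = strength * np.where(roi_mask, 0.3, 1.0)    # content adaptation
    return (1 - alpha) * decoded + alpha * box_blur(decoded)

rng = np.random.default_rng(0)
decoded = rng.normal(128, 20, (64, 64))                # noisy decoded frame
face = np.zeros((64, 64), dtype=bool)
face[16:48, 20:44] = True                              # hypothetical face box
out = postfilter(decoded, quant_step=20, roi_mask=face)
print("mean change outside ROI:",
      round(float(np.abs(out - decoded)[~face].mean()), 2),
      "| inside ROI:", round(float(np.abs(out - decoded)[face].mean()), 2))
```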
ISBN (print): 0818677619
In this paper we present an adaptive image coding algorithm based on novel backward-adaptive quantization/classification techniques. We use a simple uniform scalar quantizer to quantize the image subbands. Our algorithm puts each coefficient into one of several classes depending on the values of neighboring previously quantized coefficients. These previously quantized coefficients form contexts which are used to characterize the subband data. Each context type corresponds to a different probability model, and thus each subband coefficient is compressed with an arithmetic coder having the appropriate model for that coefficient's neighborhood. We show how the context selection can be driven by rate-distortion criteria, by choosing the contexts in a way that minimizes the total distortion for a given bit rate. Moreover, the probability models for each context are initialized and updated in a very efficient way, so that practically no overhead information has to be sent to the decoder. Our results are comparable to, or in some cases better than, the recent state of the art, with our algorithm being simpler than most published algorithms of comparable performance.
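The sketch below illustrates the backward-adaptive mechanism: each uniformly quantized coefficient is classified by its previously quantized causal neighbors, and a per-context adaptive frequency model supplies the probability an arithmetic coder would use (code length estimated as -log2 p, without running an actual coder). The context rule, bucket count, and test data are illustrative choices, not the paper's rate-distortion-optimized design.

```python
"""Minimal sketch of backward-adaptive context modeling for subband data."""
import numpy as np
from math import log2

def code_length(subband, step=8, n_contexts=4, amax=15):
    """Estimated bits to code `subband` with per-context adaptive models."""
    q = np.clip(np.round(subband / step).astype(int), -amax, amax)
    alphabet = 2 * amax + 1
    counts = np.ones((n_contexts, alphabet))     # Laplace-initialized models
    bits = 0.0
    H, W = q.shape
    for i in range(H):
        for j in range(W):
            left = abs(q[i, j - 1]) if j else 0  # causal neighbors only, so
            up = abs(q[i - 1, j]) if i else 0    # no side info is needed
            ctx = min(left + up, n_contexts - 1)
            sym = q[i, j] + amax
            p = counts[ctx, sym] / counts[ctx].sum()
            bits -= log2(p)
            counts[ctx, sym] += 1                # backward adaptation
    return bits

rng = np.random.default_rng(0)
# Laplacian-like subband with significant values clustered along a ridge.
band = rng.laplace(0, 4, (64, 64))
band[20:24, :] += rng.laplace(0, 40, (4, 64))
print(f"context-adaptive: {code_length(band):.0f} bits")
print(f"single-context  : {code_length(band, n_contexts=1):.0f} bits")
```

Because the decoder sees the same previously quantized neighbors, it rebuilds the identical context and model state, which is why no model parameters need to be transmitted.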