A technique for reducing the transmission requirements of facsimile images while maintaining high intelligibility in mobile communications environments is described. The algorithms developed achieve a compression ratio of approximately 32:1. The technique focuses on the implementation of a low-cost interface unit suitable for facsimile communication between low-power mobile stations and fixed stations for both point-to-point and point-to-multipoint transmissions. This interface may be colocated with the transmitting facsimile terminals. The technique was implemented and tested by intercepting facsimile documents in a store-and-forward mode.
Both algorithms are adaptive and require no extra communication from the encoder to the decoder. The authors present a scheme to cascade them into an adaptive algorithm that achieves a higher compression ratio and is appropriate for communication applications. Different refinements of the cascading are tested to optimize the secondary compression.
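As an illustration of the cascading idea, the following sketch chains two adaptive stages that share no side channel: a move-to-front transform, whose encoder and decoder adapt identically and so exchange no extra information, followed by zlib as a stand-in for an adaptive second-stage coder. Both stage choices are assumptions for illustration; the abstract does not name the paper's actual algorithms.

import zlib

def mtf_encode(data: bytes) -> bytes:
    table = list(range(256))          # both sides start from the same table
    out = bytearray()
    for b in data:
        i = table.index(b)
        out.append(i)
        table.pop(i)
        table.insert(0, b)            # adapt: move the symbol to the front
    return bytes(out)

def mtf_decode(data: bytes) -> bytes:
    table = list(range(256))
    out = bytearray()
    for i in data:
        b = table.pop(i)
        out.append(b)
        table.insert(0, b)            # identical adaptation: no side info needed
    return bytes(out)

def cascade_encode(data: bytes) -> bytes:
    # primary stage feeds the secondary stage directly
    return zlib.compress(mtf_encode(data))

def cascade_decode(blob: bytes) -> bytes:
    return mtf_decode(zlib.decompress(blob))

text = b"abababababcdcdcdcdcd" * 50
assert cascade_decode(cascade_encode(text)) == text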
Summary form only given. The authors study the compression of a list of items which serve as access keys to a dictionary or encyclopedia. Redundancy can lie either within a single word or between words (depending on their order). Attention is focused on compression algorithms that can exploit mainly these kinds of redundancy. Three sequential methods are analysed: the classic Huffman code, a variant of the LZW algorithm, and the cascaded application of a differential technique plus Huffman encoding. A modification of the last-mentioned method makes it sequential in N-length chunks and reduces access delays for real-time applications.
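The differential technique on a sorted key list is commonly realized as front coding, and the chunking modification can be sketched by restarting the differencing every N entries so that each chunk decodes independently. The sketch below assumes front coding with an illustrative chunk size and omits the Huffman stage that the paper cascades afterwards.

def common_prefix_len(a: str, b: str) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def front_code(keys, chunk=4):
    out, prev = [], ""
    for i, k in enumerate(keys):
        if i % chunk == 0:
            prev = ""                 # chunk boundary: store the key in full
        p = common_prefix_len(prev, k)
        out.append((p, k[p:]))        # (shared-prefix length, new suffix)
        prev = k
    return out

def front_decode(pairs, chunk=4):
    keys, prev = [], ""
    for i, (p, suf) in enumerate(pairs):
        if i % chunk == 0:
            prev = ""                 # decoding can start at any chunk
        k = prev[:p] + suf
        keys.append(k)
        prev = k
    return keys

words = ["car", "card", "care", "careful", "cargo", "carp", "cart", "case"]
assert front_decode(front_code(words)) == words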
Summary form only given. The author surveys the menagerie of quantization and compression algorithms in the specific context of image compression and provides some general comparisons based on performance, complexity, and side benefits of particular coding techniques.
The Lempel-Ziv-Welch compression algorithm is widely used because it achieves an excellent compromise between compression performance and speed of execution. A simple way to improve the compression without significantly degrading its speed is proposed, and experimental data show that it works in practice. Even better results are achieved with the additional optimization of 'phasing in' binary numbers.
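'Phasing in' binary numbers refers to a prefix code for a value in [0, n) when n is not a power of two: some values receive floor(log2 n) bits and the rest one bit more, instead of rounding every code up to ceil(log2 n) bits. A minimal sketch of the idea, not the authors' exact codec:

def phased_in_encode(v: int, n: int) -> str:
    k = n.bit_length() - 1              # floor(log2 n)
    u = (1 << (k + 1)) - n              # how many values get short k-bit codes
    if v < u:
        return format(v, f"0{k}b")      # short codeword
    return format(v + u, f"0{k + 1}b")  # long codeword, shifted past the shorts

def phased_in_decode(bits: str, n: int) -> int:
    k = n.bit_length() - 1
    u = (1 << (k + 1)) - n
    x = int(bits[:k], 2)
    if x < u:
        return x                        # a short codeword
    return int(bits[:k + 1], 2) - u     # read one more bit for a long one

n = 9                                   # e.g. only 9 dictionary codes so far
for v in range(n):
    assert phased_in_decode(phased_in_encode(v, n), n) == v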
The authors present the design and performance evaluation of a robust, DCT-based (discrete-cosine-transform-based) variable-bit-rate (VBR) compression algorithm for use on B-ISDN/ATM networks. The algorithm class under consideration is based on a recent proposal by F. Kishino et al. (1989), intended to provide robust delivery of video under relatively high ATM cell loss conditions. The robust VBR codec is based on the separation of subjectively important low-frequency DCT coefficients (for high-priority transport) from the less important high-frequency coefficients (which are sent at a lower priority level). Temporal propagation of errors after the loss of low-priority ATM cells is avoided by limiting interframe prediction to the low-frequency information transmitted in high-priority cells. Several key questions that arise in the design of such an ATM codec are considered, including: (a) the trade-off between total bit-rate and robustness; (b) the influence of the high/low priority boundary parameter on the high-priority and low-priority bit-rates; and (c) performance at the decoder in the presence of ATM channel loss.
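The priority split can be sketched as follows: the 8x8 block of DCT coefficients is scanned in zigzag order and divided at a boundary parameter B into the low-frequency, high-priority part (which also drives interframe prediction) and the high-frequency, low-priority remainder. The zigzag rule below is the standard JPEG scan; B and the block contents are illustrative assumptions, not the authors' exact codec.

def zigzag(n=8):
    # (row, col) pairs sorted by anti-diagonal, alternating scan direction
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def split_priority(coeffs, B):
    order = zigzag(len(coeffs))
    high = [coeffs[r][c] for r, c in order[:B]]  # low freq -> high priority
    low  = [coeffs[r][c] for r, c in order[B:]]  # high freq -> low priority
    return high, low

# a dummy coefficient block with energy concentrated at low frequencies
block = [[max(0, 100 - 12 * (r + c)) for c in range(8)] for r in range(8)]
high, low = split_priority(block, B=10)
assert len(high) == 10 and len(low) == 54

Moving B up shifts coefficients into the protected high-priority stream at the cost of a larger high-priority bit-rate, which is exactly the trade-off in question (b).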
Extended summary form only given. The authors show, through four theorems, the existence of a bijective mapping between representative binary strings and tree structures. Using these proofs, they derive an algorithm for generating representative binary strings directly from commonly used graph data structures. This technique effectively compresses the structure of a tree with n nodes into 2n-1 bits plus the original information contained within the tree.
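The abstract does not give the authors' exact mapping, but one standard bijection between ordered trees and binary strings illustrates the 2n-1 bound: a preorder traversal emits 1 on entering a node and 0 on leaving it, producing 2n bits for n nodes, and the final 0 is always present and can be dropped. A minimal sketch under that assumption, using nested tuples of children as the tree representation:

def encode(tree):
    # 1 on entering the node, recurse into children, 0 on leaving
    return "1" + "".join(encode(c) for c in tree) + "0"

def decode(bits):
    pos = 0
    def node():
        nonlocal pos
        pos += 1                      # consume this node's opening 1
        children = []
        while pos < len(bits) and bits[pos] == "1":
            children.append(node())
        if pos < len(bits):
            pos += 1                  # consume the closing 0 (the dropped
                                      # final 0 may simply be absent)
        return tuple(children)
    return node()

t = ((), ((), ()), ())                # root with 3 children, 6 nodes total
assert len(encode(t)) == 12           # 2n bits
assert decode(encode(t)[:-1]) == t    # still decodable from 2n-1 bits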
A new, simple, extremely fast, locally adaptive data compression algorithm of the LZ77 class is presented. The algorithm, called LZRW1, almost halves the size of text files, uses 16 K of memory, and requires about 13 machine instructions to compress and about 4 instructions to decompress each byte. This results in speeds of about 77 K bytes and 250 K bytes per second, respectively, on a one-MIPS machine. The algorithm runs in linear time and has a good worst-case running time. It adapts quickly and has a negligible initialization overhead, making it fast and efficient for small as well as large blocks of data.
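A much-simplified sketch of the LZRW1 idea: a single hash table maps the next three bytes to their most recent position, a hit inside the window becomes an (offset, length) copy token, and a miss becomes a literal. Real LZRW1 packs control bits with 12-bit offsets and 4-bit lengths; the tokens below stay as Python tuples for clarity, so sizes and speeds differ from the figures quoted above.

WINDOW, MIN_LEN, MAX_LEN = 4095, 3, 18

def h(b):                             # hash of a 3-byte group
    return (b[0] * 40543 ^ b[1] * 71 ^ b[2]) & 0xFFF

def compress(data: bytes):
    table, out, i = {}, [], 0
    while i < len(data):
        token = None
        if i + MIN_LEN <= len(data):
            key = h(data[i:i + 3])
            j = table.get(key)        # most recent position with this hash
            if j is not None and 0 < i - j <= WINDOW:
                length = 0
                while (length < MAX_LEN and i + length < len(data)
                       and data[j + length] == data[i + length]):
                    length += 1       # verify the match byte by byte
                if length >= MIN_LEN:
                    token = (i - j, length)
            table[key] = i            # adapt: remember the current position
        if token:
            out.append(token)
            i += token[1]
        else:
            out.append(data[i])       # literal byte
            i += 1
    return out

def decompress(tokens) -> bytes:
    buf = bytearray()
    for t in tokens:
        if isinstance(t, tuple):
            off, length = t
            for _ in range(length):   # byte-wise copy handles overlaps
                buf.append(buf[-off])
        else:
            buf.append(t)
    return bytes(buf)

s = b"the quick brown fox jumps over the quick brown dog" * 20
assert decompress(compress(s)) == s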