The problem of selecting compression and error-control coding algorithms for digital image transmission over communication channels is considered. Criteria for the output image quality are discussed, and a technique for modeling faults in communication channels is proposed. Using the example of a compression method based on hierarchical grid interpolation, it is shown that the noise immunity of compressed images can be increased significantly.
The development of space telemetry technology has created a need for large-capacity memory in solid-state recorders; data compression has therefore become increasingly important. The feasibility and potential of compressing telemetry data are examined by analyzing the statistical characteristics of actual telemetry data recovered from recorders. To address the shortcomings of present data formats for multi-channel telemetry data acquisition systems, this paper introduces a data packet structure and a real-time compression algorithm suited to low-complexity hardware design. The principles and implementation of data packet compression are described. Simulation results show that the technique meets the requirements of multi-channel real-time data compression, achieving a high compression ratio and a fast compression speed, and thus has great application value.
XML has already become the de facto standard for specifying and exchanging data on the Web. However, XML is by nature verbose, and thus XML documents are usually large in size, a factor that hinders its practical usage, since it substantially increases the costs of storing, processing, and exchanging data. In order to tackle this problem, many XML-specific compression systems, such as XMill, XGrind, XMLPPM, and Millau, have recently been proposed. However, these systems usually suffer from two inadequacies: they either sacrifice performance in terms of compression ratio and execution time in order to support a limited range of queries, or perform full decompression prior to processing queries over compressed documents. In this paper, we address the above problems by exploiting the information provided by a Document Type Definition (DTD) associated with an XML document. We show that a DTD is able to facilitate better compression as well as generate more usable compressed data to support querying. We present the architecture of XCQ, a compression and querying tool for handling XML data. XCQ is based on a novel technique we have developed, called DTD Tree and SAX Event Stream Parsing (DSP). The documents compressed by XCQ are stored in Partitioned Path-Based Grouping (PPG) data streams, which are equipped with a Block Statistics Signature (BSS) indexing scheme. The indexed PPG data streams support the processing of XML queries that involve selection and aggregation, without the need for full decompression. In order to study the compression performance of XCQ, we carry out comprehensive experiments over a set of XML benchmark datasets.
A 65 nm CMOS integrated-circuit implementation of a bio-physiological signal compression device is presented, reporting exceptionally low power and extremely low silicon area cost relative to the state of the art. A novel 'xor-log2-sub-band' data compression scheme is evaluated, achieving modest compression at very low resource cost. Designed with the intent of being 'the simplest useful compression algorithm', the outcome proves very favourable wherever power must be saved by trading off compression effort against data storage capacity or data transmission power, even where more complex algorithms can deliver higher compression ratios. A VLSI design and a fabricated integrated-circuit implementation are presented, along with estimated performance gains and efficiency measures for various biomedical use cases. Power costs as low as 1.2 pJ per sample-bit are suggested at a 10 kSa/s data rate under a power-gating scenario, dropping to 250 fJ/bit at continuous conversion rates of 5 MSa/s. This is achieved with a diminutive circuit area of 155 µm². Both power and area appear to be state-of-the-art in terms of compression versus resource cost, which benefits system optimization.
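Although the chip's exact bitstream format is not given in this abstract, the 'xor-log2-sub-band' idea can be sketched roughly as follows: XOR each sample with its predecessor, then code the residual as a small sub-band index (its significant-bit count) plus that many raw bits. All names and field widths below are illustrative assumptions, not the fabricated design.

```python
# Hypothetical sketch of an "xor-log2-sub-band" style encoder.
# The predictor is simply the previous sample (initially 0); the XOR
# residual is small for slowly varying bio-signals, so its log2
# "sub-band" (bit length) is usually short. Field widths are guesses.

def encode(samples, width=12, band_bits=4):
    """Return the total encoded bit count for a stream of unsigned samples."""
    prev, total = 0, 0
    for s in samples:
        residual = s ^ prev          # XOR delta with the previous sample
        n = residual.bit_length()    # log2 sub-band of the residual
        total += band_bits + n       # sub-band index + significant bits
        prev = s
    return total

signal = [100, 101, 103, 102, 102, 104]   # slowly varying toy bio-signal
print(encode(signal), len(signal) * 12)   # encoded bits vs raw 12-bit samples
```

For correlated inputs the encoded size stays well below the raw sample width, which is the trade the abstract describes: modest compression for almost no logic.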
In the blossoming age of Next Generation Sequencing (NGS) technologies, genome sequencing has become much easier and more affordable. The large number of enormous genomic sequences obtained demands huge storage space if the data are to be kept for analysis. Since storage cost has become an impediment facing biologists, there is a constant need for software that compresses genomic sequences efficiently. Most general-purpose compression algorithms do not exploit the inherent redundancies of genomic sequences, which is the reason for the success and popularity of reference-based compression algorithms. In this research, a new reference-based lossless compression technique is proposed for deoxyribonucleic acid (DNA) sequences stored in FASTA format, which can act as a layer above gzip compression. Several experiments were performed to evaluate this technique, and the results show that it obtains promising compression ratios, saving up to 99.9% of space and reaching a gain of 80% for some plant genomes. The proposed technique also performs the compression in acceptable time, saving more than 50% of the time taken by ERGC in most experiments.
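A minimal sketch of the 'layer above gzip' idea, assuming a simple positional match against the reference (the paper's actual scheme is not specified in this abstract): bases that agree with the reference are replaced by a placeholder, leaving a highly repetitive stream that gzip then compresses far better than the raw sequence.

```python
# Illustrative reference-based preprocessing layered above gzip.
# Positions where the target matches the reference become '=' runs,
# which gzip's LZ77 stage collapses almost entirely.
import gzip
import random

def ref_delta(reference, target):
    """Replace bases that match the reference at the same position with '='."""
    out = []
    for i, base in enumerate(target):
        if i < len(reference) and reference[i] == base:
            out.append('=')
        else:
            out.append(base)
    return ''.join(out)

random.seed(0)
ref = ''.join(random.choice("ACGT") for _ in range(4096))
tgt = ref[:4000] + ''.join(random.choice("ACGT") for _ in range(96))  # near-identical genome

plain = gzip.compress(tgt.encode())                  # gzip alone
delta = gzip.compress(ref_delta(ref, tgt).encode())  # reference layer + gzip
print(len(delta), len(plain))
```

The decoder would reverse the substitution using the same reference before gunzipping, keeping the pipeline lossless.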
In this paper, we propose low-complexity joint-way compression algorithms with Tensor-Ring (TR) decomposition and weight sharing to further lower the storage and computational requirements of low-density parity-check (LDPC) neural decoding. Compared with Tensor-Train (TT) decomposition, TR decomposition is more flexible in the selection of ranks and is also conducive to the use of rank optimization algorithms. In particular, we use TR decomposition to decompose not only the weight parameter matrix of the Neural Normalized Min-Sum (NNMS)+ algorithm, but also the message matrix transmitted between variable nodes and check nodes. Furthermore, we combine TR decomposition with a weight-sharing algorithm, called joint-way compression, to further lower the complexity of the LDPC neural decoding algorithm. We show that the joint-way compression algorithm achieves better compression efficiency than a single compression algorithm while maintaining comparable bit error rate (BER) performance. From the numerical experiments, we found that all the compression algorithms, with appropriate selection of ranks, give almost no performance degradation, and that the TRwm-ssNNMS+ algorithm, which combines spatial sharing and TR decomposition of both the weight and message matrices, has the best compression efficiency. Compared with our TT-NNMS+ algorithm proposed in Yuanhui et al. (2022), the number of parameters is reduced by about 70 times and the number of multiplications is reduced by about 6 times.
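To make the parameter-count argument concrete, here is a small, hedged sketch of tensor-ring reconstruction; the ranks and tensor shapes are illustrative toys, not those used in the paper.

```python
import numpy as np

# Illustrative tensor-ring (TR) storage: a tensor of shape (n1, n2, n3)
# is kept as three cores G_k of shape (r, n_k, r) and rebuilt as
# T[i,j,k] = trace(G1[:,i,:] @ G2[:,j,:] @ G3[:,k,:]).

def tr_reconstruct(cores):
    t = cores[0]
    for c in cores[1:]:
        # chain the cores: (r, ..., r') x (r', n, r'') -> (r, ..., n, r'')
        t = np.tensordot(t, c, axes=([-1], [0]))
    # close the ring: trace over the first and last rank indices
    return np.trace(t, axis1=0, axis2=-1)

rng = np.random.default_rng(0)
shape, r = (8, 8, 8), 3
cores = [rng.standard_normal((r, n, r)) for n in shape]
full = tr_reconstruct(cores)

tr_params = sum(c.size for c in cores)       # 3 * (3*8*3) = 216 parameters
print(full.shape, tr_params, int(np.prod(shape)))
```

Even in this toy, the TR cores hold fewer parameters than the dense tensor; at the matrix sizes of a neural LDPC decoder the gap is what yields the roughly 70x reduction the abstract reports.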
The Lempel-Ziv-Welch (LZW) technique for text compression has been successfully adapted to lossless image compression, as in GIF. Recently, a new class of text compression based on the Burrows-Wheeler Transform (BWT) has been developed, which gives promising results for text compression. Here, we propose a sub-block interchange lossless compression method that belongs to this block-sorting class. Our method outperforms GIF in compression ratio and BWT in compression time when tested on 512×512-pixel 8-bit grey-scale images. A comparison of compression ratios and times with GIF, BWT, and other popular LZ-based compression methods is discussed.
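The block-sorting core shared by this class of methods is the Burrows-Wheeler transform itself, which can be sketched minimally as follows (real codecs use suffix arrays rather than explicit rotations):

```python
# Minimal Burrows-Wheeler transform: sort all rotations of the block and
# keep the last column, which clusters similar symbols so a subsequent
# move-to-front and entropy coder compress well.

def bwt(block):
    s = block + "\x00"                       # unique sentinel, smallest symbol
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return ''.join(row[-1] for row in rotations)

print(repr(bwt("banana")))                   # 'annb\x00aa' — the three a's cluster
```

The transform is invertible, which is what keeps BWT-based image coders, like the sub-block method above, lossless.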
Remote Healthcare Monitoring Systems (RHMs) that use ECG signals are very effective tools for the early diagnosis of various heart conditions. However, these systems still face a problem that reduces their efficiency: energy consumption in wearable devices, which are battery-powered and have limited storage. This paper presents a novel algorithm for compressing ECG signals to reduce energy consumption in RHMs. The proposed algorithm uses discrete Krawtchouk moments as a feature extractor to obtain features from the ECG signal; the accelerated Ant Lion Optimizer (AALO) then selects the optimum features that achieve the best-reconstructed signal. The algorithm is extensively validated on two benchmark datasets: MIT-BIH arrhythmia and ECG-ID. It achieves average values of compression ratio (CR), percent root-mean-square difference (PRD), signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and quality score (QS) of 15.56, 0.69, 44.52, 49.04, and 23.92, respectively. A comparison on these performance metrics demonstrates the advantages of the proposed compression algorithm over recent algorithms. It was also tested and compared against other existing algorithms with respect to processing time, compression speed, and computational efficiency; the results show that it clearly outperforms them (processing time = 6.89 s, compression speed = 4640.19 bps, computational efficiency = 2.95). The results also indicate that the proposed algorithm reduces energy consumption in a wearable device by decreasing the wake-up time by 3600 ms.
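The quality figures quoted above follow standard definitions in the ECG-compression literature, which can be computed as below; the paper may use slight variants (e.g. a mean-removed PRD), so this is a sketch rather than its exact evaluation code.

```python
import math

# Standard ECG compression quality metrics:
#   CR  = original bits / compressed bits
#   PRD = 100 * sqrt(sum((x - xr)^2) / sum(x^2))
#   SNR = 10 * log10(sum(x^2) / sum((x - xr)^2))
#   QS  = CR / PRD

def metrics(x, xr, orig_bits, comp_bits):
    err = sum((a - b) ** 2 for a, b in zip(x, xr))
    sig = sum(a ** 2 for a in x)
    cr  = orig_bits / comp_bits
    prd = 100.0 * math.sqrt(err / sig)
    snr = 10.0 * math.log10(sig / err)
    return cr, prd, snr, cr / prd

# Toy signal and reconstruction, purely for illustration.
x  = [1.0, 2.0, 3.0, 4.0]
xr = [1.0, 2.1, 2.9, 4.0]
cr, prd, snr, qs = metrics(x, xr, orig_bits=44 * len(x), comp_bits=44)
print(round(cr, 2), round(prd, 2), round(snr, 2), round(qs, 2))
```

QS rewards simultaneously high CR and low PRD, which is why it is the single number most often used to rank such compressors.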
To deal with the large volume of data produced by hyperspectral sensors, the Canadian Space Agency (CSA) has developed and patented two near-lossless data compression algorithms for use onboard a hyperspectral satellite: successive approximation multi-stage vector quantization (SAMVQ) and hierarchical self-organizing cluster vector quantization (HSOCVQ). This paper describes the two compression algorithms and demonstrates their near-lossless behaviour. The compression error introduced by the two algorithms was compared with the intrinsic noise of the original data, which is caused by instrument noise and other sources such as calibration and atmospheric correction errors. The experimental results showed that the compression error was no larger than the intrinsic noise of the original data when a test data set was compressed at a ratio of 20:1. The overall noise in the reconstructed data, which contains both the intrinsic noise and the compression error, is even smaller than the intrinsic noise when the data is compressed using SAMVQ. A multi-disciplinary user acceptability study has been carried out to evaluate the impact of the two compression algorithms on hyperspectral data applications, and this paper briefly summarizes its results. A prototype hardware compressor implementing the two algorithms has been built using field programmable gate arrays (FPGAs) and benchmarked. The compression ratio and fidelity achieved by the hardware compressor are similar to those obtained by software simulation.
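The successive-approximation idea behind SAMVQ can be illustrated with a generic multi-stage (residual) vector quantizer: each stage quantizes the residual left by the previous one, so fidelity improves stage by stage. The codebooks below are random toys, not the trained codebooks of SAMVQ or HSOCVQ.

```python
import numpy as np

# Generic multi-stage residual vector quantizer: stage k picks the nearest
# codeword to the current residual, subtracts it, and passes what is left
# to stage k+1. The transmitted data are just the per-stage indices.

def msvq_encode(vectors, codebooks):
    residual, indices = vectors.copy(), []
    for cb in codebooks:                                  # one codebook per stage
        d = ((residual[:, None, :] - cb[None]) ** 2).sum(-1)
        idx = d.argmin(1)                                 # nearest codeword per vector
        indices.append(idx)
        residual -= cb[idx]                               # residual for the next stage
    return indices, residual                              # residual = remaining error

rng = np.random.default_rng(1)
data = rng.standard_normal((100, 8))                      # toy "spectra"
# Zero row guarantees a stage can never increase the residual.
books = [np.vstack([np.zeros(8), rng.standard_normal((15, 8)) * s])
         for s in (1.0, 0.5, 0.25)]
idx, res = msvq_encode(data, books)
print(np.linalg.norm(res) / np.linalg.norm(data))         # remaining error fraction
```

Stopping after fewer stages trades fidelity for rate, which is the "successive approximation" knob the onboard compressor exploits.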
This paper presents some new techniques of spectral and spatial decorrelation in lossless data compression of remotely sensed imagery. These techniques provide methods to efficiently compute the optimal band combination and band ordering based on the statistical properties of Landsat-TM data. Experiments on several Landsat-TM images show that using both the spectral and the spatial nature of the remotely sensed data results in significant improvement over spatial decorrelation alone. These techniques result in higher compression ratios and are computationally inexpensive.
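The benefit of spectral decorrelation can be illustrated with a toy experiment: predicting a band from its spectrally adjacent band leaves a lower-entropy residual than spatial prediction alone. The synthetic "bands" below merely stand in for correlated Landsat-TM channels.

```python
import numpy as np

# Compare the zeroth-order entropy of two prediction residuals:
# a horizontal spatial difference within one band, versus the
# difference between two spectrally correlated bands.

def entropy(a):
    """Zeroth-order entropy (bits/symbol) of an integer array."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
scene = rng.integers(0, 200, (64, 64))           # shared underlying radiance
band1 = scene + rng.integers(0, 4, (64, 64))     # two highly correlated bands
band2 = scene + rng.integers(0, 4, (64, 64))

spatial  = np.diff(band2, axis=1)                # spatial-only prediction residual
spectral = band2 - band1                         # inter-band prediction residual
print(entropy(spectral), entropy(spatial))       # spectral residual is cheaper to code
```

Choosing which band predicts which, and in what order, is exactly the band-combination and band-ordering problem the techniques above compute optimally from the data statistics.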