Data compression can efficiently reduce memory and persistent storage costs, which is highly desirable in modern computing systems such as enterprise, cloud, and High-Performance Computing (HPC) environments. However, the main challenges facing existing data compressors are insufficient compression ratios and low throughput. This paper focuses on improving the compression ratio of state-of-the-art lossy compression algorithms from the perspective of the application, and it also exploits application characteristics to reduce runtime overhead. To this end, we explore the idea on Adaptive Mesh Refinement (AMR), a computational technique widely adopted to reduce the amount of computation and memory required in scientific simulations. We propose Level Associated Mapping-based Preconditioning (LAMP) to improve the storage efficiency of AMR applications. The main idea is twofold. First, we exploit the high similarity among adjacent AMR levels to precondition the data prior to compression. Second, AMR has a characteristic grid structure, which we use to rebuild a level-associated mapping table that significantly reduces the runtime overhead of LAMP. Thanks to optimization techniques from General Matrix Multiplication (GEMM), we further accelerate the process of rebuilding the AMR hierarchy for LAMP. We also block multiple adjacent coordinates within a box to further improve cache locality. The experimental results show that the compression ratios of LAMP improve by up to 63.8% compared to compressing the data directly.
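A minimal sketch of the preconditioning idea, assuming a toy two-level AMR hierarchy in NumPy where each fine box is predicted from its parent coarse block by nearest-neighbor prolongation; the names (`prolong`, `precondition_level`) and the factor-of-2 refinement ratio are illustrative assumptions, not the paper's LAMP implementation.

```python
import numpy as np

def prolong(coarse, ratio=2):
    """Nearest-neighbor prolongation of a coarse 2-D block to the fine level."""
    return np.kron(coarse, np.ones((ratio, ratio), dtype=coarse.dtype))

def precondition_level(fine, coarse, ratio=2):
    """Replace fine-level values by residuals against the prolonged coarse level.

    Adjacent AMR levels are highly similar, so the residual is close to zero
    and compresses much better than the raw fine-level data.
    """
    return fine - prolong(coarse, ratio)      # store this residual instead of `fine`

def reconstruct_level(residual, coarse, ratio=2):
    """Inverse of the preconditioning step, used by the decompressor."""
    return residual + prolong(coarse, ratio)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coarse = rng.standard_normal((64, 64))
    fine = prolong(coarse) + 0.01 * rng.standard_normal((128, 128))  # similar levels
    res = precondition_level(fine, coarse)
    assert np.allclose(reconstruct_level(res, coarse), fine)
    print("residual std:", res.std(), "vs raw std:", fine.std())
```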
This paper describes a new algorithm for electrocardiogram (ECG) compression. The main goal of the algorithm is to reduce the bit rate while keeping the distortion of the reconstructed signal at a clinically acceptable level. It is based on compressing the linearly predicted residuals of the wavelet coefficients of the signal. The input signal is divided into blocks, and each block goes through a discrete wavelet transform; the resulting wavelet coefficients are then linearly predicted. In this way, a set of uncorrelated transform-domain signals is obtained. These signals are compressed using various coding methods, including modified run-length and Huffman coding techniques. The error between the wavelet coefficients and the predicted coefficients is minimized in order to obtain the best predictor. The method is assessed using the percent root-mean-square difference (PRD) and visual inspection. This compression method achieves a small PRD and a high compression ratio with low implementation complexity. Finally, we compare the performance of the ECG compression algorithm on data from the MIT-BIH database. (C) 2003 Elsevier Inc. All rights reserved.
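A minimal sketch of the two core steps, assuming a single-level Haar DWT and a least-squares linear predictor fitted per coefficient band; the actual paper's wavelet choice, predictor order, and downstream run-length/Huffman coding are not reproduced here.

```python
import numpy as np

def haar_dwt(block):
    """One-level Haar DWT of an even-length block -> (approximation, detail)."""
    x = block.reshape(-1, 2)
    approx = (x[:, 0] + x[:, 1]) / np.sqrt(2)
    detail = (x[:, 0] - x[:, 1]) / np.sqrt(2)
    return approx, detail

def lp_residual(coeffs, order=1):
    """Residual of a least-squares linear predictor applied to a coefficient band."""
    if len(coeffs) <= order:
        return coeffs.copy()
    # Regression matrix of past coefficients; fit the predictor that minimizes the error.
    X = np.column_stack([coeffs[order - k - 1:len(coeffs) - k - 1] for k in range(order)])
    y = coeffs[order:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = coeffs.copy()
    resid[order:] = y - X @ w        # these residuals are what gets entropy coded
    return resid

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ecg_block = np.cumsum(rng.standard_normal(512))   # stand-in for one ECG block
    a, d = haar_dwt(ecg_block)
    print("approx std", a.std(), "-> residual std", lp_residual(a).std())
```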
The following topics are dealt with: error measures; transform-domain processing; preprocessing of color images; the principal components transform; the discrete wavelet transform; wavelet decomposition and reconstruction; vector quantization; distortion measures; a codebook design algorithm; design and implementation of compression algorithms; preprocessing effects; quality assessment of the compressed images.
Compression of electrocardiographic (ECG) data is an important requirement for developing efficient telecardiology applications. This study describes an offline compression technique implemented for ECG transmission over a Global System for Mobile communications (GSM) network, for preliminary evaluation of a patient's cardiac condition in non-critical situations. Short-duration (5-6 beats) ECG data from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database are used for the trial. The compression algorithm is based on direct processing of the ECG samples in four major steps: down-sampling of the dataset; normalising inter-sample differences; grouping for sign and magnitude encoding with zero-element compression; and finally, conversion of the bytes into corresponding 8-bit American Standard Code for Information Interchange (ASCII) characters. The software developed at the patient-side computer also converts the compressed data file into a formatted sequence of short text messages (SMSs). Using a dedicated GSM module, these messages are delivered to the mobile phone of the remote cardiologist. The received SMSs are downloaded to the receiving computer for concatenation and decompression to recover the original ECG for visual or automated investigation. Average percentage root-mean-squared difference and compression ratio values of 43.54 and 1.73, respectively, are obtained with the MIT-BIH arrhythmia data. The proposed technique is useful for rural clinics in India for preliminary-level cardiac investigation.
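A toy sketch of the sample-domain pipeline described above, assuming a down-sampling factor of 2 and a simple token scheme for sign/magnitude and zero-run coding; the exact grouping, normalization, and SMS packing used by the authors are not specified here and are assumptions.

```python
import numpy as np

def compress_block(samples, down=2):
    """Down-sample, take inter-sample differences, and run-length-code zero differences."""
    x = np.asarray(samples, dtype=np.int64)[::down]        # down-sample the dataset
    diffs = np.diff(x, prepend=x[0])                        # inter-sample differences
    tokens, zero_run = [], 0
    for d in diffs:                                         # sign/magnitude + zero-element compression
        if d == 0:
            zero_run += 1
            continue
        if zero_run:
            tokens.append(("Z", zero_run))                  # run of zero differences
            zero_run = 0
        tokens.append(("+" if d > 0 else "-", abs(int(d))))
    if zero_run:
        tokens.append(("Z", zero_run))
    return tokens

def to_ascii(tokens):
    """Illustrative final step: pack tokens into a printable ASCII string for SMS transport."""
    return " ".join(f"{sign}{mag}" for sign, mag in tokens)
```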
Convolutional neural networks (CNNs) have been widely used in research on multispectral image compression, but they still face the challenge of extracting spectral features effectively while preserving spatial features intact. In this article, a novel spectral-spatial feature extraction method with a polydirectional CNN (SSPC) is proposed for multispectral image compression. First, the feature extraction network is divided into three parallel modules. The spectral module obtains spectral features along the spectral direction independently, while two spatial modules simultaneously extract spatial features along two different spatial directions. All the features are then fused together, followed by downsampling to reduce the size of the feature maps. To control the tradeoff between rate loss and distortion, a rate-distortion optimizer is added to the network. In addition, quantization and entropy encoding are applied in turn, converting the data into a bit stream. The decoder is structurally symmetric to the encoder, which makes it convenient to structure the framework for recovering the image. For comparison, SSPC is tested against JPEG2000 and three-dimensional (3-D) SPIHT on multispectral datasets from the Landsat-8 and WorldView-3 satellites. To further validate the effectiveness of the polydirectional CNN, SSPC is also compared with a conventional CNN-based algorithm. The experimental results show that SSPC outperforms the other methods at the same bit rates, demonstrating the validity of the spectral-spatial feature extraction method with a polydirectional CNN.
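A hedged PyTorch sketch of the three-branch idea: a 1x1 spectral branch plus two direction-oriented spatial branches, fused and then downsampled. Channel widths, kernel shapes, and the `PolydirectionalBlock` name are illustrative assumptions, not the published SSPC architecture.

```python
import torch
import torch.nn as nn

class PolydirectionalBlock(nn.Module):
    """Three parallel feature-extraction branches fused before downsampling."""

    def __init__(self, bands, width=32):
        super().__init__()
        # Spectral branch: 1x1 convolutions mix information across bands only.
        self.spectral = nn.Sequential(nn.Conv2d(bands, width, 1), nn.ReLU())
        # Two spatial branches with direction-oriented kernels (horizontal / vertical).
        self.spatial_h = nn.Sequential(nn.Conv2d(bands, width, (1, 3), padding=(0, 1)), nn.ReLU())
        self.spatial_v = nn.Sequential(nn.Conv2d(bands, width, (3, 1), padding=(1, 0)), nn.ReLU())
        # Fuse the three feature sets, then halve the spatial resolution.
        self.fuse = nn.Conv2d(3 * width, width, 1)
        self.down = nn.Conv2d(width, width, 3, stride=2, padding=1)

    def forward(self, x):                      # x: (batch, bands, H, W)
        feats = torch.cat([self.spectral(x), self.spatial_h(x), self.spatial_v(x)], dim=1)
        return self.down(self.fuse(feats))

# e.g. an 8-band multispectral patch -> fused feature maps at half resolution
y = PolydirectionalBlock(bands=8)(torch.randn(1, 8, 64, 64))   # shape (1, 32, 32, 32)
```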
We implemented a method for compression of the ambulatory ECG that includes average beat subtraction and first differencing of the residual data. Our previous investigations indicated that this method is superior to other compression methods with respect to data rate as a function of mean-squared-error distortion. Based on previous results, we selected a sample rate of 100 samples per second and a quantization step size of 35 μV. These selections allow storage of 24 h of two-channel ECG data in 4 Mbytes of memory with minimum rms distortion. For this sample rate and quantization level, we show that the estimation of beat location and quantizer location can significantly affect compression performance. Improved compression resulted when beats were located with a temporal resolution of 5 ms and coarse quantization was performed in the compression loop. For the 24-h MIT/BIH arrhythmia database, our compression algorithm coded a single channel of ECG data at an average data rate of 174 bits per second.
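A minimal NumPy sketch of average beat subtraction with first-difference coding and the quantizer placed inside the compression loop, assuming beats have already been detected and aligned to a fixed length; the 35 μV step follows the abstract, while the beat-alignment details are omitted.

```python
import numpy as np

STEP_UV = 35.0  # quantizer step size from the abstract, in microvolts

def compress_beats(beats_uv):
    """Average-beat subtraction, then first-difference DPCM with in-loop quantization.

    `beats_uv` is an (n_beats, beat_len) array of aligned beats in microvolts.
    Returns the integer symbols fed to the entropy coder plus the average beat
    needed for reconstruction.
    """
    avg_beat = beats_uv.mean(axis=0)
    residual = beats_uv - avg_beat
    n_beats, beat_len = residual.shape
    symbols = np.zeros((n_beats, beat_len), dtype=np.int32)
    prev = np.zeros(n_beats)                     # decoder-side reconstructed previous sample
    for i in range(beat_len):
        q = np.round((residual[:, i] - prev) / STEP_UV).astype(np.int32)
        symbols[:, i] = q
        prev = prev + q * STEP_UV                # quantize in the loop to avoid error build-up
    return symbols, avg_beat

def reconstruct_beats(symbols, avg_beat):
    return np.cumsum(symbols * STEP_UV, axis=1) + avg_beat
```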
Compression algorithms are widely used to reduce data size and improve application performance. Nevertheless, data compression has a computational cost that can limit its use. GPUs can be leveraged to reduce compression time. However, existing GPU-based compression libraries expect the data to be compressed to reside in GPU memory, although it is usually stored in CPU memory. Additionally, the setup time of GPUs can be a problem when compressing small data sizes. In this paper, we implement a new GPU-based compression library. Contrary to existing ones, our library uses data located in CPU memory. Performance results show that, for the same compression algorithms, GPUs are beneficial for larger data sizes, whereas smaller data sizes are compressed faster using CPUs. Therefore, we enhance our proposal with Hybrid-Smash: a heterogeneous CPU-GPU compression library that transparently uses CPU or GPU compression depending on the data size, thus improving compression for any data size.
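A minimal sketch of the size-based dispatch idea, with zlib standing in for the CPU codec and a placeholder callable for the GPU path; the threshold value, the `hybrid_compress` name, and the GPU hook are assumptions, since the abstract does not expose the Hybrid-Smash API.

```python
import zlib
from typing import Callable, Optional

# Below this size, GPU setup and host-to-device transfer outweigh the throughput gain
# (illustrative threshold, not the library's actual value).
GPU_THRESHOLD_BYTES = 4 * 1024 * 1024

def hybrid_compress(data: bytes,
                    gpu_compress: Optional[Callable[[bytes], bytes]] = None) -> bytes:
    """Transparently pick a CPU or GPU backend based on the input size.

    `gpu_compress` is a placeholder for a GPU codec that accepts host (CPU)
    memory; when it is absent or the input is small, fall back to zlib on the CPU.
    """
    if gpu_compress is not None and len(data) >= GPU_THRESHOLD_BYTES:
        return gpu_compress(data)
    return zlib.compress(data, level=6)
```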
This paper presents a detailed analysis of various approaches to hardware-implemented compression algorithm dictionaries, including our optimized method. To obtain comprehensive and detailed results, we introduce a method for the fair comparison of programmable hardware architectures that shows the benefits of our approach in terms of logic resources, frequency, and latency. We compared two commonly used methods with our optimized method, which was found to be better suited to maintaining memory content via (in)valid bits in any mid-density memory structures implemented in programmable hardware such as FPGAs (Field-Programmable Gate Arrays). The benefits of our new method, based on a "Distributed Memory" technique, are shown on a particular example of a compression dictionary, but the method is also suitable for other use cases requiring fast (re-)initialization of the memory structures before each run of an algorithm with minimum time and logic resource consumption. The performance evaluation of the respective approaches was carried out with the Xilinx ISE and Xilinx Vivado toolkits for the Virtex-7 FPGA family. However, the proposed approach is compatible with 99% of modern FPGAs.
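For intuition only, a behavioral Python model of one common way to get fast dictionary (re)initialization via validity tags: each entry carries a run identifier, so "clearing" the dictionary is a single counter increment rather than a sweep over memory. This is a software analogue of valid-bit management in general, not the paper's Distributed Memory hardware implementation.

```python
class RunTaggedDictionary:
    """Compression dictionary whose entries are invalidated in O(1) by bumping a
    run counter instead of clearing the underlying memory."""

    def __init__(self, size=4096):
        self.keys = [None] * size
        self.values = [None] * size
        self.tags = [0] * size        # run id stored with each entry ("valid bit" analogue)
        self.run = 1                  # entry i is valid iff tags[i] == run
        self.size = size

    def reset(self):
        self.run += 1                 # invalidates every entry without touching memory

    def insert(self, key, value):
        i = hash(key) % self.size
        self.keys[i], self.values[i], self.tags[i] = key, value, self.run

    def lookup(self, key):
        i = hash(key) % self.size
        if self.tags[i] == self.run and self.keys[i] == key:
            return self.values[i]
        return None
```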
It is demonstrated that a variant of the Lempel-Ziv data compression algorithm where the data base is held fixed and is reused to encode successive strings of incoming input symbols is optimal, provided that the source is stationary and satisfies certain conditions (e.g., a finite-order Markov source).
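A small sketch of the fixed-database idea: successive input strings are encoded as (offset, length) matches into a database that is never updated. The greedy longest-match search and the token format are illustrative assumptions, not the paper's exact variant or its optimality conditions.

```python
def lz_fixed_db_encode(database: bytes, text: bytes, min_match=3):
    """Encode `text` as matches into a fixed, reused database, with literal fallback."""
    tokens, i = [], 0
    while i < len(text):
        best_off, best_len = 0, 0
        # Greedy longest-match search over the fixed database (quadratic, for clarity).
        for off in range(len(database)):
            length = 0
            while (length < len(text) - i and off + length < len(database)
                   and database[off + length] == text[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = off, length
        if best_len >= min_match:
            tokens.append(("match", best_off, best_len))
            i += best_len
        else:
            tokens.append(("lit", text[i]))
            i += 1
    return tokens

def lz_fixed_db_decode(database: bytes, tokens) -> bytes:
    out = bytearray()
    for t in tokens:
        if t[0] == "match":
            _, off, length = t
            out += database[off:off + length]
        else:
            out.append(t[1])
    return bytes(out)
```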
Lossy compression algorithms trade bits for quality, aiming to reduce as much as possible the bitrate needed to represent the original source (or set of sources) while preserving the source quality. In this letter, we propose a novel paradigm of compression algorithms aimed at minimizing the information loss perceived by the final user, instead of the actual source quality loss, under compression rate constraints. As our main contributions, we first introduce the concept of perceived information (PI), which reflects the information perceived by a given user experiencing a data collection and is evaluated as the volume spanned by the source features in a personalized latent space. We then formalize the rate-PI optimization problem and propose an algorithm to solve this compression problem. Finally, we validate our algorithm against benchmark solutions with simulation results, showing the gain of taking users' preferences into account while also maximizing the perceived information in the feature domain.
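A hedged sketch of how the "volume spanned by features in a personalized latent space" notion could be modeled: PI as the log-determinant of the Gram matrix of user-weighted feature vectors, with a greedy selection of sources under a rate budget. The weighting scheme, volume measure, and greedy solver are assumptions for illustration, not the letter's exact formulation or algorithm.

```python
import numpy as np

def perceived_information(features, user_weights, eps=1e-6):
    """Log-volume spanned by user-weighted source features in the latent space."""
    F = features * user_weights            # personalize the latent space per user
    gram = F @ F.T
    return 0.5 * np.linalg.slogdet(gram + eps * np.eye(len(F)))[1]

def greedy_rate_pi(features, user_weights, rates, budget):
    """Greedily pick sources maximizing PI gain per bit under a total rate budget."""
    chosen, spent = [], 0.0
    while True:
        best, best_score = None, float("-inf")
        for i in range(len(features)):
            if i in chosen or spent + rates[i] > budget:
                continue
            gain = (perceived_information(features[chosen + [i]], user_weights)
                    - (perceived_information(features[chosen], user_weights) if chosen else 0.0))
            if gain / rates[i] > best_score:
                best, best_score = i, gain / rates[i]
        if best is None:
            return chosen
        chosen.append(best)
        spent += rates[best]
```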