According to Cisco's Visual Networking Index forecast, global mobile data traffic grew 2.3-fold in 2011 and has more than doubled for the fourth year in a row. This has entailed expensive network capacity upgrades, which reduce cellular network profitability and drive the need for new data reduction features to decrease upgrade costs. We tested the data reduction potential of the well-known ZDelta compression algorithm by Trendafilov et al. on modern web application sites. The results indicate a high data reduction potential, averaging 34%. Further, the additional latency is less than 50 ms for file sizes of up to 350 KB.
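The abstract does not show how delta compression is applied in practice. As an illustrative sketch only (not the paper's ZDelta implementation), Python's zlib can approximate the same idea by compressing a new page version against a previously cached version supplied as a preset dictionary:

```python
import zlib

def delta_compress(data: bytes, reference: bytes) -> bytes:
    """Compress `data` using `reference` as a preset dictionary.

    This mimics the core idea of delta compression: bytes shared
    with the reference (e.g., a cached earlier page version) are
    encoded as cheap back-references instead of literals.
    """
    comp = zlib.compressobj(level=9, zdict=reference)
    return comp.compress(data) + comp.flush()

def delta_decompress(blob: bytes, reference: bytes) -> bytes:
    decomp = zlib.decompressobj(zdict=reference)
    return decomp.decompress(blob) + decomp.flush()

# Hypothetical page versions: only the price changed.
old_page = b"<html><head><title>Shop</title></head><body>Price: 10 EUR</body></html>"
new_page = b"<html><head><title>Shop</title></head><body>Price: 12 EUR</body></html>"

delta = delta_compress(new_page, old_page)
plain = zlib.compress(new_page, 9)
print(f"delta: {len(delta)} bytes, plain: {len(plain)} bytes")
assert delta_decompress(delta, old_page) == new_page
```

Because nearly all of the new page already exists in the reference, the delta-encoded form is far smaller than compressing the page in isolation, which is the effect the 34% average reduction rests on.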
ISBN: 9781467352512 (Print)
Motivated by the characteristics of fixed-length short data transmission in certain specialized industries, a lossless compression algorithm for fixed-length short data packets is presented. An iterative procedure explores the rules rightward and downward from the source data packet, and a compression dictionary is obtained. This overcomes the shortcoming of traditional compression algorithms, which compress data only by file and cannot compress short data effectively by packet. Only the compressed packet, not the compression dictionary, is transferred over the network. The experimental results show that the proposed algorithm also overcomes the shortcoming that the Lempel-Ziv-Welch (LZW) algorithm cannot compress data packet by packet. This lossless compression algorithm can effectively compress fixed-length short data with similar structure and massive repetition, realizing effective and secure compression and transfer of packets.
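The abstract does not give the paper's exact dictionary-building rules, but the core idea of a pre-shared dictionary that never travels over the network can be sketched with an LZW-style coder whose table is trained offline on representative packets (the training loop and the 4096-entry cap below are illustrative assumptions):

```python
def build_dictionary(sample_packets):
    """Pre-build an LZW dictionary from representative packets.

    Both sender and receiver hold this table, so only the
    compressed codes travel over the network; the dictionary
    itself is never transmitted.
    """
    table = {bytes([i]): i for i in range(256)}
    for pkt in sample_packets:
        w = b""
        for b in pkt:
            wc = w + bytes([b])
            if wc in table:
                w = wc
            else:
                if len(table) < 4096:  # illustrative size cap
                    table[wc] = len(table)
                w = bytes([b])
    return table

def lzw_compress(packet, table):
    """Encode one packet against a *frozen* shared dictionary."""
    out, w = [], b""
    for b in packet:
        wc = w + bytes([b])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            w = bytes([b])
    if w:
        out.append(table[w])
    return out
```

Because fixed-length packets from the same industrial source share structure, most packet content maps onto long pre-trained dictionary entries, which is what lets short packets compress well even though each one is far too small for file-oriented compressors.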
Electromyography (EMG) signals are used for biomedical applications and clinical diagnosis. This paper presents a system to acquire a high-quality EMG signal and compress it for a high data transfer rate. The EMG signal is acquired with an off-the-shelf analog front-end (AFE) integrated circuit that generates a 24-bit data stream. After acquisition from the front-end amplifier, the signal is processed (filtered and compressed) on an MSP430 microcontroller (MCU). The AFE and MSP430 were chosen because they consume little power, are reliable, and are compatible with each other. Details of the hardware and software are given in later sections of the paper, which also describes the compression algorithm. Compression is applied so that when the data is sent wirelessly (e.g., over Bluetooth or ZigBee), it requires less bandwidth.
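The abstract does not detail the compression algorithm itself. A common lightweight scheme for correlated 24-bit ADC streams on small MCUs is delta encoding followed by variable-length packing; the sketch below is hypothetical and only illustrates that general approach, not the paper's method:

```python
def delta_encode(samples):
    """Delta-encode a stream of 24-bit ADC samples.

    Consecutive EMG samples are highly correlated, so the
    differences are small and pack into far fewer bytes than
    the raw 24-bit values. (Hypothetical scheme; the paper's
    actual algorithm is not described in the abstract.)
    """
    prev, deltas = 0, []
    for s in samples:
        deltas.append(s - prev)
        prev = s
    return deltas

def zigzag_varint(delta):
    """Map a signed delta to bytes: zigzag, then base-128 varint."""
    n = (delta << 1) ^ (delta >> 31)  # zigzag: small magnitude -> small code
    out = bytearray()
    while n >= 0x80:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    out.append(n)
    return bytes(out)

samples = [100000, 100012, 100007, 99998]          # made-up ADC readings
payload = b"".join(zigzag_varint(d) for d in delta_encode(samples))
print(len(payload), "bytes instead of", 3 * len(samples))
```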
In this paper, a template-based ECG compression algorithm for mobile device applications is proposed. An adaptive threshold is applied for reliable R-peak detection, and templates for the PQ interval, QRS complex, and ST interval are generated. The template code that most closely matches the input signal is saved. The determining factor for "closely matched" is the MSE value computed by comparing the templates with each of the intervals (PQ interval, QRS complex, and ST interval) of the real-time input ECG signal. The performance of the algorithm is evaluated using the Massachusetts Institute of Technology-Boston's Beth Israel Hospital Normal Sinus Rhythm Database and different sampling records. The highest compression ratio is 15.2:1. The accuracy of the R-peak detection is 100%.
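A minimal sketch of the MSE-based template matching described above (the template codes and array layout are illustrative assumptions, not the paper's data structures):

```python
import numpy as np

def best_template(segment, templates):
    """Return the code of the template with the lowest MSE against
    an ECG segment (PQ interval, QRS complex, or ST interval).

    `templates` maps a small integer code -> template array of the
    same length as `segment` (illustrative representation).
    """
    segment = np.asarray(segment, dtype=float)
    best_code, best_mse = None, np.inf
    for code, tpl in templates.items():
        mse = np.mean((segment - np.asarray(tpl, dtype=float)) ** 2)
        if mse < best_mse:
            best_code, best_mse = code, mse
    return best_code, best_mse
```

Storing only the winning template code per interval, rather than the samples themselves, is what makes ratios like the reported 15.2:1 attainable.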
Data reduction algorithms operating on GPS tracklogs are widely used in cartographic applications, because the raw dataset usually contains huge redundancy and an unnecessarily high volume of measurement points. Existing methods for compressing tracklogs are either too simplistic or intended for powerful computers, whereas our research focuses on mobile environments where both memory and computational power are very limited. In this paper we introduce a modified version of a well-known line generalization algorithm, which aims to reach the best trade-off between complexity and accuracy in embedded systems, mostly smartphones.
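The abstract does not name the algorithm, but the canonical line generalization method is Ramer-Douglas-Peucker; a baseline sketch of it (not the paper's modified variant) looks like this:

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker line generalization: keep a point
    only if it deviates more than `epsilon` from the chord
    between the current segment's endpoints.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy)

    def dist(p):
        # Perpendicular distance from p to the endpoint chord.
        if norm == 0.0:
            return math.hypot(p[0] - x1, p[1] - y1)
        return abs(dy * (p[0] - x1) - dx * (p[1] - y1)) / norm

    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = dist(points[i])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= epsilon:
        return [points[0], points[-1]]
    # Recurse on the two halves around the farthest point.
    left = rdp(points[:idx + 1], epsilon)
    right = rdp(points[idx:], epsilon)
    return left[:-1] + right
```

The recursive splitting is what a memory-constrained variant would typically rework, since the worst-case stack depth and repeated slicing are costly on smartphones.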
We offer experimental proof that the application of compression to data files can be used as an evaluation technique for the minability of the data. This is based on the fact that the presence of patterns embedded in data influences its compressibility.
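As an illustration of the idea (not the authors' experimental setup), a file's compression ratio can serve as a crude pattern indicator:

```python
import zlib

def compressibility(path: str) -> float:
    """Compression ratio as a rough 'minability' indicator: the
    more regular structure a file contains, the smaller its
    compressed size relative to the original.
    """
    with open(path, "rb") as f:
        raw = f.read()
    if not raw:
        return 0.0
    return 1.0 - len(zlib.compress(raw, 9)) / len(raw)

# A file of pure noise scores near 0; a highly patterned file
# (by the paper's argument, a more minable one) scores closer to 1.
```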
Many programs require more RAM to hold their data than a typical computer has. In principle, both compression and deduplication can trade abundant computing capacity for more available RAM space. This paper comprehensively evaluates the performance behaviour of memory compression and memory deduplication using seven real memory traces. The experimental results yield two implications: (1) memory deduplication greatly outperforms memory block compression; (2) fixed-size partitioning (FSP) achieves the best performance in contrast to content-defined chunking (CDC) and sliding block (SB), and the optimal chunking size of FSP is equal to the size of a memory page. The analysis results in this paper should provide useful insights for designing or implementing systems that require abundant memory resources to enhance system performance.
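A minimal sketch of FSP deduplication as described above, assuming a 4 KB page-sized chunk and SHA-1 fingerprints (the fingerprint choice is an assumption, not taken from the paper):

```python
import hashlib

PAGE_SIZE = 4096  # the optimal FSP chunk size reported above

def dedup_ratio(memory: bytes, chunk_size: int = PAGE_SIZE) -> float:
    """Fixed-size partition (FSP) deduplication: split memory into
    fixed chunks, fingerprint each, and keep one physical copy per
    unique fingerprint. Returns the fraction of space saved.
    """
    seen = set()
    total = unique = 0
    for off in range(0, len(memory), chunk_size):
        chunk = memory[off:off + chunk_size]
        digest = hashlib.sha1(chunk).digest()  # collision risk negligible here
        total += 1
        if digest not in seen:
            seen.add(digest)
            unique += 1
    return 1.0 - unique / total if total else 0.0
```

Aligning the chunk size with the page size works well because the OS allocates, zeroes, and shares memory at page granularity, so duplicate content tends to fall on page boundaries.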
ISBN: 9781467363822 (Print)
In this paper, we present an FPGA implementation of a novel adaptive and predictive algorithm for lossy hyperspectral image compression. This algorithm was specifically designed for on-board compression, where FPGAs are the most attractive and popular option, featuring low power and high performance. However, the traditional RTL design flow is rather time-consuming. High-level synthesis (HLS) tools, like the well-known CatapultC, can help to shorten these times. Using CatapultC, we obtain an FPGA implementation of the lossy compression algorithm directly from source code written in C, with a double motivation: demonstrating how well the lossy compression algorithm performs on an FPGA in terms of throughput and area, and at the same time showing how HLS is applied, in terms of source code preparation and CatapultC settings, to obtain an efficient hardware implementation in a relatively short time. The place-and-route results on a Virtex-5 5VFX130 show effective performance in terms of area (maximum device utilization of 14%) and frequency (80 MHz). A comparison with a previous FPGA implementation of a lossless-to-near-lossless algorithm is also provided; results on a Virtex-4 4VLX200 show lower memory requirements and higher frequency for the LCE algorithm.
The present paper concerns the basic algorithm for processing video sequences, using P-type predicted frames as an example. The article shows the major disadvantages of this method and proposes an alternative algorithm using polyadic encoding, which increases the compression ratio while taking into account requirements on processing time, bit rate, and quality of video data recovery.
Genomics data is being produced at an unprecedented rate, especially in the context of clinical applications and grand-challenge questions. There are various types of data in genomics research, most of which are stored as plain-text tables. A data compression framework tailored to this file type is introduced in this paper, featuring a combination of generic compression algorithms, GPU acceleration, and column-major storage. This approach is the first to achieve both compression and decompression rates of around 100 MB/s on commodity hardware without compromising the compression ratio. By selecting appropriate compression schemes for each column of data, the framework efficiently exploits data redundancy while remaining applicable to a wide range of formats. The GPU-accelerated implementation also properly exploits the parallelism of the compression algorithms. Finally, this paper presents a novel first-order Markov-model-based transformation, with evidence that it is at least as effective as Burrows-Wheeler and Move-To-Front in some contexts.
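For reference, the Move-To-Front transform mentioned as a baseline can be sketched in a few lines: recently seen symbols receive small indices, which skews the output distribution of repetitive column data toward values an entropy coder compresses well.

```python
def mtf_encode(data: bytes) -> list[int]:
    """Move-To-Front transform (one of the baselines the paper's
    Markov-model transformation is compared against): each byte is
    replaced by its position in a recency list, then moved to the
    front, so repeated values emit long runs of small indices.
    """
    alphabet = list(range(256))
    out = []
    for b in data:
        i = alphabet.index(b)
        out.append(i)
        alphabet.pop(i)
        alphabet.insert(0, b)
    return out

def mtf_decode(codes: list[int]) -> bytes:
    alphabet = list(range(256))
    out = bytearray()
    for i in codes:
        b = alphabet.pop(i)
        out.append(b)
        alphabet.insert(0, b)
    return bytes(out)

column = b"AAAAACCCAAAG"               # a toy genomics table column
assert mtf_decode(mtf_encode(column)) == column
```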