[Auto Generated] Contents: 1 Introduction; 2 The reversible transformation; 3 Why the transformed string compresses well; 4 An efficient implementation; 4.1 Compression; 4.2 Decompression; 5 Algorithm variants; 6 Performance of implementation; 7 Conclusions.
The most widely used data compression algorithms are based on the sequential data compressors of Lempel and Ziv [1, 2]. Statistical modelling techniques may produce superior...
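The reversible transformation in question appears to be the block-sorting (Burrows-Wheeler) transform. As a minimal illustration of the forward transform, a naive Python sketch (real implementations sort via suffix arrays rather than materializing every rotation):

```python
def bwt_forward(s: str, sentinel: str = "\x00") -> str:
    """Naive Burrows-Wheeler transform: sort all rotations of s
    (terminated by a unique sentinel) and keep the last column."""
    s += sentinel                      # unique end marker makes the transform invertible
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(bwt_forward("banana"))  # equal characters cluster together: 'annb\x00aa'
```

The clustering of equal characters in the output is what makes the transformed string compress well under a simple move-to-front plus entropy coder.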
ISBN (print): 9781479919819
In order to improve the compression ratio, most improved compression methods require more memory and CPU time, so these improvements are not suitable for embedded systems with limited resources, especially for vehicle communication systems. To communicate in real time at lower cost, the large volumes of short data are compressed before transmission. This paper discusses several original lossless compression methods, including their principles, features, advantages, and disadvantages. Experiments on the front vehicle system show that LZ77, PPM, and BWT are more suitable for automobile communication data, offering better compression ratios, simpler coding, and lower CPU and memory costs.
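As a point of reference for the dictionary-based family the paper compares, here is a toy LZ77 encoder in Python; the tiny window and triple format are illustrative choices, not the paper's implementation:

```python
def lz77_encode(data: bytes, window: int = 255, max_len: int = 15):
    """Toy LZ77: emit (offset, length, next_byte) triples.
    offset=0/length=0 means a literal. A small fixed window keeps
    memory use low, the property that matters on embedded targets."""
    i, out = 0, []
    while i < len(data):
        best_off = best_len = 0
        for j in range(max(0, i - window), i):    # scan the sliding window
            k = 0
            while k < max_len and i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_off, best_len = i - j, k
        nxt = data[i + best_len] if i + best_len < len(data) else 0
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out

print(lz77_encode(b"aaaa"))  # [(0, 0, 97), (1, 3, 0)]: a literal, then a run match
```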
Background: The exponential growth of next-generation sequencing (NGS) derived DNA data poses great challenges to data storage and transmission. Although many compression algorithms have been proposed for the DNA reads in NGS data, few methods are designed specifically to handle the quality scores. Results: In this paper we present a memetic algorithm (MA) based NGS quality score data compressor, namely MMQSC. The algorithm extracts raw quality score sequences from FASTQ formatted files and designs a compression codebook using MA based multimodal optimization. The input data are then compressed in a substitutional manner. Experimental results on five representative NGS data sets show that MMQSC achieves higher compression ratios than other state-of-the-art methods. In particular, although MMQSC is a lossless, reference-free compression algorithm, it obtains an average compression ratio of 22.82% on the experimental data sets. Conclusions: The proposed MMQSC compresses NGS quality score data effectively. It can be utilized to improve the overall compression ratio on FASTQ formatted files.
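MMQSC's substitutional coding replaces recurring quality-score substrings with codebook indices; the codebook itself is what the memetic algorithm optimizes. A hypothetical sketch of the substitution stage only, in Python (the MA search is omitted; the codebook and token format here are illustrative assumptions):

```python
def substitute_encode(qualities: str, codebook: list[str]):
    """Greedy longest-match substitution of quality-score runs
    against a fixed codebook; unmatched characters pass through.
    The real MMQSC codebook comes from memetic optimization."""
    by_len = sorted(codebook, key=len, reverse=True)
    i, tokens = 0, []
    while i < len(qualities):
        for word in by_len:
            if qualities.startswith(word, i):
                tokens.append(("ref", codebook.index(word)))
                i += len(word)
                break
        else:
            tokens.append(("lit", qualities[i]))
            i += 1
    return tokens

# e.g. Phred+33 scores: repeated high-quality runs collapse to single indices
print(substitute_encode("IIIIIHHHII", ["IIIII", "HHH", "II"]))
```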
ISBN (print): 9781479987511
In this work, we propose a half-size frequent pattern compression algorithm (HSFPC) for the 3D DRAM system. In the 3D DRAM system, access data compressed with HSFPC achieves a better compression rate than with the traditional compression method, frequent pattern compression (FPC). In our experiments, the 3D DRAM system using HSFPC reduces the peak temperature by 0.5 °C to 4.5 °C compared with the one using FPC.
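Baseline FPC classifies each 32-bit word against a small set of frequent patterns and stores only a short prefix plus the surviving payload bits. A Python sketch of that baseline (the pattern set follows published FPC in spirit; HSFPC's half-size refinement is not reproduced here):

```python
def fpc_encode_word(w: int) -> tuple[int, int, int]:
    """Classify one 32-bit word into a frequent pattern.
    Returns (prefix, payload, payload_bits)."""
    w &= 0xFFFFFFFF
    signed = w - (1 << 32) if w & 0x80000000 else w
    if w == 0:
        return (0b000, 0, 0)                 # all-zero word
    if -128 <= signed <= 127:
        return (0b001, w & 0xFF, 8)          # sign-extended 8-bit value
    if -32768 <= signed <= 32767:
        return (0b010, w & 0xFFFF, 16)       # sign-extended 16-bit value
    if (w >> 16) == (w & 0xFFFF):
        return (0b011, w & 0xFFFF, 16)       # repeated halfwords
    return (0b111, w, 32)                    # uncompressible word

print(fpc_encode_word(0xFFFFFF85))  # (0b001, 0x85, 8): small negative integer
```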
ISBN (print): 9781479971039
This paper offers a data-preprocessing method based on a segmentation procedure that increases compression efficiency. The basic principle of segmenting the input data sequence is described, and a data compression algorithm is proposed. The proposed method is shown to be effective both as a compression procedure in its own right and as a preprocessing step before some other compression algorithm.
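The abstract does not state the segmentation criterion, so the following Python sketch is only one plausible reading: split the input wherever local byte statistics shift, then compress the segments independently. The parameters `block` and `threshold` are assumptions:

```python
import zlib

def segment_by_alphabet(data: bytes, block: int = 256, threshold: float = 0.5):
    """Illustrative segmentation: start a new segment when the set of
    byte values in the current block differs strongly from the previous
    block. Homogeneous segments tend to compress better individually."""
    segments, start, prev = [], 0, None
    for off in range(0, len(data), block):
        cur = set(data[off:off + block])
        if prev is not None and len(cur & prev) < threshold * len(cur | prev):
            segments.append(data[start:off])
            start = off
        prev = cur
    segments.append(data[start:])
    return segments

data = bytes(range(32)) * 64 + b"A" * 2048   # two statistically distinct halves
plain = len(zlib.compress(data))
split = sum(len(zlib.compress(s)) for s in segment_by_alphabet(data))
print(plain, split)  # compare whole-stream vs per-segment compressed sizes
```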
ISBN (print): 9781509012787
A cost-effective, easy-to-manage, elastic, and powerful pool of resources over the internet is provided to users by the cloud environment. Resource pooling allows multiple users to share the same pool through multi-tenancy and virtualization technology. VM migration is the process of relocating a VM to another physical machine without shutting down the VM. Migration is carried out for a number of reasons, such as load balancing, fault tolerance, and maintenance, with respect to pooling and memory space management. A memory space prediction method is used to predict free memory space and allocate memory to new processes or requesting users. Memory allocation then becomes a selection function that chooses appropriate memory space according to the user's required capacity. The free memory spaces are listed in a free list, which simplifies allocation. During the migration phase, the contents of the VM are exposed to the network, which can raise data privacy and integrity concerns. Besides the data, the space used by the VM for storage and communication is also vulnerable to attackers during migration. To reduce data loss due to attacks on memory space and to increase the confidentiality of the migrated data, we design and implement a new scheme for virtual machine migration that uses memory space prediction together with a compression method for storage and a cryptographic methodology. The prediction-based compression algorithm transfers data in compressed form, which also decreases migration time and downtime, and the hash algorithm increases data confidentiality and security.
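A minimal sketch of the compress-then-hash transfer path the abstract describes, assuming zlib for compression and SHA-256 for the hash (the paper does not name its exact algorithms):

```python
import hashlib, zlib

def pack_page(page: bytes) -> tuple[bytes, str]:
    """Compress a memory page for migration and attach an integrity
    digest. zlib/SHA-256 are stand-ins for the paper's unnamed choices."""
    blob = zlib.compress(page)
    return blob, hashlib.sha256(blob).hexdigest()

def unpack_page(blob: bytes, digest: str) -> bytes:
    """Verify the digest before decompressing; reject tampered pages."""
    if hashlib.sha256(blob).hexdigest() != digest:
        raise ValueError("page corrupted or tampered with in transit")
    return zlib.decompress(blob)

page = b"\x00" * 4096                  # a mostly-zero 4 KiB guest page
blob, tag = pack_page(page)
assert unpack_page(blob, tag) == page
print(len(page), "->", len(blob))      # zero pages compress dramatically
```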
Because local slant stacking increases the data dimension in beam migration, the volume of local slant stacks can be enormous and can obstruct efficient data processing. In addition, a proper beam compression algorithm can reduce the computation of ray tracing and beam mapping. Thus, compressing the local slant stacks with high fidelity can improve the efficiency of beam migration. A new approach is proposed to compress the local slant stacks efficiently. It combines structure-tensor-based estimation of multiple local slopes, to reduce the number of slopes, with a sparse representation of the slant-stacked data via matching pursuit decomposition, to reduce the number of temporal samples. Furthermore, a new algorithm for estimating multiple local slopes from the second-order structure tensor is proposed to handle intersecting events efficiently. Several data examples indicate that the new compression algorithm requires much less storage while still restoring the significant events and tolerating some random noise. The migration results show that this compression algorithm does not noticeably degrade the quality of the beam migration result; it even makes the result clearer by suppressing random noise smearing.
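Matching pursuit is the sparsifying step named in the abstract. A generic 1-D sketch with NumPy (the Gaussian dictionary is a placeholder; the paper's atoms and stopping rule are not reproduced):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy matching pursuit: repeatedly pick the dictionary atom
    (columns of `dictionary`, assumed unit-norm) most correlated with
    the residual, record its coefficient, and subtract it out. The few
    (index, coefficient) pairs are the compressed representation."""
    residual = signal.astype(float).copy()
    picks = []
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        picks.append((k, corr[k]))
        residual -= corr[k] * dictionary[:, k]
    return picks, residual

# toy dictionary: unit-norm Gaussian atoms at shifted centers
t = np.linspace(0, 1, 200)
atoms = np.stack([np.exp(-((t - c) / 0.02) ** 2) for c in np.linspace(0, 1, 50)], axis=1)
atoms /= np.linalg.norm(atoms, axis=0)
sig = 3 * atoms[:, 10] - 2 * atoms[:, 30]
picks, res = matching_pursuit(sig, atoms, n_atoms=4)
print(picks[:2], np.linalg.norm(res))  # first picks recover atoms 10 and 30
```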
ISBN (print): 9781467394611
Subsurface images are widely used by oil companies to find oil reservoirs. The construction of these images involves collecting and processing a huge amount of seismic data. Generally, oil companies use compression algorithms to reduce storage and transmission costs. Currently, the compression process runs on-site on CPU architectures, whereas the construction of the subsurface images runs on GPU clusters. For this reason, the decompression process has to run on GPU architectures, so fast, parallel decompression algorithms implemented on GPUs are required. We implemented an algorithm that performs the decompression of seismic traces on the GPU. The algorithm is based on a 2D lifting wavelet transform. The decompression algorithm was developed in CUDA 6.5 and run on a GeForce GTX 660 GPU. This algorithm was tested using different data sets supplied by an oil company. Experimental results allowed us to establish how the compression ratio affects the performance of our algorithm. Additionally, we also show how the number of threads per block affects this performance.
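The underlying transform is a lifting wavelet. A 1-D Haar lifting round trip in Python shows the predict/update structure that the paper's 2-D CUDA kernels parallelize (this sketch is sequential and illustrative only):

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the Haar wavelet via lifting: predict the odd
    samples from the even ones, then update the evens so the approx
    band preserves the running average."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict step
    approx = even + detail / 2     # update step
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Undo the lifting steps in reverse order to reconstruct exactly."""
    even = approx - detail / 2
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

trace = np.sin(np.linspace(0, 8 * np.pi, 64))          # toy seismic trace
a, d = haar_lift_forward(trace)
assert np.allclose(haar_lift_inverse(a, d), trace)     # lossless round trip
```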
This paper introduces an improved LZW algorithm for data compression. In view of the characteristics of Chinese text, the dictionary is expanded, and the first-level database of Chinese was a...
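The entry is truncated, but the LZW baseline it extends is standard. A minimal byte-oriented encoder sketch in Python (the paper's Chinese-specific dictionary pre-loading is not shown):

```python
def lzw_encode(data: bytes) -> list[int]:
    """Plain LZW over bytes: grow a dictionary of seen sequences and
    emit its indices. The paper pre-loads additional Chinese-text
    entries; here the dictionary starts with the 256 single bytes."""
    table = {bytes([b]): b for b in range(256)}
    current, out = b"", []
    for b in data:
        candidate = current + bytes([b])
        if candidate in table:
            current = candidate
        else:
            out.append(table[current])
            table[candidate] = len(table)   # new phrase gets the next code
            current = bytes([b])
    if current:
        out.append(table[current])
    return out

print(lzw_encode(b"ababab"))  # repeats collapse to dictionary codes: [97, 98, 256, 256]
```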
A basic tenet of wireless sensor networks is that processing data costs less power than transmitting it. A data compression method is proposed to limit the amount of data transmitted within the network. In this paper, we propose a novel data compression algorithm suitable for low-power computing devices. In our method, a data point density algorithm determines which points to discard in a given data region. This algorithm is applied to uniform sections throughout the entire data set. Regions with the highest data point density are represented by a single point; the resulting data points then form the compressed data set. The transmission and subsequent processing of this compressed data set put less strain on the network than the original data set, while still maintaining the required information of the original. A tool is developed to test the method and compare it with other methods.
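A hedged sketch of the density idea in Python: grid the samples, and collapse each sufficiently dense cell to a single representative point. The grid size, count threshold, and centroid rule are assumptions, not the paper's exact procedure:

```python
import numpy as np

def density_compress(points, cell=1.0, min_count=3):
    """Bin 2-D samples into uniform grid cells; any cell holding at
    least `min_count` points is replaced by its centroid, while sparse
    cells keep their raw points."""
    cells = {}
    for p in points:
        cells.setdefault((int(p[0] // cell), int(p[1] // cell)), []).append(p)
    out = []
    for members in cells.values():
        if len(members) >= min_count:
            out.append(np.mean(members, axis=0))   # dense region -> one point
        else:
            out.extend(members)                    # sparse region kept verbatim
    return np.array(out)

rng = np.random.default_rng(0)
cluster = rng.normal([5, 5], 0.2, size=(50, 2))    # one dense sensor cluster
stragglers = rng.uniform(0, 10, size=(5, 2))
reduced = density_compress(np.vstack([cluster, stragglers]))
print(len(reduced), "points instead of 55")
```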