ISBN (print): 9781424458790
We have applied the generalized and universal distance measure NCD (Normalized Compression Distance) to the problem of determining the types of file fragments by example. A corpus of files that can be redistributed to other researchers in the field was developed, and the NCD algorithm with k-nearest-neighbor as the classification algorithm was applied to a random selection of file fragments. The experiment covered circa 2000 fragments from 17 different file types. While the overall accuracy of the n-valued classification only improved on the prior probability of the class, from approximately 6% to circa 50% overall, the classifier reached accuracies of 85%-100% for the most successful file types.
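The NCD-plus-nearest-neighbor pipeline above can be sketched in a few lines. This is a minimal illustration only: zlib is assumed as the compressor, 1-NN stands in for the paper's k-NN, and the two-class toy corpus is invented for the example.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance, here using zlib as the compressor C(.)."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify_1nn(fragment: bytes, labeled: list) -> str:
    """Label a fragment by its nearest labeled example under NCD
    (the paper uses k-NN; k = 1 keeps the sketch short)."""
    return min(labeled, key=lambda ex: ncd(fragment, ex[0]))[1]

# Invented two-class toy corpus: text-like vs. binary-like data.
corpus = [
    (b"the quick brown fox jumps over the lazy dog " * 20, "text"),
    (bytes(range(256)) * 4, "binary"),
]
print(classify_1nn(b"the quick brown fox " * 20, corpus))  # -> text
```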
ISBN (print): 9781424458653
The large amount of image data from a captured 3D integral image must be represented at adequate resolution. It is therefore necessary to develop compression algorithms that take advantage of the characteristics of the recorded integral image. In this paper we propose a new compression method adapted to integral imaging. Owing to the optical characteristics of integral imaging, most of the information in each elemental image overlaps with that of its adjacent elemental images. The method therefore achieves compression by retaining one sample from every m elemental images in the sequence. Experimental results illustrating the proposed technique show that it improves the compression ratio of integral imaging.
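The sampling step described above — keeping one elemental image out of every m — can be sketched as follows. The 1D tiling, the `ei_size` parameter, and the test frame are simplifying assumptions (real integral images tile elemental images in 2D):

```python
import numpy as np

def subsample_elemental_images(frame: np.ndarray, ei_size: int, m: int) -> np.ndarray:
    """Keep every m-th elemental image along the horizontal axis."""
    n_ei = frame.shape[1] // ei_size
    kept = [frame[:, i * ei_size:(i + 1) * ei_size] for i in range(0, n_ei, m)]
    return np.concatenate(kept, axis=1)

frame = np.zeros((8, 40), dtype=np.uint8)  # 5 elemental images of width 8
reduced = subsample_elemental_images(frame, ei_size=8, m=2)
print(reduced.shape)  # (8, 24): elemental images 0, 2, 4 are kept
```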
ISBN (print): 9781424464258
Motivated by the Markov chain Monte Carlo (MCMC) relaxation method of Jalali and Weissman, we propose a lossy compression algorithm for continuous amplitude sources that relies on a finite reproduction alphabet that grows with the input length. Our algorithm asymptotically achieves the optimum rate-distortion (RD) function universally for stationary ergodic continuous amplitude sources. However, the large alphabet slows down convergence to the RD function and is thus an impediment in practice. We therefore propose an MCMC-based algorithm that uses a (smaller) adaptive reproduction alphabet. In addition to its computational advantages, the reduced alphabet accelerates convergence to the RD function and is thus more suitable in practice.
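A toy version of the fixed-alphabet MCMC idea (not the paper's adaptive-alphabet refinement) can be sketched as a Metropolis search over reproduction sequences, scoring each candidate by a zeroth-order empirical-entropy estimate of the rate plus a Lagrangian distortion term. The alphabet, λ, and annealing schedule below are all invented for illustration:

```python
import math
import random
from collections import Counter

def energy(xhat, x, lam):
    """Rate (zeroth-order empirical entropy, in bits) + lam * squared error."""
    n = len(xhat)
    counts = Counter(xhat)
    h = -sum(c / n * math.log2(c / n) for c in counts.values())
    return n * h + lam * sum((a - b) ** 2 for a, b in zip(x, xhat))

def mcmc_quantize(x, alphabet, lam, sweeps=30, seed=0):
    """Metropolis sampler with a crude annealing schedule (beta = sweep index)."""
    rng = random.Random(seed)
    xhat = [min(alphabet, key=lambda a: abs(a - xi)) for xi in x]  # init: nearest symbol
    e = energy(xhat, x, lam)
    for t in range(1, sweeps + 1):
        beta = t  # lower the temperature each sweep
        for i in range(len(xhat)):
            old = xhat[i]
            xhat[i] = rng.choice(alphabet)
            e_new = energy(xhat, x, lam)
            if e_new > e and rng.random() > math.exp(beta * (e - e_new)):
                xhat[i] = old          # reject the uphill move
            else:
                e = e_new              # accept
    return xhat

xhat = mcmc_quantize([0.1, 0.9, 1.05, -0.02], alphabet=[0, 1], lam=10.0)
print(xhat)
```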
ISBN (print): 9781424467600
In data compression or source coding algorithms, input sequences of symbols are converted to shorter sequences while the original information remains unchanged. One of the best-known data compression algorithms is Deflate, which is based on the LZ method. Deflate has three modes, of which the second is applicable to real-time applications; in this mode, a fixed static table of Huffman codes is employed during coding. In this paper, a new version of the Deflate algorithm is proposed and implemented in hardware. In the proposed method, a new basic coding table is employed and modified adaptively based on the input sequence. Simulation results show that this adaptive algorithm improves coding performance. In the hardware implementation of the new method, parallelism is exploited to improve hardware utilization and throughput.
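The core of replacing the static table of Deflate's second mode with an input-driven one is rebuilding a Huffman code from observed symbol frequencies. The sketch below derives code lengths from a frequency table; the rebuild trigger and the paper's exact update rule are not specified in the abstract, so this is only the generic building block:

```python
import heapq
from collections import Counter

def huffman_lengths(freqs: dict) -> dict:
    """Return symbol -> code length for a frequency table,
    via the standard Huffman merge on a min-heap."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**t1, **t2}.items()}  # one level deeper
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

lengths = huffman_lengths(Counter("aaaabbc"))
print(lengths)  # a gets the shortest code: {'a': 1, 'b': 2, 'c': 2}
```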
This paper focuses on improvement of compression of XML documents based on clustering and rearranging of XML elements within XML documents. Such transformed XML documents can be efficiently compressed.
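One simple reading of "clustering and rearranging XML elements" is grouping the character content of same-named elements so that similar values sit next to each other before a general-purpose compressor runs. The sketch below is a one-way illustration (it drops the structure needed to invert the transform), and the regex-based parsing is a simplification:

```python
import re
import zlib

def rearrange_by_element(xml_text: str) -> str:
    """Cluster the text content of same-named elements together."""
    groups = {}
    for tag, text in re.findall(r"<(\w+)>([^<]*)</\1>", xml_text):
        groups.setdefault(tag, []).append(text)
    return "\n".join(f"{tag}:" + "|".join(vals)
                     for tag, vals in sorted(groups.items()))

doc = ("<r>" + "".join(f"<name>item{i}</name><price>9.99</price>"
                       for i in range(50)) + "</r>")
plain = len(zlib.compress(doc.encode()))
grouped = len(zlib.compress(rearrange_by_element(doc).encode()))
print(plain, grouped)  # grouped data often compresses better in practice
```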
ISBN (print): 9781424464258; 9780769539942
Medical imaging is crucial for early detection and diagnosis of illnesses. The increasing number of high-resolution scans performed every day requires efficient compression algorithms. A magnetic resonance image (MRI) consists of a series of many cuts or slices. Viewed as a 3D image, it contains a 3D figure surrounded by background, and this background is not clinically relevant.
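Since the background is not clinically relevant, one natural preprocessing step is to discard it before compression. The sketch below crops the volume to the bounding box of above-threshold voxels; the threshold and the test volume are invented, and the paper may handle background differently:

```python
import numpy as np

def crop_background(volume: np.ndarray, threshold: int = 10):
    """Keep the bounding box of voxels brighter than `threshold`
    (a hypothetical parameter); return the crop and its offsets."""
    coords = np.argwhere(volume > threshold)
    lo = coords.min(axis=0)
    hi = coords.max(axis=0) + 1
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]], (lo, hi)

vol = np.zeros((4, 16, 16), dtype=np.uint8)  # slices x rows x cols
vol[1:3, 4:10, 5:12] = 200                   # the "3D figure"
cropped, (lo, hi) = crop_background(vol)
print(cropped.shape)  # (2, 6, 7): only the figure's bounding box remains
```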
ISBN (print): 9781424455690
In this paper, a new variable block-size image compression scheme is presented. A quadtree segmentation is employed to generate blocks of variable size according to their visual activity. Inactive blocks are coded by the block mean, while active blocks are coded by the proposed matching algorithm using a set of parameters associated with the pattern appearing inside the block. Both the segmentation and the pattern matching are carried out through histogram analysis of block residuals. The use of the pattern parameters at the receiver, together with the quadtree code, reduces the reconstruction cost significantly and demonstrates the efficiency of the proposed technique.
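The quadtree segmentation can be sketched as a recursive split driven by an activity measure. Below, variance stands in for visual activity (the paper uses histogram analysis of block residuals) and every leaf is coded by its mean only; the pattern-matching step for active blocks is omitted:

```python
import numpy as np

def quadtree(block, x, y, thresh, min_size, leaves):
    """Split active (high-variance) blocks; record (x, y, h, w, mean) leaves."""
    h, w = block.shape
    if block.var() <= thresh or h <= min_size:
        leaves.append((x, y, h, w, float(block.mean())))
        return
    h2, w2 = h // 2, w // 2
    quadtree(block[:h2, :w2], x,      y,      thresh, min_size, leaves)
    quadtree(block[:h2, w2:], x + w2, y,      thresh, min_size, leaves)
    quadtree(block[h2:, :w2], x,      y + h2, thresh, min_size, leaves)
    quadtree(block[h2:, w2:], x + w2, y + h2, thresh, min_size, leaves)

img = np.zeros((8, 8))
img[:4, :4] = 255.0                 # one bright, otherwise uniform quadrant
leaves = []
quadtree(img, 0, 0, thresh=1.0, min_size=2, leaves=leaves)
print(len(leaves))  # 4: the root splits once, each quadrant is uniform
```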
Lidar raw echo data has characteristics such as huge data volume, strong discreteness, and unpredictability. In the construction of a Lidar atmospheric-environment monitoring network, the existing network cannot provide enough bandwidth to transmit Lidar data in real time. In this paper, we propose a novel hybrid lossless compression algorithm to reduce the transmission volume: a probability-statistics lossless compression algorithm based on an improved LZW (Lempel-Ziv-Welch) combined with Huffman coding. Experiments on raw two-value atmospheric data verify the effectiveness of our approach: the compression ratio is close to 9.5:1 and the coding efficiency reaches 98%.
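The paper's method improves LZW and adds Huffman coding on top; the baseline LZW dictionary coder it starts from looks like this (the improvement and the Huffman stage are not reproduced here):

```python
def lzw_encode(data: bytes) -> list:
    """Plain LZW: emit dictionary indices, growing the dictionary
    with each new phrase (initial dictionary = all single bytes)."""
    dictionary = {bytes([i]): i for i in range(256)}
    w = b""
    codes = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                       # extend the current phrase
        else:
            codes.append(dictionary[w])  # emit the longest known phrase
            dictionary[wc] = len(dictionary)
            w = bytes([byte])
    if w:
        codes.append(dictionary[w])
    return codes

print(lzw_encode(b"ABABABA"))  # [65, 66, 256, 258]
```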
This paper presents the power-performance trade-off of three different cache compression algorithms. Cache compression improves performance, since compressed data increases the effective cache capacity and thereby reduces cache misses. The unused memory cells can be put into sleep mode to save static power. The performance gained and power saved by cache compression must outweigh the delay and power consumption added by the CODEC (COmpressor and DECompressor) block. Among the studied algorithms, the power-delay characteristic of Frequent Pattern Compression (FPC) is found to be the most suitable for cache compression.
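Frequent Pattern Compression classifies each 32-bit word by a small set of common patterns and stores only a short tag plus the few significant bits. A reduced three-pattern version (real FPC defines seven patterns with 3-bit prefixes) can be sketched as:

```python
def to_signed(word: int) -> int:
    """Interpret a 32-bit word as a signed integer."""
    return word - (1 << 32) if word >= (1 << 31) else word

def fpc_tag(word: int) -> str:
    """Classify a 32-bit word into a (simplified) frequent pattern."""
    if word == 0:
        return "zero"                    # tag only, no data bits
    if -128 <= to_signed(word) < 128:
        return "sign-ext-8"              # tag + one byte
    if len(set(word.to_bytes(4, "little"))) == 1:
        return "repeated-byte"           # tag + one byte
    return "uncompressed"                # tag + all four bytes

words = (0, 5, 0xFFFFFFFF, 0x12345678)
print([fpc_tag(w) for w in words])
# ['zero', 'sign-ext-8', 'sign-ext-8', 'uncompressed']
```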
This paper presents a theory of lossless digital compression. For voice communication the quality of the voice signal is not critical, but for listening to music high quality is always required; emphasis is therefore placed on the quality of the speech signal. Storing more music requires consuming less memory per recording. In the proposed compression, an 8-bit PCM speech signal is compressed: when sample values vary, they are kept unchanged; when they do not vary, the number of consecutive samples with the same value is stored. After compression the signal is still 8-bit PCM. MPEG-4 ALS is then applied to this compressed PCM signal for further compression.
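The scheme described — keep varying samples as-is, replace constant runs by a count — is run-length encoding. A minimal sketch on a list of 8-bit sample values:

```python
def rle_pcm(samples: list) -> list:
    """Run-length code a PCM stream as (value, run_length) pairs.
    Varying samples become runs of length 1, matching the
    'keep them the same' case in the description above."""
    runs = []
    for s in samples:
        if runs and runs[-1][0] == s:
            runs[-1] = (s, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((s, 1))               # start a new run
    return runs

print(rle_pcm([10, 10, 10, 42, 7, 7]))  # [(10, 3), (42, 1), (7, 2)]
```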