An analysis of pattern substrings is carried out to find high redundancy in binary shapes and thereby improve compression levels for binary objects. Modifications of a recently proposed set of symbols for encoding arbitrary contour shapes are proposed. The concept of Pieces of Discrete Straight Lines is introduced, and the probability of appearance of symbols in contours is analyzed to propose a nine-symbol code, MDF9. This code is also compared with six well-known contour codes for compression without loss of information: FCCE, FCCF, VCC, 3OT, DFCCE and C_VCC. The MDF9 code proposed in this paper gives better compression efficiency than the existing codes. (C) 2010 Elsevier Inc. All rights reserved.
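As an illustrative aside (not from the abstract above), the contour codes being compared all descend from Freeman chain coding, in which each step along an 8-connected contour is encoded as one of eight direction symbols. A minimal sketch, with a hypothetical square contour:

```python
# Illustrative Freeman chain coding (8-connected); the contour is hypothetical.
# Direction symbols: 0 = east, numbered counter-clockwise.
OFFSETS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
DIRECTION = {off: i for i, off in enumerate(OFFSETS)}

def freeman_chain(points):
    """Encode consecutive 8-connected contour points as Freeman symbols."""
    return [DIRECTION[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

# A small closed square contour (hypothetical example)
contour = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1), (0, 0)]
print(freeman_chain(contour))  # [0, 0, 2, 2, 4, 4, 6, 6]
```

Codes such as 3OT and MDF9 then re-encode this symbol stream to exploit the redundancy of long straight or repetitive runs.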
A framework with two scalar parameters is introduced for various problems of finding a prefix code minimizing a coding penalty function. The framework encompasses problems previously proposed by Huffman, Campbell, Nath, and Drmota and Szpankowski, shedding light on the relationships among these problems. In particular, Nath's range of problems can be seen as bridging the minimum average redundancy problem of Huffman with the minimum maximum pointwise redundancy problem of Drmota and Szpankowski. Using this framework, two linear-time Huffman-like algorithms are devised for the minimum maximum pointwise redundancy problem, the only one in the framework not previously solved with a Huffman-like algorithm. Both algorithms provide solutions common to this problem and a subrange of Nath's problems, the second algorithm being distinguished by its ability to find the minimum variance solution among all solutions common to the minimum maximum pointwise redundancy and Nath problems. Simple redundancy bounds are also presented.
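For readers unfamiliar with the baseline case, the minimum average redundancy problem is the one solved by the classic Huffman merge: repeatedly combine the two lightest subtrees, pushing their symbols one level deeper. A minimal sketch (illustrative only; the frequencies are hypothetical and this is not the paper's framework or either of its new algorithms):

```python
import heapq

def huffman_lengths(freqs):
    """Code lengths from symbol frequencies via the classic Huffman merge."""
    heap = [(weight, [sym]) for sym, weight in freqs.items()]
    heapq.heapify(heap)
    lengths = {sym: 0 for sym in freqs}
    while len(heap) > 1:
        w1, syms1 = heapq.heappop(heap)   # two lightest subtrees...
        w2, syms2 = heapq.heappop(heap)
        for sym in syms1 + syms2:         # ...sink one level deeper
            lengths[sym] += 1
        heapq.heappush(heap, (w1 + w2, syms1 + syms2))
    return lengths

print(huffman_lengths({'a': 5, 'b': 2, 'c': 1, 'd': 1}))  # {'a': 1, 'b': 2, 'c': 3, 'd': 3}
```

The resulting lengths minimize the weighted average; the pointwise-redundancy variants in the framework replace that average with other penalty functions over the same lengths.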
We present a study of compression efficiency for binary objects, or bi-level images, for different chain-code schemes. Chain-code techniques are used for compression of bi-level images because they preserve information and allow a considerable data reduction. Furthermore, chain codes are the standard input format for numerous shape-analysis algorithms. In this work we apply chain codes to represent objects with holes, and we compare the compression efficiency of seven chain codes. We have also compared all these chain codes with the JBIG standard for bi-level images. (c) 2006 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
This paper introduces a new approach to prefix-code translation. It completes the whole translation with a single mapping (only one comparison instruction is needed to obtain the length of a prefix code), returning both the original data and the length of the prefix-code element. The decoding time is only about four times that of accessing the original data directly.
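The paper's exact mapping is not given in the abstract, but single-lookup prefix decoding is commonly done by pre-expanding each codeword into all bit patterns of a fixed table width, so that one table access yields both the symbol and the code length. A hedged sketch under that assumption, with a hypothetical code table:

```python
# Sketch of table-based prefix decoding (a common technique, assumed here;
# not necessarily the paper's scheme). The code table is a hypothetical example.
TABLE_BITS = 3
code_table = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}  # prefix-free

# Expand every codeword into all TABLE_BITS-wide patterns it prefixes.
lookup = {}
for sym, code in code_table.items():
    pad = TABLE_BITS - len(code)
    for i in range(1 << pad):
        lookup[int(code, 2) << pad | i] = (sym, len(code))

def decode(bits):
    """Decode a bitstring; one table access per symbol."""
    out, pos = [], 0
    while pos < len(bits):
        chunk = bits[pos:pos + TABLE_BITS].ljust(TABLE_BITS, '0')
        sym, length = lookup[int(chunk, 2)]
        out.append(sym)
        pos += length          # advance by the true codeword length
    return ''.join(out)

print(decode('010110111'))  # 'abcd'
```

The table costs 2^TABLE_BITS entries (TABLE_BITS must be at least the longest codeword), trading memory for the single lookup per symbol.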
The Huffman algorithm allows for constructing optimal prefix codes with O(n log n) complexity. As the number of symbols n grows, so does the complexity of building the codewords. In this paper, a new algorithm and implementation are proposed that achieve nearly optimal coding without sorting the probabilities or building a tree of codes. The complexity is proportional to the maximum code length, making the algorithm especially attractive for large alphabets. The focus is on achieving almost optimal coding with a fast implementation, suitable for real-time compression of large volumes of data. A practical case example about checkpoint-file compression is presented, providing encouraging results. Copyright (c) 2015 John Wiley & Sons, Ltd.
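The abstract does not give the algorithm itself, but a well-known way to assign codewords without sorting probabilities or building a tree, with work proportional to the maximum code length plus the alphabet size, is canonical-code assignment from known code lengths (as in DEFLATE, RFC 1951). A sketch with hypothetical lengths, offered only as related context:

```python
# Canonical prefix-code assignment from code lengths (RFC 1951 style);
# illustrative context only, not the paper's algorithm. Lengths are hypothetical.
def canonical_codes(lengths):
    """Assign canonical codewords (bitstrings) given symbol -> code length."""
    max_len = max(lengths.values())
    bl_count = [0] * (max_len + 1)        # number of codes of each length
    for l in lengths.values():
        bl_count[l] += 1
    next_code = [0] * (max_len + 1)       # smallest code value per length
    code = 0
    for l in range(1, max_len + 1):
        code = (code + bl_count[l - 1]) << 1
        next_code[l] = code
    codes = {}
    for sym in sorted(lengths, key=lambda s: (lengths[s], s)):
        l = lengths[sym]
        codes[sym] = format(next_code[l], f'0{l}b')
        next_code[l] += 1
    return codes

print(canonical_codes({'a': 1, 'b': 2, 'c': 3, 'd': 3}))
# {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```

Because the codebook is fully determined by the lengths, only the lengths need to be stored or transmitted, which is what makes length-based schemes attractive for large alphabets.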
A study of compression efficiency of 3-D chain codes to represent discrete curves is described. The 3-D Freeman chain code and the five orthogonal change chain directions (5OT) chain code are compared. The 3-D Freeman chain code consists of 26 directions, in 3-D Euclidean space, with no invariance under rotation. The 5OT chain elements represent the orthogonal direction changes of the contiguous straight-line segments of the discrete curve. This chain code only considers relative direction changes, which allows us to have a curve descriptor invariant under rotation, and mirrored curves may be obtained with ease. In the 2-D domain, Freeman chain codes are widely used to represent contour curves. Until now, the authors have been unaware of any implementation of Freeman chain codes to compress 3-D curves. Our contribution is how to implement the Freeman chain code in 3-D and how to compare it with the recently proposed 5OT code. Finally, to test our results, we apply the proposed method to three different cases: arbitrary curves, cube-filling Hilbert curves, and lattice knots. (C) 2008 Society of Photo-Optical Instrumentation Engineers.
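To illustrate the rotation-invariance idea in the simpler 2-D setting (this is not the 5OT code itself): differencing an 8-direction Freeman chain modulo 8 keeps only relative direction changes, so rotating the curve by a multiple of 45 degrees leaves the descriptor unchanged. A minimal sketch with a hypothetical contour:

```python
# 2-D analogue of relative-change coding; hypothetical data, not the 5OT code.
def difference_chain(chain):
    """Relative direction changes of an 8-direction Freeman chain (mod 8)."""
    return [(b - a) % 8 for a, b in zip(chain, chain[1:])]

square = [0, 0, 2, 2, 4, 4, 6, 6]        # Freeman chain of a square contour
rotated = [(d + 2) % 8 for d in square]  # same shape rotated 90 degrees
print(difference_chain(square))                               # [0, 2, 0, 2, 0, 2, 0]
print(difference_chain(square) == difference_chain(rotated))  # True
```

The 5OT code applies the same principle in 3-D, encoding orthogonal changes between consecutive straight-line segments rather than absolute directions.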
The conflict between ever-increasing volumes of microscan imager logging data and limited cable transmission bandwidth is intensifying day by day. In this paper, an improved lossless data compression algorithm is proposed. Specifically, according to the characteristics of micro-resistivity imaging logging data, it is proved that hexadecimal character encoding has better compressibility than decimal character encoding, and it is shown that the traditional quadtree Huffman algorithm is not fully applicable to microscan imager logging data. Lastly, an improved quadtree Huffman algorithm is employed for logging-data compression so as to enhance the data compression ratio. Experimental comparisons show that, compared with the conventional quadtree algorithm and compressed Huffman encoding, both elapsed time and compression ratio are greatly improved.
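For context (not the paper's improved variant), quadtree coding recursively splits an image into quadrants and collapses uniform blocks into single leaves, whose values can then be entropy-coded, e.g. with Huffman coding. A minimal sketch on a hypothetical 4x4 binary image:

```python
# Plain quadtree decomposition of a binary image; hypothetical data,
# illustrating only the structure underlying quadtree Huffman coding.
def quadtree(img, x, y, size):
    """Return a leaf value for a uniform block, else four sub-quadrants."""
    block = [img[y + j][x + i] for j in range(size) for i in range(size)]
    if all(v == block[0] for v in block):
        return block[0]                      # leaf: uniform block
    h = size // 2                            # split NW, NE, SW, SE
    return [quadtree(img, x, y, h), quadtree(img, x + h, y, h),
            quadtree(img, x, y + h, h), quadtree(img, x + h, y + h, h)]

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 0, 1],
       [0, 0, 1, 0]]
print(quadtree(img, 0, 0, 4))  # [0, 1, 0, [0, 1, 1, 0]]
```

Large uniform regions collapse to single symbols, which is what makes the subsequent Huffman stage effective on image-like data.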
Data compression via the Huffman algorithm is the most efficient among single-symbol data compression techniques. This algorithm is counted among statistical data compression techniques. Many efforts have been made to optimize this technique, and several algorithms have been presented. One of these is the ACW algorithm, presented by Hussein Al-Bahadili et al. They first applied the Huffman algorithm and then used its results as the input of their algorithm to calculate the optimum character wordlength; thus, they reduced the number of stored characters by increasing the character wordlength used for compression. In this paper, we examine and criticize the ACW algorithm and present some of its weaknesses via suitable counterexamples. It is concluded that the optimum character wordlength can never be calculated by extant techniques. (C) 2012 Elsevier Ltd. All rights reserved.
In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using a wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) with a higher compression ratio compared to typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
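The lossless-by-residual idea can be shown with a toy numeric sketch (hypothetical values; simple rounding stands in for the wavelet-fractal stage): the lossy reconstruction plus the exactly-coded residual recovers the original bit for bit, hence the infinite PSNR:

```python
# Toy residual coding; values are hypothetical, the lossy stage is a stand-in.
original = [12, 15, 14, 200, 198, 17]
lossy    = [13, 13, 13, 199, 199, 16]   # stand-in for the decompressed image

residual = [o - l for o, l in zip(original, lossy)]   # Huffman-coded in the paper
restored = [l + r for l, r in zip(lossy, residual)]   # decoder side

print(residual)               # [-1, 2, 1, 1, -1, 1]
print(restored == original)   # True: exact recovery, infinite PSNR
```

The scheme pays off when the residual has a much narrower value distribution than the original, so its entropy code is short.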
Cloud computing is a technology that holds great promise and has the potential to revolutionize the healthcare sector. The cloud's centralization of data raises many security and privacy issues for both patients and healthcare professionals. There is a need to maintain secrecy when exchanging medical data between sender and receiver, which can be achieved through cryptography. This article presents a cryptographic algorithm (encryption and decryption) for secure communication of confidential digital healthcare data using DNA cryptography and Huffman coding. An interesting property is that the cipher size obtained from our algorithm equals the size of the cipher obtained from the character set of the given data. A security analysis is provided to show the security of data when stored and transmitted to the cloud. The cryptographic requirements, key-space analysis, key and plain-text sensitivity, sensitivity score analysis, sensitivity and specificity, optimal threshold, randomness analysis, uniqueness of implementation, entropies of binary bits, DNA bases, DNA bases with Huffman code, Huffman-encoded binary bits, and cloud service provider's risk are analyzed. The proposed method is compared with other cryptographic methods, and the results show that it is more secure and stronger than the other methods.