ISBN (print): 0819441899
In this work, we extend arithmetic coding and present a data encryption scheme that achieves data compression and data security at the same time. The scheme is based on chaotic dynamics and exploits the fact that the decoding process of an arithmetic coding scheme can be viewed as repeated application of the Bernoulli shift map. Data encryption is achieved by controlling the piecewise linear maps with a secret key, using three approaches: (i) a perturbation method, (ii) a switching method, and (iii) a source extension method. Experimental results show that the arithmetic codes obtained for a message under different keys are randomly distributed over the mapping domain [0,1) without seriously deteriorating the compression ratio, and that the transition of the orbits in the domain [0,1) resembles chaotic dynamics.
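To make the decoding-as-Bernoulli-shift view concrete, here is a minimal Python sketch of the perturbation approach (i) for a binary memoryless source. The SHA-256-based keystream, the symbol probability p, and the perturbation width eps are illustrative assumptions, not details taken from the paper:

```python
import hashlib

def keystream(key: bytes, n: int, eps: float = 0.05):
    """Keyed perturbations in [-eps, +eps] (illustrative PRF, not the paper's)."""
    out = []
    for i in range(n):
        h = hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        u = int.from_bytes(h[:8], "big") / 2**64          # uniform in [0,1)
        out.append((2 * u - 1) * eps)
    return out

def encode(bits, p, key):
    """Arithmetic-encode a bit string with a key-perturbed split point each step."""
    deltas = keystream(key, len(bits))
    low, high = 0.0, 1.0
    for b, d in zip(bits, deltas):
        split = low + (high - low) * (p + d)
        if b == 0:
            high = split
        else:
            low = split
    return (low + high) / 2                               # any x in [low, high)

def decode(x, n, p, key):
    """Decode by iterating the keyed piecewise-linear (Bernoulli shift) map."""
    deltas = keystream(key, n)
    bits = []
    for d in deltas:
        q = p + d
        if x < q:
            bits.append(0); x = x / q                     # left branch, rescale
        else:
            bits.append(1); x = (x - q) / (1 - q)         # right branch, rescale
    return bits

msg = [0, 1, 1, 0, 1, 0, 0, 0]
code = encode(msg, 0.6, b"secret")
assert decode(code, len(msg), 0.6, b"secret") == msg      # a wrong key garbles output
```

Because the decoder must regenerate the same keyed split points, decoding with a different key walks a different sequence of piecewise linear maps and yields an unrelated bit string.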
In this paper, we analyse a new chaos-based cryptosystem with an embedded adaptive arithmetic coder, which was proposed by Li Heng-Jian and Zhang J S (Li H J and Zhang J S 2010 Chin. Phys. B 19 050508). Although this method has better compression performance than its original version, there are problems with its security and decryption processes. We show how to obtain a great deal of plaintext from the ciphertext without prior knowledge of the secret key. After discussing the security and decryption problems of the Li Heng-Jian et al. algorithm, we propose an improved, more secure chaos-based cryptosystem with an embedded adaptive arithmetic coder.
A method for the compression of ECG data is presented. The method is based on the edit distance algorithm developed for file comparison problems. The edit distance between two sequences of symbols is defined as the number of edit operations required to transform one sequence into the other. We adopt the edit distance algorithm to obtain a list of edit operations, called an edit script, which transforms a reference pulse into a pulse selected from the ECG data. If the decoder knows the same reference, it can reproduce the original pulse from the edit script alone. The edit script is expected to be smaller than the original pulse when the two pulses look alike, so we can reduce the amount of space needed to store the data. Applying the proposed scheme to raw ECG data, we have achieved a high compression ratio of about 14:1 without losing the significant features of the signals.
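A toy sketch of the reference-plus-edit-script idea, using Python's difflib as a stand-in for the paper's edit-distance algorithm; the sample values and the script encoding are made up for illustration:

```python
import difflib

def edit_script(reference, pulse):
    """Encode a pulse as the edit operations that turn the reference into it."""
    ops = []
    sm = difflib.SequenceMatcher(a=reference, b=pulse, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))           # cheap: just two indices
        else:                                      # replace / insert / delete
            ops.append(("emit", pulse[j1:j2]))     # literal samples to insert
    return ops

def rebuild(reference, ops):
    """Decoder side: reproduce the pulse from the shared reference + script."""
    out = []
    for op in ops:
        if op[0] == "copy":
            _, i1, i2 = op
            out.extend(reference[i1:i2])
        else:
            out.extend(op[1])
    return out

ref   = [0, 1, 4, 9, 7, 3, 1, 0, 0, 0]             # shared reference pulse
pulse = [0, 1, 4, 8, 7, 3, 1, 0, 0, 0]             # current pulse, one sample off
script = edit_script(ref, pulse)
assert rebuild(ref, script) == pulse
```

When the pulses are nearly identical, the script is dominated by short "copy" entries, which is exactly the situation where it takes far less space than the raw pulse.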
A flexible and low-complexity entropy-constrained vector quantizer (ECVQ) scheme based on Gaussian mixture models (GMMs), lattice quantization, and arithmetic coding is presented. The source is assumed to have a probability density function given by a GMM. An input vector is first classified to one of the mixture components, and the Karhunen-Loève transform of the selected mixture component is applied to the vector, followed by quantization using a lattice-structured codebook. Finally, the scalar elements of the quantized vector are entropy coded sequentially using a specially designed arithmetic coder. The computational complexity of the proposed scheme is low and independent of the coding rate in both the encoder and the decoder. The proposed scheme therefore serves as a lower-complexity alternative to the GMM-based ECVQ proposed by Gardner, Subramaniam and Rao [1]. The performance of the proposed scheme is analyzed under a high-rate assumption and quantified for a given GMM. The practical performance of the scheme was evaluated through simulations on both synthetic and speech line spectral frequency (LSF) vectors. For LSF quantization, the proposed scheme performs comparably to [1] at rates relevant for speech coding (20-28 bits per vector) with lower computational complexity.
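A rough sketch of the classify/rotate/quantize pipeline, assuming the integer cubic lattice (via rounding) as a stand-in for the paper's lattice codebook and omitting the arithmetic coding stage; all parameter values are illustrative:

```python
import numpy as np

def ecvq_encode(x, weights, means, covs, step=0.1):
    """One GMM-ECVQ step: classify, KLT-rotate, lattice-quantize (Z^n via rounding)."""
    # 1) pick the mixture component with the highest (unnormalised) log-likelihood
    scores = []
    for w, m, C in zip(weights, means, covs):
        d = x - m
        scores.append(np.log(w) - 0.5 * (np.log(np.linalg.det(C))
                      + d @ np.linalg.inv(C) @ d))
    k = int(np.argmax(scores))
    # 2) KLT of the chosen component: rotate into its eigenbasis
    _, V = np.linalg.eigh(covs[k])
    y = V.T @ (x - means[k])
    # 3) cubic-lattice quantization as a stand-in for the paper's lattice codebook
    idx = np.round(y / step).astype(int)           # integers to be entropy-coded
    return k, idx

def ecvq_decode(k, idx, means, covs, step=0.1):
    _, V = np.linalg.eigh(covs[k])
    return means[k] + V @ (idx * step)

rng = np.random.default_rng(0)
means = [np.zeros(2), np.array([3.0, 3.0])]
covs  = [np.diag([1.0, 0.2]), np.diag([0.3, 1.5])]
x = rng.multivariate_normal(means[1], covs[1])
k, idx = ecvq_encode(x, [0.5, 0.5], means, covs)
print(np.abs(ecvq_decode(k, idx, means, covs) - x).max())   # small quantization error
```

Because every stage (classification, a fixed rotation, rounding) costs the same regardless of how fine the lattice is, the complexity is independent of the coding rate, which is the property the abstract emphasizes.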
In this study, an adaptive arithmetic coder is embedded in the Baptista-type chaotic cryptosystem to implement secure data compression. To build the multiple lookup tables for secure data compression, the phase space of the chaotic map, which is uniformly distributed in the search mode, is divided non-uniformly according to the dynamic probability estimates of the plaintext symbols. As a result, more probable symbols are selected according to the local statistics of the plaintext, and the required number of iterations is small, since the more probable symbols have a higher chance of being visited by the chaotic search trajectory. By exploiting the non-uniformity of the probabilities with which the number of iterations to be coded takes on its possible values, compression is achieved with an adaptive arithmetic code. The system therefore offers both compression and security. Compared with original arithmetic coding, simulation results on the Calgary Corpus files show that the proposed scheme suffers a reduction in compression performance of less than 12% and is not susceptible to previously published attacks on arithmetic coding algorithms.
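A stripped-down sketch of the Baptista-style search mode, assuming a logistic map and a fixed (rather than adaptively re-estimated) probability table; the adaptive arithmetic coding of the iteration counts is omitted, and the key and probabilities are made up:

```python
def logistic(x, r=3.99):
    return r * x * (1 - x)

def partition(probs):
    """Non-uniform partition of [0,1): wider cells for more probable symbols."""
    edges, acc = [0.0], 0.0
    for p in probs:
        acc += p
        edges.append(acc)
    return edges

def lookup(x, edges):
    for s in range(len(edges) - 1):
        if edges[s] <= x < edges[s + 1]:
            return s
    return len(edges) - 2

def encode(symbols, probs, x0):
    """Each symbol -> number of map iterations until the orbit hits its cell."""
    edges, x, counts = partition(probs), x0, []
    for s in symbols:
        n = 0
        while True:
            x = logistic(x); n += 1
            if lookup(x, edges) == s:
                break
        counts.append(n)        # counts would then be adaptively arithmetic-coded
    return counts

def decode(counts, probs, x0):
    edges, x, out = partition(probs), x0, []
    for n in counts:
        for _ in range(n):
            x = logistic(x)
        out.append(lookup(x, edges))
    return out

probs = [0.5, 0.3, 0.2]         # plaintext symbol statistics (assumed)
msg = [0, 0, 1, 2, 0, 1]
key = 0.3141592653589793        # the initial condition acts as the secret key
assert decode(encode(msg, probs, key), probs, key) == msg
```

Wider cells for frequent symbols mean the orbit hits them sooner, so the iteration counts stay small and heavily skewed toward low values, which is what makes them compress well under the adaptive arithmetic coder.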
ISBN (print): 0780374029
In this paper, we present a novel wavelet packet image coding approach that provides fine-granular bitstream scalability. The proposed progressive wavelet packet image coding scheme consists of three parts: wavelet packet decomposition, a quadtree sorting procedure for classifying wavelet coefficients, and universal trellis-coded quantization (UTCQ) for quantizing the sorted coefficients. The image coding results, measured in PSNR and in the images reconstructed by the decoding algorithm, are comparable to or surpass previous results, owing to the flexible representation ability of the wavelet packet, the effective quadtree classifier, and the improved granular fidelity of UTCQ over scalar quantization.
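A small sketch of the quadtree sorting pass alone, with made-up coefficient values; the wavelet packet decomposition and UTCQ stages are omitted, and this is only one plausible reading of the classifier, not the authors' exact procedure:

```python
import numpy as np

def quadtree_sort(block, threshold, origin=(0, 0), sig=None):
    """Recursively locate coefficients with magnitude >= threshold.

    One significance decision is emitted per tested block, so insignificant
    regions are dismissed with a single test, as in zerotree-style sorting.
    """
    if sig is None:
        sig = []
    r, c = origin
    h, w = block.shape
    if np.abs(block).max() < threshold:
        return sig                                    # whole block insignificant
    if h == 1 and w == 1:
        sig.append((r, c, block[0, 0]))               # significant coefficient
        return sig
    hh, hw = max(h // 2, 1), max(w // 2, 1)
    for dr in range(0, h, hh):                        # visit the four quadrants
        for dc in range(0, w, hw):
            quadtree_sort(block[dr:dr + hh, dc:dc + hw],
                          threshold, (r + dr, c + dc), sig)
    return sig

coeffs = np.array([[31.0, 0.5, 0.2, 0.1],
                   [ 0.3, 0.4, 0.0, 0.2],
                   [ 0.1, 0.0, 9.5, 0.3],
                   [ 0.2, 0.1, 0.4, 0.1]])
print(quadtree_sort(coeffs, threshold=8.0))
# -> [(0, 0, 31.0), (2, 2, 9.5)] : only two coefficients survive this pass
```

Running such a pass with a halving threshold per round is what yields the progressive, fine-granular bitstream: cutting the stream early simply drops the later, lower-threshold refinements.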
In this paper, a large-alphabet-oriented scheme is proposed for both Chinese and English text compression. Our scheme parses Chinese text with the alphabet defined by the Big-5 code, and parses English text with rules designed here; thus, the alphabet used for English is not a word alphabet. After a token is parsed out of the input text, zero-, first-, and second-order Markov models are used to estimate the occurrence probabilities of this token. The estimated probabilities are then blended and accumulated in order to perform arithmetic coding. To implement arithmetic coding under a large alphabet and probability blending, a way to partition the count-value range is studied. Our scheme has been programmed and can be executed as a software package. Typical Chinese and English text files were compressed to study the influence of alphabet size and prediction order. On average, our compression scheme reduces a text file's size to 33.9% for Chinese text and to 23.3% for English text. These rates are comparable with or better than those obtained by popular data compression packages. Copyright (C) 2005 John Wiley & Sons, Ltd.
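A minimal sketch of blending zero-, first-, and second-order estimates for one token; the blending weights, Laplace smoothing, and vocabulary size are assumptions, and the arithmetic coding stage that would consume these probabilities is omitted:

```python
from collections import defaultdict

class BlendedModel:
    """Blend order-0/1/2 token statistics into one probability estimate."""
    def __init__(self, weights=(0.2, 0.3, 0.5)):
        self.w = weights                        # blending weights (assumed values)
        self.c0 = defaultdict(int)              # order-0 counts
        self.c1 = defaultdict(lambda: defaultdict(int))  # order-1: prev -> tok
        self.c2 = defaultdict(lambda: defaultdict(int))  # order-2: (p2,p1) -> tok
        self.hist = []

    def prob(self, tok, vocab_size):
        """Laplace-smoothed per-order estimates, linearly blended."""
        def est(counts):
            n = sum(counts.values())
            return (counts[tok] + 1) / (n + vocab_size)
        p0 = est(self.c0)
        p1 = est(self.c1[self.hist[-1]]) if len(self.hist) >= 1 else p0
        p2 = est(self.c2[tuple(self.hist[-2:])]) if len(self.hist) >= 2 else p1
        w0, w1, w2 = self.w
        return w0 * p0 + w1 * p1 + w2 * p2      # fed to the arithmetic coder

    def update(self, tok):
        self.c0[tok] += 1
        if len(self.hist) >= 1:
            self.c1[self.hist[-1]][tok] += 1
        if len(self.hist) >= 2:
            self.c2[tuple(self.hist[-2:])][tok] += 1
        self.hist.append(tok)

m = BlendedModel()
for t in ["我", "们", "的", "我", "们"]:        # Big-5 style per-character tokens
    p = m.prob(t, vocab_size=13000)             # ~13k Big-5 characters (approx.)
    m.update(t)
print(round(p, 6))                              # "们" after "我" has gained probability
```

With an alphabet this large, the cumulative counts backing these probabilities must be mapped carefully onto the arithmetic coder's integer count-value range, which is the partitioning problem the abstract refers to.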
The problem of generating a random number with an arbitrary probability distribution by using a general biased M-coin is studied. An efficient and very simple algorithm based on the successive refinement of partitions of the unit interval [0, 1), which we call the interval algorithm, is proposed. A fairly tight evaluation of its efficiency is given. Generalizations of the interval algorithm to the following cases are investigated: 1) the output sequence is independent and identically distributed (i.i.d.); 2) the output sequence is Markov; 3) the input sequence is Markov; 4) the input sequence and output sequence are both subject to arbitrary stochastic processes.
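The interval algorithm is simple enough to sketch directly. Below, a single output symbol is generated from a biased 2-coin (M = 2); the coin bias and target distribution are made-up examples:

```python
import random

def interval_algorithm(coin_probs, target_probs, flip):
    """Interval algorithm: turn biased coin flips into one sample from an
    arbitrary target distribution by refining subintervals of [0, 1)."""
    # target partition of [0,1): cell j has width target_probs[j]
    edges, acc = [0.0], 0.0
    for q in target_probs:
        acc += q
        edges.append(acc)
    a, b = 0.0, 1.0
    while True:
        # output j as soon as [a, b) lies entirely inside one target cell
        for j in range(len(target_probs)):
            if edges[j] <= a and b <= edges[j + 1]:
                return j
        # otherwise refine: coin outcome i selects the sub-interval whose
        # relative width coin_probs[i] mirrors the coin's bias
        i = flip()
        lo = a + (b - a) * sum(coin_probs[:i])
        a, b = lo, lo + (b - a) * coin_probs[i]

coin = [0.7, 0.3]                               # biased 2-coin (M = 2)
target = [0.5, 0.25, 0.25]                      # desired output distribution
flip = lambda: 0 if random.random() < coin[0] else 1
samples = [interval_algorithm(coin, target, flip) for _ in range(100000)]
print([samples.count(j) / len(samples) for j in range(3)])  # ~ [0.5, 0.25, 0.25]
```

The probability of reaching any nested interval equals its length, so the output lands in cell j with probability exactly target_probs[j]; the refinement loop is the same machinery that drives arithmetic decoding.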
We present a novel lossless (reversible) data-embedding technique, which enables the exact recovery of the original host signal upon extraction of the embedded information. A generalization of the well-known least significant bit (LSB) modification is proposed as the data-embedding method, which introduces additional operating points on the capacity-distortion curve. Lossless recovery of the original is achieved by compressing portions of the signal that are susceptible to embedding distortion and transmitting these compressed descriptions as a part of the embedded payload. A prediction-based conditional entropy coder which utilizes unaltered portions of the host signal as side-information improves the compression efficiency and, thus, the lossless data-embedding capacity.
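A toy sketch of the recover-by-compressing-LSBs idea, with zlib standing in for the paper's prediction-based conditional entropy coder and a deliberately compressible host signal; the two-byte header layout is an assumption for illustration:

```python
import zlib

def embed(host, payload_bits):
    """Lossless LSB embedding: compress the original LSB plane, then write
    (compressed LSBs || payload) back into the LSBs of the host samples."""
    lsbs = bytes(s & 1 for s in host)
    comp = zlib.compress(lsbs, 9)                  # stand-in entropy coder
    header = len(comp).to_bytes(2, "big")
    bits = []
    for byte in header + comp:
        bits.extend((byte >> k) & 1 for k in range(7, -1, -1))
    bits.extend(payload_bits)
    if len(bits) > len(host):
        raise ValueError("payload + compressed LSBs exceed capacity")
    bits.extend(s & 1 for s in host[len(bits):])   # pad with original LSBs
    return [(s & ~1) | b for s, b in zip(host, bits)]

def extract(stego, n_payload):
    """Recover the payload and restore the host exactly from the LSB plane."""
    bits = [s & 1 for s in stego]
    nbytes = lambda bs: bytes(sum(b << (7 - k) for k, b in enumerate(bs[i:i+8]))
                              for i in range(0, len(bs), 8))
    clen = int.from_bytes(nbytes(bits[:16]), "big")
    comp = nbytes(bits[16:16 + 8 * clen])
    payload = bits[16 + 8 * clen: 16 + 8 * clen + n_payload]
    lsbs = zlib.decompress(comp)
    host = [(s & ~1) | lsbs[i] for i, s in enumerate(stego)]  # exact restore
    return host, payload

host = [2 * (100 + (i % 7)) for i in range(4000)]  # toy host, all-zero LSB plane
stego = embed(host, [1, 0, 1, 1])
restored, payload = extract(stego, 4)
assert restored == host and payload == [1, 0, 1, 1]
```

The lossless capacity is whatever the entropy coder saves on the LSB plane, which is why the paper's side-information-assisted coder matters: better compression of the susceptible portions leaves more room for payload.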