Encryption is one of the fundamental technologies used in digital rights management. Unlike ordinary computer applications, multimedia applications generate large amounts of data that must be processed in real time, so a number of encryption schemes for multimedia applications have been proposed in recent years. We analyze the following proposed methods for multimedia encryption: key-based multiple Huffman tables (MHT), arithmetic coding with key-based interval splitting (KSAC), and randomized arithmetic coding (RAC). Our analysis shows that MHT and KSAC are vulnerable to low-complexity known- and/or chosen-plaintext attacks. Although we do not provide any attacks on RAC, we point out some disadvantages of RAC compared to the classical compress-then-encrypt approach.
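
To make the MHT idea concrete, here is a minimal sketch assuming two toy prefix-code tables and a short repeating key, neither of which comes from the published scheme:

    # Two valid prefix codes over the alphabet {a, b, c}; the secret key
    # decides, symbol by symbol, which table is used. Toy values only.
    TABLES = [
        {"a": "0", "b": "10", "c": "11"},
        {"a": "11", "b": "0", "c": "10"},
    ]

    def mht_encode(symbols, key_bits):
        """Encode each symbol with the Huffman table picked by the key."""
        out = []
        for i, s in enumerate(symbols):
            table = TABLES[key_bits[i % len(key_bits)]]
            out.append(table[s])
        return "".join(out)

    print(mht_encode("abcab", [0, 1, 1, 0]))  # -> 0010010

Known-plaintext attacks of the kind the abstract mentions can, for instance, compare the ciphertext against the encodings each candidate table would produce, narrowing down the key bit used at each position.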
This paper proposes a novel and efficient algorithm for single-rate compression of triangle meshes. The input mesh is traversed along its greedy Hamiltonian cycle in O(n) time. Based on the Hamiltonian cycle, the mesh connectivity can be encoded as a low-entropy face label sequence containing only four kinds of labels (HETS), and the transmission delay at the decoding end that frequently occurs in conventional single-rate approaches is significantly reduced. The mesh geometry is compressed with a global coordinate concentration strategy and a novel local parallelogram error prediction scheme. Experiments on realistic 3D models demonstrate the effectiveness of our approach in terms of compression rates and run-time performance compared to leading single-rate and progressive mesh compression methods.
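
The "local parallelogram error prediction" builds on the classical parallelogram rule; the following is a minimal sketch of that baseline rule, not of the paper's refined scheme:

    import numpy as np

    def parallelogram_residual(a, b, c, v):
        """Residual of vertex v against the prediction a + b - c, where
        (a, b, c) is the already-decoded triangle sharing edge (a, b)."""
        return v - (a + b - c)

    a = np.array([0.0, 0.0, 0.0])
    b = np.array([1.0, 0.0, 0.0])
    c = np.array([0.5, 1.0, 0.0])
    v = np.array([0.55, -0.95, 0.02])  # nearly mirrors c across edge (a, b)
    print(parallelogram_residual(a, b, c, v))  # small residual, cheap to code

Because most meshes are locally flat, the residuals cluster near zero and entropy-code far more compactly than raw coordinates.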
We study the compression of polynomially samplable sources. In particular, we give efficient prefix-free compression and decompression algorithms for three classes of such sources (whose support is a subset of {0,1}^n). 1. We show how to compress sources X samplable by logspace machines to expected length H(X) + O(1). Our next results concern flat sources whose support is in P. 2. If H(X) <= k = n - O(log n), we show how to compress to expected length k + polylog(n - k). 3. If the support of X is the witness set for a self-reducible NP relation, then we show how to compress to expected length H(X) + 5.
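
A toy illustration of the idea behind compressing a flat source, as a minimal sketch assuming a small explicit support S; the paper's algorithms only assume membership in P or an NP witness relation, without explicit enumeration:

    from math import ceil, log2

    # Flat source over a small explicit support: a sample is stored as its
    # rank in S, costing ceil(log2 |S|) bits, which is the entropy of a
    # uniform source.
    S = sorted(["0011", "0101", "0110", "1001", "1010", "1100"])

    def compress(x):
        width = max(1, ceil(log2(len(S))))
        return format(S.index(x), f"0{width}b")

    def decompress(bits):
        return S[int(bits, 2)]

    code = compress("1001")
    print(code, decompress(code))  # -> 011 1001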
The QM-coder [1] is an arithmetic code used in both JPEG and JBIG for binary information source coding. The most remarkable feature of arithmetic coding is that dynamic adaptation can easily be realized; in other words, the code is highly versatile. The same algorithm is effective for information sources with various statistical properties, and the code is expected to be applied to various kinds of information source coding [1-3]. However, the QM-coder uses a subtraction-type arithmetic code in order to simplify processing, with the augend operation executed only by constant substitution and subtraction, so some degradation of coding efficiency is anticipated. From this viewpoint, this paper proposes an improvement of the coding efficiency of the subtraction-type arithmetic code [11, 12]. The proposed technique separates the augend into several states and varies the constant assigned to the less probable symbol according to the state. The probability estimation table in the proposed method is constructed by a theoretical investigation based on the table used in the QM-coder. Coding simulations on binary images show that the proposed method reduces the amount of generated code by 0.686% to 1.301% compared to the QM-coder. (C) 1999 Scripta Technica.
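
A heavily simplified sketch of a subtraction-type interval update with a state-dependent LPS constant; the two-state split and the constants below are hypothetical, and the real QM-coder's conditional exchange and adaptive probability estimation are omitted:

    # Augend A shrinks by a constant for the more probable symbol (MPS)
    # and is replaced by the constant for the less probable one (LPS).
    def qe_for_state(A):
        # Hypothetical state split: larger LPS constant for a large augend.
        return 0x5A1D if A > 0xC000 else 0x3000

    def encode_symbol(A, C, is_mps):
        Qe = qe_for_state(A)
        if is_mps:
            A -= Qe          # constant subtraction only, no multiply
        else:
            C += A - Qe      # point the code register at the LPS interval
            A = Qe
        while A < 0x8000:    # renormalize by doubling
            A <<= 1
            C <<= 1
        return A, C

    A, C = 0xFFFF, 0
    for bit in [1, 1, 0, 1]:
        A, C = encode_symbol(A, C, is_mps=(bit == 1))
    print(hex(A), hex(C))

Varying Qe with the state lets the assigned LPS interval track the true interval width more closely, which is where the efficiency gain over a single fixed constant comes from.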
ISBN (print): 9781424428236
This paper offers a simple lossless compression method for medical images. The method is based on wavelet decomposition of the medical images followed by correlation analysis of the coefficients. The correlation analysis forms the basis of a prediction equation for each subband. Predictor variables are selected through a coefficient graphic method to avoid multicollinearity and to achieve high prediction accuracy and a high compression rate. The method is applied to MRI and CT images. Results show that the proposed approach gives a high compression rate for MRI and CT images compared with state-of-the-art methods.
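
A minimal sketch of the per-subband prediction step, assuming a simple least-squares predictor over the left and top neighbours; the paper's coefficient graphic method for variable selection is not reproduced here:

    import numpy as np

    def subband_residuals(band):
        """Fit v[i,j] ~ w1*v[i,j-1] + w2*v[i-1,j] + w0 by least squares and
        return the rounded-prediction residuals kept for entropy coding."""
        h, w = band.shape
        rows, targets = [], []
        for i in range(1, h):
            for j in range(1, w):
                rows.append([band[i, j - 1], band[i - 1, j], 1.0])
                targets.append(band[i, j])
        X, y = np.array(rows), np.array(targets)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        residuals = y - np.rint(X @ coef)  # rounding keeps it lossless
        return coef, residuals

    rng = np.random.default_rng(0)
    band = rng.integers(-2, 3, size=(16, 16)).cumsum(0).cumsum(1).astype(float)
    coef, res = subband_residuals(band)
    print(res.var(), "vs", band.var())  # residuals are far cheaper to code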
ISBN (print): 9781467361842
Applying lossless compression to hide text is considered a novel trend in research. Evaluations of the methods proposed in the field of steganography reflect a variety of approaches to creating covert communication via text files. The breadth of steganographic issues and the huge variety of approaches make it difficult to compare and evaluate these methods precisely. Therefore, this article presents a new steganography method that uses a statistical compression technique called arithmetic coding, and compares its capacity with that of other methods. The arithmetic coding technique, which achieves very high compression rates, guarantees even higher capacity growth and higher security compared to similar techniques. Meanwhile, the secret messages were not revealed through rewriting or syntax/semantic checking; compared with similar methods the capacity increased by up to 68.9%, and compared with other methods it improved by up to fifteen times.
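
A minimal sketch of generation-based embedding with arithmetic coding, assuming a toy three-word distribution; real systems drive the decoder with a language model and handle message framing:

    # The secret bits are read as a binary fraction in [0, 1) and "decoded"
    # against a word distribution, so the cover text looks ordinary; the
    # receiver recovers the bits by re-encoding the words. Toy model only.
    WORDS = [("the", 0.5), ("cat", 0.3), ("sat", 0.2)]

    def embed(bits, n_words):
        x = int(bits, 2) / 2 ** len(bits)
        lo, hi, out = 0.0, 1.0, []
        for _ in range(n_words):
            cum = lo
            for word, p in WORDS:
                span = (hi - lo) * p
                if x < cum + span:
                    out.append(word)
                    lo, hi = cum, cum + span
                    break
                cum += span
        return " ".join(out)

    print(embed("101101", 4))

The capacity advantage the abstract claims follows from the compression rate: the better the model compresses text, the more secret bits each generated word can absorb.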
ISBN (print): 0819459763
CABAC is one of the main entropy coding methods in the H.264 video compression standard. As a binary arithmetic coding scheme, CABAC can achieve extremely high compression efficiency, but it is sensitive to channel errors. After analyzing the H.264 CABAC framework, an algorithm is proposed to integrate error detection into CABAC. A forbidden symbol is introduced into the coding alphabet and is allocated a small probability. Whenever this redundant interval is observed in the decoder output, an error is detected. Mathematical analysis shows that the error detection delay approximately follows a geometric distribution with the redundancy as its parameter. A small amount of extra redundancy can be very effective in detecting errors very quickly, and the compression efficiency of CABAC is not noticeably undermined. The value of redundancy can easily be adjusted through a single parameter to suit the error characteristics of the channels in real implementations. This error detection scheme also yields useful information about the position of the error, which can substantially improve the efficiency of subsequent error resilience processing. Experimental results confirm these conclusions.
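
The geometric-distribution claim is easy to check numerically; a minimal Monte Carlo sketch, with an illustrative forbidden-symbol probability eps:

    import random

    # After an error desynchronizes the decoder, each decoded symbol lands
    # in the forbidden interval with probability eps independently, so the
    # detection delay is roughly geometric with mean 1/eps.
    def detection_delay(eps, rng):
        k = 1
        while rng.random() >= eps:  # symbol decodes without hitting the gap
            k += 1
        return k

    rng = random.Random(42)
    eps = 0.05
    delays = [detection_delay(eps, rng) for _ in range(100_000)]
    print(sum(delays) / len(delays), "~", 1 / eps)

The rate cost of the scheme is about -log2(1 - eps) bits per coded symbol, which is why a small eps detects errors quickly without noticeably hurting compression.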
ISBN (print): 9783030616090; 9783030616083
Huge amounts of genomic sequence data have been generated with the development of high-throughput sequencing technologies, which brings challenges to data storage, processing, and transmission. Standard compression tools designed for English text are not able to compress genomic sequences well, so an effective dedicated method is urgently needed. In this paper, we propose a genomic sequence compression algorithm based on a deep learning model and an arithmetic encoder. The deep learning model is structured as a convolutional layer followed by an attention-based bi-directional long short-term memory network, which predicts the probability of the next base in a sequence. The arithmetic encoder employs these probabilities to compress the sequence. We evaluate the proposed algorithm against various compression approaches, including the state-of-the-art genomic sequence compression algorithm DeepDNA, on several real-world data sets. The results show that the proposed algorithm converges stably and achieves the best compression performance, up to 3.7 times better than DeepDNA. Furthermore, we conduct ablation experiments to verify the effectiveness and necessity of each part of the model, and we visualize the attention weight matrix to show the varying importance of hidden states for the final prediction. The source code for the model is available on Github (https://***/viviancui59/Compressing-Genomic-Sequences).
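
A minimal sketch of the predict-then-encode pipeline, with a toy order-1 Markov predictor standing in for the paper's CNN + attention BiLSTM; an arithmetic coder's output length stays within a couple of bits of the cross-entropy sum computed here:

    from collections import defaultdict
    from math import log2

    def code_length(seq, alphabet="ACGT"):
        """Ideal arithmetic-code cost, in bits, of coding seq with an
        adaptively updated order-1 (add-1 smoothed) base predictor."""
        counts = defaultdict(lambda: {b: 1 for b in alphabet})
        bits, prev = 0.0, seq[0]
        for base in seq[1:]:
            ctx = counts[prev]
            bits += -log2(ctx[base] / sum(ctx.values()))
            ctx[base] += 1  # update after coding; the decoder can do the same
            prev = base
        return bits

    seq = "ACGT" * 50 + "AAAA" * 50
    print(code_length(seq) / len(seq), "bits/base (2.0 = incompressible)")

Any model that emits next-base probabilities can be slotted into this loop; better predictions translate directly into fewer bits.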
The block truncation coding (BTC) technique is a simple and fast image compression algorithm, since complicated transforms are not used. The principle of the BTC algorithm is to use a two-level quantiser that adapts to local properties of the image while preserving the first-order, or first- and second-order, statistical moments. The parameters transmitted or stored in the BTC algorithm are the statistical moments and the bitplane, yielding good-quality images at a bitrate of 2 bits per pixel (bpp). In this paper, two modified BTC (MBTC) algorithms are proposed for reducing the bitrate below 2 bpp. The principle used in the proposed algorithms is to use the ratio of the moments, which is smaller than the absolute moments. The ratio values are then entropy coded. The bitplane is also coded to remove the correlation among the bits. The proposed algorithms are compared with MBTC and with algorithms obtained by combining the JPEG standard with MBTC in terms of bitrate, peak signal-to-noise ratio (PSNR) and subjective quality. It is found that the reconstructed images obtained using the proposed algorithms yield better results.
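
For reference, a minimal sketch of the standard moment-preserving BTC baseline that the proposed MBTC variants push below 2 bpp:

    import numpy as np

    def btc_block(block):
        """Reconstruct one block from its mean, standard deviation and
        bitplane so that the first two moments are preserved."""
        mean, std = block.mean(), block.std()
        bitplane = block > mean
        q, m = int(bitplane.sum()), block.size
        if q in (0, m):  # flat block: a single level suffices
            return np.full_like(block, mean), bitplane
        low = mean - std * np.sqrt(q / (m - q))
        high = mean + std * np.sqrt((m - q) / q)
        return np.where(bitplane, high, low), bitplane

    block = np.array([[12., 14., 80., 90.],
                      [ 9., 15., 85., 88.],
                      [11., 13., 82., 91.],
                      [10., 16., 84., 89.]])
    recon, plane = btc_block(block)
    print(recon.mean(), block.mean())  # first moment matches

Storing two moments plus a one-bit-per-pixel plane for each 4x4 block is what fixes the baseline rate at 2 bpp; coding the ratio of the moments and the bitplane, as proposed, is what cuts it further.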
ISBN (print): 9781467375887
Wireless Sensor Networks (WSNs) have been successfully applied in many application areas. Understanding wireless link performance is very helpful for both protocol designers and network managers. Loss tomography is a popular approach to inferring per-link loss ratios from end-to-end delivery ratios. Previous studies, however, are usually targeted at networks with static or slowly changing routing paths. In this work, we propose Dophy, a Dynamic loss tomography approach specifically designed for dynamic WSNs in which each node dynamically selects its forwarding nodes towards the sink. The key idea of Dophy is based on the observation that most existing protocols use retransmissions to achieve a high data delivery ratio. Dophy employs arithmetic encoding to compactly encode the number of retransmissions along the paths, and incorporates two mechanisms to optimize its performance. First, Dophy intelligently reduces the size of the symbol set by aggregating the numbers of retransmissions, reducing the encoding overhead significantly. Second, Dophy periodically updates the probability model to minimize the overall transmission overhead. We implement Dophy on the TinyOS platform and evaluate its performance extensively using large-scale simulations. Results show that Dophy achieves both high encoding efficiency and high estimation accuracy. Comparative studies show that Dophy significantly outperforms traditional loss tomography approaches in terms of accuracy.
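
A minimal sketch of the two encoding ideas the abstract describes: bucketing retransmission counts into a small symbol set, and costing the symbols with a probability model as an arithmetic coder would. The bucket boundaries and the static empirical model are illustrative assumptions:

    from collections import Counter
    from math import log2

    BUCKETS = [0, 1, 2, 4, 8]  # retransmission counts above 8 share a symbol

    def bucket(retx):
        for i, bound in enumerate(BUCKETS):
            if retx <= bound:
                return i
        return len(BUCKETS)

    def ideal_bits(symbols):
        """Arithmetic-coding cost under the empirical symbol distribution."""
        freq = Counter(symbols)
        total = len(symbols)
        return sum(-log2(freq[s] / total) for s in symbols)

    retx_counts = [0, 0, 1, 0, 2, 0, 0, 5, 1, 0, 0, 3]
    symbols = [bucket(r) for r in retx_counts]
    print(ideal_bits(symbols) / len(symbols), "bits per report")

Because most hops need zero or one retransmission, the symbol distribution is highly skewed, which is exactly where arithmetic coding pays off over fixed-width counters.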