This paper proposes a lossless data embedding scheme with a large payload capacity and good image quality, based on difference expansion. In this scheme, every pixel in a host image is divided into two nibbles, and each nibble pair formed between two adjacent pixels can be used to hide a secret message. In order to completely recover the host image, arithmetic coding based on the prediction by partial matching (PPM) model is adopted to compress the restoration information. The proposed scheme has been successfully applied to different images. According to the experimental results, embedded information can be extracted correctly and quickly from the embedded image. In addition, the proposed scheme can not only hide a large amount of information in a host image without noticeable distortion, but can also completely restore the host image from the embedded image. (C) 2007 Elsevier B.V. All rights reserved.
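To make the difference-expansion step concrete, the following is a minimal Python sketch of the classic pixel-pair version of the technique; the paper itself applies the idea to nibble pairs of adjacent pixels and adds compressed restoration information, neither of which is shown here.

```python
# Minimal sketch of Tian-style difference expansion on a value pair.
# The paper applies the idea to nibble pairs of adjacent pixels; this
# illustrative version works on two small integers and ignores the
# overflow/location-map bookkeeping that a real scheme needs.

def de_embed(x, y, bit):
    """Embed one bit into the pair (x, y) by expanding their difference."""
    l = (x + y) // 2          # integer average (kept invariant)
    h = x - y                 # difference
    h2 = 2 * h + bit          # expanded difference carries the bit
    x2 = l + (h2 + 1) // 2
    y2 = l - h2 // 2
    return x2, y2

def de_extract(x2, y2):
    """Recover the bit and the original pair from the embedded pair."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1
    h = h2 >> 1               # floor division by 2, also correct for negative h2
    x = l + (h + 1) // 2
    y = l - h // 2
    return x, y, bit

if __name__ == "__main__":
    assert de_extract(*de_embed(11, 9, 1)) == (11, 9, 1)
```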
In this paper, an adaptive multi-dictionary model for data compression is proposed. Dictionary techniques applied in lossless compression coding can be modeled from the dictionary management point of view which is similar to that of cache memory. The behavior of a compression technique can be described by nine parameters defined in the proposed model, which provides a unified framework to describe the behavior of lossless compression techniques including existing probability-based Huffman coding and arithmetic coding, and dictionary-based LZ-family coding and its variants. Those methods can be interpreted as special cases under the proposed model. New compression techniques can be developed by choosing proper management policies in order to meet special encoding/decoding software or hardware requirements, or to achieve better compression performance. (C) 1998 Elsevier Science B.V. All rights reserved.
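As a toy illustration of the cache-memory view of dictionary management described above, the sketch below keeps a fixed-size phrase dictionary under an LRU replacement policy; it is an assumed, simplified example that does not reproduce the nine-parameter model or any entropy coding.

```python
# Toy sketch of the "dictionary management as cache management" view: recently
# seen phrases live in a fixed-size dictionary under an LRU replacement policy,
# and the coder emits reference or literal tokens. A real coder would send
# dictionary indices with entropy coding; the phrase content is kept in the
# tokens here only to keep the sketch readable.

from collections import OrderedDict

def cache_style_tokens(data: bytes, capacity: int = 16, phrase_len: int = 2):
    dictionary = OrderedDict()                    # phrase -> None, in LRU order
    tokens = []
    i = 0
    while i < len(data):
        phrase = data[i:i + phrase_len]
        if len(phrase) == phrase_len and phrase in dictionary:
            dictionary.move_to_end(phrase)        # "cache hit": refresh recency
            tokens.append(("ref", phrase))
            i += phrase_len
        else:
            tokens.append(("lit", data[i:i + 1])) # "cache miss": emit a literal
            i += 1
        # admit the phrase that just ended at position i (already-seen bytes only)
        seen = data[max(0, i - phrase_len):i]
        if len(seen) == phrase_len and seen not in dictionary:
            if len(dictionary) >= capacity:
                dictionary.popitem(last=False)    # evict the least recently used
            dictionary[seen] = None
    return tokens

# Repeated content turns into "ref" tokens once its phrases enter the dictionary.
print(cache_style_tokens(b"abcabcabcabc"))
```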
In this paper we propose a novel efficient adaptive binary arithmetic coder which is multiplication-free and requires no look-up tables. To achieve this, we combine probability estimation based on a virtual sliding window with an approximation of the multiplication, using simple operations to calculate the next approximation after encoding each binary symbol. We show that, in comparison with the M-coder, the proposed algorithm provides comparable computational complexity, a smaller memory footprint, and bitrate savings of 0.5 to 2.3% on average for the H.264/AVC standard and 0.6 to 3.6% on average for the HEVC standard.
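The following sketch illustrates the virtual-sliding-window style of probability estimation using shift operations only; the precision and window-length constants are assumptions for illustration, and the surrounding binary arithmetic coder is omitted.

```python
# Minimal sketch of probability estimation with a "virtual sliding window":
# the estimate is updated with shifts only (no multiplications, no tables).
# Constants and scaling are illustrative assumptions, not the exact
# parameters of the coder proposed in the paper.

PREC = 16              # estimate kept as a PREC-bit fixed-point number
W_LOG = 5              # log2 of the virtual window length (assumed value)

def vsw_update(state: int, bit: int) -> int:
    """Return the updated scaled estimate of P(bit == 1)."""
    if bit:
        # move the estimate towards 1 by a 2^-W_LOG fraction of the gap
        return state + (((1 << PREC) - state) >> W_LOG)
    # move the estimate towards 0 by the same fraction
    return state - (state >> W_LOG)

def estimate_sequence(bits):
    state = 1 << (PREC - 1)      # start from P(1) = 0.5
    probs = []
    for b in bits:
        probs.append(state / (1 << PREC))
        state = vsw_update(state, b)
    return probs

# Example: the estimate drifts towards the empirical frequency of ones.
print(estimate_sequence([1, 1, 1, 0, 1, 1, 1, 1])[-1])
```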
This correspondence presents a scheme for the lossless compression of wideband spectrally and/or temporally sparse radio-frequency intercepts. It decomposes intercepts into time/frequency partitions using an adaptive invertible channelizer based on infinite impulse response orthogonal wavelet filter banks. The resulting subband decomposition is compressed via arithmetic coding.
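A rough sketch of the decompose-then-code pipeline is given below; an integer Haar lifting split stands in for the IIR orthogonal wavelet channelizer and zlib stands in for the arithmetic coder, both substitutions made only for illustration.

```python
# Rough sketch of the decompose-then-code pipeline on integer samples. The
# point is simply that sparse subbands cost few bits; no claim is made about
# the paper's actual filter banks or coder.

import zlib
import numpy as np

def integer_haar(x: np.ndarray):
    """Lossless one-level Haar split via lifting (perfectly invertible)."""
    even, odd = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    high = odd - even                  # detail (highpass) band
    low = even + high // 2             # approximation (lowpass) band
    return low, high

def coded_bytes(band: np.ndarray) -> int:
    return len(zlib.compress(band.astype(np.int16).tobytes()))

t = np.arange(4096)
samples = np.round(1000 * np.sin(0.02 * t)).astype(np.int16)   # narrowband "intercept"
low, high = integer_haar(samples)
print(coded_bytes(low), coded_bytes(high))   # the near-zero detail band codes far smaller
```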
In this paper, we present a method for reducing the memory requirements of an embedded system by using code compression. We compress the instruction segment of the executable running on the embedded system, and we show how to design a run-time decompression unit to decompress code on the fly before execution. Our algorithm uses arithmetic coding in combination with a Markov model, which is adapted to the instruction set and the application. We provide experimental results on two architectures, Analog Devices SHARC and ARM's ARM and Thumb instruction sets, and show that programs can often be reduced by more than 50%. Furthermore, we suggest a table-based design that allows multibit decoding to speed up decompression.
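The sketch below shows only the modeling half of such a scheme: a bit-level Markov (context) model over instruction bytes and the ideal code length an arithmetic coder driven by it would spend. The context definition and the sample byte pattern are assumptions for illustration, not the paper's tuned model.

```python
# Sketch of the modeling side only: a bit-level Markov model over an
# instruction stream, whose predicted probabilities would drive a binary
# arithmetic coder. The k-bit context is an assumed choice.

import math
from collections import defaultdict

def markov_code_length(code_bytes: bytes, k: int = 8) -> float:
    """Ideal compressed size in bits under an adaptive k-bit-context model."""
    counts = defaultdict(lambda: [1, 1])      # context -> [count0, count1] (Laplace prior)
    context = 0
    total_bits = 0.0
    for byte in code_bytes:
        for pos in range(7, -1, -1):
            bit = (byte >> pos) & 1
            c0, c1 = counts[context]
            p = (c1 if bit else c0) / (c0 + c1)
            total_bits += -math.log2(p)       # what an ideal arithmetic coder spends
            counts[context][bit] += 1
            context = ((context << 1) | bit) & ((1 << k) - 1)
    return total_bits

# Example: repetitive "instruction" patterns model well and shrink below 8 bits/byte.
blob = bytes([0xE5, 0x9F, 0x10, 0x04] * 256)     # hypothetical instruction stream
print(markov_code_length(blob) / len(blob), "bits per byte")
```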
The One-Time Pad (OTP) is the only known unbreakable cipher, proved mathematically by Shannon in 1949. In spite of several practical drawbacks of using the OTP, it continues to be used in quantum cryptography, DNA cryptography and even in classical cryptography when the highest form of security is desired (other popular algorithms like RSA, ECC, AES are not even proven to be computationally secure). In this work, we prove that OTP encryption and decryption are equivalent to finding the initial condition on a pair of binary maps (Bernoulli shift). The binary map belongs to a family of 1D nonlinear chaotic and ergodic dynamical systems known as Generalized Lüroth Series (GLS). Having established these interesting connections, we construct other perfect secrecy systems on the GLS that are equivalent to the One-Time Pad, generalizing to larger alphabets. We further show that OTP encryption is related to Randomized arithmetic coding - a scheme for joint compression and encryption. (C) 2012 Elsevier B.V. All rights reserved.
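The sketch below illustrates the two building blocks the abstract connects: XOR-based OTP encryption, and reading a bit string as the symbolic itinerary of an initial condition under the Bernoulli shift. The paper's actual GLS construction and its link to randomized arithmetic coding are not reproduced here.

```python
# Minimal sketch of (i) the One-Time Pad as bitwise XOR with a truly random
# key and (ii) symbolic dynamics of the Bernoulli shift T(x) = 2x mod 1,
# where the bit string of a message is the itinerary of an initial condition.

from fractions import Fraction
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    """XOR-based OTP: the same function encrypts and decrypts."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

def bits_to_initial_condition(bits):
    """Pack a bit string into the point whose Bernoulli-shift itinerary is that string."""
    return sum(Fraction(b, 2 ** (i + 1)) for i, b in enumerate(bits))

def itinerary(x, n):
    """Recover n symbolic bits by iterating T(x) = 2x mod 1."""
    out = []
    for _ in range(n):
        out.append(1 if x >= Fraction(1, 2) else 0)
        x = (2 * x) % 1
    return out

message = b"GLS"
key = secrets.token_bytes(len(message))
assert otp(otp(message, key), key) == message          # perfect recovery

bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert itinerary(bits_to_initial_condition(bits), len(bits)) == bits
```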
JPEG 2000 is one of the most popular image compression standards, offering significant performance advantages over previous image standards. The high computational complexity of the JPEG 2000 algorithms makes it necessary to employ methods that overcome the bottlenecks of the system, and hence an efficient solution is imperative. One such crucial algorithm in JPEG 2000 is arithmetic coding, which is completely based on bit-level operations. In this paper, an efficient hardware implementation of arithmetic coding is proposed which uses efficient pipelining and parallel processing for intermediate blocks. The idea is to provide a two-symbol coding engine which is efficient in terms of performance, memory and hardware. This architecture is implemented in the Verilog hardware description language and synthesized for an Altera field-programmable gate array. The only memory unit used in this design is a FIFO (first in, first out) of 256 bits to store the CX-D pairs at the input, which is negligible compared to existing arithmetic coding hardware designs. The simulation and synthesis results show that the operating frequency of the proposed architecture is greater than 100 MHz and that it achieves a throughput of 212 Msymbols/sec, which is double the throughput of a conventional one-symbol implementation and enables at least a 50% throughput increase compared to existing two-symbol architectures.
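As a rough, purely illustrative model of the throughput argument, the toy simulation below compares draining CX-D pairs from a small input FIFO at one pair per cycle versus two pairs per cycle; no MQ-coder arithmetic is performed and the FIFO depth is an assumed stand-in for the 256-bit buffer.

```python
# Toy throughput model: a coding engine that consumes two CX-D pairs per
# cycle from a small input FIFO versus one pair per cycle. Numbers and FIFO
# size are illustrative assumptions only.

from collections import deque

def cycles_to_drain(n_pairs: int, pairs_per_cycle: int, fifo_depth: int = 32) -> int:
    fifo = deque(maxlen=fifo_depth)   # stand-in for the 256-bit input FIFO
    produced = consumed = cycles = 0
    while consumed < n_pairs:
        # producer side: refill the FIFO each cycle
        while len(fifo) < fifo_depth and produced < n_pairs:
            fifo.append(produced)
            produced += 1
        # consumer side: the coding engine takes up to `pairs_per_cycle` pairs
        for _ in range(min(pairs_per_cycle, len(fifo))):
            fifo.popleft()
            consumed += 1
        cycles += 1
    return cycles

n = 10_000
print(cycles_to_drain(n, 1), cycles_to_drain(n, 2))   # two-symbol engine needs about half the cycles
```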
In recent years, microarray technology has gained attention for the concurrent monitoring of numerous microarray experiments. It remains a major challenge to process, store and transmit such huge volumes of microarray data. Hence, image compression techniques are used to reduce the number of bits so that the images can be stored and shared easily. Several techniques have been proposed in the past with applications in different domains. The current research paper presents a novel image compression technique, i.e., optimized Linde–Buzo–Gray (OLBG) with Lempel–Ziv–Markov chain algorithm (LZMA) coding, called OLBG-LZMA, for compressing microarray images without any loss of quality. The LBG model is generally used in designing a locally optimal codebook for image compression. Codebook construction is treated as an optimization issue and can be resolved with the help of the Grey Wolf Optimization (GWO) algorithm. Once the codebook is constructed by the LBG-GWO algorithm, LZMA is employed for the compression of the index table to further raise its compression efficiency. Experiments were performed on a high-resolution Tissue Microarray (TMA) image dataset of 50 prostate tissue samples collected from prostate cancer patients. The compression performance of the proposed coding was compared with recently proposed techniques. The simulation results infer that OLBG-LZMA coding achieved a significant compression performance compared to other techniques.
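A minimal sketch of the codebook-plus-LZMA pipeline follows; plain k-means stands in for LBG codebook design, no Grey Wolf Optimization is performed, and the synthetic image and block size are assumptions for illustration.

```python
# Minimal sketch of the pipeline: build a codebook for image blocks, map each
# block to its nearest codeword, and LZMA-compress the resulting index table.

import lzma
import numpy as np

def blockify(img: np.ndarray, b: int = 4) -> np.ndarray:
    h, w = (img.shape[0] // b) * b, (img.shape[1] // b) * b
    blocks = img[:h, :w].reshape(h // b, b, w // b, b).swapaxes(1, 2)
    return blocks.reshape(-1, b * b).astype(np.float64)

def kmeans_codebook(vectors: np.ndarray, k: int = 64, iters: int = 10) -> np.ndarray:
    rng = np.random.default_rng(0)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)   # Lloyd/LBG-style centroid update
    return codebook

img = (np.add.outer(np.arange(128), np.arange(128)) % 256).astype(np.uint8)  # synthetic image
vectors = blockify(img)
codebook = kmeans_codebook(vectors)
d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
indices = d.argmin(axis=1).astype(np.uint8)          # index table (k <= 256)
packed = lzma.compress(indices.tobytes())             # LZMA stage of the pipeline
print(len(indices), "indices ->", len(packed), "bytes")
```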
Reversible information-embedding (RIE) is a technique that transforms host signals and a message into stego-signals, from which the host signals and the message can be losslessly recovered. We consider the following conditions: 1) the host signals are composed of gray-scale independent and identically distributed (i.i.d.) samples; 2) the mean squared error is adopted as the measure of distortion; and 3) the procedure is a scalar approach, i.e., the encoder only reads a host signal and then outputs the corresponding stego-signal in each iteration. In this paper, we propose an iterative algorithm to calculate the signal transition probabilities approximating the optimal rate-distortion bound. We then propose an explicit implementation to embed a message in an i.i.d. host sequence. The experiments show that the proposed method closely approaches the expected rate-distortion bound for i.i.d. gray-scale signals. With an image prediction model, the proposed method can also be applied to gray-scale images.
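The helper sketch below evaluates the two quantities such an iterative algorithm trades off for a candidate transition matrix: the reversible-embedding rate H(Y) - H(X) and the mean squared distortion. It follows the standard RIE rate-distortion formulation and is not the paper's optimization procedure itself; the host pmf and transition matrix are hand-picked examples.

```python
# Evaluate a candidate transition matrix W(y|x) for an i.i.d. host with
# pmf p(x): rate = H(Y) - H(X) bits per sample, distortion = E[(X - Y)^2].

import numpy as np

def entropy(p: np.ndarray) -> float:
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def rate_and_distortion(px: np.ndarray, W: np.ndarray, levels: np.ndarray):
    """px[i] = P(X = levels[i]); W[i, j] = P(Y = levels[j] | X = levels[i])."""
    py = px @ W                                   # stego-signal distribution
    rate = entropy(py) - entropy(px)              # bits embeddable per sample
    diff2 = (levels[:, None] - levels[None, :]) ** 2
    distortion = float((px[:, None] * W * diff2).sum())   # mean squared error
    return rate, distortion

levels = np.array([0.0, 1.0, 2.0])
px = np.array([0.6, 0.3, 0.1])
W = np.array([[0.8, 0.2, 0.0],       # a hand-picked candidate transition matrix
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
print(rate_and_distortion(px, W, levels))
```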
This letter considers a new approach for the lossless progressive compression of light detection and ranging (LiDAR) data stored within a LAS file (a public file format for the interchange of three-dimensional point cloud data), which is used for storing the results of LiDAR scanning. The presented method builds a hierarchical data model that arranges LAS points into different levels in one pass. The higher levels are compressed using variable-length and arithmetic coding, whilst the lower levels apply a prediction model from the non-progressive compression scheme. The order of the points, as captured by the LiDAR scanner, has to be preserved within each level, since better compression ratios are achieved in this way.
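The sketch below illustrates the two ingredients in a simplified form: a one-pass level assignment that preserves capture order within each level, and per-level delta prediction whose residuals are then compressed. The promotion rule and the zlib stage are assumptions standing in for the paper's hierarchy construction and its arithmetic coder.

```python
# Simplified illustration: assign LiDAR points to hierarchy levels in a single
# pass, keep capture order inside each level, predict each point from its
# predecessor in the same level, and compress only the residuals.

import zlib
import numpy as np

def assign_levels(n_points: int, fanout: int = 4, n_levels: int = 3) -> np.ndarray:
    """One-pass level assignment: every fanout**lvl-th point is promoted to level lvl."""
    levels = np.zeros(n_points, dtype=np.int8)
    for lvl in range(1, n_levels):
        levels[:: fanout ** lvl] = lvl
    return levels

def code_level(points: np.ndarray) -> bytes:
    """Delta-predict successive points of one level and compress the residuals."""
    residuals = np.diff(points, axis=0, prepend=0)   # first residual is the point itself
    return zlib.compress(residuals.astype(np.int32).tobytes())

rng = np.random.default_rng(0)
walk = np.cumsum(rng.integers(-3, 4, size=(10_000, 3)), axis=0)   # synthetic scan path
levels = assign_levels(len(walk))
for lvl in range(levels.max() + 1):
    part = walk[levels == lvl]                 # capture order preserved within the level
    print(lvl, len(part), len(code_level(part)), "bytes")
```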