Ensuring secure data transmission is critical for maintaining confidentiality and has become increasingly important. Steganography, which embeds secret information within digital images, has been widely explored to safeguard sensitive data from unauthorized access over public networks. However, existing steganographic algorithms often face a significant trade-off between payload capacity, image quality, and security. Embedding large amounts of data can cause noticeable distortion in image quality, undermining the technique's effectiveness. Furthermore, current methods lack adaptability to diverse cover media and struggle to maintain reversibility and high visual quality under increased embedding capacities. To address these challenges, this study proposes a novel steganographic algorithm integrating two key innovations: (1) enhancement of stego-image quality by segmenting the image into four sub-images, which reduces the distortion caused by concentrated embedding, and (2) optimization of data embedding through Huffman coding, a lossless compression step that minimizes embedding-induced distortion while maximizing payload capacity. The experimental results show that the proposed method achieves high visual fidelity, with PSNR values ranging from 75.793 dB down to 44.997 dB without encryption and from 51.159 dB down to 44.316 dB with encryption for payloads between 10 KB and 100 KB. These values exceed the 30 dB threshold for acceptable image steganography, ensuring minimal perceptual distortion. Additionally, the SSIM remains consistently above 0.98, indicating strong structural preservation of the stego images. Comparative analysis confirms that the proposed approach outperforms existing methods in embedding capacity, structural similarity index measure (SSIM), and peak signal-to-noise ratio (PSNR), reflecting the quality of the stego images.
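As a hedged illustration of the compress-then-embed idea only (not the paper's four-sub-image segmentation scheme), the Python sketch below hides an already Huffman-coded bit string in the least significant bits of a grayscale cover array and reads it back; the names embed_bits, extract_bits, and payload_bits are hypothetical.

    import numpy as np

    def embed_bits(cover: np.ndarray, payload_bits: str) -> np.ndarray:
        # Hide one payload bit in the least significant bit of each pixel.
        flat = cover.flatten().astype(np.uint8)
        if len(payload_bits) > flat.size:
            raise ValueError("payload too large for this cover image")
        for i, b in enumerate(payload_bits):
            flat[i] = (flat[i] & 0xFE) | int(b)   # clear the LSB, then set it
        return flat.reshape(cover.shape)

    def extract_bits(stego: np.ndarray, n_bits: int) -> str:
        # Read the payload back from the first n_bits least significant bits.
        return "".join(str(int(p) & 1) for p in stego.flatten()[:n_bits])

    cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    payload_bits = "1011001110001111"             # e.g. a Huffman-coded payload
    stego = embed_bits(cover, payload_bits)
    assert extract_bits(stego, len(payload_bits)) == payload_bits

Because only the least significant bit of each touched pixel changes, the per-pixel error is at most 1, which is why LSB-style embedding of a well-compressed payload keeps PSNR high.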
With examples, we provide a minimal theoretical framework for understanding data compression of text files using Huffman coding, which also serves as a framework for designing experiments involving encoding/decoding. We propose a parallelizable heuristic for naïve Huffman encoding and decoding that addresses the difficulty of parallelizing the inherently sequential Huffman decoding. While the proposal is amenable to the design of an efficient parallel algorithm for Huffman decoding, it also achieves a better compression ratio in the sense that the fraction of inputs for which it works is over 0.83. The results of simulations of the parallel algorithm on a 64-core machine show that the proposed parallel modified Huffman encoding and decoding yields a faster algorithm than both the naïve Huffman scheme and the sequential version of the proposed heuristic. Further, the parallel implementation of the proposed encoding and decoding schemes achieved mean speed-ups of $O(r\log_{n/r} n)$ and $O(r)$, respectively, over naïve Huffman encoding and decoding when processing an input of size $n$ on a multi-core processor with $r$ cores.
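For readers who want the baseline against which the heuristic and its parallel version are measured, the following is a minimal sequential Huffman encoder/decoder in Python (an illustrative sketch, not the authors' implementation).

    import heapq
    from collections import Counter

    def huffman_code(text: str) -> dict:
        # Heap entries are (weight, tiebreak, {symbol: partial codeword}).
        heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(Counter(text).items())]
        heapq.heapify(heap)
        tie = len(heap)
        while len(heap) > 1:
            w1, _, c1 = heapq.heappop(heap)
            w2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in c1.items()}
            merged.update({s: "1" + c for s, c in c2.items()})
            heapq.heappush(heap, (w1 + w2, tie, merged))
            tie += 1
        return heap[0][2]

    def encode(text: str, code: dict) -> str:
        return "".join(code[s] for s in text)

    def decode(bits: str, code: dict) -> str:
        # Inherently sequential: the next codeword cannot be located
        # before the current one has been fully consumed.
        inverse, out, cur = {c: s for s, c in code.items()}, [], ""
        for b in bits:
            cur += b
            if cur in inverse:
                out.append(inverse[cur])
                cur = ""
        return "".join(out)

    code = huffman_code("abracadabra")
    assert decode(encode("abracadabra", code), code) == "abracadabra"

The decode loop makes the sequential dependency explicit: codeword boundaries are only discovered as decoding progresses, which is exactly the obstacle a parallel decoding heuristic has to work around.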
In many image sequence compression applications, Huffman coding is used to eliminate statistical redundancy in the given data. The Huffman table is often pre-defined to reduce coding delay and table transmission overhead. Local symbol statistics, however, may differ greatly from the global statistics manifested in the pre-defined table. In this paper, we propose three Huffman coding methods in which pre-defined codebooks are effectively manipulated according to local symbol statistics. The first proposed method dynamically modifies the symbol-codeword association without rebuilding the Huffman tree itself. The encoder and decoder maintain identical symbol-codeword associations by performing the same modifications to the Huffman table, thus eliminating extra transmission overhead. The second method adaptively selects, from a set of given codebooks, the one that produces the minimum number of bits. The transmission overhead in this method is the codebook selection information, which is observed to be negligible compared with the bit saving attained. Finally, we combine the two aforementioned methods to further improve compression efficiency. Experiments are carried out on five test image sequences to demonstrate the compression performance of the proposed methods.
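A hedged sketch of the second idea, codebook selection: for each block of symbols the encoder evaluates every pre-defined table, transmits only the index of the cheapest one, and the decoder uses that index to pick the same table. The table contents and function names below are illustrative, not taken from the paper.

    def block_cost(block, table):
        # Total bits needed to code the block with this symbol -> codeword table.
        return sum(len(table[s]) for s in block)

    def select_codebook(block, tables):
        # Returns (index, bitstream); the index is the only transmission overhead.
        best = min(range(len(tables)), key=lambda i: block_cost(block, tables[i]))
        return best, "".join(tables[best][s] for s in block)

    # Two pre-defined prefix-code tables tuned for different local statistics (hypothetical).
    tables = [
        {"a": "0", "b": "10", "c": "11"},   # cheap when 'a' dominates locally
        {"a": "11", "b": "10", "c": "0"},   # cheap when 'c' dominates locally
    ]
    idx, bits = select_codebook("ccacbcc", tables)
    print(idx, len(bits))   # table 1 wins here: 9 bits instead of 13

Selecting per block rather than per symbol keeps the side information (one small index per block) negligible compared with the bits saved, which matches the overhead argument made above.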
In this paper, source coding or data compression is viewed as a measurement problem. Given a measurement device with fewer states than the observable of a stochastic source, how can one capture its essential information? We propose modeling stochastic sources as piecewise-linear discrete chaotic dynamical systems known as Generalized Lüroth Series (GLS), which have their roots in Georg Cantor's work of 1869. These GLS are special maps with the property that their Lyapunov exponent is equal to the Shannon entropy of the source (up to a constant of proportionality). By successively approximating the source with a GLS having fewer states (with the nearest Lyapunov exponent), we derive a binary coding algorithm that turns out to be a rediscovery of Huffman coding, the popular lossless compression algorithm used in the JPEG international standard for still image compression.
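For reference, the identity underlying this view can be written compactly (a standard result restated here, not quoted from the paper): for a GLS whose partition intervals have probabilities $p_1,\dots,p_k$, the Lyapunov exponent measured in bits per iteration equals the Shannon entropy of the source, and Huffman coding attains an average codeword length within one bit of that entropy:

$$ \lambda \;=\; \sum_{i=1}^{k} p_i \log_2 \frac{1}{p_i} \;=\; H(p), \qquad H(p) \;\le\; \bar{L}_{\text{Huffman}} \;<\; H(p) + 1. $$

The constant of proportionality mentioned above is simply the change of logarithm base (natural log in the dynamical-systems convention, base 2 for bits).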
An antenna switch enables multiple antennas to share a common RF chain. It also offers an additional spatial dimension, i.e., the antenna index, that can be utilized for data transmission via both the signal space and the spatial dimension. In this paper, we propose a Huffman coding-based adaptive spatial modulation that generalizes both conventional spatial modulation and transmit antenna selection. Through Huffman coding, i.e., designing variable-length prefix codes, the transmit antennas can be activated with different probabilities. When the input signal is Gaussian distributed, the optimal antenna activation probability is derived by optimizing the channel capacity. To make the optimization tractable, closed-form upper and lower bounds are derived as effective approximations of the channel capacity. When the input is a discrete QAM signal, the optimal antenna activation probability is derived by minimizing the symbol error rate. Numerical results show that the proposed adaptive transmission offers considerable performance improvement over conventional spatial modulation and transmit antenna selection.
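A hedged sketch of how variable-length prefix codes induce unequal activation probabilities: if antenna $i$ is assigned a codeword of length $l_i$ and the input bits are i.i.d. uniform, that antenna is activated with probability $2^{-l_i}$. The code lengths and function below are illustrative, not the paper's optimized design.

    # Prefix code over 4 transmit antennas (hypothetical assignment).
    antenna_code = {0: "0", 1: "10", 2: "110", 3: "111"}

    # With i.i.d. uniform input bits, activation probability is 2^(-codeword length).
    activation_prob = {a: 2.0 ** -len(c) for a, c in antenna_code.items()}
    print(activation_prob)   # {0: 0.5, 1: 0.25, 2: 0.125, 3: 0.125}
    assert abs(sum(activation_prob.values()) - 1.0) < 1e-12   # complete prefix code

    def next_antenna(bitstream: str):
        # Parse the next codeword to decide which antenna the modulator activates.
        cur = ""
        for i, b in enumerate(bitstream):
            cur += b
            for antenna, codeword in antenna_code.items():
                if cur == codeword:
                    return antenna, bitstream[i + 1:]
        raise ValueError("incomplete codeword")

    print(next_antenna("1101001"))   # activates antenna 2, leaves "1001" for later symbols

Choosing the code lengths $l_i$ is therefore equivalent to choosing a dyadic activation probability profile, which is the degree of freedom the capacity and symbol-error-rate optimizations above act on.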
Infrared line-scanning images have high redundancy and large file sizes. In JPEG2000 compression, the MQ arithmetic encoder's complexity slows down processing. Huffman coding can achieve O(1) complexity based on a code table, but its integer-bit encoding mechanism and its disregard for the continuity of the symbol distribution result in suboptimal compression performance. In particular, when encoding sparse quantized wavelet coefficients that contain a large number of consecutive zeros, the inaccuracy of the one-bit shortest code accumulates, reducing compression efficiency. To address this, this paper proposes Huf-RLC, a Huffman-based method enhanced with run-length coding. By leveraging the continuity of zero runs, Huf-RLC optimizes the shortest-code encoding, reducing the average code length to below one bit in sparse distributions. Additionally, this paper proposes a wavelet coefficient probability model to avoid the complexity of computing statistics to construct Huffman code tables for different wavelet subbands. Furthermore, Differential Pulse Code Modulation (DPCM) is introduced to address the remaining spatial redundancy in the low-frequency wavelet subband. The experimental results indicate that the proposed method outperforms JPEG in terms of PSNR and SSIM, while maintaining minimal performance loss compared to JPEG2000. In particular, at low bitrates the proposed method shows only a small gap with JPEG2000, while JPEG suffers from significant blocking artifacts. Additionally, the proposed method achieves compression speeds 3.155 times faster than JPEG2000 and 2.049 times faster than JPEG.
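A hedged sketch of the run-length idea that pushes the average cost of zeros below one bit: runs of consecutive zeros in a sparse quantized sequence are collapsed into (zero-run, length) symbols before Huffman coding, so a long run consumes one codeword instead of one bit per zero. Symbol names are illustrative, not Huf-RLC's actual alphabet.

    def run_length_symbols(coeffs):
        # Collapse zero runs into ("Z", run_length); keep nonzero values as ("V", value).
        symbols, run = [], 0
        for c in coeffs:
            if c == 0:
                run += 1
            else:
                if run:
                    symbols.append(("Z", run))
                    run = 0
                symbols.append(("V", c))
        if run:
            symbols.append(("Z", run))
        return symbols

    coeffs = [5, 0, 0, 0, 0, 0, 0, -2, 0, 0, 0, 1]
    print(run_length_symbols(coeffs))
    # [('V', 5), ('Z', 6), ('V', -2), ('Z', 3), ('V', 1)] -> 5 symbols for 12 coefficients

A Huffman code over these composite symbols can then spend, on average, well under one bit per original zero coefficient, which is the effect exploited for sparse wavelet subbands.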
ISBN (print): 9781424409723
This paper presents a new Huffman coding method based on numeric characters. The traditional 256-entry code table is replaced in this method by the characters 0-9, the space character, and the enter (newline) character. The paper also illustrates that the value of the entropy is closely related to the probability model of the signal; that the same signal has different entropies under different models; and that a probability model which makes the entropy smaller provides more room for compression. Moreover, this approach not only reduces the size of the traditional Huffman code table but also enhances the compression ratio of the image data to some extent. A large number of experiments on random online traffic images indicate that the Huffman code table can be kept below 40 bits, while the coding efficiency can be upwards of 95% and the compression ratio exceeds 60%.
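To make the model-dependence of entropy concrete, the short computation below (illustrative, not from the paper) measures the same character stream once with a uniform model over the 12-symbol alphabet (0-9, space, enter) and once with its empirical per-symbol frequencies; the skewed model yields a lower entropy and hence more room for compression.

    import math
    from collections import Counter

    def entropy(probabilities):
        # Shannon entropy in bits per symbol.
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    data = "11123 11212 31111"                  # hypothetical numeric traffic data
    uniform = [1 / 12] * 12                     # 0-9, space, enter assumed equally likely
    empirical = [n / len(data) for n in Counter(data).values()]

    print(round(entropy(uniform), 3))           # 3.585 bits/symbol
    print(round(entropy(empirical), 3))         # noticeably smaller for this skewed stream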
With the development of information technology, images have become the mainstream medium of information transmission. Compared with text characters, an image carries more information, but because it requires more storage capacity, it occupies more bandwidth in network transmission. In order to transmit image information more quickly, image compression is a good choice. This paper focuses on image compression. The method of image compression in this paper is as follows: first, the image is filtered by a wavelet transform to remove redundant information, and then the Huffman method is used to encode the image. Simulation results on JPEG-format images show that the image size can be reduced while maintaining the same visual quality.
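A hedged sketch of the filter-then-encode pipeline (a simplified one-level Haar example, not the paper's implementation): the transform splits a row of pixels into pairwise averages and differences, and smooth regions produce near-zero differences that a Huffman coder can represent with very short codewords.

    import numpy as np

    def haar_1d(row: np.ndarray):
        # One-level Haar transform: pairwise averages (approximation) and differences (detail).
        row = row.astype(float)
        avg = (row[0::2] + row[1::2]) / 2.0
        det = (row[0::2] - row[1::2]) / 2.0
        return avg, det

    row = np.array([100, 100, 101, 101, 102, 102, 180, 180])
    avg, det = haar_1d(row)
    print(avg)   # [100. 101. 102. 180.]
    print(det)   # [0. 0. 0. 0.]  -> redundant detail collapses to zeros

The many zeros and small values in the detail band are exactly what makes the subsequent Huffman stage effective.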
ISBN (print): 9781467373869
The Huffman coding technique, which is often used in image processing problems, operates on symbols constructed from the zero-run counts of DCT-transformed and quantized images. In this algorithm, the probabilities of the symbols are calculated and a codeword is produced for every symbol. The most probable symbols are represented with fewer bits; hence, the total number of bits transmitted is minimized. Similarly, it is possible to compress a speech signal that has first been digitized. In this study, the bandwidth change is investigated for a waveform-coded, quantized speech signal compressed with the Huffman coding technique.
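As a hedged back-of-the-envelope illustration of the bandwidth question (all numbers assumed, not the paper's measurements): if the quantized speech samples have a strongly peaked histogram, their entropy falls well below the fixed word length, and a Huffman code operating between those two figures reduces the required bit rate accordingly.

    import math

    fs = 8000                                   # sampling rate in Hz (assumed)
    levels = 8                                  # quantizer levels (assumed)
    fixed_bits = math.ceil(math.log2(levels))   # 3 bits/sample with fixed-length coding

    # Hypothetical peaked histogram of the quantized samples (probabilities sum to 1).
    probs = [0.40, 0.20, 0.15, 0.10, 0.06, 0.04, 0.03, 0.02]

    entropy = -sum(p * math.log2(p) for p in probs)   # lower bound in bits/sample

    print(f"fixed-rate bit rate : {fs * fixed_bits} bit/s")
    print(f"entropy lower bound : {fs * entropy:.0f} bit/s")
    # The Huffman-coded bit rate lands between these two figures,
    # which is the bandwidth reduction investigated above.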