In this paper, we present a correlation-aware spread spectrum (CASS) embedding scheme for image data hiding. The basic idea is to exploit the bit information and the correlation between the host signal and the watermark signature code during embedding. We show that this correlation-aware embedding approach can improve watermark decoding performance at the decoder. To reduce the interference observed in CASS and thus improve the decoding performance, we further propose the correlation-aware hybrid spread spectrum (CAHSS) data hiding scheme by incorporating the idea of improved spread spectrum (ISS). Simulation results demonstrate the superior watermark decoding performance of the proposed correlation-aware schemes.
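As a rough illustration of the ideas involved, the sketch below contrasts conventional spread spectrum, ISS-style host-interference cancellation, and one plausible correlation-aware rule. The specific CASS/CAHSS embedding formulas are not given in the abstract, so the `cass_embed` rule here is only an assumed stand-in.

```python
import numpy as np

def ss_embed(x, u, b, alpha):
    """Conventional spread spectrum: add the signature scaled by the bit."""
    return x + alpha * b * u

def iss_embed(x, u, b, alpha, lam=1.0):
    """Improved spread spectrum (ISS): partially cancel the host projection
    onto the signature code to reduce host interference at the decoder."""
    xu = np.dot(x, u) / np.dot(u, u)          # host correlation with the code
    return x + (alpha * b - lam * xu) * u

def cass_embed(x, u, b, alpha):
    """Hypothetical correlation-aware rule (assumption, not the paper's exact
    formula): embed only when the host correlation disagrees with the bit,
    so a host that already 'votes' for b is left untouched."""
    xu = np.dot(x, u)
    if np.sign(xu) == np.sign(b):
        return x.copy()                        # host already favours b
    return x + alpha * b * u

# toy usage: decode by the sign of the correlation with the signature code
rng = np.random.default_rng(0)
x = rng.normal(0, 10, 1024)                    # host (e.g. DCT coefficients)
u = rng.choice([-1.0, 1.0], 1024)              # signature code
b = -1
s = cass_embed(x, u, b, alpha=1.0)
print("decoded bit:", int(np.sign(np.dot(s, u))))
```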
Many applications value an assessment of distorted natural images according to their usefulness, or utility, rather than their perceptual quality. For the quality task, human observers evaluate an image based on its perceptual resemblance to a reference, whereas for the utility task, the usefulness of an image as a surrogate for a reference is under evaluation. This paper presents a novel technique for acquiring perceived utility scores derived from textual descriptions produced by observers viewing images. The technique uses an observer-centric approach, so observers dictate the relevant concepts that characterize image usefulness. This technique is used to collect perceived utility (PU) scores for 150 distorted images that simulate scenes captured by a surveillance system. The capability of both the natural image contour evaluation (NICE) utility estimator, which compares contours of the reference and test images, and of popular quality estimators to estimate PU is reported. The conclusions drawn from the results augment previously reported findings and establish that a multi-scale implementation of NICE (MS-NICE) is the most robust utility estimator among those evaluated, since MS-NICE consistently performs as well as the estimators producing the most accurate perceived utility estimates across the various distortion types.
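The abstract only states that NICE compares contours of the reference and test images; the sketch below is a generic contour-comparison utility estimator in that spirit (Canny edges, dilation tolerance, normalized edge-mismatch count), not the published NICE or MS-NICE definition.

```python
import numpy as np
from skimage.feature import canny
from scipy.ndimage import binary_dilation

def contour_utility(ref, test, sigma=1.0, dilate=2):
    """Illustrative contour-comparison score (assumption: NICE's exact edge
    detector, dilation radius and distance are not given in the abstract).
    Lower scores mean the test contours match the reference more closely."""
    e_ref = canny(ref, sigma=sigma)
    e_tst = canny(test, sigma=sigma)
    # tolerate small spatial shifts by dilating both edge maps
    d_ref = binary_dilation(e_ref, iterations=dilate)
    d_tst = binary_dilation(e_tst, iterations=dilate)
    missed = np.logical_and(e_ref, ~d_tst).sum()    # reference edges not found
    spurious = np.logical_and(e_tst, ~d_ref).sum()  # test edges with no match
    return (missed + spurious) / max(e_ref.sum(), 1)
```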
Assessing the quality of distorted or decompressed images without reference to the original image is difficult due to vagueness in the extracted features and the complex relation between those features and the visual quality of images. This paper aims at assessing the quality of distorted/decompressed images without any reference to the original image by developing an adaptive network-based fuzzy inference system (ANFIS). First-level Haar approximation entropies of test images from the LIVE database and region-based features extracted from the benchmark images are used as inputs, while the mean opinion score (MOS)-based quality of the images is used as the output of the fuzzy inference system (FIS). The input-output variables of the FIS are expressed using linguistic variables and fuzzified to capture the vagueness in the extracted features. The Takagi-Sugeno-Kang (TSK) inference rule is applied to the FIS to predict the quality of a new distorted/decompressed image. The FIS is trained to tune the parameters of the membership functions of the fuzzy sets so that it assesses image quality more accurately. The quality of decompressed images and of test images distorted by various types of noise is predicted using the proposed method, producing output comparable with other existing no-reference techniques. Results are validated against objective and subjective image quality measures.
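As an illustration of one of the described inputs, the snippet below computes a first-level Haar approximation entropy with PyWavelets. The histogram-based entropy estimator and bin count are assumptions, since the abstract does not specify the exact estimator.

```python
import numpy as np
import pywt

def haar_approx_entropy(img, bins=256):
    """First-level Haar approximation entropy of an image (assumption:
    Shannon entropy of a histogram of the LL band)."""
    ll, _ = pywt.dwt2(img.astype(float), 'haar')   # LL and (LH, HL, HH) bands
    hist, _ = np.histogram(ll, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```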
A simple directional extension of the JPEG standard is proposed, which consists in rearranging pixels within 8 × 8 blocks before the DCT-based transform coding, and then restoring the original pixel positions before placing the reconstructed blocks into the decoded image buffer. Pixels are shuffled so as to improve block reconstruction accuracy and/or to decrease the number of bits to be put into a JPEG-compliant bit stream. For numerous pictures, such straightforward adaptation of the signal to the transform gives clear gains in compression efficiency over the original JPEG. Even though more advanced algorithms offer better results, the presented solution requires fewer computations, less memory, and is more hardware-friendly.
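The abstract does not give the actual shuffling patterns, but the following sketch shows the general mechanism: rearrange pixels inside an 8 × 8 block (here a simple cyclic column shift that straightens a diagonal structure), apply the 2-D DCT, and undo the permutation after the inverse transform.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(b): return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def shift_columns(block, step):
    """Illustrative directional rearrangement: cyclically shift each column so
    that a diagonal structure becomes roughly horizontal before the DCT."""
    out = np.empty_like(block)
    for c in range(block.shape[1]):
        out[:, c] = np.roll(block[:, c], -step * c)
    return out

def unshift_columns(block, step):
    """Exact inverse permutation, applied after the inverse DCT."""
    out = np.empty_like(block)
    for c in range(block.shape[1]):
        out[:, c] = np.roll(block[:, c], step * c)
    return out

# encoder side: rearrange, transform; decoder side: inverse transform, restore
block = np.add.outer(np.arange(8), np.arange(8)).astype(float)  # diagonal ramp
coeffs = dct2(shift_columns(block, step=1))
rec = unshift_columns(idct2(coeffs), step=1)
assert np.allclose(rec, block)
```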
We propose 4K digital cinema wireless transmission over a 1.2 Gbps wireless LAN system. The proposed system employs a next-generation wireless LAN based on IEEE 802.11 TGac, using 80 MHz of bandwidth in the 5 GHz band. In this system, the video data are compressed by JPEG 2000 with added error-resilience tools to improve robustness against wireless channel errors. Computer simulations are used to evaluate the influence of the bit-error performance on the video quality.
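One simple way to drive such a simulation is to flip bits of the compressed codestream at a target bit-error rate before decoding, as sketched below. The paper's actual channel model and error-resilience configuration are not described in the abstract.

```python
import numpy as np

def inject_bit_errors(codestream: bytes, ber: float, seed: int = 0) -> bytes:
    """Flip bits in a compressed codestream at a target bit-error rate to
    emulate a noisy wireless channel in simulation (illustrative only)."""
    rng = np.random.default_rng(seed)
    bits = np.unpackbits(np.frombuffer(codestream, dtype=np.uint8))
    flips = rng.random(bits.size) < ber
    return np.packbits(bits ^ flips).tobytes()

# usage: corrupt a JPEG 2000 codestream at BER = 1e-6, decode it, and compare
# the reconstructed frame against the error-free decode (e.g. by PSNR)
```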
This paper presents an IntDCT with only dyadic values such as k/2^n (k, n ∈ N). Although some IntDCTs have been proposed, they are unsuitable for lossless-to-lossy image coding at low coefficient word lengths. First, the proposed M-channel lossless WHT (LWHT) can be constructed with only a (log2 M)-bit word length and has structural regularity. Then, our 8-channel IntDCT keeps good coding performance even at low word lengths because the LWHT, which is the main part of the IntDCT, can be implemented with a 3-bit word length. Finally, our method is validated in lossless-to-lossy image coding.
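A reversible Walsh-Hadamard-type transform can be built entirely from 2-point integer butterflies (S-transform lifting steps), which keeps all internal values dyadic and guarantees exact integer inversion. The sketch below shows such a generic 8-point construction; the paper's specific LWHT lifting structure and word-length bounds are not reproduced here.

```python
def s_fwd(a, b):
    """Reversible 2-point butterfly (S-transform): integers in, integers out."""
    d = a - b
    s = b + (d >> 1)          # equals floor((a + b) / 2)
    return s, d

def s_inv(s, d):
    b = s - (d >> 1)
    return b + d, b           # recovers (a, b) exactly

def lwht8_fwd(x):
    """Generic reversible 8-point WHT-like transform built from S-transform
    butterflies (an assumption standing in for the paper's LWHT)."""
    y = list(map(int, x))
    for h in (1, 2, 4):
        for i in range(0, 8, 2 * h):
            for j in range(i, i + h):
                y[j], y[j + h] = s_fwd(y[j], y[j + h])
    return y

def lwht8_inv(y):
    x = list(map(int, y))
    for h in (4, 2, 1):
        for i in range(0, 8, 2 * h):
            for j in range(i, i + h):
                x[j], x[j + h] = s_inv(x[j], x[j + h])
    return x

x = [17, 3, -4, 9, 0, 22, 5, -8]
assert lwht8_inv(lwht8_fwd(x)) == x   # perfectly reversible in integers
```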
In this paper, a new adaptive rate control algorithm is proposed based on a study of the JPEG2000 rate control algorithm, in order to reduce the number of bits sent to the arithmetic coder without significant changes to the standard architecture and without losing performance. After the wavelet transform, the coefficients are modeled as Gaussian distributed, excluding the descendants of the lowest-frequency subband. Given the overall target rate, the bit rate of each code block is allocated according to rate-distortion theory, and the allocated rate serves as a real-time termination point for the entropy coder. The descendants of the lowest-frequency subband are coded without distortion. The algorithm enforces the bit-rate limits in real time, reducing encoder computation and memory usage. Experimental results show that, when the lowest-frequency coefficients are excluded and only the remaining subband coefficients are modeled as Gaussian, compression performance is significantly better than when the whole set of subbands is modeled as Gaussian, provided the compression ratio is not too high. The rate allocation algorithm is accurate, of low complexity, and well suited to hardware implementation.
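For context, the classical bit-allocation result for independent Gaussian sources under an MSE criterion assigns each block R_i = R_avg + 0.5 * log2(sigma_i^2 / GM), where GM is the geometric mean of the variances, with negative allocations clipped and the budget redistributed (reverse water-filling). The sketch below implements this textbook rule; the paper's exact block-wise formula is not given in the abstract.

```python
import numpy as np

def allocate_rates(variances, total_bits):
    """Classical optimal bit allocation for independent Gaussian sources:
    R_i = R_bar + 0.5*log2(var_i / geometric_mean(var)), iteratively zeroing
    negative allocations and redistributing the budget."""
    var = np.asarray(variances, dtype=float)
    active = np.ones(var.size, dtype=bool)
    rates = np.zeros(var.size)
    while True:
        gm = np.exp(np.mean(np.log(var[active])))      # geometric mean
        r_bar = total_bits / active.sum()              # budget per active block
        r = r_bar + 0.5 * np.log2(var[active] / gm)
        if (r >= 0).all():
            rates[active] = r
            return rates
        active[np.where(active)[0][r < 0]] = False     # drop starved blocks

# usage: allocate_rates([25.0, 9.0, 4.0, 0.25], total_bits=8.0)
```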
Based on a study of the AAC coding standard, an information hiding method operating on the escape sequences of Huffman coding is proposed, which can embed a large amount of secret information into AAC files. The proposed algorithm first unpacks the cover AAC file to search for the escape sequences, and then modifies the least significant bits (LSBs) of the escape sequences using matrix encoding to improve the embedding efficiency. The method has low computational complexity and does not change the length of the AAC codestream. Experimental results reveal that the proposed algorithm achieves a high hidden-data capacity for AAC audio at bitrates of 128 kbps or above; furthermore, it has good imperceptibility and can resist steganalysis to some extent.
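Matrix encoding is the standard way to raise embedding efficiency: with a (1, 2^k - 1, k) code, k message bits are carried by 2^k - 1 cover LSBs while flipping at most one of them. The sketch below shows the generic Hamming-based scheme; how the escape-sequence LSBs are grouped in the proposed method is not specified in the abstract.

```python
import numpy as np

def parity_matrix(k):
    """k x (2**k - 1) parity-check matrix: column i is the binary form of i."""
    n = 2 ** k - 1
    return np.array([[(i >> j) & 1 for i in range(1, n + 1)] for j in range(k)])

def matrix_embed(cover_bits, msg_bits, k=3):
    """Embed k message bits into 2**k - 1 cover LSBs, flipping at most one."""
    H = parity_matrix(k)
    x = np.array(cover_bits[:2 ** k - 1], dtype=np.uint8)
    m = np.array(msg_bits[:k], dtype=np.uint8)
    syndrome = (H @ x) % 2 ^ m                         # what still needs changing
    pos = int(''.join(map(str, syndrome[::-1])), 2)    # 0 means nothing to flip
    if pos:
        x[pos - 1] ^= 1                                # flip a single cover bit
    return x

def matrix_extract(stego_bits, k=3):
    """Recover the k message bits as the syndrome of the stego LSB group."""
    H = parity_matrix(k)
    return (H @ np.array(stego_bits[:2 ** k - 1], dtype=np.uint8)) % 2
```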
In this paper, we propose an image compression/decompression algorithm for almost dual-colour images using k-means clustering, bit-map generation and run-length encoding. The proposed algorithm is simple to implement and fast in encoding. Experimental results show that the compression ratio of this algorithm is better than that of our previous BOBC algorithm, the dynamic BOBC-RLE algorithm and JPEG compression. Image quality (PSNR) is also better than that of the above-mentioned compression techniques. In fact, the algorithm not only compresses the image but also sharpens it and removes colour artefacts that appear at the border between the two colours.
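A minimal sketch of the described pipeline is shown below: cluster the pixel colours into two classes with k-means, turn the labels into a bit-map, and run-length encode it, keeping only the two cluster centres as the palette. The exact bit-map layout and RLE format used in the paper are assumptions here.

```python
import numpy as np

def compress_dual_colour(img):
    """Near two-colour image -> (palette of 2 centres, RLE of the bit-map)."""
    pixels = img.reshape(-1, img.shape[-1]).astype(float)
    centres = pixels[np.random.default_rng(0).choice(len(pixels), 2, replace=False)]
    for _ in range(10):                              # a few Lloyd iterations
        labels = np.argmin(((pixels[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for c in (0, 1):
            if np.any(labels == c):
                centres[c] = pixels[labels == c].mean(axis=0)
    # run-length encode the flattened bit-map
    runs, prev, count = [], labels[0], 0
    for b in labels:
        if b == prev:
            count += 1
        else:
            runs.append((int(prev), count))
            prev, count = b, 1
    runs.append((int(prev), count))
    return centres.astype(np.uint8), runs, img.shape  # assumes 8-bit input

def decompress_dual_colour(centres, runs, shape):
    labels = np.concatenate([np.full(n, v, dtype=np.uint8) for v, n in runs])
    return centres[labels].reshape(shape)
```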
In traditional compression schemes for visual multimedia content, such as video and images, scalable compression is usually achieved by splitting the information into one base layer and several enhancement layers. This principle encounters a problem in the presence of transmission loss: unpredictable loss in the base layer causes a drastic reduction of the information recognizable by human perception. Multiple Description Coding (MDC) is designed to be robust against transmission loss, and several MDC schemes have been proposed as extensions of existing image and video compression standards. In this paper, a new representation of image information based on the principle of MDC is proposed. The image is represented as samples whose density varies according to the local frequency energy. Moreover, the samples can be split into several partitions, ensuring that a recognizable approximation of the original content can be reconstructed from any single partition and that more high-frequency information is recovered as more partitions are received, which fulfils the motivation of MDC. The reconstruction is realized by interpolating the samples using a normalized radial basis function (RBF) network. The results show that the proposed framework for image information splitting provides acceptable quality scalability.
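The reconstruction step can be sketched as normalized RBF interpolation of the scattered samples back onto the pixel grid, as below. The Gaussian kernel and fixed width are illustrative assumptions, since the abstract does not describe how the network is configured or trained.

```python
import numpy as np

def nrbf_reconstruct(coords, values, grid_shape, sigma=2.0):
    """Normalized RBF interpolation of scattered pixel samples onto the grid.
    coords: (N, 2) array of (row, col) sample positions; values: (N,) samples."""
    H, W = grid_shape
    ys, xs = np.mgrid[0:H, 0:W]
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)    # (H*W, 2)
    d2 = ((grid[:, None, :] - coords[None, :, :]) ** 2).sum(-1)        # squared distances
    w = np.exp(-d2 / (2 * sigma ** 2))                                 # RBF activations
    w /= w.sum(axis=1, keepdims=True) + 1e-12                          # normalization
    return (w @ values).reshape(H, W)

# usage sketch: reconstruct an approximation from one partition of samples
# rec = nrbf_reconstruct(sample_coords, sample_values, img.shape)
```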