This paper discusses the basic algorithms for both lossless and lossy image compression. Criteria for the comparative analysis of compression algorithms are selected, and the classes of test images are discussed. The results of implementing and testing the algorithms in the MATLAB environment are shown in diagrams and discussed. Recommendations for the future improvement of image compression algorithms are formulated.
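As a toy illustration of the comparison criteria such an analysis typically relies on (compression ratio and PSNR are assumed here, since the abstract does not name them), the following Python sketch contrasts a generic lossless coder with a simple lossy quantization scheme; it stands in for, and does not reproduce, the MATLAB experiments described above.

```python
import zlib
import numpy as np

def psnr(a, b):
    """Peak signal-to-noise ratio between two 8-bit images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def lossless_ratio(img):
    """Compression ratio achieved by a generic lossless coder (zlib)."""
    raw = img.tobytes()
    return len(raw) / len(zlib.compress(raw, level=9))

def lossy_ratio_and_psnr(img, step=16):
    """Uniform quantization followed by zlib: ratio and reconstruction PSNR."""
    q = (img // step).astype(np.uint8)                              # quantize
    rec = (q.astype(np.int32) * step + step // 2).clip(0, 255)      # dequantize
    ratio = img.nbytes / len(zlib.compress(q.tobytes(), level=9))
    return ratio, psnr(img, rec.astype(np.uint8))

# Synthetic smooth test image (stands in for one class of test images).
x = np.linspace(0, 255, 256)
img = (np.add.outer(x, x) / 2).astype(np.uint8)

print("lossless ratio:", round(lossless_ratio(img), 2))
r, p = lossy_ratio_and_psnr(img, step=16)
print("lossy ratio:", round(r, 2), "PSNR (dB):", round(p, 2))
```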
Several quantization-based watermarking techniques have been used for data hiding, in which a watermark is embedded into the source image by quantizing the image coefficients; on the receiver side these coefficients are dequantized and the watermark is extracted from the watermarked image. In all these techniques, some perceptual degradation of the visual quality of the image is observed after embedding the watermark. In the standard quantization-based embedding method, the quantized values are equally spaced over the range of possible values and are alternately assigned to the message bits "1" and "0". In this scheme, the probabilities of watermark errors caused by malicious tampering and by incidental distortion are both maximized. To enhance security, a lookup table (LUT) is first generated for embedding the binary watermark, in which the quantization intervals are randomly assigned to "1" and "0", and its robustness is then analyzed. The experimental results show that LUT-based embedding is highly robust in comparison to the standard quantization-based embedding method.
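A minimal Python sketch of the LUT idea follows; the step size DELTA, the number of levels, and the key used to seed the table are illustrative assumptions, and the paper's exact LUT construction and error analysis are not reproduced.

```python
import numpy as np

DELTA = 8          # quantization step (assumed; not specified in the abstract)
N_LEVELS = 64      # number of quantization intervals covered by the LUT

def make_lut(key, n=N_LEVELS):
    """Randomly assign bit labels to quantization levels, making sure both
    labels occur so that every bit value can be embedded."""
    rng = np.random.default_rng(key)
    lut = rng.integers(0, 2, size=n)
    lut[0], lut[1] = 0, 1
    return lut

def embed(coeff, bit, lut):
    """Move the coefficient to the nearest quantization level whose
    LUT label matches the watermark bit."""
    levels = np.arange(len(lut)) * DELTA
    allowed = levels[lut == bit]
    return float(allowed[np.argmin(np.abs(allowed - coeff))])

def extract(coeff, lut):
    """Read the label of the nearest quantization level."""
    level = int(round(coeff / DELTA))
    level = min(max(level, 0), len(lut) - 1)
    return int(lut[level])

lut = make_lut(key=1234)
w = embed(137.0, bit=1, lut=lut)
print(w, extract(w, lut))          # exact recovery without distortion
print(extract(w + 3.0, lut))       # may flip under distortion, depending on DELTA
```

In the standard scheme the labels alternate deterministically, so an attacker who knows the step size can flip bits at will; randomizing the label assignment with a keyed LUT is what the abstract credits for the improved robustness and security.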
Digital watermarking techniques are used for digital rights management and copyright protection. A central challenge in watermarking systems is achieving a good trade-off between robustness and imperceptibility. This paper presents a watermarking algorithm in the DCT domain that uses an evolutionary algorithm to satisfy both robustness and imperceptibility. A genetic algorithm selects pairs of DCT coefficients, and a watermark bit is inserted according to a mathematical relation between the selected coefficients in each 8×8 DCT block of the image. The proposed method has been implemented and tested under various attacks, including JPEG compression, additive noise distortion, and image filtering. The results show that the watermarked image remains perceptually unchanged while the watermark survives the attacks, especially JPEG compression.
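To make the embedding rule concrete, here is a hedged Python sketch that enforces an order relation between two DCT coefficients of an 8×8 block; the coefficient pair (P1, P2) and the MARGIN strength are illustrative assumptions, and the genetic-algorithm pair selection described in the abstract is not reproduced.

```python
import numpy as np

N = 8

def dct_matrix(n=N):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] *= np.sqrt(1.0 / n)
    c[1:, :] *= np.sqrt(2.0 / n)
    return c

C = dct_matrix()
dct2  = lambda block: C @ block @ C.T
idct2 = lambda coef:  C.T @ coef @ C

# Hypothetical mid-band coefficient pair; in the paper a GA picks the pairs.
P1, P2 = (3, 2), (2, 3)
MARGIN = 10.0   # enforced gap between the pair (assumed strength parameter)

def embed_bit(block, bit):
    """Force d[P1] > d[P2] for bit 1 and d[P1] < d[P2] for bit 0."""
    d = dct2(block.astype(np.float64) - 128.0)
    a, b = d[P1], d[P2]
    if bit == 1 and a <= b:
        d[P1], d[P2] = max(a, b) + MARGIN / 2, min(a, b) - MARGIN / 2
    if bit == 0 and a >= b:
        d[P1], d[P2] = min(a, b) - MARGIN / 2, max(a, b) + MARGIN / 2
    return np.clip(idct2(d) + 128.0, 0, 255)

def extract_bit(block):
    d = dct2(block.astype(np.float64) - 128.0)
    return int(d[P1] > d[P2])

blk = np.random.default_rng(0).integers(0, 256, size=(8, 8))
print(extract_bit(embed_bit(blk, 1)))   # -> 1
```

In the paper, the fitness of a candidate pair presumably balances the perceptual damage of the swap against its survival under JPEG quantization; the fixed pair above merely shows the relation-based embedding itself.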
In this paper, a blind video watermarking algorithm based on the 3D wavelet transform and a Human Visual System (HVS) model is proposed. The method extracts temporal characteristics of the video signal and uses them to adjust the spatial features of each frame; the message is then embedded into these frames according to the HVS model and is later extracted from the watermarked video in a blind manner. Experimental results verify the robustness of this method against frame swapping, frame dropping, frame averaging, and MJPEG and MPEG-2 compression.
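The following Python sketch, under loose assumptions, shows only two of the ingredients the abstract names: a one-level 3D Haar decomposition of a group of frames and a simple quantization-based embedding in the lowest subband. The HVS-driven adaptation and the reassembly of the watermarked video are omitted.

```python
import numpy as np

def haar_1d(x, axis):
    """One-level Haar transform along one axis (length must be even)."""
    a = x.take(np.arange(0, x.shape[axis], 2), axis=axis)
    b = x.take(np.arange(1, x.shape[axis], 2), axis=axis)
    return (a + b) / 2.0, (a - b) / 2.0      # approximation, detail

def haar_3d(video):
    """One-level 3D Haar: temporal axis first, then the two spatial axes."""
    lo_t, hi_t = haar_1d(video, axis=0)
    lo_ty, hi_ty = haar_1d(lo_t, axis=1)
    lll, llh = haar_1d(lo_ty, axis=2)
    return lll, (llh, hi_ty, hi_t)

DELTA = 4.0   # quantization step for embedding (assumed)

def embed_bits(lll, bits):
    """Quantization-index embedding of a bit string into the LLL subband."""
    flat = lll.reshape(-1).copy()
    for i, bit in enumerate(bits):
        q = np.floor(flat[i] / DELTA)
        if int(q) % 2 != bit:
            q += 1
        flat[i] = q * DELTA + DELTA / 2
    return flat.reshape(lll.shape)

def extract_bits(lll, n):
    flat = lll.reshape(-1)
    return [int(np.floor(flat[i] / DELTA)) % 2 for i in range(n)]

video = np.random.default_rng(0).uniform(0, 255, size=(8, 16, 16))
lll, _ = haar_3d(video)
marked = embed_bits(lll, [1, 0, 1, 1])
print(extract_bits(marked, 4))   # -> [1, 0, 1, 1]
```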
In the last decade, an important research effort has been dedicated to quality assessment from both subjective and objective points of view. The focus has mainly been on Full Reference (FR) metrics because of their ability to compare against an original. Only a few works have addressed Reduced Reference (RR) or No Reference (NR) metrics, which are very useful for applications where the original image is not available, such as transmission or monitoring. In this work, we propose an RR metric based on two concepts: the interest points of the image and object saliency in color images. This metric needs a very small amount of side information (fewer than 8 bytes) to compute the quality scores. The results show a high correlation between the metric scores and human judgment, and a wider quality range than well-known metrics such as PSNR or SSIM. Finally, interest points are shown to be able to predict the quality of compressed color images.
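A toy Python sketch of a reduced-reference signature built from interest points follows; the Harris detector, the two-field 8-byte layout, and the score are assumptions for illustration and are not the authors' metric, which also exploits object saliency on color images.

```python
import numpy as np
from scipy import ndimage

def harris_count(img, k=0.04, rel_thresh=0.01):
    """Number of Harris interest points above a relative threshold."""
    f = img.astype(np.float64)
    ix = ndimage.sobel(f, axis=1)
    iy = ndimage.sobel(f, axis=0)
    sxx = ndimage.uniform_filter(ix * ix, size=5)
    syy = ndimage.uniform_filter(iy * iy, size=5)
    sxy = ndimage.uniform_filter(ix * iy, size=5)
    r = sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2
    m = r.max()
    return 0 if m <= 0 else int(np.count_nonzero(r > rel_thresh * m))

def rr_signature(img):
    """8-byte reduced-reference signature: interest-point count and
    coarsely quantized mean luminance, 4 bytes each."""
    return np.array([harris_count(img), int(img.mean() * 256)],
                    dtype=np.uint32).tobytes()

def rr_score(sig_ref, degraded):
    """Relative loss of interest points: 0 = identical, 1 = all lost."""
    ref_count, _ = np.frombuffer(sig_ref, dtype=np.uint32)
    deg_count = harris_count(degraded)
    return abs(int(ref_count) - deg_count) / max(int(ref_count), 1)

rng = np.random.default_rng(1)
ref = rng.uniform(0, 255, size=(64, 64))
degraded = ndimage.gaussian_filter(ref, sigma=2)   # stands in for compression
sig = rr_signature(ref)
print(len(sig), "bytes, score:", round(rr_score(sig, degraded), 3))
```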
We investigate the impact of level-1 cache (CL1) parameters, level-2 cache (CL2) parameters, and cache organizations on the power consumption and performance of multi-core systems. We simulate two 4-core architectures, both with private CL1s, but one with a shared CL2 and the other with private CL2s. Simulation results with MPEG4, H.264, matrix inversion, and DFT workloads show that reductions in total power consumption and mean delay per task of up to 42% and 48%, respectively, are possible with optimized CL1s and CL2s. Total power consumption and mean delay per task depend significantly on the application, including its code size and locality.
Autonomous Underwater Vehicles (AUVs) often communicate with scientists on the surface over an unreliable acoustic channel. The challenges of operating in deep waters, over long distances, and with surface ship noise amount to a communication channel with a very low effective bandwidth. This restriction makes transmission of images, even highly compressed images, quite difficult. We present an image compression algorithm designed to convey the gist of an image to surface operators in a very small number of bytes. Our technique divides a large existing database of underwater images into "tiles", and uses these to reconstruct an approximation to new underwater images from a similar domain. We achieve significantly higher compression ratios than conventional image compression techniques, such as JPEG or SPIHT, while still being able to provide useful visual feedback to the surface.
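A hedged Python sketch of the tile-codebook idea follows, with an assumed tile size and a randomly sampled codebook standing in for the curated underwater-image database; the paper's actual tile matching and ordering are not reproduced.

```python
import numpy as np

TILE = 8   # tile size in pixels (assumed; the paper's tile size is not given)

def tiles(img, t=TILE):
    """Split an image whose sides are multiples of t into t x t tiles."""
    h, w = img.shape
    return (img.reshape(h // t, t, w // t, t)
               .swapaxes(1, 2)
               .reshape(-1, t * t)
               .astype(np.float64))

def build_codebook(database_images, n_codes=256, seed=0):
    """Codebook = a random sample of tiles taken from the image database."""
    pool = np.vstack([tiles(im) for im in database_images])
    idx = np.random.default_rng(seed).choice(len(pool), n_codes, replace=False)
    return pool[idx]

def encode(img, codebook):
    """Each tile is replaced by the index of its nearest codebook tile."""
    t = tiles(img)
    d = ((t[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1).astype(np.uint8)      # 1 byte per tile

def decode(indices, codebook, shape, t=TILE):
    h, w = shape
    grid = codebook[indices].reshape(h // t, w // t, t, t)
    return grid.swapaxes(1, 2).reshape(h, w)

rng = np.random.default_rng(2)
database = [rng.uniform(0, 255, size=(64, 64)) for _ in range(4)]
new_img = rng.uniform(0, 255, size=(64, 64))
cb = build_codebook(database)
code = encode(new_img, cb)
approx = decode(code, cb, new_img.shape)
print(code.nbytes, "bytes instead of", new_img.astype(np.uint8).nbytes)
```

Because only the tile indices cross the acoustic link, the achievable ratio is governed by the tile size and codebook size rather than by the image content, which is what makes the gist-level feedback possible at such low bit rates.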
ISBN (Print): 9781424459278; 9780769539690
The main issue in Digital Terrestrial Television and in IPTV networks is the Quality of Experience perceived by the end users. For this reason, mechanisms are needed to automatically measure the video quality of the received images. In this paper we analyze video quantization in order to determine an optimal quantizer_scale factor value for transmission, which is then used as an automatic measure to improve the video quality received by the end user. The paper reports measurements of Video Quality, of the Video Quality Metric, and of the bandwidth consumed for several types of video quality. Finally, we show a visual comparison between a high quantizer_scale factor and a reference video. Our work shows that an optimal quantizer_scale factor can be used to save bandwidth in an IPTV network or to improve the Video Quality for the same bandwidth consumption.
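The trade-off can be illustrated with a small Python sketch that quantizes block DCT coefficients at several quantizer_scale values and reports a crude bit-cost proxy together with PSNR; real MPEG rate control, entropy coding, and the Video Quality Metric are not modeled here.

```python
import numpy as np
from scipy.fft import dctn, idctn

def blocks(img, b=8):
    h, w = img.shape
    return img.reshape(h // b, b, w // b, b).swapaxes(1, 2)

def psnr(a, b):
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def encode_measure(img, quantizer_scale):
    """Quantize block DCT coefficients; return a crude bit-cost proxy
    (count of nonzero quantized coefficients) and the resulting PSNR."""
    f = blocks(img.astype(np.float64) - 128.0)
    coef = dctn(f, axes=(-2, -1), norm="ortho")
    q = np.round(coef / quantizer_scale)
    rec = idctn(q * quantizer_scale, axes=(-2, -1), norm="ortho") + 128.0
    rec_img = rec.swapaxes(1, 2).reshape(img.shape)
    return int(np.count_nonzero(q)), psnr(img.astype(np.float64), rec_img)

frame = np.random.default_rng(3).uniform(0, 255, size=(64, 64))
for scale in (2, 4, 8, 16, 31):
    nz, quality = encode_measure(frame, scale)
    print(f"quantizer_scale={scale:2d}  nonzero coeffs={nz:5d}  PSNR={quality:5.1f} dB")
```

Larger quantizer_scale values zero out more coefficients (less bandwidth) at the cost of PSNR, which is the curve from which an optimal operating point for a given IPTV channel would be read off.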
ISBN (Print): 9781424463886; 9780769539874
To authenticate image content while tolerating lossy compression, a semi-fragile image watermarking algorithm based on contours is proposed in this paper. The Y component of the original image is first divided into 4×4 blocks on which a 2-level DWT is performed; a filtered contour image derived from the Canny edge detector is then used as the image feature to generate a watermark. The Arnold transform is applied to the watermark image to destroy spatial correlation. Watermark embedding is realized by changing the relationship between selected middle-frequency DWT coefficients according to the corresponding watermark bit. The difference between the recomputed contour image and the extracted watermark image is used to authenticate the image content. There is no perceptible degradation of the original image. Experiments show that the scheme meets the requirements of image content authentication and that no acceptable JPEG compression is rejected.
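A Python sketch of the two building blocks the abstract names explicitly follows: the Arnold scrambling of a square binary watermark and the order-relation embedding on a selected coefficient pair, with an assumed MARGIN strength. The Y-channel blocking, 2-level DWT, and Canny contour feature are not reproduced.

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold (cat map) scrambling of a square N x N watermark:
    (x, y) -> (x + y, x + 2y) mod N."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def arnold_inverse(img, iterations=1):
    """Undo the scrambling by applying the inverse map the same number of times."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        unscrambled = np.empty_like(out)
        unscrambled[x, y] = out[(x + y) % n, (x + 2 * y) % n]
        out = unscrambled
    return out

MARGIN = 6.0   # enforced gap between the two coefficients (assumed strength)

def embed_bit(c1, c2, bit):
    """Force an order relation between two middle-frequency DWT coefficients."""
    hi, lo = max(c1, c2) + MARGIN / 2, min(c1, c2) - MARGIN / 2
    return (hi, lo) if bit == 1 else (lo, hi)

def extract_bit(c1, c2):
    return int(c1 > c2)

wm = (np.random.default_rng(4).random((32, 32)) > 0.5).astype(np.uint8)
scrambled = arnold(wm, iterations=5)
assert np.array_equal(arnold_inverse(scrambled, iterations=5), wm)
print(extract_bit(*embed_bit(3.1, 4.7, 1)))   # -> 1
```

The relation survives moderate JPEG quantization as long as MARGIN exceeds the quantization noise on the two coefficients, which is what makes the scheme semi-fragile rather than fragile.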
Most work on steganalysis, with a few exceptions, has primarily focused on providing features with high discrimination power without giving due consideration to issues concerning the practical deployment of steganalysis methods. In this work, we focus on the machine-learning aspect of steganalyzer design and use a hierarchical ensemble of classifiers to tackle two main issues. First, the proposed approach provides a workable and systematic procedure for combining several steganalyzers into a composite steganalyzer that improves detection performance in a scalable and cost-effective manner. Second, since the approach can be readily extended to multi-class classification, it can also be used to infer the steganographic technique deployed to generate a stego-object. We provide results that demonstrate the potential of the proposed approach.
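A hedged sketch of a two-level ensemble in Python with scikit-learn follows, using synthetic features and hypothetical feature groups in place of real steganalysis features; it shows the composite-steganalyzer idea and the multi-class extension, not the authors' exact procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Synthetic stand-in for steganalysis features: three feature groups
# (e.g. DCT-, wavelet-, and spatial-domain features) with labels
# 0 = cover, 1 = stego algorithm A, 2 = stego algorithm B.
n = 600
X = rng.normal(size=(n, 30))
y = rng.integers(0, 3, size=n)
X[y == 1, :10] += 0.8          # algorithm A mostly shifts the first group
X[y == 2, 10:20] += 0.8        # algorithm B mostly shifts the second group
groups = [slice(0, 10), slice(10, 20), slice(20, 30)]

train, test = np.arange(0, 400), np.arange(400, n)

# Level 1: one binary cover-vs-stego steganalyzer per feature group.
level1 = []
for g in groups:
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train][:, g], (y[train] > 0).astype(int))
    level1.append((g, clf))

def level1_scores(X_part):
    """Stack the soft outputs of the base steganalyzers."""
    return np.column_stack([clf.predict_proba(X_part[:, g])[:, 1]
                            for g, clf in level1])

# Level 2: a multi-class combiner that both detects stego content and
# infers which embedding algorithm produced it.
combiner = LogisticRegression(max_iter=1000)
combiner.fit(level1_scores(X[train]), y[train])

pred = combiner.predict(level1_scores(X[test]))
print("multi-class accuracy:", round(float(np.mean(pred == y[test])), 3))
```

New base steganalyzers can be added by appending to the first level and refitting only the small second-level combiner, which is the scalable, cost-effective composition the abstract emphasizes.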