Digital image forensics seeks to detect statistical traces left by image acquisition or post-processing in order to establish an image's source and authenticity. Digital cameras acquire an image with a single sensor overlaid with a color filter array (CFA), capturing at each spatial location one sample of the three necessary color channels. The missing pixels must be interpolated in a process known as demosaicking. This process is highly nonlinear and can vary greatly between different camera brands and models. Most practical algorithms, however, introduce correlations between the color channels, and these correlations often differ between algorithms. In this paper, we show how these correlations can be used to construct a characteristic map that is useful in matching an image to its source. Results show that our method employing inter-channel traces can distinguish between sophisticated demosaicking algorithms. It can complement existing classifiers based on inter-pixel correlations by providing a new feature dimension.
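As a rough illustration of how inter-channel traces can be turned into a characteristic map, the sketch below correlates the green channel with the red and blue channels over small blocks. The block size, the choice of correlating green against red and blue, and the function name are assumptions made here for illustration, not the feature construction used in the paper.

```python
import numpy as np

def interchannel_correlation_map(rgb, block=16):
    """Toy characteristic map built from inter-channel correlations.

    For each non-overlapping block, correlate the green channel with the
    red and blue channels; different demosaicking algorithms tend to leave
    different inter-channel correlation patterns. Illustrative sketch only.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    h, w = g.shape
    hb, wb = h // block, w // block
    cmap = np.zeros((hb, wb, 2))
    for i in range(hb):
        for j in range(wb):
            sl = np.s_[i * block:(i + 1) * block, j * block:(j + 1) * block]
            gv, rv, bv = g[sl].ravel(), r[sl].ravel(), b[sl].ravel()
            if gv.std() > 0 and rv.std() > 0:
                cmap[i, j, 0] = np.corrcoef(gv, rv)[0, 1]   # G-R correlation
            if gv.std() > 0 and bv.std() > 0:
                cmap[i, j, 1] = np.corrcoef(gv, bv)[0, 1]   # G-B correlation
    return cmap  # shape (H/block, W/block, 2)
```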
Nowadays, digital television offers a wide variety of content to viewers. All this content and its associated information are sent to the users' decoders, but the way this information is managed does not allow a suitable organization of the data. It is necessary to define more powerful data structures that support new and useful functionalities for the viewer. The main objective of this research is to define the specification of an ontology-based content system that optimizes the way the viewer accesses the data. First, the system reads the information contained in the transport stream. Then, it arranges and stores all the data using ontologies, adding information from other sources when necessary. Finally, it allows the user to perform searches and offers recommendations according to the viewer's preferences.
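A minimal sketch of the storage-and-search steps, assuming a toy RDF vocabulary (the `epg:` namespace, its properties, and the sample programme are invented here; the paper's ontology is not specified): programme metadata extracted from the transport stream is stored as triples with rdflib and queried with SPARQL.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical vocabulary for illustration only.
EPG = Namespace("http://example.org/epg#")

g = Graph()
g.bind("epg", EPG)

# Arrange and store programme data extracted from the transport stream.
prog = EPG["programme/123"]
g.add((prog, RDF.type, EPG.Programme))
g.add((prog, EPG.title, Literal("Evening News")))
g.add((prog, EPG.genre, Literal("news")))

# Let the viewer search, e.g. all programmes of a preferred genre.
q = """
PREFIX epg: <http://example.org/epg#>
SELECT ?title WHERE { ?p a epg:Programme ; epg:genre "news" ; epg:title ?title . }
"""
for row in g.query(q):
    print(row.title)
```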
Lossy compression of hyperspectral and ultraspectral images is traditionally performed using 3D transform coding. This approach yields good performance, but its complexity and memory requirements make it unsuitable for onboard compression. In this paper we propose a low-complexity lossy compression scheme based on prediction, quantization and rate-distortion optimization. The scheme employs coset codes coupled with the new concept of “informed quantization”, and requires no entropy coding. The performance of the resulting algorithm is competitive with that of state-of-the-art 3D transform coding schemes, but its complexity is far lower, making it suitable for onboard compression at high throughputs.
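To make the prediction-plus-quantization idea concrete, here is a minimal sketch of a band-by-band predictive quantizer with an in-loop reconstruction; it deliberately omits the coset codes, informed quantization and rate-distortion optimization that the paper actually relies on, and the function name and quantization step are assumptions.

```python
import numpy as np

def predictive_quantize(cube, step=4):
    """Toy spectral-prediction + uniform-quantization loop.

    Each band is predicted from the previously reconstructed band, and only
    the quantized residual is kept (what a real coder would transmit).
    cube: array of shape (bands, height, width).
    """
    bands, h, w = cube.shape
    recon = np.zeros(cube.shape, dtype=float)
    indices = []
    prev = np.zeros((h, w))
    for z in range(bands):
        residual = cube[z].astype(float) - prev
        q = np.round(residual / step).astype(int)   # quantizer indices for band z
        indices.append(q)
        recon[z] = prev + q * step                  # in-loop reconstruction
        prev = recon[z]
    return indices, recon
```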
In this paper, a new forensic marking algorithm is proposed that traces illegal distributors at each distribution step by embedding a holographic forensic mark into multiple DCT-SVD domains. The algorithm can embed high-capacity information using an off-axis hologram and ensures robustness through the DCT-SVD domain. It also achieves sufficient payload for multiple distribution steps and robustly survives various attacks such as rotation, additive noise, and compression.
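A minimal sketch of embedding a single bit into one block through a DCT-SVD pair is given below; the block size, the strength parameter `alpha`, and the rule of offsetting the largest singular value are illustrative assumptions, and the paper's off-axis holographic payload spread over multiple DCT-SVD domains is not reproduced.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit_dct_svd(block, bit, alpha=5.0):
    """Embed one watermark bit into a small image block via DCT + SVD.

    The largest singular value of the block's DCT coefficients is nudged
    up or down according to the bit; the block is then transformed back.
    Illustration only, not the paper's embedding rule.
    """
    coeffs = dctn(block.astype(float), norm="ortho")
    u, s, vt = np.linalg.svd(coeffs)
    s[0] += alpha if bit else -alpha          # offset the dominant singular value
    marked = u @ np.diag(s) @ vt
    return idctn(marked, norm="ortho")
```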
In this paper, we present a new method to statistically recover the full 3D shape of a face from a set of sparse feature points. We attribute noise in the feature point positions to generalisation error of the model. We learn the variance of these feature points empirically using out-of-sample data. This allows the shape reconstruction to probabilistically model the way in which feature points deviate from their true position. We are able to reduce the reconstruction error by as much as 12%.
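One way to realize this kind of probabilistic reconstruction is a weighted MAP fit of shape-model coefficients, where each observed feature point is down-weighted by its learned noise variance. The sketch below assumes a linear (PCA-style) shape model with the listed argument names; it is an interpretation under those assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def fit_shape_coefficients(mean, basis, eigvals, obs_idx, obs, obs_var):
    """MAP estimate of shape-model coefficients from noisy sparse points.

    mean    : (3N,) mean shape vector
    basis   : (3N, K) principal components
    eigvals : (K,) model variances acting as a prior on the coefficients
    obs_idx : indices of the observed coordinates
    obs     : observed coordinate values
    obs_var : per-observation noise variance (learned out-of-sample in the paper)
    """
    P = basis[obs_idx]                        # model rows for observed coordinates
    r = obs - mean[obs_idx]
    W = np.diag(1.0 / obs_var)                # down-weight unreliable feature points
    A = P.T @ W @ P + np.diag(1.0 / eigvals)  # data term + Gaussian prior
    b = np.linalg.solve(A, P.T @ W @ r)
    return mean + basis @ b                   # reconstructed full 3D shape
```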
ISBN (Print): 9781424456956
H.264/AVC achieves better coding efficiency and visual quality than previous video coding standards such as H.262, MPEG-1, MPEG-2, H.263 and MPEG-4 Part 2 at the same bit rate. To achieve this coding efficiency, H.264/AVC uses the rate-distortion optimization (RDO) technique. Computational complexity increases because the RDO calculation is performed for all possible intra prediction modes. To reduce this complexity, a number of fast algorithms for inter/intra prediction have been proposed; one approach is to minimize the number of candidate modes through preprocessing. In this paper, we propose a fast intra prediction mode decision method that exploits the high spatial correlation in video sequences. The experimental results show that encoding time savings of up to 59% can be achieved with negligible PSNR (Peak Signal-to-Noise Ratio) drop and a slight increase in bit rate compared with the H.264/AVC joint model (JM) reference software.
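A minimal sketch of the candidate-mode reduction idea, assuming H.264 4x4 intra modes and that the left and upper neighbours' chosen modes are available; the specific candidate rule and the function names are illustrative, not the decision rule proposed in the paper.

```python
def candidate_intra_modes(mode_left, mode_up, num_modes=9):
    """Reduced candidate set for 4x4 intra mode decision.

    Spatially adjacent blocks are highly correlated, so the modes chosen
    for the left and upper neighbours (plus DC) are tried instead of all
    nine modes. Illustrative preprocessing step only.
    """
    DC = 2  # DC prediction mode index in H.264
    candidates = {DC}
    for m in (mode_left, mode_up):
        if m is not None and 0 <= m < num_modes:
            candidates.add(m)
    return sorted(candidates)

def decide_mode(rd_cost, mode_left, mode_up):
    """Pick the candidate with the lowest RD cost (rd_cost: mode -> cost)."""
    return min(candidate_intra_modes(mode_left, mode_up), key=rd_cost)
```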
ISBN (Print): 9781424465972; 9780769540115
Watermark resistance to geometric attacks is a crucial issue in watermark system design. Cropping and the recently reported random bending attacks (RBAs) remain the Achilles' heel of most existing watermarking schemes. This paper presents a geometric-attack-resistant zero-watermarking scheme for color images, based on a geometrically invariant image representation: the mean-based 2D color histogram, extracted from two different color components by reference to the mean value of the effective pixels of each component. Experimental results demonstrate that the proposed method performs satisfactorily under various geometric attacks and common image processing operations, including affine transformations, cropping, RBAs, additive noise, filtering, and JPEG compression.
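To show the flavor of a mean-based 2D color histogram, the sketch below histograms two color components relative to their means over "effective" pixels; the definition of effective pixels, the bin count, and the normalization are assumptions made here for illustration and differ from the paper's exact construction.

```python
import numpy as np

def mean_based_2d_histogram(c1, c2, bins=16):
    """Toy mean-based 2D color histogram over two color components.

    Only pixels where both components exceed their means are counted
    (a stand-in for 'effective' pixels), with values expressed relative
    to the component means so the feature ignores pixel positions.
    """
    c1 = c1.astype(float).ravel()
    c2 = c2.astype(float).ravel()
    m1, m2 = c1.mean(), c2.mean()
    mask = (c1 >= m1) & (c2 >= m2)                      # effective pixels
    hist, _, _ = np.histogram2d(c1[mask] - m1, c2[mask] - m2, bins=bins)
    return hist / max(hist.sum(), 1)                    # normalized histogram
```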
To improve the security of the plus-minus-1 algorithm used in JPEG steganography (J-PM1), a new method is proposed based on adaptive flipping probability estimation. First, the least-squares matching (LSM) method is introduced to calculate the flipping probability of each coefficient by minimizing the difference between the original distribution (i.e. the histogram) and that of the stego image. Second, a plus-1 or minus-1 (PM1) operation is performed on the host signal according to the flipping probabilities, making the embedded bits considerably less perceptible. Experimental results show that fewer changes to the coefficient distributions occur compared with J-PM1, with an overall reduction in histogram distortion of around 58.8%; the method also resists chi-square and cropping analysis more successfully than the J-PM1 and F5 algorithms.
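A minimal sketch of the PM1 embedding step guided by per-coefficient flipping probabilities is shown below; how those probabilities are obtained (least-squares histogram matching in the paper) is not implemented, and details such as skipping zero-valued coefficients and the zero-avoidance rule are simplifying assumptions.

```python
import numpy as np

def pm1_embed(coeffs, bits, flip_prob, rng=None):
    """Toy ±1 embedding guided by per-coefficient flipping probabilities.

    When a coefficient's LSB must change to carry a bit, +1 or -1 is chosen
    according to flip_prob[k] (probability of choosing +1). coeffs is an
    integer array of usable DCT coefficients.
    """
    rng = rng or np.random.default_rng()
    out = coeffs.copy()
    usable = np.flatnonzero(coeffs != 0)            # skip zero coefficients
    for k, bit in zip(usable, bits):
        if (out[k] & 1) != bit:                     # LSB mismatch -> modify
            step = 1 if rng.random() < flip_prob[k] else -1
            out[k] += step
            if out[k] == 0:                         # avoid creating new zeros
                out[k] += 2 * step
    return out
```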
In this paper, we propose a novel method to weight features according to their relevance to the given classification problem. The weight of each feature is computed using its Localized Generalization Error Model (L-GEM). A Radial Basis Function Neural Network (RBFNN) is then trained on the weighted features. Experimental results on an image classification problem show that the proposed method is efficient and effective in comparison with current methods.
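The sketch below shows the overall pipeline: scale each feature by a relevance weight and train a small RBF network on the weighted inputs. The weights are taken as given (the L-GEM computation is not implemented), and the random-center, least-squares RBF training is a simple stand-in for the paper's RBFNN, under the stated assumptions.

```python
import numpy as np

def train_rbf_network(X, y, weights, n_centers=20, gamma=1.0, rng=None):
    """Train a simple RBF network on feature-weighted inputs.

    weights : per-feature relevance scores (computed via L-GEM in the paper).
    Centers are random training samples; output weights are fit by least
    squares as a linear output layer.
    """
    rng = rng or np.random.default_rng(0)
    Xw = X * weights                                    # emphasize relevant features
    centers = Xw[rng.choice(len(Xw), n_centers, replace=False)]
    d = ((Xw[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-gamma * d)                            # Gaussian hidden activations
    W, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # linear output layer
    return centers, W

def rbf_predict(X, weights, centers, W, gamma=1.0):
    d = ((X * weights)[:, None, :] - centers[None, :, :]) ** 2
    return np.exp(-gamma * d.sum(-1)) @ W
```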
Wavelet transforms are widely used in image compression because of their good characteristics. The algorithm discussed in this paper employs the MATLAB wavelet toolbox. The original image is first de-noised using the toolbox functions; the image is then quantized and coded. Simulation results show that the algorithm reconstructs images with excellent quality, and the study demonstrates its value for image compression.
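The paper uses the MATLAB wavelet toolbox directly; as a rough Python analogue of the de-noise, quantize and reconstruct steps, here is a sketch using PyWavelets, with the wavelet, decomposition level, threshold and quantization step chosen arbitrarily for illustration.

```python
import numpy as np
import pywt

def wavelet_compress(img, wavelet="bior4.4", level=3, threshold=10.0, step=8.0):
    """Threshold (de-noise), quantize and reconstruct an image with wavelets.

    Small coefficients are suppressed by soft thresholding, the remainder
    are uniformly quantized (what an entropy coder would store), and the
    image is reconstructed for quality evaluation.
    """
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr = pywt.threshold(arr, threshold, mode="soft")   # de-noising / sparsification
    q = np.round(arr / step)                            # quantizer indices
    recon_coeffs = pywt.array_to_coeffs(q * step, slices, output_format="wavedec2")
    return pywt.waverec2(recon_coeffs, wavelet)
```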