As the dimensionality of steganalysis features increases rapidly, ensemble steganalysis has become the trend, and its performance is greatly influenced by the selection of feature subspaces. In order to select feature subspaces more effectively and thereby improve ensemble steganalysis, a feature subspace selection algorithm based on the Fisher criterion is proposed. The algorithm computes a weight for each feature component from its Fisher criterion value and a base probability value, then selects feature components with probabilities proportional to their weights. When the algorithm is used to improve ensemble steganalysis, the appropriate base probability value is found by a stepwise search. Experimental results show that, for J-UNIWARD (JPEG UNIversal WAvelet Relative Distortion) steganography, the proposed feature subspace selection algorithm selects more effective feature subspaces and enhances the detection performance of the GFR (Gabor Filter Residual) feature.
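A minimal sketch of the weighted selection idea described in this abstract, assuming a standard per-dimension Fisher score and a simple additive weighting; the function names, the exact normalization, and the toy feature dimensions are illustrative, not the paper's actual formulas:

```python
import numpy as np

def fisher_scores(cover_feats, stego_feats):
    """Per-dimension Fisher criterion: (mean difference)^2 / (sum of variances)."""
    mu_c, mu_s = cover_feats.mean(axis=0), stego_feats.mean(axis=0)
    var_c, var_s = cover_feats.var(axis=0), stego_feats.var(axis=0)
    return (mu_c - mu_s) ** 2 / (var_c + var_s + 1e-12)

def select_subspace(cover_feats, stego_feats, base_prob=0.1, subspace_dim=1000,
                    rng=np.random.default_rng(0)):
    """Draw a feature subspace with selection probability proportional to
    base_prob plus the normalized Fisher score of each component (assumed form)."""
    scores = fisher_scores(cover_feats, stego_feats)
    weights = base_prob + scores / scores.sum()
    probs = weights / weights.sum()
    dim = cover_feats.shape[1]
    return rng.choice(dim, size=subspace_dim, replace=False, p=probs)

# Toy example: synthetic high-dimensional features for 200 cover / 200 stego images.
rng = np.random.default_rng(1)
cover = rng.normal(size=(200, 17000))
stego = cover + rng.normal(scale=0.05, size=cover.shape)
idx = select_subspace(cover, stego, base_prob=0.1, subspace_dim=1000)
print(idx[:10])
```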
We propose a novel lossy block-based image compression approach. Our approach builds on non-linear autoencoders that can, when properly trained, exploit non-linear statistical dependencies in the image blocks for redundancy reduction. In contrast, the DCT employed in JPEG is inherently restricted to exploiting linear dependencies within a second-order statistics framework. The coder is based on pre-trained class-specific Restricted Boltzmann Machines (RBMs), statistical variants of neural-network autoencoders that directly map pixel values in image blocks into coded bits. Decoders can be implemented with low computational complexity in a codebook design. Experimental results show that our RBM codec outperforms JPEG at high compression rates in terms of PSNR, SSIM, and subjective results.
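A rough sketch of the encode/decode data flow with a binary-hidden RBM whose hidden activations serve as coded bits; the weights here are random placeholders for the paper's class-specific pre-trained machines, and the 0.5 thresholding rule is an assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBMBlockCoder:
    """Toy block coder: hidden unit activations act as the coded bits,
    the transposed weights act as a codebook-style decoder."""
    def __init__(self, block_dim=64, n_bits=16, rng=np.random.default_rng(0)):
        # In the paper these parameters come from class-specific training;
        # random weights here only illustrate the mapping.
        self.W = rng.normal(scale=0.1, size=(block_dim, n_bits))
        self.b_hid = np.zeros(n_bits)
        self.b_vis = np.zeros(block_dim)

    def encode(self, block):
        """Map a flattened 8x8 block (values in [0,1]) to binary code bits."""
        return (sigmoid(block @ self.W + self.b_hid) > 0.5).astype(np.uint8)

    def decode(self, bits):
        """Reconstruct pixel values from the code bits."""
        return sigmoid(bits @ self.W.T + self.b_vis)

coder = RBMBlockCoder()
block = np.random.default_rng(1).random(64)   # one 8x8 block, flattened
bits = coder.encode(block)
recon = coder.decode(bits)
print(bits.sum(), float(np.mean((block - recon) ** 2)))
```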
The problem of optimal bit rate allocation for 3-D JPEG2000 compression is discussed. In this paper we consider data in which the 2-D slices are compressed using JPEG2000 and the third dimension is decorrelated using the Karhunen-Loeve transform. Two new methods are proposed. The first, called the Rate Distortion Optimal (RDO) method, is based on the Post-Compression Rate-Distortion (PCRD) optimization concept. The second, here called the Mixed Model (MM) approach, extends the traditional high-resolution model with a region that is accurate at low bit rates. The proposed bit allocation methods are tested on Meteorological (Met) data; the specific data set used was generated by the Battlescale Forecast Model (BFM). The test results show that these approaches significantly reduce the computational complexity.
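A small PCRD-style Lagrangian allocation sketch, assuming each slice exposes a convex set of (rate, distortion) truncation points and the total rate budget is met by bisecting on the multiplier; the paper's exact RDO and Mixed Model formulas are not reproduced here:

```python
import numpy as np

def allocate_rate(rd_points, total_rate, lam_lo=1e-6, lam_hi=1e6, iters=60):
    """Pick one (rate, distortion) truncation point per slice by bisecting on
    the Lagrange multiplier lambda until the summed rate meets the budget.
    rd_points: list of arrays with columns (rate, distortion)."""
    def pick(lam):
        return [int(np.argmin(pts[:, 1] + lam * pts[:, 0]))  # minimize D + lambda*R
                for pts in rd_points]

    for _ in range(iters):
        lam = np.sqrt(lam_lo * lam_hi)            # bisection in the log domain
        idx = pick(lam)
        rate = sum(rd_points[i][k, 0] for i, k in enumerate(idx))
        if rate > total_rate:
            lam_lo = lam                          # too many bits: increase lambda
        else:
            lam_hi = lam
    return pick(np.sqrt(lam_lo * lam_hi))

# Two toy slices with (rate, distortion) truncation points.
rd = [np.array([[0.0, 100.0], [1.0, 40.0], [2.0, 15.0], [3.0, 5.0]]),
      np.array([[0.0, 80.0],  [1.0, 50.0], [2.0, 30.0], [3.0, 20.0]])]
print(allocate_rate(rd, total_rate=3.0))
```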
The description of natural phenomena from different Earth Observation (EO) activity domains involves complex processes that require a solid understanding of the phenomena, the syntactic and semantic description of the proposed solutions, the collection of experimental data, and the analysis and interpretation of the results. Such a use case scenario is modeled as a collection of operators that are able to generate a valid output in a finite amount of time from a range of input data sets. This paper aims at identifying the main EO data processing types and providing a set of basic operators that represent the core of the KEOPS (Kernel Operators) system. Research is ongoing into the best way of integrating this system within the BigEarth platform, but the main idea is to use KEOPS as a plugin that can fit within any EO-related platform that processes spatial data. One main advantage of the KEOPS system is that its core set of operators can easily be extended with new operators that meet the needs of complex use case scenarios.
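A minimal sketch of a pluggable operator interface of the kind the abstract describes: each operator maps input data sets to a valid output, and a registry lets a host platform discover new operators. The class names, registry, and the NDVI example are illustrative, not the actual KEOPS API:

```python
from abc import ABC, abstractmethod

class Operator(ABC):
    """Base contract for an EO processing operator: given input datasets,
    produce a valid output in a finite amount of time."""
    name = "base"

    @abstractmethod
    def run(self, inputs):
        ...

OPERATORS = {}

def register(op_cls):
    """Register an operator so a host platform can discover it as a plugin."""
    OPERATORS[op_cls.name] = op_cls
    return op_cls

@register
class NDVI(Operator):
    """Toy band-arithmetic operator: normalized difference of two bands."""
    name = "ndvi"

    def run(self, inputs):
        red, nir = inputs["red"], inputs["nir"]
        return [(n - r) / (n + r + 1e-9) for r, n in zip(red, nir)]

result = OPERATORS["ndvi"]().run({"red": [0.2, 0.3], "nir": [0.6, 0.7]})
print(result)
```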
We propose an approach to unsupervised segmentation of moving video objects (VOs) in the MPEG compressed domain. The proposed algorithm exploits the homogeneity of spatiotemporally localized VO information (macroblock motion vectors (MVs) and DCT DC coefficients) to achieve segmentation with an accuracy of the 8×8 DCT block size. First, macroblock MVs are used to identify the locations of moving VOs. DC coefficients are then exploited to achieve finer boundary segmentation. For both objectives, a maximum entropy fuzzy clustering algorithm is proposed to classify the MVs and DC coefficients, respectively, into homogeneous regions. Experimental results show that the developed algorithm can accurately segment VOs at 8×8 DCT block accuracy without any user intervention.
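One common entropy-regularized (Gibbs-form) fuzzy clustering update that matches the description; the temperature parameter beta and the exact update rules are assumptions about the paper's formulation, and the data here are synthetic motion vectors:

```python
import numpy as np

def max_entropy_fuzzy_cluster(x, k=2, beta=5.0, iters=50, rng=np.random.default_rng(0)):
    """Entropy-regularized fuzzy clustering: memberships are a softmax of
    negative squared distances, centers are membership-weighted means.
    x: (n_samples, n_dims) array, e.g. macroblock MVs or DC coefficients."""
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # (n, k)
        u = np.exp(-beta * d2)
        u /= u.sum(axis=1, keepdims=True)            # maximum-entropy memberships
        centers = (u.T @ x) / u.sum(axis=0)[:, None]  # weighted mean update
    return u.argmax(axis=1), centers

# Toy example: separate motion vectors of a moving object from the background.
rng = np.random.default_rng(1)
mvs = np.vstack([rng.normal(0, 0.2, (100, 2)), rng.normal(3, 0.2, (50, 2))])
labels, centers = max_entropy_fuzzy_cluster(mvs, k=2)
print(centers)
```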
In this paper, an image format named Virtual Character Animation Image (VCAI) is presented to provide an efficient representation for humanoid motion data. By mapping the VCA motion information onto 2-D images, both the correlation between joints of the skeletal avatar and the temporal coherence within the motion data are reflected as spatial correlation of an image, which aids compression. Since the VCA is encoded as an image, standard image processing tools and image delivery techniques can now be applied. Lastly, a modified motion filter (MMF) is proposed to minimize the visual discontinuity in the VCA's motion caused by quantization and transmission noise at high compression rates. The MMF removes high-frequency noise components and smooths the motion signal, yielding a perceptually improved VCA with reduced distortion. Simulation results demonstrate the effectiveness of the proposed scheme, showing only minor degradation of VCA quality, measured both by an objective error metric and by perceptual loss, for highly compressed motion streams.
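The modified motion filter is described only qualitatively in the abstract; the sketch below uses a plain moving-average low-pass filter on each joint channel as a stand-in for the MMF, with a window length chosen arbitrarily:

```python
import numpy as np

def smooth_motion(channels, window=5):
    """Low-pass each joint channel of a decoded motion signal to suppress
    high-frequency quantization/transmission noise.
    channels: (n_frames, n_joint_channels) array of joint rotation values."""
    kernel = np.ones(window) / window
    pad = window // 2
    out = np.empty_like(channels, dtype=float)
    for j in range(channels.shape[1]):
        # reflect-pad so the filtered signal keeps the original length
        padded = np.pad(channels[:, j], pad, mode="reflect")
        out[:, j] = np.convolve(padded, kernel, mode="valid")
    return out

# Toy example: one noisy joint-angle trajectory over 100 frames.
t = np.linspace(0, 2 * np.pi, 100)
noisy = np.sin(t)[:, None] + np.random.default_rng(0).normal(0, 0.1, (100, 1))
print(smooth_motion(noisy).shape)   # (100, 1)
```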
Sample-by-sample DPCM (SbS DPCM) is an important prediction technique for H.264/AVC lossless intra compression. In this paper, we propose a new prediction method that is more efficient than conventional SbS DPCM, thereby improving overall compression performance. The proposed method prepares five partition patterns for each 4×4 block: 4×4 (no partition), 4×2, 2×4, 2×2, and 1×1. The pixels in each partition are intra predicted by SbS DPCM, and the partition that produces the minimum number of bits is selected as the partition pattern for the 4×4 block. The number of available intra prediction directions is also determined by the partition pattern, to avoid transmitting too much side information. Experimental results show that the proposed method gives a 3.62 percentage-point bit rate saving on average, and 4.74 percentage points at maximum, compared with conventional SbS DPCM.
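A simplified sketch of the partition search: each candidate partition is DPCM-predicted independently and the sum of absolute residuals is used as a stand-in for the actual coded bit count. Real SbS DPCM predicts across partition borders from already-coded neighbors, so this is only an illustration of the selection logic:

```python
import numpy as np

def dpcm_cost(block):
    """Sample-by-sample DPCM inside one partition: predict each pixel from its
    left neighbor (top neighbor in the first column) and sum |residuals|."""
    pred = np.zeros_like(block, dtype=float)
    pred[:, 1:] = block[:, :-1]           # left neighbor
    pred[1:, 0] = block[:-1, 0]           # top neighbor for the first column
    return np.abs(block - pred).sum()

def best_partition(block4x4):
    """Try the five partition patterns of a 4x4 block and return the one whose
    DPCM residual cost (a proxy for coded bits) is smallest."""
    patterns = {"4x4": (4, 4), "4x2": (4, 2), "2x4": (2, 4),
                "2x2": (2, 2), "1x1": (1, 1)}
    costs = {}
    for name, (h, w) in patterns.items():
        c = 0.0
        for r in range(0, 4, h):
            for s in range(0, 4, w):
                c += dpcm_cost(block4x4[r:r + h, s:s + w])
        costs[name] = c
    return min(costs, key=costs.get), costs

block = np.array([[10, 12, 13, 13],
                  [11, 12, 14, 15],
                  [11, 13, 14, 16],
                  [12, 13, 15, 16]], dtype=float)
print(best_partition(block))
```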
Growing demand for networked video applications has led to increased interest in VBR traffic, since VBR traffic allows statistical multiplexing to conserve bandwidth. However, compared with CBR traffic, transporting VBR traffic poses challenges in guaranteeing QoS and efficiently utilizing resources. Modeling VBR traffic aids in estimating QoS metrics, evaluating scheduling algorithms, and determining statistical multiplexing gains. While most VBR traffic models have focused on high-bandwidth traffic, we propose a traffic model for H.263 encoded video sequences, appropriate for low-bit-rate video. The traffic model uses a multi-state Markov chain representing a pseudo-histogram of observed bit rates. To reduce correlation between multiplexed sequences and enhance multiplexer performance, each stream is deterministically smoothed over one frame length.
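A compact sketch of fitting such a chain to an observed per-frame bit-count trace and generating synthetic traffic from it; the number of states, the quantile-based quantization, and the smoothing prior on the transition matrix are assumptions, not the paper's exact construction:

```python
import numpy as np

def fit_markov_traffic_model(frame_bits, n_states=8):
    """Quantize observed per-frame bit counts into states (a pseudo-histogram)
    and estimate the state transition matrix from consecutive frames."""
    edges = np.quantile(frame_bits, np.linspace(0, 1, n_states + 1)[1:-1])
    states = np.digitize(frame_bits, edges)
    trans = np.full((n_states, n_states), 1e-6)       # tiny prior avoids zero rows
    for a, b in zip(states[:-1], states[1:]):
        trans[a, b] += 1
    trans /= trans.sum(axis=1, keepdims=True)
    levels = np.array([frame_bits[states == s].mean() for s in range(n_states)])
    return trans, levels

def generate(trans, levels, n_frames, rng=np.random.default_rng(0)):
    """Generate a synthetic frame-size trace by walking the Markov chain."""
    s, out = 0, []
    for _ in range(n_frames):
        out.append(levels[s])
        s = rng.choice(len(levels), p=trans[s])
    return np.array(out)

# Toy low-bit-rate-like trace: mostly small frames with occasional larger ones.
rng = np.random.default_rng(1)
trace = rng.gamma(shape=2.0, scale=800.0, size=2000)
trans, levels = fit_markov_traffic_model(trace)
print(generate(trans, levels, 10))
```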
ISBN: (Print) 0780374029
Shape (object) description is a key part of image content description in MPEG-7. Most existing shape descriptors are either application dependent or non-robust, making them undesirable for generic shape description. In this paper, an Enhanced Generic Fourier Descriptor (EGFD) is presented to overcome the drawbacks of existing shape representation techniques. The EGFD builds on our previously proposed Generic Fourier Descriptor (GFD): it is acquired by deriving the GFD from the rotation- and scale-normalized shape. Experimental results show that the proposed EGFD significantly outperforms the GFD and Zernike moment descriptors (ZMD).
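A bare-bones sketch of a GFD-style descriptor (polar raster sampling about the shape centroid followed by a 2-D Fourier transform and magnitude normalization); the sampling resolution and number of retained coefficients are arbitrary, and the EGFD's specific rotation and scale normalization steps are not reproduced here:

```python
import numpy as np

def generic_fourier_descriptor(shape_img, n_radial=4, n_angular=12):
    """Resample a binary shape image on a polar raster centred at its centroid,
    take the 2-D DFT, and keep normalized low-frequency magnitudes."""
    ys, xs = np.nonzero(shape_img)
    cy, cx = ys.mean(), xs.mean()
    max_r = np.sqrt(((ys - cy) ** 2 + (xs - cx) ** 2).max()) + 1e-9

    radii = np.linspace(0, max_r, 32)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    polar = np.zeros((len(radii), len(thetas)))
    for i, r in enumerate(radii):
        yy = np.clip((cy + r * np.sin(thetas)).round().astype(int), 0, shape_img.shape[0] - 1)
        xx = np.clip((cx + r * np.cos(thetas)).round().astype(int), 0, shape_img.shape[1] - 1)
        polar[i] = shape_img[yy, xx]

    spec = np.abs(np.fft.fft2(polar))
    feat = spec[:n_radial, :n_angular].ravel()
    return feat / (feat[0] + 1e-9)        # normalize by the DC magnitude

# Toy shape: a filled disc in a 64x64 image.
yy, xx = np.mgrid[:64, :64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float)
print(generic_fourier_descriptor(disc)[:5])
```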