We present an intra coding method that is applicable to depth map coding in multiview plus depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intra skip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intra prediction divides the current block into two regions and applies a different prediction scheme to each segmented region. This method avoids incorrect estimation across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/Advanced Video Coding (AVC) intra prediction and can improve the subjective rendering quality. (C) 2015 Society of Photo-Optical Instrumentation Engineers (SPIE)
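To make the two-region idea concrete, here is a minimal Python sketch of a plane segmentation-based intra predictor. It assumes a simple mean-threshold segmentation derived from the reconstructed neighbors and a per-region DC fill; the function name and the segmentation rule are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def bi_segment_intra_predict(top, left):
    """Illustrative two-region depth intra predictor (not the paper's exact scheme).

    `top` and `left` are the reconstructed neighboring row and column of an
    N x N block. Because only causal neighbors are used, the encoder and the
    decoder can derive the same segmentation, so this toy predictor needs no
    side information or residual.
    """
    top = np.asarray(top, dtype=np.float64)
    left = np.asarray(left, dtype=np.float64)
    neighbors = np.concatenate([top, left])
    thr = neighbors.mean()

    # One DC value per region, taken from the neighbors on each side of the threshold.
    lo = neighbors[neighbors <= thr]
    hi = neighbors[neighbors > thr]
    dc_lo = lo.mean() if lo.size else thr
    dc_hi = hi.mean() if hi.size else thr

    # Assign each pixel to a region by averaging its column-top and row-left
    # neighbors and comparing against the threshold, then fill per region.
    base = (top[np.newaxis, :] + left[:, np.newaxis]) / 2.0
    return np.where(base > thr, dc_hi, dc_lo)

# Example: neighbors suggesting a vertical depth discontinuity in a 4x4 block.
print(bi_segment_intra_predict([10, 10, 200, 200], [10, 10, 10, 10]))
```

Predicting each region with its own value keeps sharp depth edges intact, which is why per-region prediction can improve rendering quality compared with a single directional predictor smeared across the discontinuity.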
We present a complexity reduction scheme for the depth map intra prediction of three-dimensional high-efficiency video coding (3D-HEVC). 3D-HEVC introduces a new set of tools specific to depth map coding, adding complexity to intra prediction and creating new challenges for complexity reduction. We therefore present DMMFast (depth modeling modes fast prediction), a scheme composed of two new algorithms: the simplified edge detector (SED) and the gradient-based mode one filter (GMOF). The SED anticipates the blocks that are likely to be better predicted by the traditional intra modes, avoiding the evaluation of the DMMs. The GMOF applies a gradient-based filter to the borders of the block and predicts the best positions at which to evaluate DMM 1. Software evaluations showed that DMMFast achieves a time saving of 11.9% on depth map intra prediction in the random access configuration, without affecting the quality of the synthesized views. In the all-intra configuration, the proposed scheme achieves, on average, a time saving of 35% for the whole encoder. A subjective quality assessment was also performed, showing that the proposed technique introduces minimal quality losses in the final encoded video. (C) 2015 SPIE and IS&T
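The two algorithms are easy to approximate in a few lines. The Python sketch below shows the flavor of SED (skip DMM evaluation when a few sampled points of the block show little spread) and GMOF (use border gradients to shortlist wedgelet end points for DMM 1); the sampling pattern, threshold, and candidate count are assumptions for illustration, not the values used by DMMFast.

```python
import numpy as np

def sed_skip_dmms(block, threshold=10):
    """SED-style check (illustrative): sample the corners and the center of a
    depth block and skip the costly DMM evaluation when their spread is small,
    i.e. when the block is likely smooth enough for conventional intra modes."""
    b = np.asarray(block, dtype=np.int64)
    h, w = b.shape
    samples = np.array([b[0, 0], b[0, w - 1], b[h - 1, 0],
                        b[h - 1, w - 1], b[h // 2, w // 2]])
    return int(samples.max() - samples.min()) < threshold  # True -> skip DMMs

def gmof_candidates(block, num_candidates=4):
    """GMOF-style shortlist (illustrative): compute absolute gradients along the
    block borders and return the positions with the strongest responses as
    candidate wedgelet end points, so only a few start/end pairs are tested."""
    b = np.asarray(block, dtype=np.float64)
    # Walk the four borders clockwise into one 1-D contour.
    contour = np.concatenate([b[0, :], b[1:, -1], b[-1, -2::-1], b[-2:0:-1, 0]])
    grad = np.abs(np.diff(contour))
    return np.sort(np.argsort(grad)[-num_candidates:])  # indices along the contour
```

Avoiding the wedgelet pattern search for smooth blocks is where the time saving comes from, since DMM 1 otherwise tests many candidate wedgelets per block.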
The view synthesis prediction (VSP) method exploits inter-view correlations by generating an additional reference frame in multiview video coding. This paper describes a multiview depth video coding scheme that incorporates depth view synthesis and additional prediction modes. In the proposed scheme, we exploit the reconstructed neighboring depth frame to generate an additional reference depth image for the current viewpoint to be coded, using the depth image-based rendering (DIBR) technique. To generate high-quality reference depth images, we use depth pre-processing, depth image warping, and two types of hole filling, depending on the number of available reference views. After synthesizing the additional depth image, we encode the depth video using the proposed additional prediction modes, named VSP modes, which refer to the synthesized depth image. In particular, the VSP_SKIP mode refers to the co-located block of the synthesized frame without coding motion vectors or residual data, which provides most of the coding gain. Experimental results demonstrate that the proposed depth view synthesis method provides high-quality depth images for the current view and that the proposed VSP modes provide high coding gains, especially on anchor frames. (C) 2011 Society of Photo-Optical Instrumentation Engineers (SPIE). [DOI: 10.1117/1.3600575]
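As a rough illustration of the view synthesis step and the VSP_SKIP idea, the Python sketch below warps a reference depth map to the current viewpoint using a per-pixel horizontal disparity (assuming rectified views) and fills disocclusion holes by propagating the last valid sample; the paper's depth pre-processing and its two hole-filling variants are not reproduced, and the function names are illustrative.

```python
import numpy as np

def synthesize_reference_depth(ref_depth, disparity):
    """Toy depth-image-based rendering (DIBR) warp for rectified views.

    Each depth sample of the reference view is shifted horizontally by its
    disparity to form a synthesized reference for the current view; on
    collisions the nearer sample (larger depth value) wins, and disocclusion
    holes are filled with the nearest valid sample to the left."""
    ref = np.asarray(ref_depth, dtype=np.int64)
    disp = np.asarray(disparity, dtype=np.int64)
    h, w = ref.shape
    synth = np.full((h, w), -1, dtype=np.int64)  # -1 marks holes
    for y in range(h):
        for x in range(w):
            xt = x + disp[y, x]                  # target column in the current view
            if 0 <= xt < w:
                synth[y, xt] = max(synth[y, xt], ref[y, x])
        last = 0                                 # simple left-to-right hole filling
        for x in range(w):
            if synth[y, x] < 0:
                synth[y, x] = last
            else:
                last = synth[y, x]
    return synth

def vsp_skip_predict(synth_depth, y0, x0, block_size):
    """VSP_SKIP-style prediction: copy the co-located block of the synthesized
    frame, so neither motion vectors nor residual data need to be coded."""
    return synth_depth[y0:y0 + block_size, x0:x0 + block_size]
```

Because the synthesized frame is already aligned with the current view, the co-located copy needs no motion search, which helps explain why the gains are largest on anchor frames, where no temporal reference is available.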