arXiv

Spatial-Temporal Transformer based Video Compression Framework

Authors: Gao, Yanbo; Huang, Wenjia; Li, Shuai; Yuan, Hui; Ye, Mao; Ma, Siwei

Affiliations: School of Software, Shandong University, Jinan, China; School of Control Science and Engineering, Shandong University, Jinan, China; School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China; National Engineering Research Center of Visual Technology, School of Computer Science, Peking University, Beijing, China

Published in: arXiv

Year: 2023


Subject: Image compression

Abstract: Learned video compression (LVC) has witnessed remarkable advancements in recent years. Similar to traditional video coding, LVC inherits motion estimation/compensation, residual coding, and other modules, all of which are implemented with neural networks (NNs). However, within the NN framework and its gradient-backpropagation training mechanism, most existing works struggle to consistently generate stable motion information, which takes the form of geometric features, from the input color features. Moreover, modules such as inter-prediction and residual coding are independent of each other, making it inefficient to fully reduce the spatial-temporal redundancy. To address these problems, this paper proposes a novel Spatial-Temporal Transformer based Video Compression (STT-VC) framework. It contains a Relaxed Deformable Transformer (RDT) with Uformer-based offset estimation for motion estimation and compensation, a Multi-Granularity Prediction (MGP) module based on multi-reference frames for prediction refinement, and a Spatial Feature Distribution prior based Transformer (SFD-T) for efficient temporal-spatial joint residual compression. Specifically, RDT is developed to stably estimate the motion information between frames by thoroughly investigating the relationship between similarity-based geometric motion feature extraction and self-attention. MGP is designed to fuse the multi-reference frame information by effectively exploring the coarse-grained prediction feature generated with the coded motion information. SFD-T compresses the residual information by jointly exploring the spatial feature distributions in both the residual and the temporal prediction to further reduce the spatial-temporal redundancy. Experimental results demonstrate that the method achieves the best result, with a 13.5% BD-Rate saving over VTM and a 68.7% BD-Rate saving over the baseline without the proposed modules. An ablation study validates the effectiveness of each proposed module.
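
Illustrative note: the abstract describes a three-stage pipeline (RDT motion estimation/compensation, MGP multi-reference refinement, SFD-T residual coding). The following is a minimal PyTorch-style sketch of how such a pipeline could be wired together; all class names, layer choices, and shapes are assumptions for illustration only and are not the authors' implementation (the paper's RDT uses deformable attention with Uformer-based offset estimation, and SFD-T is a transformer-based residual coder, both simplified to plain convolutions here).

    # Sketch only, assuming standard PyTorch; not the authors' code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RDTMotionSketch(nn.Module):
        """Stand-in for the Relaxed Deformable Transformer: estimate offsets, then warp the reference features."""
        def __init__(self, ch=64):
            super().__init__()
            # The paper uses a Uformer-based offset estimator; a single conv stands in here.
            self.offset_net = nn.Conv2d(ch * 2, 2, kernel_size=3, padding=1)

        def forward(self, cur_feat, ref_feat):
            offsets = self.offset_net(torch.cat([cur_feat, ref_feat], dim=1))
            n, _, h, w = ref_feat.shape
            ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
            grid = torch.stack([xs, ys], dim=-1).expand(n, h, w, 2) + offsets.permute(0, 2, 3, 1)
            # Similarity-based deformable attention in the paper; plain grid-sample warping here.
            pred = F.grid_sample(ref_feat, grid, align_corners=True)
            return pred

    class STTVCSketch(nn.Module):
        """High-level pipeline: RDT motion -> MGP multi-reference fusion -> SFD-T-style residual coding."""
        def __init__(self, ch=64):
            super().__init__()
            self.encode = nn.Conv2d(3, ch, 3, padding=1)
            self.rdt = RDTMotionSketch(ch)
            self.mgp = nn.Conv2d(ch * 2, ch, 3, padding=1)    # fuses predictions from two reference frames
            self.sfd_t = nn.Conv2d(ch * 2, ch, 3, padding=1)  # residual coding conditioned on the prediction
            self.decode = nn.Conv2d(ch, 3, 3, padding=1)

        def forward(self, cur, ref_frames):
            cur_f = self.encode(cur)
            preds = [self.rdt(cur_f, self.encode(r)) for r in ref_frames[:2]]
            pred = self.mgp(torch.cat(preds, dim=1))                  # multi-granularity fusion (simplified)
            residual = cur_f - pred
            coded = self.sfd_t(torch.cat([residual, pred], dim=1))    # joint spatial-temporal residual (simplified)
            return self.decode(pred + coded)

    # Usage: reconstruct a frame from two (hypothetical) reference frames.
    cur = torch.rand(1, 3, 64, 64)
    refs = [torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)]
    print(STTVCSketch()(cur, refs).shape)  # torch.Size([1, 3, 64, 64])

This sketch only shows the data flow; it omits the entropy coding, quantization, and rate-distortion training that an actual LVC system requires.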
