TechRxiv

Deep Multi-Resolution Mutual Learning for Image Inpainting

Authors: Zheng, Huan; Zhang, Zhao; Zhang, Haijun; Yang, Yi; Yan, Shuicheng; Wang, Meng

Affiliations: School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230601, China; Intelligent Interconnected Systems Laboratory of Anhui Province, Hefei University of Technology, Hefei 230601, China; Shenzhen, China; Centre for Artificial Intelligence, University of Technology Sydney, Sydney, NSW, Australia; National University of Singapore, Singapore 117583, Singapore

Publication: TechRxiv

Year: 2021


Keywords: Textures

Abstract: Deep image inpainting methods have greatly improved inpainting performance thanks to the powerful representation ability of deep learning. However, current deep inpainting models still tend to produce unreasonable structures and blurry textures due to the ill-posed nature of the task, i.e., image inpainting remains a challenging topic. In this paper, we therefore propose a novel deep multi-resolution mutual learning (DMRML) strategy for progressive inpainting, which can fully explore the information from various resolutions. Specifically, we design a new image inpainting network, termed multi-resolution mutual network (MRM-Net), which takes damaged images at various resolutions as input, then excavates and exploits the correlation among different resolutions to guide the inpainting process. Technically, MRM-Net introduces two new modules, called multi-resolution information interaction (MRII) and adaptive content enhancement (ACE). MRII aims at discovering the correlation among multiple resolutions and exchanging information, while ACE focuses on enhancing the contents using the interacted features. We also present a memory preservation mechanism (MPM) to prevent information loss as the number of layers increases. Extensive experiments on the Paris Street View, Places2, and CelebA-HQ datasets demonstrate that our proposed MRM-Net can effectively recover textures and structures, and performs favorably against other state-of-the-art methods. © CC BY.
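The cross-resolution information exchange the abstract attributes to MRII can be sketched roughly as follows. This is a toy NumPy illustration, not the paper's implementation: the function names are assumptions, and the simple average fusion stands in for the learned convolutional fusion that an actual MRII module would perform.

```python
import numpy as np

def downsample(img, factor):
    # Average pooling by an integer factor (toy stand-in for strided convolution).
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    # Nearest-neighbor upsampling (toy stand-in for learned upsampling).
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def mrii_exchange(feats):
    """Exchange information across three resolution branches.

    feats: [full, half, quarter] feature maps. Each branch is refined with
    resampled features from the other two; plain averaging replaces the
    learned fusion used in the actual MRII module.
    """
    full, half, quarter = feats
    full_new = (full + upsample(half, 2) + upsample(quarter, 4)) / 3
    half_new = (half + downsample(full, 2) + upsample(quarter, 2)) / 3
    quarter_new = (quarter + downsample(full, 4) + downsample(half, 2)) / 3
    return [full_new, half_new, quarter_new]

# A toy "damaged image" at three resolutions.
img = np.random.rand(8, 8)
feats = [img, downsample(img, 2), downsample(img, 4)]
out = mrii_exchange(feats)
print([f.shape for f in out])  # each branch keeps its own resolution
```

Each branch retains its own spatial resolution after the exchange, which is what lets the network repeat such interaction blocks progressively, as the DMRML strategy describes.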
