
High-Fidelity Image Inpainting with GAN Inversion

Authors: Yu, Yongsheng; Zhang, Libo; Fan, Heng; Luo, Tiejian

Affiliations: Institute of Software, Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China; Nanjing Institute of Software Technology, China; Department of Computer Science and Engineering, University of North Texas, United States

Published in: arXiv

Year: 2022

Subject: Modulation

Abstract: Image inpainting aims to recover a corrupted image in a way that is semantically consistent with its unmasked content. Previous approaches usually reuse a well-trained GAN as an effective prior, generating realistic patches for missing holes via GAN inversion. Nevertheless, ignoring a hard constraint in these algorithms can create a gap between GAN inversion and image inpainting. Addressing this problem, in this paper we devise a novel GAN inversion model for image inpainting, dubbed InvertFill, mainly consisting of an encoder with a pre-modulation module and a GAN generator with an F&W+ latent space. Within the encoder, the pre-modulation network leverages multi-scale structures to encode more discriminative semantics into style vectors. To bridge the gap between GAN inversion and image inpainting, the F&W+ latent space is proposed to eliminate glaring color discrepancies and semantic inconsistencies. To reconstruct faithful and photorealistic images, a simple yet effective Soft-update Mean Latent module is designed to capture more diverse in-domain patterns, synthesizing high-fidelity textures for large corruptions. Comprehensive experiments on four challenging datasets, including Places2, CelebA-HQ, MetFaces, and Scenery, demonstrate that InvertFill outperforms advanced approaches both qualitatively and quantitatively, and supports the completion of out-of-domain images well. Copyright © 2022, The Authors. All rights reserved.
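The "Soft-update Mean Latent" mentioned in the abstract suggests maintaining a running mean of in-domain latent codes that is updated gradually rather than fixed. The paper's exact formulation is not given in this record; the sketch below is only a hypothetical illustration, assuming an exponential-moving-average style update with an assumed `momentum` parameter and NumPy arrays standing in for latent codes.

```python
import numpy as np

def soft_update_mean_latent(mean_latent, batch_latents, momentum=0.999):
    """Softly update a running mean latent vector toward a batch of
    freshly sampled latent codes (hypothetical EMA-style sketch).

    mean_latent   : (d,)  current running mean of in-domain latent codes
    batch_latents : (n, d) latent codes sampled at the current step
    momentum      : fraction of the old mean to keep (assumed parameter)
    """
    batch_mean = batch_latents.mean(axis=0)
    # Blend the old mean with the new batch mean; a momentum near 1
    # makes the running mean drift slowly, capturing diverse patterns
    # without being dominated by any single batch.
    return momentum * mean_latent + (1.0 - momentum) * batch_mean

# Toy usage: the running mean slowly tracks the sampling distribution.
rng = np.random.default_rng(0)
mean = np.zeros(8)
for _ in range(100):
    samples = rng.normal(loc=1.0, size=(16, 8))
    mean = soft_update_mean_latent(mean, samples, momentum=0.9)
```

With a high momentum the update behaves like a low-pass filter over latent statistics, which is one plausible reading of "soft update" in this context.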
