
Fast Text-to-3D-Aware Face Generation and Manipulation via Direct Cross-modal Mapping and Geometric Regularization

Authors: Zhang, Jinlu; Zhou, Yiyi; Zheng, Qiancheng; Du, Xiaoxiong; Luo, Gen; Peng, Jun; Sun, Xiaoshuai; Ji, Rongrong

Affiliations: Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, China; Peng Cheng Laboratory, Shenzhen 518000, China

Publication: arXiv

Year: 2024

Subject: Mapping

Abstract: Text-to-3D-aware face (T3D Face) generation and manipulation is an emerging research hotspot in machine learning that still suffers from low efficiency and poor quality. In this paper, we propose an End-to-End Efficient and Effective network for fast and accurate T3D face generation and manipulation, termed E3-FaceNet. Different from existing complex generation paradigms, E3-FaceNet resorts to a direct mapping from text instructions to the 3D-aware visual space. We introduce a novel Style Code Enhancer to strengthen cross-modal semantic alignment, alongside an innovative Geometric Regularization objective to maintain consistency across multi-view generations. Extensive experiments on three benchmark datasets demonstrate that E3-FaceNet not only achieves picture-like 3D face generation and manipulation but also improves inference speed by orders of magnitude. For instance, compared with Latent3D, E3-FaceNet speeds up five-view generation by almost 470 times while still exceeding it in generation quality. Our code is released at https://***/Aria-Zhangjl/E3-FaceNet. Copyright © 2024, The Authors. All rights reserved.
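
Note: the abstract names a Geometric Regularization objective that keeps geometry consistent across multi-view generations but does not spell out its form. The short Python (PyTorch) sketch below illustrates one plausible multi-view consistency penalty under stated assumptions; the function name geometric_regularization, the use of per-view depth maps, and the mean-reference formulation are illustrative assumptions, not the authors' actual loss.

    # Hypothetical sketch: penalize geometry drift between views rendered
    # from the same style/latent code. The real E3-FaceNet objective is not
    # specified in this abstract; names and formulation are assumptions.
    import torch
    import torch.nn.functional as F

    def geometric_regularization(depth_views: torch.Tensor) -> torch.Tensor:
        """depth_views: (V, H, W) depth maps rendered from V nearby camera
        poses for one identity. Returns a scalar consistency penalty."""
        # Use the mean geometry over views as a reference and penalize
        # per-view deviations from it.
        reference = depth_views.mean(dim=0, keepdim=True)  # (1, H, W)
        return F.mse_loss(depth_views, reference.expand_as(depth_views))

    if __name__ == "__main__":
        fake_depths = torch.rand(5, 64, 64)  # toy five-view example
        print(geometric_regularization(fake_depths).item())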
