
CA-Edit: Causality-Aware Condition Adapter for High-Fidelity Local Facial Attribute Editing

Authors: Xian, Xiaole; He, Xilin; Niu, Zenghao; Zhang, Junliang; Xie, Weicheng; Song, Siyang; Yu, Zitong; Shen, Linlin

Affiliations: Computer Vision Institute, School of Computer Science & Software Engineering, Shenzhen University, China; National Engineering Laboratory for Big Data System Computing Technology, Shenzhen University, China; Guangdong Key Laboratory of Intelligent Information Processing, China; University of Exeter, United Kingdom; Great Bay University, China

Published in: arXiv

Year: 2024


Abstract: For efficient and high-fidelity local facial attribute editing, most existing editing methods either require additional fine-tuning for different editing effects or tend to affect regions beyond the intended editing area. Alternatively, inpainting methods can edit the target image region while preserving external areas. However, current inpainting methods still suffer from misalignment between the generated content and the facial attribute description, as well as the loss of facial skin details. To address these challenges, (i) a novel data utilization strategy is introduced to construct datasets consisting of attribute-text-image triples from a data-driven perspective, and (ii) a Causality-Aware Condition Adapter is proposed to enhance the contextual causality modeling of specific details, encoding the skin details of the original image while preventing conflicts between these cues and the textual conditions. In addition, a Skin Transition Frequency Guidance technique is introduced for the local modeling of contextual causality via sampling guidance driven by low-frequency alignment. Extensive quantitative and qualitative experiments demonstrate the effectiveness of our method in boosting both fidelity and editability for localized attribute editing. The code is available at https://***/connorxian/CA-Edit. © 2024, CC BY.
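The abstract describes Skin Transition Frequency Guidance as sampling guidance driven by low-frequency alignment. The sketch below is purely illustrative and is not the authors' released code: it shows one plausible way a low-frequency alignment signal could be computed in PyTorch, using a Gaussian low-pass filter and an MSE term between the edited region and the original skin context. All function names, parameters, and the commented guidance loop are assumptions.

```python
# Hypothetical sketch of a low-frequency alignment term (not the CA-Edit implementation).
import torch
import torch.nn.functional as F

def gaussian_kernel(size: int = 9, sigma: float = 3.0, channels: int = 3) -> torch.Tensor:
    """Build a depthwise Gaussian low-pass kernel of shape (channels, 1, size, size)."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    kernel_2d = torch.outer(g, g)
    kernel_2d /= kernel_2d.sum()
    return kernel_2d.expand(channels, 1, size, size).clone()

def low_pass(x: torch.Tensor, kernel: torch.Tensor) -> torch.Tensor:
    """Keep only low-frequency content per channel via depthwise convolution."""
    pad = kernel.shape[-1] // 2
    return F.conv2d(x, kernel, padding=pad, groups=x.shape[1])

def low_freq_alignment_loss(edited: torch.Tensor, reference: torch.Tensor,
                            kernel: torch.Tensor) -> torch.Tensor:
    """MSE between the low-frequency components of the edited region and the
    original skin context; one possible guidance signal during sampling."""
    return F.mse_loss(low_pass(edited, kernel), low_pass(reference, kernel))

# Assumed usage inside a diffusion sampling loop (names are placeholders):
# pred_x0 = ...  # current estimate of the clean image, differentiable w.r.t. the latent
# loss = low_freq_alignment_loss(pred_x0 * mask, original * mask, kernel)
# grad = torch.autograd.grad(loss, latent)[0]
# latent = latent - guidance_scale * grad
```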
