TryOn-Adapter: Efficient Fine-Grained Clothing Identity Adaptation for High-Fidelity Virtual Try-On

Authors: Xing, Jiazheng; Xu, Chao; Qian, Yijie; Liu, Yang; Dai, Guang; Sun, Baigui; Liu, Yong; Wang, Jingdong

Author Affiliations: Zhejiang Univ, Coll Control Sci & Engn, Lab Adv Percept Robot & Intelligent Learning, Hangzhou 310027, Zhejiang, Peoples R China; Alibaba Grp, Hangzhou, Peoples R China; State Grid Shaanxi Elect Power Co, SGIT AI Lab, Xian, Peoples R China; Baidu Inc, Beijing, Peoples R China

Published in: INTERNATIONAL JOURNAL OF COMPUTER VISION (Int J Comput Vision)

Year/Volume/Issue: 2025, Vol. 133, No. 6

Pages: 3781-3802

Subject Classification: 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awarded in Engineering or Science)]

Keywords: Virtual Try-On; Large-scale generative models; Diffusion model; Identity preservation

Abstract: Virtual try-on focuses on seamlessly fitting given clothes to a specific person while avoiding any distortion of the garment's patterns and textures. However, existing diffusion-based methods suffer from uncontrollable clothing identity and training inefficiency; they struggle to maintain identity even with full-parameter training, which significantly limits their widespread application. In this work, we propose an effective and efficient framework, termed TryOn-Adapter. Specifically, we first decouple clothing identity into fine-grained factors: style for color and category information, texture for high-frequency details, and structure for smooth spatially adaptive transformation. Our approach utilizes a pre-trained exemplar-based diffusion model as the fundamental network, whose parameters are frozen except for the attention layers. We then customize three lightweight modules (Style Preserving, Texture Highlighting, and Structure Adapting), incorporated with fine-tuning techniques, to enable precise and efficient identity control. Meanwhile, we introduce the training-free T-RePaint strategy to further enhance clothing identity preservation while maintaining a realistic try-on effect during inference. Our experiments demonstrate that our approach achieves state-of-the-art performance on two widely used benchmarks. Additionally, compared with recent full-tuning diffusion-based methods, we use only about half of their tunable parameters during training. The code will be made publicly available at https://***/jiazheng-xing/TryOn-Adapter.
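
The sketch below is a minimal illustration of the parameter-efficient pattern the abstract describes: freeze a pre-trained diffusion backbone except for its attention layers, and attach small residual adapters. It is an assumption-laden sketch, not the authors' released code; the names freeze_except_attention and LightweightAdapter, and the "attn" name-matching heuristic, are hypothetical.

import torch
import torch.nn as nn

def freeze_except_attention(model: nn.Module) -> None:
    """Freeze every parameter, then re-enable gradients only for attention layers."""
    for param in model.parameters():
        param.requires_grad = False
    for name, module in model.named_modules():
        # Heuristic assumption: diffusion U-Nets commonly expose attention
        # blocks under names containing "attn"; adjust to the real architecture.
        if "attn" in name.lower():
            for param in module.parameters():
                param.requires_grad = True

class LightweightAdapter(nn.Module):
    """A hypothetical bottleneck adapter standing in for lightweight modules
    like the paper's Style Preserving / Texture Highlighting / Structure
    Adapting components (whose internals the abstract does not specify)."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual adaptation keeps the frozen backbone's features intact.
        return x + self.up(self.act(self.down(x)))

if __name__ == "__main__":
    # Toy stand-in for a U-Net: one attention block, one conv that stays frozen.
    toy = nn.ModuleDict({
        "conv": nn.Conv2d(4, 4, 3, padding=1),
        "attn1": nn.MultiheadAttention(embed_dim=8, num_heads=2),
    })
    freeze_except_attention(toy)
    trainable = sum(p.numel() for p in toy.parameters() if p.requires_grad)
    total = sum(p.numel() for p in toy.parameters())
    print(f"trainable parameters: {trainable}/{total}")

Restricting gradients to attention layers plus small adapters is what lets such a method tune roughly half (or fewer) of the parameters a full-tuning diffusion baseline would, as the abstract claims.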
