Weakly-Supervised 3D Visual Grounding based on Visual Language Alignment

Authors: Xu, Xiaoxu; Yuan, Yitian; Zhang, Qiudan; Wu, Wenhui; Jie, Zequn; Ma, Lin; Wang, Xu

Affiliations: College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China; Meituan Inc., China; College of Electronics and Information Engineering, Shenzhen University, China; Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen, China

Published in: arXiv

Year: 2023

Keywords: Semantics

Abstract: Learning to ground natural language queries to target objects or regions in 3D point clouds is essential for 3D scene understanding. Nevertheless, existing 3D visual grounding approaches require a substantial number of bounding-box annotations for text queries, which are time-consuming and labor-intensive to obtain. In this paper, we propose 3D-VLA, a weakly supervised approach for 3D visual grounding based on Visual Language Alignment. Our 3D-VLA exploits the superior ability of current large-scale vision-language models (VLMs) to align the semantics between texts and 2D images, as well as the naturally existing correspondences between 2D images and 3D point clouds, and thus implicitly constructs correspondences between texts and 3D point clouds with no need for fine-grained box annotations during training. During the inference stage, the learned text-3D correspondence helps us ground text queries to the 3D target objects even without 2D images. To the best of our knowledge, this is the first work to investigate 3D visual grounding in a weakly supervised manner by involving large-scale vision-language models, and extensive experiments on the ReferIt3D and ScanRefer datasets demonstrate that our 3D-VLA achieves comparable and even superior results to the fully supervised methods. Our code will be available at https://***/xuxiaoxxxx/3D-VLA. Copyright © 2023, The Authors. All rights reserved.
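The following is a minimal, hypothetical PyTorch sketch (not the authors' released code) of the weakly supervised idea summarized in the abstract: during training, a 3D object encoder is aligned with frozen VLM features of the 2D image crops that naturally correspond to each candidate object, so that at inference a text query's VLM feature can be matched against 3D objects directly, without 2D images or box annotations. All module names, dimensions, and the contrastive loss below are illustrative assumptions.

```python
# Hypothetical sketch of weakly supervised text-3D alignment via a frozen VLM.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 512  # assumed CLIP-like embedding size


class ObjectPointEncoder(nn.Module):
    """Encodes per-object point features into the VLM embedding space."""

    def __init__(self, in_dim=6, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, EMBED_DIM),
        )

    def forward(self, pts):                      # pts: (num_objects, num_points, in_dim)
        feat = self.mlp(pts).max(dim=1).values   # max-pool over points per object
        return F.normalize(feat, dim=-1)


encoder = ObjectPointEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# Stand-ins for per-object point clouds and precomputed, L2-normalized VLM
# features of the 2D crops that naturally correspond to each candidate 3D
# object (no bounding-box labels for the text queries are needed).
num_objects, num_points = 16, 1024
object_points = torch.randn(num_objects, num_points, 6)    # xyz + rgb
crop_image_feats = F.normalize(torch.randn(num_objects, EMBED_DIM), dim=-1)

# Training step: contrastively align 3D object features with their 2D crop
# features, transferring the VLM's text-image alignment to the 3D branch.
optimizer.zero_grad()
obj_feats = encoder(object_points)
logits = obj_feats @ crop_image_feats.t() / 0.07            # temperature-scaled
labels = torch.arange(num_objects)
loss = F.cross_entropy(logits, labels)
loss.backward()
optimizer.step()

# Inference: ground a text query using only its precomputed VLM text feature;
# no 2D images are required at this stage.
text_feat = F.normalize(torch.randn(1, EMBED_DIM), dim=-1)
with torch.no_grad():
    scores = encoder(object_points) @ text_feat.t()          # (num_objects, 1)
    grounded_object = scores.argmax().item()
print("Grounded object index:", grounded_object)
```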
