Superpixel Tokenization for Vision Transformers: Preserving Semantic Integrity in Visual Tokens

Authors: Lew, Jaihyun; Jang, Soohyuk; Lee, Jaehoon; Yoo, Seungryong; Kim, Eunji; Lee, Saehyung; Mok, Jisoo; Kim, Siwon; Yoon, Sungroh

Affiliations: Interdisciplinary Program in AI, Seoul National University, Republic of Korea; Department of Electrical and Computer Engineering, Seoul National University, Republic of Korea; Amazon, United States; AIIS, ASRI, INMC, ISRC, Seoul National University, Republic of Korea

Published in: arXiv

Year: 2024


Subject: Semantics

Abstract: Transformers, a groundbreaking architecture proposed for natural language processing (NLP), have also achieved remarkable success in computer vision. A cornerstone of their success lies in the attention mechanism, which models relationships among tokens. While the tokenization process in NLP inherently ensures that each token maintains semantic integrity without containing multiple meanings, the grid-based tokenization of Vision Transformer (ViT) relies on uniformly partitioned square image patches, which may result in an arbitrary mixing of visual concepts within a token. In this work, we propose a novel tokenization pipeline that replaces the grid-based tokenization with superpixels, encouraging each token to capture a distinct visual concept. Unlike square image patches, superpixels are formed in varying shapes, sizes, and locations, making direct substitution challenging. To address this, our pipeline first generates pixel-level embeddings and efficiently aggregates them within superpixel clusters, producing superpixel tokens that seamlessly replace patch tokens in ViT. Extensive experiments demonstrate that our approach enhances the performance of ViT on various downstream tasks and introduces intriguing properties such as adaptive inference and semantic integrity in tokens. © 2024, CC BY.
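The aggregation step described in the abstract (pixel-level embeddings pooled within superpixel clusters to form tokens) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name `superpixel_tokens` and the choice of mean pooling are assumptions for clarity; the paper's efficient aggregation scheme may differ.

```python
import numpy as np

def superpixel_tokens(pixel_embeddings, superpixel_labels):
    """Pool per-pixel embeddings into one token per superpixel.

    Hypothetical sketch: mean pooling is an assumed aggregation;
    the paper's method may use a different scheme.

    pixel_embeddings: (H, W, D) array of pixel-level features
    superpixel_labels: (H, W) integer map with values in [0, K)
    returns: (K, D) array, one token per superpixel
    """
    H, W, D = pixel_embeddings.shape
    flat_emb = pixel_embeddings.reshape(-1, D)
    flat_lab = superpixel_labels.reshape(-1)
    K = int(flat_lab.max()) + 1
    # Scatter-add each pixel's embedding into its superpixel's slot,
    # then divide by the superpixel's pixel count to get the mean.
    sums = np.zeros((K, D))
    np.add.at(sums, flat_lab, flat_emb)
    counts = np.bincount(flat_lab, minlength=K).astype(float)
    return sums / counts[:, None]

# Toy example: a 4x4 "image" with 3-dim embeddings and 2 superpixels
# (left half = superpixel 0, right half = superpixel 1).
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 4, 3))
labels = np.zeros((4, 4), dtype=int)
labels[:, 2:] = 1
tokens = superpixel_tokens(emb, labels)
print(tokens.shape)  # (2, 3)
```

Note that, unlike grid patches, the number of superpixels K can vary per image, so the token count varies too; this is consistent with the adaptive-inference property the abstract mentions.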
