Author affiliations: Interdisciplinary Program in AI, Seoul National University, Republic of Korea; Department of Electrical and Computer Engineering, Seoul National University, Republic of Korea; Amazon, United States; AIIS, ASRI, INMC, ISRC, Seoul National University, Republic of Korea
Publication: arXiv
Year/Volume/Issue: 2024
Core indexing: (none)
Subject: Semantics
Abstract: Transformers, a groundbreaking architecture proposed for natural language processing (NLP), have also achieved remarkable success in computer vision. A cornerstone of their success lies in the attention mechanism, which models relationships among tokens. While the tokenization process in NLP inherently ensures that each token maintains semantic integrity without containing multiple meanings, the grid-based tokenization of the Vision Transformer (ViT) relies on uniformly partitioned square image patches, which may result in an arbitrary mixing of visual concepts within a token. In this work, we propose a novel tokenization pipeline that replaces grid-based tokenization with superpixels, encouraging each token to capture a distinct visual concept. Unlike square image patches, superpixels vary in shape, size, and location, making direct substitution challenging. To address this, our pipeline first generates pixel-level embeddings and efficiently aggregates them within superpixel clusters, producing superpixel tokens that seamlessly replace patch tokens in ViT. Extensive experiments demonstrate that our approach enhances the performance of ViT on various downstream tasks and introduces intriguing properties such as adaptive inference and semantic integrity in tokens. © 2024, CC BY.
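The abstract describes aggregating pixel-level embeddings within superpixel clusters to form tokens of irregular shape and size. A minimal sketch of one such aggregation (mean pooling per cluster) is shown below; the function name and NumPy scatter-add implementation are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def superpixel_pool(pixel_emb, labels):
    """Mean-pool pixel embeddings within each superpixel cluster.

    pixel_emb: (N, D) array of per-pixel embeddings (N = H * W pixels).
    labels:    (N,) int array assigning each pixel to a superpixel id.
    Returns a (K, D) array of superpixel tokens, K = number of superpixels.
    """
    K = int(labels.max()) + 1
    D = pixel_emb.shape[1]
    sums = np.zeros((K, D), dtype=pixel_emb.dtype)
    np.add.at(sums, labels, pixel_emb)           # scatter-add embeddings per cluster
    counts = np.bincount(labels, minlength=K)    # pixel count per superpixel
    return sums / counts[:, None]

# Toy example: 6 pixels with 2-D embeddings, grouped into 2 superpixels.
emb = np.array([[1., 0.], [1., 0.], [3., 2.],
                [0., 4.], [0., 4.], [0., 4.]])
lbl = np.array([0, 0, 0, 1, 1, 1])
tokens = superpixel_pool(emb, lbl)  # shape (2, 2): one token per superpixel
```

Unlike fixed-grid patch embedding, the number and extent of clusters here depend on the image content, which is what enables the adaptive inference the abstract mentions: the token count varies with the superpixel segmentation rather than being fixed by a patch grid.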