The efficacy of intelligent recognition and inference in space target imagery is profoundly dependent on the scale and quality of the dataset. Given the rarity of such images, labor-intensive manual annotation has been the traditional recourse. To address the demand for automatic segmentation annotation in space target imagery, we propose a groundbreaking framework built on the Segment Anything Model, which leverages object detection prompts to generate precise instance boundaries. Our framework incorporates a scalable set of prompts for foreground and background classes, maximizing the zero-shot potential of the visual model. In addition, we address frequent issues with anomalous mask regions in annotations by implementing an easily integrated mask quality enhancement strategy, which yields a precision increase of 1.0%-2.0% and an 86.7% improvement in mask quality. Through extensive evaluation across various public space target image datasets, our method has proven highly accurate and generalizable. To support further research, we are releasing both our specialized dataset and the project code, aiming to mitigate the limitations imposed by the scarcity of annotated training data in current on-orbit machine vision systems.
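The general pattern described above, prompting the Segment Anything Model with detection boxes and then post-processing the returned masks, can be illustrated with a minimal sketch. This is not the authors' released code: the checkpoint path, the example box coordinates, and the clean-up heuristic (keeping only the largest connected component to suppress anomalous mask regions) are illustrative assumptions.

```python
# Minimal sketch: box-prompted SAM inference followed by a simple mask clean-up.
# Checkpoint path, image path, and detection box are placeholders.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def keep_largest_component(mask: np.ndarray) -> np.ndarray:
    """Suppress small anomalous regions by keeping only the largest connected component."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(
        mask.astype(np.uint8), connectivity=8
    )
    if num <= 1:
        return mask
    # Label 0 is the background; choose the foreground label with the largest area.
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return labels == largest

# Load a SAM backbone and wrap it in a predictor (checkpoint path is assumed).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

# Read the space target image and register it with the predictor.
image = cv2.cvtColor(cv2.imread("space_target.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A bounding box (x1, y1, x2, y2) from an upstream detector serves as the prompt.
box = np.array([120, 80, 360, 300])
masks, scores, _ = predictor.predict(box=box, multimask_output=False)

# Clean the predicted mask before writing it out as a segmentation annotation.
clean_mask = keep_largest_component(masks[0])
```

In this sketch the detector output stands in for the paper's scalable foreground/background prompt set; the connected-component filter is only one simple way to realize a mask quality enhancement step.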