The emerging bio-inspired dynamic vision sensor (DVS), characterized by its exceptionally high temporal resolution and immediate response, possesses an innate advantage in capturing rapidly changing scenes. Nevertheless, it is also susceptible to severe noise interference, especially under challenging conditions such as low illumination and high exposure. Notably, existing noise-processing approaches tend to oversimplify the data into 2-dimensional (2D) patterns, disregarding the crucial sparse and irregular event-structure information that the DVS intrinsically provides via its asynchronous output. To address these problems, we propose a residual graph neural network (RGNN) framework based on density-based spatial clustering for event denoising, called DBRGNN. Leveraging a temporal-window rule, we extract non-overlapping event segments from the DVS event stream and adopt a density-based spatial clustering algorithm to obtain event groups with spatial correlations. To fully exploit the inherent sparsity and rich spatiotemporal information of the raw event stream, we transform each event group into a compact graph representation via directed edges and feed it into a graph coding module composed of a series of graph convolutional and pooling layers to learn robust geometric features from event sequences. Importantly, our approach effectively reduces noise levels without compromising the spatial structure and temporal coherence of spike events. Compared with other baseline methods, our DBRGNN achieves competitive performance in quantitative and qualitative evaluations on publicly available datasets under varying lighting conditions and noise ratios.
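The pre-processing pipeline sketched in the abstract — slicing the event stream into non-overlapping temporal windows, grouping events by density-based spatial clustering, and linking each group with time-ordered directed edges — can be illustrated as follows. This is a minimal sketch under stated assumptions, not the authors' implementation: events are assumed to be `(x, y, t, p)` tuples sorted by timestamp, and the parameter names (`window_us`, `eps`, `min_pts`) are illustrative choices, not values from the paper.

```python
from math import hypot

def temporal_windows(events, window_us=10_000):
    """Split a timestamp-sorted event stream into non-overlapping windows."""
    if not events:
        return []
    windows, current, start = [], [], events[0][2]
    for e in events:
        if e[2] - start >= window_us:  # window full: start the next segment
            windows.append(current)
            current, start = [], e[2]
        current.append(e)
    windows.append(current)
    return windows

def dbscan(points, eps=2.0, min_pts=3):
    """Minimal DBSCAN over 2D pixel coordinates; returns one label per point.
    Label -1 marks noise, i.e. events with too few spatial neighbours."""
    n = len(points)
    labels = [None] * n

    def neighbours(i):
        xi, yi = points[i]
        return [j for j in range(n)
                if hypot(points[j][0] - xi, points[j][1] - yi) <= eps]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nb = neighbours(i)
        if len(nb) < min_pts:
            labels[i] = -1          # provisional noise; may become a border point
            continue
        labels[i] = cluster
        queue = list(nb)
        while queue:                # expand the cluster from each core point
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point: absorbed, not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbours(j)
            if len(nb_j) >= min_pts:
                queue.extend(nb_j)
        cluster += 1
    return labels

def directed_edges(group):
    """Chain a cluster's events with directed edges following timestamp order."""
    ordered = sorted(group, key=lambda e: e[2])
    return [(ordered[i], ordered[i + 1]) for i in range(len(ordered) - 1)]
```

The resulting per-cluster edge lists would then be the graph inputs to an encoder of graph convolutional and pooling layers; events labelled -1 are already filtered as spatially isolated noise before any learning takes place.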