Copyright: Inner Mongolia University Library · Technology provided by: Weipu Zixun (维普资讯) · Zhitu
235 Daxue West Street, Saihan District, Hohhot, Inner Mongolia Autonomous Region. Postcode: 010021
Active vision enables dynamic and robust visual perception, offering an alternative to the static, passive nature of the feedforward architectures commonly used in computer vision, which depend on large datasets and substantial computational resources. Biological selective-attention mechanisms allow agents to focus on salient regions of interest (ROIs), reducing computational demand while maintaining real-time responsiveness. Event-based cameras, inspired by the mammalian retina, further enhance this capability by capturing asynchronous scene changes, enabling efficient, low-latency processing. To distinguish moving objects while the event-based camera is itself in motion, the agent requires an object-motion-segmentation mechanism to accurately detect targets and position them at the centre of the visual field (the fovea). Integrating event-based sensors with neuromorphic algorithms represents a paradigm shift, using spiking neural networks (SNNs) to parallelise computation and adapt to dynamic environments. This work presents a bioinspired attention system, based on a spiking convolutional neural network, that performs selective attention through object-motion sensitivity. The system generates events via fixational eye movements using a dynamic vision sensor integrated into the Speck neuromorphic hardware, mounted on a pan-tilt unit, to identify the ROI and saccade toward it. Characterised using ideal gratings and benchmarked against the event-camera motion-segmentation dataset, the system reaches a mean IoU of 82.2% and a mean structural similarity index of 96% in multi-object motion segmentation. Additionally, the detection of salient objects reaches an accuracy of 88.8% in office scenarios and 89.8% in challenging indoor and outdoor low-light conditions, as evaluated on the event-assisted low-light video object segmentation dataset. A real-time demonstrator shows the system detecting the salient object through object-motion sensitivity in 0.124 s in dynamic scenes. Its learni
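The abstract reports segmentation quality as mean intersection-over-union (IoU) between predicted and ground-truth motion masks. As a minimal sketch of how such a score is computed, the snippet below evaluates IoU on binary masks; the `pred` and `gt` arrays are hypothetical stand-ins for a predicted motion-segmentation mask and its ground-truth annotation, not data from the paper's benchmark.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-Union of two boolean segmentation masks."""
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Convention: two empty masks are a perfect match.
    return float(intersection / union) if union else 1.0

# Hypothetical 4x4 masks standing in for segmentation output.
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True          # predicted object: 4 pixels
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:4] = True            # ground truth: 6 pixels

print(round(mask_iou(pred, gt), 3))  # intersection 4, union 6 → 0.667
```

In a benchmark setting, this per-frame (or per-object) score would be averaged over the dataset to obtain the mean IoU figure quoted above.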