arXiv

Flexible and Scalable Deep Dendritic Spiking Neural Networks with Multiple Nonlinear Branching

Authors: Huang, Yifan; Fang, Wei; Ma, Zhengyu; Li, Guoqi; Tian, Yonghong

Affiliations: School of Computer Science, Peking University, Beijing, China; School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, Shenzhen, China; Peng Cheng Laboratory, Shenzhen, China; Institute of Automation, Chinese Academy of Sciences, Beijing, China

Published in: arXiv

Year: 2024


Keywords: Neurons

Abstract: Recent advances in spiking neural networks (SNNs) have a predominant focus on network architectures, while very little attention has been paid to the underlying neuron model. The point neuron models, a cornerstone of existing deep SNNs, pose a bottleneck on the network-level expressivity since they depict somatic dynamics only. In contrast, the multi-compartment models in neuroscience offer remarkable expressivity and biological plausibility by introducing dendritic morphology and dynamics, but remain underexplored in deep learning due to their unaffordable computational overhead and inflexibility. To combine the advantages of both sides for a flexible, efficient yet more powerful computational model, we propose the dendritic spiking neuron (DendSN) incorporating multiple dendritic branches with nonlinear dynamics. Compared to the point spiking neurons, DendSN exhibits significantly higher expressivity. DendSN's flexibility enables its seamless integration into diverse deep SNN architectures. To accelerate dendritic SNNs (DendSNNs), we parallelize dendritic state updates across time steps, and develop Triton kernels for GPU-level acceleration. As a result, we can construct various large-scale DendSNNs with depth comparable to their point SNN counterparts. Next, we comprehensively evaluate DendSNNs' performance on various demanding tasks. We first propose a novel algorithm called dendritic branch gating (DBG) for continual learning. By modulating dendritic branch strengths using an additional task context signal, DBG significantly mitigates catastrophic forgetting of DendSNNs. Moreover, DendSNNs demonstrate enhanced robustness against noise and adversarial attacks compared to point SNNs, and excel in few-shot learning settings. Our work is the first to demonstrate the possibility of training bio-plausible dendritic SNNs with depths and scales comparable to traditional point SNNs, and reveals superior expressivity and robustness of reduced dendritic neuron models in deep learning.
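To make the abstract's core idea concrete, below is a minimal, hypothetical sketch of one time step of a dendritic spiking neuron with multiple nonlinear branches. The paper's actual model, equations, and parameter names are not given in this record, so everything here is an assumption for illustration only: the input is split across B branches, each branch performs leaky integration followed by a nonlinearity (tanh here), and the soma is a standard leaky integrate-and-fire unit combining strength-weighted branch outputs. The per-branch strength vector `k` is where a task-context signal could intervene, in the spirit of dendritic branch gating (DBG).

```python
import numpy as np

def dendsn_step(x, v_dend, v_soma, k, beta_d=0.5, beta_s=0.5, v_th=1.0):
    """One hypothetical DendSN time step (not the paper's exact dynamics).

    x:      (B, n) synaptic input, n synapses on each of B branches
    v_dend: (B,)   dendritic branch states
    v_soma: float  somatic membrane potential
    k:      (B,)   branch strengths (a DBG-style context signal could set these)
    Returns (spike, v_dend, v_soma).
    """
    # Branch-local leaky integration of the summed synaptic input.
    v_dend = beta_d * v_dend + x.sum(axis=1)
    # Nonlinear dendritic activation: this per-branch nonlinearity is what a
    # point neuron (soma only) cannot express.
    y = np.tanh(v_dend)
    # Soma: leaky integrate-and-fire over the strength-weighted branch outputs.
    v_soma = beta_s * v_soma + np.dot(k, y)
    spike = float(v_soma >= v_th)
    v_soma = v_soma * (1.0 - spike)  # hard reset after a spike
    return spike, v_dend, v_soma

# Usage: 4 branches with 8 synapses each, uniform branch strengths.
rng = np.random.default_rng(0)
B, n = 4, 8
v_d, v_s, k = np.zeros(B), 0.0, np.ones(B)
for t in range(5):
    s, v_d, v_s = dendsn_step(rng.normal(0.1, 0.2, (B, n)), v_d, v_s, k)
```

Note that this sequential loop is exactly what the abstract says the authors avoid at scale: they parallelize dendritic state updates across time steps and implement them as Triton GPU kernels, which this NumPy sketch does not attempt to reproduce.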
