arXiv

Iterative temporal differencing with random synaptic feedback weights support error backpropagation for deep learning

Author: Dargazany, Aras R.

Affiliation: Department of Electrical, Computer and Biomedical Engineering, University of Rhode Island, Kingston, RI 02881, United States

Published in: arXiv

Year: 2019

Subject: Network architecture

Abstract: This work shows that a differentiable activation function is no longer necessary for error backpropagation: the derivative of the activation function can be replaced by iterative temporal differencing combined with fixed random feedback alignment. Using fixed random synaptic feedback alignment together with iterative temporal differencing transforms traditional error backpropagation into a more biologically plausible approach for learning deep neural network architectures. This can be a big step toward the integration of STDP-based error backpropagation in deep learning. Copyright © 2019, The Authors. All rights reserved.
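The two ingredients the abstract names can be illustrated with a minimal NumPy sketch. This is not the paper's exact algorithm: the toy data, layer sizes, and learning rate are all assumptions, and the "temporal differencing" here is rendered as a simple finite difference of the activation at two nearby points, standing in for the analytic derivative. The error is sent backward through a fixed random matrix `B2` (feedback alignment) rather than the transpose of the forward weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression data, for illustration only
X = rng.standard_normal((200, 4))
Y = X @ rng.standard_normal((4, 2))

W1 = 0.3 * rng.standard_normal((4, 16))   # forward weights, hidden layer
W2 = 0.3 * rng.standard_normal((16, 2))   # forward weights, output layer
B2 = 0.3 * rng.standard_normal((2, 16))   # FIXED random feedback weights, never trained
lr, eps = 0.01, 1e-3

def act(z):
    return np.tanh(z)

losses = []
for step in range(500):
    # forward pass
    z1 = X @ W1
    h1 = act(z1)
    err = h1 @ W2 - Y                     # output-layer error
    losses.append(float(np.mean(err ** 2)))

    # replace the analytic derivative tanh'(z) by differencing the
    # activation at two nearby inputs (a stand-in for the paper's
    # iterative temporal differencing)
    dact = (act(z1 + eps) - h1) / eps

    # propagate the error through the FIXED random matrix B2, not W2.T
    delta1 = (err @ B2) * dact

    # gradient-descent-style updates, averaged over the batch
    W2 -= lr * h1.T @ err / len(X)
    W1 -= lr * X.T @ delta1 / len(X)

print(losses[0], losses[-1])
```

Even though `B2` bears no relation to `W2`, the forward weights tend to align with the feedback pathway during training, so the hidden layer still receives a useful teaching signal and the loss decreases.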
