Deep learning has significantly advanced audio classification, achieving remarkable results. However, these successes often rely on extensive manual annotation of audio, a labor-intensive and costly process. Activ...
Deep reinforcement learning has achieved encouraging performance in many realms. However, one of its primary challenges is the sparsity of extrinsic rewards, which remains far from solved. Complementary learning syst...
To survive in the dynamically evolving world, we accumulate knowledge and improve our skills based on experience. In the process, gaining new knowledge does not disrupt our vigilance to external stimuli. In other words, our learning process is 'accumulative' and 'online' without interruption. However, despite their recent success, artificial neural networks (ANNs) must be trained offline and suffer catastrophic interference between old and new learning, indicating that ANNs' conventional learning algorithms may not be suitable for building intelligent agents comparable to our brain. In this study, we propose a novel neural network architecture (DynMat) consisting of dual learning systems inspired by the complementary learning system (CLS) theory, which suggests that the brain relies on short- and long-term learning systems to learn continuously. Our empirical evaluations show that (1) DynMat can learn a new class without catastrophic interference and (2) it does not strictly require offline training. (C) 2019 Elsevier Ltd. All rights reserved.
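The dual-system idea from CLS theory can be illustrated with a toy sketch: a fast short-term store that memorizes raw recent examples one at a time (online), and a slow long-term store that consolidates them into per-class prototypes, so learning a new class does not overwrite old knowledge. This is a hypothetical minimal illustration of the general CLS principle, not the actual DynMat architecture; all names (`DualMemoryClassifier`, `fast_capacity`, etc.) are assumptions.

```python
import math

class DualMemoryClassifier:
    """Toy CLS-style dual-memory learner (illustrative sketch, not DynMat):
    a fast store keeps raw recent examples; a slow store keeps one
    running-average prototype per class, built by consolidation."""

    def __init__(self, fast_capacity=20):
        self.fast = []        # short-term store: list of (vector, label)
        self.slow = {}        # long-term store: label -> (prototype, count)
        self.fast_capacity = fast_capacity

    def learn(self, x, label):
        # Fast system: memorize the raw example immediately (one-shot, online).
        self.fast.append((x, label))
        if len(self.fast) > self.fast_capacity:
            self._consolidate()

    def _consolidate(self):
        # Slow system: fold fast memories into per-class running-average
        # prototypes, then clear the fast store.
        for x, label in self.fast:
            proto, n = self.slow.get(label, ([0.0] * len(x), 0))
            self.slow[label] = (
                [(p * n + xi) / (n + 1) for p, xi in zip(proto, x)], n + 1)
        self.fast = []

    def predict(self, x):
        # Query both systems; the nearest stored pattern wins.
        candidates = [(self._dist(x, v), lab) for v, lab in self.fast]
        candidates += [(self._dist(x, p), lab)
                       for lab, (p, _) in self.slow.items()]
        return min(candidates)[1]

    @staticmethod
    def _dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
```

In this sketch, examples of an old class survive consolidation as a slow-memory prototype, so a new class learned online afterwards adds to, rather than interferes with, the existing knowledge.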