Hopfield networks can exhibit many different attractors, most of which are local optima. It has been demonstrated that combining state randomization with Hebbian learning enlarges the basins of attraction of globally optimal attractors. This procedure, called self-modeling, has so far been applied to symmetric Hopfield networks with discrete states and no self-recurrent connections. We are interested in which of these topological constraints can be relaxed, so we test the self-modeling process in asymmetric Hopfield networks with continuous states and self-recurrent connections. The best results are obtained in networks with modular structure.
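Below is a minimal sketch of the self-modeling loop described above: repeatedly randomize the network state, let the dynamics settle into an attractor, and apply a Hebbian update on the converged state. All names and parameters (network size, learning rate, number of epochs, the tanh relaxation dynamics) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # number of units (assumed)
# Asymmetric weights with self-connections kept, scaled so that
# nontrivial attractors exist (effective gain > 1).
W = rng.normal(0.0, 1.5 / np.sqrt(N), (N, N))

def relax(W, s, steps=200, tau=0.5):
    """Leaky continuous Hopfield dynamics, run to (approximate) convergence."""
    for _ in range(steps):
        s = (1 - tau) * s + tau * np.tanh(W @ s)
    return s

lr = 1e-3
for epoch in range(1000):
    s = rng.uniform(-1.0, 1.0, N)        # state randomization
    s = relax(W, s)                      # settle into an attractor
    W += lr * np.outer(s, s)             # Hebbian learning on the attractor state
    # In longer runs the weights may need rescaling to stay bounded.
```

Repeated Hebbian reinforcement of visited attractors is what gradually reshapes the energy landscape, enlarging the basins of the attractors the dynamics settle into most often.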
This paper presents an algorithm, called ERANN, for extracting symbolic rules from trained artificial neural networks (ANNs). In many applications it is desirable to extract knowledge from ANNs so that users gain a better understanding of how the networks solve the problems. Although an ANN usually achieves high classification accuracy, its results can be incomprehensible because the knowledge embedded within it is distributed over the activation functions and the connection weights. This problem can be addressed by extracting rules from the trained ANN. To this end, a rule extraction algorithm is proposed that operates on a standard three-layer feedforward ANN with four-phase training. Extensive experimental studies on a set of benchmark classification problems, including breast cancer, iris, diabetes, wine, season, golf playing, and lenses classification, demonstrate the applicability of the proposed method. The extracted rules are comparable with those of other methods in terms of the number of rules, the average number of conditions per rule, and rule accuracy. The proposed method achieved accuracies of 96.28%, 98.67%, 76.56%, 91.01%, 100%, 100%, and 100% on the above problems, respectively, which are among the best results reported in related previous studies.
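As a hedged illustration of the general idea of rule extraction, the sketch below trains a small network and then fits a surrogate decision tree to the network's predictions, reading each root-to-leaf path as a symbolic rule. This surrogate-tree approach is a generic pedagogical stand-in, not the paper's ERANN algorithm or its four-phase training; the dataset choice and model sizes are assumptions.

```python
# Generic surrogate-tree rule extraction (NOT ERANN): fit a shallow
# decision tree to a trained network's predictions and print the
# resulting root-to-leaf paths as IF-THEN style rules.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# Trained three-layer feedforward ANN (sizes and training are assumed).
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)

# The surrogate tree mimics the network's input-output behaviour.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, net.predict(X))
print(export_text(tree, feature_names=list(data.feature_names)))
```

On iris this typically yields a handful of short rules, which matches the kind of output (few rules, few conditions per rule) against which the paper evaluates ERANN.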