A spiking neuron is a simplified model of the biological neuron in which the input, output, and internal representation of information are based on the relative timing of individual spikes, and it is closely related to biological networks. We extend learning algorithms with spiking neurons developed by earlier workers. Those algorithms explicitly concerned a single pair of pre- and postsynaptic spikes and cannot be applied to situations involving multiple spikes arriving at the same synapse. The algorithm presented here achieves synaptic plasticity by using the relative timing between single pre- and postsynaptic spikes, and thereby improves performance on large datasets. The learning algorithm is based on spike timing-dependent synaptic plasticity, which uses exact spike timing to optimize the information stream through the neural network and to enforce competition between neurons during unsupervised Hebbian learning. We demonstrate the performance of the proposed spiking neuron model and learning algorithm on clustering and provide a comparative analysis with other state-of-the-art approaches.
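A pair-based spike timing-dependent plasticity update can be sketched as follows. This is a generic exponential STDP window, not the authors' exact rule; the parameter values (a_plus, a_minus, tau) are illustrative assumptions:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.1, a_minus=0.12,
                tau=20.0, w_max=1.0):
    """Pair-based STDP: dt > 0 (post fires after pre) potentiates,
    dt < 0 depresses, with an exponential window of time constant tau."""
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)    # long-term potentiation
    elif dt < 0:
        w -= a_minus * math.exp(dt / tau)    # long-term depression
    return min(max(w, 0.0), w_max)           # keep w in [0, w_max]
```

A causal pairing (pre at t=10, post at t=15) strengthens the synapse; the reversed order weakens it.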
This paper focuses on learning algorithms for approximating functional data chosen from some Hilbert spaces. An effective algorithm, called the Hilbert parallel overrelaxation backpropagation (HPORBP) algorithm, is proposed for training Hilbert feedforward neural networks, which extend feedforward neural networks from the Euclidean space R^n to some Hilbert spaces. Furthermore, the convergence of the iterative HPORBP algorithm is analyzed, and a deterministic convergence theorem is established for the HPORBP algorithm on the basis of the perturbation results of Mangasarian and Solodov. Experimental results on learning functional data in some Hilbert spaces illustrate the convergence theorem and show that the proposed HPORBP algorithm achieves better accuracy than the Hilbert backpropagation algorithm. Copyright (c) 2012 John Wiley & Sons, Ltd.
A Hopfield neural network (HNN) is a neural network model with mutual connections. A quaternionic HNN (QHNN) is an extension of the HNN, and several QHNN models have been proposed. The hybrid QHNN utilizes the non-commutativity of quaternions. It has been shown that hybrid QHNNs with the Hebbian learning rule outperform QHNNs in noise tolerance. The Hebbian learning rule, however, is a primitive learning algorithm, and it is necessary to study more advanced learning algorithms. Although the projection rule is one of the few promising learning algorithms, it is restricted by network topology and cannot be applied to hybrid QHNNs. In the present work, we propose gradient descent learning, which can be applied to hybrid QHNNs, and compare its performance with that of the projection rule. In computer simulations, gradient descent learning outperformed the projection rule in noise tolerance. For small training-pattern sets, hybrid QHNNs with gradient descent learning produced the best performance; QHNNs did so for large training-pattern sets. In future work, gradient descent learning will be extended to QHNNs with different network topologies and activation functions. (C) 2017 Elsevier B.V. All rights reserved.
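As a rough real-valued analogue of gradient descent learning for a Hopfield-type associative memory (the quaternionic algebra is omitted for brevity; the function name and parameters are illustrative assumptions): each stored pattern should be close to a fixed point, so we minimize the fixed-point residual by projected gradient steps, keeping the weight matrix symmetric with a zero diagonal.

```python
import numpy as np

def train_hopfield_gd(patterns, lr=0.05, epochs=200):
    """Gradient descent learning for a real-valued Hopfield-type
    associative memory: minimize ||x - W x||^2 over stored patterns x
    by projected gradient steps (symmetric W, zero diagonal)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for _ in range(epochs):
        for x in patterns:
            err = x - W @ x                  # fixed-point residual
            W += lr * np.outer(err, x)       # gradient step on ||err||^2
            W = (W + W.T) / 2.0              # project: symmetry
            np.fill_diagonal(W, 0.0)         # project: no self-coupling
    return W

# two orthogonal bipolar patterns to store
patterns = np.array([[1.0, -1.0, 1.0, -1.0],
                     [1.0, 1.0, -1.0, -1.0]])
W = train_hopfield_gd(patterns)
```

After training, each stored pattern is recovered by a single signed recall step.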
The stability-plasticity problem (i.e. how the brain incorporates new information into its model of the world while preserving existing knowledge) has been at the forefront of computational memory research for several decades. In this paper, we critically evaluate how well the Complementary Learning Systems theory of hippocampo-cortical interactions addresses the stability-plasticity problem. We identify two major challenges for the model: finding a learning algorithm for cortex and hippocampus that enacts selective strengthening of weak memories and selective punishment of competing memories; and preventing catastrophic forgetting in the case of non-stationary environments (i.e. when items are temporarily removed from the training set). We then discuss potential solutions to these problems. First, we describe a recently developed learning algorithm that leverages neural oscillations to find weak parts of memories (so they can be strengthened) and strong competitors (so they can be punished), and we show how this algorithm outperforms other learning algorithms (CPCA Hebbian learning and Leabra) at memorizing overlapping patterns. Second, we describe how autonomous re-activation of memories (separately in cortex and hippocampus) during REM sleep, coupled with the oscillating learning algorithm, can reduce the rate of forgetting of input patterns that are no longer present in the environment. We then present a simple demonstration of how this process can prevent catastrophic interference in an AB-AC learning paradigm. (c) 2005 Elsevier Ltd. All rights reserved.
Complex-valued associative memories (CAMs) are among the most promising associative memory models based on neural networks. However, the low noise tolerance of CAMs is often a serious problem. A projection learning rule with large constant terms improves the noise tolerance of CAMs, but the projection learning rule can be applied only to CAMs with full connections. In this paper, we propose a gradient descent learning rule with large constant terms that is not restricted by network topology. We realize large constant terms through regularization of the connection weights. By computer simulations, we show that the proposed learning algorithm improves noise tolerance. (c) 2016 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.
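A minimal sketch of one gradient step for a complex-valued memory with constant terms, in the spirit of the abstract (not the paper's exact rule; the function name, learning rate, and regularization strength are assumptions). The recall target is W x + c ≈ x; the L2 penalty shrinks the connection weights so the constant terms c become relatively large:

```python
import numpy as np

def cam_gd_step(W, c, x, lr=0.05, lam=0.01):
    """One gradient step for a complex-valued associative memory with
    constant terms c. The weight decay lam regularizes the connection
    weights, making the constant terms relatively large."""
    err = x - (W @ x + c)                            # complex residual
    W = W + lr * (np.outer(err, np.conj(x)) - lam * W)
    np.fill_diagonal(W, 0.0)                         # no self-connections
    c = c + lr * err
    return W, c
```

Repeated steps on a unit-modulus pattern drive the recall residual toward zero while keeping the connection weights small.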
In our education system, a teacher hopes that students will learn something specific from his demonstrations and textbook, while the students try to understand the teacher's demonstrations and textbook by their own learning methods. Obviously, the teacher's aim may not be achievable under all of the students' learning methods. Therefore, the students' final learning results generally differ from the teacher's expectation, which we call the gap between teaching and learning (GTL) in this paper. Since the goal of machine learning is to design a computer program with learning ability, it is natural to ask whether GTL occurs in machine learning. In this paper, we prove that GTL exists in machine learning. Because a common assumption in current learning theory is that a learning algorithm usually realizes the original expectation, GTL provides a new insight into learning theory. According to the GTL theory, learning algorithms can be classified into four types, Type I through Type IV. Compared with human learning, the GTL theory substantiates an intuitive observation: from the learning point of view, artificial intelligence can never surpass human intelligence.
The aerodynamic drag coefficient curve of spin-stabilized projectiles is very important to the fast generation of accurate firing tables. To identify it from velocity data measured by Doppler tracking radar in flight tests, iterative learning control (ILC) is applied. High-order ILC algorithms are proposed, and convergence conditions are given in a general problem setting. A 3-DOF point mass trajectory prediction model is proposed. Learning gains that vary with respect to both time and iteration number are used for faster convergence than constant learning parameter choices. Furthermore, a bi-linear ILC scheme is proposed to produce even faster learning convergence. The flight-test data-reduction results from an actual firing practice demonstrate that the iterative learning method is very effective for curve identification. Copyright (C) 1997 Elsevier Science Ltd.
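A high-order (here, second-order) P-type ILC update can be sketched on a toy discrete plant. The plant, gains, horizon, and iteration count are illustrative assumptions, not the paper's 3-DOF trajectory model; the point is only the update law, which corrects the next trial's input using tracking errors from the last two trials:

```python
import numpy as np

def plant(u, a=0.9):
    """Toy discrete plant: y[t+1] = a*y[t] + u[t], with y[0] = 0."""
    y = np.zeros(len(u))
    for t in range(len(u) - 1):
        y[t + 1] = a * y[t] + u[t]
    return y

def high_order_ilc(y_ref, g1=0.6, g2=0.2, iters=300):
    """Second-order P-type ILC:
    u_{k+1}[t] = u_k[t] + g1*e_k[t+1] + g2*e_{k-1}[t+1]."""
    u = np.zeros(len(y_ref))
    e_prev = np.zeros(len(y_ref))
    e = y_ref - plant(u)
    for _ in range(iters):
        corr = g1 * e + g2 * e_prev
        u[:-1] += corr[1:]        # error at t+1 corrects the input at t
        e_prev, e = e, y_ref - plant(u)
    return u, e

y_ref = np.sin(0.1 * np.arange(50))   # reference trajectory
u, e = high_order_ilc(y_ref)
```

Over iterations the tracking error shrinks toward zero along the whole horizon, which is the trial-to-trial convergence the convergence conditions formalize.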
The paper presents a universal approach to the determination of sensitivity functions for dynamic neural networks and its application in learning algorithms for adaptive networks. The method is based on the signal flow graph and a specially defined adjoint graph, and it applies equally to feed-forward and recurrent network structures. This paper is mainly concerned with neural network applications of the approach. Different kinds of dynamic neural networks are considered and discussed: the FIR dynamic multilayer perceptron (MLP), the cascade connection of dynamic MLPs, and two non-linear recurrent systems, the dynamic recurrent MLP network and the ARMA recurrent network. The sensitivity-determination rule has been applied to practical learning of neural networks. Selected results of numerical experiments on the application of this approach to the learning of recurrent neural networks are also given and discussed. Copyright (C) 1999 John Wiley & Sons, Ltd.
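The adjoint-graph idea can be illustrated on a one-neuron recurrent system (a minimal sketch, not the paper's general graph formalism): a reverse pass over the unrolled signal flow graph propagates error signals backwards and accumulates the sensitivity dE/dw, which can be checked against a finite difference.

```python
import numpy as np

def forward(w, v, x):
    """One recurrent neuron: y[t] = tanh(w*y[t-1] + v*x[t]), y[-1] = 0."""
    y = np.zeros(len(x))
    prev = 0.0
    for t in range(len(x)):
        y[t] = np.tanh(w * prev + v * x[t])
        prev = y[t]
    return y

def sensitivity_dw(w, v, x, y_ref):
    """Adjoint pass over the unrolled graph: accumulate dE/dw for
    E = 0.5 * sum (y - y_ref)^2."""
    y = forward(w, v, x)
    grad, adj = 0.0, 0.0          # adj: sensitivity arriving from the future
    for t in reversed(range(len(x))):
        delta = (y[t] - y_ref[t]) + adj
        s = delta * (1.0 - y[t] ** 2)            # back through tanh
        grad += s * (y[t - 1] if t > 0 else 0.0)  # dE/dw contribution
        adj = s * w                               # along the recurrent edge
    return grad
```

The same reverse traversal works for any graph: each node's adjoint collects the sensitivities of all edges leaving it.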
Purpose This paper deals with the optimal choice of the structural parameters of a novel extreme learning machine (ELM) architecture, called Meta-ELM, which is based on an ensemble of classic ELMs, by using a forecasting process. Design/methodology/approach The modelling performance of the Meta-ELM architecture varies depending on the network parameters it contains, so the choice of Meta-ELM parameters is important for model accuracy. For this reason, the optimal choice of Meta-ELM parameters is investigated on the problem of wind speed forecasting. Hourly wind-speed data obtained from the Bilecik and Bozcaada stations in Turkey are used. Different numbers of ELM groups (M) and hidden nodes (N_h) are analysed to determine the best modelling performance of the Meta-ELM. The forecasting results of the optimal Meta-ELM architecture are also compared with four different learning algorithms and a hybrid meta-heuristic approach. Finally, a linear model based on the correlation between the parameters is computed and presented in three dimensions (3D). Findings The best Meta-ELM performance is observed for M = 15-20 and N_h = 5-10. Considering the performance metrics, the Meta-ELM model provides the best results in all regions, while the feed-forward neural network trained with the Levenberg-Marquardt algorithm and the adaptive neuro-fuzzy inference system tuned by particle swarm optimization show competitive results for the forecasting process. In addition, the Meta-ELM provides much better results in terms of elapsed time. Originality/value The original contribution of the study is to investigate the determination of Meta-ELM parameters through a forecasting process.
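A minimal sketch of the Meta-ELM idea as an ensemble of classic ELMs whose outputs are combined by a second least-squares fit (the function names, toy data, and parameter values are illustrative assumptions, not the paper's wind-speed setup; M = 15 and N_h = 10 follow the reported sweet spot):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, y, n_hidden):
    """One classic ELM: a random hidden layer, with output weights
    solved in closed form by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    return W, b, np.linalg.pinv(H) @ y

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def train_meta_elm(X, y, M=15, n_hidden=10):
    """Meta-ELM sketch: M base ELMs, then a least-squares combination
    of their predictions at the meta level."""
    base = [train_elm(X, y, n_hidden) for _ in range(M)]
    P = np.column_stack([elm_predict(m, X) for m in base])
    return base, np.linalg.pinv(P) @ y

def meta_predict(meta, X):
    base, alpha = meta
    return np.column_stack([elm_predict(m, X) for m in base]) @ alpha

# toy regression target in place of the wind-speed series
X = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
y = np.sin(X).ravel()
meta = train_meta_elm(X, y)
```

Because both levels are solved in closed form, training stays fast, which is consistent with the elapsed-time advantage reported in the findings.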
A self-supervised learning algorithm using fuzzy sets and the concept of guard zones around the class representative vectors is presented and demonstrated for vowel recognition. An optimum guard zone giving the best match with the fully supervised performance is determined. Results are also compared with those of the unsupervised case for various orderings of the input patterns.
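A sketch of self-supervised prototype updating with guard zones (the fuzzy-set machinery is omitted; the function name, guard-zone radius, learning rate, and toy data are assumptions): a sample updates its nearest class representative only when it falls inside that prototype's guard zone, while samples outside every guard zone are treated as ambiguous and left unused.

```python
import numpy as np

def guard_zone_fit(X, prototypes, radius=1.0, lr=0.2, epochs=5):
    """Self-supervised prototype updating with guard zones around the
    class representative vectors."""
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x in X:
            d = np.linalg.norm(P - x, axis=1)
            j = int(np.argmin(d))
            if d[j] < radius:              # inside the guard zone: accept
                P[j] += lr * (x - P[j])    # pull prototype toward x
    return P
```

On two well-separated toy clusters, each prototype drifts from its initial guess toward its cluster's mean; shrinking the radius makes the update more conservative, which is the trade-off the optimum guard zone balances.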