ISBN (print): 9781467379397
Although the discovery of the Error Back Propagation (EBP) learning algorithm was a real breakthrough, EBP is not only very slow but also incapable of training networks with super compact architectures. The most noticeable progress was made with the adaptation of the LM algorithm to neural network training. The LM algorithm can train networks in 100 to 1000 times fewer iterations, but the size of the problems it can handle is significantly limited. Also, the LM algorithm was adopted primarily for traditional MLP architectures. More recently, two new revolutionary concepts were developed: Support Vector Machines and Extreme Learning Machines. They are very fast, but they train only shallow networks with one hidden layer, and it was shown that these shallow networks have very limited capabilities. It has already been demonstrated that super compact architectures have much higher capabilities, with 10 to 100 times more processing power than commonly used learning architectures. For example, a shallow MLP architecture with 10 neurons can solve only a Parity-9 problem, but a special deep FCC (Fully Connected Cascade) architecture with the same 10 neurons can solve a problem as large as Parity-1023. Unfortunately, because of the vanishing gradient problem, deep architectures are very difficult to train. By introducing additional connections across layers, it was possible to efficiently train deep networks using the powerful NBN algorithm. Our early results show that there is a solution to this difficult problem.
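The capacity comparison quoted above can be checked arithmetically: the abstract's figures match the pattern that a shallow MLP with n hidden neurons solves roughly Parity-(n-1), while an FCC cascade of n neurons solves Parity-(2^n - 1). The sketch below merely reproduces that arithmetic; the function names are illustrative, not from the paper.

```python
def mlp_parity_capacity(neurons: int) -> int:
    # Shallow one-hidden-layer MLP: roughly Parity-(n-1),
    # matching the abstract's example (10 neurons -> Parity-9).
    return neurons - 1

def fcc_parity_capacity(neurons: int) -> int:
    # Fully Connected Cascade: Parity-(2^n - 1),
    # matching the abstract's example (10 neurons -> Parity-1023).
    return 2 ** neurons - 1

print(mlp_parity_capacity(10))   # 9
print(fcc_parity_capacity(10))   # 1023
```

The exponential gap between the two formulas is what the abstract means by deep compact architectures having "10 to 100 times more processing power" per neuron than shallow ones.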