This paper presents a parameter-by-parameter (PbP) algorithm for speeding up the training of multilayer perceptrons (MLPs). The new algorithm uses an approach similar to that of the layer-by-layer (LBL) algorithm, taking into account the input errors of the output layer and the hidden layer. Unlike LBL, however, the proposed PbP algorithm does not need to calculate the gradient of the error function. In each iteration step, the weights or thresholds are optimized directly, one at a time, with all other variables held fixed. Four classes of solution equations for the network parameters are derived. The effectiveness of the PbP algorithm is demonstrated on two benchmarks. In comparison with the backpropagation algorithm with momentum (BPM) and the conventional LBL algorithm, PbP converges faster and achieves better simulation performance.
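The core idea of optimizing each weight or threshold in turn, with all other variables held fixed, is a form of coordinate descent. The sketch below is only an illustrative approximation of that idea, not the paper's derived solution equations: it substitutes a simple golden-section line search for the closed-form per-parameter updates, and the toy XOR task, network size, and all function names are assumptions.

```python
import numpy as np

# Illustrative coordinate-descent ("parameter by parameter") training of a
# tiny 2-2-1 MLP on XOR. Each sweep optimizes every parameter once, in a
# 1-D line search with all other parameters held fixed. This stands in for
# the paper's closed-form solution equations, which are not reproduced here.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(p, X):
    # Unpack a flat parameter vector: 2x2 hidden weights, 2 hidden
    # thresholds, 2 output weights, 1 output threshold (9 parameters).
    W1, b1 = p[:4].reshape(2, 2), p[4:6]
    W2, b2 = p[6:8], p[8]
    h = sigmoid(X @ W1 + b1)
    return sigmoid(h @ W2 + b2)

def loss(p):
    return np.mean((forward(p, X) - y) ** 2)

def line_min(p, i, lo=-8.0, hi=8.0, iters=40):
    """Golden-section search over parameter i, all others fixed."""
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        pc, pd = p.copy(), p.copy()
        pc[i], pd[i] = c, d
        if loss(pc) < loss(pd):
            b = d
        else:
            a = c
    p[i] = (a + b) / 2.0
    return p

p = rng.normal(0.0, 1.0, 9)
initial_loss = loss(p)
for sweep in range(30):          # one sweep = one pass over all 9 parameters
    for i in range(p.size):
        p = line_min(p, i)

print(round(loss(p), 4))
```

Note that, unlike gradient-based BP, each inner step here solves a one-dimensional problem exactly (to bracket tolerance), which is the structural feature the PbP algorithm exploits; the paper replaces the generic line search with direct solution equations for each parameter class.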