Authors: Han, Fei; Ling, Qing-Hua

Affiliations:
- Jiangsu Univ, Sch Comp Sci & Telecommun Engn, Zhenjiang 212013, Jiangsu, Peoples R China
- Chinese Acad Sci, Hefei Inst Intelligent Machines, Intelligent Comp Lab, Hefei 230031, Anhui, Peoples R China
- Jiangsu Univ Sci & Technol, Sch Elect & Informat, Zhenjiang 212003, Jiangsu, Peoples R China
In this paper, a new approach coupling adaptive particle swarm optimization (APSO) with a priori information is proposed for the function approximation problem, aiming at better generalization performance and a faster convergence rate. It is well known that gradient-based learning algorithms such as the backpropagation (BP) algorithm have strong local-search ability, whereas PSO has strong global-search ability. Therefore, in the new approach, APSO, which encodes the first-order derivative information of the approximated function, is first applied to train the network toward a near-global minimum. Second, starting from the connection weights produced by APSO, the network is trained with a gradient-based algorithm. Because it combines APSO with a local-search algorithm and incorporates a priori information, the new approach achieves better generalization performance and a faster convergence rate than traditional learning approaches. Finally, simulation results are given to verify the efficiency and effectiveness of the proposed approach. (C) 2008 Elsevier Inc. All rights reserved.
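To make the two-stage scheme concrete, below is a minimal sketch in Python of the generic hybrid the abstract describes: a global-best PSO stage over the flattened weights of a small MLP, followed by gradient refinement from the PSO solution. The network size, toy target function, PSO coefficients, and the numerical gradient in stage 2 are all illustrative assumptions; the paper's APSO variant additionally encodes the approximated function's first-derivative information, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy approximation task (assumed example, not from the paper).
X = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
y = np.sin(np.pi * X)

N_HID = 8                        # hidden units (assumption)
DIM = 3 * N_HID + 1              # flat length of [W1, b1, W2, b2]

def unpack(w):
    """Split a flat parameter vector into single-hidden-layer MLP weights."""
    W1 = w[:N_HID].reshape(1, N_HID)
    b1 = w[N_HID:2 * N_HID]
    W2 = w[2 * N_HID:3 * N_HID].reshape(N_HID, 1)
    b2 = w[3 * N_HID:]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return float(np.mean((forward(w, X) - y) ** 2))

# --- Stage 1: PSO global search over the weight space.
P, ITERS = 30, 200
pos = rng.uniform(-1.0, 1.0, (P, DIM))
vel = np.zeros((P, DIM))
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for t in range(ITERS):
    inertia = 0.9 - 0.5 * t / ITERS   # linearly decreasing inertia weight
    r1, r2 = rng.random((P, DIM)), rng.random((P, DIM))
    vel = inertia * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

# --- Stage 2: gradient-based refinement starting from the PSO solution.
w, lr, eps = gbest.copy(), 0.05, 1e-6
for _ in range(2000):
    # Central-difference gradient keeps the sketch short; BP would be used in practice.
    grad = np.array([(mse(w + eps * e) - mse(w - eps * e)) / (2 * eps)
                     for e in np.eye(DIM)])
    w -= lr * grad

print(f"MSE after PSO stage:      {mse(gbest):.5f}")
print(f"MSE after gradient stage: {mse(w):.5f}")
```

The division of labour follows the abstract's rationale: the population-based stage explores broadly so that the subsequent local, gradient-driven stage starts near a good basin rather than a poor local minimum.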
The authors provide relationships between the a priori and a posteriori errors of adaptation algorithms for real-time output-error nonlinear adaptive filters realised as feedforward or recurrent neural networks. The analysis is undertaken for a general nonlinear neuron activation function and for gradient-based learning algorithms, for both feedforward (FF) and recurrent neural networks (RNNs). Moreover, the analysis considers both contractive and expansive forms of the nonlinear activation functions within the networks. The relationships so obtained provide upper and lower error bounds for general gradient-based a posteriori learning in neural networks.
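As an illustration of the kind of relationship the abstract refers to, here is a minimal single-neuron sketch under assumed notation (d(n) desired output, Φ the activation, η the step size); the paper's actual bounds cover full feedforward and recurrent architectures, not this simplified case.

```latex
% Single-neuron illustration (assumed notation, not the paper's full analysis).
% e(n): a priori error; \bar{e}(n): a posteriori error.
\begin{align*}
  y(n) &= \Phi\bigl(\mathbf{x}^{T}(n)\,\mathbf{w}(n)\bigr), \qquad
          e(n) = d(n) - y(n) && \text{(a priori error)}\\
  \mathbf{w}(n+1) &= \mathbf{w}(n)
          + \eta\, e(n)\,\Phi'\bigl(\mathbf{x}^{T}(n)\,\mathbf{w}(n)\bigr)\,\mathbf{x}(n)
          && \text{(gradient update)}\\
  \bar{e}(n) &= d(n) - \Phi\bigl(\mathbf{x}^{T}(n)\,\mathbf{w}(n+1)\bigr)
          && \text{(a posteriori error)}\\
  &\approx e(n)\Bigl[1 - \eta\,\Phi'^{2}\bigl(\mathbf{x}^{T}(n)\,\mathbf{w}(n)\bigr)\,
          \lVert\mathbf{x}(n)\rVert_{2}^{2}\Bigr]
          && \text{(first-order expansion)}
\end{align*}
```

When the bracketed factor lies in (0, 1), for instance for a contractive Φ with |Φ'| ≤ 1 and a sufficiently small step size, |ē(n)| < |e(n)|, i.e. a posteriori learning reduces the instantaneous error; this is the sort of behaviour the paper's upper and lower bounds quantify.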