Details
ISBN:
(Print) 9783319396309; 9783319396293
In this paper, a dynamic interaction strategy based on the population learning algorithm (PLA) is proposed and experimentally validated for an A-Team solving the Multi-mode Resource-Constrained Project Scheduling Problem (MRCPSP). The MRCPSP belongs to the class of NP-hard problems. To solve it, a team of asynchronous agents (A-Team) has been implemented as a multi-agent system. An A-Team is a set of objects, including multiple agents and a common memory, which produce solutions to optimization problems through their interactions. These interactions are usually managed by a static strategy. In this paper, a dynamic learning strategy based on the PLA is suggested. The proposed strategy supervises the interactions between the optimization agents and the common memory. To validate the proposed approach, a computational experiment has been carried out.
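The A-Team architecture described above can be sketched in a few lines: optimization agents read solutions from a common memory, try to improve them with local search, and write improvements back. This is a minimal illustration only; the interaction rule below (preferring agents whose recent moves succeeded) is a simple stand-in assumption, not the paper's PLA-based dynamic strategy, and all names (`a_team_optimize`, the toy agents) are hypothetical.

```python
import random

def a_team_optimize(cost, init_solutions, agents, iterations=500, seed=0):
    """Minimal A-Team sketch: agents share a common memory of solutions.

    Each step, an agent is selected, improves one solution via its local-search
    move, and writes the result back if it is better. The selection weights are
    a simplistic stand-in for a dynamic interaction strategy: agents that have
    produced improvements recently are picked more often.
    """
    rng = random.Random(seed)
    memory = list(init_solutions)                 # common memory of solutions
    weights = [1.0] * len(agents)                 # agent success weights
    for _ in range(iterations):
        i = rng.choices(range(len(agents)), weights=weights)[0]
        s = rng.choice(memory)                    # read from common memory
        s2 = agents[i](s, rng)                    # agent applies local search
        if cost(s2) < cost(s):
            memory[memory.index(s)] = s2          # write improvement back
            weights[i] += 1.0                     # reward successful agent
    return min(memory, key=cost)
```

For example, minimizing a simple one-dimensional cost with two agents that perturb a candidate at different scales drives the shared memory toward the optimum.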
Details
ISBN:
(Print) 9783642347078
In this paper, an agent-based population learning algorithm designed to train RBF networks (RBFNs) is proposed. The algorithm is used for network initialization and for estimating the output weights. The approach is based on the assumption that the locations of the radial basis function centroids can be modified during the training process. It is shown that such floating centroids may help to find the optimal neural network structure. In the proposed implementation of the agent-based population learning algorithm, RBFN initialization and RBFN training based on floating centroids are carried out by a team of agents, which execute various local search procedures and cooperate to find a solution to the considered RBFN training problem. Two variants of the approach are suggested in the paper. Both are implemented and experimentally evaluated.
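To make the RBFN training problem concrete, the following sketch shows the two ingredients the abstract mentions: a set of centroids that define Gaussian basis activations, and output weights estimated on top of them. The weight estimation here uses plain linear least squares as an assumed stand-in for the paper's agent-based procedure, and all function names are illustrative.

```python
import numpy as np

def rbf_design_matrix(X, centroids, width):
    """Gaussian RBF activations: phi[i, j] = exp(-||x_i - c_j||^2 / (2*width^2))."""
    sq_dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dists / (2.0 * width ** 2))

def fit_output_weights(X, y, centroids, width):
    """Estimate output weights for fixed centroids by linear least squares
    (a stand-in assumption for the paper's agent-based estimation)."""
    Phi = rbf_design_matrix(X, centroids, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict(X, centroids, width, w):
    """RBFN output: weighted sum of the basis activations."""
    return rbf_design_matrix(X, centroids, width) @ w
```

In the floating-centroids setting, the centroid locations passed to `fit_output_weights` would themselves be search variables adjusted by the agents during training, rather than fixed up front.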