This article designs a new hierarchical distributed data-driven adaptive learning control algorithm to accomplish the leader-following tracking control objective for nonaffine nonlinear multiagent systems (MASs). The proposed hierarchical control structure is composed of a distributed observer and a decentralized data-driven adaptive learning controller. Considering that some followers cannot directly receive information from the leader, a distributed observer is designed to estimate the leader's information. Based on this, a decentralized data-driven adaptive learning controller is further devised to enable each follower to track the estimated information of the leader, where a model parameter learning algorithm is developed to capture the dynamic characteristics of the original system. One advantage of the developed hierarchical learning control algorithm is that neither the leader's system model nor the followers' system models are needed. The other is the elimination of the noncausal problem without additional assumptions. Comparative simulation results exemplify the merits of the theoretical results.
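As a rough illustration of the observer layer described above, the sketch below implements a consensus-style distributed observer in which each follower fuses its neighbors' estimates with a pinning term toward the leader. The communication graph, observer gain, and leader trajectory are assumptions for illustration, not the paper's design.

```python
# Minimal sketch of a consensus-based distributed observer (assumed design).
import numpy as np

N = 4                                    # number of followers (assumed)
A = np.array([[0., 1., 0., 0.],          # assumed follower communication graph
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
b = np.array([1., 0., 0., 0.])           # only follower 0 receives the leader directly
mu, dt, steps = 5.0, 0.01, 2000          # observer gain, step size, horizon (assumed)

xi = np.zeros(N)                         # each follower's estimate of the leader output
for k in range(steps):
    x0 = np.sin(0.01 * k)                # assumed leader output trajectory
    # consensus term with neighbor estimates plus pinning term to the leader
    err = A @ xi - A.sum(axis=1) * xi + b * (x0 - xi)
    xi = xi + dt * mu * err              # Euler update of the distributed observer
print(xi)                                # all estimates end up near the leader output
```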
Although the Extreme Learning Machine (ELM) can learn thousands of times faster than traditional slow gradient algorithms for training neural networks, its fitting accuracy is limited. This paper develops the Functional Extreme Learning Machine (FELM), a novel regressor and classifier. It takes functional neurons as the basic computing units and uses functional equation-solving theory to guide the modeling process. The functional neurons of FELM are not fixed, and their learning process amounts to estimating or adjusting the basis coefficients. FELM follows the spirit of extreme learning and solves the generalized inverse of the hidden-layer neuron output matrix through the principle of minimum error, without iterating to obtain the optimal hidden-layer coefficients. To verify the performance of the proposed FELM, it is compared with ELM, OP-ELM, SVM and LSSVM on several synthetic datasets, the XOR problem, and benchmark regression and classification datasets. The experimental results show that although the proposed FELM has the same learning speed as ELM, its generalization performance and stability are better.
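To make the generalized-inverse step concrete, the following sketch shows the standard extreme-learning computation the abstract alludes to: hidden-layer parameters are drawn at random, and the output weights are obtained in one shot from the Moore-Penrose pseudo-inverse of the hidden-layer output matrix. The activation, hidden size, and toy data are assumptions for illustration.

```python
# Hedged sketch of the extreme-learning step: random hidden parameters,
# output weights solved in closed form via the pseudo-inverse of H.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))             # toy inputs (assumed)
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2          # toy regression target (assumed)

L = 50                                            # number of hidden neurons (assumed)
W = rng.normal(size=(X.shape[1], L))              # random input weights, never trained
c = rng.normal(size=L)                            # random hidden biases, never trained

H = np.tanh(X @ W + c)                            # hidden-layer output matrix
beta = np.linalg.pinv(H) @ y                      # output weights via generalized inverse

y_hat = H @ beta                                  # predictions on the training set
print(np.mean((y - y_hat) ** 2))                  # training MSE of the sketch
```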
Introduction: The extreme learning machine (ELM) is a training algorithm for single-hidden-layer feedforward neural networks (SLFNs) that converges much faster than traditional methods and yields promising performance. However, ELM also has some shortcomings, such as structure selection, overfitting and low generalization performance. Methods: In this article, a new functional neuron (FN) model is proposed. Functional neurons are taken as the basic units, and functional equation-solving theory is used to guide the modeling process, yielding a new functional extreme learning machine (FELM) model theory. Results: The FELM implements learning by adjusting the coefficients of the basis functions in its neurons. At the same time, a simple, iteration-free and high-precision fast parameter learning algorithm is proposed. Discussion: Standard datasets from UCI and StatLib are selected for regression problems, and the FELM is compared with ELM, the support vector machine (SVM) and other algorithms; the experimental results show that the FELM achieves better performance.
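A minimal sketch of the functional-neuron idea, assuming each hidden unit is a small polynomial basis expansion of a random projection whose coefficients are solved in a single least-squares step; the basis choice, expansion, and toy problem are illustrative assumptions, not the FELM formulation itself.

```python
# Hedged sketch: hidden units as basis expansions, coefficients solved
# in closed form with no iterative training (assumed functional form).
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 1))
y = np.sinc(3 * X[:, 0])                       # toy regression target (assumed)

n_neurons, degree = 20, 3                      # assumed sizes
W = rng.normal(size=(X.shape[1], n_neurons))   # random projections (not trained)
Z = X @ W                                      # (samples, neurons)
# Each "functional neuron" contributes a polynomial basis of its projection.
Phi = np.concatenate([Z ** d for d in range(1, degree + 1)], axis=1)

coeffs = np.linalg.pinv(Phi) @ y               # all basis coefficients in one solve
y_hat = Phi @ coeffs
print(np.mean((y - y_hat) ** 2))               # training MSE of the sketch
```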
This paper is concerned with parameter learning for chips with the small outline transistor (SOT) package, which is one of the most widely used packages in surface mount technology (SMT) and has various subcategories. The learned parameters are crucial to most SOT-related industrial applications, such as localization and defect inspection. However, parameter learning is challenging because of package diversity and image-quality deterioration in practical industrial applications. The conventional methods, checking data sheets or measuring manually, cannot meet the accuracy requirements of SMT. This paper proposes a hierarchical-backtracking-based parameter learner for SOT chips. A Gaussian mixture model based clustering algorithm and a random walker algorithm are first applied to extract the lead regions of an SOT chip. Then, chip models are inferred by grouping these lead regions with a hierarchical backtracking algorithm. Finally, redundant models are eliminated with root set pyramids and the valid chip model is obtained. The experimental results show that the proposed parameter learner performs well on SOT chips and is robust to noisy data sets.
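A hedged sketch of the first stage only (GMM clustering feeding a random walker), using scikit-learn's GaussianMixture and scikit-image's random_walker on a synthetic image. The image, seed thresholds, and two-component assumption are illustrative; the hierarchical backtracking and root-set-pyramid stages are not reproduced here.

```python
# Sketch: cluster pixel intensities with a GMM to get rough lead/background
# seeds, then refine the segmentation with a random walker (assumed setup).
import numpy as np
from sklearn.mixture import GaussianMixture
from skimage.segmentation import random_walker

rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, size=(64, 64))       # dark package body (synthetic)
img[20:44, :8] += 0.6                            # bright lead on the left (synthetic)
img[20:44, -8:] += 0.6                           # bright lead on the right (synthetic)

# Two-component GMM over intensities: one mode for leads, one for background.
gmm = GaussianMixture(n_components=2, random_state=0).fit(img.reshape(-1, 1))
post = gmm.predict_proba(img.reshape(-1, 1))
lead_comp = int(np.argmax(gmm.means_.ravel()))   # brighter component = leads

# Only confidently classified pixels become seeds; the rest stay unlabeled (0).
seeds = np.zeros(img.size, dtype=int)
seeds[post[:, lead_comp] > 0.95] = 1             # lead seeds
seeds[post[:, lead_comp] < 0.05] = 2             # background seeds
labels = random_walker(img, seeds.reshape(img.shape), beta=130)
print((labels == 1).sum(), "pixels assigned to lead regions")
```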
ISBN (print): 9780769536415
This paper proposes a novel parameter learning algorithm for the design of a self-constructing fuzzy neural network (SCFFN). It includes dynamic prior adjustment (DPA), which is employed to adjust parameters according to the distribution of the input samples, and group-based symbiotic evolution (GSE), which is applied to train all the free parameters for the desired outputs. DPA considers the relevance between the input sample space and the IF-part parameters and is intended to accomplish coarse adjustment. Then, GSE is adopted to search for the globally optimal solution. Unlike a traditional GA, in which each individual represents a whole fuzzy system, GSE divides the population into several groups, each of which represents only a single fuzzy rule. Full solutions can be generated by all possible combinations of the groups. The simulation results verify that the proposed algorithm achieves superior performance in learning accuracy.
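The sketch below illustrates the group-based symbiotic evolution idea under simplifying assumptions: each group encodes one zero-order TSK rule with a Gaussian antecedent, full systems are assembled by drawing one individual per group, and the system fitness is credited back to the participating rules. The rule form, credit scheme, and GA operators are assumptions, and the DPA stage is omitted.

```python
# Hedged sketch of group-based symbiotic evolution over fuzzy rules.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = np.sin(np.pi * X[:, 0])                      # toy target (assumed)

n_rules, pop_per_group, gens = 4, 20, 50         # assumed sizes
# Each individual encodes one rule: [center, width, consequent].
groups = [rng.uniform(-1, 1, size=(pop_per_group, 3)) for _ in range(n_rules)]
for g in groups:
    g[:, 1] = np.abs(g[:, 1]) + 0.1              # widths must stay positive

def fis_output(rules, x):
    """Zero-order TSK inference with Gaussian membership functions."""
    mu = np.exp(-((x - rules[:, 0]) ** 2) / (2 * rules[:, 1] ** 2))
    return (mu * rules[:, 2]).sum() / (mu.sum() + 1e-9)

def fitness(rules):
    pred = np.array([fis_output(rules, x) for x in X[:, 0]])
    return -np.mean((pred - y) ** 2)             # higher is better

for _ in range(gens):
    credit = [np.zeros(pop_per_group) for _ in range(n_rules)]
    counts = [np.zeros(pop_per_group) for _ in range(n_rules)]
    for _ in range(40):                          # assemble random systems from the groups
        idx = [rng.integers(pop_per_group) for _ in range(n_rules)]
        f = fitness(np.stack([groups[r][idx[r]] for r in range(n_rules)]))
        for r in range(n_rules):                 # share system fitness back to its rules
            credit[r][idx[r]] += f
            counts[r][idx[r]] += 1
    for r in range(n_rules):                     # keep the better half, mutate to refill
        visited = counts[r] > 0
        score = np.full(pop_per_group, -np.inf)
        score[visited] = credit[r][visited] / counts[r][visited]
        elite = groups[r][np.argsort(score)[::-1][: pop_per_group // 2]]
        children = elite + rng.normal(0, 0.1, size=elite.shape)
        children[:, 1] = np.abs(children[:, 1]) + 0.05
        groups[r] = np.vstack([elite, children])

best = np.stack([groups[r][0] for r in range(n_rules)])
print("sketch MSE:", -fitness(best))
```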