This paper proposes an optimal control scheme based on actor-critic neural networks (NNs) for complex mechanical manipulator systems with dynamic disturbance. The actor's goal is to optimize the control behavior, while the critic's goal is to evaluate the control performance. The optimal control update law in the scheme guarantees that the system error and the weight estimation error are semi-globally uniformly ultimately bounded (SGUUB), and stability and convergence are proved with the direct Lyapunov method. Finally, a two-degree-of-freedom connecting-rod manipulator is tested to verify the effectiveness of the proposed optimal control scheme.
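The actor-critic interplay described above can be sketched in a few lines. The scalar plant, feature choices, stage cost, and learning rates below are illustrative assumptions, not the paper's update laws:

```python
import numpy as np

# Illustrative actor-critic sketch: scalar error dynamics
# x_{k+1} = x_k + dt*(u_k + d_k) with a bounded disturbance d_k,
# quadratic stage cost, and both "networks" linear in hand-picked
# features standing in for the NNs.
dt, gamma, lr = 0.05, 0.95, 0.05
Wc = 0.0                     # critic weight: V(x) ~ Wc * x^2
Wa = 0.0                     # actor weight:  u(x) = Wa * x

x = 1.0
for k in range(2000):
    u = Wa * x
    d = 0.01 * np.sin(0.1 * k)              # dynamic disturbance
    x_next = x + dt * (u + d)
    cost = x * x + 0.1 * u * u
    # critic: semi-gradient TD update toward the Bellman target
    td = cost + gamma * Wc * x_next ** 2 - Wc * x * x
    Wc += lr * td * x * x
    # actor: descend the estimated future cost w.r.t. the control
    Wa -= lr * (0.2 * u + gamma * dt * 2 * Wc * x_next) * x
    x = x_next
```

The critic evaluates (it predicts cost-to-go), while the actor improves the control using the critic's gradient; the error is driven into a small neighborhood of zero rather than exactly to zero, mirroring the SGUUB guarantee.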
Adaptive dynamic programming (ADP) is an effective algorithm that has been successfully deployed in various control tasks. For many emerging applications where power consumption is a major design consideration, the conventional approach of implementing ADP as software executing on a general-purpose processor is not sufficient. This paper proposes a scalable, low-power hardware architecture for implementing one of the most popular forms of ADP, action-dependent heuristic dynamic programming. Unlike most machine-learning accelerators, which mainly focus on inference, the proposed architecture is also designed for energy-efficient learning, given the highly iterative and interactive nature of the ADP algorithm. In addition, a virtual update technique is proposed to speed up computation and improve the energy efficiency of the accelerator. Two design examples are presented to demonstrate the proposed algorithm and architecture. Compared with the software approach running on a general-purpose processor, the accelerator operating at 175 MHz achieves a 270-fold improvement in computation time while consuming merely 25 mW. Furthermore, it is demonstrated that the proposed virtual update algorithm can effectively boost the energy efficiency of the accelerator, with improvements of up to 1.64x observed in the benchmark tasks employed.
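What distinguishes action-dependent HDP from plain HDP is that the critic takes both state and action as inputs, learning Q(x, u) rather than V(x). For a scalar linear plant with quadratic cost the exact Q-function is quadratic, so the iteration can be written in closed form; the plant below is an illustrative choice, not from the paper:

```python
import numpy as np

# ADHDP sketch: the critic learns an action-dependent value Q(x, u).
# For x' = a*x + b*u with stage cost x^2 + u^2 the exact Q is
# Q = h0*x^2 + 2*h1*x*u + h2*u^2, so policy iteration runs on its
# three coefficients directly.
a, b = 0.9, 1.0
h = np.array([1.0, 0.0, 1.0])                  # start from the stage cost

for _ in range(30):
    K = h[1] / h[2]                            # greedy u = -K*x from dQ/du = 0
    P = (1 + K * K) / (1 - (a - b * K) ** 2)   # value of the greedy policy
    h = np.array([1 + P * a * a, P * a * b, 1 + P * b * b])

K = h[1] / h[2]                                # converges to the LQR gain
```

Because the greedy action comes from dQ/du = 0, no model of (a, b) is needed at decision time — the motivation for making the critic action-dependent in a hardware learner.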
A novel data-driven robust approximate optimal Maximum Power Point Tracking (MPPT) control method is proposed for the wind power generation system using the adaptive dynamic programming (ADP) algorithm. First, a data-driven model is established by a recurrent neural network (NN) to reconstruct the wind power system dynamics from available input-output data. Then, based on the obtained data-driven model, the ADP algorithm is utilized to design the approximate optimal tracking controller, which consists of a steady-state controller and an optimal feedback controller. Furthermore, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. The stability of the designed model and controller is proved via the Lyapunov approach, showing that the proposed controller guarantees that the system power asymptotically tracks the maximum power. Finally, the simulation results demonstrate that the control method stabilizes the tip speed ratio near its optimal value when the wind speed is below the rated wind speed. Moreover, the tracking response of the proposed method is fast, which enhances the stability and robustness of the system.
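The two-term controller structure (steady-state plus feedback) can be illustrated on a scalar plant. The plant, reference, and gain below are stand-ins, and the feedback gain is a plain stabilizer rather than the ADP-optimal one:

```python
# Sketch of the two-term tracking controller: a steady-state term u_ss
# inverts the plant at the desired operating point, and a feedback term
# removes the remaining tracking error.
a, b = 0.9, 0.5
r = 2.0                        # desired operating point (e.g. optimal tip speed ratio)
u_ss = (1 - a) * r / b         # steady state: r = a*r + b*u_ss
K = 1.0                        # any stabilizing gain: |a - b*K| < 1
x = 0.0
for _ in range(100):
    u = u_ss - K * (x - r)
    x = a * x + b * u
```

The steady-state term handles where the system should sit, so the feedback (optimal in the paper, plain here) only has to regulate the error dynamics to zero.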
Space tether systems have wide application prospects in space missions. Owing to their strong nonlinearity and underactuation, as well as interference from the complex space environment, tethered systems are difficult to model accurately. Hence, a controller based on the parameters of a system model will incur large errors during control. In this paper, an adaptive dynamic programming algorithm based on reinforcement learning theory is adopted. By training two back-propagation (BP) neural networks, namely a critic neural network (NN) and an actor NN, the performance index function and the control law of the system approach their approximate optimal values, respectively. The controller design is independent of the system model, so implementing this control method realizes model-free control. First, assuming that the out-of-plane motion of the system is stable, the optimal deployment trajectory of the tethered system is obtained by parameter optimization based on the Nelder-Mead method. The optimal trajectory is taken as the nominal trajectory, and trajectory tracking is carried out by the reinforcement learning controller. The simulation results show that the reinforcement learning algorithm has a good control effect on in-plane trajectory tracking of the tethered system, which demonstrates the feasibility and robustness of the control method.
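The trajectory-design step above (parameterize the deployment profile, then search the parameters with Nelder-Mead) can be sketched with a toy cost. The profile shape and penalty below are illustrative stand-ins for the paper's tether dynamics:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in: tether length l(t) goes from 0 to 1, two parameters
# shape the profile, and the cost penalizes deployment-rate variation
# plus any momentary retrieval (negative rate).
t = np.linspace(0.0, 1.0, 101)

def profile(p):
    return t + p[0] * np.sin(np.pi * t) + p[1] * np.sin(2 * np.pi * t)

def cost(p):
    rate = np.gradient(profile(p), t)
    return np.var(rate) + 10.0 * np.sum(np.clip(-rate, 0.0, None))

res = minimize(cost, x0=[0.3, 0.3], method="Nelder-Mead")
# the smoothest admissible profile here is the constant-rate one, p near (0, 0)
```

Nelder-Mead needs no gradients, which is why it suits cost functions evaluated by simulating the (hard-to-differentiate) tether dynamics; the optimized profile then serves as the nominal trajectory for the learning controller.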
Model-based reinforcement learning techniques accelerate the learning task by employing a transition model to make predictions. In this paper, a model-based learning approach is presented that iteratively computes the optimal value function based on the most recent update of the model. Assuming a structured continuous-time model of the system in terms of a set of bases, we formulate an infinite-horizon optimal control problem addressing a given control objective. The structure of the system, along with a value function parameterized in quadratic form, provides the flexibility to derive an update rule for the parameters analytically. Hence, a matrix differential equation for the parameters is obtained, whose solution characterizes the optimal feedback control in terms of the bases at any time step. Moreover, the quadratic form of the value function suggests a compact way of updating the parameters that considerably decreases the computational complexity. Considering the state dependency of the differential equation, we exploit the obtained framework as an online learning-based algorithm. In the numerical results, the presented algorithm is implemented on four nonlinear benchmark examples, where the regulation problem is successfully solved and an identified model of the system is obtained with bounded prediction error.
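In the linear special case, parameterizing the value function as V(x) = x'Px reduces the matrix differential equation for the parameters to the familiar Riccati differential equation, whose steady state gives the infinite-horizon feedback. The double-integrator (A, B) and weights below are illustrative choices, not the paper's benchmarks:

```python
import numpy as np

# Integrate dP/dt = A'P + P A - P B R^{-1} B' P + Q to steady state;
# the fixed point is the algebraic Riccati solution and yields the
# optimal feedback u = -K x with K = R^{-1} B' P.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

P = np.zeros((2, 2))
h = 0.01                                        # Euler step
for _ in range(5000):
    dP = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
    P = P + h * dP

K = np.linalg.solve(R, B.T @ P)                 # optimal feedback u = -K x
```

For this double integrator the steady-state P is [[sqrt(3), 1], [1, sqrt(3)]], so the integration converging there is an easy sanity check; the paper's contribution is doing the analogous parameter update online for structured nonlinear bases.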
In this paper, a data-based optimal tracking control approach is developed by involving the iterative dual heuristic dynamic programming algorithm for nonaffine systems. To obtain the steady control corresponding to the desired trajectory, a novel strategy is established with regard to the unknown system function. Then, according to the iterative adaptive dynamic programming algorithm, the update formula of the costate function and the new optimal control policy for unknown nonaffine systems are provided to solve the optimal tracking control problem. Moreover, three neural networks are used to facilitate the implementation of the proposed algorithm. To improve the accuracy of the steady control corresponding to the desired trajectory, a model network is employed to directly approximate the unknown system function instead of the error dynamics. Finally, the effectiveness of the proposed method is demonstrated through a simulation example.
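The defining feature of dual heuristic dynamic programming is that the critic approximates the costate lambda(x) = dJ/dx rather than the cost-to-go itself. For a scalar linear plant with quadratic cost the costate is linear, so the costate update collapses to a one-parameter recursion; the plant below is an illustrative stand-in, not the paper's nonaffine system:

```python
# DHP sketch: iterate on the costate lambda(x) = p*x for the plant
# x' = a*x + b*u with stage cost x^2 + u^2.  Each pass computes the
# control from the stationarity condition 2*u + b*lambda(x') = 0 and
# then updates the costate via lambda <- dU/dx + (dx'/dx)*lambda(x').
a, b = 0.7, 1.0
p = 0.0
for _ in range(200):
    K = a * b * p / (2 + b * b * p)   # implied policy u = -K*x
    p = 2 + a * p * (a - b * K)       # costate recursion
K = a * b * p / (2 + b * b * p)       # converged optimal gain
```

Iterating on the derivative rather than the value gives the controller exactly the quantity it needs (the gradient), which is why DHP typically trains the action network more accurately than plain HDP.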
This paper proposes an off-policy learning-based dynamic state feedback protocol that achieves the optimal synchronization of heterogeneous multi-agent systems (MAS) over a directed communication network. Note that most recent works on heterogeneous MAS are not formulated in an optimal manner. By formulating the cooperative output regulation problem as an H-infinity optimization problem, we can use reinforcement learning to find output synchronization protocols online along the system trajectories without solving the output regulator equations. In contrast to the existing optimal-control literature, where the leader's states are assumed to be globally or distributively available for communication, we only allow the relative system outputs to be transmitted through the network; namely, no leader states are needed for control or learning purposes.
In this paper, a data-driven optimal control method based on adaptive dynamic programming and game theory is presented for finding output-feedback solutions of the H-infinity control problem for linear discrete-time systems with multiple players subject to multi-source disturbances. We first transform the H-infinity control problem into a multi-player game problem, following the theoretical solutions given by game theory. Since the system state may not be measurable, we derive output-feedback-based control policies and disturbances through mathematical manipulation. Considering the advantages of off-policy reinforcement learning (RL) over on-policy RL, a novel off-policy game Q-learning algorithm handling mixed competition and cooperation among players is developed, such that the H-infinity control problem can finally be solved for linear multi-player systems without knowledge of the system dynamics. Moreover, rigorous proofs of algorithm convergence and of the unbiasedness of the solutions are presented. Finally, simulation results demonstrate the effectiveness of the proposed method.
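The off-policy ingredient — learning a target policy from data generated by a different, exploring behavior policy — can be sketched on a single-player linear-quadratic stand-in (the paper's multi-player game and output-feedback machinery are omitted). The plant (a, b) below is used only to simulate transitions, never by the learner:

```python
import numpy as np

# Off-policy Q-learning sketch: collect one batch under an exploring
# behavior policy, then reuse it for every policy-evaluation step of
# policy iteration on the quadratic Q-function.
rng = np.random.default_rng(1)
a, b = 0.8, 1.0

data, x = [], 1.0
for _ in range(200):
    u = -0.3 * x + rng.normal(scale=0.5)      # behavior policy + exploration
    x_next = a * x + b * u
    data.append((x, u, x_next))
    x = x_next

def phi(x, u):                                # quadratic basis for Q(x, u)
    return np.array([x * x, 2 * x * u, u * u])

h = np.array([1.0, 0.0, 1.0])
for _ in range(20):                           # policy iteration on one batch
    K = h[1] / h[2]                           # current greedy gain u = -K*x
    A_ls = np.array([phi(xk, uk) - phi(xn, -K * xn) for xk, uk, xn in data])
    c = np.array([xk * xk + uk * uk for xk, uk, _ in data])
    h, *_ = np.linalg.lstsq(A_ls, c, rcond=None)
K = h[1] / h[2]                               # approaches the LQR gain
```

Because the Bellman equation is solved by least squares over stored transitions, the exploration noise introduces no bias — the unbiasedness property the paper proves for its game setting.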
This paper deals with the optimal fault estimation and accommodation problem for a class of linear systems in the framework of Stackelberg differential game theory. In this framework, the observer plays the role of the follower, while the system plays the role of the leader in making sequential decisions. A dual-controller approach is used to design an auxiliary controller for the observer so that it can play non-cooperatively against the system's controller to achieve the Stackelberg equilibrium. To achieve online updating of the fault-tolerant controller, an adaptive dynamic programming methodology is used, establishing two critic neural networks for the observer and the system, respectively. Finally, a simulation is presented to illustrate the efficiency and applicability of the theoretical results.
This paper studies the data-driven structural control of monopile wind turbine towers using a machine learning approach, with an active tuned mass damper (TMD) located in the nacelle. The adaptive dynamic programming (ADP) approach is employed to obtain the optimal controller, which is implemented on the modern large-scale machine learning platform Tensorflow. The proposed network structure comprises three simple three-layer neural networks (NNs): a plant network, a critic network, and an action network. The plant network captures the fully nonlinear dynamics of the structural system, while the action network approximates the optimal controller. Their training requires gradient information flowing through the whole network. Automatic differentiation is used for all the gradient derivations, which greatly improves the employed ADP algorithm's ability to solve complex practical problems. The simulation results of structural control of monopile turbine towers show that, on average, the active TMD achieves a 15% improvement in tower fatigue load reduction over a passive TMD, with small active power consumption (less than 0.24% of the turbine's nominal power production). Besides, the controller design considers the trade-off between control performance and power consumption.