ISBN (print): 9781509009107
Disturbance widely exists in control systems. Active disturbance rejection control (ADRC) has been proved to be an efficient way to deal with disturbance and has achieved great success in practice. The most important part of ADRC is the extended state observer (ESO), which estimates the states of the system and the total disturbance. The ESO regards internal disturbance and external disturbance together as a total disturbance, estimates it, and eliminates it in the controller. The faster the ESO estimates the total disturbance, the more quickly the controller eliminates it. This paper focuses on the discrete linear extended state observer (DLESO) and presents the discrete predictive linear extended state observer (DPLESO). Stability and performance are analyzed. The result shows that DPLESO has better disturbance-rejection performance than DLESO.
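To make the observer structure concrete, here is a minimal sketch of a discrete linear ESO for a first-order plant with an unknown constant disturbance. The plant, step size and observer gains are invented for illustration; this is the basic DLESO idea, not the DPLESO proposed in the paper.

```python
import numpy as np

def simulate_dleso(steps=200, h=0.01):
    """Discrete linear ESO for the toy plant y[k+1] = y[k] + h*(u + d),
    where d is an unknown constant disturbance. The observer state
    z = [y_hat, d_hat] carries the disturbance as an extended state."""
    d = 2.0                      # unknown "total disturbance" (ground truth)
    y, u = 0.0, 1.0              # plant output and constant control input
    z = np.zeros(2)              # observer state [y_hat, d_hat]
    l1, l2 = 0.6, 8.0            # observer gains (hand-tuned for this h)
    for _ in range(steps):
        e = y - z[0]             # output estimation error
        z = np.array([z[0] + h * (u + z[1]) + l1 * e,  # copy of plant + correction
                      z[1] + l2 * e])                  # disturbance-estimate update
        y = y + h * (u + d)      # plant step
    return z, d

z, d = simulate_dleso()
```

With these gains the estimation-error dynamics have eigenvalues 0.8 and 0.6, so `d_hat` converges to `d` within a few hundred steps; a predictive variant such as DPLESO aims to speed up exactly this estimate.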
In this article, I would like to share my daughter's experience with self-STEAM education. In our daily life, she can always find problems, dream about them with her imagination, and finally narrow down solutions to solve those problems in her own way. During these problem-solving processes, I noticed that she went through science, technology, engineering, arts and mathematics in a very natural way. So I would say that STEAM is the way we learn and grow from the moment we are born. Later at school, however, we are trained in a way that keeps subjects completely disconnected. It might take time to change this situation at school, but at home, as parents, we need to realize that children can discover new things by themselves even from a young age, provide a rich learning environment for them, and let them "STEAM" themselves in their own way.
ISBN (print): 9781509009107
In this paper, we propose an improved part-based tracking method based on compressive tracking. Traditional compressive tracking can hardly deal with occlusion and scale variation, and is not robust enough in these situations. In our part-based method, the occluded image patches, which are identified by K-means, do not participate in determining the location of the target, in order to increase tracking accuracy. To solve the problem of scale variation, an algorithm is proposed to estimate the size of the target using information from the whole target. In addition, we also use the positions of every part to obtain a precise size of the target. With the combination of the two methods, the scale of the target is estimated. Experiments compared with five state-of-the-art trackers have been done to prove the effectiveness and robustness of our method.
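As a toy illustration of the occlusion-handling idea, the sketch below clusters patch confidence scores with a tiny 1-D K-means (k = 2) and lets only the high-score ("visible") patches vote on the target centre. The patch positions, scores and clustering details are invented and far simpler than the paper's method.

```python
import numpy as np

def two_means_1d(scores, iters=20):
    """Tiny 1-D K-means (k = 2): splits patch confidence scores into a
    low-score (likely occluded) and a high-score (visible) cluster.
    Assumes cluster 1, seeded at the max score, stays the high cluster."""
    c = np.array([scores.min(), scores.max()], dtype=float)
    for _ in range(iters):
        labels = (np.abs(scores - c[0]) > np.abs(scores - c[1])).astype(int)
        for j in (0, 1):
            if np.any(labels == j):
                c[j] = scores[labels == j].mean()
    return labels  # 1 = high-score ("visible") cluster

def locate_target(patch_positions, scores):
    """Estimate the target centre from visible patches only:
    low-score (occluded) patches are excluded from the vote."""
    visible = two_means_1d(np.asarray(scores)) == 1
    return np.asarray(patch_positions)[visible].mean(axis=0)

# four patches; the last is occluded (low score, misleading position)
pos = [(10.0, 10.0), (12.0, 10.0), (11.0, 12.0), (40.0, 40.0)]
s = [0.9, 0.85, 0.8, 0.1]
center = locate_target(pos, s)
```

Excluding the occluded patch keeps the estimated centre near (11, 10.7) instead of being dragged toward the outlier at (40, 40).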
ISBN (print): 9781509006212
Experience replay is a promising approach to improve the learning efficiency of adaptive dynamic programming. A general model-free adaptive dynamic programming (ADP) approach with the experience replay technology is investigated in this paper to solve the optimal control problems in continuous state and action spaces. Both the critic network and action network are modeled with a feedforward neural network with one hidden layer. During the learning process, a number of recently observed data samples are recorded in a database. When updating the parameters of the neural networks, the data in the sample database are repeatedly used to update the weights of the action network and the critic network. Implementation details of the algorithm are given, and simulation experiments are utilized to verify the learning efficiency of the proposed approach.
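The replay mechanism itself can be sketched in a few lines: observed transitions go into a buffer and are sampled repeatedly to update a critic. The toy scalar system, the linear critic V(s) = w·s², and all gains below are illustrative assumptions, much simpler than the paper's two neural networks.

```python
import random
import numpy as np

random.seed(0)
np.random.seed(0)

def step(s, a):
    """Toy deterministic plant s' = 0.9*s + a with quadratic cost."""
    return 0.9 * s + a, s * s + a * a

# collect transitions with a random exploratory policy into a replay buffer
buffer = []
s = 1.0
for _ in range(200):
    a = np.random.uniform(-0.5, 0.5)
    s2, cost = step(s, a)
    buffer.append((s, a, cost, s2))
    s = s2

# critic V(s) = w * s^2, fitted by replayed temporal-difference updates:
# each stored sample is reused many times instead of being seen once
w, gamma, lr = 0.0, 0.95, 0.02
for _ in range(5000):
    s, a, cost, s2 = random.choice(buffer)        # replay a stored sample
    td = cost + gamma * w * s2 * s2 - w * s * s   # TD error under V
    w += lr * td * s * s                          # gradient step on w
```

The learning efficiency argument is that the same 200 interactions support thousands of critic updates, which is the role experience replay plays in the paper's ADP scheme.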
ISBN (print): 9781509006212
In this paper, an optimal self-learning control scheme for discrete-time nonlinear systems is developed using a new local value iteration based adaptive dynamic programming (ADP) algorithm. The developed local value iteration algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. In the developed algorithm, the iterative value function and iterative control law are updated on a subset of the state space. A new analysis method for the convergence property is presented to show that the iterative value functions converge to the optimum, and a convergence criterion for the local value iteration algorithm is given. A simulation example demonstrates the validity of the presented optimal control scheme.
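The key claim, that value iteration may start from an arbitrary positive semi-definite function and still reach the optimum, is easy to see on a toy finite MDP. The 3-state system below is invented; the paper treats general discrete-time nonlinear systems and, unlike this sketch, updates only a subset of states per iteration.

```python
import numpy as np

n_states, gamma = 3, 0.9
nxt  = np.array([[1, 2], [0, 2], [2, 1]])       # next_state[s][a]
cost = np.array([[1.0, 4.0], [2.0, 1.0], [0.0, 3.0]])  # cost[s][a]

def value_iteration(V0, sweeps=200):
    """Bellman backups from a given initial value function V0."""
    V = V0.astype(float).copy()
    for _ in range(sweeps):
        V = np.min(cost + gamma * V[nxt], axis=1)   # full backup over all states
    return V

V_zero = value_iteration(np.zeros(n_states))               # standard init
V_psd  = value_iteration(np.array([5.0, 7.0, 3.0]))        # arbitrary nonnegative init
```

Both runs converge to the same optimal value function V* = [1.9, 1.0, 0.0], illustrating the initialization-independence the paper proves in a far more general setting.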
ISBN (print): 9781467397155
This paper deals with optimal tracking control problems for a class of discrete-time nonlinear systems using a generalized policy iteration adaptive dynamic programming (ADP) algorithm. First, by system transformation, the optimal tracking control problem is transformed into an optimal regulation problem. Then the generalized policy iteration ADP algorithm is employed to obtain the optimal tracking controller, with convergence and optimality analysis. The developed algorithm uses the idea of two iteration procedures to obtain the iterative tracking control laws and the iterative value functions. Three neural networks, including the model network, critic network and action network, are used to implement the developed algorithm. At last, a simulation example is given to demonstrate the effectiveness of the developed method.
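The generalized policy iteration cycle, a few policy-evaluation backups followed by a greedy improvement, can be shown on a tiny invented MDP; the paper's version instead runs on continuous state spaces through three neural networks.

```python
import numpy as np

gamma = 0.9
nxt  = np.array([[0, 1], [1, 0]])           # next_state[s][a]
cost = np.array([[1.0, 0.5], [0.0, 2.0]])   # cost[s][a]

V = np.zeros(2)
pi = np.zeros(2, dtype=int)                 # initial (suboptimal) policy
idx = np.arange(2)
for _ in range(50):                         # GPI cycles
    for _ in range(3):                      # partial policy evaluation only
        V = cost[idx, pi] + gamma * V[nxt[idx, pi]]
    pi = np.argmin(cost + gamma * V[nxt], axis=1)   # greedy improvement
```

Evaluation is deliberately left incomplete (three backups per cycle, not an exact solve), yet the interleaved cycles still reach the optimal value function V* = [0.5, 0.0] and the optimal policy — the "two iteration procedures" idea in miniature.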
ISBN (print): 9781509006212
In this paper, an adaptive dynamic programming (ADP) based method is developed to optimize the electricity consumption of rooms in office buildings through optimal battery management. Rooms in office buildings are generally divided into office rooms, computer rooms, storage rooms, meeting rooms, etc., each of which has different characteristics of electricity consumption; in this paper, consumption is divided into that from sockets, lights and air-conditioners. The developed ADP-based method is elaborated, and different optimization strategies for electricity consumption in the different categories of rooms are proposed in accordance with it. Finally, a detailed case study on an office building is given to show the practical effect of the developed method.
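As a toy version of the battery-management idea, the sketch below uses exact dynamic programming over a discretised battery level to decide when to charge and discharge against a known price profile. The prices, load and battery model are invented; the paper's ADP method targets settings where such exact enumeration is impractical.

```python
# Minimise the electricity bill for a fixed load by shifting purchases
# to cheap periods via a small battery (backward dynamic programming).
prices = [0.2, 0.2, 0.8, 0.8]   # $/kWh over four periods (cheap then peak)
load = 1.0                       # kWh consumed each period
levels = range(3)                # battery holds 0, 1 or 2 kWh
actions = (-1, 0, 1)             # discharge, hold, charge (kWh per period)

V = {b: 0.0 for b in levels}     # terminal cost-to-go
for p in reversed(prices):       # backward sweep over periods
    V_new = {}
    for b in levels:
        best = float("inf")
        for a in actions:
            if b + a in levels:          # battery level stays feasible
                grid = load + a          # energy bought from the grid
                if grid >= 0:
                    best = min(best, p * grid + V[b + a])
        V_new[b] = best
    V = V_new

total_cost = V[0]                # optimal cost starting with an empty battery
```

The optimal plan charges during both cheap periods and discharges during both peak periods, paying 0.8 instead of the 2.0 a battery-free schedule would cost.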
ISBN (print): 9781467397155
In this paper, a new data-based self-learning control scheme is developed to solve infinite-horizon optimal control problems for continuous-time nonlinear systems. The developed optimal control scheme can be implemented without knowing the mathematical model of the system. According to the input-output data of the nonlinear system, a recurrent neural network (RNN) is employed to reconstruct the dynamics of the nonlinear system. Based on the RNN model of the system, a new two-person zero-sum adaptive dynamic programming (ADP) algorithm is developed to obtain the optimal control, where the reconstruction error and the system disturbance are considered as control inputs of the system. Three-layer neural networks are used to construct the critic and action networks, which approximate the performance index function and the control law, respectively. Finally, simulation results show the effectiveness of the developed data-based ADP method.
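The model-reconstruction step can be illustrated in miniature: identify unknown dynamics purely from input-output data, here with least squares on a scalar linear model standing in for the paper's RNN. The true parameters and noise level are invented.

```python
import numpy as np

np.random.seed(1)

A_true, B_true = 0.8, 0.5        # plant parameters, unknown to the designer
X, U, Y = [], [], []
x = 1.0
for _ in range(100):             # collect input-output data only
    u = np.random.randn()                                   # exploratory input
    y = A_true * x + B_true * u + 0.01 * np.random.randn()  # noisy next state
    X.append(x); U.append(u); Y.append(y)
    x = y

# least-squares fit of the data-driven model y = A_hat * x + B_hat * u
Phi = np.column_stack([X, U])
A_hat, B_hat = np.linalg.lstsq(Phi, np.array(Y), rcond=None)[0]
```

The recovered model (here accurate to a few thousandths) is what the subsequent zero-sum ADP design is run against, so no analytical plant model is ever needed.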
ISBN (print): 9781509006212
In this paper, a deep reinforcement learning (DRL) method is proposed to solve control problems that take raw image pixels as input states. A convolutional neural network (CNN), termed Q-CNN, is used to approximate the Q function. A pretrained network, obtained from a classification challenge on a vast set of natural images, initializes the parameters of Q-CNN. This initialization equips Q-CNN with image-representation features, so training can concentrate on the control task. The weights are tuned under the scheme of fitted Q iteration (FQI), an offline reinforcement learning method with a stable convergence property. To demonstrate the performance, a modified Food-Poison problem is simulated, in which the agent determines its movements based on its forward view. In the end, the algorithm successfully learns a satisfactory policy that performs better than previously reported results.
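Fitted Q iteration itself can be demonstrated independently of the CNN: an offline batch of transitions is swept repeatedly, regressing Q toward its Bellman targets. Here a lookup table stands in for the Q-CNN, and the two-state MDP is invented for illustration.

```python
import numpy as np

gamma = 0.9
# offline batch of (s, a, reward, s') transitions covering a 2-state,
# 2-action MDP; FQI never interacts with the environment again
batch = [(0, 0, 1.0, 0), (0, 1, 0.0, 1),
         (1, 0, 0.0, 0), (1, 1, 2.0, 1)]

Q = np.zeros((2, 2))
for _ in range(200):                               # FQI sweeps
    targets = {}
    for s, a, r, s2 in batch:
        targets[(s, a)] = r + gamma * Q[s2].max()  # Bellman target per sample
    for (s, a), t in targets.items():
        Q[s, a] = t     # the "fit" step; a regressor (e.g. a CNN) in general
```

Each sweep recomputes all targets from the frozen batch before refitting, which is what gives FQI its stable, offline character compared with online Q-learning.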
The origin of artificial intelligence is investigated, based on which the concepts of hybrid intelligence and parallel intelligence are presented. The paradigm shift in intelligence indicates the "new normal"...