Approximate dynamic programming (ADP) has been formulated and applied mainly to discrete-time systems. Expressing the ADP concept for continuous-time systems raises difficult issues related to sampling time and system model knowledge requirements. This paper presents a novel online adaptive critic (AC) scheme, based on ADP, to solve the infinite-horizon optimal control problem for continuous-time dynamical systems, thus bringing together concepts from the fields of computational intelligence and control theory. Only partial knowledge of the system model is used, as knowledge of the plant internal dynamics is not needed. The method is thus useful for determining the optimal controller for plants with partially unknown dynamics. It is shown that the proposed iterative ADP algorithm is in fact a quasi-Newton method for solving the underlying algebraic Riccati equation (ARE) of the optimal control problem. An initial gain that determines a stabilizing control policy is not required. In control-theoretic terms, this paper develops a direct adaptive control algorithm for obtaining the optimal control solution without knowing the system A matrix.
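The quasi-Newton character of such policy iterations can be illustrated with a Kleinman-style iteration on a known model. Note that, unlike the paper's partially model-free scheme, this sketch assumes full knowledge of both A and B, and the plant matrices below are invented for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Hypothetical stable plant (A stable, so K = 0 is a stabilizing initial policy)
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

K = np.zeros((1, 2))
for _ in range(20):
    Ak = A - B @ K
    # Policy evaluation: solve the Lyapunov equation Ak' P + P Ak = -(Q + K' R K)
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # Policy improvement: K = R^{-1} B' P (Newton step on the ARE)
    K = np.linalg.solve(R, B.T @ P)

# The iterates converge to the stabilizing solution of the ARE
P_are = solve_continuous_are(A, B, Q, R)
```

The value-function matrix P produced by repeated evaluation/improvement matches the direct ARE solution, which is the quasi-Newton interpretation the abstract refers to.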
Ant colony optimization was originally inspired by studies of the collective behavior of real ant colonies; it is robust and easily combined with other optimization methods. Although ant colony optimization enjoys rapidly growing popularity for the heuristic solution of hard combinatorial optimization problems, little research has been conducted on strategies for configuring its adjustable parameters, and its performance depends on an appropriate parameter setting that requires both human experience and, to some extent, luck. The memetic algorithm is a population-based heuristic search approach, modeled on cultural evolution, that can be used to solve combinatorial optimization problems. Building on an introduction to these two meta-heuristics, this paper develops a novel adjustable-parameter configuration strategy based on the memetic algorithm, and the feasibility and effectiveness of the approach are verified on the well-known traveling salesman problem (TSP). This hybrid approach is also valid for other types of combinatorial optimization problems.
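A minimal sketch of the idea on a tiny hypothetical TSP instance: a plain ant system whose parameters (alpha, beta, rho) are tuned by a small memetic loop that combines evolutionary selection with a local-search refinement. All instance data, population sizes, and mutation scales below are illustrative, not the paper's configuration:

```python
import math
import random

# Hypothetical 6-city TSP instance
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (3, 8)]
n = len(cities)
d = [[math.dist(a, b) for b in cities] for a in cities]

def aco(alpha, beta, rho, ants=8, iters=25, seed=0):
    """Plain ant system; returns the best tour length found."""
    rng = random.Random(seed)
    tau = [[1.0] * n for _ in range(n)]
    best = float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = [tau[i][j] ** alpha * (1.0 / d[i][j]) ** beta for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            length = sum(d[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            best = min(best, length)
        for i in range(n):                 # pheromone evaporation
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for length, tour in tours:         # pheromone deposit
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best

# Memetic outer loop over (alpha, beta, rho): select, mutate, refine locally
rng = random.Random(1)
pop = [(rng.uniform(0.5, 2), rng.uniform(1, 5), rng.uniform(0.1, 0.9))
       for _ in range(4)]
for _ in range(3):
    scored = sorted((aco(*p), p) for p in pop)
    parents = [p for _, p in scored[:2]]
    children = []
    for a, b, r in parents:
        child = (a + rng.gauss(0, 0.2), b + rng.gauss(0, 0.4),
                 min(max(r + rng.gauss(0, 0.1), 0.05), 0.95))
        # Local-search step: keep a small deterministic tweak only if it helps
        tweak = (child[0] + 0.1, child[1], child[2])
        child = tweak if aco(*tweak) < aco(*child) else child
        children.append(child)
    pop = parents + children

best_params = min(pop, key=lambda p: aco(*p))
best_len = aco(*best_params)
```

The local-search step inside the evolutionary loop is what makes the scheme memetic rather than purely genetic: each offspring's parameter vector is refined individually before competing for survival.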
Autonomous drive of a wheeled mobile robot (WMR) requires implementing velocity and path-tracking control subject to complex dynamical constraints. Conventionally, this control design is obtained by analysis and synthesis of the WMR system. This paper presents a dual heuristic programming (DHP) adaptive critic design of the motion control system that enables the WMR to achieve the control objective simply by learning through trial. The design consists of an adaptive critic velocity neuro-control loop and a posture neuro-control loop. The neural weights in the velocity neuro-controller (VNC) are corrected with the DHP adaptive critic method. The designer simply expresses the control objective with a utility function. The VNC learns by sequential optimization to satisfy the control objective. The posture neuro-controller (PNC) approximates the inverse velocity model of the WMR so as to map planned positions to desired velocities. Supervised drive of the WMR at varying velocities supplies training samples for the PNC and VNC to set up the neural weights. In autonomous drive, the learning mechanism keeps improving the PNC and VNC. The design is evaluated on an experimental WMR. The excellent results confirm that the DHP adaptive critic motion control design enables the WMR to develop its control ability autonomously.
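The distinguishing feature of DHP is that the critic learns the cost derivative λ(x) = ∂J/∂x rather than the cost J itself. For a hypothetical linear plant with quadratic utility and a fixed (zero) control, the critic's fixed point can be checked against the discrete Lyapunov equation; the plant and step size below are invented for the sketch:

```python
import numpy as np

# Hypothetical stable discrete-time plant x' = A x, with utility U(x) = x' Q x
A = np.array([[0.9, 0.1], [0.0, 0.8]])
Q = np.eye(2)

# DHP critic: lambda_hat(x) = W x approximates dJ/dx. Along x' = A x the
# target is  lambda_target(x) = dU/dx + (dx'/dx)' lambda_hat(x') = (2Q + A' W A) x
W = np.zeros((2, 2))
for _ in range(500):
    W = W + 0.5 * ((2 * Q + A.T @ W @ A) - W)   # move W toward the target map

# At the fixed point W = 2P, where P solves the discrete Lyapunov equation
#   A' P A - P + Q = 0,  i.e. J(x) = x' P x and dJ/dx = 2 P x
P = W / 2
```

In the full design the target derivative is computed through the plant model and utility gradients sample by sample; the closed-form map above just makes the fixed point visible.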
ISBN: (Print) 9781424405220
This paper addresses the call admission control (CAC) problem for multiple services in the uplink of a cellular system using direct-sequence code division multiple access (DS-CDMA), taking into account the physical-layer channel and the receiver structure at the base station. The problem is formulated as a semi-Markov decision process (SMDP) with constraints on the blocking probabilities and the signal-to-interference ratio (SIR). The objective is to find a CAC policy that maximizes throughput while satisfying these quality-of-service (QoS) constraints. To obtain a near-optimal CAC policy, an online decision-making algorithm based on actor-critic temporal-difference learning from a recent paper is modified by parameterizing the reward signal to handle the QoS constraints. The proposed algorithm circumvents the computational complexity experienced in conventional dynamic programming techniques.
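A toy illustration of the reward-parameterization idea: a tabular actor-critic on a simplified occupancy chain, where a fixed penalty weight folded into the reward stands in for the paper's QoS constraints. All dynamics, probabilities, and constants below are invented for the sketch and are not the paper's SMDP formulation:

```python
import math
import random

rng = random.Random(0)
S = 4                        # occupancy levels 0..3 (hypothetical cell capacity)
ETA = 0.5                    # penalty weight standing in for the QoS constraint
GAMMA = 0.95

theta = [[0.0, 0.0] for _ in range(S)]   # actor preferences: 0 = reject, 1 = accept
V = [0.0] * S                            # critic state values

def policy(s):
    """Softmax over the two admission actions in state s."""
    m = max(theta[s])
    e = [math.exp(t - m) for t in theta[s]]
    z = sum(e)
    return [x / z for x in e]

s = 0
for _ in range(20000):
    probs = policy(s)
    a = rng.choices([0, 1], weights=probs)[0]
    # A call arrives w.p. 0.6; accepting it earns throughput reward
    accepted = 1 if (a == 1 and s < S - 1 and rng.random() < 0.6) else 0
    # Parameterized reward: throughput minus an occupancy penalty (QoS proxy)
    r = accepted - ETA * s
    departed = 1 if rng.random() < 0.3 * s else 0
    s2 = min(max(s + accepted - departed, 0), S - 1)
    delta = r + GAMMA * V[s2] - V[s]     # TD error drives critic and actor alike
    V[s] += 0.05 * delta
    for b in range(2):
        grad = (1.0 if b == a else 0.0) - probs[b]
        theta[s][b] += 0.01 * delta * grad
    s = s2
```

The paper instead adapts the reward parameter online so that the blocking-probability and SIR constraints are met; the fixed ETA here only shows where that parameter enters the learning loop.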
ISBN: (Print) 1424407060
In this work, we design a policy-iteration-based Q-learning approach for online optimal control of ionized hypersonic flow at the inlet of a scramjet engine. Magneto-hydrodynamics (MHD) has recently been proposed as a means of flow control in various aerospace problems. This mechanism applies external magnetic fields to ionized flows to achieve desired flow behavior. The applications range from external flow control, for producing forces and moments on the air vehicle, to internal flow control designs, which compress the flow and extract electrical energy from it. The current work addresses the latter problem of internal flow control. The baseline controller and Q-function parameterizations are derived from an off-line design based on mixed predictive control and dynamic programming. The nominal optimal neural-network Q-function and controller are updated online to handle modeling errors in the off-line design. The online implementation investigates key concerns regarding the conservativeness of the update methods. Value-iteration-based update methods have been shown to converge in a probabilistic sense. However, simulation results illustrate that realistic implementations of these methods face significant training difficulties, often failing to learn the optimal controller online. The present approach therefore uses a policy-iteration-based update, which has time-based convergence guarantees. Given the special finite-horizon nature of the problem, three novel online update algorithms are proposed. These algorithms incorporate different mixes of concepts, including bootstrapping and forward and backward dynamic-programming update rules. Simulation results illustrate the success of the proposed update algorithms in re-optimizing the performance of the MHD generator during system operation.
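The backward dynamic-programming Q-update that serves as one ingredient can be sketched on a hypothetical finite-horizon toy problem, where the greedy policy with respect to the exact Q-tables provably recovers the optimal cost (the dynamics and costs below are invented for illustration):

```python
STATES = range(5)
ACTS = (-1, 0, 1)
T = 4                                    # horizon of the toy problem

def step(x, a):
    """Deterministic toy dynamics, clipped to the state set."""
    return min(max(x + a, 0), 4)

def cost(x, a):
    return x * x + 0.1 * a * a

# Backward recursion: V_T(x) = x^2 (terminal cost), then for t = T-1..0
#   Q_t(x, a) = cost(x, a) + V_{t+1}(step(x, a)),   V_t(x) = min_a Q_t(x, a)
V = [{x: 0.0 for x in STATES} for _ in range(T + 1)]
for x in STATES:
    V[T][x] = x * x
Q = [{(x, a): 0.0 for x in STATES for a in ACTS} for _ in range(T)]
for t in range(T - 1, -1, -1):
    for x in STATES:
        for a in ACTS:
            Q[t][(x, a)] = cost(x, a) + V[t + 1][step(x, a)]
        V[t][x] = min(Q[t][(x, a)] for a in ACTS)

# Greedy rollout from x0 = 4 using the time-indexed Q-tables
x, total = 4, 0.0
for t in range(T):
    a = min(ACTS, key=lambda u: Q[t][(x, u)])
    total += cost(x, a)
    x = step(x, a)
total += x * x
```

Because the horizon is finite, the Q-function is time-indexed and the recursion runs backward from the terminal cost, which is the structural feature the paper's update algorithms exploit.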
The control of probabilistic Boolean networks as a model of genetic regulatory networks is formulated as an optimal stochastic control problem and has been solved using dynamic programming; however, the proposed methods fail when the number of genes in the network grows beyond a small number. Their complexity increases exponentially with the number of genes, owing to the estimation of model-dependent probability distributions and the curse of dimensionality associated with the dynamic programming algorithm. We propose a model-free approximate stochastic control method based on reinforcement learning that mitigates these twin curses and provides polynomial time complexity. By using a simulator, the proposed method eliminates the complexity of estimating the probability distributions. The method can be applied to networks for which dynamic programming cannot be used owing to computational limitations. Experimental results demonstrate that the performance of the method is close to that of optimal stochastic control.
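A sketch of the simulator-based, model-free idea on a hypothetical two-gene PBN: tabular Q-learning queries only sampled transitions, never the transition probabilities themselves. The constituent networks, intervention model, and reward are all invented for illustration:

```python
import random

rng = random.Random(0)

# Hypothetical 2-gene PBN: state = (g1, g2); one of two constituent Boolean
# networks is selected with probability 0.5 at each step
def net_a(g1, g2):
    return (g2, g1)

def net_b(g1, g2):
    return (g1 and g2, g1 or g2)

def simulate(state, action):
    """Simulator: the learner sees only sampled transitions and rewards."""
    g1, g2 = state
    if action == 1:
        g2 = 1 - g2                       # intervention: flip the control gene
    g1, g2 = (net_a if rng.random() < 0.5 else net_b)(g1, g2)
    reward = (-1 if g1 else 0) - 0.1 * action   # penalize the bad phenotype
    return (g1, g2), reward

states = [(a, b) for a in (0, 1) for b in (0, 1)]
Q = {(s, a): 0.0 for s in states for a in (0, 1)}
s = (1, 1)
for _ in range(50000):
    # Epsilon-greedy action selection over the two interventions
    a = rng.choice((0, 1)) if rng.random() < 0.1 else max((0, 1), key=lambda u: Q[(s, u)])
    s2, r = simulate(s, a)
    Q[(s, a)] += 0.05 * (r + 0.9 * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
    s = s2
```

The point of the construction is that the per-step cost is independent of how many constituent networks define the PBN: the simulator replaces the exponentially expensive estimation of the transition distributions.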
ISBN: (Print) 1424408296;97
The goal of the work described in this paper is to develop an optimal control technique based on a cell-mapping technique combined with the Q-learning reinforcement learning method to control wheeled mobile vehicles. The approach manages four state variables because a dynamic model is used instead of a kinematic model, which could be handled with fewer variables. This new solution can be applied to nonlinear continuous systems where reinforcement learning methods face multiple constraints. Emphasis is given to the new combination of techniques, which produces satisfactory results when applied to optimal control problems. The proposed algorithm is very robust to any change in the vehicle parameters because the vehicle model is estimated in real time from received experience.
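A rough sketch of the cell-mapping/Q-learning combination, assuming a hypothetical one-dimensional vehicle model: the continuous state is quantized into cells, and tabular Q-learning runs over cell indices. The grid resolution, dynamics, and reward below are illustrative, not the paper's four-state vehicle model:

```python
import random

rng = random.Random(0)
DT, NBINS, GAMMA = 0.1, 9, 0.99
ACTS = (-1.0, 1.0)                       # bang-bang acceleration commands

def cell(x, v):
    """Cell mapping: quantize the continuous (position, velocity) state."""
    q = lambda z: min(max(int((z + 1.0) / 2.0 * NBINS), 0), NBINS - 1)
    return q(x), q(v)

def step(x, v, a):
    """Toy point-mass dynamics, saturated to the box [-1, 1] x [-1, 1]."""
    v = min(max(v + a * DT, -1.0), 1.0)
    x = min(max(x + v * DT, -1.0), 1.0)
    done = abs(x) < 0.1 and abs(v) < 0.1  # reached the goal cell region
    return x, v, (0.0 if done else -1.0), done

Q = {}
for ep in range(400):
    x, v = rng.uniform(-1, 1), rng.uniform(-1, 1)
    for t in range(200):
        c = cell(x, v)
        if rng.random() < 0.2:            # epsilon-greedy exploration
            a = rng.choice(ACTS)
        else:
            a = max(ACTS, key=lambda u: Q.get((c, u), 0.0))
        x, v, r, done = step(x, v, a)
        c2 = cell(x, v)
        target = r if done else r + GAMMA * max(Q.get((c2, u), 0.0) for u in ACTS)
        q0 = Q.get((c, a), 0.0)
        Q[(c, a)] = q0 + 0.1 * (target - q0)
        if done:
            break
```

Learning over cells rather than raw continuous states is what makes the table finite; the trade-off is that all states falling in one cell share a single Q-value, so the grid resolution controls the accuracy of the resulting policy.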