Practical stabilization of input-affine systems in the presence of measurement errors and input constraints is considered in this brief note. Assuming that a Lyapunov function and a stabilizing control exist for an in...
This work studies robustness to system disturbance and measurement noise of some popular general practical stabilization techniques, namely, Dini aiming, optimization-based stabilization and inf-convolution stabilizat...
Model predictive control (MPC) is the standard approach to infinite-horizon optimal control which usually optimizes a finite initial fragment of the cost function so as to make the problem computationally tractable. G...
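The receding-horizon idea mentioned in the entry above can be illustrated with a minimal sketch; the scalar dynamics, quadratic stage cost, and horizon length below are illustrative assumptions and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal receding-horizon sketch: only a finite initial fragment of the
# cost is optimized at every step, and just the first input is applied.
# Dynamics f, stage cost l, and horizon N are illustrative placeholders.

def f(x, u):
    return x + 0.1 * (-x + u)          # hypothetical scalar dynamics

def l(x, u):
    return x**2 + 0.1 * u**2           # hypothetical quadratic stage cost

def mpc_input(x0, N=10):
    def finite_cost(u_seq):
        x, J = x0, 0.0
        for u in u_seq:                # accumulate the finite cost fragment
            J += l(x, u)
            x = f(x, u)
        return J
    return minimize(finite_cost, np.zeros(N)).x[0]

x = 1.0
for _ in range(30):                    # receding-horizon closed loop
    x = f(x, mpc_input(x))
```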
A multiple time-scale hierarchical control approach with guarantees for active fault diagnosability (detection and isolation) is presented. Robust positive invariance notions are used to provide a set-membership test ...
We present an approach to design stabilizing controllers for a set of linear systems without restrictions regarding their modeling order. To this end, the systems are treated as abstract objects in the space of the ν-gap metric. Via a cluster analysis, the set of systems is split into ν-gap-similar clusters which are treated separately. For this purpose, we provide an algorithm that constructs an explicit prototype system by generalizing the information of a given set of systems. Applying this algorithm to each cluster, a set of prototype systems is obtained. Given these prototypes, we design controllers such that all systems assigned to a cluster are stabilized by the corresponding controller. The approach is demonstrated for a set of 80 linear systems.
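As a rough illustration of the clustering step, the sketch below groups plants by their pairwise ν-gap distances; the function nu_gap is assumed to be supplied (e.g., computed from normalized coprime factorizations), and the complete-linkage threshold is an illustrative design choice rather than the paper's algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_by_nu_gap(systems, nu_gap, threshold):
    """Group systems whose pairwise nu-gap distances stay below `threshold`.

    `systems` is a list of plant models and `nu_gap(P1, P2)` is assumed to
    return the nu-gap metric between two plants (not implemented here).
    """
    n = len(systems)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = nu_gap(systems[i], systems[j])
    # complete linkage bounds the maximal intra-cluster nu-gap by the threshold
    labels = fcluster(linkage(squareform(D), method="complete"),
                      t=threshold, criterion="distance")
    return {c: [systems[i] for i in range(n) if labels[i] == c]
            for c in set(labels)}
```

A controller would then be designed for the prototype of each cluster and applied to all systems assigned to that cluster.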
This work is concerned with the application of reinforcement learning (RL) techniques to adaptive dynamic programming (ADP) for systems with partly unknown models. In ADP, one seeks to approximate an optimal infinite-horizon cost function, the value function. Such an approximation, i.e., a critic, does not in general yield a stabilizing control policy, i.e., a stabilizing actor. Guaranteeing stability of nonlinear systems under RL/ADP is still an open issue. In this work, it is suggested to use a stability constraint directly in the actor-critic structure. The system model considered in this work is assumed to be only partially known; specifically, it contains an unknown parameter vector. A suitable stabilizability assumption for such systems is an adaptive Lyapunov function, which is commonly assumed in adaptive control. The current approach formulates a stability constraint based on an adaptive Lyapunov function to ensure closed-loop stability. Convergence of the actor and critic parameters in a suitable sense is shown. A case study demonstrates how the suggested algorithm preserves closed-loop stability while at the same time improving infinite-horizon performance.
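The role of the stability constraint can be sketched schematically as follows; the adaptive Lyapunov function V, model estimate f_hat, nominal adaptive controller, and decrease rate are hypothetical placeholders, not the paper's exact algorithm.

```python
import numpy as np

# Schematic action selection with a Lyapunov-based stability constraint.
# V(x, theta_hat) plays the role of an adaptive (control) Lyapunov function.
# The actor's proposal is accepted only if a predicted decrease condition
# holds; otherwise a nominal adaptive stabilizer is used. The critic update
# proceeds separately and is omitted here.

def constrained_action(x, theta_hat, actor, f_hat, V, nominal_control,
                       dt=0.01, decay=1e-3):
    u_rl = actor(x)                                  # actor's proposed input
    x_next = x + dt * f_hat(x, u_rl, theta_hat)      # one-step model prediction
    if V(x_next, theta_hat) - V(x, theta_hat) <= -dt * decay * np.dot(x, x):
        return u_rl                                  # constraint satisfied
    return nominal_control(x, theta_hat)             # fall back to stabilizer

# toy usage for a scalar system x_dot = theta*x + u with unknown theta
u = constrained_action(x=1.0, theta_hat=0.5,
                       actor=lambda x: -0.1 * x,
                       f_hat=lambda x, u, th: th * x + u,
                       V=lambda x, th: 0.5 * x**2,
                       nominal_control=lambda x, th: -(th + 1.0) * x)
```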
Fertilization is commonly used to increase harvests. The lack of knowledge of soil properties and the excessive use of fertilizers can result in overfertilization. Current sensor technology is able to measure the concentrations of some of the involved substances only at selected locations and depths. Point measurements of adjacent sensors in coarse sensor networks can be used to infer the nitrate concentrations in the sensor surroundings. For this purpose, a high-gain observer is proposed. Models of the nitrification process as well as the measurement dynamics for the observer design are derived and discretized on a grid to obtain a system of ordinary differential equations. It is shown that the nonlinearities of the model can be bounded and that the observer gain can then be computed via linear matrix inequalities. Furthermore, a model reduction is proposed, which allows the consideration of more grid points. A simulation study demonstrates the proposed approach.
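A bare-bones version of such an observer on the semi-discretized model might look as follows; the matrices, the nonlinearity phi, and the gain L are placeholders here, whereas in the paper the gain is obtained from linear matrix inequalities.

```python
import numpy as np

# Sketch of an observer step for a semi-discretized model
#   x_dot = A x + B u + phi(x),   y = C x,
# where phi collects the bounded reaction (nitrification) nonlinearities.
# The gain L is assumed given; in the paper it results from an LMI.

def observer_step(x_hat, u, y, A, B, C, L, phi, dt):
    x_hat_dot = A @ x_hat + B @ u + phi(x_hat) + L @ (y - C @ x_hat)
    return x_hat + dt * x_hat_dot        # explicit Euler step on the grid ODE
```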
Stability under model predictive control (MPC) schemes is frequently ensured by terminal ingredients. Employing a (control) Lyapunov function as the terminal cost constitutes a common choice. Learning-based methods may be used to construct the terminal cost by relating it to, for instance, an infinite-horizon optimal control problem in which the optimal cost is a Lyapunov function. Value iteration, an approximate dynamic programming (ADP) approach, is one particular cost approximation technique. In this work, we merge the results of terminally unconstrained predictive control and approximate value iteration to draw benefits from both fields. A prediction horizon is derived, depending on factors such as approximation-related errors, that renders the closed loop asymptotically stable and further allows a suboptimality estimate in comparison to the infinite-horizon optimal cost. The result extends recent studies on predictive control with ADP-based terminal costs by not requiring a local initial stabilizing controller. We compare this controller in simulation with other terminal cost options and show that the proposed approach leads to a shorter minimal horizon than previous results.
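A tabular caricature of the value-iteration step used to build such a terminal cost is given below; the grids, dynamics, and stage cost are illustrative assumptions, and the approximation error they introduce is precisely what the derived prediction horizon has to accommodate.

```python
import numpy as np

# Tabular value iteration producing an approximate cost-to-go that can serve
# as the terminal cost of a (terminally unconstrained) MPC scheme.
# The grids, dynamics f, and stage cost l below are illustrative placeholders.

def value_iteration(states, inputs, f, l, iters=100):
    V = np.zeros(len(states))
    for _ in range(iters):
        V_new = np.empty_like(V)
        for i, x in enumerate(states):
            # Bellman update: V_{k+1}(x) = min_u [ l(x, u) + V_k(f(x, u)) ]
            V_new[i] = min(l(x, u) + V[np.argmin(np.abs(states - f(x, u)))]
                           for u in inputs)
        V = V_new
    return V

# toy example: x_next = 0.9 x + u with quadratic stage cost
states = np.linspace(-1.0, 1.0, 101)
inputs = np.linspace(-0.5, 0.5, 21)
V_term = value_iteration(states, inputs, lambda x, u: 0.9 * x + u,
                         lambda x, u: x**2 + u**2)
```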
Reliably determining system trajectories is essential in many analysis and control design approaches. To this end, an initial value problem usually has to be solved via numerical algorithms which rely on a certain software realization. Because software realizations can be error-prone, proof assistants may be used to verify the underlying mathematical concepts and corresponding algorithms. In this work we present a computer-certified formalization of the solution of the initial value problem of ordinary differential equations. The development is carried out in the framework of constructive analysis, and the proofs are written in the Minlog proof system. We show the extraction of a program which solves an ODE numerically and provide some possible optimizations regarding efficiency. Finally, we provide numerical experiments to demonstrate how programs of a certain high level of abstraction can be obtained efficiently. The presented concepts may also be viewed as preliminary work toward the development of a formalized nonlinear control theory, offering the possibility of computer-assisted controller design and program extraction for the controller implementation.
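For orientation, the constructive content behind the initial value problem can be illustrated by a plain, unverified Picard iteration in Python; the extracted Minlog program realizes a comparable approximation, but with certified error bounds that this sketch does not provide.

```python
# Unverified illustration of the constructive content of the IVP result:
# Picard iteration x_{n+1}(t) = x0 + \int_0^t f(s, x_n(s)) ds, with the
# integral approximated by left Riemann sums on a fixed grid.

def picard(f, x0, t_grid, iterations=8):
    x = [x0 for _ in t_grid]                  # initial guess: constant x0
    for _ in range(iterations):
        x_new, acc = [x0], 0.0
        for k in range(1, len(t_grid)):
            dt = t_grid[k] - t_grid[k - 1]
            acc += f(t_grid[k - 1], x[k - 1]) * dt
            x_new.append(x0 + acc)
        x = x_new
    return x

# example: x' = x, x(0) = 1 on [0, 1]; the final value approximates e
ts = [i / 100 for i in range(101)]
approx_e = picard(lambda t, x: x, 1.0, ts)[-1]
```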