Robust Policy Iteration for Continuous-Time Linear Quadratic Regulation

Authors: Pang, Bo; Bian, Tao; Jiang, Zhong-Ping

Affiliations: NYU Tandon School of Engineering, Department of Electrical & Computer Engineering, Control & Networks Lab, Brooklyn, NY 11201, USA; Bank of America Merrill Lynch, New York, NY 10036, USA

Publication: IEEE TRANSACTIONS ON AUTOMATIC CONTROL

Year/Volume/Issue: 2022, Vol. 67, No. 1

Pages: 504-511

Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0811 [Engineering - Control Science & Engineering]

Funding: U.S. National Science Foundation [ECCS-1501044, EPCN-1903781]

Keywords: Robustness; Biological system modeling; Optimal control; Numerical stability; Heuristic algorithms; Approximation algorithms; Symmetric matrices; Adaptive dynamic programming; Adaptive optimal control; Data-driven control; Policy iteration; Reinforcement learning

Abstract: This article studies the robustness of policy iteration in the context of the continuous-time infinite-horizon linear quadratic regulator (LQR) problem. It is shown that Kleinman's policy iteration algorithm is small-disturbance input-to-state stable, a property that is stronger than Sontag's local input-to-state stability but weaker than global input-to-state stability. More precisely, whenever the error in each iteration is bounded and small, the solutions of the policy iteration algorithm are also bounded and enter a small neighborhood of the optimal solution of the LQR problem. Based on this result, an off-policy data-driven policy iteration algorithm for the LQR problem is shown to be robust when the system dynamics are subject to small additive unknown bounded disturbances. The theoretical results are validated by a numerical example.
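The (exact, disturbance-free) Kleinman policy iteration that the abstract refers to alternates policy evaluation via a Lyapunov equation with policy improvement. A minimal sketch is given below; the system matrices A, B, the weights Q, R, and the initial stabilizing gain K0 are illustrative assumptions, not taken from the paper.

```python
# Sketch of Kleinman's policy iteration for continuous-time LQR.
# All numerical data here are assumed for illustration only.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

def kleinman_pi(A, B, Q, R, K0, iters=20):
    """Policy evaluation: solve (A - B K)^T P + P (A - B K) + Q + K^T R K = 0.
    Policy improvement: K <- R^{-1} B^T P. Requires a stabilizing K0."""
    K = K0
    for _ in range(iters):
        Ak = A - B @ K                                  # closed-loop matrix
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)                 # improved gain
    return P, K

# Illustrative 2-state system (assumed); A is open-loop unstable.
A = np.array([[0.0, 1.0], [-1.0, 2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.array([[0.0, 4.0]])   # stabilizing: eig(A - B K0) = {-1, -1}

P, K = kleinman_pi(A, B, Q, R, K0)
P_star = solve_continuous_are(A, B, Q, R)               # ARE solution
print(np.allclose(P, P_star, atol=1e-6))                # → True
```

With exact Lyapunov solves the iterates converge quadratically to the algebraic Riccati equation solution; the paper's small-disturbance input-to-state stability result concerns the case where each evaluation step is corrupted by a small bounded error.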
