Author affiliations: NYU Tandon School of Engineering, Dept. of Electrical & Computer Engineering, Control & Networks Lab, Brooklyn, NY 11201, USA; Bank of America Merrill Lynch, New York, NY 10036, USA
Publication: IEEE TRANSACTIONS ON AUTOMATIC CONTROL
Year/Volume/Issue: 2022, Vol. 67, No. 1
Pages: 504-511
Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]
Funding: U.S. National Science Foundation [ECCS-1501044, EPCN-1903781]
Keywords: Robustness; Biological system modeling; Optimal control; Numerical stability; Heuristic algorithms; Approximation algorithms; Symmetric matrices; Adaptive dynamic programming; adaptive optimal control; data-driven control; policy iteration; reinforcement learning; robustness
Abstract: This article studies the robustness of policy iteration in the context of the continuous-time infinite-horizon linear quadratic regulator (LQR) problem. It is shown that Kleinman's policy iteration algorithm is small-disturbance input-to-state stable, a property that is stronger than Sontag's local input-to-state stability but weaker than global input-to-state stability. More precisely, whenever the error in each iteration is bounded and small, the solutions of the policy iteration algorithm are also bounded and enter a small neighborhood of the optimal solution of the LQR problem. Based on this result, an off-policy data-driven policy iteration algorithm for the LQR problem is shown to be robust when the system dynamics are subject to small additive unknown bounded disturbances. The theoretical results are validated by a numerical example.
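For context, the (exact, disturbance-free) Kleinman policy iteration referenced in the abstract alternates between a policy-evaluation step, which solves a Lyapunov equation for the current stabilizing gain, and a policy-improvement step, which updates the gain. The sketch below is a minimal illustration of that classical iteration on a double-integrator example; the system matrices, the initial gain `K0`, and the iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

def kleinman_policy_iteration(A, B, Q, R, K0, iters=10):
    """Classical Kleinman policy iteration for continuous-time LQR.

    Requires K0 to stabilize A - B @ K0; each step solves the Lyapunov
    equation (A - B K)^T P + P (A - B K) = -(Q + K^T R K), then updates
    K = R^{-1} B^T P.
    """
    K = K0
    for _ in range(iters):
        Ak = A - B @ K
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)
    return P, K

# Illustrative double-integrator example (an assumption, not the paper's).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
K0 = np.array([[1.0, 2.0]])  # stabilizing initial gain: A - B K0 has eigenvalues -1, -1

P, K = kleinman_policy_iteration(A, B, Q, R, K0)
P_star = solve_continuous_are(A, B, Q, R)  # reference solution of the Riccati equation
```

With exact Lyapunov solves the iterates converge to the Riccati solution `P_star`; the paper's contribution concerns what happens when each evaluation step is instead perturbed by a small bounded error.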