Author affiliation: Department of Computer Science and Engineering, Sogang University, Seoul 121-742, Republic of Korea
Publication: IEEE Transactions on Automatic Control (IEEE Trans. Autom. Control)
Year/Volume/Issue: 2024, Vol. 70, No. 6
Pages: 4031-4036
Subject classification: 0711 [Science - Systems Science]; 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0714 [Science - Statistics (degrees conferrable in science or economics)]; 0701 [Science - Mathematics]
Abstract: An important question about the finite constrained Markov decision process (CMDP) problem is whether there exists a condition under which a uniformly-optimal and uniformly-feasible policy, one that achieves the optimal value at all initial states, exists in the set of deterministic, history-independent, and stationary policies, and whether the CMDP problem under that condition can be solved by dynamic programming (DP). This matters because the crux of the unconstrained MDP theory developed by Bellman lies in the answer to the same existence question for such an optimal policy in the MDP setting. Although the topic of CMDPs has been studied over the years, there has been no relevant responsive work since this open question was raised about three decades ago in the literature. We establish, as a partial answer to this question, that any finite CMDP problem M_c inherently contains a DP-structure in its subordinate CMDP problem, induced from the parameters of M_c, and that this subordinate problem is DP-solvable. We derive a policy-iteration-type algorithm for solving the subordinate problem, providing an approximate solution to M_c, or to M_c with a fixed initial state. © 1963-2012 IEEE.
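The abstract builds on the classical DP machinery for unconstrained finite MDPs, namely policy iteration. As background, here is a minimal sketch of standard policy iteration on a randomly generated toy MDP; the transition matrix `P`, reward matrix `R`, and all parameter values are illustrative assumptions, and the paper's actual policy-iteration-type algorithm for subordinate CMDPs is a distinct variant not reproduced here.

```python
import numpy as np

# Toy finite MDP (illustrative data only, not from the paper).
n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
# P[a, s, t]: probability of moving s -> t under action a; R[s, a]: reward.
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.standard_normal((n_states, n_actions))

def policy_iteration(P, R, gamma):
    """Classical policy iteration for an unconstrained finite MDP."""
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)        # start: action 0 everywhere
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
        P_pi = P[policy, np.arange(n_states)]     # (n_states, n_states)
        R_pi = R[np.arange(n_states), policy]
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        # Policy improvement: act greedily with respect to the Q-values.
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):    # no change => optimal
            return policy, V
        policy = new_policy

pi_star, V_star = policy_iteration(P, R, gamma)
print("optimal policy:", pi_star)
print("optimal values:", V_star)
```

The returned policy is deterministic, history-independent, and stationary, exactly the policy class whose sufficiency for CMDPs the abstract's open question concerns; in the unconstrained case, Bellman's theory guarantees such a policy is uniformly optimal over all initial states.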