Author affiliations: Swiss Data Science Center, Paul Scherrer Institute, Villigen, Switzerland; Institute for Machine Learning, D-INFK, ETH Zürich, Zurich, Switzerland; STI-IGM-Sycamore, EPFL, Lausanne, Switzerland
Published in: The Journal of Machine Learning Research
Year/Volume/Issue: 2024, Vol. 25, No. 1
Pages: 8101-8154
Subject classification: 08 [Engineering] 0812 [Engineering: Computer Science and Technology (degrees conferrable in Engineering or Science)]
Keywords: stochastic optimization; safe learning; black-box optimization; smooth constrained optimization; reinforcement learning
Abstract: Optimizing noisy functions online, when evaluating the objective requires experiments on a deployed system, is a crucial task arising in manufacturing, robotics and various other domains. Often, constraints on safe inputs are unknown ahead of time, and we only obtain noisy information indicating how close we are to violating the constraints. Yet, safety must be guaranteed at all times, not only for the final output of the algorithm. We introduce a general approach for seeking a stationary point in high-dimensional non-linear stochastic optimization problems in which maintaining safety during learning is crucial. Our approach, called LB-SGD, is based on applying stochastic gradient descent (SGD) with a carefully chosen adaptive step size to a logarithmic barrier approximation of the original problem. We provide a complete convergence analysis for non-convex, convex, and strongly-convex smooth constrained problems, with first-order and zeroth-order feedback. Our approach yields efficient updates and scales better with dimensionality compared to existing approaches. We empirically compare the sample complexity and the computational cost of our method with existing safe learning approaches. Beyond synthetic benchmarks, we demonstrate the effectiveness of our approach on minimizing constraint violation in policy search tasks in safe reinforcement learning (RL).
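The core idea the abstract describes can be illustrated with a minimal deterministic sketch (this is not the authors' LB-SGD implementation; the function names, the barrier weight `eta`, and the step-size rule below are illustrative assumptions): to minimize f(x) subject to g(x) <= 0, descend the log-barrier surrogate B(x) = f(x) - eta * log(-g(x)), capping the step so an iterate can never cross the constraint boundary.

```python
import numpy as np

def log_barrier_descent(f_grad, g, g_grad, x0, eta=0.5, base_step=0.1, t_max=500):
    """Gradient descent on the log-barrier surrogate of a single-constraint
    problem: minimize f(x) s.t. g(x) <= 0 (illustrative sketch only).

    The step size is capped so each move is at most half the current
    distance to the constraint boundary, keeping every iterate strictly
    feasible -- a simplified stand-in for LB-SGD's adaptive step size.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(t_max):
        gx = g(x)                      # strictly negative while feasible
        # Gradient of B(x) = f(x) - eta * log(-g(x)).
        grad = f_grad(x) + eta * g_grad(x) / (-gx)
        norm = np.linalg.norm(grad)
        if norm < 1e-8:                # barrier stationary point reached
            break
        # Cap the step: move at most half the distance to the boundary.
        step = min(base_step, 0.5 * float(-gx) / (norm * np.linalg.norm(g_grad(x)) + 1e-12))
        x = x - step * grad
    return x

# Toy problem: minimize (x - 2)^2 subject to x - 1 <= 0, starting feasibly
# at x = 0.  The barrier minimizer for eta = 0.5 is x = (3 - sqrt(2)) / 2,
# slightly inside the boundary; shrinking eta moves it toward x = 1.
sol = log_barrier_descent(
    f_grad=lambda x: 2.0 * (x - 2.0),
    g=lambda x: x - 1.0,
    g_grad=lambda x: np.ones_like(x),
    x0=np.array([0.0]),
)
```

Shrinking `eta` toward zero tightens the surrogate around the original constrained problem, at the cost of steeper curvature near the boundary; the paper's adaptive step size is what keeps the updates both safe and efficient in that regime.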