Author affiliation: Chair of Information-oriented Control, Department of Electrical and Computer Engineering, Technical University of Munich, D-80333 Munich, Germany
Publication: IFAC-PapersOnLine
Year/Volume/Issue: 2019, Vol. 51, No. 34
Pages: 8-14
Funding: This work was supported by the ERC Grant "Control based on Human Models" under grant agreement no. 337654.
Keywords: Motion tracking; Cell proliferation; Demonstrations; Dynamical systems; Gaussian distribution; Gaussian noise (electronic); Learning systems; Lyapunov functions; Lyapunov methods; Motion estimation; Nonlinear systems; Social robots; State space methods; Uncertainty analysis; Control Lyapunov function; Human performance; Human centered automation; Human robot collaboration; Learning by demonstration; Path tracking; State of the art methods; Uniform global asymptotic stability
Abstract: Data-driven approaches are well suited to represent human motion because arbitrarily complex trajectories can be captured. Gaussian process state space models make it possible to encode human motion while quantifying the uncertainty caused by missing data. Such human motion models are relevant for many application domains, such as learning by demonstration and motion prediction in human-robot collaboration. For goal-directed tasks it is essential to impose stability constraints on the model representing the human motion. Motivated by learning-by-demonstration applications, this paper proposes an uncertainty-based control Lyapunov function approach for goal-directed path tracking. We exploit the model fidelity, which is related to the location of the training and test data: our approach actively steers trajectories into regions with more demonstration data and thus higher model certainty. This achieves accurate reproduction of the human motion independent of the initial condition, and we show that the generated trajectories are uniformly globally asymptotically stable. The approach is validated in a nonlinear learning-by-demonstration task in which human-demonstrated motions are reproduced by the learned dynamical system, achieving higher precision than competitive state-of-the-art methods. © 2019
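The core idea from the abstract can be illustrated with a minimal sketch: the Gaussian process posterior variance serves as a measure of model fidelity (low variance where demonstration data is dense), and an uncertainty-augmented Lyapunov-like function is descended so that trajectories are attracted both to the goal and to well-covered regions. All names, kernel choices, and parameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf(A, B, ell=0.5):
    """Squared-exponential kernel between point sets A (n,d) and B (m,d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

class GPVariance:
    """GP posterior variance only -- enough to quantify demonstration coverage."""
    def __init__(self, X, noise=1e-2):
        self.X = X
        K = rbf(X, X) + noise * np.eye(len(X))  # jitter for conditioning
        self.Kinv = np.linalg.inv(K)

    def var(self, x):
        k = rbf(x[None, :], self.X)             # (1, n) cross-covariance
        return max(0.0, float(1.0 - k @ self.Kinv @ k.T))

def lyapunov(gp, x, goal, beta=2.0):
    """Hypothetical uncertainty-augmented candidate: distance-to-goal
    plus a penalty for leaving the demonstrated region."""
    return np.dot(x - goal, x - goal) + beta * gp.var(x)

def descend(gp, x0, goal, steps=300, lr=0.05, eps=1e-4):
    """Follow the numerical negative gradient of V (central differences)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x); e[i] = eps
            g[i] = (lyapunov(gp, x + e, goal) - lyapunov(gp, x - e, goal)) / (2 * eps)
        x -= lr * g
    return x

# Demonstration data: a curved path ending at the goal (the origin).
t = np.linspace(0, 1, 20)
demos = np.stack([1.5 * (1 - t), np.sin(np.pi * (1 - t))], axis=1)
gp = GPVariance(demos)
goal = np.zeros(2)

x_start = np.array([2.0, -1.0])
x_final = descend(gp, x_start, goal)
```

Because the variance penalty is added to the distance term, the descent direction trades off goal convergence against staying where the model is certain, which is the qualitative behavior the abstract describes; the paper's actual construction additionally guarantees uniform global asymptotic stability, which this sketch does not.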