This paper presents a neural network model of demyelination of the mouse motor pathways, coupled to a central pattern generator (CPG) model for quadruped walking. Demyelination is the degradation of the myelin layer covering the axons, which can be caused by several neurodegenerative autoimmune diseases such as multiple sclerosis. We use this model, to our knowledge the first of its kind, to investigate the locomotion deficits that appear following demyelination of axons in the spinal cord. Our model matches several physiological and behavioral results and predicts that, whereas locomotion can still occur at high percentages of demyelination damage, the distribution and location of the lesion are the most critical factors for locomotor performance.
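The abstract does not detail the network itself; purely as a rough illustration, the sketch below (Python, with assumed parameters, not the authors' model) couples four phase oscillators into a toy quadruped CPG and models demyelination as a fraction of randomly lesioned inter-limb couplings.

```python
# Toy sketch (not the authors' model): four coupled phase oscillators as a
# quadruped CPG, with "demyelination" modeled as randomly removed couplings.
import numpy as np

rng = np.random.default_rng(0)

n_limbs = 4
dt, T = 0.01, 10.0
omega = 2 * np.pi * 1.5 * np.ones(n_limbs)          # intrinsic stepping frequency (Hz)
# Assumed phase offsets for a walk gait (LF, RF, LH, RH), in radians.
phase_bias = np.array([0.0, np.pi, np.pi / 2, 3 * np.pi / 2])

def run_cpg(demyelination_fraction, coupling=4.0):
    """Integrate the oscillator network; a fraction of couplings is lesioned."""
    K = coupling * np.ones((n_limbs, n_limbs))
    # Lesion: zero out a random fraction of the inter-limb connections.
    K[rng.random((n_limbs, n_limbs)) < demyelination_fraction] = 0.0
    theta = rng.uniform(0, 2 * np.pi, n_limbs)
    history = []
    for _ in range(int(T / dt)):
        dtheta = omega.copy()
        for i in range(n_limbs):
            for j in range(n_limbs):
                if i != j:
                    # Coupling pulls limb i toward the desired phase lag w.r.t. limb j.
                    dtheta[i] += K[i, j] * np.sin(
                        theta[j] - theta[i] - (phase_bias[j] - phase_bias[i]))
        theta = theta + dt * dtheta
        history.append(theta.copy())
    return np.array(history)

# Compare an intact network with a heavily lesioned one.
intact = run_cpg(0.0)
lesioned = run_cpg(0.6)
```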
A real-time planning strategy is crucial for robots working in dynamic environments. In particular, robot grasping tasks require quick reactions in many applications such as human-robot interaction. In this paper, we propose an approach for grasp learning that enables robots to plan new grasps rapidly according to the object's position and orientation. This is achieved in three steps. In the first step, we compute a variety of stable grasps for a given object. In the second step, we propose a strategy that learns a probability distribution of grasps based on the computed grasps. In the third step, we use the model to quickly generate grasps. We have tested the statistical method on the 9-degree-of-freedom hand of the iCub humanoid robot and the 4-degree-of-freedom Barrett hand. The average computation time for generating one grasp is less than 10 milliseconds. The experiments were run in Matlab on a machine with a 2.8 GHz processor.
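A minimal sketch of the general idea, not the paper's exact method: fit a Gaussian mixture over a set of precomputed stable grasp parameters (the data and dimensions below are placeholders) and then draw new grasps from it, which takes well under 10 ms per sample.

```python
# Hedged sketch: learn a probability distribution over stable grasps and sample
# from it quickly. Grasp parameters are assumed to be expressed in the object
# frame, so sampled grasps can be transformed by the object's current pose.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Placeholder data: each row is one stable grasp, e.g. a 6-D wrist pose relative
# to the object plus a few hand joint angles (9 synthetic dimensions here).
stable_grasps = rng.normal(size=(500, 9))

gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(stable_grasps)

# Fast generation: sampling from the fitted mixture is cheap.
new_grasps, _ = gmm.sample(n_samples=10)
print(new_grasps.shape)  # (10, 9)
```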
ISBN (print): 9781479969357
To perform robust grasping, a multi-fingered robotic hand should be able to adapt its grasping configuration, i.e., how the object is grasped, to maintain the stability of the grasp. Such a change of grasp configuration is called grasp adaptation, and it depends on the controller, the employed sensory feedback and the type of uncertainties inherent in the problem. This paper proposes a grasp adaptation strategy to deal with uncertainties about physical properties of objects, such as the object weight and the friction at the contact points. Based on an object-level impedance controller, a grasp stability estimator is first learned in the object frame. Once a grasp is predicted to be unstable by the stability estimator, a grasp adaptation strategy is triggered according to the similarity between the new grasp and the training examples. Experimental results demonstrate that our method improves the grasping performance on novel objects with different physical properties from those used for training.
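A hedged sketch of the adaptation loop under strong assumptions (synthetic object-frame features and labels, and an SVM standing in for whatever estimator the paper actually learns): predict grasp stability and, when a grasp is deemed unstable, shift it toward the most similar stable training example.

```python
# Illustrative sketch, not the paper's pipeline: stability classifier plus
# similarity-based adaptation toward known-stable examples.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic training data: features could be contact positions/forces expressed
# in the object frame; labels mark whether the grasp survived a perturbation.
X_train = rng.normal(size=(200, 12))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 3] > 0).astype(int)  # toy labels

stability_clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
stable_examples = X_train[y_train == 1]
nn = NearestNeighbors(n_neighbors=1).fit(stable_examples)

def adapt_if_unstable(grasp_features):
    """If the grasp is predicted unstable, move toward the closest stable example."""
    if stability_clf.predict(grasp_features[None])[0] == 1:
        return grasp_features                      # predicted stable: keep grasp
    _, idx = nn.kneighbors(grasp_features[None])
    target = stable_examples[idx[0, 0]]
    return grasp_features + 0.5 * (target - grasp_features)  # partial adaptation step

print(adapt_if_unstable(rng.normal(size=12)).shape)
```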
We address the problem of representations for anthropomorphic robot hands and their suitability for use in methods for learning or control. We approach hand configuration from the perspective of ultimate hand function and propose two parameterizations based on the ability of the hand to engage oppositional forces. These parameters can be extracted from grasp examples, making them suitable for use in practical learning-from-demonstration frameworks. We propose a qualitative method to span hand functional space in a principled manner. This is used to construct a grasp set for evaluation and a qualitative baseline metric derived from human experience. Our results from human grasp data show that hand representations based on shape are not able to disambiguate hand function. However, those based on hand-opposition primitives result in the widest separations among grasps that have radically different functions and can even clearly separate grasps whose functions overlap to a great degree. We trust that these “functional parameterizations” can bridge the contrasting goals of task-oriented robotic grasping: controlling a dexterous robot hand to manifest a desired hand shape while retaining the ability to exercise a specific hand function.
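Purely as an illustration of an opposition-style feature (not the paper's full parameterization), one coarse quantity that can be extracted from a grasp example is the axis between the thumb contact and the centroid of the opposing finger contacts:

```python
# Minimal sketch under assumptions: fingertip contact points in the object frame
# are available; the opposition axis and aperture are derived from them.
import numpy as np

def opposition_vector(thumb_contact, finger_contacts):
    """Return the unit opposition axis and its length (grasp aperture)."""
    centroid = np.mean(finger_contacts, axis=0)
    axis = centroid - thumb_contact
    length = np.linalg.norm(axis)
    return axis / length, length

thumb = np.array([0.02, -0.03, 0.00])                    # metres, object frame
fingers = np.array([[0.02, 0.03, 0.01],
                    [0.00, 0.04, 0.00],
                    [-0.01, 0.03, -0.01]])
axis, aperture = opposition_vector(thumb, fingers)
print(axis, aperture)
```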
ISBN (digital): 9781538676301
ISBN (print): 9781538676318
The human hand is a versatile and complex system with dexterous manipulation capabilities. For the transfer of human grasping capabilities to humanoid robotic and prosthetic hands, an understanding of the dynamic characteristics of grasp motions is fundamental. Although the analysis of grasp synergies, especially for kinematic hand postures, is a very active field of research, the description and transfer of grasp forces is still a challenging task. In this work, we introduce a novel representation of grasp synergies in the force space, so-called force synergies, which describe forces applied at contact locations in a low-dimensional space and are inspired by the correlations between grasp forces in the fingers and palm. To evaluate this novel representation, we conduct a human grasping study with eight subjects performing handover and tool use tasks on 14 objects with varying content and weight using 16 different grasp types. We capture contact forces at 18 locations within the hand together with the joint angle values of a data glove with 22 degrees of freedom. We identify correlations between contact forces and derive force synergies using dimensionality reduction techniques, which allow the grasp forces applied during grasping to be represented with only eight parameters.
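One plausible, simplified realization of such force synergies is PCA on the recorded contact-force vectors (18 locations reduced to 8 components); the study may use a different dimensionality-reduction technique and preprocessing, so the sketch below is only indicative.

```python
# Hedged sketch: extract "force synergies" as principal components of the
# contact-force recordings (synthetic placeholder data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Placeholder recording: 5000 time samples of force magnitudes at the
# 18 in-hand measurement locations.
forces = np.abs(rng.normal(size=(5000, 18)))

pca = PCA(n_components=8)
synergy_weights = pca.fit_transform(forces)      # low-dimensional representation
synergy_basis = pca.components_                  # 8 x 18 force-synergy matrix

print(pca.explained_variance_ratio_.sum())       # variance captured by 8 synergies
reconstructed = pca.inverse_transform(synergy_weights)
```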
Machine learning models can solve complex tasks but often require significant computational resources during inference. This has led to the development of various post-training computation reduction methods that tackl...
ISBN (print): 9781424445875
We consider the problem of learning robust models of robot motion through demonstration. An approach based on Hidden Markov Models (HMM) and Gaussian Mixture Regression (GMR) is proposed to extract redundancies across multiple demonstrations and build a time-independent model of a set of movements demonstrated by a human user. Two experiments are presented to validate the method, which consist of learning to hit a ball with a robotic arm and teaching a humanoid robot to manipulate a spoon to feed another humanoid. The experiments demonstrate that the proposed model can efficiently handle several aspects of learning by imitation. We first show that it can be utilized in an unsupervised learning manner, where the robot autonomously organizes and encodes variants of motion from the multiple demonstrations. We then show that the approach allows the observed skill to be robustly generalized by taking multiple constraints in task space into account during reproduction.
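A simplified sketch of the regression side: the paper combines an HMM with GMR, but for brevity the example below fits a plain GMM on (time, position) pairs from synthetic 1-D demonstrations and conditions on time to reproduce the motion.

```python
# Simplified sketch (GMM + GMR, not the paper's full HMM-based model).
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic demonstrations: 5 noisy repetitions of a 1-D reaching motion.
t = np.tile(np.linspace(0, 1, 100), 5)
x = np.sin(np.pi * t) + 0.05 * rng.normal(size=t.shape)
data = np.column_stack([t, x])

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(data)

def gmr(t_query):
    """Gaussian Mixture Regression: E[x | t] under the fitted joint GMM."""
    means, covs, priors = gmm.means_, gmm.covariances_, gmm.weights_
    h = np.array([p * norm.pdf(t_query, m[0], np.sqrt(c[0, 0]))
                  for p, m, c in zip(priors, means, covs)])
    h /= h.sum()
    cond = [m[1] + c[1, 0] / c[0, 0] * (t_query - m[0]) for m, c in zip(means, covs)]
    return float(np.dot(h, cond))

reproduction = [gmr(tq) for tq in np.linspace(0, 1, 50)]
```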
This article combines programming by demonstration and adaptive control for teaching a robot to physically interact with a human in a collaborative task requiring sharing of a load by the two partners. Learning a task model allows the robot to anticipate the partner’s intentions and adapt its motion according to perceived forces. As the human represents a highly complex contact environment, direct reproduction of the learned model may lead to sub-optimal results. To compensate for unmodelled uncertainties, in addition to learning we propose an adaptive control algorithm that tunes the impedance parameters so as to ensure accurate reproduction. To facilitate the illustration of the concepts introduced in this paper and provide a systematic evaluation, we present experimental results obtained with a simulation of a dyad of two planar 2-DOF robots.
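A toy illustration of the adaptive-impedance idea (not the authors' controller, adaptation law or gains): a 1-D impedance law whose stiffness grows with the tracking error, so the robot stiffens when an unmodelled partner force pushes it off the learned reference trajectory.

```python
# Toy sketch: 1-D impedance control with error-driven stiffness adaptation.
import numpy as np

dt, T = 0.001, 3.0
m = 1.0                      # effective mass of the robot end point
K, D = 50.0, 20.0            # initial stiffness and damping (assumed values)
gamma = 200.0                # adaptation gain (assumed value)
x, xdot = 0.0, 0.0

for step in range(int(T / dt)):
    t = step * dt
    x_des = 0.1 * np.sin(2 * np.pi * 0.5 * t)        # learned reference trajectory
    f_ext = 2.0 if 1.0 < t < 1.5 else 0.0            # unexpected partner force
    e = x_des - x
    f_cmd = K * e - D * xdot                         # impedance control force
    # Adaptation: increase stiffness in proportion to the squared tracking error.
    K += gamma * e * e * dt
    xddot = (f_cmd + f_ext) / m
    xdot += xddot * dt
    x += xdot * dt

print(f"final stiffness after adaptation: {K:.1f} N/m")
```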
Because of the increasing development of humanoid robots, humans and robots are going to interact more and more often in the near future. Thus, the need for a well-defined ethical framework in which these interactions will take place is very acute. In this article, we will show why responsibility ascription is a key concept for understanding today's and tomorrow's ethical issues related to human-robot interactions. By analyzing how the myths surrounding the figure of the robot in Western societies have been built over the centuries, we will be able to demonstrate that the question of responsibility ascription is biased in the sense that it assigns to autonomous robots a role that should be devoted to humans.
ISBN (print): 9781467383707
This paper introduces a hierarchical framework that is capable of learning complex sequential tasks from human demonstrations through kinesthetic teaching, with minimal human intervention. Via an automatic task segmentation and action primitive discovery algorithm, we are able to learn both the high-level task decomposition (into action primitives), as well as low-level motion parameterizations for each action, in a fully integrated framework. In order to reach the desired task goal, we encode a task metric based on the evolution of the manipulated object during demonstration, and use it to sequence and parametrize each action primitive. We illustrate this framework with a pizza dough rolling task and show how the learned hierarchical knowledge is directly used for autonomous robot execution.
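As a hedged sketch of automatic task segmentation (the paper's segmentation and primitive-discovery algorithm is more sophisticated), one can cut a demonstrated trajectory where the end-effector speed drops below a threshold and treat each moving segment as an action primitive parameterized by its start and goal.

```python
# Hedged sketch: threshold-based segmentation of a kinesthetic demonstration
# into point-to-point primitives (synthetic 2-D trajectory).
import numpy as np

rng = np.random.default_rng(0)

def move(a, b, n=100):
    """Smooth point-to-point motion between waypoints a and b."""
    s = np.linspace(0, 1, n)[:, None]
    return a + (b - a) * (3 * s**2 - 2 * s**3)        # minimum-jerk-like profile

waypoints = [np.zeros(2), np.array([0.3, 0.1]),
             np.array([0.3, 0.4]), np.array([0.0, 0.4])]
traj = np.vstack([move(waypoints[i], waypoints[i + 1]) for i in range(3)])
traj += 0.001 * rng.normal(size=traj.shape)           # sensor noise

speed = np.linalg.norm(np.diff(traj, axis=0), axis=1)
moving = speed > 0.3 * speed.max()                    # assumed speed threshold
# Segment boundaries are the transitions between "paused" and "moving".
boundaries = np.flatnonzero(np.diff(moving.astype(int)) != 0) + 1
runs = np.split(np.arange(len(moving)), boundaries)
# Keep only the moving runs; each primitive is summarized by (start, goal).
primitives = [(traj[r[0]], traj[r[-1]]) for r in runs if moving[r[0]] and len(r) > 10]
print(f"discovered {len(primitives)} primitive segments")
```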