We present an algorithm enabling a humanoid robot to visually learn its body schema, knowing only the number of degrees of freedom in each limb. By "body schema" we mean the joint positions and orientations and thus the kinematic function. The learning is performed by visually observing its end-effectors when moving them. With simulations involving a body schema of more than 20 degrees of freedom, results show that the system is scalable to a high number of degrees of freedom. Real robot experiments confirm the practicality of our approach. Our results illustrate how subjective space representation can develop as a result of sensorimotor contingencies.
When learning models for real-world robot spatial perception tasks, one might have access only to partial labels: this occurs for example in semi-supervised scenarios (in which labels are not available for a subset of the training instances) or in some types of self-supervised robot learning (where the robot autonomously acquires a labeled training set, but only acquires labels for a subset of the output variables in each instance). We introduce a general approach to deal with this class of problems using an auxiliary loss enforcing the expectation that the perceived environment state should not abruptly change; then, we instantiate the approach to solve two robot perception problems: a simulated ground robot learning long-range obstacle mapping as a 400-binary-label classification task in a self-supervised way in a static environment; and a real nano-quadrotor learning human pose estimation as a 3-variable regression task in a semi-supervised way in a dynamic environment. In both cases, our approach yields significant quantitative performance improvements (average increase of 6 AUC percentage points in the former; relative improvement of the R² metric ranging from 7% to 33% in the latter) over baselines.
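The auxiliary temporal-consistency idea described above can be sketched as follows. This is a minimal illustration under assumed details: the mean-squared-difference form of the consistency term and the `weight` hyperparameter are hypothetical choices, not necessarily the paper's exact formulation.

```python
import numpy as np

def temporal_consistency_loss(pred_t, pred_t1):
    """Auxiliary loss: penalize abrupt changes between consecutive
    predictions of the environment state (assumed MSE form)."""
    a = np.asarray(pred_t, dtype=float)
    b = np.asarray(pred_t1, dtype=float)
    return float(np.mean((a - b) ** 2))

def total_loss(supervised_loss, pred_t, pred_t1, weight=0.1):
    """Combine the (partial-label) supervised loss with the
    unsupervised consistency term on consecutive predictions."""
    return supervised_loss + weight * temporal_consistency_loss(pred_t, pred_t1)
```

The key property is that the consistency term needs no labels at all, so it can be evaluated on every training instance, including those with missing labels.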
One effective approach for equipping artificial agents with sensorimotor skills is to use self-exploration. To do this efficiently is critical, as time and data collection are costly. In this study, we propose an exploration mechanism that blends action, object, and action outcome representations into a latent space, where local regions are formed to host forward model (FM) learning. The agent uses intrinsic motivation to select the FM with the highest learning progress (LP) to adopt at a given exploration step. This parallels how infants learn, as high LP indicates that the learning problem is neither too easy nor too difficult in the selected region. The proposed approach is validated with a simulated robot in a tabletop environment. The simulation scene comprises a robot and various objects, where the robot interacts with one of them each time using a set of parameterized actions and learns the outcomes of these interactions. With the proposed approach, the robot organizes its curriculum of learning as in existing intrinsic motivation approaches and outperforms them in learning speed. Moreover, the learning regime demonstrates features that partially match infant development; in particular, the proposed system learns to predict the outcomes of different skills in a staged manner.
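The learning-progress-based region selection described above can be sketched as follows. The window-based LP estimate (drop in mean prediction error between the older and the newer half of a sliding window) is one common formulation from the intrinsic-motivation literature; the window size and the exact LP definition here are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def learning_progress(errors, window=5):
    """LP of one region: decrease in mean FM prediction error between
    the older and newer half of the last 2*window observations.
    Returns 0.0 until enough data has been collected."""
    e = np.asarray(errors[-2 * window:], dtype=float)
    if len(e) < 2 * window:
        return 0.0
    older, newer = e[:window], e[window:]
    return float(older.mean() - newer.mean())

def select_region(region_errors, window=5):
    """Pick the region (forward model) with the highest LP to
    explore at the current step."""
    lps = [learning_progress(errs, window) for errs in region_errors]
    return int(np.argmax(lps))
```

A region whose error is still dropping (neither already mastered nor hopelessly hard) yields the highest LP and is therefore selected, which is what produces the staged, curriculum-like ordering of skills.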
We present a follow-up study on our unified visuomotor neural model for the robotic tasks of identifying, localizing, and grasping a target object in a scene with multiple objects. Our RetinaNet-based model enables end-to-end training of visuomotor abilities in a biologically inspired developmental approach. In our initial implementation, a neural model was able to grasp selected objects from a planar surface. We embodied the model on the NICO humanoid robot. In this follow-up study, we expand the task and the model to reaching for objects in a 3-D space with a novel data set based on augmented reality and a simulation environment. We evaluate the influence of training with auxiliary tasks, i.e., whether learning of the primary visuomotor task is supported by learning to classify and locate different objects. We show that the proposed visuomotor model can learn to reach for objects in a 3-D space. We analyze the results for biologically plausible biases based on object locations or properties. We show that the primary visuomotor task can be successfully trained simultaneously with one of the two auxiliary tasks. This is enabled by a complex neurocognitive model with shared and task-specific components, similar to models found in biological systems.
By learning a range of possible times over which the effect of an action can take place, a robot can reason more effectively about causal and contingent relationships in the world. An algorithm is presented for learning the interval [t_min, t_max] of possible times during which a response to an action can take place. The algorithm was implemented on a physical robot for the domains of visual self-recognition and auditory social-partner recognition. The environment model assumes that natural environments generate Poisson distributions of random events at all scales. A linear-time algorithm called Poisson threshold learning can generate a threshold T that provides an arbitrarily small rate of background events λ(T), if such a threshold exists for the specified error rate.
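The Poisson-background assumption above can be made concrete with a small sketch: given a background event rate λ, find the smallest count threshold T whose tail probability under Poisson(λ) falls below a specified error rate. This is an illustrative search, not the paper's exact linear-time Poisson threshold learning procedure.

```python
import math

def poisson_tail(lam, t):
    """P[X >= t] for X ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam ** k / math.factorial(k)
                     for k in range(t))

def poisson_threshold(lam, error_rate, t_max=1000):
    """Smallest threshold T whose background false-positive
    probability under Poisson(lam) is below error_rate, or None if
    no such T <= t_max exists (hypothetical helper for illustration)."""
    for t in range(t_max + 1):
        if poisson_tail(lam, t) < error_rate:
            return t
    return None
```

Events that clear such a threshold are unlikely to be random background activity, which is what lets the robot attribute them to its own action (self-recognition) or to a social partner.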
We describe how a robot can develop knowledge of the objects in its environment directly from unsupervised sensorimotor experience. The object knowledge consists of multiple integrated representations: trackers that form spatio-temporal clusters of sensory experience, percepts that represent properties of the tracked objects, classes that support efficient generalization from past experience, and actions that reliably change object percepts. We evaluate how well this intrinsically acquired object knowledge can be used to solve externally specified tasks, including object recognition and achieving goals that require both planning and continuous control. (C) 2008 Elsevier B.V. All rights reserved.
Building robots capable of acting independently in unstructured environments is still a challenging task for roboticists. The capability to comprehend and produce language in a 'human-like' manner represents a powerful tool for the autonomous interaction of robots with human beings, for better understanding situations and exchanging information during the execution of tasks that require cooperation. In this work, we present a robotic model for grounding abstract action words (e.g. USE, MAKE) through the hierarchical organization of terms directly linked to the perceptual and motor skills of a humanoid robot. Experimental results show that the robot, in response to linguistic commands, is capable of performing the appropriate behaviors on objects. Results obtained in cases of inconsistency between the perceptual and linguistic inputs show that the robot executes the actions elicited by the seen object.
Gaze control requires the coordination of movements of both eyes and head to fixate on a target. We present a biologically constrained architecture for gaze control and show how the relationships between the coupled sensorimotor systems can be learnt autonomously from scratch, allowing for adaptation as the system grows or changes. Infant studies suggest developmental learning strategies, which can be applied to sensorimotor learning in humanoid robots. We examine two strategies (sequential and synchronous) for the learning of eye and head coupled mappings, and give results from implementations on an iCub robot. The results show that the developmental approach can give fast, cumulative, on-line learning of coupled sensorimotor systems.
In this paper we propose a method for interactive recognition of household objects using proprioceptive and auditory feedback. In our experiments, the robot observed the changes in its proprioceptive and auditory sensory streams while performing five exploratory behaviors (lift, shake, drop, crush, and push) on 50 common household objects (e.g. bottles, cups, balls, toys, etc.). The robot was tasked with recognizing the objects it was manipulating by feeling them and listening to the sounds that they make without using any visual information. The results show that both proprioception and audio, coupled with exploratory behaviors, can be used successfully for object recognition. Furthermore, the robot was able to integrate feedback from the two modalities, to achieve even better recognition accuracy. Finally, the results show that the robot can boost its recognition rate even further by applying multiple different exploratory behaviors on the object.
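The multimodal integration step above can be sketched as a simple fusion of per-class probabilities from the proprioceptive and auditory recognizers. Weighted averaging is one plausible combination rule chosen here for illustration; the paper's exact fusion scheme may differ, and the labels and weight are hypothetical.

```python
import numpy as np

def combine_modalities(p_proprio, p_audio, w=0.5):
    """Fuse per-class probabilities from two sensory modalities by
    weighted averaging, then renormalize."""
    p = w * np.asarray(p_proprio, dtype=float) + (1 - w) * np.asarray(p_audio, dtype=float)
    return p / p.sum()

def recognize(p_proprio, p_audio, labels, w=0.5):
    """Return the label of the most probable object after fusion."""
    fused = combine_modalities(p_proprio, p_audio, w)
    return labels[int(np.argmax(fused))]
```

When the two modalities disagree, the fused estimate is pulled toward the more confident one, which is one way integrating feedback can beat either modality alone.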
This paper presents a review of available research concerning social emotions in robotics. In robotics, the study of emotions has been pursued for a long time. Popular research endeavors in robotics concern robotic recognition and expression of emotions, and computational modeling of the basic mechanisms that underlie them. The advancements in research relevant to this domain are in accordance with well-known psychological findings obtained using the category and dimension theories. Many studies are based on these basic theories, which exclusively address the basic emotions. However, social emotions, also referred to as high-level emotions, have been investigated in psychology. We believe that these high-level emotions are worth investigating for the development of next-generation robotic systems: socially aware robots. This paper summarizes the findings concerning social emotions reported through psychology and neuroscience research along with a survey of studies concerning social emotions in robotics conducted to date. Moreover, this paper discusses future research directions to facilitate the implementation of social emotions in robots.