RoboCup@Home proposes a challenge related to Person Recognition: after being introduced, a new ‘operator’ should become ‘immediately’ recognizable by the robot. The introduction may require the operator to interact correctly with the robot, following a procedure given by the robot itself (for example, standing in front of the robot so that it can take pictures of the person). In this paper, we propose the use of the KNN (K-Nearest Neighbors) supervised machine learning algorithm to include a new ‘operator’ in a database of persons recognizable by the robot. The algorithm uses features extracted from a segmentation of the operator’s face image. The experiment measures how long it takes to include a new operator when the robot already knows from 1 to 12 operators, and how this time varies when 1, 2, or more images of the new operator, taken from slightly different points of view, are used. The results confirm that KNN can be used to ‘present to the robot’ up to 13 new operators, with up to 15 images per operator, in less than 60 seconds.
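A minimal sketch of the enrolment idea described above, not the authors' implementation: each segmented face image is assumed to have already been reduced to a fixed-length feature vector, and a k-NN classifier (here scikit-learn's KNeighborsClassifier, chosen for illustration) simply stores the labelled vectors. The class and the feature dimensionality are hypothetical.

```python
# Sketch only: enrolling a new operator and recognizing faces with k-NN,
# assuming face images are already segmented and turned into feature vectors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

class OperatorDatabase:
    def __init__(self, k=3):
        self.features = []   # one feature vector per stored face image
        self.labels = []     # operator name for each stored vector
        self.knn = KNeighborsClassifier(n_neighbors=k)

    def enroll(self, operator_name, face_feature_vectors):
        """Add 1..N feature vectors (e.g. views from slightly different angles)."""
        for vec in face_feature_vectors:
            self.features.append(vec)
            self.labels.append(operator_name)
        # k-NN has no real training phase: fitting just stores the data,
        # which is why adding a new operator can be nearly immediate.
        self.knn.fit(np.asarray(self.features), self.labels)

    def recognize(self, face_feature_vector):
        return self.knn.predict(np.asarray([face_feature_vector]))[0]

# Hypothetical usage: real feature vectors would come from the robot's face
# segmentation pipeline; random data is used here only as a placeholder.
db = OperatorDatabase(k=3)
db.enroll("operator_1", np.random.rand(5, 128))
db.enroll("operator_2", np.random.rand(5, 128))
print(db.recognize(np.random.rand(128)))
```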
This article describes the estimation of a 3D point using a Kinect sensor and the Robot Operating System (ROS) together with You Only Look Once (YOLO) for object detection. The Kinect sensor provides RGB-D images, which are used to create a Point Cloud representing the geometry of the environment. ROS serves as the robotics development framework, while YOLO is employed to identify objects in the scene. The article presents the packages used, the datasets employed for measurement, and the configuration of ROS and YOLO. It also explores the functionalities of RViz, the 3D visualization tool used in the tests. Furthermore, it covers the methods employed, the acquired data, and an analysis of the error margin in measuring the distance between the Kinect and the object. The findings and techniques presented in this study contribute to addressing the challenges faced in the RoboCup@Home competition, specifically in the context of object manipulation tasks.
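A minimal sketch of the geometric step implied by the abstract, not the paper's actual packages: a YOLO bounding box and a Kinect depth image are combined via the pinhole camera model to obtain a 3D point in the camera frame. The intrinsics below are typical Kinect v1 values and only placeholders; in a real ROS setup they would be read from the camera_info topic, and the point could then be published or visualized in RViz.

```python
# Sketch only: back-project the centre of a YOLO bounding box into 3D using
# a Kinect depth image and assumed pinhole-camera intrinsics.
import numpy as np

FX, FY = 525.0, 525.0      # focal lengths in pixels (assumed values)
CX, CY = 319.5, 239.5      # principal point in pixels (assumed values)

def bbox_to_3d_point(bbox, depth_image):
    """bbox = (x_min, y_min, x_max, y_max) in pixels; depth_image in metres."""
    u = int((bbox[0] + bbox[2]) / 2)   # bounding-box centre column
    v = int((bbox[1] + bbox[3]) / 2)   # bounding-box centre row
    z = float(depth_image[v, u])       # depth at the centre pixel
    if z <= 0.0 or np.isnan(z):
        return None                    # no valid depth reading at this pixel
    # Pinhole model: map the pixel and its depth into the camera frame.
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

# Hypothetical usage with a synthetic depth image and a fake detection.
depth = np.full((480, 640), 1.5, dtype=np.float32)   # flat scene 1.5 m away
print(bbox_to_3d_point((300, 200, 340, 260), depth))
```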