ISBN:
(print) 9781784660468
This paper presents a human–robot interface system that incorporates Camshift and a Cerebellar Model Articulation Controller (CMAC) to interact with a robot manipulator. In this system, a position sensor is used to locate the human hand and an IMU is employed to measure its orientation. The Camshift algorithm is used to track the human hand. Although the location and orientation of the hand can be obtained from the two sensors, the measurement error grows over time due to device noise and tracking error. CMACs are therefore used to estimate the location and orientation of the human hand. Owing to perceptual and motor limitations, it is difficult for a human operator to carry out high-precision operations. The human–robot interface system was experimentally tested in a lab environment, and the results indicate that such a system can successfully control a robot manipulator.
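The abstract does not give the CMAC's configuration, but the idea of the estimator can be sketched as a standard tile-coding CMAC: each input activates one cell per overlapping tiling, and the prediction is the sum of the active weights. The minimal 1-D sketch below uses illustrative parameters (5 tilings, tile width 0.2, learning rate 0.1) that are assumptions, not values from the paper.

```python
# Minimal 1-D CMAC (tile-coding) function approximator: a sketch of the
# kind of local learner the paper uses to smooth noisy pose estimates.
# All hyperparameters here are illustrative assumptions.
from collections import defaultdict

class CMAC:
    def __init__(self, n_tilings=5, tile_width=0.2, lr=0.1):
        self.n_tilings = n_tilings
        self.tile_width = tile_width
        self.lr = lr / n_tilings          # spread the update across tilings
        self.weights = defaultdict(float)  # sparse weight table

    def _tiles(self, x):
        # Each tiling is shifted by a fraction of the tile width, so nearby
        # inputs share some (but not all) active cells -> local generalization.
        for t in range(self.n_tilings):
            offset = t * self.tile_width / self.n_tilings
            yield (t, int((x + offset) // self.tile_width))

    def predict(self, x):
        return sum(self.weights[tile] for tile in self._tiles(x))

    def train(self, x, target):
        error = target - self.predict(x)
        for tile in self._tiles(x):
            self.weights[tile] += self.lr * error

# Train the CMAC to approximate a simple target function on [0, 1].
cmac = CMAC()
for _ in range(200):
    for i in range(11):
        x = i / 10
        cmac.train(x, x * x)
```

After training, `cmac.predict(0.5)` lands close to the target value 0.25; in the paper's setting the same mechanism would map noisy sensor readings to corrected hand location and orientation estimates.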
Recent advancements in the development of robotic systems that offer advanced loco-manipulation capabilities have opened new opportunities for employing such platforms in various domains. However, despite the increased range of offered capabilities, collaborating with these robotic platforms through common human–robot interaction interfaces is still an open challenge. In this article, we present a novel human–robot interaction interface that allows the operator to intuitively command and control the manipulation and locomotion abilities of the robot through a visual-servoing guidance method realized with a laser emitter device. By pointing the laser at locations and objects in the environment where the robot is operating, the operator can command even highly articulated robots intuitively and efficiently. The laser projection is detected by a neural network that provides robust, real-time tracking of the laser spot. Combined with the responsiveness of the laser detection, a Behavior Tree-based motion planner reactively selects and generates the autonomous robot motions needed to reach the indicated target. This combination allows the operator to communicate goal locations and paths to follow without prior knowledge of the system, and without worrying about the generation of potentially complex loco-manipulation actions. The effectiveness of the proposed interface is demonstrated with the CENTAURO robot, a hybrid leg–wheel platform with an anthropomorphic upper body, by exploiting its abilities to accomplish a number of locomotion and manipulation tasks.
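The reactive selection that a Behavior Tree planner performs can be sketched with the two classic composite nodes, Sequence and Fallback. The tree, node names, and the 0.8 m reach threshold below are hypothetical illustrations, not the paper's actual planner.

```python
# Minimal Behavior Tree with Sequence and Fallback (selector) composites,
# sketching how a planner might reactively choose between manipulation
# and locomotion for a laser-pointed target. Names and thresholds are
# illustrative assumptions.
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Sequence:
    """Ticks children in order; fails as soon as one child fails."""
    def __init__(self, *children): self.children = children
    def tick(self, bb):
        for child in self.children:
            if (status := child.tick(bb)) != SUCCESS:
                return status
        return SUCCESS

class Fallback:
    """Ticks children in order; succeeds as soon as one child succeeds."""
    def __init__(self, *children): self.children = children
    def tick(self, bb):
        for child in self.children:
            if (status := child.tick(bb)) != FAILURE:
                return status
        return FAILURE

class Condition:
    def __init__(self, fn): self.fn = fn
    def tick(self, bb): return SUCCESS if self.fn(bb) else FAILURE

class Action:
    def __init__(self, name): self.name = name
    def tick(self, bb):
        bb["last_action"] = self.name  # stand-in for issuing a robot command
        return SUCCESS

# If the pointed target is within arm reach, grasp it; otherwise walk closer.
tree = Fallback(
    Sequence(Condition(lambda bb: bb["target_dist"] < 0.8),
             Action("reach_and_grasp")),
    Action("walk_to_target"),
)

bb = {"target_dist": 2.5}
tree.tick(bb)  # far target -> selects "walk_to_target"
```

Because the tree is re-ticked as the detected laser spot moves, updating `target_dist` on the blackboard is enough to make the planner switch between locomotion and manipulation without any replanning step.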
This paper presents the design and implementation of a human–robot interface capable of evaluating robot localization performance and maintaining full control of robot behaviors in the RoboCup domain. The system consists of legged robots, behavior modules, an overhead visual tracking system, and a graphical user interface. A human–robot communication framework is designed for executing cooperative and competitive processing tasks between users and robots, using an object-oriented, modularized software architecture that provides operability and functionality. Experimental results based on simulated and real-time information are presented to show the performance of the proposed system.