ISBN (Print): 9780819494351
Visual homing is a navigation method based on comparing a stored image of the goal location and the current image (current view) to determine how to navigate to the goal location. It is theorized that insects, such as ants and bees, employ visual homing methods to return to their nest [1]. Visual homing has been applied to autonomous robot platforms using two main approaches: holistic and feature-based. Both methods aim to determine the distance and direction to the goal location. Navigational algorithms using Scale Invariant Feature Transforms (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy [2] have developed a visual homing method using scale change information (Homing in Scale Space, HiSS) from SIFT. HiSS uses SIFT feature scale change information to determine the distance between the robot and the goal location. Since the scale component is discrete with a small range of values [3], the result is a rough measurement with limited accuracy. We have developed a method that uses stereo data, resulting in better homing performance. Our approach utilizes a pan-tilt based stereo camera, which is used to build composite wide-field images. We use the wide-field images combined with stereo data obtained from the stereo camera to extend the keypoint vector described in [3] to include a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location. We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera. We evaluate the performance of both methods using a set of performance measures described in this paper.
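The core idea of depth-augmented homing — comparing depth and bearing of matched keypoints between the goal snapshot and the current view — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the match format (bearing in radians, depth in metres per keypoint) is an assumption.

```python
def homing_vector(matches):
    """Estimate how far to advance and how much to turn toward the goal
    from matched keypoints.  Each match pairs the current observation with
    the goal-snapshot observation: ((bearing_cur, depth_cur),
    (bearing_goal, depth_goal)) — a hypothetical format, not the paper's
    exact keypoint vector."""
    if not matches:
        raise ValueError("need at least one matched keypoint")
    # A feature seen farther away now than at the goal means the robot
    # must advance by roughly the depth difference; average over matches.
    dist = sum(dc - dg for (_, dc), (_, dg) in matches) / len(matches)
    # The average bearing change approximates the required rotation.
    turn = sum(bc - bg for (bc, _), (bg, _) in matches) / len(matches)
    return dist, turn

# Two features, each seen 0.5 m farther and rotated 0.1 rad vs. the snapshot.
d, t = homing_vector([((0.2, 3.0), (0.1, 2.5)), ((-0.1, 4.0), (-0.2, 3.5))])
```

Averaging over many matches makes the estimate tolerant of individual mismatches; a robust variant would use the median or discard outliers first.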
ISBN (Print): 9781479933433
Mobile robots are robots capable of moving within an environment. Their controllers need the ability to behave intelligently under uncertainty, and fuzzy control of mobile robots in unstructured environments is increasingly common, since many existing navigation algorithms do not perform adequately in dynamic environments. This article aims at designing and building an intelligent robot able to detect and track a specified target. The robot's velocity and steering are controlled by fuzzy logic, with velocity adjusted according to its distance from the target. The article describes the features of this mobile robot, including its mechanical and electronic infrastructure, fuzzy navigation, and vision controller. The navigation system makes its fuzzy decisions based solely on machine vision techniques and, unlike many robots, does not depend on other sensors. The robot with fuzzy navigation achieved a 60% time saving over the non-fuzzy version. The results obtained indicate appropriate performance for navigating a target-oriented mobile robot in an unknown, dynamic environment.
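A distance-based fuzzy velocity rule of the kind described — slow down as the target gets near — can be sketched with triangular membership functions and centroid defuzzification. The breakpoints and output speeds below are illustrative assumptions, not values from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_velocity(distance_m):
    """Map distance-to-target (m) to a velocity command (m/s) via three
    rules and weighted-average (centroid) defuzzification.  Breakpoints
    and output levels are illustrative, not the paper's calibration."""
    rules = [
        (tri(distance_m, -0.5, 0.0, 1.0), 0.0),    # near   -> stop
        (tri(distance_m, 0.5, 1.5, 2.5), 0.4),     # medium -> slow
        (tri(distance_m, 2.0, 3.5, 100.0), 0.8),   # far    -> fast
    ]
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Because adjacent memberships overlap, the command blends smoothly between speed levels instead of switching abruptly at fixed distance thresholds.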
ISBN (Print): 9781467363563
Motion estimation is an open research field in control and robotic applications. Sensor fusion algorithms are generally used to achieve an accurate estimate of vehicle motion by combining heterogeneous sensor measurements with different statistical characteristics. In this paper, a new method that combines measurements provided by an inertial sensor and a vision system is presented. Compared to classical model-based techniques, the method relies on a Pareto optimization that trades off the statistical properties of the measurements. The proposed technique is evaluated in simulation, in terms of computational requirements and estimation accuracy, against a classical Kalman filter approach. It is shown that the proposed method gives improved estimation accuracy at the cost of slightly increased computational complexity.
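The basic trade-off being optimized — weighting each sensor by how trustworthy it is — reduces, in the simplest linear-Gaussian case, to inverse-variance weighting. The sketch below is a minimal stand-in for the paper's Pareto formulation, not the proposed method itself.

```python
def fuse(z_imu, var_imu, z_vis, var_vis):
    """Fuse two noisy estimates of the same quantity (e.g. velocity from
    an inertial sensor and from vision) by inverse-variance weighting —
    the optimal linear combination for independent Gaussian errors.
    Variances are assumed known; the paper trades them off via Pareto
    optimization instead."""
    w_imu = 1.0 / var_imu
    w_vis = 1.0 / var_vis
    est = (w_imu * z_imu + w_vis * z_vis) / (w_imu + w_vis)
    # The fused variance is always below the smaller input variance.
    var = 1.0 / (w_imu + w_vis)
    return est, var

# Vision (var 0.01) dominates a noisier inertial reading (var 0.04).
est, var = fuse(1.0, 0.04, 1.2, 0.01)
```

A Kalman filter applies this same weighting recursively through a motion model; a Pareto approach instead exposes the accuracy/trust trade-off as an explicit multi-objective choice.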
ISBN (Print): 9780819489487
In June 2011, Worcester Polytechnic Institute's (WPI) unmanned ground vehicle Prometheus participated in the 8th Annual Robotic Lawnmower and 19th Annual Intelligent Ground Vehicle Competitions back-to-back. This paper details the two-year design and development cycle for WPI's intelligent ground vehicle, Prometheus. The on-board intelligence algorithms include lane detection, obstacle avoidance, path planning, world representation, and waypoint navigation. The authors present experimental results and discuss practical implementations of the intelligence algorithms used on the robot.
ISBN (Print): 9780819489487
Loop closing is a fundamental part of 3D simultaneous localization and mapping (SLAM) that can greatly enhance the quality of long-term mapping. It is essential for the creation of globally consistent maps. Conceptually, loop closing is divided into detection and optimization. Recent approaches depend on a single sensor to recognize previously visited places in the loop detection stage. In this study, we combine data of multiple sensors such as GPS, vision, and laser range data to enhance detection results in repetitively changing environments that are not sufficiently explained by a single sensor. We present a fast and robust hierarchical loop detection algorithm for outdoor robots to achieve a reliable environment representation even if one or more sensors fail.
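A hierarchical loop detector of the kind described can be sketched as a cheap coarse gate (e.g. GPS proximity) followed by a more expensive appearance check (e.g. image-descriptor similarity). The two-stage structure is from the abstract; the specific gate radius, descriptor format, and threshold below are assumptions.

```python
import math

def gps_gate(p, q, radius_m=10.0):
    """Coarse stage: are two local-frame positions within a loop radius?"""
    return math.hypot(p[0] - q[0], p[1] - q[1]) <= radius_m

def appearance_score(desc_a, desc_b):
    """Fine stage: cosine similarity between whole-image descriptors
    (e.g. bag-of-words histograms; the format is an assumption)."""
    dot = sum(a * b for a, b in zip(desc_a, desc_b))
    na = math.sqrt(sum(a * a for a in desc_a))
    nb = math.sqrt(sum(b * b for b in desc_b))
    return dot / (na * nb) if na and nb else 0.0

def detect_loop(cur, keyframes, sim_thresh=0.8):
    """Return the first keyframe passing both stages, or None.
    `cur` and each keyframe are (position, descriptor) pairs."""
    pos, desc = cur
    for kf_pos, kf_desc in keyframes:
        if gps_gate(pos, kf_pos) and appearance_score(desc, kf_desc) >= sim_thresh:
            return (kf_pos, kf_desc)
    return None
```

Running the cheap gate first keeps the per-frame cost low, and requiring agreement between independent sensors is what makes the detection robust when any single sensor is unreliable.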
ISBN (Print): 9780819489487
The Intelligent Ground Vehicle Competition (IGVC) is one of four unmanned-systems student competitions founded by the Association for Unmanned Vehicle Systems International (AUVSI). The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics, and mobile platform fundamentals to design and build an unmanned system. Teams from around the world focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 19 years, the competition has challenged undergraduate, graduate, and Ph.D. students with real-world applications in intelligent transportation systems, the military, and manufacturing automation. To date, teams from almost 80 universities and colleges have participated. This paper describes some of the applications of the technologies required by this competition and discusses the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the four-day competition are highlighted. Finally, an assessment of the competition based on participation is presented.
ISBN (Print): 9780819489487
This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks, ranging from computer vision to path planning. For the 2011 competition, our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU of our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to the GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily optimized library functions is more difficult, and a much less efficient use of time.
ISBN (Print): 9780819489487
Robotic vision is nowadays one of the most challenging branches of robotics. In the case of a humanoid robot, a robust vision system has to provide an accurate representation of the surrounding world and to cope with all the constraints imposed by the hardware architecture and the locomotion of the robot. Usually humanoid robots have low computational capabilities that limit the complexity of the developed algorithms. Moreover, their vision system should perform in real time, so a compromise between complexity and processing time has to be found. This paper presents a reliable implementation of a modular vision system for a humanoid robot to be used in color-coded environments. From image acquisition, to camera calibration and object detection, the system that we propose integrates all the functionalities needed for a humanoid robot to accurately perform given tasks in color-coded environments. The main contributions of this paper are the implementation details that allow the use of the vision system in real time even with low processing capabilities, the innovative self-calibration algorithm for the most important parameters of the camera, and its modularity, which allows its use with different robotic platforms. Experimental results have been obtained with a NAO robot produced by Aldebaran, which is currently the robotic platform used in the RoboCup Standard Platform League, as well as with a humanoid built using the Bioloid Expert Kit from Robotis. As practical examples, our vision system can be efficiently used in real time for the detection of the objects of interest for a soccer-playing robot (ball, field lines, and goals) as well as for navigating through a maze with the help of color-coded clues. In the worst-case scenario, all the objects of interest in a soccer game, using a NAO robot with a single-core 500 MHz processor, are detected in less than 30 ms. Our vision system also includes an algorithm for self-calibration of the camera parameters as well
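Color-coded object detection typically starts by classifying each pixel into a calibrated color class. A minimal sketch of that step is below; the HSV ranges are illustrative assumptions, standing in for values produced by the calibration stage, and real-time systems usually bake such ranges into a precomputed lookup table.

```python
def classify_pixel(h, s, v, table):
    """Assign a pixel to a color class from calibrated HSV ranges.
    `table` maps class name -> (h_lo, h_hi, s_min, v_min).  Hue ranges may
    wrap around the hue circle (h_lo > h_hi).  All thresholds here are
    illustrative, not real calibration output."""
    for name, (h_lo, h_hi, s_min, v_min) in table.items():
        in_hue = (h_lo <= h <= h_hi) if h_lo <= h_hi else (h >= h_lo or h <= h_hi)
        if in_hue and s >= s_min and v >= v_min:
            return name
    return "unknown"

# Hypothetical classes for a color-coded soccer field (hue in degrees).
TABLE = {
    "ball":  (10, 25, 100, 80),    # orange
    "field": (50, 90, 60, 40),     # green
}
```

On a low-power processor, the per-pixel branch above would be replaced by a single table lookup indexed by the packed HSV value, which is what makes sub-30 ms full-frame classification plausible.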
ISBN (Print): 9780819489487
This paper describes the design of a gesture-based Human Robot Interface (HRI) for an autonomous mobile robot entered in the 2010 Intelligent Ground Vehicle Competition (IGVC). While the robot is meant to operate autonomously in the various Challenges of the competition, an HRI is useful for moving the robot to the starting position and after run termination. In this paper, a user-friendly gesture-based embedded system called the Magic Glove is developed for remote control of a robot. The system, worn by the operator as a glove, consists of a microcontroller and sensors capable of recognizing hand signals, which are then transmitted through wireless communication to the robot. The design of the Magic Glove included contributions on two fronts: hardware configuration and algorithm development. A triple-axis accelerometer used to detect hand orientation passes the information to a microcontroller, which interprets the corresponding vehicle control command. A Bluetooth device interfaced to the microcontroller then transmits the information to the vehicle, which acts accordingly. The user-friendly Magic Glove was first successfully demonstrated in a Player/Stage simulation environment. The gesture-based functionality was then also successfully verified on an actual robot and demonstrated to judges at the 2010 IGVC.
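The orientation-to-command step can be sketched as a threshold test on the accelerometer's gravity vector: with the hand level, gravity lies along one axis, and tilting the hand shifts it onto the pitch or roll axis. Axis conventions and thresholds below are assumptions, not the Magic Glove's actual calibration.

```python
def glove_command(ax, ay, az, thresh=0.5):
    """Map an accelerometer reading (axes in g) to a drive command.
    Tilting the hand forward/back moves gravity onto the x axis (pitch);
    tilting left/right moves it onto y (roll).  Axis signs and the 0.5 g
    threshold are illustrative assumptions."""
    if ax > thresh:
        return "forward"
    if ax < -thresh:
        return "reverse"
    if ay > thresh:
        return "right"
    if ay < -thresh:
        return "left"
    return "stop"
```

Checking pitch before roll gives forward/reverse priority on ambiguous diagonal tilts; a real controller would also debounce readings so brief hand tremors do not toggle commands.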