ISBN (print): 9781628414967
In this paper, we present an enhanced loop closure method based on image-to-image matching that relies on quantized local Zernike moments. In contrast to previous methods, our approach uses additional depth information to extract Zernike moments in a local manner. These moments are used to represent holistic shape information inside the image. The moments in complex space that are extracted from both grayscale and depth images are coarsely quantized. To determine the similarity between two locations, a nearest neighbour (NN) classification algorithm is performed. Exemplary results and a practical implementation of the method are given using data gathered on a testbed with a Kinect. The method is evaluated on three datasets with different lighting conditions. Combining the additional depth information with the intensity image increases the detection rate, especially in dark environments. The results demonstrate a successful, high-fidelity online method for visual place recognition as well as for closing navigation loops, which is crucial information for the well-known simultaneous localization and mapping (SLAM) problem. The technique is also practically applicable owing to its low computational complexity and its ability to run in real time with high loop-closing accuracy.
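As a rough illustration of the matching stage only, the sketch below coarsely quantizes precomputed complex Zernike moment vectors from the grayscale and depth images and compares places with a 1-NN search. The helper names, the number of quantization levels, and the Hamming distance are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def quantize_moments(moments, levels=4):
    """Coarsely quantize complex Zernike moments (hypothetical helper).

    Real and imaginary parts are independently binned into `levels`
    bins, mirroring a coarse quantization in complex space."""
    def binned(part):
        lo, hi = part.min(), part.max()
        scaled = (part - lo) / (hi - lo + 1e-9)          # normalize to [0, 1)
        return np.minimum((scaled * levels).astype(np.int8), levels - 1)
    return np.concatenate([binned(moments.real), binned(moments.imag)])

def place_descriptor(gray_moments, depth_moments, levels=4):
    """Concatenate quantized moments from the grayscale and depth images."""
    return np.concatenate([quantize_moments(gray_moments, levels),
                           quantize_moments(depth_moments, levels)])

def nearest_place(query, database):
    """1-NN search over previously visited places; returns (index, distance)."""
    dists = [np.count_nonzero(query != d) for d in database]  # Hamming distance
    i = int(np.argmin(dists))
    return i, dists[i]
```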
Underwater environments present a considerable challenge for computer vision, since water is a scattering medium with substantial light absorption, made even more severe by turbidity. This poses significant problems for visual underwater navigation, object detection, tracking, and recognition. Previous works tackle the problem by using unreliable priors or expensive and complex devices. This paper adopts a physical underwater light attenuation model to enhance image quality and enable the application of traditional computer vision techniques to images acquired from underwater scenes. The proposed method simultaneously estimates the attenuation parameter of the medium and the depth map of the scene to compute the image irradiance, thus reducing the effect of the medium on the images. Our approach is based on a novel optical flow method capable of dealing with scattering media and a new technique that robustly estimates the medium parameters. Combined with structure-from-motion techniques, the depth map is estimated and a model-based restoration is performed. The method was tested with both simulated and real image sequences. The experimental images were acquired with a camera mounted on a Remotely Operated Vehicle (ROV) navigating in naturally lit, shallow seawater. The results show that the proposed technique allows substantial restoration of the images, thereby improving the ability to identify and match features, which in turn is an essential step for other computer vision algorithms such as object detection and tracking, and for autonomous navigation.
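The final restoration step can be sketched under a simplified Beer-Lambert attenuation model, I(x) = J(x) * exp(-c * d(x)); the paper itself jointly estimates c and the depth map via optical flow and structure-from-motion, which is not shown here, and the coefficients and function names below are illustrative assumptions.

```python
import numpy as np

def restore_irradiance(image, depth, attenuation):
    """Model-based restoration sketch: invert per-channel exponential
    attenuation given a metric depth map `depth` (camera-to-scene
    distance) and per-channel coefficients `attenuation` (1/m)."""
    image = image.astype(np.float64) / 255.0
    # Transmission along each viewing ray, one exponential per channel.
    transmission = np.exp(-depth[..., None] * np.asarray(attenuation))
    restored = image / np.maximum(transmission, 1e-3)  # avoid far-range blow-up
    return np.clip(restored, 0.0, 1.0)

# Example: uniform 5 m scene; red attenuates fastest in seawater.
img = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
depth = np.full((480, 640), 5.0)
out = restore_irradiance(img, depth, [0.07, 0.10, 0.40])  # BGR order assumed
```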
ISBN (print): 9780819499424
Improvements were made to the intelligence algorithms of an autonomously operating ground vehicle, Q, which competed in the 2013 Intelligent Ground Vehicle Competition (IGVC). The IGVC required the vehicle to first navigate between two white lines on a grassy obstacle course, then pass through eight GPS waypoints, and finally traverse an obstacle field. Modifications to Q included a new vision system with a more effective image processing algorithm for white line extraction. The path-planning algorithm was adapted to the new vision system, producing smoother, more reliable navigation. With these improvements, Q successfully completed the basic autonomous navigation challenge, finishing tenth out of over 50 teams.
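A plausible white-line extraction pipeline of the kind described (not the team's actual code) thresholds low-saturation, high-value pixels in HSV and fits line segments with a probabilistic Hough transform; the threshold bounds and Hough parameters below are assumptions.

```python
import cv2
import numpy as np

def extract_white_lines(bgr):
    """Illustrative white-line extractor for a grassy course: HSV
    threshold, morphological cleanup, then Hough segment fitting."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # White paint: any hue, low saturation, high value (assumed bounds).
    mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=15)
    return mask, ([] if lines is None else lines.reshape(-1, 4))
```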
ISBN (print): 9780819499424
Ro-Boat is an autonomous river-cleaning intelligent robot that combines mechanical design and computer vision algorithms to achieve autonomous river cleaning and help provide a sustainable environment. Ro-Boat is designed in a modular fashion, with design details covering mechanical structural design, hydrodynamic design, and vibrational analysis. It comprises a stable mechanical system with air and water propulsion, robotic arms, and a solar energy source, and it is made autonomous by using computer vision. Both HSV color-space features and SURF features are proposed as measurements in a Kalman filter, resulting in extremely robust pollutant tracking. The system has been tested with successful results in the Yamuna River in New Delhi. We foresee that a system of Ro-Boats working autonomously 24/7 could clean a major river in a city in about six months, which is unmatched by alternative methods of river cleaning.
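To make the measurement-fusion idea concrete, here is a minimal constant-velocity Kalman filter sketch where the measurement z would be the pixel centroid of an HSV color blob or a SURF-feature match, as the abstract describes; the state model, loop period, and noise levels are assumptions.

```python
import numpy as np

def make_kalman(dt=0.1):
    """Constant-velocity filter: state = (u, v, du, dv) in pixels."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]], float)   # state transition
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # observe position only
    Q = np.eye(4) * 1e-2      # process noise (assumed)
    R = np.eye(2) * 4.0       # measurement noise, pixels^2 (assumed)
    return F, H, Q, R, np.zeros(4), np.eye(4) * 100.0

def kf_step(F, H, Q, R, x, P, z):
    """One predict/update cycle with a pollutant detection z = (u, v)."""
    x = F @ x                                  # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (np.asarray(z, float) - H @ x) # update
    P = (np.eye(4) - K @ H) @ P
    return x, P
```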
ISBN (print): 9780819499424
In a milking robot, the correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups; moreover, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g., the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information, from which the 3D position of each teat is computed. This information is then sent to the milking robot for teat cup positioning. The vision system runs in real time and maintains optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system, with the best performance obtained with RGBD cameras; this latter technology will be used in future real-life experimental tests.
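The final 3D position computation can be sketched as a pinhole back-projection of the segmented teat region; this is a generic sketch, not the paper's code, and it assumes a metric depth map, a binary segmentation mask, and calibrated intrinsics fx, fy, cx, cy.

```python
import numpy as np

def teat_position_3d(depth, mask, fx, fy, cx, cy):
    """Back-project a segmented teat region to a 3D point (camera frame).

    `depth` is a metric depth map (TOF or RGBD), `mask` a binary teat
    segmentation from the combined 2D/3D cue."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    valid = z > 0                    # drop missing depth readings
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx            # pinhole back-projection
    y = (v - cy) * z / fy
    return np.array([x.mean(), y.mean(), z.mean()])  # teat position estimate
```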
ISBN (print): 9780819499424
Service robots usually share their workspace with people. Typically, a robot's tasks require knowing when and where people are in order to schedule requested tasks. The robot must therefore take the presence of humans into account when planning its actions, which requires knowledge of its environment: in practice, knowing when (times and event durations) and where (in the workspace) its tasks can be performed. This paper takes steps towards obtaining the spatial and temporal information required by task-planning software for a robot. To this end, a program is created that defines meaningful areas or zones in the robot workspace by means of clustering, tied to statistically reasoned time slots in which to perform each task. The software is tested using real data obtained from cameras located along the corridors of the CSE Department of the University of Oulu.
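One way to realize this idea (the abstract does not specify the clustering method, so the k-means choice, k, and the occupancy threshold below are assumptions) is to cluster (x, y) people detections into zones and mark low-occupancy hours as candidate task slots.

```python
import numpy as np

def workspace_zones(detections, k=4, iters=50, seed=0):
    """Plain k-means sketch grouping (x, y) detections into zones."""
    rng = np.random.default_rng(seed)
    centers = detections[rng.choice(len(detections), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(
            detections[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([detections[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])         # keep empty clusters
    return centers, labels

def free_time_slots(timestamps_h, occupancy_threshold=0.1):
    """Hourly occupancy histogram for one zone; hours below the
    threshold are candidate slots for scheduling robot tasks."""
    counts, _ = np.histogram(timestamps_h, bins=24, range=(0, 24))
    rate = counts / max(counts.sum(), 1)
    return np.nonzero(rate < occupancy_threshold)[0]
```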
ISBN (print): 9780819499424
The Intelligent Ground Vehicle Competition (IGVC) is one of four unmanned-systems student competitions founded by the Association for Unmanned Vehicle Systems International (AUVSI). The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics, and mobile platform fundamentals to design and build an unmanned system. Teams from around the world focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 21 years, the competition has challenged undergraduate, graduate, and Ph.D. students with real-world applications in intelligent transportation systems, the military, and manufacturing automation. To date, teams from over 80 universities and colleges have participated. This paper describes some of the applications of the technologies required by the competition and discusses its educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the four-day competition are highlighted. Finally, an assessment of the competition based on participation is presented.
ISBN (print): 9780819499424
This paper describes the development of a low-cost PID controller with an intelligent behavior coordination system for an autonomous mobile robot equipped with IR sensors, ultrasonic sensors, a regulator, and RC filters on a platform based on an HCS12 microcontroller and embedded systems. A novel hybrid PID controller and behavior coordination system is developed for wall-following navigation and obstacle avoidance. The adaptive control used in this robot is a hybrid PID algorithm combined with template and behavior coordination models. The software comprises motor control, the behavior coordination intelligent system, and sensor fusion; a module-based programming technique is adopted to improve the efficiency of integrating the hybrid PID, template, and behavior coordination algorithms. The motor control, obstacle avoidance, and wall-following navigation algorithms propel and steer the autonomous mobile robot. The hardware configuration and the module-based technique are described in this paper. Experimental results demonstrate that the robot is successfully guided by the hybrid PID controller and behavior coordination system for wall-following navigation with obstacle avoidance.
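A minimal sketch of the two pieces the abstract names is given below: a textbook PID loop and a simple behavior-coordination rule in which obstacle avoidance pre-empts wall following. The gains, distances, and loop period are illustrative assumptions, not values from the paper.

```python
class PID:
    """Textbook PID controller with output clamping (illustrative gains)."""
    def __init__(self, kp, ki, kd, dt=0.02, limit=100.0):
        self.kp, self.ki, self.kd, self.dt, self.limit = kp, ki, kd, dt, limit
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.limit, min(self.limit, out))   # clamp actuator output

def wall_follow_step(pid, side_distance, front_distance,
                     target=0.30, obstacle=0.25):
    """Behavior coordination sketch: if the front sonar reports a close
    obstacle, avoidance overrides the wall-following PID; otherwise the
    PID steers to hold `target` meters from the wall."""
    if front_distance < obstacle:
        return -60.0                                    # hard turn away
    return pid.update(target - side_distance)           # steering command

steer = wall_follow_step(PID(kp=80.0, ki=2.0, kd=10.0), 0.35, 1.2)
```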
ISBN (print): 9780819499424
In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms that operate on images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems based exclusively on color-model algorithms produce errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm instead detects the corners of field lines using the omni-directional vision system. In the mid-size league of the RoboCup soccer competition in particular, self-localization algorithms based on extracting the white lines of the soccer field have become increasingly popular, since white lines are less influenced by lighting than the color models of the goals. We therefore propose an algorithm that transforms the omni-directional image into an unwrapped image, enhancing feature extraction. The process is as follows: first, radial scan lines are used to process the omni-directional image, reducing the computational load and improving system efficiency. The lines are arranged radially around the center of the omni-directional camera image, resulting in a shorter computational time than with a traditional Cartesian scan. However, the omni-directional image is a distorted image, which makes it difficult to recognize the
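The radial-scan unwrapping step can be sketched as a polar-to-rectangular resampling, where each output column is one radial scan line of the source image; the mirror center, radius range, and resolution below are assumed parameters, not the authors' values.

```python
import numpy as np

def unwrap_omni(image, center, r_min, r_max, n_angles=360, n_radii=100):
    """Unwrap an omni-directional image into a rectangular panorama by
    sampling along radial scan lines around `center` (cx, cy)."""
    h, w = image.shape[:2]
    theta = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(r_min, r_max, n_radii)
    # Grid of sample coordinates: one radial scan line per output column.
    xs = (center[0] + radii[:, None] * np.cos(theta)[None]).round().astype(int)
    ys = (center[1] + radii[:, None] * np.sin(theta)[None]).round().astype(int)
    xs = np.clip(xs, 0, w - 1)
    ys = np.clip(ys, 0, h - 1)
    return image[ys, xs]     # shape (n_radii, n_angles[, channels])
```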
ISBN (print): 9780819499424
In many robotics and automation applications, it is often required to detect a given object and determine its pose (position and orientation) from input images with high speed, high robustness to photometric changes, and high pose accuracy. We propose a new object matching method that improves efficiency over existing approaches by decomposing orientation and position estimation into two cascaded steps. In the first step, an initial position and orientation are found by matching with Histograms of Oriented Gradients (HOG), reducing the orientation search from 2D template matching to 1D correlation matching. In the second step, a more precise orientation and position are computed by matching based on the Dominant Orientation Template (DOT), using robust edge orientation features. The cascaded combination of the HOG and DOT features for high-speed, robust object matching is the key novelty of the proposed method. Experimental evaluation was performed on real-world single-object and multi-object inspection datasets, using software implementations on an Atom CPU platform. Our results show that the proposed method achieves a significant speed improvement over an already accelerated template matching method at comparable accuracy.
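The first cascade step's reduction from 2D to 1D can be illustrated as follows: a rotation of the object cyclically shifts its gradient-orientation histogram, so a coarse in-plane rotation can be recovered by circular 1D correlation of two histograms rather than by matching rotated 2D templates. This is a sketch of that principle under assumed bin counts, not the paper's implementation, and the DOT refinement step is not shown.

```python
import numpy as np

def orientation_histogram(gray, bins=36):
    """Global magnitude-weighted histogram of gradient orientations."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)

def coarse_rotation(template_gray, scene_gray, bins=36):
    """Estimate in-plane rotation (degrees) by circular 1D correlation
    of the two orientation histograms."""
    ht = orientation_histogram(template_gray, bins)
    hs = orientation_histogram(scene_gray, bins)
    scores = [np.dot(np.roll(ht, s), hs) for s in range(bins)]
    return np.argmax(scores) * (360.0 / bins)
```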