ISBN: 9781628414967 (print)
This paper presents improvements made to the intelligence algorithms employed on Q, an autonomous ground vehicle, for the 2014 Intelligent Ground Vehicle Competition (IGVC). In 2012, the IGVC committee combined the formerly separate autonomous and navigation challenges into a single AUT-NAV challenge. In this new challenge, the vehicle must navigate through a grassy obstacle course while staying within the course boundaries (a lane of two white painted lines) that guide it toward a given GPS waypoint. Once the vehicle reaches this waypoint, it enters an open course where it must navigate to another GPS waypoint while avoiding obstacles. After reaching the final waypoint, the vehicle must traverse another obstacle course to complete the run. Q uses a modular, parallel software architecture in which image processing, navigation, and sensor control algorithms run concurrently. A tuned navigation algorithm allows Q to maneuver smoothly through obstacle fields. For the 2014 competition, most revisions occurred in the vision system, which detects white lines and informs the navigation component. Barrel obstacles of various colors presented a new challenge for image processing: the previous color-plane extraction algorithm no longer sufficed. To overcome this difficulty, laser range sensor data were overlaid on the visual data. Q also participates in the Joint Architecture for Unmanned Systems (JAUS) challenge at IGVC. For 2014, significant updates were implemented: the JAUS component accepted a greater variety of messages and complied more closely with the JAUS technical standard. With these improvements, Q secured second place in the JAUS competition.
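The abstract does not give details of the laser-on-vision overlay, but the idea of projecting planar laser returns into the camera frame so that obstacles of any color can be masked before line extraction can be sketched as below. All intrinsics, extrinsics, and the axis conventions are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: overlay 2D laser range data on a camera image so obstacles
# of arbitrary color can be masked out before white-line extraction.
# K, R, t are placeholder calibration values, not the team's.
import numpy as np

K = np.array([[600.0, 0.0, 320.0],      # assumed pinhole intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                            # assumed laser-to-camera rotation
t = np.array([0.0, 0.2, 0.0])            # assumed laser-to-camera translation (m)

def project_scan(ranges, angle_min, angle_inc):
    """Convert a planar laser scan to pixel coordinates in the camera image."""
    angles = angle_min + angle_inc * np.arange(len(ranges))
    # Laser frame: x forward, y left; camera frame: z forward, x right, y down.
    pts = np.stack([-np.sin(angles) * ranges,        # x_cam (right)
                    np.zeros_like(ranges),           # y_cam (down, scan plane)
                    np.cos(angles) * ranges], 1)     # z_cam (forward)
    pts = pts @ R.T + t
    pts = pts[pts[:, 2] > 0.1]                       # keep points in front of camera
    uv = pts @ K.T
    return (uv[:, :2] / uv[:, 2:3]).astype(int)

def mask_obstacles(image, uv, radius=8):
    """Zero out image pixels near projected laser returns (obstacle regions)."""
    h, w = image.shape[:2]
    for u, v in uv:
        if 0 <= u < w and 0 <= v < h:
            image[max(0, v - radius):v + radius, max(0, u - radius):u + radius] = 0
    return image
```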
ISBN: 9781628414967 (print)
A fish-eye lens is a short-focal-length lens (f ≈ 6–16 mm) whose field of view (FOV) approaches or even exceeds 180° × 180°. The literature shows that a multiple-view geometry system built from fish-eye lenses yields a larger stereo field than a traditional stereo vision system based on a pair of perspective-projection images. Since a fish-eye camera usually has a wider-than-hemispherical FOV, most image processing approaches based on the pinhole camera model for conventional stereo vision are unsuitable for stereo systems built from fish-eye lenses. This paper focuses on the calibration and epipolar rectification method for a novel machine vision system composed of four fish-eye lenses, called the Special Stereo Vision System (SSVS). The characteristic of SSVS is that it can produce 3D coordinate information over the whole global observation space while simultaneously acquiring a 360° × 360° panoramic image with no blind area, using a single vision device and a single static shot. Parameter calibration and epipolar rectification are the basis for SSVS to realize 3D reconstruction and panoramic image generation.
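The paper's own calibration procedure for the SSVS is not reproduced in this abstract. As a point of reference, a generic fish-eye intrinsic calibration with OpenCV's equidistant (Kannala-Brandt) model, which covers FOVs up to roughly a hemisphere, might look like the sketch below; checkerboard size, file names, and flags are assumptions.

```python
# Illustrative fish-eye intrinsic calibration with OpenCV's fisheye module.
# This is a generic sketch, not the SSVS method from the paper.
import cv2
import glob
import numpy as np

CB = (9, 6)                                   # assumed inner-corner checkerboard size
objp = np.zeros((1, CB[0] * CB[1], 3), np.float32)
objp[0, :, :2] = np.mgrid[0:CB[0], 0:CB[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for fname in glob.glob("fisheye_*.png"):      # assumed image naming
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    ok, corners = cv2.findChessboardCorners(gray, CB)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners.reshape(1, -1, 2))

K = np.zeros((3, 3))
D = np.zeros((4, 1))
rms, K, D, _, _ = cv2.fisheye.calibrate(
    obj_pts, img_pts, size, K, D,
    flags=cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_FIX_SKEW)
print("RMS reprojection error:", rms)

# Rectification maps for undistorting subsequent frames.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, size, cv2.CV_16SC2)
```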
ISBN: 9781628414967 (print)
In this paper, we present an enhanced loop closure method based on image-to-image matching that relies on quantized local Zernike moments. In contrast to previous methods, our approach uses additional depth information to extract Zernike moments in a local manner. These moments represent holistic shape information inside the image. The complex-valued moments extracted from both grayscale and depth images are coarsely quantized. To determine the similarity between two locations, a nearest-neighbour (NN) classification is performed. Exemplary results and a practical implementation of the method are given using data gathered on a testbed with a Kinect. The method is evaluated on three datasets with different lighting conditions. Combining depth information with the intensity image increases the detection rate, especially in dark environments. The results demonstrate a successful, high-fidelity online method for visual place recognition and for closing navigation loops, which is crucial information for the well-known simultaneous localization and mapping (SLAM) problem. The technique is also practical owing to its low computational complexity and its ability to run in real time with high loop-closing accuracy.
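One plausible reading of the pipeline, sketched below under stated assumptions: compute Zernike moments on a grid of local patches of both the grayscale and depth images (here via the mahotas library, which returns moment magnitudes), coarsely quantize them, and compare locations by nearest neighbour. Patch size, degree, and quantization step are illustrative, not the paper's values.

```python
# Hedged sketch of local-Zernike-moment place descriptors with NN matching.
import numpy as np
import mahotas

def local_zernike_descriptor(img, patch=64, degree=8, step=0.5):
    """Concatenate coarsely quantized Zernike moments of non-overlapping patches."""
    h, w = img.shape
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = img[y:y + patch, x:x + patch]
            zm = mahotas.features.zernike_moments(block, radius=patch // 2,
                                                  degree=degree)
            feats.append(np.round(zm / step))      # coarse quantization
    return np.concatenate(feats)

def describe_location(gray, depth):
    """Fuse grayscale and depth cues into a single holistic descriptor."""
    return np.concatenate([local_zernike_descriptor(gray),
                           local_zernike_descriptor(depth)])

def nearest_neighbour(query, database):
    """Return index and distance of the most similar stored location."""
    dists = [np.linalg.norm(query - d) for d in database]
    i = int(np.argmin(dists))
    return i, dists[i]
```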
ISBN: 9780819494351 (print)
The proceedings contain 25 papers. The topics discussed include: control issues and recent solutions for voltage controlled piezoelectric elements utilizing artificial neural networks; the 20th annual intelligent ground vehicle competition: building a generation of robotists; panoramic stereo sphere vision; loop closure detection using local Zernike moment patterns; stabilization and control of quad-rotor helicopter using a smartphone device; optimizing feature selection strategy for adaptive object identification in noisy environment; remotely controlling of mobile robots using gesture captured by the kinect and recognized by machine learning method; natural image understanding using algorithm selection and high-level feedback; finger tracking for hand-held device interface using profile-matching stereo vision; and a restrained-torque-based motion instructor: forearm flexion/extension-driving exoskeleton.
Underwater environments present a considerable challenge for computer vision, since water is a scattering medium with substantial light absorption, made even more severe by turbidity. This poses significant problems for visual underwater navigation, object detection, tracking, and recognition. Previous works tackle the problem using unreliable priors or expensive and complex devices. This paper adopts a physical underwater light attenuation model to enhance image quality and enable the application of traditional computer vision techniques to images acquired from underwater scenes. The proposed method simultaneously estimates the attenuation parameter of the medium and the depth map of the scene to compute the image irradiance, thus reducing the effect of the medium on the images. Our approach is based on a novel optical flow method capable of dealing with scattering media and a new technique that robustly estimates the medium parameters. Combined with structure-from-motion techniques, the depth map is estimated and a model-based restoration is performed. The method was tested with both simulated and real image sequences. The experimental images were acquired with a camera mounted on a Remotely Operated Vehicle (ROV) navigating in naturally lit, shallow seawater. The results show that the proposed technique allows substantial restoration of the images, thereby improving the ability to identify and match features, which in turn is an essential step for other computer vision algorithms such as object detection, tracking, and autonomous navigation.
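A minimal sketch of such model-based restoration, assuming a simple attenuation model I(x) = J(x)·exp(-c·d(x)) + B·(1 - exp(-c·d(x))), where J is the unattenuated irradiance, d the scene depth, c the attenuation coefficient, and B the veiling light. The paper's exact model and estimators may differ; c and B here stand in for the outputs of its robust parameter-estimation step.

```python
# Invert a simple underwater attenuation model per pixel, given a depth map.
import numpy as np

def restore(image, depth, c, B):
    """Restore irradiance J from image in [0, 1] and depth in metres."""
    trans = np.exp(-c * depth)                # medium transmission exp(-c d)
    trans = np.clip(trans, 1e-3, 1.0)         # avoid amplifying noise at long range
    J = (image - B * (1.0 - trans[..., None])) / trans[..., None]
    return np.clip(J, 0.0, 1.0)

# Example with synthetic data; c and B are placeholder estimates.
img = np.random.rand(480, 640, 3)
d = np.full((480, 640), 2.5)                  # depth map, e.g. from structure-from-motion
print(restore(img, d, c=0.4, B=0.2).shape)
```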
ISBN: 9780819499424 (print)
Improvements were made to the intelligence algorithms of an autonomously operating ground vehicle, Q, which competed in the 2013 Intelligent Ground Vehicle Competition (IGVC). The IGVC required the vehicle to first navigate between two white lines on a grassy obstacle course, then travel through eight GPS waypoints, and finally pass through an obstacle field. Modifications to Q included a new vision system with a more effective image processing algorithm for white-line extraction. The path-planning algorithm was adapted to the new vision system, producing smoother, more reliable navigation. With these improvements, Q successfully completed the basic autonomous navigation challenge, finishing tenth out of over 50 teams.
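The abstract does not specify the white-line extraction algorithm; a common approach of the kind described, sketched here with assumed thresholds and parameters, is to isolate bright, low-saturation pixels in HSV space and fit line segments with a probabilistic Hough transform.

```python
# Hedged sketch of white-line extraction on grass; thresholds are illustrative.
import cv2
import numpy as np

def extract_white_lines(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # White paint on grass: low saturation, high value.
    mask = cv2.inRange(hsv, (0, 0, 200), (180, 60, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)
    return mask, lines   # line endpoints would feed the path planner
```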
ISBN: 9780819499424 (print)
Ro-Boat is an autonomous river-cleaning intelligent robot that combines mechanical design and computer vision algorithms to achieve autonomous river cleaning and provide a sustainable environment. Ro-Boat is designed in a modular fashion, with design details covering mechanical structural design, hydrodynamic design, and vibrational analysis. It incorporates a stable mechanical system with air and water propulsion, robotic arms, and a solar energy source, and it is made autonomous using computer vision. Both HSV color-space features and SURF features are proposed as measurements for a Kalman filter, resulting in extremely robust pollutant tracking. The system has been tested with successful results in the Yamuna River in New Delhi. We foresee that a system of Ro-Boats working autonomously 24x7 could clean a major river in a city in about six months, which is unmatched by alternative methods of river cleaning.
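In the spirit of the HSV-plus-Kalman pipeline described, a minimal single-measurement sketch follows: segment the pollutant by an assumed HSV range, take the mask centroid as the measurement, and fuse it with a constant-velocity Kalman filter. (The paper also uses SURF measurements; only the color branch is shown, and the hue range and noise covariances are assumptions.)

```python
# Sketch: color-based pollutant tracking with a constant-velocity Kalman filter.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                      # state (x, y, vx, vy), meas (x, y)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track_frame(bgr):
    """Predict, then correct with the centroid of the HSV-segmented pollutant."""
    pred = kf.predict()
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 80, 80), (40, 255, 255))   # assumed pollutant hue
    m = cv2.moments(mask)
    if m["m00"] > 0:                                        # detection available
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        kf.correct(np.array([[cx], [cy]], np.float32))
    return float(pred[0, 0]), float(pred[1, 0])
```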
ISBN: 9780819499424 (print)
In a milking robot, correct localization and positioning of the milking teat cups is of very high importance. Milking-robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Moreover, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g., the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-of-Flight (TOF) and RGBD cameras. The proposed algorithms permit online segmentation of the teats by combining 2D and 3D visual information, from which the 3D position of each teat is computed. This information is then sent to the milking robot for teat-cup positioning. The vision system runs in real time and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system; the best performance was obtained with RGBD cameras, and this technology will be used in future real-life experimental tests.
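A much-simplified sketch of 2D/3D fusion of the kind the abstract describes: intersect a color mask from the RGB image with a depth band from the registered depth image, then take connected-component centroids as candidate teat positions. All thresholds are placeholders; the paper's actual segmentation is more involved.

```python
# Sketch: combine a color mask (2D) with a depth band (3D) to segment teats.
import cv2
import numpy as np

def segment_teats(bgr, depth_m, d_min=0.4, d_max=0.9):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    color_mask = cv2.inRange(hsv, (0, 30, 60), (25, 255, 255))   # assumed skin tones
    depth_mask = ((depth_m > d_min) & (depth_m < d_max)).astype(np.uint8) * 255
    mask = cv2.bitwise_and(color_mask, depth_mask)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Keep sufficiently large blobs; report (u, v, z) for the robot controller.
    return [(float(cx), float(cy), float(depth_m[int(cy), int(cx)]))
            for i, (cx, cy) in enumerate(centroids[1:], start=1)
            if stats[i, cv2.CC_STAT_AREA] > 100]
```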
ISBN: 9780819499424 (print)
Service robots usually share their workspace with people. Typically, a robot's tasks require knowing when and where people are in order to schedule the requested tasks. The robot therefore needs to take the presence of humans into account when planning its actions, and it must have knowledge of its environment; in practice, this means knowing when (times and durations of events) and where (in the workspace) the robot's tasks can be performed. This paper takes steps toward obtaining the spatial information required by software that plans the tasks a robot is to perform. To this end, a program is created that defines meaningful areas or zones in the robot workspace using clustering, together with statistically derived time slots indicating when to perform each task. The software is tested using real data obtained from cameras located along the corridors of the CSE Department of the University of Oulu.
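The clustering algorithm is not named in the abstract; one plausible realization of the idea, sketched below, clusters observed person positions into workspace zones with DBSCAN and tabulates per-zone activity by hour to suggest quiet time slots. The detections array, its units, and the working-hours window are assumptions for illustration.

```python
# Sketch: derive workspace zones and quiet time slots from person detections.
import numpy as np
from sklearn.cluster import DBSCAN

# Each row: (x, y) position in metres and the hour of day of the observation.
detections = np.array([[1.0, 2.1, 9], [1.2, 2.0, 9], [8.5, 3.0, 14],
                       [8.4, 3.2, 15], [1.1, 1.9, 10], [8.6, 2.9, 14]])

labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(detections[:, :2])

for zone in sorted(set(labels) - {-1}):            # -1 marks noise points
    hours = detections[labels == zone, 2].astype(int)
    busy = np.bincount(hours, minlength=24)
    quiet = int(np.argmin(busy[8:18]) + 8)          # quietest hour in 08:00-18:00
    print(f"zone {zone}: occupied at hours {sorted(set(hours))}, "
          f"suggest tasks around {quiet}:00")
```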