ISBN: (print) 0819464821
This paper presents an algorithm for solving three challenges of autonomous navigation: sensor signal processing, sensor integration, and path-finding. The algorithm organizes these challenges into three steps. The first step involves converting the raw data from each sensor to a form suitable for real-time processing; the emphasis in this step is on image processing. In the second step, the processed data from all sensors are integrated into a single map. Using this map as input, during the third step the algorithm calculates a goal and finds a suitable path from the robot to the goal. The method presented in this paper completes these steps in this order, and the steps repeat indefinitely. The robotic platform designed for testing the algorithm is a six-wheel, mid-wheel-drive system using differential steering. The robot, called Anassa II, has an electric wheelchair base and a custom-built top, and it is designed to participate in the Intelligent Ground Vehicle Competition (IGVC). The sensors consist of a laser scanner, a video camera, a Differential Global Positioning System (DGPS) receiver, a digital compass, and two wheel encoders. Since many intelligent vehicles have similar sensors, the approach presented here is general enough for many types of autonomous mobile robots.
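The sense-integrate-plan loop described above can be sketched in a few lines. Everything here is a hypothetical placeholder, not the paper's implementation: the sensor reading format, the occupancy-dictionary map, and the naive greedy planner are invented for illustration.

```python
# Minimal sketch of the three-step navigation loop: process raw sensor
# readings, fuse them into one map, then plan toward a goal. All data
# formats and the planner are illustrative assumptions.

def process_sensors(raw_readings):
    """Step 1: convert each raw reading to a common (x, y, occupied) form."""
    return [(r["x"], r["y"], r["hit"]) for r in raw_readings]

def integrate(processed, grid):
    """Step 2: fuse processed readings from all sensors into one occupancy map."""
    for x, y, hit in processed:
        grid[(x, y)] = hit
    return grid

def plan(grid, start, goal):
    """Step 3: naive greedy 4-connected search toward the goal."""
    path, pos, visited = [start], start, {start}
    while pos != goal:
        x, y = pos
        free = [c for c in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
                if not grid.get(c, False) and c not in visited]
        if not free:
            return path  # dead end in this naive sketch
        pos = min(free, key=lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1]))
        visited.add(pos)
        path.append(pos)
    return path

grid = integrate(process_sensors([{"x": 1, "y": 1, "hit": True}]), {})
path = plan(grid, (0, 0), (2, 2))
print(path)  # detours around the occupied cell at (1, 1)
```

In the paper the three steps repeat indefinitely; here a single pass is shown, and a real planner would replace the greedy search with something complete such as A*.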
This paper describes the design of a small autonomous vehicle based on the Helios computing platform, a custom FPGA-based board capable of supporting on-board vision. Target applications for the Helios computing platform are those that require lightweight equipment and low power consumption. To demonstrate the capabilities of FPGAs in real-time control of autonomous vehicles, a 16-inch-long R/C monster truck was outfitted with a Helios board. The platform provided by such a small vehicle is ideal for testing and development. The proof-of-concept application for this autonomous vehicle was a timed race through an environment with obstacles. Given the size restrictions of the vehicle and its operating environment, the only feasible on-board sensor is a small CMOS camera. The single video feed is therefore the only source of information from the surrounding environment. The image is segmented and processed by custom logic in the FPGA, which also controls the direction and speed of the vehicle based on visual input.
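One simple way to turn a segmented image into a steering command, in the spirit of the abstract above, is to compare free pixels on the left and right of a scanline. The threshold and steering rule below are illustrative assumptions, not the authors' FPGA logic (which is implemented in hardware, not Python).

```python
# Hedged sketch: segment one grayscale scanline into obstacle/free pixels
# and steer toward the side with more free pixels. Threshold and rule are
# invented for illustration.

OBSTACLE_THRESHOLD = 100  # intensities below this count as obstacle (assumed)

def steer_from_row(row):
    """Return 'left', 'right', or 'straight' from one grayscale scanline."""
    mid = len(row) // 2
    left_free = sum(1 for p in row[:mid] if p >= OBSTACLE_THRESHOLD)
    right_free = sum(1 for p in row[mid:] if p >= OBSTACLE_THRESHOLD)
    if left_free > right_free:
        return "left"
    if right_free > left_free:
        return "right"
    return "straight"

print(steer_from_row([30, 40, 200, 210, 220, 230]))  # dark obstacle on the left
```

An FPGA version would evaluate this comparison as pixels stream in, which is precisely the kind of per-pixel pipeline the Helios board is built for.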
ALVIN-VII is an autonomous vehicle designed to compete in the AUVSI Intelligent Ground Vehicle Competition (IGVC). The competition consists of two events, the Autonomous Challenge and the Navigation Challenge. Using a tri-processor control architecture, the information from sonar sensors, cameras, GPS, and compass is effectively integrated to map out the path of the robot. In the Autonomous Challenge, the real-time data from two FireWire web cameras and an array of four sonar sensors are plotted on a custom-defined polar grid to identify the position of the robot with respect to the obstacles in its path. Depending on the position of the obstacles in the grid, a state number is determined and a command of action is retrieved from the state table. The image processing algorithm comprises a series of steps involving plane extraction, morphological analysis, edge extraction, and interpolation, all of which are statistically based, allowing optimum operation under varying ambient conditions. In the Navigation Challenge, data from GPS and sonar sensors are integrated on a polar grid with flexible distance thresholds, and a state table approach is used to drive the vehicle to the next waypoint while avoiding obstacles. Both algorithms are developed and implemented using National Instruments (NI) hardware and LabVIEW software. The task of collecting and processing information in real time can be time-consuming and hence not reactive enough for moving robots. Of the three controllers, two perform the image processing separately, one per camera, while the third integrates the data received through an Ethernet connection.
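The polar-grid/state-table idea above can be sketched compactly: bin each obstacle into an angular sector, form a state number from the occupied-sector pattern, and look up an action. The sector count, the bit encoding, and the table entries here are invented for illustration; the paper's actual grid and table are not reproduced.

```python
# Sketch of a polar-grid state table. Each obstacle (x, y) relative to the
# robot is binned into one of N_SECTORS angular sectors; bit i of the state
# number is set if sector i is occupied. Table contents are hypothetical.

import math

N_SECTORS = 4  # assumed sector count, e.g. front-left, rear-left, rear-right, front-right

def state_number(obstacles):
    """Return an integer whose bit i is set if any obstacle lies in sector i."""
    state = 0
    for x, y in obstacles:
        theta = math.atan2(y, x) % (2 * math.pi)
        sector = int(theta / (2 * math.pi / N_SECTORS))
        state |= 1 << sector
    return state

# Hypothetical state table: state number -> command of action.
STATE_TABLE = {0: "forward", 1: "turn_left", 2: "turn_right", 3: "stop"}

obstacles = [(1.0, 0.5)]  # one obstacle ahead, slightly to the left
print(STATE_TABLE[state_number(obstacles)])
```

The attraction of the scheme is that all navigation logic lives in a precomputed table, so the per-cycle cost is one binning pass plus a lookup, which suits the real-time constraint the abstract emphasizes.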
This work has been performed in conjunction with the ECE Department's autonomous vehicle entry in the 2006 Intelligent Ground Vehicle Competition (***). The course to be traversed in the competition consists of a lane demarcated by paint lines on grass, along with other challenging artifacts such as a sandpit, a ramp, potholes, colored tarps, and obstacles set up using orange and white construction barrels. In this paper an enhanced obstacle detection and mapping algorithm based on region-based color segmentation techniques is described. The main purpose of this algorithm is to detect obstacles which are not properly identified, due to "shadowing", by the LADAR (Laser Detection and Ranging) system, optimally mounted close to the ground, a problem that occasionally results in bad navigation decisions. The camera that is primarily used to detect the lane lines, on the other hand, is mounted at 6 feet. In this work we concentrate on the identification of orange/red construction barrels. This paper proposes a generalized color segmentation technique which is potentially more versatile and faster than traditional full or partial color segmentation approaches. The developed algorithm identifies the shadowed items within the camera's field of vision and uses this to complement the LADAR information, thus facilitating an enhanced navigation strategy. The identification of barrels also aids in deleting bright objects from images which contain lane lines, which improves lane line identification.
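A minimal sketch of barrel-colored segmentation might classify pixels with a ratio test and then keep only sufficiently long runs, as below. The RGB thresholds and the minimum run length are assumptions for illustration; the paper's calibrated, region-based segmentation is considerably more sophisticated.

```python
# Illustrative orange-barrel segmentation on one scanline: classify each RGB
# pixel with a crude ratio test, then keep connected orange runs of at least
# min_len pixels. All thresholds are invented, not the paper's values.

def is_orange(r, g, b):
    """Crude orange test: strong red, little blue, red dominating green."""
    return r > 150 and b < 100 and r > 1.5 * g

def barrel_runs(scanline, min_len=3):
    """Return (start, length) of orange pixel runs at least min_len long."""
    runs, start = [], None
    for i, px in enumerate(scanline + [(0, 0, 0)]):  # sentinel closes final run
        if is_orange(*px):
            start = i if start is None else start
        elif start is not None:
            if i - start >= min_len:
                runs.append((start, i - start))
            start = None
    return runs

line = [(255, 120, 30)] * 4 + [(40, 200, 40)] * 2 + [(255, 100, 20)] * 2
print(barrel_runs(line))  # only the 4-pixel run survives the length filter
```

The run-length filter plays the role the abstract assigns to region-based reasoning: isolated orange pixels (noise) are discarded while coherent barrel regions survive, and the same mask can be reused to suppress bright barrel pixels during lane-line extraction.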
We have designed and implemented a fast predictive vision system for a mobile robot based on the principles of active vision. This vision system is part of a larger project to design a comprehensive cognitive architecture for mobile robotics. The vision system represents the robot's environment with a dynamic 3D world model based on a 3D gaming platform (Ogre3D). This world model contains a virtual copy of the robot and its environment, and outputs graphics showing what the virtual robot "sees" in the virtual world; this is what the real robot expects to see in the real world. The vision system compares this output in real time with the visual data. Any large discrepancies are flagged and sent to the robot's cognitive system, which constructs a plan for focusing on the discrepancies and resolving them, e.g. by updating the position of an object or by recognizing a new object. An object is recognized only once; thereafter its observed data are monitored for consistency with the predictions, greatly reducing the cost of scene understanding. We describe the implementation of this vision system and how the robot uses it to locate and avoid obstacles.
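The core comparison step, rendered prediction versus camera observation, reduces to flagging cells whose difference exceeds a tolerance. The frame layout (small intensity grids) and the threshold below are illustrative assumptions, not the system's actual image representation.

```python
# Sketch of prediction-vs-observation checking: compare the rendered
# expectation with the camera frame cell by cell and flag large differences
# for the cognitive system. Threshold and grid format are assumed.

DISCREPANCY_THRESHOLD = 50  # assumed tolerance on intensity difference

def flag_discrepancies(predicted, observed):
    """Return (row, col) cells where |predicted - observed| is large."""
    flags = []
    for r, (prow, orow) in enumerate(zip(predicted, observed)):
        for c, (p, o) in enumerate(zip(prow, orow)):
            if abs(p - o) > DISCREPANCY_THRESHOLD:
                flags.append((r, c))
    return flags

predicted = [[100, 100], [100, 100]]   # what the virtual robot "sees"
observed  = [[105, 100], [100, 200]]   # a new object appears bottom-right
print(flag_discrepancies(predicted, observed))  # -> [(1, 1)]
```

The economy the abstract claims follows from this structure: matching cells cost almost nothing, and only the flagged cells are handed to the expensive recognition machinery.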
The extremely wide field of view of omni-vision is highly advantageous for vehicle navigation and target detection. However, detecting moving targets through omni-vision mounted on an AGV (Automatic Guided Vehicle) involves more complex environments, where both the targets and the vehicle are moving: the moving targets must be detected against a moving background. After analyzing the characteristics of omnidirectional vision and imagery, we propose to detect moving objects by estimating optical flow fields and applying Gabor filters over them. Because the polar angle theta and polar radius R of the polar coordinates change as the targets move, we improved the optical flow approach so that it can be calculated in polar coordinates about the omnidirectional center. We constructed a Gabor filter bank with 24 orientations, one every 15°, and filter the optical flow fields at all 24 orientations. By contrasting the Gabor-filtered images at the same orientation and the same AGV position between the situation in which there are no moving targets in the environment and the situation in which there are, the moving targets' optical flow fields can be recognized. Experimental results show that the proposed approach is feasible and effective.
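The 24-orientation filter bank (one filter every 15°) can be built from the standard Gabor formula. The kernel size, sigma, and spatial frequency below are illustrative choices; a real implementation would convolve these kernels with the polar-coordinate optical flow fields rather than just construct them.

```python
# Sketch of a 24-orientation Gabor filter bank, one kernel every 15 degrees.
# Kernel size, sigma, and frequency are assumed values for illustration.

import math

def gabor_kernel(theta, size=5, sigma=1.5, freq=0.5):
    """Real part of a Gabor kernel oriented at angle theta (radians)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * freq * xr))
        kernel.append(row)
    return kernel

# One kernel per 15-degree step, 24 in total.
bank = [gabor_kernel(math.radians(15 * k)) for k in range(24)]
print(len(bank), len(bank[0]), len(bank[0][0]))  # -> 24 5 5
```

Filtering the flow field with each of the 24 kernels and comparing responses at matching orientations, with and without moving targets present, is then the contrast operation the abstract describes.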
The ADAPT project is a collaboration of researchers in robotics, linguistics and artificial intelligence at three universities to create a cognitive architecture specifically designed to be embodied in a mobile robot. There are major respects in which existing cognitive architectures are inadequate for robot cognition. In particular, they lack support for true concurrency and for active perception. ADAPT addresses these deficiencies by modeling the world as a network of concurrent schemas, and modeling perception as problem solving. Schemas are represented using the RS (Robot Schemas) language, and are activated by spreading activation. RS provides a powerful language for distributed control of concurrent processes. Also, the formal semantics of RS provides the basis for the semantics of ADAPT's use of natural language. We have implemented the RS language in Soar, a mature cognitive architecture originally developed at CMU and used at a number of universities and companies. Soar's subgoaling and learning capabilities enable ADAPT to manage the complexity of its environment and to learn new schemas from experience. We describe the issues faced in developing an embodied cognitive architecture, and our implementation choices.
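For readers unfamiliar with spreading activation, a toy version over a schema network looks like the following. The network, the decay factor, and the update rule are generic textbook choices, not ADAPT's or RS's actual mechanism.

```python
# Toy spreading activation over a schema network: each step, every schema
# passes a decayed share of its activation to its downstream neighbors.
# All values and the network itself are hypothetical.

def spread(activation, edges, decay=0.5, steps=1):
    """activation: schema -> level; edges: schema -> list of downstream schemas."""
    for _ in range(steps):
        incoming = {s: 0.0 for s in activation}
        for src, targets in edges.items():
            share = decay * activation[src] / max(len(targets), 1)
            for t in targets:
                incoming[t] += share
        activation = {s: activation[s] + incoming[s] for s in activation}
    return activation

act = {"see_obstacle": 1.0, "stop": 0.0, "replan": 0.0}
edges = {"see_obstacle": ["stop", "replan"], "stop": [], "replan": []}
print(spread(act, edges))  # activation flows from perception to action schemas
```

The point of the mechanism, in ADAPT as in this toy, is that schemas relevant to the current situation become active without any central dispatcher selecting them.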
This work has been performed in conjunction with the University of Detroit Mercy's (UDM) ECE Department autonomous vehicle entry in the 2006 Intelligent Ground Vehicle Competition (***). The IGVC challenges engineering students to design autonomous vehicles and compete in a variety of unmanned mobility competitions. The course to be traversed in the competition consists of a lane demarcated by painted lines on grass, with the possibility of one of the two lines being deliberately left out over segments of the course. The course also includes other challenging artifacts such as sandpits, ramps, potholes, and colored tarps that alter the color composition of scenes, and obstacles set up using orange and white construction barrels. This paper describes a composite lane edge detection approach that uses three algorithms to implement noise filters, enabling increased removal of noise prior to the application of image thresholding. The first algorithm uses a row-adaptive statistical filter to establish an intensity floor, followed by a global threshold based on a reverse cumulative intensity histogram and a priori knowledge about lane thickness and separation. The second method first improves the contrast of the image by implementing an arithmetic combination of the blue plane (RGB format) and a modified saturation plane (HSI format). A global threshold is then applied based on the mean of the intensity image and a user-defined offset. The third method applies the horizontal component of the Sobel mask to a modified gray scale of the image, followed by a thresholding method similar to the one used in the second method. The Hough transform is applied to each of the resulting binary images to select the most probable line candidates. Finally, a heuristics-based confidence interval is determined, and the results are sent on to a separate fuzzy polar-based navigation algorithm, which fuses the image data with that produced by a laser scanner (for obstacle detection).
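The third method's edge step can be sketched directly: convolve with the horizontal Sobel mask, then threshold at the image mean plus an offset. The kernel is the standard one; the offset value and the tiny test image are illustrative, and the paper's "modified gray scale" preprocessing is not reproduced here.

```python
# Sketch of horizontal Sobel filtering followed by mean-plus-offset
# thresholding, as in the third lane-detection method. Parameters are
# assumed values, not the paper's.

SOBEL_H = [[-1, -2, -1],
           [ 0,  0,  0],
           [ 1,  2,  1]]

def sobel_horizontal(img):
    """Return |response| of the horizontal Sobel mask on interior pixels."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(SOBEL_H[j][i] * img[y - 1 + j][x - 1 + i]
                    for j in range(3) for i in range(3))
            out[y][x] = abs(s)
    return out

def threshold_mean_offset(img, offset=10):
    """Binarize at (global mean + user-defined offset)."""
    flat = [p for row in img for p in row]
    t = sum(flat) / len(flat) + offset
    return [[1 if p > t else 0 for p in row] for row in img]

img = [[0, 0, 0], [0, 0, 0], [255, 255, 255]]  # bright horizontal lane edge
edges = sobel_horizontal(img)
print(threshold_mean_offset(edges))
```

The resulting binary image is what the Hough transform would then scan for the most probable line candidates.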
A correlator-optical imaging system with three-dimensional nano- and micro-structured diffraction gratings in aperture and in image space allows an adaptive optical correlation of local RGB data in image space with global RGB data (light from the overall illumination in the visual field, scattered from the aperture of the optical imaging system into image space), diffracted together into reciprocal grating space (photoreceptor space). This correlator-optical hardware seems to be a decisive part of the human eye and leads to new interpretations of color vision and of adaptive color constancy performance in human vision. In Part I the data available to date and their corresponding interpretation, which together explain paradoxically colored shadows as well as the data from ***'s Retinex experiments, are described. They will serve as premises for the planned experimental setup. In Part II these premises will be tested experimentally as part of a newly starting R+D project, and the results will be described in 2007.
This paper presents a self-localization strategy for a team of heterogeneous mobile robots including ground mobile robots of various sizes and wall-climbing robots. These robots are equipped with various visual sensors, such as miniature webcams, omnidirectional cameras, and PTZ cameras. As the core of this work, a formation of a four-robot team is constructed to operate in a 3D space, e.g., moving on the ground, climbing on walls, and clinging to ceilings. The four robots can dynamically localize themselves asynchronously by employing cooperative vision techniques. Three of them on the ground mutually view each other and determine their relative poses with 6 degrees of freedom (DOFs). A wall-climbing robot, which significantly extends the workspace of the robot team to 3D, is at a vantage point (e.g., on the ceiling) such that it can see all three teammates, thus determining its own location and orientation. The four-robot formation theory and algorithms are presented, and experimental results with both simulated and real image data are provided to demonstrate the feasibility of this formation. Two 3D localization and control strategies are designed for applications such as search and rescue and surveillance in 3D urban environments where robots must be deployed in a full 3D space.
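One building block behind mutual-observation localization is pose composition: if robot A's pose in the world and robot B's pose relative to A are both known, B's world pose follows by composing the two transforms. The paper works with full 6-DOF poses; the planar (x, y, heading) version below is a deliberate simplification for illustration.

```python
# Simplified 2D pose composition, a building block of cooperative
# localization. The paper's method is 6-DOF; this planar version is
# an illustrative assumption.

import math

def compose(pose_a, rel_ab):
    """pose_a: A's (x, y, theta) in the world; rel_ab: B's pose in A's frame.
    Returns B's (x, y, theta) in the world frame."""
    xa, ya, tha = pose_a
    xr, yr, thr = rel_ab
    xb = xa + xr * math.cos(tha) - yr * math.sin(tha)
    yb = ya + xr * math.sin(tha) + yr * math.cos(tha)
    return (xb, yb, (tha + thr) % (2 * math.pi))

robot_a = (1.0, 2.0, math.pi / 2)   # A at (1, 2), facing +y
b_in_a = (2.0, 0.0, 0.0)            # B observed 2 m straight ahead of A
print(compose(robot_a, b_in_a))     # B is near (1, 4), also facing +y
```

In the four-robot formation, chains of such compositions (ground robots viewing each other, the ceiling robot viewing all three) tie every robot's pose into one common frame.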