The purpose of this paper is to describe the design, development and simulation of a real-time controller for an intelligent, vision-guided robot. The use of a creative controller that can select its own tasks is demonstrated. This creative controller uses a task control center and a dynamic database. The dynamic database stores both global environmental information and local information, including the kinematic and dynamic models of the intelligent robot. The kinematic model is very useful for position control and simulations. However, models of the dynamics of the manipulators are needed for tracking control of the robot's motions. Such models are also necessary for sizing the actuators, tuning the controller, and achieving superior performance. Simulations of various control designs are shown. Much of the model has also been used for the actual Bearcat Cub prototype mobile robot. This vision-guided robot was designed for the Intelligent Ground Vehicle Competition. A novel feature of the proposed approach is that the method is applicable both to robot arm manipulators and to robot bases such as wheeled mobile robots. This generality should encourage the development of more mobile robots with manipulator capability, since both models can easily be stored in the dynamic database. The multi-task controller also permits a wide range of applications. Manipulators and mobile bases with high-level control are potentially useful for space exploration, certain rescue robots, defense robots, and medical robotics aids.
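The abstract leaves the data structures unspecified; as a minimal sketch of the idea, assuming a Python implementation with hypothetical class and field names, a dynamic database holding both arm and mobile-base models for a task-selecting control center could look like this:

```python
# Minimal sketch of a "dynamic database" shared by a task control center.
# All names are hypothetical; the paper's actual data structures are not given.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class RobotModel:
    name: str
    forward_kinematics: Callable   # joint/wheel state -> pose (for position control)
    dynamics: Callable             # (state, input) -> state derivative (for tracking)

@dataclass
class DynamicDatabase:
    models: Dict[str, RobotModel] = field(default_factory=dict)
    environment: Dict[str, object] = field(default_factory=dict)  # global information
    local_state: Dict[str, object] = field(default_factory=dict)  # local information

    def register(self, model: RobotModel) -> None:
        self.models[model.name] = model

class TaskControlCenter:
    """Selects the next task from the current contents of the database."""
    def __init__(self, db: DynamicDatabase):
        self.db = db

    def select_task(self) -> str:
        # Placeholder rule: follow a detected line, otherwise explore.
        return "follow_line" if self.db.environment.get("line_visible") else "explore"
```

Because an arm model and a wheeled-base model are registered through the same interface, the same controller code can serve both, which is the generality the abstract emphasizes.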
The Intelligent Ground Vehicle Competition (IGVC) is one of three unmanned-systems student competitions that were founded by the Association for Unmanned Vehicle Systems International (AUVSI) in the 1990s. The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics, and mobile platform fundamentals to design and build an unmanned system. Teams from around the world focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 14 years, the competition has challenged undergraduate, graduate and Ph.D. students with real-world applications in intelligent transportation systems, the military and manufacturing automation. To date, teams from over 50 universities and colleges have participated. This paper describes some of the applications of the technologies required by this competition and discusses the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the three-day competition are highlighted. Finally, an assessment of the competition based on participant feedback is presented.
In this paper an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling daughter board (SAIL), attached to the Helios board, collects attitude and heading information to be processed in order to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, a compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has proven to give very good results in its performance of a number of real-time robotic vision algorithms.
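The control law itself is not described in the abstract; purely as an illustration of how tilt-sensor attitude data of the kind the SAIL board provides could drive a leveling command, a proportional-derivative sketch is shown below. Gains, units, and the motor-mixing stage are assumptions, not taken from the paper.

```python
# Illustrative PD leveling step driven by attitude readings.
# Gains, scaling, and motor mixing are assumptions, not the paper's design.
def level_step(roll, pitch, roll_rate, pitch_rate, kp=0.8, kd=0.15):
    """Return (roll_cmd, pitch_cmd) corrections that drive the craft toward level."""
    roll_cmd = -kp * roll - kd * roll_rate
    pitch_cmd = -kp * pitch - kd * pitch_rate
    return roll_cmd, pitch_cmd

# Example: 5 degrees of right roll, still increasing -> a leftward roll command.
cmd = level_step(roll=5.0, pitch=0.0, roll_rate=1.0, pitch_rate=0.0)
```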
Through the Vision for Space Exploration (henceforth, the Vision) announced by George W. Bush in February 2004, NASA has been chartered to conduct progressively staged human-robotic (H/R) exploration of the Solar System. This exploration includes autonomous robotic precursors that will pave the way for a later durable H/R presence, first at the Moon, then Mars and beyond. We discuss operations architectures and integrative technologies that are expected to enable these new classes of space missions, with an emphasis on open design issues and R&D challenges.
We present an approach for abstracting invariant classifications of spatiotemporal patterns presented in a high-dimensionality input stream, and apply an early proof-of-concept to shift- and scale-invariant shape recognition. A model called the Hierarchical Quilted Self-Organizing Map (HQSOM) is developed, using recurrent self-organizing maps (RSOMs) arranged in a pyramidal hierarchy, attempting to mimic the parallel/hierarchical pattern of isocortical processing in the brain. The results of experiments are presented in which the algorithm learns to classify multiple shapes, invariant to shift and scale transformations, in a very small (7 x 7 pixel) field of view.
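For readers unfamiliar with the recurrent SOM building block, a minimal single-layer sketch is given below, assuming the standard RSOM update with a leaky difference vector; the layer size, learning rates, and 1-D lattice are illustrative choices, not the paper's settings.

```python
# Sketch of one recurrent self-organizing map (RSOM) layer, the unit that HQSOM
# stacks into a pyramidal hierarchy.  Parameters are illustrative only.
import numpy as np

class RSOM:
    def __init__(self, n_units, dim, alpha=0.3, lr=0.1, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(n_units, dim))   # codebook vectors
        self.y = np.zeros((n_units, dim))          # leaky difference vectors (memory)
        self.alpha, self.lr, self.sigma = alpha, lr, sigma
        self.pos = np.arange(n_units)              # positions on a 1-D lattice

    def step(self, x):
        # Leaky integration of the input/codebook difference gives temporal context.
        self.y = (1.0 - self.alpha) * self.y + self.alpha * (x - self.w)
        bmu = int(np.argmin(np.linalg.norm(self.y, axis=1)))      # best-matching unit
        h = np.exp(-(self.pos - self.pos[bmu]) ** 2 / (2 * self.sigma ** 2))
        self.w += self.lr * h[:, None] * self.y    # pull codebooks along y
        return bmu                                 # winner index passed up the hierarchy

# Usage: stream 49-dimensional patches from a 7 x 7 field of view.
layer = RSOM(n_units=16, dim=49)
for x in np.random.rand(100, 49):
    winner = layer.step(x)
```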
In this research, a new algorithm for fruit shape classification was proposed. Level set representations based on signed distance transforms were used, which are a simple, robust, rich and efficient way to represent shapes. Based on these representations, a rigid transform was adopted to align shapes within the same class, using the simplest possible criterion, the sum of squared differences. After the alignment procedure, the average shape representation of each class can easily be derived, and shape classification was performed by the nearest-neighbor method. Promising results were obtained in experiments, showing the efficiency and accuracy of our algorithm.
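As a small sketch of the representation and matching steps described above (using SciPy's Euclidean distance transform; the rigid-alignment search is omitted and function names are illustrative):

```python
# Signed distance (level set) representation of a binary fruit silhouette, the
# SSD criterion, and nearest-neighbor classification against class averages.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """mask: boolean array, True inside the silhouette.  Negative inside, positive outside."""
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

def ssd(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of squared differences, the alignment/matching criterion."""
    return float(np.sum((a - b) ** 2))

def classify(query_sdf: np.ndarray, class_means: dict) -> str:
    """Nearest neighbor over per-class average signed distance maps."""
    return min(class_means, key=lambda c: ssd(query_sdf, class_means[c]))
```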
Three-dimensional visual recognition and measurement are important in many machine vision applications. In some cases, a stationary camera base is used and a three-dimensional model will permit the measurement of depth information from a scene. One important special case is stereo vision for human visualization or measurement. In cases in which the camera base is also in motion, a seven-dimensional model may be used. Such is the case for navigation of an autonomous mobile robot. The purpose of this paper is to provide a computational view of, and introduction to, three methods for three-dimensional vision. Models are presented for each situation, and example computations and images are given. The significance of this work is that it shows that various methods based on three-dimensional vision may be used for solving two- and three-dimensional vision problems. We hope this work will be slightly iconoclastic but also inspirational, encouraging further research in optical engineering.
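For the stereo case mentioned above, the standard rectified-stereo relation Z = f * B / d is the kind of depth computation involved; the short sketch below shows it with illustrative variable names (the paper's own models are not reproduced here).

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite-depth point")
    return focal_px * baseline_m / disparity_px

# Example: focal length 700 px, baseline 0.12 m, disparity 35 px -> depth 2.4 m.
depth = stereo_depth(700.0, 0.12, 35.0)
```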
This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. This implementation is a second-year entry by Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course constructed of white boundary lines and orange obstacles for the autonomous competition. A used electric wheelchair, purchased from a local thrift store for $28, served as the robot base. The base was modified to include Kegresse tracks using a friction drum system. This modification allowed the robot to perform better on a variety of terrains, resolving issues with last year's design. In order to control the wheelchair while retaining its robust motor controls, the joystick was simply removed and replaced with a printed circuit board that emulated joystick operation and was capable of receiving commands through a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential fields approach, and a machine learning approach. Each of the algorithms used color segmentation methods to interpret data from a digital camera in order to identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.
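Of the three algorithms compared, the potential fields approach is the most readily sketched; the fragment below illustrates the idea under assumed gains and an assumed pixel-to-ground mapping (obstacle positions are taken to be already converted from segmented image features to robot coordinates).

```python
# Potential-fields steering sketch: a goal direction pulls the robot forward,
# segmented obstacle/boundary points push it away.  Gains are hypothetical.
import numpy as np

def potential_field_heading(obstacle_points, goal_vec=np.array([1.0, 0.0]),
                            k_rep=0.5, k_att=1.0):
    """obstacle_points: Nx2 array in robot coordinates (x forward, y left)."""
    force = k_att * goal_vec / np.linalg.norm(goal_vec)   # attractive pull toward the goal
    for p in np.atleast_2d(obstacle_points):
        d = np.linalg.norm(p)
        if d > 1e-6:
            force = force - k_rep * (p / d) / d**2        # repulsion falls off as 1/d^2
    return float(np.arctan2(force[1], force[0]))          # steering angle in radians

# Example: an obstacle 1 m ahead and slightly to the left -> steer slightly right.
angle = potential_field_heading(np.array([[1.0, 0.2]]))
```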
In a machine vision-based guidance system, a camera must be calibrated precisely to calculate the position of the vehicle; however, it is not easy to obtain the intrinsic and extrinsic parameters of the camera. Neural networks, in contrast, have the advantage of being able to establish a mapping for a nonlinear system. We used a CMAC neural network to construct two mapping relationships: from image coordinates to the lateral offset of the vehicle, and from image coordinates to the heading angle of the vehicle. The network inputs were the image coordinates of the top and bottom points of the detected guidance line; the outputs were the offset and heading angle. Verification results show that the RMS of the inferred offset is 10.5 mm with a standard deviation of 11.3 mm, and the RMS of the inferred heading is 1.1 degrees with a standard deviation of 0.99 degrees.
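The paper's CMAC configuration is not given in the abstract; the sketch below shows a generic tile-coding CMAC regressor for the stated mapping from the two guidance-line image points to an offset or heading, with tiling counts, coordinate ranges, and learning rate chosen arbitrarily.

```python
# Minimal CMAC (tile-coding) regressor sketch.  Tiling sizes, input ranges, and
# training details are assumptions, not the paper's settings.
import numpy as np

class CMAC:
    def __init__(self, n_tilings=8, bins=16, dim=4, lo=0.0, hi=640.0, lr=0.1):
        self.n_tilings, self.bins, self.dim, self.lr = n_tilings, bins, dim, lr
        self.lo, self.hi = lo, hi
        self.w = np.zeros((n_tilings,) + (bins,) * dim)   # one weight grid per tiling

    def _cells(self, x):
        x = (np.asarray(x, float) - self.lo) / (self.hi - self.lo)  # normalize to [0, 1]
        for t in range(self.n_tilings):
            offset = t / (self.n_tilings * self.bins)               # shifted tilings
            idx = np.clip(((x + offset) * self.bins).astype(int), 0, self.bins - 1)
            yield (t, *idx)

    def predict(self, x):
        return sum(self.w[c] for c in self._cells(x))

    def train(self, x, target):
        err = target - self.predict(x)
        for c in self._cells(x):
            self.w[c] += self.lr * err / self.n_tilings             # spread the correction

# Example: learn an offset (mm) from [x_top, y_top, x_bottom, y_bottom] image points.
net = CMAC()
net.train([320, 40, 300, 460], target=12.0)
```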