ISBN:
(print) 9780819494351
This paper introduces a novel image description technique aimed at appearance-based loop closure detection for mobile robotics applications. The technique relies on the local evaluation of Zernike moments. Binary patterns, referred to as Local Zernike Moment (LZM) patterns, are extracted from images and encoded as histograms. Each image is represented by a set of histograms, and loop closure is achieved by simply comparing the most recent image with the images along the past trajectory. The technique has been tested on the New College dataset and, to the best of our knowledge, outperforms existing methods in terms of computational efficiency and loop closure precision.
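The histogram-comparison step described above can be sketched as follows. This is an illustrative sketch only, assuming descriptors are already sets of normalized per-region histograms; the LZM pattern extraction itself is not reproduced, and all function names and the threshold value are hypothetical, not from the paper.

```python
# Sketch of loop-closure detection by comparing the current image's
# histogram set against those of past images. Histograms are assumed
# normalized; the 0.85 threshold is an illustrative assumption.

def hist_intersection(h1, h2):
    """Similarity of two normalized histograms, in [0, 1]."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def image_similarity(desc1, desc2):
    """Average intersection over the per-region histogram sets."""
    scores = [hist_intersection(h1, h2) for h1, h2 in zip(desc1, desc2)]
    return sum(scores) / len(scores)

def detect_loop_closure(current, past_descriptors, threshold=0.85):
    """Return the index of the best-matching past image, or None."""
    best_idx, best_score = None, threshold
    for i, past in enumerate(past_descriptors):
        score = image_similarity(current, past)
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

Because each image reduces to a small set of histograms, comparing the newest image against the whole past trajectory stays cheap, which is consistent with the efficiency claim above.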
The Intelligent Ground Vehicle Competition (IGVC) is one of four unmanned-systems student competitions founded by the Association for Unmanned Vehicle Systems International (AUVSI). The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics, and mobile platform fundamentals to design and build an unmanned system. Teams from around the world focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 20 years, the competition has challenged undergraduate, graduate, and Ph.D. students with real-world applications in intelligent transportation systems, the military, and manufacturing automation. To date, teams from over 80 universities and colleges have participated. This paper describes some of the applications of the technologies required by this competition and discusses the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the four-day competition are highlighted. Finally, an assessment of the competition based on participation is presented.
The main purpose of this paper is to use a machine learning method together with the Kinect and its body-sensing technology to design a simple, convenient, yet effective robot remote control system. In this study, a Kinect sensor is used to capture the human body skeleton with depth information, and a gesture training and identification method is designed using a back-propagation neural network to remotely command a mobile robot to perform certain actions via Bluetooth. The experimental results show that the designed mobile robot remote control system achieves, on average, more than 96% identification accuracy over 7 types of gestures and can effectively control a real e-puck robot with the designed commands.
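The identification stage of such a system reduces to a forward pass through the trained network, mapping a skeleton-derived feature vector to a gesture class. The sketch below assumes a toy one-hidden-layer network with hand-picked weights; the paper's actual topology, features, and trained weights are not reproduced, and all names are illustrative.

```python
import math

# Sketch of the classification (forward) pass of a small
# back-propagation-trained network: sigmoid hidden layer, linear
# output layer, argmax over output scores gives the gesture class.
# W1/b1 and W2/b2 are hypothetical weight matrices and biases.

def forward(x, W1, b1, W2, b2):
    # Hidden layer with sigmoid activation.
    h = [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
         for row, b in zip(W1, b1)]
    # Output layer: raw scores; the argmax is the predicted gesture index.
    scores = [sum(w * hi for w, hi in zip(row, h)) + b
              for row, b in zip(W2, b2)]
    return scores.index(max(scores))
```

In deployment, the winning class index would be mapped to a robot command and sent over the Bluetooth link.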
This paper presents the Mobile Intelligence Team's approach to the CANINE outdoor ground robot competition. The competition required developing a robot that provided retrieving capabilities similar to a dog's, while operating fully autonomously in unstructured environments. The vision team consisted of Mobile Intelligence, the Georgia Institute of Technology, and Wayne State University. Important computer vision aspects of the project were quickly learning the distinguishing characteristics of novel objects, searching images for the object as the robot drove a search pattern, identifying people near the robot for safe operation, correctly identifying the object among distractors, and localizing the object for retrieval. The classifier used to identify the objects is discussed, including an analysis of its performance, and an overview of the entire system architecture is presented. A discussion of the robot's performance in the competition demonstrates the system's successes in real-world testing.
With the growing application of visual tracking technology, the performance of visual tracking algorithms has become important. Because many kinds of noise are present, tracking algorithms often lack robustness. To improve the identification and tracking rates for quickly moving targets, expand the tracking range, and lower sensitivity to illumination changes, an active visual tracking system based on illumination invariants is proposed. A camera motion pre-control method based on particle-filter pre-location improves the agility and accuracy of tracking for quickly moving targets by forecasting the target position and controlling the camera's pan, tilt, and zoom. The pre-location method, which uses a particle filter driven by the target's illumination invariants, reduces the effect of varying illumination while tracking a moving target and improves the algorithm's robustness. Experiments in an intelligent space show that robustness to illumination variation and tracking accuracy are improved by actively adjusting the PTZ parameters.
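The prediction step that lets the camera be pre-pointed ahead of a fast target can be sketched with a constant-velocity particle motion model. This is a minimal illustration, assuming a particle state of (x, y, vx, vy, weight); the illumination-invariant likelihood used to weight particles in the paper is not reproduced, and all names are hypothetical.

```python
import random

# Sketch of particle-filter pre-location: propagate particles one step
# with a constant-velocity model, then take the weighted mean as the
# forecast target position used to pre-control the PTZ camera.

def predict_target_position(particles, dt=0.1, noise=1.0, rng=None):
    """One constant-velocity prediction step over (x, y, vx, vy, w) tuples."""
    rng = rng or random.Random(0)
    moved = []
    for x, y, vx, vy, w in particles:
        moved.append((x + vx * dt + rng.gauss(0.0, noise),
                      y + vy * dt + rng.gauss(0.0, noise),
                      vx, vy, w))
    return moved

def weighted_estimate(particles):
    """Weighted mean of particle positions: the forecast target position."""
    total = sum(p[4] for p in particles)
    x = sum(p[0] * p[4] for p in particles) / total
    y = sum(p[1] * p[4] for p in particles) / total
    return x, y
```

Pointing the pan/tilt axes at the forecast position, rather than the last observed one, is what keeps a fast-moving target inside the field of view.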
In this paper we propose to use gesture recognition approaches to track a human hand in 3D space and, without the use of special clothing or markers, accurately generate code for training an industrial robot to perform the same motion. The proposed hand tracking component includes three methods: a color-thresholding model, naive Bayes analysis, and a Support Vector Machine (SVM) to detect the human hand. Next, it performs stereo matching on the region where the hand was detected to find relative 3D coordinates. The list of coordinates returned is expectedly noisy because the human hand can alter its apparent shape while moving, human motion is inconsistent, and detection can fail in a cluttered environment. Therefore, the system analyzes the list of coordinates to determine a path for the robot to move, smoothing the data to reduce noise and looking for significant points that determine the path the robot will ultimately take. The proposed system was applied to pairs of videos recording the motion of a human hand in a real environment, moving the end-effector of a SCARA robot along the same path as the hand of the person in the video. The correctness of the robot motion was judged by observers, who indicated that the motion of the robot appeared to match the motion in the video.
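The smoothing and significant-point steps described above can be sketched simply: a moving-average filter to suppress jitter, then a distance-threshold pass to keep only waypoints worth sending to the robot. This is an assumed minimal realization, not the paper's exact algorithm; the window size and distance threshold are illustrative.

```python
# Sketch of path post-processing for noisy tracked 3D hand coordinates.

def smooth(path, window=3):
    """Moving-average filter over a list of (x, y, z) tuples."""
    half = window // 2
    out = []
    for i in range(len(path)):
        lo, hi = max(0, i - half), min(len(path), i + half + 1)
        seg = path[lo:hi]
        out.append(tuple(sum(p[k] for p in seg) / len(seg) for k in range(3)))
    return out

def significant_points(path, min_dist=1.0):
    """Keep only points far enough from the last kept one; the kept
    points become the waypoints of the robot's motion path."""
    kept = [path[0]]
    for p in path[1:]:
        if sum((a - b) ** 2 for a, b in zip(p, kept[-1])) ** 0.5 >= min_dist:
            kept.append(p)
    return kept
```

The surviving waypoints would then be translated into motion commands for the robot controller.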
Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system, built from a specially combined fish-eye lens module, that is capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360 degrees x 360 degrees panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose the geometric model, mathematical model, and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple-maneuvering-target tracking, automatic environment mapping, and attitude estimation are some of the applications that will benefit from PSSV.
As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable iRobot PackBot unmanned ground vehicle (UGV), equipped with an explosive ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies, including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
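Of the coverage strategies listed above, the outward spiral is the easiest to make concrete: an Archimedean spiral whose radius grows by one step per turn generates the waypoint sequence for the search. This is a hedged sketch, assuming the waypoint-navigation layer consumes (x, y) goals; the function name and parameters are illustrative, not from the paper.

```python
import math

# Sketch of outward-spiral waypoint generation for area search.
# step: radial growth per full turn; pts_per_turn: waypoint density.

def outward_spiral_waypoints(cx, cy, step=1.0, turns=3, pts_per_turn=8):
    """Archimedean spiral waypoints starting at the search center (cx, cy)."""
    wps = []
    for i in range(turns * pts_per_turn + 1):
        theta = 2 * math.pi * i / pts_per_turn
        r = step * theta / (2 * math.pi)  # radius grows one step per turn
        wps.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
    return wps
```

Each waypoint would be handed to the fused GPS/IMU/odometry navigator in turn, with the spiral spacing chosen to match the sensor footprint so the area is swept without gaps.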
We present the development of a multi-stage automatic target recognition (MS-ATR) system for computer vision in robotics. This paper discusses our work in optimizing the feature selection strategies of the MS-ATR system. Past implementations have utilized Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filtering as an initial feature selection method and principal component analysis (PCA) as a feature extraction strategy before the classification stage. Recent work has implemented a modified saliency algorithm as a feature selection method. Saliency is typically implemented as a "bottom-up" search process that uses visual sensory information such as color, intensity, and orientation to detect salient points in the imagery; as a general saliency mapping algorithm, it receives no input from the user on what is considered salient. We discuss here a modified saliency algorithm that accepts the guidance of target features in locating regions of interest (ROIs). By introducing target-related input parameters, saliency becomes more focused and task-oriented. It serves as an initial stage for fast ROI detection; the ROIs are passed to later stages for feature extraction and target identification.
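The idea of target-guided saliency can be sketched as a weighted combination of bottom-up feature maps, where the weights encode known target characteristics (e.g., up-weighting the color channel for a strongly colored target). This is an illustrative simplification of the approach described above, with hypothetical names; the paper's actual feature maps and weighting scheme are not reproduced.

```python
# Sketch of target-guided saliency: combine per-feature maps
# (color, intensity, orientation, ...) with target-derived weights,
# then seed the ROI at the most salient location.

def guided_saliency(feature_maps, target_weights):
    """Weighted sum of same-sized 2-D feature maps (lists of lists)."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    sal = [[0.0] * w for _ in range(h)]
    for fmap, wt in zip(feature_maps, target_weights):
        for y in range(h):
            for x in range(w):
                sal[y][x] += wt * fmap[y][x]
    return sal

def top_roi(sal):
    """Return (row, col) of the most salient point as the ROI seed."""
    best = max((v, y, x) for y, row in enumerate(sal) for x, v in enumerate(row))
    return best[1], best[2]
```

With uniform weights this degenerates to ordinary bottom-up saliency; the target-related weights are what make the search task-oriented.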
This paper presents the analysis and derivation of the geometric relation between vanishing points and the camera parameters of central catadioptric camera systems. These vanishing points correspond to the three mutually orthogonal directions of the 3D real-world coordinate system (i.e., the X, Y, and Z axes). Compared with vanishing points (VPs) under perspective projection, VPs under central catadioptric projection have two advantages: there are normally two vanishing points for each set of parallel lines, since lines are projected to conics in the catadioptric image plane, and these vanishing points are usually located inside the image frame. We show that knowledge of the VPs corresponding to the XYZ axes from a single image leads to a simple derivation of both the intrinsic and extrinsic parameters of the central catadioptric system. This novel theory is demonstrated and tested on both synthetic and real data with respect to noise sensitivity.
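For intuition on how vanishing points constrain intrinsics, the standard perspective-camera analogue can be sketched: with square pixels and a known principal point, two VPs of mutually orthogonal directions determine the focal length via (v1 - p) . (v2 - p) = -f^2. This is only the perspective analogue, not the paper's catadioptric derivation, and all names are illustrative.

```python
import math

# Sketch of the perspective-projection constraint: for vanishing points
# v1, v2 of orthogonal 3D directions and principal point p, the dot
# product (v1 - p) . (v2 - p) equals -f^2, which yields the focal length.

def focal_from_orthogonal_vps(v1, v2, principal=(0.0, 0.0)):
    d = ((v1[0] - principal[0]) * (v2[0] - principal[0])
         + (v1[1] - principal[1]) * (v2[1] - principal[1]))
    if d >= 0:
        raise ValueError("VPs inconsistent with orthogonal directions")
    return math.sqrt(-d)
```

In the catadioptric case the derivation is more involved because lines project to conics and each parallel set yields two VPs, but the underlying idea, orthogonality constraints linking VPs to intrinsics, is the same.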