ISBN (Print): 9781628414967
A fish-eye lens is a short-focal-length (f = 6 to 16 mm) lens whose field of view (FOV) approaches or even exceeds 180 x 180 degrees. Many studies have shown that a multiple-view geometry system built with fish-eye lenses obtains a larger stereo field than a traditional stereo vision system based on a pair of perspective-projection images. Since a fish-eye camera usually has a wider-than-hemispherical FOV, most image processing approaches based on the pinhole camera model of conventional stereo vision are ill-suited to this category of stereo vision built with fish-eye lenses. This paper discusses the calibration and epipolar rectification method for a novel machine vision system composed of four fish-eye lenses, called the Special Stereo Vision System (SSVS). The distinguishing feature of SSVS is that it can produce 3D coordinate information for the whole surrounding observation space while simultaneously acquiring a 360 x 360 degree panoramic image with no blind area, using a single vision device and a single static shot. Parameter calibration and epipolar rectification are the basis on which SSVS realizes 3D reconstruction and panoramic image generation.
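The abstract does not give the SSVS projection model. As an illustrative sketch only, the widely used equidistant fish-eye model (image radius r = f * theta) can be inverted to map image points to longitude/latitude angles, a common rectification that turns fish-eye epipolar curves into straight scanlines. The function names and the default focal length below are hypothetical, not taken from the paper:

```python
import numpy as np

def fisheye_project(theta, f=8.0):
    # Equidistant fish-eye model: image radius grows linearly with
    # the incidence angle theta (radians), r = f * theta.
    return f * theta

def fisheye_unproject(r, f=8.0):
    # Inverse mapping: recover the incidence angle from the radius.
    return r / f

def rectify_to_longitude_latitude(u, v, f=8.0):
    # Map a fish-eye image point (u, v), centred on the principal
    # point, to longitude/latitude angles so that epipolar curves
    # become straight scanlines.
    r = np.hypot(u, v)
    theta = fisheye_unproject(r, f)   # angle from the optical axis
    phi = np.arctan2(v, u)            # azimuth around the axis
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    lon = np.arctan2(x, z)
    lat = np.arcsin(y)
    return lon, lat
```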
ISBN (Print): 9781628414967
This paper presents improvements made to the intelligence algorithms employed on Q, an autonomous ground vehicle, for the 2014 Intelligent Ground Vehicle Competition (IGVC). In 2012, the IGVC committee combined the formerly separate autonomous and navigation challenges into a single AUT-NAV challenge. In this new challenge, the vehicle is required to navigate through a grassy obstacle course and stay within the course boundaries (a lane of two white painted lines) that guide it toward a given GPS waypoint. Once the vehicle reaches this waypoint, it enters an open course where it is required to navigate to another GPS waypoint while avoiding obstacles. After reaching the final waypoint, the vehicle is required to traverse another obstacle course before completing the run. Q uses a modular, parallel software architecture in which image processing, navigation, and sensor control algorithms run concurrently. A tuned navigation algorithm allows Q to maneuver smoothly through obstacle fields. For the 2014 competition, most revisions occurred in the vision system, which detects white lines and informs the navigation component. Barrel obstacles of various colors presented a new challenge for image processing: the previous color plane extraction algorithm no longer sufficed. To overcome this difficulty, laser range sensor data were overlaid on the visual data. Q also participates in the Joint Architecture for Unmanned Systems (JAUS) challenge at IGVC. For 2014, significant updates were implemented: the JAUS component accepted a greater variety of messages and showed better compliance with the JAUS technical standard. With these improvements, Q secured second place in the JAUS competition.
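The paper does not detail how the laser range data were overlaid on the visual data. A minimal sketch, assuming a planar laser scanner and a pinhole camera with hypothetical intrinsics fx and cx, might project each laser return into an image column like this:

```python
import numpy as np

def project_laser_to_image(ranges, angles, fx=500.0, cx=320.0,
                           img_width=640):
    # Project planar laser returns (range, bearing) into image
    # columns with a pinhole model, so range data can be overlaid
    # on the camera view. Points behind the camera are dropped.
    xs = ranges * np.sin(angles)   # lateral offset (m)
    zs = ranges * np.cos(angles)   # forward distance (m)
    valid = zs > 0.1
    cols = fx * xs[valid] / zs[valid] + cx
    in_view = (cols >= 0) & (cols < img_width)
    return cols[in_view], zs[valid][in_view]
```

A return straight ahead of the camera lands on the principal-point column, independent of its range.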
Obstacle detection and localization behaviours have been shown to work robustly in 2D perceived environments. With progress in proximity sensors, particularly the emergence of 3D vision techniques, it is worth examining whether these motion controllers also apply to 3D perceived environments. Basic 2D algorithms for obstacle avoidance or robot localization need reformulation in order to process data from 3D perception devices. In this work, we introduce a 3D obstacle detection controller combined with particle filter localization. The obstacle detection controller described in this paper addresses 3D obstacles of different shapes, and the particle filter uses 3D environment data to correct the wheelchair localization based on its kinematic model.
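As a rough illustration of the combination described above (not the authors' implementation), a particle filter for a wheelchair-like platform propagates pose particles with a kinematic model and reweights them against range observations derived from the 3D data. All names and noise values below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, v, w, dt, noise=(0.02, 0.01)):
    # Propagate (x, y, heading) particles with a differential-drive
    # (wheelchair-like) kinematic model plus process noise.
    n = len(particles)
    v_n = v + rng.normal(0, noise[0], n)
    w_n = w + rng.normal(0, noise[1], n)
    particles[:, 2] += w_n * dt
    particles[:, 0] += v_n * dt * np.cos(particles[:, 2])
    particles[:, 1] += v_n * dt * np.sin(particles[:, 2])
    return particles

def update_weights(particles, expected_range, measured_range, sigma=0.2):
    # Weight each particle by how well its predicted range to a 3D
    # landmark explains the measured range (Gaussian likelihood),
    # then normalize so the weights sum to one.
    err = expected_range(particles) - measured_range
    w = np.exp(-0.5 * (err / sigma) ** 2)
    return w / w.sum()
```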
ISBN (Print): 9781628414967
In this paper, we present an enhanced loop closure method based on image-to-image matching that relies on quantized local Zernike moments. In contrast to previous methods, our approach uses additional depth information to extract Zernike moments in a local manner. These moments represent holistic shape information within the image. The moments in complex space, extracted from both grayscale and depth images, are coarsely quantized. To determine the similarity between two locations, a nearest neighbour (NN) classification algorithm is applied. Exemplary results and a practical implementation of the method are given using data gathered on a testbed with a Kinect. The method is evaluated on three datasets with different lighting conditions. Combining the depth information with the intensity image increases the detection rate, especially in dark environments. The results demonstrate a successful, high-fidelity online method for visual place recognition and for closing navigation loops, which is crucial information for the well-known simultaneous localization and mapping (SLAM) problem. The technique is also practical owing to its low computational complexity and its ability to run in real time with high loop-closing accuracy.
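As an illustrative sketch of the coarse quantization and NN matching steps only (the Zernike moment extraction itself is omitted), hypothetical helper functions might look like this:

```python
import numpy as np

def quantize(moments, levels=4):
    # Coarsely quantize complex moment magnitudes into a few integer
    # bins; coarse codes are robust to small appearance changes.
    mags = np.abs(moments)
    scale = mags.max() + 1e-9
    return np.floor(mags / scale * (levels - 1e-9)).astype(int)

def nearest_place(query, database):
    # 1-NN search over quantized descriptors (L1 distance): the
    # closest stored location is the loop-closure candidate.
    dists = [np.abs(query - d).sum() for d in database]
    return int(np.argmin(dists)), min(dists)
```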
Underwater environments present a considerable challenge for computer vision, since water is a scattering medium with substantial light absorption, made even more severe by turbidity. This poses significant problems for visual underwater navigation, object detection, tracking, and recognition. Previous works tackle the problem by using unreliable priors or expensive and complex devices. This paper adopts a physical underwater light attenuation model, which is used to enhance image quality and enable the application of traditional computer vision techniques to images acquired from underwater scenes. The proposed method simultaneously estimates the attenuation parameter of the medium and the depth map of the scene to compute the image irradiance, thus reducing the effect of the medium on the images. Our approach is based on a novel optical flow method capable of dealing with scattering media and on a new technique that robustly estimates the medium parameters. Combined with structure-from-motion techniques, the depth map is estimated and a model-based restoration is performed. The method was tested with both simulated and real image sequences. The experimental images were acquired with a camera mounted on a Remotely Operated Vehicle (ROV) navigating in naturally lit, shallow seawater. The results show that the proposed technique allows substantial restoration of the images, improving the ability to identify and match features, which in turn is an essential step for other computer vision algorithms such as object detection and tracking, and for autonomous navigation.
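The restoration step can be illustrated with a simple exponential (Beer-Lambert-style) attenuation model. This sketch assumes the attenuation coefficient and depth map have already been estimated; the paper's actual estimation procedure is not reproduced here:

```python
import numpy as np

def restore_irradiance(image, depth, attenuation):
    # Invert an exponential attenuation model of the medium:
    # observed = irradiance * exp(-c * d), so the restored image is
    # observed * exp(c * d), where c is the attenuation coefficient
    # and d the per-pixel scene depth along the optical path.
    return image * np.exp(attenuation * depth)
```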
We propose a novel approach for real-time object pose detection and tracking that is highly scalable in terms of the number of objects tracked and the number of cameras observing the scene. Key to this scalability is a high degree of parallelism in the algorithms employed. The method maintains a single 3D simulated model of the scene consisting of multiple objects together with a robot operating on them. This allows for rapid synthesis of appearance, depth, and occlusion information from each camera viewpoint. This information is used both for updating the pose estimates and for extracting the low-level visual cues. The visual cues obtained from each camera are efficiently fused back into the single consistent scene representation using a constrained optimization method. The centralized scene representation, together with the reliability measures it enables, simplifies the interaction between pose tracking and pose detection across multiple cameras. We demonstrate the robustness of our approach in a realistic manipulation scenario. We publicly release this work as part of a general ROS software framework for real-time pose estimation, SimTrack, which can be integrated easily into different robotic applications.
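As a simplified stand-in for the constrained optimization used to fuse per-camera cues (this is not the SimTrack implementation), a reliability-weighted average of per-camera translation estimates illustrates the basic idea of folding multiple views into one scene state:

```python
import numpy as np

def fuse_translations(estimates, weights):
    # Fuse per-camera translation estimates of the same object into
    # the single scene representation as a reliability-weighted
    # average. 'weights' play the role of per-camera reliability
    # measures; they are normalized before use.
    est = np.asarray(estimates, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return (est * w[:, None]).sum(axis=0)
```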
This paper presents a novel approach to modeling the dynamics of human movements with a grid-based representation. The model we propose, termed the Multi-scale Conditional Transition Map (MCTMap), is an inhomogeneous HMM process that describes transitions of the human location state in space and time. Unlike existing work, our method captures both local correlations and long-term dependencies on faraway initiating events. This enables the learned model to incorporate more information and to generate an informative representation of human presence probabilities across the grid map and along the temporal axis for intelligent interaction of the robot, such as avoiding or meeting the human. Our model consists of two levels. For each grid cell, we formulate the local dynamics using a variant of the left-to-right HMM, thus explicitly modeling the exit direction from the current cell. The dependency of this process on the entry direction is captured by employing the Input-Output HMM (IOHMM). On the higher level, we introduce the place where the whole trajectory originated into the IOHMM framework, forming a hierarchical input structure that captures long-term dependencies. The capabilities of our method are verified by experimental results from 10 hours of data collected in an office corridor environment.
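A tabular toy version of the per-cell conditioning described above (far simpler than the full IOHMM, and purely illustrative) counts exit directions conditioned on the entry direction; the class name and its Laplace prior are assumptions:

```python
import numpy as np
from collections import defaultdict

class ConditionalTransitionMap:
    # Per-cell counts of exit direction conditioned on entry
    # direction: a tabular stand-in for the per-cell IOHMM,
    # estimating P(exit | cell, entry) from observed trajectories.
    def __init__(self, n_dirs=4):
        self.n_dirs = n_dirs
        # Start each table at 1 (Laplace prior) to avoid zeros.
        self.counts = defaultdict(lambda: np.ones(n_dirs))

    def observe(self, cell, entry_dir, exit_dir):
        self.counts[(cell, entry_dir)][exit_dir] += 1

    def exit_probs(self, cell, entry_dir):
        c = self.counts[(cell, entry_dir)]
        return c / c.sum()
```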
ISBN (Print): 9780819499424
Improvements were made to the intelligence algorithms of Q, an autonomously operating ground vehicle that competed in the 2013 Intelligent Ground Vehicle Competition (IGVC). The IGVC required the vehicle first to navigate between two white lines on a grassy obstacle course, then to pass through eight GPS waypoints, and finally to traverse an obstacle field. Modifications to Q included a new vision system with a more effective image processing algorithm for white line extraction. The path-planning algorithm was adapted to the new vision system, producing smoother, more reliable navigation. With these improvements, Q successfully completed the basic autonomous navigation challenge, finishing tenth out of more than 50 teams.
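A minimal sketch of white line extraction on grass, assuming a simple brightness/low-spread threshold rather than the team's actual algorithm (thresholds here are illustrative):

```python
import numpy as np

def extract_white_lines(rgb):
    # Mark pixels as "white line" when all channels are bright and
    # nearly equal (low color spread), which separates painted
    # white lines from green grass. rgb: H x W x 3 array in [0, 255].
    rgb = rgb.astype(float)
    brightness = rgb.mean(axis=2)
    spread = rgb.max(axis=2) - rgb.min(axis=2)
    return (brightness > 180) & (spread < 40)
```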
ISBN (Print): 9780819499424
Ro-Boat is an autonomous river-cleaning robot that combines mechanical design with computer vision algorithms to achieve autonomous river cleaning and provide a sustainable environment. Ro-Boat is designed in a modular fashion, with design details covering mechanical structural design, hydrodynamic design, and vibration analysis. It incorporates a stable mechanical system with air and water propulsion, robotic arms, and a solar energy source, and it is made autonomous using computer vision. Both HSV color space and SURF features are proposed as measurements for a Kalman filter, resulting in extremely robust pollutant tracking. The system has been tested with successful results in the Yamuna River in New Delhi. We foresee that a fleet of Ro-Boats working autonomously 24x7 could clean a major urban river in about six months, which is unmatched by alternative methods of river cleaning.
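The HSV-plus-Kalman tracking idea can be sketched as follows, assuming a linear constant-velocity filter and a hue-band centroid as the measurement (the actual Ro-Boat implementation and its SURF measurement are not shown; all names are hypothetical):

```python
import numpy as np

def hsv_centroid(hue, lo, hi):
    # Measurement: centroid (x, y) of pixels whose hue lies in the
    # target band, used as the observed pollutant position.
    ys, xs = np.where((hue >= lo) & (hue <= hi))
    if len(xs) == 0:
        return None
    return np.array([xs.mean(), ys.mean()])

def kalman_step(x, P, z, F, H, Q, R):
    # One predict/update cycle of a linear Kalman filter on the
    # tracked state x (e.g. position and velocity) with covariance P.
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```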