ISBN (print): 0780384636
Motion vision (visual odometry, the estimation of camera egomotion) is a well-researched field, yet has seen relatively limited use despite strong evidence from biological systems that vision can be extremely valuable for navigation. The limited use of such vision techniques has been attributed to a lack of good algorithms and insufficient computer power, but both of those problems were resolved as long as a decade ago. A gap presently yawns between theory and practice, perhaps due to perceptions of robot vision as less reliable and more complex than other types of sensing. We present an experimental methodology for assessing the real-world precision and reliability of visual odometry techniques in both normal and extreme terrain. This paper evaluates the performance of a mobile robot equipped with a simple vision system in common outdoor and indoor environments, including grass, pavement, ice, and carpet. Our results show that motion vision algorithms can be robust and effective, and suggest a number of directions for further development.
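As an illustration of the sort of precision measure such an evaluation methodology might report, the sketch below computes endpoint drift normalized by distance travelled, assuming ground-truth and estimated trajectories are available as 2-D position sequences; the paper's own metrics are not reproduced here.

```python
import numpy as np

def drift_per_meter(gt_xy, est_xy):
    """Endpoint drift of a visual-odometry estimate, normalized by
    ground-truth path length (a common precision measure; the paper's
    exact metrics are not reproduced here)."""
    gt_xy = np.asarray(gt_xy, dtype=float)
    est_xy = np.asarray(est_xy, dtype=float)
    # Total distance actually travelled along the ground-truth path.
    path_len = np.sum(np.linalg.norm(np.diff(gt_xy, axis=0), axis=1))
    # Euclidean error between final estimated and true positions.
    end_error = np.linalg.norm(est_xy[-1] - gt_xy[-1])
    return end_error / path_len

# Hypothetical run: a 10 m straight line whose estimate drifts by 0.25 m.
gt = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
est = [(0.0, 0.0), (5.1, 0.1), (10.2, 0.15)]
print(f"drift: {100 * drift_per_meter(gt, est):.1f}% of distance travelled")
```

Running the same measure over many trials on each surface (grass, pavement, ice, carpet) is one way to compare reliability across terrains.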
ISBN (print): 081945561X
This paper presents a method to integrate non-stereoscopic vision information with laser distance measurements for Autonomous Ground Robotic Vehicles (AGRVs). The method assumes a horizontally mounted Laser Measurement System (LMS) sweeping 180 degrees in front of the vehicle, from right to left, once per second, and a video camera mounted five feet high, pointing to the front and down at 45 degrees to the horizontal. The LMS gives highly accurate obstacle position measurements in a two-dimensional plane, whereas the vision system gives limited and less accurate information on obstacle positions in three dimensions. The vision system can also see contrasts between ground markings. Many AGRVs have similar sensors in similar arrangements, and the method presented here is general enough to accommodate many types of distance measurement systems, cameras, and lenses. Since the data from these two sensors arrive in radically different formats, an AGRV needs a scheme to combine them into a common format so that the data can be compared and correlated; a successful integration method allows the AGRV to make smart path-finding navigation decisions, and integrating these two sensors is one of the challenges for AGRVs that use this approach. The method presented in this paper employs a geometrical approach to combine the two data sets in real time. Tests, carried out in simulation as well as on an actual AGRV, show excellent results.
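Below is a rough sketch, under assumed mounting and pinhole parameters, of the kind of geometric mapping such an integration scheme needs: projecting a planar LMS return (range and bearing) into the pixel coordinates of the forward-looking, 45-degree-down camera. The intrinsics, the LMS scan-plane height, and the shared-origin assumption are illustrative placeholders, not values from the paper.

```python
import numpy as np

CAM_HEIGHT_M = 1.524                 # camera five feet above the ground
CAM_TILT_RAD = np.radians(45.0)      # pointed 45 degrees below the horizontal
LMS_HEIGHT_M = 0.5                   # assumed height of the horizontal scan plane
F_PIX, CX, CY = 600.0, 320.0, 240.0  # assumed pinhole intrinsics

def lms_to_pixel(range_m, bearing_rad):
    """Map one LMS return (range, bearing in the scan plane; 0 rad = straight
    ahead, positive = left) to camera pixel coordinates. Assumes the camera
    sits directly above the LMS; a real system would also apply the measured
    lever arm between the two sensors."""
    # Point in the vehicle frame: x forward, y left, z up.
    p = np.array([range_m * np.cos(bearing_rad),
                  range_m * np.sin(bearing_rad),
                  LMS_HEIGHT_M])
    d = p - np.array([0.0, 0.0, CAM_HEIGHT_M])   # position relative to camera
    c, s = np.cos(CAM_TILT_RAD), np.sin(CAM_TILT_RAD)
    x_cam = -d[1]                 # camera x: to the right in the image
    y_cam = -s * d[0] - c * d[2]  # camera y: downward in the image
    z_cam = c * d[0] - s * d[2]   # camera z: along the optical axis
    if z_cam <= 0:
        return None               # behind the camera, not visible
    return (CX + F_PIX * x_cam / z_cam, CY + F_PIX * y_cam / z_cam)

# An obstacle 2 m ahead and slightly to the left of the vehicle.
print(lms_to_pixel(2.0, np.radians(10.0)))
```

Once LMS returns and image features live in the same pixel coordinates, the two data streams can be compared and correlated directly.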
ISBN (print): 081945561X
We discuss a toolkit for use in scene understanding where prior information about targets is not necessarily known. As such, we give it a notion of connectivity so that it can classify features in an image for the purpose of tracking and identification. The tool, VFAT (Visual Feature Analysis Tool), is designed to work in real time in an intelligent multi-agent room. It is built around a modular design and includes several fast vision processes. The first components discussed perform feature selection using visual saliency and Monte Carlo selection. The features selected from an image are then mixed into useful and more complex features. All the features are then reduced in dimension and contrasted using a combination of Independent Component Analysis and Principal Component Analysis (ICA/PCA). Once this has been done, we classify features using a custom non-parametric classifier (NPclassify) that does not require hard parameters such as class size or number of classes, so that VFAT can create classes without stringent priors about class structure. These classes are then generalized using Gaussian regions, which allows easier storage of class properties and computation of probabilities for class matching. To speed up the creation of Gaussian regions we use a system of rotations instead of the traditional pseudo-inverse method. In addition to discussing the structure of VFAT, we discuss training of the current system, which is relatively easy to perform. ICA/PCA is trained by giving VFAT a large number of random images; the ICA/PCA matrix is computed from features extracted by VFAT. The non-parametric classifier NPclassify is trained by presenting it with images of objects and having it decide how many objects it thinks it sees. The difference between what it sees and what it is supposed to see, in terms of the number of objects, is used as the error term and allows VFAT to learn to classify based upon the experimenter's subjective idea of good classification.
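As a rough illustration of the ICA/PCA reduction stage described above, the sketch below trains scikit-learn's PCA and FastICA on a batch of stand-in feature vectors and applies the learned reduction to new ones. The feature dimensions and component counts are arbitrary assumptions, and the feature mixing, NPclassify, and the rotation-based Gaussian-region construction are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)

# Stand-in for feature vectors extracted from many random images
# (Laplace-distributed so the synthetic sources are non-Gaussian).
features = rng.laplace(size=(5000, 32))

# PCA first decorrelates and reduces dimension; ICA then rotates the
# reduced space toward statistically independent, contrasted components.
pca = PCA(n_components=8, whiten=True).fit(features)
ica = FastICA(n_components=8, random_state=0).fit(pca.transform(features))

def reduce_features(x):
    """Apply the trained ICA/PCA reduction to new feature vectors."""
    return ica.transform(pca.transform(np.atleast_2d(x)))

print(reduce_features(features[:3]).shape)  # -> (3, 8)
```

The reduced, contrasted vectors are what a downstream classifier such as NPclassify would then group into classes.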
ISBN (print): 081945561X
Cullet optical sorting represents one of the oldest selection procedures applied in the field of solid-waste recycling. From the original sorting strategies, mainly aimed at separating non-transparent elements (ceramics, stones, metal particles, etc.) from transparent ones (glass fragments), attention has shifted to defining procedures able to separate cullets according to their color characteristics and, more recently, to distinguish transparent ceramic glass from ordinary glass. Cullet sorting is currently realized by adopting, as the detecting architecture, devices based on laser-beam technology. The sorting logic is mainly analogical: an "on-off" logic is applied. Detection is, in fact, based on evaluating the characteristics of the energy (transparent or non-transparent fragment) and of the spectrum (fragment color attributes) received by a detector after the cullets have been crossed by a suitable laser beam. Such an approach presents some limits related to the technology utilized and to the material characteristics. The technological limits are linked to the physical dimensions and mechanical arrangement of the optics carrying the signals out and in, and to the pneumatic architecture that modifies cullet trajectories to realize sorting according to their characteristics (color and transmittance). Furthermore, such devices are practically "blind" to ceramic glass, whose presence in the final material to be melted damages the recycled-glass fusion, compromising the quality of the final product. In the following we describe the work developed, and the results achieved, in designing a fully integrated approach based on classical digital imaging and spectrophotometry, aimed at developing sorting strategies able to distinguish, at industrial recycling scale, cullets both by color and by material typology, that is, "real glass" from "ceramic glass" fragments.
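The "on-off" logic criticized above can be caricatured in a few lines: a fragment is accepted or rejected from its transmitted energy and sorted into a color stream from a coarse color attribute. All thresholds and channel ratios below are illustrative assumptions rather than calibrated values from any industrial device.

```python
def classify_cullet(transmittance, rgb):
    """Toy on-off sorting decision for a single fragment.

    transmittance: fraction of the laser energy reaching the detector (0-1).
    rgb: mean transmitted colour of the fragment as (r, g, b) in 0-255.
    All thresholds here are illustrative placeholders.
    """
    if transmittance < 0.15:
        # Opaque contaminant: ceramics, stones, metal particles.
        return "reject: non-transparent"
    r, g, b = rgb
    if g > 1.3 * r and g > 1.3 * b:
        return "accept: green glass"
    if r > 1.2 * g and g > 1.2 * b:
        return "accept: amber glass"
    return "accept: flint (clear) glass"

print(classify_cullet(0.05, (90, 80, 70)))   # opaque fragment -> eject
print(classify_cullet(0.80, (60, 150, 70)))  # transmits green -> green stream
```

As the abstract notes, logic of this kind cannot tell transparent ceramic glass from ordinary glass, which is what motivates the combined digital-imaging and spectrophotometric approach developed in the paper.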
ISBN (print): 9781586034528
This paper describes a novel application of active learning techniques in the field of robotic grasping. A vision-based grasping system has been implemented on a humanoid robot; it is able to compute a set of feasible grasps, execute any of them, and measure their actual reliability. An algorithm aimed at predicting the performance of an untested grasp, using the results observed on previous similar attempts, is presented. The previous experience is stored using a set of vision-based grasp descriptors. Moreover, a second algorithm is introduced that actively selects the next grasp to be executed in order to improve the predictive quality of the accumulated experience. An exhaustive database of experimental data is collected and used to test and validate both algorithms.
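A hedged sketch of the two ideas summarized above, assuming each grasp is described by a fixed-length vision-based descriptor and each execution yields a success or failure: the reliability of an untested grasp is predicted from its most similar executed neighbours, and the next grasp to execute is the one whose prediction is most ambiguous. The distance metric, neighbourhood size, and selection criterion are assumptions for illustration, not the paper's exact algorithms.

```python
import numpy as np

class GraspExperience:
    """Store executed grasps (descriptor, outcome) and predict new ones."""

    def __init__(self, k=2):
        self.k = k
        self.descriptors = []   # vision-based grasp descriptors
        self.outcomes = []      # 1.0 = successful execution, 0.0 = failure

    def add(self, descriptor, success):
        self.descriptors.append(np.asarray(descriptor, dtype=float))
        self.outcomes.append(float(success))

    def predict(self, descriptor):
        """Predicted reliability = mean outcome of the k most similar
        executed grasps (requires at least one stored attempt)."""
        d = np.linalg.norm(np.asarray(self.descriptors) - descriptor, axis=1)
        nearest = np.argsort(d)[: self.k]
        return float(np.mean(np.asarray(self.outcomes)[nearest]))

    def select_next(self, candidates):
        """Active selection: execute the candidate whose predicted
        reliability is closest to 0.5, i.e. the least certain one."""
        preds = [self.predict(c) for c in candidates]
        return int(np.argmin(np.abs(np.array(preds) - 0.5)))

exp = GraspExperience(k=2)
for desc, ok in [([0.2, 0.9], 1), ([0.3, 0.8], 1), ([0.9, 0.1], 0), ([0.8, 0.2], 0)]:
    exp.add(desc, ok)
print(exp.predict([0.25, 0.85]))               # -> 1.0, similar to past successes
print(exp.select_next([[0.25, 0.85], [0.55, 0.5]]))  # -> 1, the ambiguous grasp
```

Executing the most ambiguous candidate is one simple way to make each new trial maximally informative for the accumulated experience.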
Standard bundle adjustment techniques for Euclidean reconstruction consider camera intrinsic parameters as unknowns in the optimization process. Obviously, the speed of an optimization process is directly related to the number of unknowns and to the form of the cost function. The scheme proposed in this paper differs from previous standard techniques since unknown camera intrinsic parameters are not considered in the optimization process. Considering fewer unknowns in the optimization process produces a faster algorithm, better suited to time-dependent applications such as robotics. Computationally expensive metric reconstruction, using for example several zooming cameras, benefits considerably from an intrinsics-free bundle adjustment.
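One way to picture an optimization in which the intrinsics are unknown yet absent from the parameter vector is to eliminate them in closed form inside the residual. The sketch below does this for a single focal length per camera; it is only an illustration of the reduced-unknown idea, not the cost function proposed in the paper, and it assumes a known principal point.

```python
import numpy as np

def focal_free_residuals(proj_xy, obs_uv):
    """Reprojection residuals for one camera with the focal length eliminated.

    proj_xy: Nx2 normalized image coordinates (X/Z, Y/Z) predicted from the
             current pose and structure estimates.
    obs_uv:  Nx2 observed pixel coordinates, already centred on the principal
             point (assumed known here).
    The focal length that best explains obs_uv ~= f * proj_xy is computed in
    closed form, so f never appears among the optimizer's unknowns.
    """
    proj_xy = np.asarray(proj_xy, dtype=float)
    obs_uv = np.asarray(obs_uv, dtype=float)
    f = np.sum(obs_uv * proj_xy) / np.sum(proj_xy * proj_xy)
    return (obs_uv - f * proj_xy).ravel()

# Synthetic check: points seen through an f=500 camera are explained exactly.
xy = np.array([[0.1, 0.2], [-0.3, 0.05], [0.25, -0.15]])
print(focal_free_residuals(xy, 500.0 * xy))   # ~zero residuals
```

In a full pipeline these residuals would be handed to a nonlinear least-squares solver whose parameter vector contains only the six-degree-of-freedom poses and the 3-D points.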
ISBN (print): 081945155X
There has been great interest in recent years in visual coordination and target tracking for mobile robots cooperating in unstructured environments. This paper describes visual servo control techniques suitable for intelligent task planning of cooperative robots operating in an unstructured environment. We consider a team of semi-autonomous robots controlled by a remote supervisory control system and present an algorithm for visual position tracking of the individual cooperative robots within their working environment. First, we present a technique suitable for visual servoing of a robot toward its landmark targets. Second, we present an image-processing technique that utilizes images from a remote surveillance camera for localization of the robots within the operational environment; in this algorithm, the surveillance camera can be either stationary or mobile. The supervisory control system keeps track of the relative locations of the individual robots and utilizes their relative coordinate information to plan their cooperative activities. We present some results of this research effort that illustrate the effectiveness of the proposed algorithms for visual teamwork and target tracking in cooperative robotic systems.
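One simple way to turn surveillance-camera detections into workspace coordinates for the supervisory controller is a ground-plane homography fitted from a few floor markers, sketched below with OpenCV. The marker layout and the use of a homography are illustrative assumptions; the abstract does not specify the paper's image-processing technique at this level of detail.

```python
import numpy as np
import cv2

# Pixel positions of four floor markers seen by the surveillance camera,
# and their known coordinates (in metres) in the workspace frame.
img_pts = np.array([[102, 388], [518, 395], [472, 121], [148, 117]], np.float32)
floor_pts = np.array([[0, 0], [4, 0], [4, 3], [0, 3]], np.float32)

H, _ = cv2.findHomography(img_pts, floor_pts)

def robot_floor_position(pixel_xy):
    """Map a robot's detected image position to workspace coordinates."""
    p = np.array([[pixel_xy]], dtype=np.float32)        # shape (1, 1, 2)
    return cv2.perspectiveTransform(p, H)[0, 0]

print(robot_floor_position((300, 250)))   # -> approximate (x, y) in metres
```

With every robot expressed in the same workspace frame, the supervisor can reason about their relative positions when planning cooperative activities.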
The proceedings contain 33 papers from the SPIE conference on Intelligent Robots and Computer Vision XXI: Algorithms, Techniques, and Active Vision. The topics discussed include: learning for intelligent mobile robots; remote operation of robotic systems using a WLAN- and CORBA-based architecture; autonomous cross-country driving using active vision; a Web-based telerobotics system in a virtual reality environment; automatic calibration and neural networks for robot guidance; a recursive least squares approach to calculating motion parameters for a moving camera; and effective color representation for image segmentation under non-white illumination.
ISBN (print): 081945155X
This paper describes the development of a miniature assembly cell for microsystems. The cell utilizes a transparent electrostatic gripper, allowing the use of computer vision for part alignment with respect to the gripper. Part-to-assembly alignment is achieved via optical triangulation using a fiber-coupled laser and a position-sensitive detector (PSD). The system layout, principle of operation, and design are described along with the visual and optical control algorithms and their implementation. Experimental measurements of the performance of the stage indicate normal and tangential gripping forces in the range of 0.03-2.5 mN and 1-9 mN, respectively. The visual search algorithm limits the feature tracking speed to 111 ms per search. The alignment accuracies of the visual and optical proportional position feedback controls were determined to be ±7 µm and ±10 µm, respectively.
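A back-of-the-envelope sketch of the parallel-axis triangulation geometry that a fiber-coupled laser and PSD arrangement can rely on; the baseline and focal length below are placeholders, not the cell's actual optical design.

```python
F_MM = 25.0   # assumed focal length of the PSD imaging lens
B_MM = 40.0   # assumed baseline between the laser axis and the lens axis

def range_from_psd(spot_mm):
    """Distance to the illuminated part from the laser-spot position on the
    PSD, for a simple parallel-axis triangulation geometry: z = f * b / x.
    spot_mm is the spot's lateral position on the PSD, in millimetres."""
    return F_MM * B_MM / spot_mm

def psd_from_range(z_mm):
    """Inverse relation, useful for predicting sensitivity at a given range."""
    return F_MM * B_MM / z_mm

# At 100 mm standoff the spot sits 10 mm from the PSD centre; a 1 mm change
# in range moves it by roughly 0.1 mm, which bounds the feedback resolution.
print(range_from_psd(10.0), psd_from_range(100.0) - psd_from_range(101.0))
```

The spot displacement measured by the PSD is what the optical proportional position feedback loop would act on.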