ISBN: 0819423068 (print)
Based on the concept of object- and behavior-oriented stereo vision, a method is introduced which enables a robot manipulator to handle two distinct types of objects. It uses an uncalibrated stereo vision system and allows a direct transition from image coordinates to motion control commands of the robot. An object can be placed anywhere in the robot's 3D workspace that is in the field of view of both cameras. The objects to be manipulated can be either flat cylindrical or elongated in shape. Results from real-world experiments are discussed.
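To make the image-to-motion idea concrete, the sketch below shows one common way such a direct mapping can be realized with uncalibrated cameras: an image Jacobian is estimated online and inverted to turn stereo feature errors into joint commands. This is a generic illustration under assumed interfaces, not the method actually used in the paper.

```python
import numpy as np

# Illustrative sketch only: a generic uncalibrated image-based servo loop.
# The paper's actual image-to-motion mapping is not given in the abstract;
# here the image Jacobian J is estimated online with a Broyden-style update.

def broyden_update(J, dq, df, alpha=0.5):
    """Rank-1 update of the estimated image Jacobian from observed motion."""
    dq = dq.reshape(-1, 1)          # joint-space step actually commanded
    df = df.reshape(-1, 1)          # resulting change in stacked stereo image features
    return J + alpha * (df - J @ dq) @ dq.T / (dq.T @ dq + 1e-9)

def servo_step(J, f_current, f_target, gain=0.3):
    """Map the image-space error of both cameras directly to a joint command."""
    error = (f_target - f_current).reshape(-1, 1)
    dq = gain * np.linalg.pinv(J) @ error   # proportional law on the image error
    return dq.ravel()
```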
ISBN: 0819423068 (print)
Augmented reality is a term used to describe systems in which computer-generated information is superimposed on top of the real world; for example, through the use of a see-through head-mounted display. A human user of such a system could still see and interact with the real world, but have valuable additional information, such as descriptions of important features or instructions for performing physical tasks, superimposed on the world. For example, the computer could identify objects of interest and overlay them with graphic outlines, labels, and schematics. The graphics are registered to the real-world objects and appear to be 'painted' onto those objects. Augmented reality systems can be used as productivity aids for tasks such as inspection, manufacturing, and navigation. One of the most critical requirements for augmented reality is to recognize and locate real-world objects with respect to the person's head. Accurate registration is necessary in order to overlay graphics accurately on top of the real-world objects. At the Colorado School of Mines, we have developed a prototype augmented reality system that uses head-mounted cameras and computer vision techniques to accurately register the head to the scene. The current system locates and tracks a set of passive fiducial targets pre-placed on the real-world objects. The system computes the pose of the objects and displays graphics overlays using a see-through head-mounted display. This paper describes the architecture of the system and outlines the computer vision techniques used.
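The core registration step, recovering object pose from detected fiducials, can be sketched with a standard perspective-n-point solution. The fiducial layout, pixel coordinates, and camera intrinsics below are made-up illustrative values; the paper's own registration pipeline is not reproduced here.

```python
import numpy as np
import cv2  # OpenCV, used here only to illustrate fiducial-based pose recovery

# Sketch of the registration idea, not the CSM system itself: given image
# detections of pre-placed fiducials with known 3D positions on the object,
# recover the object pose relative to the head-mounted camera.

object_points = np.array([[0.0, 0.0, 0.0],
                          [0.1, 0.0, 0.0],
                          [0.1, 0.1, 0.0],
                          [0.0, 0.1, 0.0]], dtype=np.float64)   # fiducial positions on the object (m), assumed
image_points = np.array([[320.0, 240.0],
                         [400.0, 238.0],
                         [402.0, 320.0],
                         [318.0, 322.0]], dtype=np.float64)     # detected pixel centres, assumed
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                                  # assumed camera intrinsics
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
# rvec/tvec give the object pose in the camera frame; the overlay graphics are
# then rendered with this transform so they appear "painted" on the object.
```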
ISBN: 0819423068 (print)
A camera system to be used in a tactile vision aid for blind persons has been built and tested. The camera is based on individual adaptive photoreceptors modelled after the biological example and realized in standard CMOS technology. The system exhibits a large dynamic range of approximately 7 orders of magnitude in incident light intensity and a pronounced capability to detect moving objects. It is planned to connect such a camera to a set of mechanical actuators which will transmit processed information about the image to the skin of a person. This paper describes simulations and measurements carried out with single adaptive pixels as well as results obtained with two complete prototype camera systems.
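The behaviour of a single adaptive pixel can be sketched as logarithmic compression followed by slow adaptation, which is one common way such receptors are modelled; the time constants and gains below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# A minimal single-pixel sketch in the spirit of an adaptive photoreceptor:
# logarithmic compression handles the huge intensity range, and a slow
# adaptation state emphasises temporal changes (moving objects).

def adaptive_pixel(intensity, dt=1e-3, tau_adapt=0.5, gain=5.0):
    """intensity: 1-D array of incident light samples spanning many decades."""
    log_i = np.log10(np.maximum(intensity, 1e-12))  # compress ~7 decades of input
    state = log_i[0]                                # adaptation state
    out = np.empty_like(log_i)
    for k, x in enumerate(log_i):
        out[k] = gain * (x - state)                 # amplified deviation from the adapted level
        state += (x - state) * dt / tau_adapt       # slow drift toward the current level
    return out

# A step from indoor to sunlight levels produces a strong transient that decays
# as the pixel re-adapts, while the steady-state response stays within a small range.
```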
ISBN: 0819423068 (print)
A special case of civilian active vision has been investigated here, namely, a vision system formed by car anti-fog headlamps. A method to estimate the light-engineering criteria for headlamp performance and to simulate the operation of the system through a turbid medium, such as fog, is developed on the basis of the analytical procedures of radiative transfer theory. The features of this method include the spaced light source and receiver of the driver's active vision system, the complicated azimuth-nonsymmetrical emissive pattern of the headlamps, and the fine angular dependence of the fog phase function near the backscattering direction. The final formulas are derived in an analytical form, providing additional convenience and simplicity for the computations. The image contrast of a road object with arbitrary orientation, dimensions, and shape, and its limiting visibility range, are studied as functions of the meteorological visibility range in fog as well as of various emissive pattern, mounting, and adjustment parameters of the headlamps. Optimization of both the light-engineering and geometrical characteristics of the headlamps is shown to be possible, enhancing the visibility range and, hence, traffic safety.
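For orientation, a simplified Koschmieder-type relation links the apparent contrast of an object, the fog extinction coefficient, and the limiting visibility range. The paper's own formulas are considerably more detailed, accounting for the spaced source and receiver, the azimuth-nonsymmetrical emissive pattern, and the backscatter behaviour of the fog phase function, none of which appear in this sketch.

```latex
% Simplified Koschmieder-type relations, given here only to fix notation;
% sigma is the fog extinction coefficient, C_0 the inherent object contrast,
% C_th the threshold contrast of the observer.
\begin{align}
  C(R) &= C_0 \, e^{-\sigma R},
  \\
  S_M &= \frac{\ln(1/\varepsilon)}{\sigma} \approx \frac{3.912}{\sigma},
        \qquad \varepsilon = 0.05 \quad \text{(meteorological visibility range)},
  \\
  C(R_{\mathrm{lim}}) &= C_{\mathrm{th}}
  \;\Rightarrow\;
  R_{\mathrm{lim}} = \frac{1}{\sigma}\,\ln\frac{C_0}{C_{\mathrm{th}}}
        \quad \text{(limiting visibility range of the object)}.
\end{align}
```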
ISBN: 0819423068 (print)
This paper proposes a new localization method for indoor mobile robots. Using two cameras and one laser range finder on board a TRC mobile robot, the initial position and pose of the robot can be obtained by multisensor fusion and scene matching based on geometric hashing. No correspondence calculation or special pattern recognition is needed during scene matching. The localization method is implemented in five stages: 1) Model the indoor environment: selected indoor environment features are first modeled off-line into hash tables. 2) Perform system calibration and information fusion for the two cameras and the range finder. 3) Extract the vertical edge points corresponding to the horizontal scanning plane of the 2D laser range finder from the scene images and transform them into geometric invariants. 4) Perform scene matching and matching verification by geometric hashing and a model back-projection method, respectively. 5) Estimate position and pose by a least-squares fit. Experimental results show that the accuracy and reliability of this localization method are quite high.
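Stages 1 and 4 rest on geometric hashing, which the following minimal 2D sketch illustrates: model features are expressed in basis-invariant coordinates and stored in a hash table off-line, and a scene basis then votes for (model, basis) entries on-line. The quantization step and similarity-frame construction are simplifying assumptions, not the paper's implementation.

```python
import numpy as np
from collections import defaultdict
from itertools import permutations

def invariant_coords(points, b0, b1):
    """Express 2D points in the similarity frame defined by basis points b0, b1."""
    e1 = b1 - b0
    scale = np.dot(e1, e1)
    e2 = np.array([-e1[1], e1[0]])          # perpendicular axis
    rel = points - b0
    return np.stack([rel @ e1, rel @ e2], axis=1) / scale

def build_table(models, q=0.05):
    """Off-line stage: hash every model point under every ordered basis pair."""
    table = defaultdict(list)
    for name, pts in models.items():
        for i, j in permutations(range(len(pts)), 2):
            for u, v in invariant_coords(pts, pts[i], pts[j]):
                table[(int(round(u / q)), int(round(v / q)))].append((name, i, j))
    return table

def vote(table, scene_pts, i, j, q=0.05):
    """On-line stage: a chosen scene basis (i, j) votes for (model, basis) entries."""
    votes = defaultdict(int)
    for u, v in invariant_coords(scene_pts, scene_pts[i], scene_pts[j]):
        for entry in table.get((int(round(u / q)), int(round(v / q))), []):
            votes[entry] += 1
    return max(votes.items(), key=lambda kv: kv[1]) if votes else None
```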
ISBN: 0819423068 (print)
The challenging task of automated handling of variable objects necessitates a combination of innovative engineering and advanced information technology. This paper describes the application of a recently developed control strategy to overcome some limitations of robot handling, particularly when dealing with variable objects. The paper focuses on a novel approach to accommodate the need for sensing and actuation in controlling the pickup procedure. An experimental robot-based system for the handling of soft parts, ranging from artificial components to natural objects such as fruit and meat pieces, was developed. The configuration comprises a modular gripper subsystem and an industrial robot as parts of a distributed control system. The gripper subsystem features manually configurable fingers with integrated sensing capabilities. The control architecture is based on a concept of decentralized control differentiating between positioning and gripping procedures. In this way, the robot and gripper systems are treated as performing individual handling operations. This concept allows very short set-up times for future changes involving one or more subsystems.
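The decentralization idea, positioning handled by the robot and gripping handled locally by the sensing gripper, can be sketched as two independent procedures sequenced by a thin supervisor. The class and method names below are illustrative, not the authors' interfaces.

```python
# Sketch only: the robot handles positioning, the gripper runs its own local
# gripping loop using its integrated finger sensors, and a supervisor merely
# sequences the two subsystems.

class RobotPositioner:
    def move_to(self, pose):
        # send a pose target to the industrial robot controller (placeholder)
        print(f"robot: moving to {pose}")

class SensingGripper:
    def __init__(self, force_limit=2.0):
        self.force_limit = force_limit      # assumed contact-force limit for soft objects (N)

    def close_until_contact(self, read_force, step_finger, max_steps=200):
        # local gripping loop: step the fingers until the integrated sensors
        # report sufficient contact force, independent of the robot controller
        for _ in range(max_steps):
            if read_force() >= self.force_limit:
                return True
            step_finger()
        return False

def pick(robot, gripper, approach_pose, read_force, step_finger):
    robot.move_to(approach_pose)                            # positioning procedure (robot subsystem)
    return gripper.close_until_contact(read_force, step_finger)  # gripping procedure (gripper subsystem)
```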
ISBN: 0819423068 (print)
This paper focuses on simulating a model of a 3D-color vision system based on synthetic nonlinear modulation. The model is set up to recover 3D and color properties of a colored object by evaluating several rf-interferograms sampled by a black-and-white CCD camera. Colorizing a black-and-white CCD camera in a 3D-vision system implies high resolution. The synthetic nonlinear modulation differs from other 3D-color vision systems: differently colored lights are synchronously modulated with characteristic rf frequencies to detect a 3D object, and recovering color is treated in the same way as recovering 3D information. Optical filters are not used; instead, a suitable algorithm recovers the color and 3D information. Since a modulated optical rf signal is used as the detecting probe rather than an unmodulated optical wave, higher-order harmonic signals may be introduced by electrical or optical components. Although linear matching techniques are adopted to mitigate this problem, it is necessary to simulate the vision system to predict its performance. An 8-bit black-and-white CCD camera with different signal-to-noise ratios is taken as an example in the simulation. 3D and color properties are evaluated for the system in the presence of nonlinearity and noise. An optimized result is obtained for realizing this vision system.
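The frequency-tagging principle, each color channel modulated at its own carrier so that a monochrome pixel's time signal can be separated into colors, can be sketched with a simple lock-in style demodulation; the carrier frequencies, sample rate, and phase handling below are illustrative assumptions.

```python
import numpy as np

# Sketch of separating frequency-tagged colour channels from a single
# monochrome pixel's time signal by lock-in style demodulation.

def demodulate_pixel(signal, fs, carriers):
    """signal: 1-D time series of one monochrome pixel; carriers: {name: Hz}."""
    t = np.arange(len(signal)) / fs
    amplitudes = {}
    for name, f in carriers.items():
        i = np.mean(signal * np.cos(2 * np.pi * f * t))   # in-phase component
        q = np.mean(signal * np.sin(2 * np.pi * f * t))   # quadrature component
        amplitudes[name] = 2.0 * np.hypot(i, q)           # recovered channel amplitude
    return amplitudes

# e.g. demodulate_pixel(samples, fs=10e3,
#                       carriers={"red": 1e3, "green": 1.4e3, "blue": 1.9e3})
```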
ISBN: 0819423068 (print)
A biologically plausible model of a system with adaptive behavior in an a priori unknown environment and resistance to impairment has been developed. The system consists of input, learning, and output subsystems. The first subsystem classifies input patterns, presented as n-dimensional vectors, in accordance with an associative rule. The second, a neural network, determines adaptive responses of the system to input patterns. Arranged neural groups coding possible input patterns and the appropriate output responses are formed during learning by means of negative reinforcement. The output subsystem maps the neural network activity into the system's behavior in the environment. The system has been studied by computer simulation imitating the collision-free motion of a mobile robot. After some learning period the system 'moves' along a road without collisions. It is shown that, in spite of impairment of some neural network elements, the system functions reliably after relearning. A foveal visual preprocessor model developed earlier has been tested as a source of visual input to the system.
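Learning by negative reinforcement, in which responses that lead to collisions are weakened until collision-free behavior remains, can be sketched with a toy tabular model; the states, actions, and update rule below are illustrative simplifications, not the neural-group mechanism of the paper.

```python
import numpy as np

# Toy sketch of negative reinforcement: actions that caused a collision are
# suppressed, so over many trials the system settles on collision-free choices.

rng = np.random.default_rng(0)
n_states, n_actions = 16, 3                  # assumed: coarse sensor patterns, steering choices
weights = np.ones((n_states, n_actions))     # response strengths of the "neural groups"

def choose_action(state):
    p = weights[state] / weights[state].sum()
    return rng.choice(n_actions, p=p)        # stronger responses are chosen more often

def reinforce(state, action, collided, penalty=0.5):
    if collided:                             # negative reinforcement only
        weights[state, action] = max(weights[state, action] - penalty, 0.05)
```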
ISBN: 0819423068 (print)
A new feature space trajectory (FST) description of 3D distorted views of an object is advanced for active vision applications. In an FST, different distorted object views are vertices in feature space. A new eigen-feature space and Fourier transform features are used. Vertices for adjacent distorted views are connected by straight lines, so that an FST is created as the viewpoint changes. Each different object is represented by a distinct FST. An object to be recognized is represented as a point in feature space; the closest FST denotes the class of the object, and the closest line segment on that FST indicates its pose. A new neural network is used to efficiently calculate distances. We discuss its uses in active vision. Beyond an initial estimate of object class and pose, the FST processor can specify where to move the sensor in order to confirm class and pose, to grasp the object, or to focus on a specific object part for assembly or inspection. We offer initial remarks on how many aspect views are needed, and which ones, to represent an object. We note the superiority of our eigenspace for discrimination, how it can provide shift invariance, and how the FST overcomes problems associated with other classifiers.
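The FST decision rule, assign the test point to the class whose trajectory contains the nearest line segment and read the pose from the position along that segment, can be sketched directly. The paper uses a neural network to compute these distances efficiently; the brute-force version below is only for illustration, and feature extraction is omitted.

```python
import numpy as np

def point_segment_distance(x, a, b):
    """Distance from point x to segment ab, plus the interpolation parameter t."""
    ab = b - a
    t = np.clip(np.dot(x - a, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(x - (a + t * ab)), t

def classify(x, trajectories):
    """trajectories: dict class_name -> array of ordered view vertices (V, d)."""
    best = None
    for name, verts in trajectories.items():
        for i in range(len(verts) - 1):
            d, t = point_segment_distance(x, verts[i], verts[i + 1])
            if best is None or d < best[0]:
                best = (d, name, i, t)   # t interpolates pose between aspect views i and i+1
    _, cls, seg, t = best
    return cls, seg, t
```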
ISBN: 0819423068 (print)
The term 'active vision' was first used by Bajcsy at a NATO workshop in 1982 to describe an emerging field of robot vision which departed sharply from traditional paradigms of image understanding and machine vision. The new approach embeds a moving camera platform as an in-the-loop component of robotic navigation or hand-eye coordination. Visually servoed steering of the focus of attention supersedes the traditional functions of recognition and gaging. Custom active vision platforms soon proliferated in research laboratories in Europe and North America. In 1990 the National Science Foundation funded the design of a common platform to promote cooperation and reduce cost in active vision research. This paper describes the resulting platform. The design was driven by payload requirements for binocular motorized C-mount lenses on a platform whose performance and articulation emulate those of the human eye-head system. The result was a 4-DOF mechanism driven by servo-controlled DC brush motors. A crossbeam supports two independent worm-gear-driven camera vergence mounts operating at speeds up to 1,000 degrees per second over a range of ±90 degrees from dead ahead. This crossbeam is supported by a pan-tilt mount whose horizontal axis intersects the vergence axes for translation-free camera rotation about these axes at speeds up to 500 degrees per second.