Two new planetary rovers, the SRR-1 (Sample Return Rover) and the FIDO (Field Integrated Design & Operations) rover, have been developed to support future planned science missions of the Mars Surveyor Program. SRR-1 is a 10-kg, 4-wheel hybrid composite-metal vehicle for rapid autonomous location, rendezvous, and retrieval of collected samples under integrated visual and beacon guidance. It measures 88×55×36 cm, collapses to less than a third of its deployed field volume, and carries a powerful visually servoed manipulator. FIDO, on the other hand, is a high-mobility, multi-km-range science vehicle of over 50 kg, measuring approximately 100×80×50 cm. It includes a robot arm with an attached microscope and a body-mounted rock-sampling corer.
Developing elementary behavior is the starting point for the realization of complex systems. We present a learning algorithm that realizes a simple goal-reaching behavior for an autonomous vehicle when no a priori knowledge of the environment is provided. Information coming from a visual sensor is used to detect a general state of the system. An optimal action is associated with each state using a Q-learning algorithm. Since the sets of states and actions are limited, a few training trials in simulation are sufficient to learn the optimal policy. During test trials (in both simulated and real environments), fuzzy sets with membership functions are introduced to compute the state of the system and the proper action, in order to handle errors in state estimation due to noise in the vision measurements. Experimental results in both simulated and real environments are shown.
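As a rough illustration of the approach described above, the sketch below combines a tabular Q-learning update with a triangular membership function of the kind used at test time. The state encoding, action set, learning constants, and membership parametrisation are assumptions for illustration only; the paper does not specify them.

```python
import random
from collections import defaultdict

# Hypothetical action set and learning constants (not from the paper).
ACTIONS = ["forward", "turn_left", "turn_right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = defaultdict(float)  # Q[(state, action)] -> estimated value, default 0.0

def choose_action(state):
    """Epsilon-greedy selection over the small discrete action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update rule."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def triangular(x, left, peak, right):
    """Triangular membership function; at test time such functions soften
    the hard state boundaries used during learning (parameters assumed)."""
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (peak - left) if x < peak else (right - x) / (right - peak)
```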
Diffraction of light by 3D phase grating layers could be effectively used for color image processing in robotic vision. Gratings with hexagonal close-packed structures have the maximum number of cells per unit volume, which is an advantage for color image processing. Using the 4D spectral method, we solve the wave equation for diffraction of light by a 3D hexagonal phase grating layer of spherical particles. Both ABCA and ABAB structures are considered. The distribution of diffracted light intensity is calculated in the Fraunhofer and Fresnel diffraction zones. For particular grating distances, incident white light diffracts into three spatially separated maxima with central wavelengths corresponding to the three primary colors. The wavelength dependence of the diffracted light intensity, for incident white light, is calculated for the three maxima. In the general case, these three primary curves allow the color of the incident light to be reconstructed from the corresponding intensities measured in the three diffracted maxima. The conditions for self-imaging of 3D grating layers are formulated and investigated. Intensity distributions of the diffracted light in planes of positive and negative self-imaging, and in a plane of lowest contrast, are computed.
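For orientation only, the worked equation below recalls the classical paraxial (Talbot) self-imaging condition for a grating of period d at wavelength lambda; the paper derives the analogous conditions for 3D hexagonal layers, which are not reproduced here.

```latex
% Classical paraxial Talbot self-imaging, shown only to illustrate the form
% of such conditions (the 3D hexagonal case treated in the paper differs):
z_T = \frac{2 d^{2}}{\lambda} \qquad \text{(positive self-image)}, \qquad
z_{\mathrm{neg}} = \frac{z_T}{2} = \frac{d^{2}}{\lambda}
\qquad \text{(negative, half-period-shifted self-image)}
% The plane of lowest contrast lies between these, near z_T / 4.
```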
Rather than design an optimal filter over a large window, which may be computationally impossible or require unacceptable computation time, one can design an iterative filter, each stage of which is designed over a small window with acceptable design time. Theoretically, a two-stage iterative filter with each stage optimally designed over a window is suboptimal in comparison to the optimal filter over the larger window formed as the dilation of the small window with itself. In practice, however, filters are designed from realizations, and lack of precision for design over a large window can result in a directly estimated optimal filter over a large window that performs worse than an iteratively designed filter. Using image-noise models, this paper considers three cases: (1) the designed filters are good estimates of the theoretically optimal filters, the two-stage iterative filter is close to optimal, and as further iterations are considered for both the small and large windows, the performance difference becomes small; (2) the designed filter over the large window is a poor estimate of the theoretically optimal filter and the iteratively designed filter outperforms the directly designed filter; (3) iteration cannot do well because the iteration window is too small for the image-noise model. We will see that, while in terms of logic there may be a significant difference between a noniterative and an approximating iterative filter, their probabilistic difference as operators on random processes can be negligible.
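The window trade-off can be mimicked with a standard nonlinear filter; the sketch below, assumed for illustration, applies a 3×3 median filter twice (the iterative design) and compares it with a single pass over the 5×5 window obtained by dilating the 3×3 window with itself. The paper's filters are designed optimally from training realizations, which is not reproduced here.

```python
import numpy as np
from scipy.ndimage import median_filter

# Placeholder binary image; in practice this would be a noisy realization
# drawn from the image-noise model used for filter design.
noisy = (np.random.rand(128, 128) > 0.5).astype(np.uint8)

# Two-stage iterative filter: each stage uses the small 3x3 window.
iterative = median_filter(median_filter(noisy, size=3), size=3)

# Direct filter over the larger window (the 3x3 window dilated with itself,
# i.e. an effective 5x5 support).
direct = median_filter(noisy, size=5)
```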
Robots relying on vision as a primary sensor frequently need to track common objects such as people, cars, and tools in order to successfully perform autonomous navigation or grasping tasks. These objects may comprise many visual parts and attributes, yet image-based tracking algorithms are often keyed to only one of a target's identifying characteristics. In this paper, we present a framework for sharing information among disparate state estimation processes operating on the same underlying visual object. Well-known techniques for joint probabilistic data association are adapted to yield increased robustness when multiple trackers attuned to different visual cues are deployed simultaneously. We also formulate a measure of tracker confidence, based on distinctiveness and occlusion probability, which permits the deactivation of trackers before erroneous state estimates adversely affect the ensemble. We discuss experiments using color-region- and snake-based tracking in tandem that demonstrate the efficacy of this approach.
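A minimal sketch of the confidence-gated ensemble idea follows. The confidence formula (distinctiveness times non-occlusion) and the weighted-average fusion rule are assumptions chosen for illustration; the paper's joint probabilistic data association machinery is not reproduced.

```python
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class Tracker:
    name: str
    state: np.ndarray          # e.g. [x, y] target position estimate
    distinctiveness: float     # in [0, 1]
    occlusion_prob: float      # in [0, 1]
    active: bool = True

    def confidence(self) -> float:
        # Hypothetical combination: high distinctiveness, low occlusion.
        return self.distinctiveness * (1.0 - self.occlusion_prob)

def fuse(trackers: List[Tracker], min_conf: float = 0.2) -> Optional[np.ndarray]:
    """Deactivate low-confidence trackers, then fuse the rest by
    confidence-weighted averaging of their state estimates."""
    for t in trackers:
        if t.confidence() < min_conf:
            t.active = False     # drop before it corrupts the ensemble
    live = [t for t in trackers if t.active]
    if not live:
        return None
    w = np.array([t.confidence() for t in live])
    states = np.stack([t.state for t in live])
    return (w[:, None] * states).sum(axis=0) / w.sum()
```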
ISBN: 0819426407 (print)
A machine vision system has been developed to separate half-cut peaches with small splinters from clean ones. The system exploits the different spectral profiles of the two classes together with an ad hoc illumination system, and it is capable of processing 30 half peaches per second. The hardware and software solutions are described.
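A hypothetical sketch of the kind of spectral discrimination involved is shown below; the band choice, ratio threshold, and pixel-count criterion are assumptions for illustration, not the paper's classifier.

```python
import numpy as np

def has_splinters(band_a: np.ndarray, band_b: np.ndarray,
                  ratio_thresh: float = 1.4, min_pixels: int = 25) -> bool:
    """Flag a half peach if enough pixels show a suspicious spectral ratio
    between two illumination bands (all parameters are assumed)."""
    ratio = band_a.astype(float) / np.maximum(band_b.astype(float), 1e-6)
    suspect = ratio > ratio_thresh           # pixels resembling splinter material
    return int(suspect.sum()) >= min_pixels  # reject if the suspect area is large enough
```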
ISBN: 0819426407 (print)
An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The use of these machines in factory automation can improve productivity, increase product quality and improve competitiveness. This paper presents a discussion of recent economic and technical trends. The robotics industry now has a billion-dollar market in the U.S. and is growing. Feasibility studies are presented which also show unaudited healthy rates of return for a variety of robotic applications. Technically, the machines are faster, cheaper, more repeatable, more reliable and safer. The knowledge base of inverse kinematic and dynamic solutions and intelligent controls is increasing. More attention is being given by industry to robots, vision and motion controls. New areas of usage are emerging for service robots, remote manipulators and automated guided vehicles. However, the road from inspiration to successful application is still long and difficult, often taking decades to achieve a new product. More cooperation between government, industry and universities is needed to speed the development of intelligent robots that will benefit both industry and society.
ISBN: 0819426407 (print)
Hand-eye coordination is the coupling between vision and manipulation. Visual servoing is the term applied to hand-eye coordination in robots. In recent years, research has demonstrated that active vision - active control of camera position and camera parameters - facilitates a robot's interaction with the world. One aspect of active vision is centering an object in an image. This is known as gaze stabilization or fixation. This paper presents a new algorithm that applies target fixation to image-based visual servoing. This algorithm, called Fixation Point Servoing (FPS), uses target fixation to eliminate the need for Jacobian computation. Additionally, FPS requires only the rotation relationship between the camera head and gripper frames and does not require accurate tracking of the gripper. FPS was tested on a robotics system called ISAC, and experimental results are shown. FPS was also compared to a classical Jacobian-based technique using simulations of both algorithms.
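A minimal sketch of the fixation (gaze-stabilization) step is given below, assuming a simple proportional controller on the pixel error from the image center; the gains and interfaces are illustrative assumptions, and this is not the full FPS algorithm.

```python
# Assumed proportional gains (rad per pixel) for the pan/tilt head.
K_PAN, K_TILT = 0.002, 0.002

def fixation_step(target_px, image_size):
    """Return incremental (pan, tilt) commands that re-center the tracked
    target in the image, i.e. one step of gaze stabilization."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    ex, ey = target_px[0] - cx, target_px[1] - cy   # pixel error from center
    return -K_PAN * ex, -K_TILT * ey
```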
ISBN: 0819426407 (print)
When interacting with intelligent agents, it is vital to understand something of their intentions. Agents' intentions provide context in spoken dialog, help to define their future plans (and thus actions), and reveal information about their beliefs. We propose a method by which agents' intentions are inferred by observing their actions. Explicit communication among agents is not permitted. The joint intentions framework [1] specifies the behaviors and obligations of agents that share in a cooperative intention. Our work focuses on the creation of such joint intentions through observation and plan recognition. The plan recognition structure uses Bayes nets to reason from observations to actions, and hidden Markov models to reason from sequences of actions to intent.
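As a sketch of the action-sequence-to-intent step, the code below scores an observed action sequence under one hidden Markov model per candidate intent using the forward algorithm and returns the best-scoring intent; the intents, action alphabet, and model parameters are placeholders, not those of the paper.

```python
import numpy as np

def sequence_likelihood(obs, start, trans, emit):
    """Forward algorithm: P(observed action sequence | HMM).
    obs: list of action indices; start: (N,), trans: (N, N), emit: (N, M)."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return alpha.sum()

def infer_intent(obs, intent_models):
    """intent_models: dict mapping intent name -> (start, trans, emit).
    Returns the intent whose HMM best explains the observations."""
    scores = {name: sequence_likelihood(obs, *m) for name, m in intent_models.items()}
    return max(scores, key=scores.get)
```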
ISBN: 0819426407 (print)
This paper describes a method for expressing and manipulating position and orientation uncertainty in sensor-based robotics. The goal is to formulate a computational framework in which the effects of accumulating uncertainties, originating from internal and external sensors, are expressed as uncertainties in tool-frame positions and orientations. The described method is based on covariance matrices of position and orientation parameters; the orientation parameters used are xyz Euler angles. There are three different forms of spatial uncertainty, and uncertainty manipulation involves transformations between these forms. These transformations are done by linearisation around nominal relations. The paper presents the basic formulas for these transformations along with three calculation examples.
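The linearised transformation between uncertainty forms follows the standard first-order covariance propagation written below; the notation is assumed for illustration of the general form and is not a reproduction of the paper's specific formulas.

```latex
% First-order (linearised) propagation of a pose-parameter covariance.
% With p = (x, y, z, \alpha, \beta, \gamma) the position and xyz Euler-angle
% parameters and q = f(p) the transformed parameters, linearising around the
% nominal pose p_0 gives
C_q \approx J \, C_p \, J^{\mathsf{T}},
\qquad
J = \left. \frac{\partial f}{\partial p} \right|_{p = p_0}
```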