Multifocus fusion is the process of fusing focal information from a set of input images into one all-in-focus image. Here, a versatile multifocus fusion algorithm is presented for application-independent fusion. A focally connected region is a region or a set of regions in an input image that falls under the depth of field of the imaging system. Such regions are segmented adaptively under the predicate of focal connectivity and fused by partition synthesis. The fused image has information from all focal planes, while maintaining the visual verisimilitude of the scene. To validate the fusion performance of our method, we have compared our results with those of tiling and multiscale fusion techniques. In addition to performing a seamless fusion of the focally connected regions, our method outperforms the competing methods in overall sharpness in all our experiments. Several illustrative examples of multifocus fusion are shown, and objective comparisons are provided.
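The abstract does not spell out the focal-connectivity segmentation itself. As a rough point of reference only, a much simpler pixelwise scheme selects, at each location, the source image with the strongest local Laplacian response; the function name and the plain Laplacian focus measure below are our own illustration, not the paper's method:

```python
import numpy as np

def fuse_multifocus(images):
    """Fuse a stack of grayscale images by picking, per pixel, the source
    with the strongest Laplacian response (a crude focus measure; not the
    paper's focal-connectivity segmentation)."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    # discrete Laplacian of every image in the stack (circular borders)
    lap = (-4.0 * stack
           + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
           + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2))
    best = np.abs(lap).argmax(axis=0)      # index of the sharpest source
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```

A real method would also regularize the decision map; a bare argmax produces salt-and-pepper seams that the paper's region-based fusion is designed to avoid.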
The purpose of this research is to develop techniques that enable robots to choose and track a desired person for interaction in daily-life environments. Localizing multiple moving sounds and human faces is therefore necessary so that robots can locate a desired person. For sound source localization, we used a cross-power spectrum phase analysis (CSP) method and showed that CSP can localize sound sources using only two microphones and does not need impulse response data. An expectation-maximization (EM) algorithm was shown to enable a robot to cope with multiple moving sound sources. For face localization, we developed a method that can reliably detect several faces using the skin color classification obtained with the EM algorithm. To deal with changes in skin color caused by illumination conditions and individual variation, the robot can obtain new skin color features from faces detected by OpenCV, an open-source computer vision library, for detecting human faces. Finally, we developed a probability-based method to integrate auditory and visual information and to produce a reliable tracking path in real time. Furthermore, the developed system chose and tracked people while dealing with various background noises that are considered loud, even in daily-life environments.
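The CSP method referenced above is closely related to GCC-PHAT: the cross-power spectrum of the two microphone signals is whitened so that only phase remains, and the inverse transform peaks at the inter-microphone delay. A minimal single-pair sketch (our own illustration; the paper additionally handles multiple moving sources with EM):

```python
import numpy as np

def csp_delay(x1, x2):
    """Signed sample delay between two microphone signals via the
    cross-power spectrum phase (CSP / GCC-PHAT) method."""
    n = len(x1)
    G = np.conj(np.fft.rfft(x1)) * np.fft.rfft(x2)   # cross-power spectrum
    G /= np.abs(G) + 1e-12                           # keep phase only
    csp = np.fft.irfft(G, n)                         # peak marks the delay
    lag = int(np.argmax(csp))
    return lag if lag <= n // 2 else lag - n         # map to signed lag
```

Given the mic spacing and the speed of sound, the azimuth then follows from arcsin(c · delay / (fs · d_mic)), which is how a two-microphone array yields a bearing.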
Nanorobotic structure design and modeling require large-scale computer simulation tools and numerical algorithms in order to better understand, control and accelerate the development of multiscale nanosystems. Research on modeling and simulation of physical, chemical and biological systems at the nanoscale includes techniques such as quantum mechanics, multi-particle simulation, molecular dynamics simulation, and grain- and continuum-based models. In this paper, we present a novel approach that makes use of multiscale and multi-physics modeling coupled with virtual reality for nanorobotic prototyping systems. First, a CAD system that integrates principles of a multiscale approach to nanorobotic structure design is presented. Then, we focus on the different design levels, more specifically, the optimization of the geometric structure carried out by quantum mechanics, molecular dynamics and continuum mechanics methodologies. As an illustration of the proposed multiscale modeling concepts, we tested by simulation the dynamic characteristics of a nanorobotic parallel platform with one degree of freedom (DOF), composed of protein-based passive kinematic chains and actuated by a viral protein-based nanoactuator. Multiscale simulations proved the effectiveness and accuracy of the proposed design and modeling approaches.
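Molecular dynamics is one ingredient of the multiscale toolbox mentioned above. A minimal velocity-Verlet integration of a unit-mass particle in a Lennard-Jones well (reduced units; parameters are illustrative and unrelated to the paper's protein models) shows the basic loop:

```python
def lj_force(r):
    """Force from the Lennard-Jones potential U(r) = 4(r^-12 - r^-6),
    in reduced units (epsilon = sigma = 1): F = -dU/dr."""
    return 24.0 * (2.0 / r**13 - 1.0 / r**7)

def verlet(r0=1.5, v0=0.0, dt=0.002, steps=2000):
    """Velocity-Verlet integration of a unit-mass particle in the LJ well."""
    r, v = r0, v0
    f = lj_force(r)
    for _ in range(steps):
        r += v * dt + 0.5 * f * dt**2      # position update
        f_new = lj_force(r)
        v += 0.5 * (f + f_new) * dt        # velocity update, averaged force
        f = f_new
    return r, v
```

The symplectic integrator keeps total energy nearly constant over the run, which is the basic sanity check for any molecular dynamics code.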
ISBN: (print) 0819464821
The purpose of this paper is to describe the design, development and simulation of a real-time controller for an intelligent, vision-guided robot. The use of a creative controller that can select its own tasks is demonstrated. This creative controller uses a task control center and a dynamic database. The dynamic database stores both global environmental information and local information, including the kinematic and dynamic models of the intelligent robot. The kinematic model is very useful for position control and simulations. However, models of the dynamics of the manipulators are needed for tracking control of the robot's motions. Such models are also necessary for sizing the actuators, tuning the controller, and achieving superior performance. Simulations of various control designs are shown. Much of the model has also been used for the actual prototype Bearcat Cub mobile robot. This vision-guided robot was designed for the Intelligent Ground Vehicle Competition. A novel feature of the proposed approach is that the method is applicable to both robot arm manipulators and robot bases such as wheeled mobile robots. This generality should encourage the development of more mobile robots with manipulator capability, since both models can easily be stored in the dynamic database. The multi-task controller also permits wide application. The use of manipulators and mobile bases with high-level control is potentially useful for space exploration, certain rescue robots, defense robots, and medical robotics aids.
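The role of the dynamic model in tracking control can be sketched with a computed-torque controller for a single revolute link. The mass, length and gains below are illustrative placeholders, not the Bearcat Cub's parameters:

```python
import numpy as np

# Computed-torque (inverse-dynamics) control for one revolute link:
# tau = M(q) * (Kp*e + Kd*de) + g(q).  Parameters are illustrative.
m, l, g0 = 1.0, 0.5, 9.81
M = lambda q: m * l**2                     # joint-space inertia
grav = lambda q: m * g0 * l * np.cos(q)    # gravity torque

def regulate(q_d, dt=0.001, steps=3000, Kp=100.0, Kd=20.0):
    """Drive the link from rest at q = 0 to the setpoint q_d."""
    q, qd = 0.0, 0.0
    for _ in range(steps):
        e, de = q_d - q, -qd
        tau = M(q) * (Kp * e + Kd * de) + grav(q)   # feedback linearization
        qdd = (tau - grav(q)) / M(q)                # rigid-body plant
        qd += qdd * dt                              # semi-implicit Euler
        q += qd * dt
    return q
```

Because the controller cancels the gravity and inertia terms, the closed-loop error obeys a linear second-order equation set by Kp and Kd, which is exactly why the dynamic model, and not just the kinematics, is needed for tracking.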
ISBN: (print) 0819464821
The Intelligent Ground Vehicle Competition (IGVC) is one of three unmanned-systems student competitions founded by the Association for Unmanned Vehicle Systems International (AUVSI) in the 1990s. The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics, and mobile platform fundamentals to design and build an unmanned system. Teams from around the world focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 14 years, the competition has challenged undergraduate, graduate and Ph.D. students with real-world applications in intelligent transportation systems, the military and manufacturing automation. To date, teams from over 50 universities and colleges have participated. This paper describes some of the applications of the technologies required by this competition and discusses the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the three-day competition are highlighted. Finally, an assessment of the competition based on participant feedback is presented.
ISBN: (print) 0819464821
In this paper an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling (SAIL) daughter board, attached to the Helios board, collects attitude and heading information to be processed in order to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, a compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has proven to give very good results in its performance of a number of real-time robotic vision algorithms.
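The maneuverability limitation attributed to the tilt sensor can be illustrated by modeling an electrolytic tilt sensor as a first-order low-pass filter: fast attitude changes are reported late. The time constant below is an assumed value for illustration, not a measured characteristic of the SAIL hardware:

```python
def sensor_lag(true_angles, dt=0.01, tau=0.5):
    """First-order lag model of a tilt sensor: the measured angle trails
    the true angle with time constant tau (an assumed value here)."""
    a = dt / (tau + dt)            # discrete smoothing factor
    meas, out = 0.0, []
    for theta in true_angles:
        meas += a * (theta - meas)
        out.append(meas)
    return out
```

After a 1-rad roll step held for 0.5 s, this model reports only about 60% of the true tilt, which is why aggressive maneuvers outrun the measurement while slow level flight remains controllable.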
ISBN: (print) 0819464821
Through the Vision for Space Exploration (hereafter, the Vision) announced by George W. Bush in February 2004, NASA has been chartered to conduct progressively staged human-robotic (H/R) exploration of the Solar System. This exploration includes autonomous robotic precursors that will pave the way for a later durable H/R presence, first at the Moon, then Mars and beyond. We discuss operations architectures and integrative technologies that are expected to enable these new classes of space missions, with an emphasis on open design issues and R&D challenges.
ISBN: (print) 0819464821
We present an approach for abstracting invariant classifications of spatiotemporal patterns presented in a high-dimensionality input stream, and apply an early proof-of-concept to shift and scale invariant shape recognition. A model called Hierarchical Quilted Self-Organizing Map (HQSOM) is developed, using recurrent self-organizing maps (RSOM) arranged in a pyramidal hierarchy, attempting to mimic the parallel/hierarchical pattern of isocortical processing in the brain. The results of experiments are presented in which the algorithm learns to classify multiple shapes, invariant to shift and scale transformations, in a very small (7 x 7 pixel) field of view.
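The recurrent self-organizing map at the heart of the HQSOM can be sketched in a few lines: each unit keeps a leaky-integrated difference vector, so the best-matching unit depends on the recent input sequence rather than a single frame. The sizes, rates, and 1-D neighborhood below are illustrative choices, not the paper's settings:

```python
import numpy as np

class RSOM:
    """Minimal recurrent SOM, the building block the HQSOM stacks
    hierarchically (illustrative parameters)."""
    def __init__(self, n_units, dim, alpha=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_units, dim)) * 0.1
        self.y = np.zeros((n_units, dim))   # leaky difference vectors
        self.alpha = alpha                  # temporal leak rate

    def step(self, x, lr=0.1):
        # integrate the current difference into the recurrent state
        self.y = (1 - self.alpha) * self.y + self.alpha * (x - self.W)
        bmu = int(np.argmin(np.linalg.norm(self.y, axis=1)))
        # pull units toward the input, weighted by a 1-D neighborhood
        d = np.abs(np.arange(len(self.W)) - bmu)
        h = np.exp(-d**2 / 2.0)
        self.W += lr * h[:, None] * self.y
        return bmu

    def reset(self):                        # call between sequences
        self.y[:] = 0.0
```

Because the winner is chosen on the integrated difference rather than the raw input, repeated training on a sequence drives the winning unit's weight vector toward the sequence's elements.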
ISBN: (print) 0819464821
In this research, a new algorithm for fruit shape classification was proposed. Level set representations based on signed distance transforms were used; these are a simple, robust, rich and efficient way to represent shapes. Based on these representations, a rigid transform was adopted to align shapes within the same class, and the simplest possible criterion, the sum of squared differences, was considered. After the alignment procedure, the average shape representations can easily be derived, and shape classification was performed by the nearest neighbor method. Promising experimental results demonstrate the efficiency and accuracy of our algorithm.
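A minimal sketch of the representation and the nearest-neighbor step (our own illustration: a brute-force distance transform suitable only for toy grid sizes, with the rigid alignment stage omitted, i.e. shapes assumed pre-aligned):

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed distance transform of a binary shape mask:
    negative inside the shape, positive outside (toy sizes only)."""
    coords = np.argwhere(np.ones_like(mask, dtype=bool)).astype(float)
    shape_px = np.argwhere(mask).astype(float)
    bg_px = np.argwhere(~mask).astype(float)
    d_shape = np.sqrt(((coords[:, None] - shape_px[None]) ** 2).sum(-1)).min(1)
    d_bg = np.sqrt(((coords[:, None] - bg_px[None]) ** 2).sum(-1)).min(1)
    sd = np.where(mask.ravel(), -d_bg, d_shape)
    return sd.reshape(mask.shape)

def classify(mask, prototypes):
    """Nearest-neighbor classification by sum of squared differences
    between level-set (signed distance) representations."""
    sd = signed_distance(mask)
    ssd = {k: ((sd - p) ** 2).sum() for k, p in prototypes.items()}
    return min(ssd, key=ssd.get)
```

In practice one would use an efficient exact distance transform and align each query shape to the class averages before comparing, as the abstract describes.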
ISBN: (print) 0819464821
Three-dimensional visual recognition and measurement are important in many machine vision applications. In some cases, a stationary camera base is used and a three-dimensional model will permit the measurement of depth information from a scene. One important special case is stereo vision for human visualization or measurement. In cases in which the camera base is also in motion, a seven-dimensional model may be used. Such is the case for navigation of an autonomous mobile robot. The purpose of this paper is to provide a computational view of, and introduction to, three methods for three-dimensional vision. Models are presented for each situation, and example computations and images are presented. The significance of this work is that it shows that various methods based on three-dimensional vision may be used for solving two- and three-dimensional vision problems. We hope this work will be slightly iconoclastic but also inspirational, encouraging further research in optical engineering.
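For the stationary stereo case, depth follows from triangulation on a rectified pair: Z = f·B/d for focal length f (in pixels), baseline B, and disparity d (in pixels). The numbers below are illustrative:

```python
def stereo_depth(f_px, baseline_m, disparity_px):
    """Triangulated depth Z = f * B / d for a rectified stereo pair."""
    return f_px * baseline_m / disparity_px

# e.g. f = 800 px, baseline = 0.5 m, disparity = 100 px
# -> depth = 800 * 0.5 / 100 = 4.0 m
```

The inverse relationship between depth and disparity also explains why stereo depth accuracy degrades quadratically with range.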