Nowadays, more and more measurement applications in which sensor nodes cooperate to realize a data fusion process are entrusted to wireless sensor networks (WSNs). For data fusion to be accurate, the nodes must be referred to a common clock and synchronized among themselves. In this scenario, node synchronization has become a crucial research topic, and many sophisticated synchronization algorithms can be found in the literature. Not a few of them are specifically designed for low-cost, low-memory, and low-power architectures. Their performance is usually evaluated through analytical considerations or in simulation environments, and in some cases on ad hoc radio systems. This paper makes a first step toward an experimental characterization of synchronization protocols that also takes into account the influence factors which arise when commercial wireless communication standards such as Wi-Fi, Bluetooth (BT), and ZigBee are adopted.
The following two books are reviewed: Identity & Security. A Common Architecture & Framework for SOA and Network Convergence (Radhakrishnan, R.; 2007) and Network Routing: Algorithms, Protocols, and Architectures (Medhi, D. and Ramasamy, K.; 2007).
For land vehicle applications, in order to obtain an accurate positioning solution in Global Positioning System (GPS)-denied environments, low-cost Inertial Measurement Units (IMUs) can be integrated with GPS. Kalman Filtering (KF) is usually used to fuse the position and velocity information obtained from both systems. It benefits from the availability of the dynamic mathematical model of the Inertial Navigation System (INS) position, velocity, and attitude errors. However, KF requires stochastic models of the inertial sensor errors and a priori information about the covariances of both INS and GPS data. With a low-cost INS, it is usually difficult to come up with sufficiently accurate stochastic models for each inertial sensor error, which may lead to large position errors. Moreover, nonlinear and non-stationary INS errors become very serious when low-cost inertial sensors are utilized. Alternative INS/GPS integration methods, such as Neural Networks (NNs), have received more attention. The main advantage of an NN over KF is that it can solve nonlinear problems, mapping input to output data without relying on a priori information. However, NN-based methods do not benefit from the knowledge of the dynamic model of INS errors and are in general computationally expensive. This thesis augments the KF with an NN in order to realize the benefits of both techniques and to improve the overall positioning accuracy. The KF benefits from the INS error model and is capable of removing part of the INS errors. In addition, a Radial Basis Function Neural Network (RBFNN) is used as the NN module to model and predict the residual stochastic and nonlinear parts of the INS errors. The ultimate objective of this thesis is to obtain a consistent level of accuracy over relatively long GPS outages (60 seconds) using a low-cost MEMS-based INS integrated with GPS. Two different architectures of the augmented solution were built: the Position Update Architecture (PUA) and the Velocity Update Architecture (VUA).
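The RBFNN module described above can be sketched in a few lines. The following is a hypothetical, illustrative implementation only: the Gaussian centres, widths, and the synthetic residual-error curve are assumptions, not the thesis's calibrated values. It shows the core idea of fitting a radial basis function network by linear least squares to predict the residual INS position error over a 60-second GPS outage.

```python
import numpy as np

def rbf_design_matrix(X, centres, width):
    """Gaussian RBF activations for each (sample, centre) pair."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbfnn(X, y, centres, width):
    """Solve the output-layer weights by linear least squares."""
    Phi = rbf_design_matrix(X, centres, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbfnn(X, centres, width, w):
    return rbf_design_matrix(X, centres, width) @ w

# Synthetic residual-error curve over a 60 s GPS outage (illustration only):
# a quadratic drift in metres, standing in for the KF's unremoved error.
t = np.linspace(0.0, 60.0, 121)[:, None]
residual = 0.002 * t[:, 0] ** 2

centres = np.linspace(0.0, 60.0, 10)[:, None]   # assumed centre placement
w = train_rbfnn(t, residual, centres, width=8.0)
pred = predict_rbfnn(t, centres, width=8.0, w=w)
max_err = float(np.max(np.abs(pred - residual)))  # maximum fit error, metres
print(max_err)
```

In the actual PUA/VUA architectures the network input would be the INS state (and time since outage), and the target would be the KF residual logged while GPS is still available.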
In this work, we present a hybrid multisensor system equipped with sensor fusion architectures for continuous gas concentration estimation in a gas mixture scenario. Complex gas mixtures pose significant issues for the quantification capability of room-temperature operating sensors due to the often poor stability and selectivity of this class of devices. In this work, we show how these problems can be mitigated by the use of pattern recognition algorithms. Here, the use of ad-hoc sensor fusion algorithms based on neural networks and support vector machines, together with an array of heterogeneous, room-temperature operating sensors, is investigated to enhance array performance. Results obtained by different architectures for individual analyte quantification in NO2-NH3-humid air mixtures are presented and discussed. (c) 2007 Elsevier B.V. All rights reserved.
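The quantification step can be illustrated with a minimal kernel regressor, a close cousin of the support vector machines used in the paper. Everything below is a synthetic sketch: the four "sensor responses", the kernel width, and the ridge parameter are assumptions chosen only to show how a nonlinear regressor maps array responses to one analyte's concentration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# Synthetic calibration set: 4 heterogeneous sensors whose responses are
# different nonlinear functions of the (hypothetical) NO2 concentration.
c_true = rng.uniform(0.0, 1.0, size=80)                      # ppm, illustrative
X = np.stack([np.sqrt(c_true), c_true, c_true ** 2,
              np.log1p(c_true)], axis=1)
X += 0.01 * rng.standard_normal(X.shape)                     # sensor noise

# Kernel ridge fit (stand-in for SVM regression): solve (K + lam*I) a = c.
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), c_true)

def predict(Xq):
    """Estimate concentration from a batch of array responses."""
    return gaussian_kernel(Xq, X) @ alpha

rmse = float(np.sqrt(np.mean((predict(X) - c_true) ** 2)))   # calibration error
print(rmse)
```

In the paper's setting one such regressor (or one SVM/NN output) would be trained per analyte, so the NO2 and NH3 channels are quantified independently from the same array response.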
Signal-level image fusion has been the focus of considerable research attention in recent years, with a plethora of algorithms proposed using a host of image processing and information fusion techniques. Yet the optimal information fusion strategy, or the spectral decomposition that should precede it, cannot be defined a priori for arbitrary multi-sensor data. It could be learned by evaluating fusion algorithms either subjectively or through the small number of available objective metrics on a large set of relevant sample data. This is not practical, however, and is limited in that it provides no guarantee of optimal performance should realistic input conditions differ from the sample data. This paper proposes and examines the viability of a powerful framework for objectively adaptive image fusion that explicitly optimises fusion performance for a broad range of input conditions. The idea is to employ the concepts used in objective image fusion evaluation to optimally adapt the fusion process to the input conditions. Specific focus is on fusion for display, which has broad appeal in a wide range of fusion applications such as night vision, avionics, and medical imaging. By integrating objective fusion metrics shown to be subjectively relevant into conventional fusion algorithms, the framework is used to adapt fusion parameters to achieve an optimal fused display. The results show that the proposed framework achieves a considerable improvement in both the level and robustness of fusion performance on a wide array of multi-sensor images and image sequences. (C) 2005 Elsevier B.V. All rights reserved.
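The adaptive loop can be sketched under strong simplifying assumptions: fuse two images with a single global weight w, score each candidate fused image with a crude gradient-preservation metric (a toy stand-in for the subjectively validated metrics the paper integrates), and keep the w that maximises the score. The metric and images below are illustrative only.

```python
import numpy as np

def grad_energy_score(F, A, B):
    """Toy objective metric: how much of each input's gradient energy
    survives in the fused image F (higher is better)."""
    gA, gB, gF = (np.abs(np.diff(I, axis=0))[:, :-1] +
                  np.abs(np.diff(I, axis=1))[:-1, :]
                  for I in (A, B, F))
    return float(np.minimum(gF, gA).sum() + np.minimum(gF, gB).sum())

rng = np.random.default_rng(1)
A = rng.random((32, 32))      # e.g. a visible-band frame (synthetic)
B = rng.random((32, 32))      # e.g. an infrared frame (synthetic)

# Adapt the fusion parameter: evaluate the metric over a grid of weights
# and select the weight with the best objective score.
weights = np.linspace(0.0, 1.0, 21)
scores = [grad_energy_score(w * A + (1 - w) * B, A, B) for w in weights]
w_best = float(weights[int(np.argmax(scores))])
print(w_best)
```

A real instance of the framework would replace the single global weight with per-region or per-subband fusion parameters and the toy score with a subjectively relevant metric, but the optimise-the-metric structure is the same.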
ISBN (print): 081946693X
The proceedings contain 27 papers. The topics discussed include: a novel method to evaluate the performance of pan-sharpening algorithms; signal-to-noise ratio for a cross-sensor fusion approach; fusion and kernel type selection in adaptive image retrieval; merging infrared and color visible images with a contrast enhanced fusion method; multi-sensor detection and fusion technique; novel classification fusion techniques for military target classification; a full-scale prototype multisensor system for fire detection and situational awareness; fusion of disparate information sources in a hybrid decision-support architecture; selection of fusion operations using rank-score diversity for robot mapping and localization; maximum likelihood ensemble filter applied to multisensory systems; assessing the value of information in a fuzzy cognitive map; and multisensor fusion of images for target identification.
ISBN (print): 9780819466938
A multi-sensor detection and fusion technology is described in this paper. The system consists of inputs from three sensors: infrared, Doppler motion, and stereo video. This choice of sensors is designed to give high reliability: infrared and Doppler provide detection ability at night, while stereo video has the ability to analyze depth and range information. The combination of these sensors has the ability to provide a high probability of detection and a very low false alarm rate. The technique consists of three processing parts, corresponding to each sensor's data, and a fusion module, which makes the final decision based on the inputs from the three parts. The signal processing and detection algorithms process the inputs from each sensor and provide specific information to the fusion module. The fusion module is based on Bayesian belief propagation theory. It takes the processed inputs from all the sensor modules and provides a final decision on the presence or absence of objects, as well as their reliability, based on the iterative belief propagation algorithm operating on decision graphs. A prototype system was built using the technique to study the feasibility of intrusion detection for NASA's launch danger zone protection. The system verified the potential of the proposed algorithms and proved the feasibility of a high probability of detection and low false alarm rates compared to many existing techniques.
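The core of such decision-level fusion can be illustrated with a naive-Bayes combination of the three sensor outputs, a simplified stand-in for the paper's iterative belief propagation on decision graphs. The likelihood values and prior below are made up for illustration; they are not measurements from the prototype.

```python
# Hedged sketch: combine three conditionally independent sensor readings
# (IR, Doppler, stereo) into a posterior probability of an intruder.

def fuse_bayes(prior, likelihoods):
    """Posterior P(object | all readings), assuming the sensors are
    conditionally independent given the true state (naive Bayes)."""
    p_obj = prior
    p_none = 1.0 - prior
    for l_given_obj, l_given_none in likelihoods:
        p_obj *= l_given_obj      # P(reading | object present)
        p_none *= l_given_none    # P(reading | no object)
    return p_obj / (p_obj + p_none)

# (P(reading | object), P(reading | no object)) for IR, Doppler, stereo
# — illustrative numbers only.
likelihoods = [(0.9, 0.1), (0.8, 0.3), (0.7, 0.2)]
posterior = fuse_bayes(prior=0.05, likelihoods=likelihoods)
print(round(posterior, 3))  # → 0.816
```

Even with a low 5% prior, three weakly agreeing sensors push the posterior above 0.8, which is why the combination can keep the false alarm rate low while retaining high detection probability; belief propagation generalises this to graphs where the sensor decisions are not independent.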
ISBN (print): 9780819466938
We determined the signal-to-noise ratios for an image fusion approach that is suitable for application to systems with disparate sensors. We found that reconstruction error was present even when the forward and reverse transforms used for image fusion give perfect reconstruction.
ISBN (print): 9780819466938
Multi-sensor platforms are widely used in surveillance video systems for both military and civilian applications. The complementary nature of different types of sensors (e.g. EO and IR sensors) makes it possible to observe the scene under almost any condition (day/night/fog/smoke). In this paper, we propose an innovative EO/IR sensor registration and fusion algorithm which runs in real time on a portable computing unit with a head-mounted display. The EO/IR sensor suite is mounted on a helmet for a dismounted soldier, and the fused scene is shown in the goggle display after processing on the portable computing unit. The linear homography transformation between images from the two sensors is pre-computed for the mid-to-far scene, which reduces the computational cost of the online calibration of the sensors. The system is implemented in highly optimized C++ code, with MMX/SSE instructions, and performs registration in real time. The experimental results on real captured video show that the system works very well both in speed and in performance.
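The pre-computed registration step amounts to applying a fixed 3x3 planar homography to map IR pixel coordinates into the EO frame. The matrix H below is illustrative, not the paper's calibrated one; the sketch only shows the mechanics, including the homogeneous divide that makes the mapping projective.

```python
import numpy as np

def apply_homography(H, pts):
    """Map an Nx2 array of pixel coordinates through a 3x3 homography H
    (convert to homogeneous coordinates, multiply, divide by w)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Example H (assumed): slight scale, rotation, and offset between sensors.
H = np.array([[ 1.02, 0.01,  5.0],
              [-0.01, 1.02, -3.0],
              [ 0.00, 0.00,  1.0]])

ir_pts = np.array([[100.0, 100.0],
                   [200.0, 150.0]])
eo_pts = apply_homography(H, ir_pts)
print(eo_pts[0])  # → [108.  98.]
```

Because the scene is mid-to-far, the planar-homography approximation holds well enough that H can be measured once offline, which is exactly what lets the paper skip per-frame calibration and hit real-time rates on a portable unit.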
ISBN (print): 9780819466938
Multisensor data fusion is an emerging technology applied to defense and non-defense applications. In this paper, an image fusion algorithm using different texture parameters is proposed to identify long-range targets. The method uses a semi-supervised approach for detecting a single target in the input images. The procedure consists of three steps: feature extraction, feature-level fusion, and sensor-level fusion. In this study, two methods of texture feature extraction, using the co-occurrence matrix and the run-length matrix, are considered. Texture parameters are calculated at each pixel of the selected training image, and target and non-target pixels are identified manually. Some of the texture features calculated at the target position differ from those in the background. Discriminant analysis is used to perform feature-level fusion on the training image, which classifies target and non-target pixels. By applying the discriminant function to the feature space of textural parameters, a new image is created; the maxima of this image correspond to target points. The same discriminant function can be applied to the other images for detecting the trained target regions. Sensor-level fusion combines the images obtained from feature-level fusion of visual and IR images. The method was first tested with synthetically generated images and then with real images. Results are obtained using both the co-occurrence and run-length methods of texture feature extraction for target identification.
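The co-occurrence texture step can be sketched as follows. This is a minimal, assumption-laden illustration: one offset, a tiny quantized patch, and only the contrast feature, whereas the paper computes several co-occurrence and run-length features per pixel before the discriminant-analysis fusion.

```python
import numpy as np

def glcm(patch, levels, dx=1, dy=0):
    """Normalised grey-level co-occurrence matrix for pixel pairs at
    offset (dx, dy) in an integer-quantized patch."""
    M = np.zeros((levels, levels))
    h, w = patch.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[patch[y, x], patch[y + dy, x + dx]] += 1
    return M / M.sum()

def contrast(M):
    """GLCM contrast feature: sum over (i - j)^2 * P(i, j)."""
    i, j = np.indices(M.shape)
    return float(((i - j) ** 2 * M).sum())

# A flat patch vs. a striped patch: texture shows up as higher contrast.
flat = np.zeros((8, 8), dtype=int)
stripes = np.tile(np.array([0, 3], dtype=int), (8, 4))  # 0,3,0,3,... rows
c_flat = contrast(glcm(flat, levels=4))
c_str = contrast(glcm(stripes, levels=4))
print(c_flat, c_str)  # → 0.0 9.0
```

In the paper's pipeline, features like this one, computed in a window around every pixel, form the feature space on which the discriminant function separates target from background.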