ISBN (Print): 9781538663844
Distributed processing and control are critical to supporting the distributed intelligence and autonomy of multi-degree-of-freedom motion systems. Measurement and fusion of spatiotemporal physical quantities imply data-intensive computing and spatially distributed computing resources to enable control and processing of large data sets from image and inertial sensors. We examine distributed, asynchronous processing nodes that process information independently, deriving partial solutions. Each individual agent contains multiple sensing-and-processing nodes. Each node comprises solid-state or MEMS multi-degree-of-freedom sensors with ASICs that process and fuse data. On-node computing supports distributed processing. Adaptive bottom-up organization ensures data aggregation and data management, operating on sub-samples or hashed subsets of large source datasets. Cooperative distributed processing is essential in centralized, decentralized, and behavioral coordination. In a centralized organization, a central processor may not ensure adequacy. A network of semi-autonomous sensors with on-device processing may interact to solve specific tasks and validate solutions. Problem allocation, partitioning, coordination, and other tasks are implemented using software- and hardware-supported algorithms and protocols. This paper contributes to the design of a next generation of systems with distributed multi-node processing capabilities.
ISBN (Print): 9781538626344; 9781538626337
As machine learning algorithms become more and more accurate in image processing tasks, their computational complexity is often treated as less important because they can be run on ever more powerful hardware. In this work, we instead consider the computational complexity of a machine learning algorithm's training and classification phases as the major criterion. The main focus is the Principal Component Analysis algorithm, which is examined, whose drawbacks are pointed out and suppressed by the proposed combination with the F-transform technique. We show that the training phase of such a combination is very fast, because both the PCA and F-transform algorithms reduce dimensionality. In the designed benchmark, we show that the success rate of the fast hybrid algorithm matches that of the original PCA, owing to the F-transform's ability to capture spatial information and to reduce noise and distortion in an image. Finally, we demonstrate that PCA+FT is faster and can achieve a higher success rate than a standard Convolutional Neural Network; although it is slightly less accurate than a Capsule Neural Network on the chosen dataset, its training phase is 100000x faster and its classification is 9x faster.
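The PCA+FT pipeline described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the grid size, component count, and random data are arbitrary assumptions. The direct F-transform compresses each image to a small grid of membership-weighted averages over triangular fuzzy partitions; PCA then reduces the dimensionality further.

```python
import numpy as np

def f_transform(img, n):
    """Direct F-transform of a 2-D image with n x n triangular basis
    functions: each component is a membership-weighted average, so the
    image is compressed to an n x n grid before PCA sees it."""
    h, w = img.shape
    def basis(size, k):
        nodes = np.linspace(0, size - 1, k)
        step = nodes[1] - nodes[0]
        x = np.arange(size)
        # shape (k, size): membership of every pixel in every fuzzy set
        return np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / step)
    A, B = basis(h, n), basis(w, n)
    num = A @ img @ B.T
    den = A.sum(axis=1)[:, None] * B.sum(axis=1)[None, :]
    return num / den

def pca_fit(X, k):
    """PCA via SVD on mean-centred rows; returns mean and top-k components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

# hypothetical usage: 20 random 32x32 "images" reduced twice
imgs = np.random.default_rng(0).random((20, 32, 32))
ft = np.array([f_transform(im, 8).ravel() for im in imgs])  # 1024 -> 64 dims
mu, comps = pca_fit(ft, 10)                                 # 64 -> 10 dims
features = (ft - mu) @ comps.T
print(features.shape)
```

The speed-up the paper reports comes from exactly this cascade: PCA runs on 64-dimensional F-transform components rather than 1024 raw pixels.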
ISBN (Print): 9781728106489
Aerial targets such as missiles and aircraft at far distances are projected onto the image plane as point or small targets in infrared and visible video. These targets lack a priori information about target dynamics, shape, and size. Detection and tracking of such targets is a reportedly challenging task; hence, point and small-size target detection algorithms are the focus of long-range detection and tracking systems. Generally, pre-processing is carried out on the input frame to predict the background and consequently enhance the target signature. In some scenarios, post-processing algorithms are also required to reduce false alarms. In this paper, we propose an efficient and innovative scheme for real-time implementation of a point or small-size target detection algorithm on a PowerPC Single-Board Computer (SBC), exploiting the parallel computing features of the AltiVec vector processing unit to achieve real-time processing. Results demonstrate real-time processing of video, with hardware results matching the simulation results.
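The pre-processing chain the abstract outlines (background prediction, target-signature enhancement, thresholding) can be sketched in plain scalar form. The window size and threshold factor below are hypothetical tuning parameters; the paper's contribution is vectorising this kind of sliding-window kernel with AltiVec, not the Python itself.

```python
import numpy as np

def detect_point_targets(frame, win=5, k=4.0):
    """Predict the background with a local mean filter, then threshold
    the residual; win and k are assumed values, not taken from the paper."""
    pad = win // 2
    padded = np.pad(frame, pad, mode='edge')
    # sliding-window mean as the predicted background
    background = np.zeros_like(frame, dtype=float)
    for dy in range(win):
        for dx in range(win):
            background += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    background /= win * win
    residual = frame - background           # enhanced target signature
    thresh = residual.mean() + k * residual.std()
    return np.argwhere(residual > thresh)   # candidate target pixels

# synthetic frame: flat background plus one bright point target
frame = np.full((64, 64), 10.0)
frame[20, 30] = 200.0
hits = detect_point_targets(frame)
print(hits)
```

On AltiVec hardware, the inner accumulation loop maps naturally onto 128-bit SIMD adds over rows of the frame.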
Modern vision systems include a television image processing system. Improving the technical characteristics and expanding the capabilities of modern devices for recording television images makes it necessary to develop highly efficient algorithms for processing television images. Such systems process large amounts of information; therefore, the requirements for reliable processing of large amounts of data within limited time intervals are constantly increasing. The rapid development of high-performance computing systems has recently created the conditions for expanding the range of tasks that can be solved with the help of vision systems. Research on improving the efficiency of methods and algorithms for digital image processing is therefore significant. Currently, the processing of television images is divided into several steps. The first step is the formation of a digital representation of the image (discretization, quantization, and input of the image into computer memory). The second step is the pre-processing of images (restoration and filtering). The third step is the formation of a graphic preparation of the image (segmentation and contour detection). One of the most complex and relevant scientific and technical tasks is the contour detection of objects in television images against a background with additive noise. The purpose of this study is the development of a contour detection method for images with impulse noise using mathematical modeling.
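As an illustration of the processing chain this abstract describes (impulse-noise suppression followed by contour detection), here is a generic median-filter-plus-Sobel sketch. It is a representative stand-in under assumed parameters, not the method the study actually develops.

```python
import numpy as np

def median_filter(img, win=3):
    """3x3 median filter: suppresses salt-and-pepper (impulse) noise
    before contour detection."""
    pad = win // 2
    p = np.pad(img, pad, mode='edge')
    stack = [p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(win) for dx in range(win)]
    return np.median(np.stack(stack), axis=0)

def sobel_contours(img, thresh=100.0):
    """Gradient-magnitude contour map via Sobel operators; the threshold
    is a hypothetical level, not one taken from the study."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode='edge')
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            w = pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            gx += kx[dy, dx] * w
            gy += ky[dy, dx] * w
    return np.hypot(gx, gy) > thresh

# synthetic scene: bright square plus isolated impulse noise on a grid
img = np.zeros((32, 32)); img[8:24, 8:24] = 200.0
noisy = img.copy()
noisy[::7, ::6] = 255.0                     # deterministic impulses
edges = sobel_contours(median_filter(noisy))
print(edges.sum())
```

Because each impulse is isolated, the median filter removes it entirely, and the Sobel stage fires only on the square's true contour.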
This paper presents three fully convolutional neural network architectures which perform change detection using a pair of coregistered images. Most notably, we propose two Siamese extensions of fully convolutional networks which use heuristics about the current problem to achieve the best results in our tests on two open change detection datasets, using both RGB and multispectral images. We show that our system is able to learn from scratch using annotated change detection images. Our architectures achieve better performance than previously proposed methods, while being at least 500 times faster than related systems. This work is a step towards efficient processing of data from large scale Earth observation systems such as Copernicus or Landsat.
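The Siamese principle the paper builds on, one shared set of weights encoding both acquisition dates, with change inferred from the feature difference, can be illustrated with a single linear "layer". This toy uses a fixed Laplacian kernel and a hand-set threshold in place of learned FCN weights, so it only demonstrates the weight-sharing idea, not the paper's architectures.

```python
import numpy as np

def conv2d(x, k):
    """Valid-mode 2-D cross-correlation: one linear layer, enough
    to sketch the shared encoder."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# one shared kernel stands in for the shared FCN weights
shared_kernel = np.array([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]])

rng = np.random.default_rng(1)
img_t0 = rng.random((16, 16))          # acquisition at time t0
img_t1 = img_t0.copy()
img_t1[4:8, 4:8] += 1.0                # simulated change between epochs

# Siamese principle: identical weights encode both epochs; change is
# decided from the feature difference, not from the raw pixels
f0 = conv2d(img_t0, shared_kernel)
f1 = conv2d(img_t1, shared_kernel)
change_map = np.abs(f1 - f0) > 0.5
print(change_map.sum())
```

In the paper, the comparison of the two feature streams (by concatenation or difference) is itself learned end-to-end rather than thresholded by hand.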
ISBN (Digital): 9781538624401
ISBN (Print): 9781538624418
Embedded multimedia devices are designed to perform computationally intensive DSP applications such as image processing and a wide range of multimedia tasks. To improve the performance of image processing systems, processing algorithms should be implemented on hardware platforms. For real-time applications, reconfigurable hardware in the form of Field Programmable Gate Arrays (FPGAs) is used, which can provide high performance with low latency. The inherent reprogrammability of FPGAs gives them the flexibility of software while retaining the performance advantages of an application-specific solution. As image sizes grow larger in real-time applications, more of the processing must move into hardware, with less done in software. This paper proposes a fully pipelined architecture implementing a skeletonization algorithm for 2-D gray-scale images. A 3x3 windowing operator is used for analyzing the pixel values. The proposed architecture is tested on an image size of 8x8, but the approach can be used for images of any size, as long as the FPGA memory will hold them. The implementation was carried out on a Xilinx Virtex-5 board.
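Skeletonization driven purely by a 3x3 window can be illustrated with the classic Zhang-Suen thinning algorithm; the abstract does not name the specific algorithm implemented, so this is a representative stand-in. Because every decision depends only on a 3x3 neighbourhood, each subiteration maps naturally onto a pipelined row-buffer architecture on an FPGA.

```python
import numpy as np

def zhang_suen(img):
    """Zhang-Suen thinning: every pass inspects only a 3x3 window per
    pixel, which is what makes this family of algorithms a natural fit
    for a fully pipelined hardware implementation."""
    img = (img > 0).astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] == 0:
                        continue
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1],
                         img[y+1, x+1], img[y+1, x], img[y+1, x-1],
                         img[y, x-1], img[y-1, x-1]]          # P2..P9
                    b = sum(p)                                # neighbour count
                    # a = number of 0->1 transitions around the ring
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0 and (p[0]*p[2]*p[4] or p[2]*p[4]*p[6]):
                        continue
                    if step == 1 and (p[0]*p[2]*p[6] or p[0]*p[4]*p[6]):
                        continue
                    to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img

# 8x8 test image matching the paper's tested size: a solid 4-wide bar
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 1:7] = 1
skel = zhang_suen(img)
print(skel)
```

The software loop visits pixels sequentially; the paper's pipelined datapath evaluates the same 3x3 predicate once per clock as pixels stream through line buffers.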
ISBN (Print): 9781538677100
Robots have become an integral part of industry as well as domestic use, and the future of mankind can hardly be imagined without them. Artificial Intelligence is the core of any autonomous robot, which should be capable of recognizing an object or a human through the vision provided to it. Representations of an object or a human are complex functions, and various algorithms have been implemented to analyze these complex functions. This review paper is a survey of different conventional techniques used to detect an object in an image. The paper commences with the basics of image processing algorithms for object detection and summarizes the findings based on the research and experimental results of different researchers.
We propose a GPU-based approach to accelerate filtered backprojection (FBP)-type computed tomography (CT) algorithms by adaptively reconstructing only relevant regions of the object at full resolution. In industrial applications, the object's insensitivity to radiation, as well as its lack of internal motion, allows for high-resolution scans. The large amounts of recorded data, however, pose serious challenges, as the computational cost of CT reconstruction scales quartically with resolution. To ensure real-time reconstruction (i.e., processing faster than projection acquisition) for high-resolution scans, our method skips below-threshold voxels and monotonous regions inside the object. Our approach is able to speed up the reconstruction process by a factor of up to 13 while simultaneously reducing memory requirements by a factor of up to 71.
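The core speed-up idea, backprojecting only into voxels that a coarse pass has flagged as relevant, can be sketched in 2-D. This is a simplified, unfiltered CPU version with an assumed circular mask, not the paper's GPU FBP implementation.

```python
import numpy as np

def backproject(sino, angles, mask):
    """Unfiltered backprojection over only the pixels where mask is True;
    skipping masked-out (below-threshold / monotonous) voxels is the idea
    illustrated here in simplified 2-D form."""
    n = mask.shape[0]
    recon = np.zeros((n, n))
    c = (n - 1) / 2.0
    ys, xs = np.nonzero(mask)                  # only the flagged voxels
    for sino_row, theta in zip(sino, angles):
        # detector coordinate of each kept pixel for this view
        t = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + c
        idx = np.clip(np.round(t).astype(int), 0, n - 1)
        recon[ys, xs] += sino_row[idx]
    return recon / len(angles)

# toy setup: 64x64 grid where a coarse pass flagged only a central disc
n = 64
yy, xx = np.mgrid[0:n, 0:n]
mask = (xx - 31.5) ** 2 + (yy - 31.5) ** 2 < 20 ** 2
angles = np.linspace(0, np.pi, 90, endpoint=False)
sino = np.ones((90, n))                        # dummy projections
recon = backproject(sino, angles, mask)
print(recon[32, 32], recon[0, 0])              # skipped voxel stays zero
```

The saving is proportional to the fraction of voxels skipped, since the per-view inner loop touches only the `nonzero(mask)` coordinates.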
A new method for personal identification based on geometric invariant moments is presented. First, image processing including binarization and segmentation of the hand silhouette is applied; then translation and scale normalization algorithms are applied to the palm and finger images. After that, the geometric moment characteristics of the image are extracted, and a feature vector composed of the seven moment invariants is obtained. Finally, a support vector machine is trained on 100 images from the image database, and 15 test images were selected randomly to verify the validity and feasibility of the algorithms. Experimental results indicate that the accuracy of hand-shape identification is 93%. The new method of extracting the geometric moments of the hand shape as a characteristic matrix is easy to realize, offers high utility and accuracy, and solves the problem of translation, rotation, and scaling during image acquisition without positioning aids, which makes it especially suitable for the development of portable embedded devices.
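The seven moment invariants the abstract refers to are the classical Hu moments. A sketch of their computation follows, together with the translation invariance they buy; the binary "silhouette" and the shift are arbitrary test data, not images from the paper's database.

```python
import numpy as np

def hu_moments(img):
    """Seven Hu invariant moments of a 2-D (e.g. binary silhouette) image.
    Central moments give translation invariance; the eta normalisation
    gives scale invariance, matching the normalisation step the paper
    describes before feature extraction."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):                              # central moment
        return (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
    def eta(p, q):                             # scale-normalised moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
        + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ])

# translation invariance check on a binary "silhouette"
a = np.zeros((40, 40)); a[5:15, 5:20] = 1
b = np.zeros((40, 40)); b[20:30, 15:30] = 1   # same shape, shifted
print(np.allclose(hu_moments(a), hu_moments(b)))
```

The resulting 7-vector is what would be fed to the SVM classifier described in the abstract.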
ISBN (Print): 9781538610466
Cellular Automata (CA) theory is a discrete model that represents the state of each cell with a value from a finite set of possible values, evolving in time according to a pre-defined set of transition rules. CA have been applied to a number of image processing tasks, such as convex hull detection and image denoising, but mostly under the limitation of restricting the input to binary images. In general, a gray-scale image may be converted into a number of different binary images, which are finally recombined after CA operations on each of them individually. We have developed a multinomial-regression-based weighted summation method to recombine binary images for better performance of CA-based image processing algorithms. The recombination algorithm is tested on the specific case of removing salt-and-pepper noise, against standard benchmark algorithms such as the median filter, for various images and noise levels. The results indicate several interesting invariances in the application of the CA, such as to the particular noise realization and to the choice of sub-sampling of pixels used to determine the recombination weights. Additionally, simpler algorithms for weight optimization that seek local minima appear to work as effectively as those that seek global minima, such as Simulated Annealing.
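The decompose-process-recombine scheme can be sketched with a simple majority-vote CA and uniform recombination weights. This is an assumed stand-in: the paper's method learns the recombination weights by multinomial regression, and its CA rule may differ from the majority rule used here.

```python
import numpy as np

def majority_ca_step(b):
    """One CA step on a binary image: each cell takes the majority value
    of its 3x3 neighbourhood, a simple impulse-noise-removing rule."""
    p = np.pad(b, 1, mode='edge')
    s = sum(p[dy:dy + b.shape[0], dx:dx + b.shape[1]]
            for dy in range(3) for dx in range(3))
    return (s >= 5).astype(np.uint8)

def ca_denoise(gray, levels=32):
    """Threshold-decompose a gray image into binary planes, run the CA on
    each plane, then recombine; uniform weights stand in for the learned
    multinomial-regression weights of the paper."""
    thresholds = np.linspace(0, 255, levels + 1)[1:-1]
    planes = [majority_ca_step((gray >= t).astype(np.uint8)) for t in thresholds]
    return np.sum(planes, axis=0) * (255.0 / (levels - 1))

# gray ramp corrupted with salt noise
rng = np.random.default_rng(0)
gray = np.tile(np.linspace(0, 255, 64), (64, 1))
noisy = gray.copy()
noisy[rng.random(gray.shape) < 0.1] = 255.0
out = ca_denoise(noisy)
err_noisy = np.abs(noisy - gray).mean()
err_out = np.abs(out - gray).mean()
print(err_out < err_noisy)
```

Isolated salt pixels appear as lone 1-cells in the upper binary planes, so the majority rule deletes them plane by plane before recombination.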