ISBN: (Print) 9781728102474
Hyperspectral image (HSI) analysis refers to the processes used to identify and classify objects photographed using equipment that can image photons from a broad range of the electromagnetic spectrum. Downlinking such large images from space on radiation-resistant platforms with limited on-board computing power takes a large amount of time, memory, and other mission-critical resources. Performing such analysis in space before downlinking all images will save these resources by enabling a subset of images of interest to be downloaded rather than the entire set. The goal of this study is to benchmark and evaluate HSI-classification methods which incorporate deep learning on embedded platforms with limited computing resources. Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and Convolutional Neural Network (CNN) are the classification methods used in this study. These algorithms were executed on a desktop PC and two embedded platforms: the ODROID-C2 and the Raspberry Pi 3B. Accuracy, run-time, and memory benchmarks determined the optimal model for each platform. Based on results gathered in this research, CNN classification is recommended for the desktop PC due to its high accuracy of 97%. MLP classification is recommended for the embedded platforms under study, as it showcased the shortest runtime and second-highest accuracy.
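As a rough illustration of the pixel-wise spectral classification this study benchmarks, the sketch below uses a nearest-centroid classifier — a lightweight stand-in, not the paper's SVM/MLP/CNN implementations. The four-band spectra and class names are invented for illustration.

```python
# Minimal sketch of pixel-wise hyperspectral classification.
# Nearest-centroid is used as a stand-in for the benchmarked models;
# band count and class spectra are illustrative only.

def fit_centroids(pixels, labels):
    """Average the spectra of each class to form one centroid per class."""
    sums, counts = {}, {}
    for spectrum, label in zip(pixels, labels):
        acc = sums.setdefault(label, [0.0] * len(spectrum))
        for i, v in enumerate(spectrum):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {c: [v / counts[c] for v in acc] for c, acc in sums.items()}

def classify(spectrum, centroids):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist(spectrum, centroids[c]))

# Toy 4-band spectra for two hypothetical land-cover classes.
train = [([0.1, 0.2, 0.8, 0.9], "vegetation"),
         ([0.2, 0.1, 0.9, 0.8], "vegetation"),
         ([0.7, 0.6, 0.2, 0.1], "soil"),
         ([0.8, 0.7, 0.1, 0.2], "soil")]
centroids = fit_centroids([p for p, _ in train], [l for _, l in train])
print(classify([0.15, 0.15, 0.85, 0.85], centroids))  # → vegetation
```

On an embedded platform such a centroid model trades accuracy for the minimal memory footprint the study is concerned with.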
ISBN: (Print) 9781509045594
Segmentation plays a vital role in digital media processing, pattern recognition, and computer vision. In the last four decades, extensive research has been done and a number of algorithms have been published in the literature, each with its own merits and demerits. This paper makes a comparative analysis of the most widely known segmentation methods, namely K-Means, Region Growing, Mean Shift, and Watershed segmentation, on videos from different categories. The contribution of the paper is twofold. Conventionally, the value of K in K-Means segmentation is not known a priori and must be supplied as input; to avoid manual input by the user, region-growing segmentation is applied first, and the prominent regions it outputs are used as input for K-Means segmentation. The performance of the segmentation algorithms is determined using a set of Quality Metric (QM) parameters. Segmentation is done on RGB color video from the Entertainment, Sports, and Natural Scenery categories. The results show the most suitable segmentation algorithm for each category of video. The algorithms are implemented in C on Ubuntu 16.04 LTS.
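The pipeline idea — region growing supplying K to K-Means — can be sketched with a toy 1-D K-Means over grayscale intensities. The region count is hardcoded here where the region-growing stage would normally provide it; all values are illustrative.

```python
# Toy sketch: K for K-Means comes from a region-growing stage
# (hardcoded below). Plain Lloyd's algorithm on scalar intensities.

def kmeans_1d(values, k, iters=20):
    """1-D K-Means: seed centers from sorted quantiles, then iterate."""
    vals = sorted(values)
    centers = [vals[i * len(vals) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

pixels = [10, 12, 11, 200, 205, 198, 100, 102]
k_from_region_growing = 3   # assumed output of the region-growing stage
print(kmeans_1d(pixels, k_from_region_growing))
```

On this toy data the centers converge to the three intensity groups, which is the behavior the automatic choice of K is meant to secure.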
ISBN: (Print) 0819431907
Video systems have seen a resurgence in military applications since the recent proliferation of unmanned aerial vehicles (UAVs). Video systems offer light weight, low cost, and proven COTS technology. Video has not proven to be a panacea, however, as generally available storage and transmission systems are limited in bandwidth. Digital video systems collect data at rates of up to 270 Mbps; typical transmission bandwidths range from 9600 baud to 10 Mbps. Either extended transmission times or data compression are needed to handle video bit streams. Video compression algorithms have been developed and evaluated in the commercial broadcast and entertainment industry. The Moving Picture Experts Group (MPEG) developed MPEG-1 to compress video to CD-ROM bandwidths (1.5 Mbps) and MPEG-2 to cover the range of 5-10 Mbps and higher. Commercial technology has not extended to lower bandwidths, nor has the impact of MPEG compression on military applications been demonstrated. Using digitized video collected by UAV systems, the effects of data compression on image interpretability and task satisfaction were investigated. Using both MPEG-2 and frame decimation, video clips were compressed to rates of 6 Mbps, 1.5 Mbps, and 0.256 Mbps. Experienced image analysts provided task satisfaction estimates and National Imagery Interpretability Rating Scale (NIIRS) ratings on the compressed and uncompressed video clips. Results were analyzed to define the effects of compression rate and method on interpretability and task satisfaction. Lossless compression was estimated to occur at approximately 10 Mbps, and frame decimation was superior to MPEG-2 at low bit rates.
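Frame decimation, the simple temporal alternative the study compared against MPEG-2, amounts to keeping every Nth frame so the average bitrate falls under a target. A back-of-the-envelope sketch, with rates that are illustrative rather than the study's exact figures:

```python
import math

def decimate(frames, source_mbps, target_mbps):
    """Drop frames uniformly until the implied bitrate meets the target."""
    step = max(1, math.ceil(source_mbps / target_mbps))
    return frames[::step], source_mbps / step

frames = list(range(30))            # one second of 30 fps video
kept, rate = decimate(frames, source_mbps=6.0, target_mbps=1.5)
print(len(kept), rate)  # keeps every 4th frame → 8 frames at 1.5 Mbps
```

Unlike MPEG-2, each surviving frame is untouched, which is one plausible reason decimation preserved interpretability better at very low bit rates.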
The purpose of this paper is to develop a secure storage system for financial data based on a fuzzy recognition algorithm to improve the s...
Digital images play a vital role both in everyday applications such as satellite television, computed tomography, and magnetic resonance imaging, and in areas of research and technology such as cosmology and geographical information systems. A large part of digital image processing involves image restoration, a technique for removing or reducing the degradation introduced while the image is captured. Degradation arises from blurring as well as noise due to electronic and photometric sources. Blurring is a form of bandwidth reduction caused by an imperfect image-formation process, such as relative motion between the camera and the original scene, or an optical system that is out of focus. Image denoising is an important pre-processing task before further processing of the image, such as segmentation, feature extraction, and texture analysis; it removes noise while retaining the edges and other detailed features as much as possible. This noise is introduced during the acquisition, transmission and reception, and storage and retrieval processes. This paper presents a novel pre-processing algorithm, named the Profuse Clustering Technique (PCT), based on superpixel clustering. K-Means clustering, Simple Linear Iterative Clustering, and fusing optimization algorithms are combined in the proposed technique, which is used to denoise lung cancer images so that more accurate results are obtained in the decision-making process. (c) 2018 The Authors. Published by Elsevier B.V.
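The denoising step that superpixel-based methods like the proposed PCT rely on can be sketched in miniature: replace each pixel by the mean of its cluster, so noise averages out within homogeneous regions. The cluster labels are precomputed here; in the paper they come from the SLIC-style clustering stage.

```python
# Toy sketch of cluster-mean denoising. The superpixel assignment is
# assumed given; values are illustrative grayscale intensities.

def cluster_mean_denoise(pixels, labels):
    """Average pixels within each cluster label and substitute the mean."""
    totals, counts = {}, {}
    for v, l in zip(pixels, labels):
        totals[l] = totals.get(l, 0.0) + v
        counts[l] = counts.get(l, 0) + 1
    means = {l: totals[l] / counts[l] for l in totals}
    return [means[l] for l in labels]

noisy = [100, 104, 98, 102, 200, 196, 204, 200]
labels = [0, 0, 0, 0, 1, 1, 1, 1]   # assumed superpixel assignment
print(cluster_mean_denoise(noisy, labels))
```

The quality of the result hinges entirely on the clustering respecting true region boundaries, which is why PCT invests in the clustering stage.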
ISBN: (Print) 9781509027644
Superpixel segmentation is popularly used as a preprocessing step in image segmentation and object recognition. However, no effective superpixel segmentation algorithm has been proposed that highlights the contour features of the salient object in an image, and existing methods have difficulty describing irregular superpixel boundaries, which adversely affects subsequent image processing. In this paper, we present a novel algorithm that produces superpixels based on Delaunay triangulation, using Difference of Gaussian (DoG) feature points as nodes to build the initial superpixels. This reduces redundant superpixels in the image's smooth regions. To further avoid redundant segmentation, we first obtain the gradient image with the Roberts operator for texture features and convert the RGB color space to the YUV color space for color features. Second, we calculate the local contour orientation in the neighborhood of each superpixel boundary. Third, we compare the superpixel boundary orientation with the local contour: if the orientations are consistent, we retain the boundary; if not, we delete it. Likewise, if the average colors on the two sides of a boundary differ, it is retained. Experimental results on the Berkeley Segmentation Dataset (BSD) show that the proposed superpixel segmentation algorithm effectively suppresses redundant superpixels, and the superpixel boundaries draw the contour of the salient object clearly. Moreover, our method can self-adaptively adjust the number of superpixels based on image texture and HSI color features.
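The boundary-pruning rule described above reduces to a simple predicate: keep a boundary when its orientation agrees with the local contour, or when the mean colors on its two sides differ. A sketch of that rule, with invented angle and color thresholds:

```python
# Sketch of the keep/delete rule for superpixel boundaries.
# Thresholds (angle_tol, color_tol) are illustrative assumptions.

def keep_boundary(boundary_angle, contour_angle, color_a, color_b,
                  angle_tol=15.0, color_tol=20.0):
    """Return True if the boundary should survive pruning."""
    orientation_ok = abs(boundary_angle - contour_angle) <= angle_tol
    color_diff = sum(abs(a - b) for a, b in zip(color_a, color_b))
    return orientation_ok or color_diff > color_tol

# Aligned with the local contour: kept even though colors match.
print(keep_boundary(90, 85, (120, 120, 120), (121, 119, 120)))  # True
# Misaligned and colors nearly equal: pruned.
print(keep_boundary(90, 40, (120, 120, 120), (121, 119, 120)))  # False
```

The disjunction matters: a color edge alone is enough to preserve a boundary even when its orientation disagrees with the contour estimate.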
ISBN: (Print) 9781538611227
Human detection in digital videos is used in a variety of real-time scenarios such as unusual-activity detection in crowded places, gender identification and age determination, and people detection in heavy traffic. Every human detection algorithm first detects all the moving objects; the next step is to classify each moving object as human or non-human. This classification is based on shape, texture, and motion features. The objective of this work is to compare the performance of two human detection algorithms: one based on shape and the other based on the Daubechies wavelet transform. The shape-based detection uses the shape information of the human body to classify the moving objects. The Daubechies wavelet transform is shift invariant in nature, so the algorithm is able to detect even small hand or head movements. The performance of the two algorithms is compared in terms of detection accuracy, precision, and recall. Experimental results are promising for the wavelet-transform-based approach.
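The comparison metrics named above reduce to simple counts over detections. A minimal sketch, with hypothetical counts for one test sequence:

```python
# Precision and recall from true/false positive and false negative counts.
# The numbers below are illustrative, not the paper's results.

def precision_recall(tp, fp, fn):
    """Standard precision and recall, guarding against empty denominators."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical counts for a wavelet-based detector on one sequence.
p, r = precision_recall(tp=45, fp=5, fn=10)
print(round(p, 2), round(r, 2))  # → 0.9 0.82
```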
ISBN: (Print) 078037925X
This paper addresses vision sensor planning for part dimensional inspection. To efficiently inspect large sheet-metal parts from automotive manufacturing, it is highly desirable to obtain the minimum number of camera viewpoints, with each viewpoint satisfying all the given task constraints. However, the general minimum-viewpoint problem is NP-hard. Based on our previous work, a novel method is developed to solve the problem to suboptimality. The method first generates candidate viewpoints using a decomposition-based approach; the minimum-viewpoint problem is then formulated as a set-partition integer optimization problem, which can be solved to suboptimality using existing algorithms and software. Experimental results on real-world parts demonstrate the effectiveness of the new method.
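A common suboptimal strategy for such set-cover/set-partition formulations — used here as an illustrative stand-in for the paper's solver, not its actual algorithm — is the greedy heuristic: repeatedly pick the viewpoint that covers the most still-uncovered inspection features.

```python
# Greedy set cover over candidate viewpoints. Viewpoint names and
# feature sets are invented for illustration.

def greedy_cover(features, viewpoints):
    """viewpoints maps name -> set of covered features; return chosen names."""
    uncovered, chosen = set(features), []
    while uncovered:
        best = max(viewpoints, key=lambda v: len(viewpoints[v] & uncovered))
        if not viewpoints[best] & uncovered:
            break  # remaining features are not coverable by any viewpoint
        chosen.append(best)
        uncovered -= viewpoints[best]
    return chosen

views = {"v1": {1, 2, 3}, "v2": {3, 4}, "v3": {4, 5, 6}, "v4": {1, 6}}
print(greedy_cover({1, 2, 3, 4, 5, 6}, views))  # → ['v1', 'v3']
```

Greedy set cover carries a logarithmic approximation guarantee, which is one reason suboptimal solutions to the NP-hard formulation are acceptable in practice.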
The proceedings contain 140 papers from the 1997 International Conference on Information, Communications and Signal Processing. Topics discussed include: routing algorithms in ATM networks; traffic control in ATM networks; neural networks; image processing; audio and acoustic signal processing; telecommunication network design; database and distributed systems; code division multiple access (CDMA); switching in ATM networks; speech processing; data communication networks; multimedia information retrieval; satellite communications; signal encoding; optical fiber communication; adaptive signal processing; and telecommunication network management.
ISBN: (Print) 9781628412444
Assembly of miniaturized high-resolution cameras is typically carried out by active alignment. The sensor image is constantly monitored while the lens stack is adjusted. When sharpness is acceptable in all regions of the image, the lens position over the sensor is fixed. For multi-aperture cameras, this approach is not sufficient. During prototyping, it is beneficial to see the complete reconstructed image, assembled from all optical channels. However, typical reconstruction algorithms are high-quality offline methods that require calibration. As the geometric setup of the camera repeatedly changes during assembly, this would require frequent re-calibration. We present a real-time algorithm for an interactive preview of the reconstructed image during camera alignment. With this algorithm, systematic alignment errors can be tracked and corrected during assembly. Known imperfections of optical components can also be included in the reconstruction. Finally, the algorithm easily maps to very simple GPU operations, making it ideal for applications in mobile devices where power consumption is critical.
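The "very simple GPU operations" such a preview needs amount to remapping each channel's samples into the output grid with precomputed offsets. A toy sketch of that interleaving — the 2x2 channel layout and offsets are illustrative assumptions, not the paper's geometry:

```python
# Toy sketch: interleave samples from four optical channels into one
# preview image. Each channel tile is assumed to sample the scene on a
# coarse grid with a known (dy, dx) sub-pixel offset.

def interleave_channels(channels, width, height):
    """channels[(dy, dx)] is a height x width tile; returns a
    (2*height) x (2*width) preview with the tiles interleaved."""
    out = [[0] * (2 * width) for _ in range(2 * height)]
    for (dy, dx), tile in channels.items():
        for y in range(height):
            for x in range(width):
                out[2 * y + dy][2 * x + dx] = tile[y][x]
    return out

tiles = {(0, 0): [[1, 1], [1, 1]], (0, 1): [[2, 2], [2, 2]],
         (1, 0): [[3, 3], [3, 3]], (1, 1): [[4, 4], [4, 4]]}
preview = interleave_channels(tiles, width=2, height=2)
print(preview[0])  # → [1, 2, 1, 2]
```

Because each output pixel is a single gather from one channel, the operation maps naturally onto a fragment shader, consistent with the paper's low-power GPU argument.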