ISBN:
(Print) 9781509018666
Computer vision systems are mainly devoted to production monitoring in quality inspection systems; this is the fastest-growing and most popular non-invasive method of product defect detection. The growth in production of electronic components and the decline in their prices create favorable conditions for the development of image processing systems for industrial production. The food industry is one of the main industries, and its production volumes grow along with the human population. Containers for the food industry are manufactured in very large quantities, so quality inspection systems play an important role. An automated computer vision system was developed for controlling the quality of PET preforms. This article presents the implementation of the designed system, which uses image processing algorithms to inspect the lateral and upper parts of the workpiece and is designed according to its operating parameters. The achieved throughput is 10,000 workpieces per hour.
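For scale, the stated throughput fixes the per-workpiece time budget available to image acquisition and both inspection steps; a quick back-of-the-envelope check (our arithmetic, not the paper's):

```python
# Time budget implied by the stated throughput: all acquisition and
# image processing for one workpiece must fit in this window.
throughput_per_hour = 10_000
budget_s = 3600 / throughput_per_hour
print(f"{budget_s * 1000:.0f} ms per workpiece")  # 360 ms
```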
ISBN:
(Print) 9781509020478
This paper describes an efficient edge detection algorithm that can be used as a plug-in for digital image processing systems. The proposed algorithm uses a method based on iterative clustering, targeting a reduced number of operations. The algorithm splits the image into two parts, background and foreground, and calculates the mean value of each. Based on these results, a new threshold value is obtained, and the process is repeated until the mean values remain unchanged. The only pixels affected by the change are those with values between the two previous thresholds, so only they have to be redistributed to a new class. As a result, only a few operations are needed to obtain the desired threshold. All the algorithms and results presented in this paper were developed and tested using the C# programming language.
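As a rough illustration, the following Python sketch implements the basic iterative-clustering threshold described above; the paper's plug-in is written in C#, and its optimization of redistributing only the pixels lying between successive thresholds is omitted here for brevity:

```python
import numpy as np

def iterative_threshold(image: np.ndarray, eps: float = 1e-3) -> float:
    """Iteratively split pixels into background/foreground and move the
    threshold to the midpoint of the two class means until it converges."""
    t = float(image.mean())          # initial guess: global mean
    while True:
        fg = image[image > t]        # foreground class
        bg = image[image <= t]       # background class
        if fg.size == 0 or bg.size == 0:
            return t                 # degenerate (near-constant) image
        new_t = 0.5 * (fg.mean() + bg.mean())
        if abs(new_t - t) < eps:     # class means stopped changing
            return new_t
        t = new_t
```

A binary foreground/background map then follows from comparing each pixel against the converged threshold.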
ISBN:
(Digital) 9781510613171
ISBN:
(Print) 9781510613171; 9781510613164
Flood disaster is one of the most severe disasters in the world, and it is necessary to monitor and evaluate flood events in order to mitigate their consequences. As floods do not recognize borders, transboundary flood risk management is imperative in shared river basins. Disaster management is highly dependent on early information and requires data from the whole river basin. Based on the hypothesis that flood events over the same area with the same magnitude have an almost identical evolution, it is crucial to develop a repository database of historical flood events. In the case of extended transboundary river basins, such a tool could constitute an operational warning system for the downstream area. The utility of SAR images for flood mapping was demonstrated by previous studies, but the SAR systems then in orbit were not characterized by high operational capacity. The Copernicus programme will fill this gap in operational service for risk management, especially during the emergency phase. The operational capabilities have been significantly improved by newly available satellite constellations, such as the Sentinel-1A/B mission, which provides systematic acquisitions with a very high temporal resolution over a wide swath. The present study deals with the monitoring of a transboundary flood event in the Evros basin. The objective of the study is to create the "migration story" of the flooded areas on the basis of their evolution in time for the event that occurred from October 2014 to May 2015. Flood hazard maps will be created using SAR-based semi-automatic algorithms, and then, through the synthesis of the related maps in a GIS, a spatiotemporal thematic map of the event will be produced. The thematic map, combined with the TanDEM-X DEM (12 m/pixel spatial resolution), will define the non-affected areas, which is very useful information for the emergency planning and emergency response phases. The Sentinels meet the main requirements to be an effective and suitable operational
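The abstract does not detail the semi-automatic algorithms themselves; as a minimal sketch of the common approach (smooth open water scatters radar energy away from the sensor, so flooded pixels appear dark in calibrated backscatter), assuming a co-registered stack of Sentinel-1 scenes and a purely illustrative threshold value:

```python
import numpy as np

def flood_mask(sigma0_db: np.ndarray, threshold_db: float = -15.0) -> np.ndarray:
    """Binary water mask from a calibrated SAR backscatter image (in dB).
    Flooded pixels appear dark; the threshold here is an assumption."""
    return sigma0_db < threshold_db

def flood_frequency(masks: list) -> np.ndarray:
    """Per-pixel count of acquisitions flagged as flooded across a stack of
    co-registered masks: a simple spatiotemporal "migration" summary."""
    return np.sum(np.stack(masks), axis=0)
```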
ISBN:
(Print) 9781509022489
This paper presents a survey of blob detection methods that have been applied in image processing of medical images, as proposed in the literature. "Blob detection is a mathematical method which detects regions or points in digital images" [1]. A region or point that differs noticeably from its surroundings is called a blob. Given the increased interest in biomedical image processing systems, many algorithms and methods have been reported, but there is no systematic survey and classification of blob detection for medical images, or of how these methods have been assessed and applied. The findings, namely the most widely used blob detection methods in biomedical image processing, are presented. It was also investigated how these studies have been surveyed, how they have evolved in the main digital libraries over the last decade, and which points deserve further attention through new research. From this survey, practitioners and researchers can adopt and analyze blob detection methods for further development in their own research.
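To make the definition concrete, one widely used detector (chosen for illustration, not tied to any particular method in the survey) is the Laplacian of Gaussian (LoG); a minimal example using scikit-image, assuming a grayscale image normalized to [0, 1]:

```python
import numpy as np
from skimage.feature import blob_log

def detect_blobs(image: np.ndarray) -> np.ndarray:
    """Detect bright blobs with the Laplacian-of-Gaussian method.
    Returns an array of (row, col, radius) per detected blob."""
    blobs = blob_log(image, min_sigma=2, max_sigma=30, num_sigma=10,
                     threshold=0.1)
    blobs[:, 2] *= np.sqrt(2)  # convert scale sigma to approximate radius
    return blobs
```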
ISBN:
(Print) 9781509018178
In recent years, the growth in the quantity, diversity and capability of Earth Observation (EO) satellites has enabled increases in the achievable payload data dimensionality and volume. However, the lack of equivalent advancement in downlink technology has resulted in an onboard data bottleneck. This bottleneck must be alleviated in order for EO satellites to continue to efficiently provide high-quality and increasing quantities of payload data. This research explores the selection and implementation of state-of-the-art multidimensional image compression algorithms and proposes a new onboard data processing architecture to help alleviate the bottleneck and increase the data throughput of the platform. The proposed system is based upon a backplane architecture to provide scalability across different satellite platform sizes and varying mission objectives. The heterogeneous nature of the architecture allows the benefits of both Field Programmable Gate Array (FPGA) and Graphics Processing Unit (GPU) hardware to be leveraged for maximised data processing throughput.
ISBN:
(Digital) 9783319309330
ISBN:
(Print) 9783319309330; 9783319309323
With the advance of data storage and data acquisition technologies, image databases have grown enormously, so proper and accurate systems are needed to manage them. In this paper we focus on transformation techniques to search, browse and retrieve images from a large database. We briefly discuss the CBIR technique for image retrieval, using the Discrete Cosine Transform to generate feature vectors, and we review different retrieval algorithms. The proposed work is evaluated on 9,000 images from the MIRFLICKR database (Huiskes, ACM International Conference on Multimedia Information Retrieval (MIR'08), 2008 [1]). We focus on the differences in precision, recall, query time and overall performance of the different methods when querying with a database image and with a non-database image.
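As a rough sketch of this kind of DCT feature extraction (the exact coefficient selection and matching scheme used in the paper are not given in the abstract; keeping the low-frequency top-left block is our assumption):

```python
import numpy as np
from scipy.fft import dctn

def dct_feature_vector(gray: np.ndarray, k: int = 8) -> np.ndarray:
    """Compact CBIR descriptor: 2-D DCT of a grayscale image, keeping the
    k x k low-frequency coefficients (top-left block) as the feature vector."""
    coeffs = dctn(gray.astype(np.float64), norm="ortho")
    return coeffs[:k, :k].ravel()

def euclidean_rank(query_vec: np.ndarray, db_vecs: np.ndarray) -> np.ndarray:
    """Rank database images by Euclidean distance to the query descriptor."""
    distances = np.linalg.norm(db_vecs - query_vec, axis=1)
    return np.argsort(distances)
```

Precision and recall are then computed over the top-ranked results against the query's ground-truth class.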
ISBN:
(Print) 9789897582325
This work proposes a textual and graphical domain-specific language (DSL) designed especially for modeling and writing data and image processing algorithms. Since reusing algorithms and other functionality leads to higher program quality and usually shorter development time, this approach introduces a novel component-based language design. Special diagrams and structures, such as components, component diagrams and component-instance diagrams, are introduced. The new language constructs allow an abstract, object-oriented description of data and image processing tasks. Additionally, a compatible graphical design interface is proposed, giving modelers and architects the opportunity to decide which kind of modeling they prefer (graphical or textual, including round-trip engineering).
ISBN:
(Print) 9786176079132
The paper analyzes well-known automated microscopy systems and presents a comparative analysis of low- and middle-level algorithms. An adaptive module for pre-processing and image segmentation has been worked out.
ISBN:
(Print) 9781785616525
The data structure of hyperspectral images contains information from spectral ranges far beyond the limits of conventional, visible-light imaging devices. This additional information comes at the cost of unwieldy large image sizes, an important computational-resource consideration for many applications. Recent studies have shown that the usual large amount of redundancy in hyperspectral images can be discarded to avoid classification noise by means of sparse signal processing principles. To improve classification performance, the class-dependent Sparse Representation Classification (cd-SRC) algorithm includes Euclidean distance information between the sparse representation of a sample pixel to be classified and the training classes. The current work describes the use of the Manhattan (or city block) distance for improving the cd-SRC classification procedure. Our results show that classification performance using the Euclidean distance is comparable to that of the Manhattan distance metric, which requires a smaller number of significantly less computationally expensive operations. In addition, we show that sparse representation classification has a significant advantage in classification performance over established hyperspectral image classification algorithms, such as the well-known Minimum Euclidean Distance and Support Vector Machine classifiers.
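The computational argument is easy to see in code; a minimal comparison of the two metrics (the cd-SRC residual computation itself is not reproduced here):

```python
import numpy as np

def euclidean(a: np.ndarray, b: np.ndarray) -> float:
    """L2 distance: one multiply per element plus a square root."""
    d = a - b
    return float(np.sqrt(np.dot(d, d)))

def manhattan(a: np.ndarray, b: np.ndarray) -> float:
    """L1 (city block) distance: only absolute values and additions,
    avoiding the multiplications and square root of the L2 metric."""
    return float(np.sum(np.abs(a - b)))
```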
ISBN:
(Print) 9781538622834
Low-dose proton computed tomography (pCT) is an evolving imaging modality used in proton therapy planning to address the range-uncertainty problem. The goal of pCT is to generate a 3D map of relative stopping power measurements with high accuracy within clinically required time frames. Generating accurate relative stopping power values in the shortest possible time is a key goal when developing image reconstruction software. Existing image reconstruction software has successfully met, and even exceeded, this time goal, but requires clusters with hundreds of processors. This paper describes a novel reconstruction technique using two graphics processing unit devices. The proposed technique is tested on both simulated and experimental datasets and on two different systems, namely Nvidia K40 and P100 graphics processing units from IBM and Cray. The experimental results demonstrate that our proposed reconstruction method meets both the timing and accuracy requirements, with the benefits of reasonable cost and efficient use of power.