Despite impressive capabilities and outstanding performance, deep neural networks (DNNs) have raised increasing public concern about their security, due to their frequently occurring erroneous behaviors. It is therefore necessary to test DNNs systematically before they are deployed to real-world applications. Existing testing methods have provided fine-grained metrics based on neuron coverage and proposed various approaches to improve such metrics. However, it has gradually been realized that higher neuron coverage does not necessarily translate into a better ability to identify the defects that lead to errors. Moreover, coverage-guided methods cannot hunt errors caused by a faulty training procedure, so the robustness improvement obtained by retraining DNNs on these testing examples is unsatisfactory. To address this challenge, we introduce the concept of excitable neurons based on the Shapley value and design a novel white-box testing framework for DNNs, namely DeepSensor. It is motivated by our observation that neurons bearing larger responsibility for changes in model loss under small perturbations are more likely to be related to incorrect corner cases caused by potential defects. By maximizing the number of excitable neurons with respect to various wrong behaviors of models, DeepSensor can generate testing examples that effectively trigger more errors caused by adversarial inputs, polluted data, and incomplete training. Extensive experiments on both image classification models and speaker recognition models demonstrate the superiority of DeepSensor. Compared with state-of-the-art testing approaches, DeepSensor finds more test errors caused by adversarial inputs (∼ ×1.2), polluted data (∼ ×5), and incompletely trained DNNs (∼ ×1.3). Additionally, it helps DNNs build a larger l2-norm robustness bound (∼ ×3) via retraining, according to CLEVER's certification. We further provide interpretable proofs of the effectiveness of DeepSensor via excitable neurons.
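A minimal sketch of how a Shapley-style responsibility score for individual neurons could be estimated by permutation sampling, in the spirit of the excitable-neuron idea above; the masking scheme, the choice of a single linear layer, and the model/loss_fn interfaces are illustrative assumptions, not the DeepSensor implementation.

```python
import torch

def masked_loss(model, layer, loss_fn, x, y, mask):
    """Forward pass with the selected neurons zeroed out via a hook."""
    h = layer.register_forward_hook(lambda m, i, o: o * mask)
    with torch.no_grad():
        loss = loss_fn(model(x), y).item()
    h.remove()
    return loss

def neuron_shapley(model, layer, loss_fn, x, y, n_perm=20):
    """Estimate each neuron's Shapley contribution to the loss by
    removing neurons in random orders (permutation sampling)."""
    n = layer.out_features
    phi = torch.zeros(n)
    for _ in range(n_perm):
        order = torch.randperm(n)
        mask = torch.ones(n)
        prev = masked_loss(model, layer, loss_fn, x, y, mask)
        for j in order:
            mask[j] = 0.0  # drop neuron j from the coalition
            cur = masked_loss(model, layer, loss_fn, x, y, mask)
            phi[j] += cur - prev  # marginal contribution of neuron j
            prev = cur
    return phi / n_perm
```

Neurons with large scores under small input perturbations would then be the candidates flagged as excitable.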
ISBN (print): 9781728116440
The use of Deep Learning methods has been identified as a key opportunity for enabling the processing of extreme-scale scientific datasets. Feeding data into compute nodes equipped with several high-end GPUs at a sufficiently high rate is a known challenge. Facilitating the processing of these datasets thus requires the ability to store petabytes of data as well as to access the data with very high bandwidth. In this work, we look at two Deep Learning use cases for cytoarchitectonic brain mapping. These applications are very challenging for the underlying IO system. We present an in-depth analysis of their IO requirements and performance. Both applications are limited by IO performance, as the training processes often have to wait several seconds for new training data. Both applications read random patches from a collection of large HDF5 datasets or TIFF files, which results in many small non-consecutive accesses to the parallel file systems. By using a chunked data format or storing temporary copies of the required patches, the IO performance can be improved significantly. This leads to a decrease in total runtime of up to 80%.
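A minimal sketch of the chunked-format idea under stated assumptions: aligning the HDF5 chunk shape with the training patch size so that a random patch read touches a few contiguous chunks instead of many scattered rows; the file name, array size, and patch size are illustrative.

```python
import h5py
import numpy as np

N, PATCH = 8192, 256  # illustrative section size and training patch edge

# Store the image with chunks matching the patch size, so each random
# patch read maps to a small number of contiguous chunk reads on disk.
with h5py.File("section.h5", "w") as f:
    img = np.random.rand(N, N).astype(np.float32)
    f.create_dataset("raw", data=img, chunks=(PATCH, PATCH))

# Random-patch access pattern typical of the training loop.
with h5py.File("section.h5", "r") as f:
    dset = f["raw"]
    ys, xs = np.random.randint(0, N - PATCH, size=(2, 32))
    batch = np.stack([dset[y:y + PATCH, x:x + PATCH] for y, x in zip(ys, xs)])
```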
ISBN (print): 9781728143286
In order to improve the visual quality of degraded underwater images, we develop an innovative variational model based on Retinex with a total generalized variation (TGV) prior on the illumination. The TGV prior is adopted to approximate piecewise smoothness and piecewise linear smoothness of the illumination, combining first-order and second-order total variation (TV) to model the variation of illumination. When this illumination prior is adopted in the Retinex-based algorithm, the illumination and reflectance are well separated, the enhanced underwater results appear more natural, and their details and edges are better preserved. An efficient iterative optimization method is then derived to solve the proposed model by alternately updating the illumination and the reflectance. Numerous experiments on both visual results and objective metrics demonstrate the superiority of our method over several underwater enhancement methods. In addition, the proposed method can be extended to dehazing, sandstorm removal, and low-illumination image enhancement, which illustrates the broader capability of our model.
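One plausible instantiation of such an energy, written in the log domain with s = log I, l = log L, and r = log R, assuming a quadratic fidelity term and a TV reflectance regularizer; the exact weights and terms used in the paper may differ:

```latex
\min_{l,\,r,\,p}\;
  \frac{\lambda}{2}\bigl\| l + r - s \bigr\|_2^2
  + \underbrace{\alpha_1 \bigl\| \nabla l - p \bigr\|_1
  + \alpha_0 \bigl\| \mathcal{E}(p) \bigr\|_1}_{\mathrm{TGV}^2_{\alpha}(l)}
  + \beta \bigl\| \nabla r \bigr\|_1,
\qquad
\mathcal{E}(p) = \tfrac{1}{2}\bigl( \nabla p + \nabla p^{\top} \bigr)
```

The auxiliary field p makes the prior behave like first-order TV where the illumination is piecewise constant (p ≈ 0) and like second-order TV where it is piecewise linear (p ≈ ∇l); alternating minimization then updates l, p, and r in turn, matching the iterative scheme described above.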
Person re-identification aims to associate images of the same person over multiple non-overlapping camera views at different times. Depending on the human operator, manual re-identification in large camera networks is...
The Canny edge detection algorithm significantly outperforms existing edge detection techniques in many computer vision applications. However, it is a complex, time-consuming process with high hardware cost. To overcome these issues, a novel block-level Canny edge detection algorithm is proposed to detect edges without any loss. It uses the Sobel operator and approximation methods to compute gradient magnitude and orientation, replacing complex operations to reduce hardware cost, together with the existing non-maximum suppression, block classification for adaptive thresholding, and the existing hysteresis thresholding. Pipelining is introduced to reduce latency. The proposed algorithm is implemented on a Xilinx Virtex-5 FPGA and provides better performance than the frame-level Canny edge detection algorithm. The synthesized architecture reduces execution time by 6.8% and utilizes fewer resources to detect the edges of a 512×512 image compared to the existing distributed Canny edge detection algorithm.
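A sketch of the kind of arithmetic approximation involved, under stated assumptions: gradient magnitude via |Gx| + |Gy| instead of a square root, and orientation quantized to four sectors by integer compares against scaled |Gx| rather than an arctangent; the kernels and constants are the textbook ones, not the exact FPGA datapath.

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def approx_gradient(img):
    gx = convolve(img.astype(np.int32), SOBEL_X)
    gy = convolve(img.astype(np.int32), SOBEL_Y)
    # |Gx| + |Gy| avoids the costly square root of the exact L2 magnitude.
    mag = np.abs(gx) + np.abs(gy)
    # Quantize orientation to four sectors with shift-and-compare logic
    # instead of arctan: tan(22.5 deg) ~ 0.414, tan(67.5 deg) ~ 2.414.
    t = np.abs(gy).astype(np.int64) * 1000
    lo = np.abs(gx).astype(np.int64) * 414
    hi = np.abs(gx).astype(np.int64) * 2414
    sector = np.where(t < lo, 0,                     # near-horizontal gradient
             np.where(t > hi, 2,                     # near-vertical gradient
             np.where((gx > 0) == (gy > 0), 1, 3)))  # 45 deg / 135 deg diagonals
    return mag, sector
```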
Nowadays various methods and sensors are available for 3D reconstruction tasks; however, it is still necessary to integrate the advantages of different technologies to optimize the quality of 3D models. Computed tomography (CT) is an imaging technique that takes a large number of radiographic measurements from different angles in order to generate slices of the object, however without colour information. The aim of this study is to put forward a framework to extract colour information from photogrammetric images for the corresponding CT surface data with high precision. The 3D models of the same object are generated from the CT and photogrammetry methods respectively, and a transformation matrix is determined to align the extracted CT surface to the photogrammetric point cloud through a coarse-to-fine registration process. The pose of each image relative to the photogrammetric point cloud, which can be obtained from the standard image alignment procedure, then also applies to the aligned CT surface data. For each camera pose, a depth image of the CT data is calculated by projecting all the CT points onto the image plane. The depth image should in principle agree with the corresponding photogrammetric image. Points that cannot be seen from the pose but are nevertheless projected onto the depth image are excluded from the colouring process. This is realized by comparing the range values of neighbouring pixels and finding the corresponding 3D points with larger range values. The same procedure is applied for all image poses to obtain the coloured CT surface. Thus, by using photogrammetric images, we achieve a coloured CT dataset with high precision, which combines the advantages of both methods. Rather than simply stitching different data together, we dive deep into the photogrammetric 3D reconstruction process and optimize the CT data with colour information. This process can also provide an initial route and more options for other data fusion processes.
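A minimal sketch of the visibility test described above, assuming a simple pinhole camera model: project all CT points into a camera pose, keep the nearest range value per pixel in a depth buffer, and exclude points that land on a pixel with a noticeably smaller buffered range; the function name and tolerance are illustrative.

```python
import numpy as np

def visible_points(pts, R, t, K, shape, tol=0.01):
    """pts: (N,3) CT surface points; R, t: camera pose; K: 3x3 intrinsics;
    shape: (H, W). Returns a boolean visibility mask over all points."""
    H, W = shape
    cam = pts @ R.T + t                       # world -> camera coordinates
    z = cam[:, 2]
    idx = np.flatnonzero(z > 1e-9)            # keep points in front of camera
    proj = cam[idx] @ K.T
    uv = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)
    inb = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    idx, uv = idx[inb], uv[inb]
    # Depth buffer: nearest range value seen at each pixel.
    zbuf = np.full(shape, np.inf)
    np.minimum.at(zbuf, (uv[:, 1], uv[:, 0]), z[idx])
    # Points with a larger range value than the buffered minimum at the
    # same pixel are occluded and excluded from colouring.
    vis = np.zeros(len(pts), bool)
    vis[idx] = z[idx] <= zbuf[uv[:, 1], uv[:, 0]] * (1 + tol)
    return vis
```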
Context. Stars form preferentially in clusters embedded inside massive molecular clouds, many of which contain high-mass stars. Thus, a comprehensive understanding of star formation requires a robust and statistically well-constrained characterization of the formation and early evolution of these high-mass star clusters. To achieve this, we designed the ALMAGAL Large Program, which observed 1017 high-mass star-forming regions distributed throughout the Galaxy, sampling different evolutionary stages and environmental conditions. Aims. In this work, we present the acquisition and processing of the ALMAGAL data. The main goal is to set up a robust pipeline that generates science-ready products, that is, continuum images and spectral cubes for each ALMAGAL field, with good and uniform quality across the whole sample. Methods. ALMAGAL observations were performed with the Atacama Large Millimeter/submillimeter Array (ALMA). Each field was observed with three different telescope arrays, sensitive to spatial scales ranging from ≈1000 au up to ≈0.1 pc. The spectral setup allows sensitive (≈0.1 mJy beam⁻¹) imaging of the continuum emission at 219 GHz (or 1.38 mm), and it covers multiple molecular spectral lines observed in four different spectral windows that span about 4 GHz in frequency coverage. We have designed a Python-based processing workflow to calibrate and image these observational data. This ALMAGAL pipeline includes an improved continuum determination suited for line-rich sources; an automatic self-calibration process that reduces phase-noise fluctuations and improves the dynamic range by up to a factor of ≈5 in about 15% of the fields; and the combination of data from different telescope arrays to produce science-ready, fully combined images. Results. The final products are a set of uniformly generated continuum images and spectral cubes for each ALMAGAL field, including individual-array and combined-array products.
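A heavily simplified sketch of what one round of the automatic phase-only self-calibration could look like using standard CASA tasks (tclean, gaincal, applycal); the actual ALMAGAL pipeline automates solution-interval selection, quality assessment, and multi-array combination, none of which is shown here, and the file names, reference antenna, and imaging parameters are illustrative.

```python
from casatasks import tclean, gaincal, applycal

vis = "field.ms"  # illustrative measurement set name

# Initial model image from the uncalibrated data, saved to the model column.
tclean(vis=vis, imagename="field_iter0", niter=1000,
       imsize=512, cell="0.1arcsec", savemodel="modelcolumn")

# Solve for phase-only gain corrections against that model...
gaincal(vis=vis, caltable="field.pcal", solint="int",
        calmode="p", refant="DA41")

# ...apply them, then re-image with the improved phases.
applycal(vis=vis, gaintable=["field.pcal"])
tclean(vis=vis, imagename="field_iter1", niter=1000,
       imsize=512, cell="0.1arcsec")
```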
Central Receiver Systems use thousands of heliostats distributed in a field to reflect solar radiation onto a receiver installed on a tower. The large distances between the heliostats and their aim points require high precision of the heliostat control and drive system to concentrate the solar radiation efficiently and to allow for safe and reliable plant operation. Precise calibration procedures are needed for the commissioning of open-loop systems. Furthermore, the need for cost reduction in the heliostat field motivates precise and fast calibration systems that allow for savings in the design of the individual heliostats. This work introduces a method for the measurement of the true aim points of heliostats in operation. A proposed application is the plug-in system HelioControl, to be used for parallel in-situ calibration of multiple heliostats in operation. The approach aims at extracting the focal spot positions of individual heliostats on the receiver domain using image analysis and signal modulation. A centralized remote vision system, installed far from the harsh conditions at the focal area, takes image sequences of the receiver domain. During the measurement, a signal is modulated onto different heliostats, which is then extracted for the retrieval of the true aim points. This paper describes the theoretical background of the methodology and demonstrates its functionality based on two simulated cases showing the practical advantages introduced by this approach.
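A sketch of the signal-extraction idea, assuming lock-in style detection: correlating each pixel's intensity time series against the known modulation waveform of one heliostat isolates that heliostat's focal spot in the image sequence; the waveform, frame rate, and array sizes are illustrative.

```python
import numpy as np

def focal_spot_map(frames, ref):
    """frames: (T, H, W) image sequence of the receiver; ref: (T,) known
    modulation signal of one heliostat. Returns a per-pixel correlation map."""
    f = frames - frames.mean(axis=0)          # remove the static background
    r = ref - ref.mean()
    r = r / (np.linalg.norm(r) + 1e-12)
    # Lock-in style detection: project each pixel's time series onto the
    # reference waveform; the modulated focal spot stands out.
    return np.tensordot(r, f, axes=(0, 0))

# Illustrative usage: 200 frames at 25 fps, 1 Hz square-wave modulation.
T, H, W = 200, 64, 96
ref = np.sign(np.sin(2 * np.pi * 1.0 * np.arange(T) / 25.0))
frames = np.random.rand(T, H, W)              # stand-in for camera data
spot = focal_spot_map(frames, ref)
cy, cx = np.unravel_index(np.argmax(spot), spot.shape)  # aim-point estimate
```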
ISBN (digital): 9781728183640
ISBN (print): 9781728183657
Digital processing of remotely sensed image data has been of great importance in recent times. This research work discusses a task distribution method for parallel image processing and load balancing under the circumstance of multiple tasks and multiple processors in remote sensing. Task distribution can speed up computation, improve efficiency, and perform larger computations that are not possible on a single-processor system. The tasks are distributed based on color segmentation, and a Support Vector Machine (SVM) is used to classify the indices of the input image; the work thus intends to design and improve a color-segmentation-based task distribution method for index classification using machine learning. In the system, an RGB satellite image is used as the input, and the output is the four indices of forest, building, road, and land. The results of the system are more accurate and consume less time than non-distributed computing methods. It is implemented on the MATLAB platform with the Parallel Computing Toolbox, because the system can solve computationally and data-intensive tasks using multicore processors and clusters of computers.
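A minimal sketch of the classification step under stated assumptions: an SVM trained on labelled RGB samples assigns each pixel to one of the four indices, with image blocks dispatched in parallel via joblib standing in for MATLAB's Parallel Computing Toolbox; the training data and labels are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from joblib import Parallel, delayed

CLASSES = ["forest", "building", "road", "land"]

# Placeholder training set: labelled RGB samples for the four indices.
X_train = np.random.randint(0, 256, (400, 3))
y_train = np.random.randint(0, 4, 400)
clf = SVC(kernel="rbf").fit(X_train, y_train)

def classify_block(block):
    """Classify every pixel of one image block by its RGB color."""
    h, w, _ = block.shape
    return clf.predict(block.reshape(-1, 3)).reshape(h, w)

image = np.random.randint(0, 256, (512, 512, 3))   # stand-in RGB scene
blocks = np.array_split(image, 4, axis=0)          # one task per worker
labels = Parallel(n_jobs=4)(delayed(classify_block)(b) for b in blocks)
index_map = np.vstack(labels)                      # per-pixel class indices
```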