With the rapid progress of tomography systems, a need has emerged for innovations that provide high-accuracy measurements, fast acquisition rates, and low-cost, robust systems for multi-phase flow and medical instruments. The main objective of this work is to design and implement a 3D Electrical Impedance Tomography (EIT) system to effectively monitor a multi-phase flow consisting of an annular flow with an air core of unknown diameter surrounded by a liquid phase, such as an oil-water mixture of high water-cut value. The system consists of an array of electrode sensors evenly distributed around the pipe confining the three-phase flow. A dedicated hardware module (GPU) then reconstructs the signals captured from the sensors into a 3D image of the flow by solving the forward-inverse problem. The annular flow is generated using a flow conditioner consisting of a swirl cage followed by a bluff body that generates vortices pushing the denser liquid phase to the outer side of the pipeline. The 3D EIT system is designed to accurately measure the gas-liquid and liquid-liquid fractions and provide a 3D volume image of the flow. Hence, it can be utilized for pipeline and reservoir characterization to achieve high efficiency, cost reduction, and safer production. The approach is to design a dedicated, highly parallel hardware architecture that can sustain the high computational complexity of the 3D EIT algorithm; conveniently, advanced software development tools are available to help develop such parallel hardware algorithms effectively. The main novelty is utilizing GPUs to outperform existing real-time tomography hardware platforms, which are mainly based on Field Programmable Gate Arrays (FPGAs) and have the disadvantage of being less dense and offering lower-precision arithmetic operations. Matrix multiplication and matrix inversion are the most intensive computations in EIT.
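Since matrix multiplication and inversion dominate the runtime, the GPU mapping the abstract describes can be illustrated with a minimal Python sketch of one regularized inverse step using CuPy; the Jacobian size, regularization parameter, and all variable names below are hypothetical, not taken from the paper.

```python
# A minimal sketch (not the authors' implementation) of the core EIT
# linear-algebra step on a GPU, using CuPy.
import cupy as cp

def tikhonov_step(J, dv, lam=1e-3):
    """One regularized inverse step: solve (J^T J + lam*I) dx = J^T dv.

    J  : (n_measurements, n_voxels) sensitivity (Jacobian) matrix
    dv : (n_measurements,) voltage residual from the forward solver
    """
    JtJ = J.T @ J                                    # dense GPU matrix multiply
    A = JtJ + lam * cp.eye(JtJ.shape[0], dtype=J.dtype)
    return cp.linalg.solve(A, J.T @ dv)              # GPU solve instead of explicit inverse

# Hypothetical problem size for a coarse 3D mesh:
J = cp.random.rand(208, 4096, dtype=cp.float32)      # sensitivity matrix
dv = cp.random.rand(208, dtype=cp.float32)           # voltage residual
dx = tikhonov_step(J, dv)                            # conductivity update per voxel
```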
The explosive growth of videos on streaming media platforms has underscored the urgent need for effective video quality assessment (VQA) algorithms to monitor and perceptually optimize the quality of streaming videos....
Fuzzy co-clustering algorithms are effective techniques for multi-dimensional clustering in which all features are considered of equal importance (relevance). In fact, the features' importance can differ, and several of them may even be redundant. Removing such redundant features is the idea behind feature reduction in big data processing. In this paper, we propose a new unsupervised learning scheme that incorporates feature-weighted entropy into the objective function of fuzzy co-clustering, called the Feature-Reduction Fuzzy Co-Clustering Algorithm (FRFCoC). First, a new objective function is formed from the original fuzzy co-clustering objective function by adding parameters representing the entropy weights of the different features. Next, an automatic feature-reduction and clustering scheme is derived from FCoC's original learning scheme, which computes the new parameters and the conditions for eliminating irrelevant feature components. The FRFCoC algorithm can be mathematically shown to converge after a finite number of iterations. Experiments conducted on several high-dimensional data sets and hyperspectral images demonstrate the outstanding performance of the FRFCoC algorithm compared with previously proposed algorithms. (C) 2020 Elsevier B.V. All rights reserved.
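To make the feature-weighted-entropy idea concrete, here is a schematic Python sketch of how entropy-regularized feature weights can be recomputed each iteration and near-zero-weight features pruned. It follows the general pattern of such objectives, not FRFCoC's exact update rules, and all names (gamma, eps, etc.) are illustrative.

```python
# Schematic sketch of the feature-reduction idea: feature weights are
# recomputed each iteration from per-feature dispersion via an entropy
# term, and features whose weight collapses toward zero are discarded.
import numpy as np

def update_feature_weights(X, centers, labels, gamma=1.0):
    """X: (n, d) data; centers: (k, d); labels: (n,) hard assignments.
    Returns weights w with sum(w) == 1; gamma controls the entropy term."""
    D = np.zeros(X.shape[1])
    for j, c in enumerate(centers):
        members = X[labels == j]
        D += ((members - c) ** 2).sum(axis=0)   # within-cluster dispersion per feature
    w = np.exp(-(D - D.min()) / gamma)          # shift by D.min() for numerical stability
    return w / w.sum()

def prune_features(X, w, eps=1e-3):
    """Drop feature components whose weight has fallen below eps."""
    keep = w > eps
    return X[:, keep], w[keep] / w[keep].sum(), keep
```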
ISBN (print): 9781713871088
The best neural architecture for a given machine learning problem depends on many factors: not only the complexity and structure of the dataset, but also on resource constraints including latency, compute, energy consumption, etc. Neural architecture search (NAS) for tabular datasets is an important but under-explored problem. Previous NAS algorithms designed for image search spaces incorporate resource constraints directly into the reinforcement learning (RL) rewards. However, for NAS on tabular datasets, this protocol often discovers suboptimal architectures. This paper develops TabNAS, a new and more effective approach to handle resource constraints in tabular NAS using an RL controller motivated by the idea of rejection sampling. TabNAS immediately discards any architecture that violates the resource constraints without training or learning from that architecture. TabNAS uses a Monte-Carlo-based correction to the RL policy gradient update to account for this extra filtering step. Results on several tabular datasets demonstrate the superiority of TabNAS over previous reward-shaping methods: it finds better models that obey the constraints.
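A toy Python sketch of the rejection-sampling idea follows: architectures violating the constraint are discarded without evaluation, and the REINFORCE update is corrected with a Monte-Carlo estimate of the gradient of the log feasibility mass, so the controller effectively follows the conditional, feasible-only policy. The 8-way choice, reward, and feasibility check are hypothetical stand-ins, not TabNAS's actual search space.

```python
# Toy illustration of rejection sampling with a Monte-Carlo policy-gradient
# correction; not the TabNAS implementation.
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(8)                       # one categorical choice, e.g. hidden width

def feasible(a):                           # hypothetical resource constraint
    return a <= 4

def reward(a):                             # hypothetical quality signal
    return -abs(a - 3)                     # best feasible option is 3

for step in range(300):
    p = np.exp(logits - logits.max()); p /= p.sum()
    samples = rng.choice(8, size=64, p=p)
    mask = np.array([feasible(a) for a in samples])
    if not mask.any():
        continue                           # infeasible draws are discarded untrained
    feas = samples[mask]
    r = np.array([reward(a) for a in feas], dtype=float)
    glogp = np.eye(8)[feas] - p            # grad of log p(a) wrt logits, per sample
    glogPF = glogp.mean(axis=0)            # MC estimate of grad log P(feasible)
    grad = ((r - r.mean())[:, None] * (glogp - glogPF)).mean(axis=0)
    logits += 0.5 * grad                   # ascend the conditional-policy objective

print(np.argmax(logits))                   # converges toward the best feasible choice (3)
```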
In some practical situations, images are defined on a circular domain: for example, images of the facies (thin film) of dried biological fluid, eyes, the cut of a tree trunk, etc. Currently, most of the image processing works deal wit...
Stereoscopic imagers are widely used in machine vision for three-dimensional (3D) visualization as well as in non-destructive testing for quantitative characterization of cracks, delamination and other defects. Measurement capability in these systems is provided by a proper combination of optical parameters and data processing techniques. The conventional approach to their design consists of two sequential stages: optical system design, followed by optimization of the calibration and image processing algorithms. Such a two-stage procedure often complicates both the hardware and the software, and results in a time-ineffective design procedure and a cost-ineffective solution. We demonstrate a more effective approach and propose to estimate the errors of 3D measurements at the early optical design stage. We show that computer simulation using optical design software allows not only optimizing the optical parameters of the imager but also choosing the most effective mathematical model of the system and the equipment necessary for calibration. We tested the proposed approach on the design of a miniature prism-based stereoscopic system and analyzed the impact of various factors (aberrations, tolerances, etc.) on the image quality as well as on the quality of calibration and the accuracy of 3D measurements. The proposed joint design approach may be highly effective for various measurement systems and applications where both the optical parameters and the image processing algorithms are not defined in advance and need to be optimized. (C) 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
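As a back-of-envelope illustration of the kind of error budgeting performed at the optical design stage, the following Python sketch propagates a disparity error to a depth error for an idealized rectified stereo pair (Z = fB/d). The focal length, baseline, and disparity noise are hypothetical values, not the parameters of the miniature prism-based system.

```python
# First-order depth-error propagation for ideal rectified stereo:
# Z = f*B/d implies sigma_Z ~= Z**2 / (f*B) * sigma_d.
def depth_error(Z, f_px, baseline_m, sigma_disp_px):
    """Depth uncertainty (meters) at range Z (meters)."""
    return (Z ** 2) / (f_px * baseline_m) * sigma_disp_px

# Hypothetical miniature-system numbers: f = 1200 px, B = 10 mm,
# 0.25 px disparity noise.
for Z in (0.1, 0.2, 0.5):                  # working distances in meters
    sigma_mm = depth_error(Z, 1200.0, 0.01, 0.25) * 1e3
    print(f"Z = {Z:.1f} m -> sigma_Z = {sigma_mm:.2f} mm")
```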
Deep learning, one of the currently most popular computer science research trends, improves neural networks by adding more and deeper layers, allowing higher abstraction levels and more accurate data analysis. Although deep convolutional neural networks (DCNNs), as a deep learning algorithm, have recently achieved promising results in data analysis, their requirement for a large amount of data hinders their use in medical data analysis, since data from the medical field are challenging to obtain. Breast cancer is a common cancer in women. To diagnose this kind of cancer, breast cell shapes in histopathology images must be examined by senior pathologists. The number of pathologists per capita in the world is insufficient, especially in Africa, and human error may occur in the diagnostic procedure. After evaluating deep learning methods and algorithms for breast histological data processing, we tried to improve the accuracy of current systems. As a result, this study proposes two effective deep transfer learning-based models, which rely on DCNNs pre-trained on the large collection of ImageNet images, improving current state-of-the-art systems in both binary and multiclass classification. We transfer the ImageNet pre-trained weights of ResNet50 and DenseNet121 as initial weights and fine-tune these models with a deep classifier and data augmentation to detect various malignant and benign tissue samples in both binary and multiclass classification. The proposed models were examined with optimized hyperparameters in magnification-dependent and magnification-independent classification modes. In multiclass classification, the proposed system achieved up to 98% accuracy; in binary classification, it achieved up to 100% accuracy. The results outperform the accuracies of previous studies in all defined performance metrics for breast cancer CAD systems based on histological images.
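A minimal tf.keras sketch of the described transfer-learning recipe is given below: an ImageNet-pretrained ResNet50 used as an initially frozen feature extractor, light data augmentation, and a small trainable classification head. The class count, augmentation choices, and hyperparameters are illustrative, not the paper's tuned settings.

```python
# Sketch of the transfer-learning recipe: pretrained backbone + new head.
import tensorflow as tf

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                                  # train only the new head first

inputs = tf.keras.Input((224, 224, 3))
x = tf.keras.layers.RandomFlip("horizontal")(inputs)    # data augmentation
x = tf.keras.layers.RandomRotation(0.1)(x)
x = tf.keras.applications.resnet50.preprocess_input(x)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(8, activation="softmax")(x)  # e.g. 8 tissue subtypes

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...) on histology patches;
# afterwards, unfreeze some top backbone layers for full fine-tuning.
```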
The basic methods for stabilizing video images are considered. The requirements for the image stabilization algorithm in the video stream processing system of unmanned aerial vehicles (UAVs) are formulated. An imp...
Cosmic Ray (CR) hits are the major contaminants in astronomical imaging and spectroscopic observations involving solid-state detectors. Correctly identifying and masking them is a crucial part of the image processing pipeline, since failing to do so may lead to spurious detections. For this purpose, we have developed and tested a novel deep learning based framework for the automatic detection of CR hits in astronomical imaging data from two different imagers: the Dark Energy Camera (DECam) and the Las Cumbres Observatory Global Telescope (LCOGT). We considered two baseline models, deepCR and Cosmic-CoNN, the current state-of-the-art learning-based algorithms, trained on Hubble Space Telescope (HST) ACS/WFC and LCOGT Network images respectively. We experimented with augmenting the baseline models with Attention Gates (AGs) to improve CR detection performance. We trained our models on DECam data and demonstrate that adding AGs yields a consistent marginal improvement in True Positive Rate (TPR) at 0.01% False Positive Rate (FPR) and in Precision at 95% TPR over the aforementioned baselines on the DECam dataset. We also demonstrate that the proposed AG-augmented models provide a significant gain in TPR at 0.01% FPR when tested on previously unseen LCO test data comprising images from three distinct telescope classes. Furthermore, the proposed baseline models, with and without attention augmentation, outperform state-of-the-art models such as Astro-SCRAPPY, MaxiMask (which is trained natively on DECam data), and the pre-trained ground-based Cosmic-CoNN. This study demonstrates that AG augmentation yields better deepCR and Cosmic-CoNN models and improves their generalization capability on unseen data. (C) 2022 Elsevier B.V. All rights reserved.
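For readers unfamiliar with Attention Gates, the following PyTorch sketch shows the standard additive gate (after Oktay et al., 2018) of the kind used to augment U-Net-style skip connections in such models; the channel sizes are placeholders, and this is the generic module rather than the authors' exact configuration.

```python
# Generic additive Attention Gate for a U-Net skip connection: the gate
# learns a per-pixel mask that suppresses skip features irrelevant to
# the target (here, CR hits).
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, ch_skip, ch_gate, ch_inter):
        super().__init__()
        self.w_x = nn.Conv2d(ch_skip, ch_inter, 1)   # project skip features
        self.w_g = nn.Conv2d(ch_gate, ch_inter, 1)   # project gating signal
        self.psi = nn.Conv2d(ch_inter, 1, 1)         # attention coefficients

    def forward(self, x, g):
        # g is assumed already upsampled to x's spatial size
        a = torch.sigmoid(self.psi(torch.relu(self.w_x(x) + self.w_g(g))))
        return x * a                                  # gated skip connection

x = torch.randn(1, 64, 128, 128)    # encoder skip features (placeholder sizes)
g = torch.randn(1, 128, 128, 128)   # decoder gating signal
print(AttentionGate(64, 128, 32)(x, g).shape)         # -> (1, 64, 128, 128)
```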