Delineation of subsurface stratigraphy is an essential task in site characterization. A three-dimensional (3D) subsurface geological model that precisely depicts stratigraphic relationships at a specific site can greatly benefit subsequent geotechnical analysis and design. In practice, however, only a limited number of boreholes are usually available from a specific site. It is therefore challenging to properly construct complex stratigraphic relationships in 3D space from sparse measurements obtained from limited boreholes. To tackle this challenge, this study proposes a generative machine learning method, multi-scale generative adversarial networks (MS-GAN), for developing 3D subsurface geological models from limited boreholes and a 3D training image representing prior geological knowledge. The proposed method automatically learns multi-scale 3D stratigraphic patterns extracted from the 3D training image and generates 3D geological models conditioned on limited borehole data in an iterative manner. The method is illustrated using 3D numerical and real data examples, and the results indicate that it can effectively learn the stratigraphic information in a 3D training image to generate multiple 3D realizations from sparse boreholes. Both the accuracy and the associated uncertainty of the 3D realizations are quantified. The effect of the number of boreholes on the performance of the proposed method is also investigated.
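As a rough illustration of the kind of architecture this abstract describes, the minimal PyTorch sketch below shows a 3D convolutional generator conditioned on a borehole mask; the class name Generator3D, channel layout, and layer sizes are illustrative assumptions, not from the paper. In a multi-scale setup, one such GAN would typically be trained per scale, with the coarser realization upsampled and the borehole observations re-imposed at each finer scale.

```python
# Sketch of a 3D generator conditioned on sparse borehole data (names and sizes are assumptions).
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """Maps a noise volume plus a borehole conditioning channel to per-voxel facies logits."""
    def __init__(self, in_channels=2, hidden=32, n_facies=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, hidden, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(hidden, hidden, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(hidden, n_facies, kernel_size=3, padding=1),  # per-voxel facies logits
        )

    def forward(self, noise, borehole_mask):
        # Conditioning: concatenate the sparse borehole observations as an extra channel.
        return self.net(torch.cat([noise, borehole_mask], dim=1))

# Shape check: batch of 2 volumes, 32x32x32 voxels, 1 noise + 1 borehole channel.
g = Generator3D()
logits = g(torch.randn(2, 1, 32, 32, 32), torch.zeros(2, 1, 32, 32, 32))  # -> (2, 4, 32, 32, 32)
```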
ISBN (digital): 9781665468459
ISBN (print): 9781665468459
To improve bioimage quality and diagnostic reliability, physicians need to use injectable contrast media. Recently, many studies have reported the toxicity of these agents, such as the correlation between repeated administration of gadolinium in MRI examinations and nephrogenic systemic fibrosis. The direction is therefore to define innovative techniques that improve bioimage quality while minimizing the quantity of contrast medium, or methods able to enhance the images themselves. The latter can be performed using deep learning based machine learning techniques. In this work, we report preliminary work on the design of a deep learning architecture for bioimage enhancement, called DeLaBE, able to enrich the information in images obtained with low or zero quantities of contrast liquid. DeLaBE is based on a convolutional neural network that reverses the standard workflow by using non-contrast images as the ground truth. During training, the network parameters are updated with respect to a cost function that compares the predicted and the true non-contrast images. The goal is to improve the performance of the enhancement task by increasing the number of inputs, since non-contrast images only require an examination without agent administration.
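A minimal sketch of the training step the abstract outlines is given below, assuming a PyTorch implementation: a small CNN produces a predicted image and an L1 cost against the non-contrast ground truth drives the parameter updates. The architecture, the choice of L1 as the cost function, and the input/target pairing are assumptions for illustration only, not the authors' published design.

```python
# Hedged sketch: CNN trained against non-contrast ground-truth images (architecture and loss assumed).
import torch
import torch.nn as nn

class EnhanceCNN(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

model = EnhanceCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # cost function comparing prediction and reference (assumed L1)

def train_step(input_batch, non_contrast_batch):
    """One update: predict an image and compare it with the true non-contrast image."""
    optimizer.zero_grad()
    pred = model(input_batch)
    loss = loss_fn(pred, non_contrast_batch)  # non-contrast image acts as ground truth
    loss.backward()
    optimizer.step()
    return loss.item()
```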
With the opening of the electricity market, the interaction between grids and users is becoming more and more frequent. Household electricity demand estimation is a significant and indispensable part of the precise demand response that will be required in the future. Large-scale coverage of the Advanced Metering Infrastructure provides a large volume of user electricity data and brings opportunities for residential electricity consumption forecasting but, on the other hand, puts tremendous pressure on the communication link and the data computing center. This paper proposes an efficient edge sparse coding method based on the K-singular value decomposition (K-SVD) algorithm to extract hidden usage behavior patterns (UBPs) from load datasets and reduce the cost of communication, storage, and computation. The loads of representative household appliances are introduced as the initial dictionary of the K-SVD algorithm in order to make the UBPs closer to the residents' daily electricity consumption. Then, a linear support vector machine (SVM)-based method with UBPs is used to predict household electricity demand in the subsequent interval. Experimental results show that the proposed algorithm can effectively follow the trend of the real load curve and realize accurate forecasting of the peak electricity demand. (C) 2018 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.
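The sketch below illustrates the two steps this pipeline describes with off-the-shelf stand-ins: scikit-learn has no K-SVD implementation, so MiniBatchDictionaryLearning (initialized with appliance-load atoms) plays the role of the sparse coder, and LinearSVR stands in for the linear SVM-based predictor. All data, shapes, and the sparsity level are placeholders, not values from the paper.

```python
# Hedged stand-in pipeline: dictionary learning (initialized from appliance loads) + linear SVR.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)
X = rng.random((200, 48))               # 200 daily half-hourly household load profiles (placeholder)
appliance_loads = rng.random((20, 48))  # representative appliance load shapes (placeholder)
y_next = rng.random(200)                # demand in the subsequent interval (placeholder)

dico = MiniBatchDictionaryLearning(
    n_components=appliance_loads.shape[0],
    dict_init=appliance_loads,          # appliance loads as the initial dictionary
    transform_algorithm="omp",
    transform_n_nonzero_coefs=5,        # sparsity of the usage behaviour patterns (assumed)
)
ubp_codes = dico.fit_transform(X)       # sparse codes serve as the UBPs

svm = LinearSVR().fit(ubp_codes, y_next)        # predict next-interval demand from UBPs
print(svm.predict(ubp_codes[:3]))
```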
The widespread popularity of smart meters enables the collection of an immense amount of fine-grained data, thereby realizing a two-way information flow between the grid and the customer, along with personalized interaction services such as precise demand response. These services rely on the accurate estimation of electricity demand, and the key challenges lie in the high volatility and uncertainty of load profiles and the tremendous communication pressure on the data link and computing center. This study proposes a novel two-stage approach for estimating household electricity demand based on edge deep sparse coding. In the first, sparse coding stage, the status of electrical devices is introduced into a deep non-negative k-means singular value decomposition (K-SVD) sparse algorithm to estimate customer behavior. The patterns extracted in the first stage are used to train a long short-term memory (LSTM) network and forecast household electricity demand over the subsequent 30 minutes. The method was implemented in Python and tested on the AMPds dataset. The proposed method outperformed a multi-layer perceptron (MLP) by 51.26%, an autoregressive integrated moving average (ARIMA) model by 36.62%, and an LSTM with shallow K-SVD by 16.4% in terms of mean absolute percentage error (MAPE). In terms of mean absolute error and root mean squared error, the improvements were 53.95% and 36.73% over the MLP, 28.47% and 23.36% over ARIMA, and 11.38% and 18.16% over the LSTM with shallow K-SVD. The experimental results demonstrate that the proposed method provides a considerable and stable improvement in household electricity demand estimation.
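A minimal sketch of the second (forecasting) stage is given below, assuming a PyTorch LSTM that maps a window of sparse-code features to the demand 30 minutes ahead. The window length, feature dimension, layer sizes, and the L1 objective are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of the LSTM forecasting stage (sizes and objective are assumptions).
import torch
import torch.nn as nn

class DemandLSTM(nn.Module):
    def __init__(self, n_features=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, codes):            # codes: (batch, time_steps, n_features) sparse-code window
        out, _ = self.lstm(codes)
        return self.head(out[:, -1, :])  # demand for the next 30-minute interval

model = DemandLSTM()
loss_fn = nn.L1Loss()                    # MAE-style objective, in line with the metrics reported

codes = torch.randn(8, 48, 64)           # batch of 8 windows, 48 steps, 64 code features (placeholder)
pred = model(codes)                      # -> (8, 1) next-interval demand
```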
ISBN (print): 9781479913442; 9781479913435
In this paper, a comparative study of the advantages and disadvantages of Principal Component Analysis (PCA) and Wavelet Transform (WT) based methods, combined with an Artificial Neural Network (ANN) classifier, for lightning stroke classification is presented. Results show that both techniques achieve acceptable performance; however, the PCA technique presents some advantages with respect to the Wavelet Transform technique.
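The sketch below shows, under stated assumptions, how the two feature-extraction routes being compared could feed the same ANN classifier: PCA scores versus concatenated wavelet-decomposition coefficients, each passed to scikit-learn's MLPClassifier. The wavelet family, decomposition level, number of components, and network size are assumptions, not the paper's settings.

```python
# Hedged sketch: PCA features vs. wavelet features, same ANN classifier (parameters assumed).
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def pca_features(signals, n_components=10):
    """PCA route: project each waveform onto the leading principal components."""
    return PCA(n_components=n_components).fit_transform(signals)

def wavelet_features(signals, wavelet="db4", level=3):
    """WT route: concatenate approximation and detail coefficients of each waveform."""
    return np.array([np.concatenate(pywt.wavedec(s, wavelet, level=level)) for s in signals])

def classify(features, labels):
    """Same ANN classifier applied to either feature set."""
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000)
    return clf.fit(features, labels)

# signals: (n_strokes, n_samples) lightning-stroke waveforms; labels: one class per stroke.
```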
Multivariate time series classification is of significance in the machine learning area. In this paper, we present a novel time series classification algorithm that adopts a triangle distance function as the similarity measure, extracts meaningful patterns from the original data, and uses a traditional machine learning algorithm to create a classifier based on the extracted patterns. During the pattern extraction stage, the Gini function is used to determine the starting position in the original data and the length of each pattern. In order to improve computing efficiency, we also apply a sampling method to reduce the search space of patterns. Common datasets are used to evaluate our algorithm and compare it with naive algorithms. Experimental results reveal that much improvement can be gained in terms of interpretability, simplicity, and accuracy.
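The sketch below illustrates the general pattern-based feature construction this abstract describes: each extracted pattern is matched against every sliding window of a series, and the best match becomes one feature for a conventional classifier. The triangle distance itself is not specified in the abstract, so a Euclidean placeholder (distance_fn) and a decision tree stand in here; all names are illustrative.

```python
# Hedged sketch of pattern-to-feature matching; the real triangle distance is not defined here.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def distance_fn(window, pattern):
    return np.linalg.norm(window - pattern)   # placeholder for the triangle distance

def pattern_features(series, patterns):
    """One feature per pattern: its best (smallest) distance to any window of the series."""
    feats = []
    for p in patterns:
        L = len(p)
        windows = [np.asarray(series[i:i + L]) for i in range(len(series) - L + 1)]
        feats.append(min(distance_fn(w, p) for w in windows))
    return np.array(feats)

def fit_classifier(series_list, patterns, labels):
    """Traditional classifier trained on the pattern-based feature vectors."""
    F = np.array([pattern_features(s, patterns) for s in series_list])
    return DecisionTreeClassifier().fit(F, labels)
```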
This paper presents a statistical test and algorithms for pattern extraction and supervised classification of sequential data. First it defines the notion of a prediction suffix tree (PST). This type of tree can be used to efficiently describe a variable-order chain; it performs better than a Markov chain of order L and at a lower storage cost. We propose an improvement of this model based on a statistical test. This test enables us to control the risk of encountering different patterns in the model of the sequence to classify and in the model of its class. Applications to biological sequences are presented to illustrate the procedure. We compare the results obtained with different models (Markov chain of order L, variable-order model, and the statistical test, with or without smoothing), and we set out to show how the choice of the parameters of the models influences performance in these applications. These algorithms can obviously be used in other fields in which the data are naturally ordered. (C) 2003 Elsevier B.V. All rights reserved.
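To make the variable-order idea concrete, the sketch below builds a simple dictionary-based predictor in the spirit of a prediction suffix tree: next-symbol counts are stored per suffix up to a maximum order, and prediction uses the longest suffix observed in training. Node pruning, smoothing, and the paper's statistical test are deliberately omitted; this is only a minimal illustration.

```python
# Hedged sketch of a variable-order next-symbol predictor (PST-like, without pruning or testing).
from collections import defaultdict, Counter

def build_pst(sequences, max_order=3):
    counts = defaultdict(Counter)              # suffix (tuple of symbols) -> next-symbol counts
    for seq in sequences:
        for i in range(len(seq)):
            for k in range(max_order + 1):
                if i - k >= 0:
                    counts[tuple(seq[i - k:i])][seq[i]] += 1
    return counts

def predict_next(counts, context, max_order=3):
    # Back off from the longest matching suffix of the context down to the empty suffix.
    for k in range(min(max_order, len(context)), -1, -1):
        suffix = tuple(context[len(context) - k:])
        if counts[suffix]:
            total = sum(counts[suffix].values())
            return {symbol: c / total for symbol, c in counts[suffix].items()}
    return {}

# Example: next-nucleotide probabilities given the context "AC" (toy sequences).
model = build_pst(["ACGTACGT", "ACGACG"])
print(predict_next(model, "AC"))
```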