Although the classification of chest radiographs has long been an extensively researched topic, interest increased significantly with the onset of the COVID-19 pandemic. Existing results are promising; however, the radiological similarities between COVID-19 and other types of respiratory diseases limit the success of conventional image classification approaches that focus on single instances. This study proposes a novel perspective that conceptualizes COVID-19 pneumonia as a deviation from a normative distribution of typical pneumonia patterns. Taking a population-based view, our method performs distributional anomaly detection, diverging from traditional instance-wise approaches by focusing on sets of scans instead of individual images. Using an autoencoder to extract feature representations, we present instance-based and distribution-based assessments of the separability between COVID-positive and COVID-negative pneumonia radiographs. The results demonstrate that the proposed distribution-based methodology outperforms conventional instance-based techniques in identifying radiographic changes associated with COVID-positive cases. This underscores its potential as an early warning system capable of detecting significant distributional shifts in radiographic data. By continuously monitoring these changes, this approach offers a mechanism for early identification of emerging health trends, potentially signaling the onset of new pandemics and enabling prompt public health responses.
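The abstract does not specify how distribution-level separability is measured; one common choice for comparing sets of feature vectors is the maximum mean discrepancy (MMD), sketched below under that assumption. The autoencoder features and the names `reference` and `incoming` are placeholders, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel between two sets of feature vectors.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(a, b, sigma=1.0):
    # Squared maximum mean discrepancy between feature sets a and b:
    # large values indicate the two sets come from different distributions.
    return (rbf_kernel(a, a, sigma).mean()
            + rbf_kernel(b, b, sigma).mean()
            - 2 * rbf_kernel(a, b, sigma).mean())

# Hypothetical usage with features from an autoencoder bottleneck.
reference = np.random.randn(200, 64)   # COVID-negative pneumonia features
incoming = np.random.randn(50, 64)     # newly acquired batch of scans
score = mmd2(reference, incoming)      # monitor this score over time
```

Monitoring such a set-level statistic over successive batches is one way an early-warning system of the kind described could flag a distributional shift.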
Elevated seismic noise for moderate-size earthquakes recorded at teleseismic distances has limited our ability to see their complexity. We develop a machine-learning-based algorithm to separate noise and earthquake signals that overlap in frequency. The multi-task encoder-decoder model is built around a kernel pre-trained on local (i.e., short-distance) earthquake data (Yin et al., 2022) and is modified by continued learning with high-quality teleseismic data. We denoise teleseismic P waves of deep Mw5.0+ earthquakes and use the clean P waves to estimate source characteristics of these understudied earthquakes with reduced uncertainties. We find a moment-duration scaling of $M_0 \simeq \tau^4$, and a resulting strong scaling of stress drop and radiated energy with magnitude ($\Delta\sigma \simeq M_0^{0.21}$ and $E_R \simeq M_0^{1.24}$). The median radiation efficiency is 5%, a low value compared to crustal earthquakes. Overall, we show that deep earthquakes have weak rupture directivity and few subevents, suggesting that a simple model of a circular crack with radial rupture propagation is appropriate. When accounting for their respective scaling with earthquake size, we find no systematic depth variations of duration, stress drop, or radiated energy within the 100-700 km depth range. Our study supports the findings of Poli and Prieto (2016) with twice as many earthquakes investigated, including earthquakes of lower magnitudes. The vibration of the Earth's ground recorded at seismometers carries the seismic signatures of distant earthquakes superimposed on the Earth's natural or anthropogenic noise surrounding the seismic station. We use artificial intelligence technology to separate the weak signals of distant earthquakes from other sources of ground vibrations unrelated to the earthquakes. The separated signal provides new insights into earthquakes, especially …
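The scaling relations above all take the power-law form $y \propto M_0^{\beta}$. A standard way to estimate such an exponent (not necessarily the authors' procedure) is least-squares regression in log-log space; the catalog values below are synthetic:

```python
import numpy as np

def scaling_exponent(m0, y):
    # Fit log10(y) = beta * log10(M0) + c and return the exponent beta.
    beta, c = np.polyfit(np.log10(m0), np.log10(y), 1)
    return beta, c

# Synthetic catalog: seismic moments (N*m) and stress drops (Pa) with scatter.
m0 = np.logspace(16.5, 19.5, 50)
dsigma = 1e6 * (m0 / 1e17) ** 0.21 * np.random.lognormal(0.0, 0.3, 50)
beta, _ = scaling_exponent(m0, dsigma)   # recovered exponent, near 0.21 here
```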
Anomaly detection is a fundamental problem in data science and one of the most highly studied topics in machine learning. The problem has been addressed in different contexts and domains. This article investigates anomalies within time-series data in the maritime sector. Since no annotated dataset exists for this purpose, we apply an unsupervised approach. Our method builds on the unsupervised learning capability of autoencoders, utilizing the reconstruction error as a signal for anomaly detection. To this end, we estimate the probability density function of the reconstruction error and derive different levels of abnormality from statistical attributes of that density. Our results demonstrate the effectiveness of this approach for localizing irregular patterns in the trajectories of vessel movements.
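A minimal sketch of the error-density step, assuming per-window reconstruction errors from an already-trained autoencoder and kernel density estimation; the quantile levels and all data are illustrative, not the paper's configuration:

```python
import numpy as np
from scipy.stats import gaussian_kde

def abnormality_levels(errors, quantiles=(0.95, 0.99)):
    # Estimate the PDF of reconstruction errors with a KDE, then place
    # graded thresholds at chosen quantiles of the error distribution.
    kde = gaussian_kde(errors)
    thresholds = np.quantile(errors, quantiles)
    return kde, thresholds

# Hypothetical usage: errors = |x - autoencoder(x)| per trajectory window.
errors = np.abs(np.random.randn(5000)) ** 1.5
kde, (warn, alarm) = abnormality_levels(errors)
new_error = 4.2
level = ("normal" if new_error < warn
         else "suspicious" if new_error < alarm
         else "anomalous")
```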
Background: The standard twelve-lead electrocardiogram (ECG) configuration is optimal for the medical diagnosis of diverse cardiac conditions. However, it requires ten electrodes on the patient's limbs and chest, which is uncomfortable and cumbersome. Interlead conversion methods can reconstruct missing leads and enable more comfortable acquisitions, including in wearable devices, while still allowing for adequate diagnoses. Current methodologies for interlead ECG conversion either require multiple reference (input) leads and/or require input signals to be temporally aligned with respect to the ECG landmarks.
Methods: Unlike the methods in the literature, this paper studies the possibility of converting ECG signals into all twelve standard-configuration leads using signal segments from only one reference lead, without temporal alignment (blindly segmented). The proposed methodology is based on a deep learning encoder-decoder U-Net architecture, which is compared with adaptations based on convolutional autoencoders and label refinement networks. Moreover, the method is explored for conversion with one single shared encoder or multiple individual encoders for each lead.
Results: Despite the more challenging settings, the proposed methodology attained state-of-the-art performance in multiple target leads, and both lead I and lead II seem especially suitable for converting certain sets of leads. In cross-database tests, the methodology offered promising results despite acquisition setup differences. Furthermore, results show that the presence of medical conditions does not have a considerable effect on the method's performance.
Conclusions: This study shows the feasibility of converting ECG signals using single-lead, blindly segmented inputs. Although the results are promising, further efforts should be devoted to improving the methodologies, especially their robustness to diverse acquisition setups, in order to be applicable to cardiac health …
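As a rough illustration of single-lead, blindly segmented conversion (not the paper's U-Net, whose exact layout is not given here), a minimal 1D convolutional encoder-decoder in PyTorch could look like the following; all layer sizes and the segment length are assumptions:

```python
import torch
import torch.nn as nn

class LeadConverter(nn.Module):
    # Illustrative 1D encoder-decoder: maps a fixed-length segment of one
    # reference lead to the corresponding segment of one target lead.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, 15, stride=2, padding=7), nn.ReLU(),
            nn.Conv1d(16, 32, 15, stride=2, padding=7), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, 16, stride=2, padding=7), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 16, stride=2, padding=7),
        )

    def forward(self, x):            # x: (batch, 1, samples)
        return self.decoder(self.encoder(x))

model = LeadConverter()
segment = torch.randn(8, 1, 512)     # blindly segmented lead-I windows
reconstructed = model(segment)       # predicted target-lead segments
```

A full reimplementation would add U-Net skip connections and, per the paper, either one shared encoder or one encoder per target lead.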
Graph representation learning has attracted increasing attention in a variety of applications that involve learning on non-Euclidean data. Recently, generative adversarial networks (GAN) have been increasingly applied to graph representation learning, and substantial progress has been made. However, most GAN-based graph representation learning methods apply adversarial learning strategies directly to the update of the vector representations rather than to a deep embedding mechanism. Compared with deep models, these methods are less capable of learning nonlinear features. To address this problem and take full advantage of the essential strengths of GANs, we propose to apply the adversarial idea to the reconstruction mechanism of deep autoencoders. Specifically, the generator and the discriminator are the two basic components of the GAN structure. We use a deep autoencoder as the discriminator, which can capture the highly nonlinear structure of the graph. In addition, another generative model, the generator, is introduced into the adversarial learning system as a competitor. A series of empirical results demonstrates the effectiveness of the new approach.
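The abstract leaves the losses unspecified; one established way to use an autoencoder as a GAN discriminator (as in energy-based GANs, assumed here purely for illustration) is to score samples by reconstruction error:

```python
import torch
import torch.nn as nn

# Sketch of an adversarial setup with an autoencoder as discriminator
# (energy-based-GAN style; an assumption, not the paper's exact losses).
dim, noise_dim = 64, 16
disc = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, dim))  # autoencoder
gen = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, dim))

def recon_error(x):
    # Discriminator score: how well the autoencoder reconstructs x.
    return ((disc(x) - x) ** 2).mean()

real = torch.randn(128, dim)                 # placeholder node embeddings
fake = gen(torch.randn(128, noise_dim))
d_loss = recon_error(real) - recon_error(fake.detach())  # reconstruct real, not fake
g_loss = recon_error(fake)                   # generator tries to fool the autoencoder
```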
Total losses through online banking in the United Kingdom have increased because fraudulent techniques have progressed and now use advanced technology. Relying on historical transaction data alone limits the discovery of the varied patterns of fraudsters. An autoencoder has a high potential to discover fraudulent actions without being hindered by the imbalance of the fraud class. Although the autoencoder model uses only majority-class data, our hypothesis is that if the original data is enriched with various transaction-related feature vectors before being input to the autoencoder, the performance of the detection model improves. We build a new feature engineering framework that can create and select effective features for deep learning in remote banking fraud detection. Based on our proposed framework [19], new features have been created using feature engineering methods that select effective features based on their importance. In the experiment, a real-life transaction dataset provided by a private bank in Europe has been used, and autoencoder models were built with three different types of datasets: with the original data, with the created features, and with the selected effective features. We also adjusted the threshold values (1 and 4) in the autoencoder and evaluated them with the different types of datasets. The results demonstrate that, using the new framework, the deep learning models with the selected features significantly outperform the ones trained on the original data.
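A minimal sketch of the detection stage, assuming an autoencoder trained only on legitimate (majority-class) transactions and a reconstruction-error threshold; the feature engineering, network size, and the paper's threshold values are not reproduced here:

```python
import torch
import torch.nn as nn

d = 30                                       # assumed engineered-feature dimension
ae = nn.Sequential(nn.Linear(d, 8), nn.ReLU(), nn.Linear(8, d))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

legit = torch.randn(4096, d)                 # placeholder legitimate transactions
for _ in range(200):
    opt.zero_grad()
    loss = ((ae(legit) - legit) ** 2).mean() # learn to reconstruct normal behavior
    loss.backward()
    opt.step()

def flag(x, threshold):
    # Transactions the autoencoder reconstructs poorly are suspected fraud.
    with torch.no_grad():
        err = ((ae(x) - x) ** 2).mean(dim=1)
    return err > threshold
```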
The problem of data truncation in Computed Tomography (CT) is caused by missing data when the patient exceeds the Scan Field of View (SFOV) of a CT scanner. The reconstruction of a truncated scan produces severe truncation artifacts both inside and outside the SFOV. We have employed a deep learning-based approach to extend the field of view and suppress truncation artifacts. Our aim is thereby to generate a good estimate of the real patient data, not to provide a perfect, diagnostic image in regions beyond the SFOV of the CT scanner. This estimate could then be used as an input to higher-order reconstruction algorithms [1]. To evaluate the influence of the network structure and layout on the results, three convolutional neural networks (CNNs) have been investigated in this paper: a general CNN called ConvNet, an autoencoder, and the U-Net architecture. Additionally, the impact of L1, L2, structural dissimilarity, and perceptual loss functions on the neural networks' learning has been assessed. The evaluation on a data set comprising 12 truncated test patients demonstrated that the U-Net in combination with the structural dissimilarity loss showed the best performance in terms of image restoration in regions beyond the SFOV of the CT scanner. Moreover, this network produced the best mean absolute error, L1, L2, and structural dissimilarity evaluation measures on the test set compared to the other applied networks. Therefore, it is possible to achieve truncation artifact removal using deep learning techniques.
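Structural dissimilarity is commonly defined as DSSIM = (1 - SSIM) / 2. A simplified, non-windowed version (an assumption; the paper's implementation details are not given) can serve as a differentiable loss:

```python
import torch

def dssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Global (non-windowed) SSIM for brevity; real SSIM uses local windows.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
    return (1 - ssim) / 2            # structural dissimilarity loss

pred = torch.rand(1, 1, 256, 256, requires_grad=True)   # network output
target = torch.rand(1, 1, 256, 256)                     # ground-truth slice
loss = dssim(pred, target)
loss.backward()                      # gradients flow to the network weights
```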
ISBN (print): 9781450387347
The modern stage of development of telecommunication systems is characterized by the widespread deployment of wired and wireless data transmission networks. The growth in the number of users of such networks and the emergence of new multimedia services impose high demands on speed, reliability, and processing delay. One factor in increasing the data transfer rate is on-the-fly compression of information at the link level. The paper describes a method for compressing data in a communication channel using error-correcting BCH codes and a feed-forward neural network autoencoder. The method converts a binary vector of user data into BCH codewords, which are used to train the autoencoder. This ultimately makes it possible to reduce the number of bits transmitted over the communication channel.
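As a sketch of the training setup, assuming some (n, k) BCH code whose codewords have already been produced (the BCH encoding step is abstracted away, and the placeholder data below is random), a feed-forward autoencoder can learn a shorter representation of the n-bit codewords:

```python
import torch
import torch.nn as nn

n, k = 15, 7                       # assumed codeword and message lengths
enc = nn.Sequential(nn.Linear(n, k), nn.Sigmoid())   # compressed representation
dec = nn.Sequential(nn.Linear(k, n), nn.Sigmoid())
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

codewords = torch.randint(0, 2, (2048, n)).float()   # stand-in for BCH codewords
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy(dec(enc(codewords)), codewords)
    loss.backward()
    opt.step()
# After training, only the k-dimensional bottleneck needs to be transmitted.
```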
A novel framework for the classification of lung nodules using computed tomography scans is proposed in this article. To obtain an accurate diagnosis of the detected lung nodules, the proposed framework integrates the following two groups of features: (1) appearance features modeled using a higher-order Markov-Gibbs random field model that can describe the spatial inhomogeneities inside the lung nodule, and (2) geometric features that describe the shape geometry of the lung nodules. The novelty of this article is to accurately model the appearance of the detected lung nodules using a newly developed seventh-order Markov-Gibbs random field model that can capture the spatial inhomogeneities of both small and large detected lung nodules, integrated with the extracted geometric features. Finally, a deep autoencoder classifier is fed with the above two feature groups to distinguish between malignant and benign nodules. To evaluate the proposed framework, we used the publicly available data from the Lung Image Database Consortium: a total of 727 nodules collected from 467 patients. The proposed system shows promise as a valuable tool for the detection of lung cancer, as evidenced by a nodule classification accuracy of 91.20%.
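A minimal sketch of the feature-fusion step, with all dimensions and the classifier head assumed for illustration (the paper's deep autoencoder classifier is not specified in detail here):

```python
import torch
import torch.nn as nn

# Fuse appearance (MGRF-based) and geometric feature vectors, then classify
# from an autoencoder-derived representation.
app_dim, geo_dim, latent = 50, 14, 16
encoder = nn.Sequential(nn.Linear(app_dim + geo_dim, latent), nn.ReLU())
decoder = nn.Linear(latent, app_dim + geo_dim)   # for reconstruction pretraining
head = nn.Linear(latent, 2)                      # benign vs. malignant scores

appearance = torch.randn(727, app_dim)           # placeholder feature vectors
geometry = torch.randn(727, geo_dim)
features = torch.cat([appearance, geometry], dim=1)
logits = head(encoder(features))                 # class scores per nodule
```

In an autoencoder-classifier scheme of this kind, the encoder is typically pretrained with the reconstruction loss before the classification head is attached.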
Understanding the distribution of hydrogeological properties of aquifers is crucial for sustainable groundwater resource development. This research explores the application of deep autoencoder neural networks (AE-NN), assisted by global optimization methods, for estimating hydrogeological parameters in the Quaternary aquifer system in the Debrecen area, Hungary. Traditional methods for estimating aquifer parameters typically depend on field experiments and laboratory analyses, which are both costly and time-consuming, and often fail to account for the heterogeneity of groundwater formations. In this study, deep AE-NN models are trained to extract latent space (LS) representations that capture key features from the available well logs, including spontaneous potential (SP), natural gamma ray (NGR), shallow resistivity (RS), and deep resistivity (RD). The LS log is then correlated with shale volume and hydraulic conductivity, as determined by the Larionov and Csokas methods, respectively. Regression analysis revealed a Gaussian relationship between the LS log and shale volume, and a negative nonlinear relationship with hydraulic conductivity. Global optimization methods, including simulated annealing (SA) and particle swarm optimization (PSO), were used to refine the regression parameters, enhancing the predictive capabilities of the models. The results demonstrate that AE-NN models assisted by global optimization methods can be effectively used to estimate shale volume and hydraulic conductivity, offering a novel and independent approach for estimating hydrogeological parameters critical to groundwater flow and contaminant transport modeling.
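As an illustration of the parameter-refinement step, SciPy's `dual_annealing` (a simulated-annealing variant, used here as a stand-in for the paper's SA/PSO implementations) can fit the parameters of a Gaussian regression between the LS log and shale volume; the data and functional form below are hypothetical:

```python
import numpy as np
from scipy.optimize import dual_annealing

# Synthetic stand-ins for the latent-space log and Larionov shale volume.
ls = np.linspace(-2, 2, 300)
vsh = 0.6 * np.exp(-((ls - 0.3) / 0.8) ** 2) + 0.02 * np.random.randn(300)

def misfit(p):
    # Sum of squared residuals of a Gaussian model Vsh(LS) = a*exp(-((LS-mu)/s)^2).
    a, mu, s = p
    return np.sum((vsh - a * np.exp(-((ls - mu) / s) ** 2)) ** 2)

result = dual_annealing(misfit, bounds=[(0, 1), (-2, 2), (0.1, 3)])
a_opt, mu_opt, s_opt = result.x      # globally refined regression parameters
```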