ISBN (digital): 9781510625488
ISBN (print): 9781510625488
We propose an intensity-based technique to homogenize dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data acquired at six institutions. A total of 234 T1-weighted MRI volumes acquired at the peak of the kinetic curve were obtained for study of the homogenization and unsupervised deep-learning feature extraction techniques. The homogenization uses reference regions of adipose breast tissue, since these are less susceptible to variations due to cancer and contrast medium. For the homogenization, the moments of the distribution of reference pixel intensities across the cases were matched, and the remaining intensity distributions were matched accordingly. A deep stacked autoencoder with six convolutional layers was trained to reconstruct a 128x128 MRI slice and to extract a latent space of 1024 dimensions. We used the latent space from the stacked autoencoder to extract deep embedding features that represented the global and local structures of the imaging data. An analysis using spectral embedding of the latent space shows that, before homogenization, the dominant factor was the dependence on the imaging center; after homogenization, the histograms of the cases from different centers were matched and the center dependence was reduced. The results of the feature analysis indicate that the proposed homogenization approach may lessen the effects of different imaging protocols and scanners in MRI, which may then allow more consistent quantitative analysis of radiomic information across patients and improve the generalizability of machine learning methods across clinical sites. Further study is underway to evaluate the performance of machine learning models with and without image homogenization.
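The moment-matching step described in this abstract can be illustrated with a minimal first-two-moments (mean and standard deviation) linear rescaling. This is a sketch on synthetic 2-D "slices", not the paper's pipeline: the function name `homogenize`, the target moments, and the rectangular reference mask are all assumptions chosen for the example.

```python
import numpy as np

def homogenize(volume, ref_mask, target_mean, target_std):
    """Linearly rescale intensities so that the reference-region
    (e.g. adipose tissue) statistics match the pooled target moments."""
    ref = volume[ref_mask]
    a = target_std / ref.std()          # scale to match the second moment
    b = target_mean - a * ref.mean()    # shift to match the first moment
    return a * volume + b

# Toy "cases" from two centers with very different intensity scales.
rng = np.random.default_rng(0)
vol_a = rng.normal(100.0, 10.0, (64, 64))
vol_b = rng.normal(400.0, 40.0, (64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[:16, :16] = True                   # stand-in reference region

# Map both cases onto common reference moments (here mean 200, std 20).
h_a = homogenize(vol_a, mask, 200.0, 20.0)
h_b = homogenize(vol_b, mask, 200.0, 20.0)
```

After the transform, the reference-region statistics of both cases coincide, so their overall histograms become comparable, which is the effect the abstract reports across centers.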
ISBN (print): 9781728109602
Interference detection using deep neural networks has recently received increasing attention due to their capability to learn rich features of data. In this paper, we propose a low-complexity blind interference detection method. Our method operates on time-frequency-overlapped interference signals and can separate them from the received signal. The proposed method uses an autoencoder network to reconstruct the transmitted signal and separate the interference signal. The autoencoder network consists of several layers of recurrent neural networks (RNNs), which are well suited for learning representations from time-correlated data. Simulation results show that the separated interference signal has the same features as the original interference, so the method not only achieves good interference detection performance but can also realize interference recognition.
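The separate-by-residual idea behind this abstract (reconstruct the transmitted signal, then read the interference off the residual) can be sketched without the RNN autoencoder. Below, a DFT projection stands in for the learned reconstruction; the tone frequencies, amplitudes, and noise level are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Toy received signal: a transmitted tone plus an overlapping interference tone.
rng = np.random.default_rng(1)
n = 1024
t = np.arange(n)
s = np.cos(2 * np.pi * 100 * t / n)          # "transmitted" signal (bin 100)
i = 0.7 * np.cos(2 * np.pi * 140 * t / n)    # interference (bin 140)
y = s + i + 0.05 * rng.standard_normal(n)    # received mixture

# Stand-in for the autoencoder: reconstruct the transmitted component by
# keeping only its dominant DFT bin, then estimate interference as the residual.
Y = np.fft.rfft(y)
S_hat = np.zeros_like(Y)
k = np.argmax(np.abs(Y))                     # strongest bin -> transmitted tone
S_hat[k] = Y[k]
s_hat = np.fft.irfft(S_hat, n)
i_hat = y - s_hat                            # separated interference estimate

corr = np.corrcoef(i_hat, i)[0, 1]           # residual tracks the interference
```

In the paper the reconstruction is learned from data rather than picked from the spectrum, but the detection logic is the same: whatever the reconstruction cannot explain is attributed to interference.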
ISBN (print): 9781479981311
The end-to-end learning of Simultaneous Wireless Information and Power Transfer (SWIPT) over a noisy channel is studied. Adopting a nonlinear model for the Energy Harvester (EH) at the receiver, a joint optimization of the transmitter and the receiver is implemented using Neural Network (NN)-based autoencoders. Modulation constellations for different levels of "power" and "information rate" demands at the receiver are obtained. The numerically optimized signal constellations are in line with previous theoretical results. In particular, it is observed that as the receiver power demand increases, all but one of the modulation symbols are concentrated around the origin, while the remaining symbol is pushed far from the origin.
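The "one symbol shot away from the origin" behavior is what Jensen's inequality predicts when the harvester nonlinearity is convex over the operating range. The toy comparison below uses an illustrative convex harvest function (an assumption for the sketch, not the paper's EH model) to show that, at equal average transmit power, a flash-like constellation harvests more energy than QPSK.

```python
import numpy as np

# Illustrative convex harvester nonlinearity: harvested energy as a
# function of instantaneous power p (a stand-in, not the paper's EH model).
def harvest(p):
    return p + 0.5 * p ** 2

# Two unit-average-power constellations.
qpsk  = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
flash = np.array([0.0, 0.0, 0.0, 2.0])  # three symbols at the origin, one far out

for c in (qpsk, flash):
    assert np.isclose(np.abs(c).__pow__(2).mean(), 1.0)  # same average power

e_qpsk  = harvest(np.abs(qpsk)  ** 2).mean()
e_flash = harvest(np.abs(flash) ** 2).mean()
```

With equal mean power, concentrating energy in rare high-amplitude symbols increases the mean of a convex function of power, which is the trade-off the learned constellations exhibit as the power demand grows.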
ISBN (print): 9781479981311
Unsupervised learning has recently become increasingly important. As one of its key components, the autoencoder (AE) aims to learn a latent feature representation of the data that is more robust and discriminative. However, most AE-based methods focus only on the reconstruction within the encoder-decoder phase, which ignores the inherent relations of the data, i.e., its statistical and geometrical dependence, and easily causes overfitting. To deal with this issue, we propose an Exclusivity Enhanced (EE) unsupervised feature learning approach to improve the conventional AE. To the best of our knowledge, our research is the first to utilize such an exclusivity concept in cooperation with feature extraction within an AE. Moreover, we also make some improvements to the stacked AE structure, especially for the connection of different layers from the decoders, which can be regarded as a weight-initialization trial. The experimental results show that our proposed approach achieves remarkable performance compared with other related methods.
ISBN (print): 9783030208769; 9783030208752
Planar homography estimation refers to the problem of computing a bijective linear mapping of pixels between two images. While this problem has been studied with convolutional neural networks (CNNs), existing methods simply regress the locations of the four corners using fully-connected (dense) layers. This vector representation damages the spatial structure of the corners, since they have a clear spatial order. Moreover, four points are the minimum required to compute the homography, so such an approach is susceptible to perturbation. In this paper, we propose a conceptually simple, reliable, and general framework for homography estimation. In contrast to previous works, we formulate this problem as a perspective field (PF), which models the essence of the homography: the pixel-to-pixel bijection. The PF is naturally learned by the proposed fully convolutional residual network, PFNet, which keeps the spatial order of each pixel. Moreover, since every pixel's displacement can be obtained from the PF, it enables robust homography estimation by utilizing dense correspondences. Our experiments demonstrate that the proposed method outperforms traditional correspondence-based approaches and state-of-the-art CNN approaches in terms of accuracy, while also having a smaller network size. In addition, the new parameterization of this task is general and can be implemented by any fully convolutional network (FCN) architecture.
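A perspective field as described here is just the dense per-pixel displacement induced by the homography, and for a known 3x3 matrix it can be computed in closed form. A minimal numpy sketch (the function name, image size, and example matrix are illustrative, not from the paper):

```python
import numpy as np

def perspective_field(H, h, w):
    """Dense per-pixel displacement (the 'perspective field') induced by a
    3x3 homography H on an h x w image, returned as two (h, w) maps."""
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous coords
    warped = H @ pts
    warped = warped[:2] / warped[2]          # back to inhomogeneous coordinates
    dx = (warped[0] - pts[0]).reshape(h, w)
    dy = (warped[1] - pts[1]).reshape(h, w)
    return dx, dy

# Sanity check: a pure-translation homography moves every pixel equally.
H = np.array([[1.0, 0.0,  3.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0,  1.0]])
dx, dy = perspective_field(H, 8, 8)
```

A network that predicts these two maps directly (as PFNet does) keeps the spatial layout of the prediction aligned with the image grid, instead of flattening it into an 8-vector of corner offsets.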
ISBN (print): 9781728130033
This paper studies a combination of feature selection and ensemble learning to address the feature redundancy and class imbalance problems in software fault prediction. In addition, a deep learning model is used to generate deep representations from the defect data to improve the performance of fault prediction models. The proposed method, GFsSDAEsTSE, is evaluated on 12 NASA datasets, and the results show that GFsSDAEsTSE outperforms state-of-the-art methods on both small and large datasets.
ISBN (print): 9783030342234; 9783030342227
In recent years, network embedding methods based on deep learning for processing network-structured data have attracted widespread attention. Network embedding aims to represent the nodes in a network as low-dimensional dense real-valued vectors that effectively preserve the network structure and other valuable information. Most existing network embedding methods preserve only the network topology and do not take advantage of the rich attribute information in networks. In this paper, we propose a novel deep attributed network embedding framework (RolEANE), which can preserve the network topological structure and the attribute information well at the same time. The framework consists of two parts: one is a structural-role-proximity-enhanced deep autoencoder, which is used to capture the highly nonlinear network topological structure and attribute information; the other is a neighbor optimization strategy that modifies the Skip-Gram model so that it integrates the network topological structure and attribute information to improve the final embedding performance. Experiments on four real datasets show that our method outperforms other state-of-the-art network embedding methods.
ISBN (print): 9783030368081; 9783030368074
When developing multi-layer neural networks (MLNNs), determining an appropriate size can be computationally intensive. Cascade correlation algorithms such as CasPer attempt to address this; however, the associated research often uses artificially constructed data. Additionally, few papers compare their effectiveness with standard MLNNs. This paper takes the ANUstressDB database and applies a genetic-algorithm autoencoder to reduce the number of features. The efficiency and accuracy of CasPer on this dataset are then compared with CasCor, MLNN, KNN, and SVM. Results indicate that the training time for CasPer was much lower than that of the MLNNs, at a small cost in prediction accuracy. CasPer also had training efficiency similar to that of simple algorithms such as SVM, yet a higher predictive ability. This indicates that CasPer would be a good choice for difficult problems that require short training times. Furthermore, the cascading feature of the network makes it better at fitting unknown problems, while remaining almost as accurate as standard MLNNs.
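The paper's reduction step pairs a genetic algorithm with an autoencoder on ANUstressDB. As a much simpler stand-in, the sketch below runs a plain genetic algorithm for feature selection on synthetic data (no autoencoder, no real dataset; the fitness function, population size, and rates are all assumptions) just to illustrate the evolutionary search loop.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: the target depends only on features 0 and 1; the rest are noise.
X = rng.standard_normal((200, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.standard_normal(200)

def fitness(mask):
    """R^2 of a least-squares fit on the selected features, minus a small
    per-feature penalty to reward compact subsets."""
    if not mask.any():
        return -1.0
    beta, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
    resid = y - X[:, mask] @ beta
    return 1.0 - resid.var() / y.var() - 0.01 * mask.sum()

# A minimal genetic algorithm over binary feature masks.
pop = rng.random((20, 8)) < 0.5
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]        # elitism: keep fittest half
    kids = parents[rng.integers(0, 10, 10)].copy()
    cross = parents[rng.integers(0, 10, 10)]
    swap = rng.random((10, 8)) < 0.5               # uniform crossover
    kids[swap] = cross[swap]
    kids ^= rng.random((10, 8)) < 0.05             # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]   # should keep features 0 and 1
```

The same loop applies when the genome instead encodes which inputs feed an autoencoder, as in the paper; only the fitness evaluation changes.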
ISBN (print): 9783030304904; 9783030304898
Mass cytometry is a new high-throughput technology that is becoming a cornerstone of immunology and cell biology research. With technological advancement, the number of cellular characteristics that cytometry can simultaneously quantify grows, making analysis increasingly computationally onerous. In this paper, we investigate the potential of dimensionality reduction techniques to ease the computational burden of clustering cytometry data while minimally diminishing clustering performance. We explore three such techniques: Principal Component Analysis (PCA), autoencoders (AE), and Uniform Manifold Approximation and Projection (UMAP). Thereafter, we employ a recent clustering algorithm, ChronoClust, which clusters the data at each time point into cell populations and explicitly tracks them over time. We evaluate this approach on a 14-dimensional cytometry dataset describing the immune response to West Nile Virus over 8 days in mice. To obtain a broad sample of clustering performance, each of the four datasets (unreduced, PCA-, AE-, and UMAP-reduced) is independently clustered 400 times, using 400 unique ChronoClust parameter value sets. We find that PCA and AE can reduce the computational expense while incurring minimal degradation in clustering and cluster-tracking performance.
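Of the three reduction techniques compared above, PCA is the simplest to sketch. The example below implements it via SVD on toy two-population data; the 14-dimensional shape echoes the dataset's dimensionality, but the populations, sizes, and target dimension are illustrative assumptions (ChronoClust itself is not reproduced here).

```python
import numpy as np

def pca_reduce(X, k):
    """Project centered data onto its top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Toy "cytometry" data: two well-separated cell populations in 14 dimensions.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.5, (100, 14))
b = rng.normal(3.0, 0.5, (100, 14))
X = np.vstack([a, b])

Z = pca_reduce(X, 3)   # 14 -> 3 dimensions before clustering
```

The point the paper makes empirically is visible even here: the two populations remain far apart after the projection, so a downstream clustering algorithm works on 3 columns instead of 14.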
ISBN (print): 9783030295639; 9783030295622
With the surge of network data, attributed networks are widely applied in various applications. Recently, how to embed an attributed network into a low-dimensional representation space has gained a lot of attention. Nodes in attributed networks not only have structural information but also often contain attribute and label information. These types of information can help to learn effective node representations, since they strengthen the similarities between nodes. However, most existing embedding methods consider either the network structure only or both the structural and attribute information, ignoring node labels; they produce merely suboptimal results because they fail to leverage all of this information. To address this issue, we propose a novel method called EANE, which Exploits tri-types of information (i.e., network structure, node attributes, and labels) for learning an effective Attributed Network Embedding. Specifically, EANE consists of three modules that separately encode these three types of information while preserving their correlations. Experimental results on three real-world datasets show that EANE outperforms state-of-the-art embedding algorithms.