Variations in the commands executed during an attack can be used to determine the behavioural patterns of IoT attacks. Existing approaches rely on the domain knowledge of security experts to identify behavioural patterns and to categorise and classify cyber attacks. We propose an autoencoder (AE)-based feature construction approach that removes the need to manually correlate commands and generates an efficient representation by automatically learning the semantic similarity between input features extracted from command data. We applied three clustering algorithms, namely K-means, Gaussian Mixture Models, and Density-Based Spatial Clustering of Applications with Noise (DBSCAN), to our data set of AE features. We discuss the resulting clustering arrangements to understand how changes in commands affect the behavioural patterns of attacks and how attacks are grouped into the same or different clusters. Evaluation of our feature construction approach shows that the clustering algorithms grouped attacks sharing more common feature values than clustering with the original features did. Moreover, we performed a comparative analysis of two existing feature extraction approaches on our data set, considering the type of analysis involved, the generalisability of the features, their coverage of the data set, and the resulting clustering arrangements. We found that the challenges identified in applying the existing approaches can be addressed with our proposed approach, and that improving features with the AE yields more meaningful clustering interpretations. (c) 2021 Elsevier B.V. All rights reserved.
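As a rough illustration of this kind of pipeline, the sketch below trains a small autoencoder on a placeholder command-feature matrix and clusters the bottleneck codes with K-means, a Gaussian mixture model, and DBSCAN. All dimensions and hyperparameters are assumptions for illustration and are not taken from the paper.

```python
# Sketch: learn AE features from command-derived vectors, then cluster them.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans, DBSCAN
from sklearn.mixture import GaussianMixture

X = np.random.rand(500, 64).astype("float32")   # placeholder command-feature matrix

class AE(nn.Module):
    def __init__(self, d_in=64, d_code=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, d_code))
        self.dec = nn.Sequential(nn.Linear(d_code, 32), nn.ReLU(), nn.Linear(32, d_in))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

model = AE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xb = torch.from_numpy(X)
for _ in range(200):                      # reconstruction training loop
    recon, _ = model(xb)
    loss = nn.functional.mse_loss(recon, xb)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    Z = model(xb)[1].numpy()              # AE features used for clustering

labels_km  = KMeans(n_clusters=5, n_init=10).fit_predict(Z)
labels_gmm = GaussianMixture(n_components=5).fit_predict(Z)
labels_db  = DBSCAN(eps=0.5, min_samples=5).fit_predict(Z)
```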
ISBN (print): 9781665452458
PPG-enabled wearables are increasingly prevalent and have the potential to significantly improve remote healthcare systems. However, their implementation can be challenging under resource constraints. To alleviate these constraints, this article proposes a lightweight autoencoder-based lossy PPG compression system for data-streaming applications. The system reduces the amount of transmitted data by a factor of 8 while preserving the signal information, with a signal RMSE of 0.026 and a peak MAE of 0.020. This is made possible by a loss-conditional training approach for task-specific learning. Its practical functionality and advantages are verified on a resource-constrained Docker container and an Arduino Nano 33 BLE microcontroller board. The system cuts transmission energy consumption by 84%.
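A rough sketch of a lightweight compressor in this spirit is shown below: a 1D convolutional autoencoder with three stride-2 stages yields an 8:1 reduction of a PPG window. The window length, channel counts, and the omission of the loss-conditional training scheme are all assumptions, not the paper's actual design.

```python
# Sketch: 1D convolutional autoencoder with an 8x smaller bottleneck.
import torch
import torch.nn as nn

class PPGCompressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(                       # 256 samples -> 32 codes
            nn.Conv1d(1, 8, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(8, 8, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(8, 1, 5, stride=2, padding=2),
        )
        self.dec = nn.Sequential(                       # 32 codes -> 256 samples
            nn.ConvTranspose1d(1, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        code = self.enc(x)          # only this tensor would be transmitted
        return self.dec(code), code

window = torch.randn(1, 1, 256)     # one PPG window (batch, channel, samples)
recon, code = PPGCompressor()(window)
print(window.shape, code.shape, recon.shape)   # 256 samples vs. 32 codes = 8:1
```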
The weak fault acoustic emission (AE) signals collected under the actual operating conditions of an engine are often submerged in strong background noise. This paper proposes a denoising method for AE signals based on the combination of an autoencoder and wavelet packet decomposition (AE-WPD) to address this problem. First, the wavelet packet is used to decompose engine background-noise signals and noise-containing fault AE signals, enhancing the local analysis capability of the autoencoder. A dataset is then created for each frequency band after decomposition: the background-noise signals are regarded as the normal dataset, the noise-containing fault signals as the outlier dataset, and the difference between each frequency band of the two is analyzed. The autoencoder model is trained, validated, and tested for effectiveness, and a comparison is made with other commonly used denoising methods using four types of evaluation indexes for quantitative assessment. Finally, real engine background-noise signals at different signal-to-noise ratios (SNRs) are added to the fault AE signals to verify the robustness of the proposed AE-WPD method. The experimental results show that AE-WPD outperforms the other denoising methods at different SNRs, laying the foundation for engine structural condition monitoring and subsequent fault identification and localization.
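The sketch below illustrates the general idea, assuming PyWavelets for the wavelet packet decomposition and one small autoencoder per frequency band trained only on background noise; the wavelet family, decomposition level, and network sizes are assumptions rather than the paper's settings.

```python
# Sketch: wavelet packet decomposition plus one per-band autoencoder.
import numpy as np
import pywt
import torch
import torch.nn as nn

def wpd_bands(signal, wavelet="db4", level=3):
    """Return the coefficient arrays of all leaf nodes at the given level."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    return [node.data for node in wp.get_level(level, order="natural")]

noise = np.random.randn(4096)            # placeholder background-noise segment
bands = wpd_bands(noise)

class BandAE(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n, n // 4), nn.ReLU())
        self.dec = nn.Linear(n // 4, n)
    def forward(self, x):
        return self.dec(self.enc(x))

# Train one autoencoder per frequency band on background noise (outline only).
models = []
for band in bands:
    x = torch.tensor(band, dtype=torch.float32).unsqueeze(0)
    ae = BandAE(x.shape[1])
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(100):
        loss = nn.functional.mse_loss(ae(x), x)
        opt.zero_grad(); loss.backward(); opt.step()
    models.append(ae)
```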
Deep learning has brought great improvements in the performance of visual tasks. Image retrieval is the task of extracting, from a database, the images that are visually similar to a query image; feature matching is performed to rank the images. Various hand-designed features have been devised in the past to represent images. Nowadays, the power of deep learning is being utilized for automatic feature learning from data in the field of biomedical image analysis. The autoencoder and the Siamese network are two deep learning models for learning a latent space (i.e., features or embeddings). The autoencoder works by reconstructing the image from the latent space, whereas the Siamese network uses triplets to learn intra-class similarity and inter-class dissimilarity; moreover, the autoencoder is unsupervised while the Siamese network is supervised. We propose a Joint Triplet Autoencoder Network (JTANet) that incorporates triplet learning into the autoencoder framework. Supervised learning for the Siamese network and unsupervised learning for the autoencoder are performed jointly, and the encoder of the autoencoder is shared with the Siamese network and referred to as the Siamcoder network. Features are extracted with the trained Siamcoder network for retrieval. Experiments are performed on the Histopathological Routine Colon Cancer dataset. We observe promising performance of the proposed JTANet model against the autoencoder and Siamese models for colon cancer nuclei retrieval in histopathological images.
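A minimal sketch of such a joint objective, assuming a shared encoder trained with a reconstruction loss plus a triplet margin loss, is given below; layer sizes and the loss weighting are illustrative and do not reproduce the actual JTANet architecture.

```python
# Sketch: encoder shared between an autoencoder branch and a triplet branch.
import torch
import torch.nn as nn

class Siamcoder(nn.Module):
    def __init__(self, d_in=1024, d_code=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(), nn.Linear(256, d_code))
        self.dec = nn.Sequential(nn.Linear(d_code, 256), nn.ReLU(), nn.Linear(256, d_in))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

model = Siamcoder()
triplet = nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative step with a random triplet of flattened image patches.
anchor, positive, negative = (torch.randn(8, 1024) for _ in range(3))
rec_a, z_a = model(anchor)
_, z_p = model(positive)
_, z_n = model(negative)
loss = nn.functional.mse_loss(rec_a, anchor) + triplet(z_a, z_p, z_n)
opt.zero_grad(); loss.backward(); opt.step()
```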
Macular edema is a retinal complication caused by excess fluid between the retinal layers. It can lead to swelling of the retina and cause severe vision impairment if not detected at an early stage. This paper presents a robust Edge Attention Network (EANet) for segmenting the different retinal fluids, namely Intraretinal Fluid (IRF), Subretinal Fluid (SRF), and Pigment Epithelial Detachment (PED), from Spectral Domain Optical Coherence Tomography (SD-OCT) images. The proposed method employs a novel image enhancement technique that filters OCT images with a BM3D (Block Matching and 3D Filtering) filter followed by Contrast Limited Adaptive Histogram Equalization (CLAHE), and uses a linear filter based on a multivariate Taylor series to acquire the edge maps of the OCT images. A novel autoencoder-based multiscale attention mechanism is incorporated into EANet, which feeds on both the OCT image and the edge-enhanced OCT image at every level of the encoder. EANet has been trained and tested for the segmentation of all three fluid types on the RETOUCH challenge dataset, and for the segmentation of IRF on the OPTIMA challenge and DUKE DME datasets. The average Dice coefficients of IRF, SRF, and PED on the RETOUCH dataset are 0.683, 0.873, and 0.756, respectively, and 0.805, 0.77, and 0.756 for the Cirrus, Spectralis, and Topcon vendors, respectively. The proposed method outperformed all teams that participated in the OPTIMA challenge on all vendor images in terms of Dice coefficient. The average Dice coefficients of IRF on the OPTIMA and DUKE DME datasets are 0.84 and 0.72, respectively.
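The sketch below shows one plausible form of an edge attention block in this spirit, in which encoder features of the OCT image are gated by features of the edge-enhanced image at the same scale; it is illustrative only and omits the BM3D/CLAHE/Taylor-series preprocessing and the paper's exact multiscale mechanism.

```python
# Sketch: edge-guided attention gate over image encoder features.
import torch
import torch.nn as nn

class EdgeAttentionBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.img_proj = nn.Conv2d(ch, ch, kernel_size=1)
        self.edge_proj = nn.Conv2d(ch, ch, kernel_size=1)
        self.gate = nn.Sequential(nn.Conv2d(ch, 1, kernel_size=1), nn.Sigmoid())
    def forward(self, img_feat, edge_feat):
        # attention map computed from the sum of projected image and edge features
        attn = self.gate(torch.relu(self.img_proj(img_feat) + self.edge_proj(edge_feat)))
        return img_feat * attn          # edge-guided reweighting of image features

block = EdgeAttentionBlock(ch=32)
img_feat = torch.randn(1, 32, 64, 64)   # encoder features of the OCT image
edge_feat = torch.randn(1, 32, 64, 64)  # encoder features of the edge-enhanced image
out = block(img_feat, edge_feat)
print(out.shape)                        # torch.Size([1, 32, 64, 64])
```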
Link prediction aims to predict missing links or eliminate spurious links by exploiting known complex-network information. As an unsupervised linear feature representation method, a matrix factorization (MF)-based autoencoder (AE) can project a high-dimensional data matrix into a low-dimensional latent space. However, most traditional link prediction methods based on MF or AE adopt shallow models and a single adjacency matrix, which cannot adequately learn and represent network features and are susceptible to noise. In addition, because some methods require a symmetric input matrix, they can only be used on undirected networks. We therefore propose a deep manifold matrix factorization autoencoder model using a global connectivity matrix, called DM-MFAE-G. The model uses the PageRank algorithm to obtain a global connectivity matrix between the nodes of the complex network. DM-MFAE-G performs deep matrix factorization on the local adjacency matrix and the global connectivity matrix, respectively, to obtain global and local multi-layer feature representations that contain rich structural information. In this paper, the model is solved by an alternating iterative optimization method, and the convergence of the algorithm is proved. Comprehensive experiments on different real networks demonstrate that the global connectivity matrix and manifold constraints introduced by DM-MFAE-G significantly improve link prediction performance on both directed and undirected networks.
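One plausible reading of the global connectivity matrix is a matrix of personalized PageRank scores between node pairs, sketched below with networkx; the damping factor and this particular construction are assumptions on my part, not details taken from the paper.

```python
# Sketch: PageRank-based global connectivity matrix alongside the local adjacency matrix.
import networkx as nx
import numpy as np

G = nx.gnp_random_graph(30, 0.1, directed=True)   # placeholder directed network
nodes = list(G.nodes())
n = len(nodes)

C = np.zeros((n, n))                               # global connectivity matrix
for i, u in enumerate(nodes):
    pr = nx.pagerank(G, alpha=0.85, personalization={v: float(v == u) for v in nodes})
    C[i] = [pr[v] for v in nodes]

A = nx.to_numpy_array(G, nodelist=nodes)           # local adjacency matrix
# A and C would then both be fed to the deep matrix factorization autoencoder.
```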
As Internet of Things (IoT) applications and devices grow rapidly, cyber-attacks on IoT networks and systems are also on the rise, increasing the threat to security and privacy. Botnets are among the dominant threats, as they can easily compromise devices attached to an IoT network; the compromised devices behave like normal ones, which makes them difficult to recognize. Several intelligent approaches, including deep learning and machine learning techniques, have been introduced to improve the detection accuracy for this type of cyber-attack, and dimensionality reduction methods are commonly applied during preprocessing. This work proposes a deep autoencoder dimensionality reduction method combined with an Artificial Neural Network (ANN) classifier as a botnet detection system for IoT networks. Experiments were carried out using 3-layer, 4-layer, and 5-layer autoencoders to preprocess data from the MedBIoT dataset. The results show that the 5-layer autoencoder performs best, with an accuracy of 99.72%, precision of 99.82%, sensitivity of 99.82%, specificity of 99.31%, and F1-score of 99.82%. In addition, the 5-layer autoencoder model reduced the dataset size from 152 MB to 12.6 MB (a reduction of 91.2%). Experiments on the N_BaIoT dataset also achieved a very high accuracy of up to 99.99%.
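A minimal sketch of the two-stage pipeline, assuming a deep encoder that compresses traffic features followed by a small ANN classifier, is shown below; the feature dimension, layer widths, and this interpretation of "5-layer" are illustrative assumptions.

```python
# Sketch: deep autoencoder for dimensionality reduction + ANN botnet classifier.
import torch
import torch.nn as nn

class DeepAE(nn.Module):
    def __init__(self, d_in=115, d_code=10):
        super().__init__()
        self.enc = nn.Sequential(                      # 5-layer encoder
            nn.Linear(d_in, 80), nn.ReLU(),
            nn.Linear(80, 50), nn.ReLU(),
            nn.Linear(50, 30), nn.ReLU(),
            nn.Linear(30, 20), nn.ReLU(),
            nn.Linear(20, d_code),
        )
        self.dec = nn.Sequential(
            nn.Linear(d_code, 20), nn.ReLU(),
            nn.Linear(20, 30), nn.ReLU(),
            nn.Linear(30, 50), nn.ReLU(),
            nn.Linear(50, 80), nn.ReLU(),
            nn.Linear(80, d_in),
        )
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

classifier = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))

x = torch.randn(4, 115)                 # placeholder traffic-feature batch
recon, z = DeepAE()(x)                  # stage 1: compressed representation
logits = classifier(z)                  # stage 2: botnet vs. benign
print(recon.shape, z.shape, logits.shape)
```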
In the big data era, multi-source heterogeneous data have become the biggest obstacle to data sharing because of their high dimensionality and inconsistent structure. Using text classification to solve the ontology construction and mapping problems of multi-source heterogeneous data can not only reduce manual work but also improve accuracy and efficiency. This paper proposes an ontology construction and mapping scheme based on a hybrid neural network and an autoencoder. First, the proposed text classification method uses a multi-core convolutional neural network to capture local features and an improved Bidirectional Long Short-Term Memory network to compensate for the convolutional network's inability to obtain context-related information. Second, a similarity matching method is used for ontology mapping, which integrates an autoencoder to improve anti-interference ability. Several sets of experiments were carried out to test the validity of the proposed ontology construction and mapping scheme.
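The sketch below gives one plausible reading of the hybrid classifier: parallel convolutions with several kernel sizes for local features plus a bidirectional LSTM for context; vocabulary size, embedding dimension, and kernel sizes are assumptions, and the "improved" BiLSTM variant is not reproduced.

```python
# Sketch: hybrid CNN + BiLSTM text classifier for ontology construction.
import torch
import torch.nn as nn

class HybridTextClassifier(nn.Module):
    def __init__(self, vocab=5000, emb=128, n_classes=10):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb, 64, k, padding=k // 2) for k in (3, 4, 5)]
        )
        self.bilstm = nn.LSTM(emb, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(3 * 64 + 2 * 64, n_classes)
    def forward(self, tokens):
        e = self.embed(tokens)                          # (batch, seq, emb)
        conv_feats = [c(e.transpose(1, 2)).max(dim=2).values for c in self.convs]
        _, (h, _) = self.bilstm(e)                      # final hidden states
        lstm_feat = torch.cat([h[0], h[1]], dim=1)      # forward + backward
        return self.fc(torch.cat(conv_feats + [lstm_feat], dim=1))

tokens = torch.randint(0, 5000, (2, 40))                # placeholder token ids
print(HybridTextClassifier()(tokens).shape)             # torch.Size([2, 10])
```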
With the advent of the big data era, data quality is becoming ever more critical. Among many factors, missing values are a primary issue, so developing effective imputation models is a key topic in the research community. Recently, a major research direction has been to employ neural network models such as self-organizing maps or autoencoders to fill missing values. However, these classical methods can hardly discover interrelated features and common features among data attributes simultaneously. In particular, classical autoencoders often learn invalid constant mappings, which dramatically hurts filling performance. To solve these problems, we propose a missing-value-filling model based on a feature-fusion-enhanced autoencoder. We first add to the autoencoder a hidden layer consisting of de-tracking neurons and radial basis function neurons, which enhances its ability to learn interrelated features and common features. We also develop a missing-value-filling strategy based on dynamic clustering that is incorporated into an iterative optimization process. This design enhances multi-dimensional feature fusion and thus improves dynamic collaborative missing-value-filling performance. The effectiveness of the proposed model is validated by extensive experiments against a variety of baseline methods on thirteen data sets.
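As a rough baseline illustration of autoencoder-based filling, the sketch below initialises missing cells with column means, trains an autoencoder on the observed entries, and iteratively refills the missing cells from its reconstructions; it does not reproduce the paper's de-tracking/RBF hidden layer or dynamic-clustering strategy.

```python
# Sketch: generic iterative autoencoder imputation loop.
import numpy as np
import torch
import torch.nn as nn

X = np.random.rand(200, 12).astype("float32")
mask = np.random.rand(*X.shape) < 0.2                    # True where values are missing
X_filled = X.copy()
X_filled[mask] = np.nan
col_means = np.nanmean(X_filled, axis=0)
X_filled[mask] = np.take(col_means, np.where(mask)[1])   # initial fill with column means

ae = nn.Sequential(nn.Linear(12, 6), nn.Tanh(), nn.Linear(6, 12))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
obs = torch.tensor(~mask, dtype=torch.float32)            # weight only observed cells

for _ in range(50):                                       # alternate training and refilling
    xb = torch.from_numpy(X_filled)
    recon = ae(xb)
    loss = ((recon - xb) ** 2 * obs).sum() / obs.sum()    # loss on observed entries only
    opt.zero_grad(); loss.backward(); opt.step()
    X_filled[mask] = recon.detach().numpy()[mask]         # refill missing cells
```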
Background: Deep learning (DL) has been widely used for diagnosis and prognosis prediction of numerous frequently occurring diseases. Generally, DL models require large datasets to perform accurate and reliable prognosis prediction and to avoid overlearning. However, prognosis prediction for rare diseases is still limited owing to the small number of cases, resulting in small datasets.
Methods: This paper proposes a multimodal DL method to predict the prognosis of patients with malignant pleural mesothelioma (MPM) from a small number of 3D positron emission tomography-computed tomography (PET/CT) images and clinical data. A 3D convolutional conditional variational autoencoder (3D-CCVAE), which adds a 3D convolutional layer and a conditional VAE to process 3D images, was used for dimensionality reduction of the PET images. We developed a two-step model that performs dimensionality reduction using the 3D-CCVAE, which is resistant to overlearning. In the first step, clinical data are input to condition the model and dimensionality reduction of the PET images is performed, resulting in more efficient dimension reduction. In the second step, a subset of the dimensionally reduced features and the clinical data are combined to predict 1-year survival with a random forest classifier. To demonstrate the usefulness of the 3D-CCVAE, we created a model without the conditional mechanism (3D-CVAE), one without the variational mechanism (3D-CCAE), and one without an autoencoder (without AE), and compared their prediction results. We used PET images and clinical data of 520 patients with histologically proven MPM. The data were randomly split in a 2:1 ratio (train : test) and three-fold cross-validation was performed. The models were trained on the training set and evaluated on the test set. The area under the receiver operating characteristic curve (AUC) was calculated for all models from their 1-year survival predictions, and the results were compared.
Results: We obtained
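A minimal sketch of a 3D convolutional conditional VAE of the kind described is given below: the volume is encoded to a latent Gaussian conditioned on clinical covariates, sampled via the reparameterisation trick, and decoded; the volume size, latent size, and the way the condition is injected are assumptions, and the downstream random forest step is only indicated in a comment.

```python
# Sketch: 3D convolutional conditional VAE for PET volume dimensionality reduction.
import torch
import torch.nn as nn

class CCVAE3D(nn.Module):
    def __init__(self, cond_dim=4, latent=16):
        super().__init__()
        self.enc = nn.Sequential(                     # 1x32x32x32 -> flattened features
            nn.Conv3d(1, 4, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(4, 8, 3, stride=2, padding=1), nn.ReLU(), nn.Flatten(),
        )
        feat = 8 * 8 * 8 * 8
        self.mu = nn.Linear(feat + cond_dim, latent)
        self.logvar = nn.Linear(feat + cond_dim, latent)
        self.dec = nn.Sequential(
            nn.Linear(latent + cond_dim, feat), nn.ReLU(),
            nn.Unflatten(1, (8, 8, 8, 8)),
            nn.ConvTranspose3d(8, 4, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(4, 1, 4, stride=2, padding=1),
        )
    def forward(self, vol, cond):
        h = torch.cat([self.enc(vol), cond], dim=1)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
        recon = self.dec(torch.cat([z, cond], dim=1))
        return recon, mu, logvar

vol = torch.randn(2, 1, 32, 32, 32)       # placeholder PET volumes
cond = torch.randn(2, 4)                   # placeholder clinical covariates
recon, mu, logvar = CCVAE3D()(vol, cond)
print(recon.shape, mu.shape)               # mu would feed the random forest classifier
```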