ISBN: (Print) 9798350326871
The prevalence of cyberattacks has been increasing annually, emphasizing the need for data-driven approaches for the prevention and mitigation of attacks. Analysis of network data facilitates the acquisition of usage information, thereby enabling the detection of anomalous behavior that deviates from established patterns. The identification of such anomalies may indicate potential threats to the network and may assist security management systems in preempting and mitigating those threats. The present study employs unsupervised machine learning algorithms with dimensionality reduction to detect anomalies in the NSL-KDD network dataset. Although real network datasets are typically unlabeled, the NSL-KDD data is labeled, which allows the results to be evaluated. We defined and applied a methodology to detect anomalies in the dataset, using the following unsupervised algorithms: Local Outlier Factor (LOF), Elliptic Envelope (EE), SGD One-Class SVM (SGD), and Isolation Forest (IF). For the kind of data we analysed, the unsupervised learning algorithm that obtained the highest AUC and F1-score was the Elliptic Envelope.
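As an illustration of the approach described in this abstract, a minimal scikit-learn sketch (not the authors' exact pipeline) is shown below. The synthetic data, PCA dimensionality, and contamination rate are illustrative assumptions.

```python
# Unsupervised anomaly detection with the four algorithms named above,
# after standardization and PCA-based dimensionality reduction.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import LocalOutlierFactor
from sklearn.covariance import EllipticEnvelope
from sklearn.linear_model import SGDOneClassSVM
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))           # stand-in for NSL-KDD features
y = (rng.random(1000) < 0.1).astype(int)  # 1 = anomaly (labels used only for scoring)
X[y == 1] += 4                            # shift anomalous rows away from normal data

X_red = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(X))

models = {
    "LOF": LocalOutlierFactor(novelty=False, contamination=0.1),
    "EE": EllipticEnvelope(contamination=0.1),
    "SGD": SGDOneClassSVM(nu=0.1),
    "IF": IsolationForest(contamination=0.1, random_state=0),
}
for name, model in models.items():
    if name == "LOF":
        pred = (model.fit_predict(X_red) == -1).astype(int)
        score = -model.negative_outlier_factor_        # higher = more anomalous
    else:
        pred = (model.fit(X_red).predict(X_red) == -1).astype(int)
        score = -model.decision_function(X_red)        # higher = more anomalous
    print(name, "AUC=%.3f" % roc_auc_score(y, score), "F1=%.3f" % f1_score(y, pred))
```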
ISBN: (Print) 9798350387032; 9798350387025
The reduction of CO2 emissions is a critical imperative in the pursuit of sustainable energy solutions. A viable avenue for mitigating CO2 emissions within the power sector is the adoption of energy communities, which empower communities to achieve decentralization and sustainability. Furthermore, the convergence of energy communities and machine learning algorithms represents a crucial frontier in the development of contemporary energy systems. In this context, this paper implements a demand response-based model to balance the energy community's consumption and generation, considering the support of unsupervised learning algorithms and the active participation of the members in the planned demand response events. The planning of a demand response event is based on a ranking obtained from a set of unsupervised learning evaluation metrics applied to the members' data. This work's novelty consists of analyzing the impact of these metrics on the ranking of members and, consequently, on the planning of the demand response event. For that, different combinations of these metrics are considered during the members' ranking. The reliability of this approach was assessed using an energy community consisting of 50 buildings. The results show that the energy community could reduce CO2 emissions by 1.1 kg and increase sustainability by 13% by implementing the presented model.
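The abstract does not list the specific evaluation metrics; the sketch below assumes common clustering metrics (silhouette, Calinski-Harabasz, Davies-Bouldin) computed on each member's daily load profiles and combined with equal weights, which is only one possible instantiation of the ranking idea.

```python
# Rank community members by averaging their rank positions across several
# unsupervised clustering evaluation metrics (assumed metrics and weights).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, calinski_harabasz_score, davies_bouldin_score

rng = np.random.default_rng(1)
members = {f"building_{i}": rng.random((30, 24)) for i in range(5)}  # 30 days x 24 h

scores = {}
for name, profiles in members.items():
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
    scores[name] = (
        silhouette_score(profiles, labels),          # higher is better
        calinski_harabasz_score(profiles, labels),   # higher is better
        -davies_bouldin_score(profiles, labels),     # lower is better, so negate
    )

names = list(scores)
ranks = np.zeros(len(names))
for m in range(3):                                   # rank per metric, best first
    order = np.argsort([-scores[n][m] for n in names])
    for rank, idx in enumerate(order):
        ranks[idx] += rank
for n, r in sorted(zip(names, ranks), key=lambda t: t[1]):
    print(n, "average rank position:", r / 3)
```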
In this paper we propose a novel recursive algorithm that models the neighborhood mechanism commonly used in self-organizing neural networks (NNs). The neighborhood can be viewed as a map of connections between particular neurons in the NN. Its relevance lies in a strong reduction of the number of neurons that remain inactive during the learning process, which in turn substantially reduces the quantization error. This mechanism is usually difficult to implement, especially if the NN is realized as a specialized chip or in Field Programmable Gate Arrays (FPGAs). The main challenge in this case is how to realize a proper, collision-free, multi-path flow of activation signals, especially if the neighborhood range is large. The proposed recursive algorithm allows for a very efficient realization of such a mechanism. One of the major advantages is that different learning algorithms and topologies of the NN are easily realized in one simple function. An additional feature is that the proposed solution accurately models hardware implementations of the neighborhood mechanism. (C) 2015 Elsevier Inc. All rights reserved.
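A minimal software sketch of recursive neighborhood propagation on a rectangular SOM grid is given below: the winning neuron triggers its neighbours, which recursively trigger theirs, and already-reached neurons are not reprocessed, which avoids collisions. The grid size, the 4-neighbour topology, and the rectangular neighborhood function are illustrative assumptions, not the paper's hardware design.

```python
# Recursive propagation of the neighborhood signal from the winning neuron.
import numpy as np

def propagate(grid_shape, winner, radius):
    """Return an array with each neuron's neighborhood level
    (0 for the winner, 1 for its direct neighbours, ...), or -1 if inactive."""
    levels = -np.ones(grid_shape, dtype=int)

    def visit(r, c, level):
        if level > radius or not (0 <= r < grid_shape[0] and 0 <= c < grid_shape[1]):
            return
        if levels[r, c] != -1 and levels[r, c] <= level:
            return  # already reached via a shorter path: collision avoided
        levels[r, c] = level
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-neighbour topology
            visit(r + dr, c + dc, level + 1)

    visit(*winner, 0)
    return levels

# Example: 7x7 map, winner at (3, 3), neighborhood range 2.
print(propagate((7, 7), (3, 3), 2))
```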
ISBN: (Print) 9780819481689
Determining methods to secure the process of data fusion against attacks by compromised nodes in wireless sensor networks (WSNs), and to quantify the uncertainty that may exist in the aggregation results, is a critical issue in mitigating the effects of intrusion attacks. Published research has introduced the concept of the trustworthiness (reputation) of a single sensor node. Reputation is evaluated using an information-theoretic concept, the Kullback-Leibler (KL) distance, and is added to the set of security features. In data aggregation, an opinion, a metric of the degree of belief, is generated to represent the uncertainty in the aggregation result. As aggregate information is disseminated along routes to the sink node(s), its corresponding opinion is propagated and regulated by Josang's belief model. By applying subjective logic to the opinion to manage trust propagation, the uncertainty inherent in aggregation results can be quantified for use in decision making. The concepts of reputation and opinion are modified to allow their application to a class of dynamic WSNs. Using reputation as a factor in determining interim aggregate information is equivalent to implementing a reputation-based security filter at each processing stage of data fusion, thereby improving the intrusion detection and identification results based on unsupervised techniques. In particular, the reputation-based version of the probabilistic neural network (PNN) learns the signature of normal network traffic, with the random probability weights normally used in the PNN replaced by the trust-based quantified reputations of sensor data or subsequent aggregation results generated by the sequential implementation of a version of Josang's belief model. A two-stage intrusion detection and identification algorithm is implemented to overcome the problems of large sensor data loads and resource restrictions in WSNs. Performance of the two-stage algorithm is assessed in simulations of WSN scenarios.
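The sketch below only illustrates the KL-distance idea behind the reputation score: each node's reading distribution is compared with the current aggregate and the distance is mapped to a fusion weight. The Gaussian model, the exponential mapping from KL distance to reputation, and the simulated node values are assumptions, not the paper's exact update rule or belief model.

```python
# Reputation-weighted fusion using the Kullback-Leibler distance between
# per-node Gaussians and the aggregate Gaussian (illustrative only).
import numpy as np

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """KL(P||Q) between two univariate Gaussians."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# Per-node sample means/variances; the last node simulates a compromised sensor.
node_stats = [(20.1, 0.5), (19.8, 0.6), (20.3, 0.4), (35.0, 0.5)]
agg_mu = np.mean([m for m, _ in node_stats])
agg_var = np.mean([v for _, v in node_stats])

# Map distance to reputation with an assumed exponential decay, then normalize.
reputations = np.array([np.exp(-kl_gaussian(m, v, agg_mu, agg_var)) for m, v in node_stats])
weights = reputations / reputations.sum()

# Reputation-weighted aggregate: the outlying node contributes very little.
weighted_mu = np.dot(weights, [m for m, _ in node_stats])
print("weights:", np.round(weights, 3), "weighted aggregate:", round(weighted_mu, 2))
```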
The World Health Organization (WHO) reports that in 2018, 422 million people throughout the globe are living with diabetes, making it one of the most widespread chronic life-threatening conditions. Early diagnosis is ...
As network attacks are constantly and dramatically evolving, demonstrating new patterns, intelligent Network Intrusion Detection Systems (NIDS), using deep-learning techniques, have been actively studied to tackle these problems. Recently, various autoencoders have been used for NIDS in order to accurately and promptly detect unknown types of attacks (i.e., zero-day attacks) and also alleviate the burden of the laborious labeling task. Although the autoencoders are effective in detecting unknown types of attacks, it takes tremendous time and effort to find the optimal model architecture and hyperparameter settings of the autoencoders that result in the best detection performance. This can be an obstacle that hinders practical applications of autoencoder-based NIDS. To address this challenge, we rigorously study autoencoders using the benchmark datasets, NSL-KDD, IoTID20, and N-BaIoT. We evaluate multiple combinations of different model structures and latent sizes, using a simple autoencoder model. The results indicate that the latent size of an autoencoder model can have a significant impact on the IDS performance.
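A minimal PyTorch sketch of the kind of experiment described in this abstract is shown below: a simple fully connected autoencoder whose latent size is swept as a parameter, trained on benign-only traffic, with a reconstruction-error threshold for flagging anomalies. The layer sizes, threshold rule, and synthetic data are assumptions.

```python
# Sweep the latent size of a simple autoencoder and report train error/threshold.
import torch
import torch.nn as nn

class SimpleAE(nn.Module):
    def __init__(self, n_features, latent_size):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                     nn.Linear(64, latent_size))
        self.decoder = nn.Sequential(nn.Linear(latent_size, 64), nn.ReLU(),
                                     nn.Linear(64, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

torch.manual_seed(0)
benign = torch.randn(2000, 41)                 # stand-in for normal traffic records
for latent in (2, 8, 32):                      # candidate latent sizes
    model = SimpleAE(41, latent)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(50):
        opt.zero_grad()
        loss = loss_fn(model(benign), benign)
        loss.backward()
        opt.step()
    with torch.no_grad():
        # Threshold on reconstruction error, e.g. the 95th percentile on benign data.
        err = ((model(benign) - benign) ** 2).mean(dim=1)
        thr = torch.quantile(err, 0.95)
    print(f"latent={latent}: train MSE={loss.item():.4f}, threshold={thr.item():.4f}")
```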
The Kohonen Self-Organizing Map (SOM) is an unsupervised learning technique for summarizing high-dimensional data so that similar inputs are, in general, mapped close to one another. When applied to textual data, SOM has been shown to be able to group together related concepts in a data collection and to present major topics within the collection with larger regions. This article presents research in which we sought to validate these properties of SOM, called the Proximity and Size Hypotheses, through a user evaluation study. Building upon our previous research in automatic concept generation and classification, we demonstrated that the Kohonen SOM was able to perform concept clustering effectively, based on its concept precision and recall scores as judged by human experts. We also demonstrated a positive relationship between the size of an SOM region and the number of documents contained in the region. We believe this research has established the Kohonen SOM algorithm as an intuitively appealing and promising neural-network-based textual classification technique for addressing part of the longstanding "information overload" problem.
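The numpy sketch below illustrates the Size Hypothesis in miniature: a small Kohonen SOM is trained on toy document vectors from unequally sized topic clusters, each document is mapped to its winning node, and the per-node document counts show larger topics occupying more of the map. The toy vectors, grid size, and training schedule are assumptions, not the study's setup.

```python
# Train a small SOM and count documents per winning node.
import numpy as np

rng = np.random.default_rng(2)
docs = np.vstack([rng.normal(loc=c, scale=0.3, size=(n, 10))     # unequal topic clusters
                  for c, n in ((0.0, 60), (1.0, 30), (2.0, 30))])

rows, cols, dim = 6, 6, docs.shape[1]
weights = rng.random((rows, cols, dim))
coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

for t in range(2000):                                   # online SOM training
    x = docs[rng.integers(len(docs))]
    dists = np.linalg.norm(weights - x, axis=-1)
    winner = np.unravel_index(dists.argmin(), (rows, cols))
    sigma = 2.0 * np.exp(-t / 1000)                     # shrinking neighborhood
    lr = 0.5 * np.exp(-t / 1000)                        # decaying learning rate
    h = np.exp(-np.sum((coords - winner) ** 2, axis=-1) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

counts = np.zeros((rows, cols), dtype=int)
for x in docs:                                          # map documents to the grid
    winner = np.unravel_index(np.linalg.norm(weights - x, axis=-1).argmin(), (rows, cols))
    counts[winner] += 1
print(counts)                                           # larger topics cover more nodes
```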
ISBN: (Print) 9781479957521
Traditional deep networks take raw pixels as input and automatically learn features using unsupervised learning algorithms. In this configuration, in order to learn good features, the networks usually have multiple layers and many hidden units, which leads to extremely high training time costs. As a widely used image compression algorithm, the Discrete Cosine Transformation (DCT) is utilized to reduce image information redundancy, because only a limited number of DCT coefficients are needed to preserve the most important image information. In this paper, a novel framework combining DCT and deep networks is proposed for a high-speed object recognition system. A small subset of the DCT coefficients of the data is fed into a 2-layer sparse auto-encoder instead of raw pixels. Because of the excellent decorrelation and energy compaction properties of DCT, this approach is shown experimentally to be not only efficient but also computationally attractive for processing high-resolution images in a deep architecture.
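The sketch below illustrates only the preprocessing step implied by this abstract: take the 2-D DCT of an image and keep a small block of low-frequency coefficients, which would then replace raw pixels as the auto-encoder input. The image, block size, and block-based (rather than zig-zag) coefficient selection are illustrative assumptions.

```python
# Reduce an image to a small set of low-frequency DCT coefficients.
import numpy as np
from scipy.fft import dctn, idctn

x = np.linspace(0, 1, 64)
image = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))  # smooth 64x64 test image

coeffs = dctn(image, norm="ortho")       # 64*64 = 4096 coefficients
k = 8
reduced = coeffs[:k, :k].flatten()       # keep 8*8 = 64 low-frequency coefficients
print("input dimension to the auto-encoder:", reduced.size)

# Energy compaction check: how much image energy the kept block retains.
kept = np.zeros_like(coeffs)
kept[:k, :k] = coeffs[:k, :k]
reconstruction = idctn(kept, norm="ortho")
energy_ratio = np.sum(coeffs[:k, :k] ** 2) / np.sum(coeffs ** 2)
print("energy retained: %.1f%%" % (100 * energy_ratio),
      "reconstruction MSE: %.6f" % np.mean((image - reconstruction) ** 2))
```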