ISBN (print): 9781467376822
This work implements two anomaly detection algorithms for detecting Transmission Control Protocol synchronize (TCP SYN) flooding attacks: an adaptive threshold algorithm and a cumulative sum (CUSUM) based algorithm. Furthermore, we fused the outcomes of the two algorithms using the logical OR operator at different thresholds of the two algorithms to obtain improved detection accuracy. Indeed, the results indicate that the OR fusion outperforms either algorithm alone, both in detecting SYN flooding attacks and in detection delay.
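A minimal sketch of how the two detectors and their OR fusion could operate on per-interval SYN packet counts; the function names, smoothing factor and thresholds are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def adaptive_threshold_detector(syn_counts, alpha=0.98, k=0.5):
    """Flag interval n when the SYN count exceeds (1 + k) times an EWMA of
    the previous counts (illustrative parameter values)."""
    x = np.asarray(syn_counts, dtype=float)
    alarms = np.zeros(len(x), dtype=bool)
    mu = x[0]
    for n in range(1, len(x)):
        alarms[n] = x[n] > (1.0 + k) * mu
        mu = alpha * mu + (1.0 - alpha) * x[n]   # update the baseline estimate
    return alarms

def cusum_detector(syn_counts, drift=0.0, threshold=5.0):
    """One-sided CUSUM: accumulate positive deviations from the mean SYN rate
    and raise an alarm once the accumulated sum crosses `threshold`."""
    x = np.asarray(syn_counts, dtype=float)
    x = x - x.mean()
    alarms = np.zeros(len(x), dtype=bool)
    s = 0.0
    for n in range(len(x)):
        s = max(0.0, s + x[n] - drift)
        alarms[n] = s > threshold
    return alarms

# OR fusion: declare an attack whenever either detector raises an alarm.
counts = np.r_[np.random.poisson(20, 100), np.random.poisson(80, 20)]
fused = adaptive_threshold_detector(counts) | cusum_detector(counts)
```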
ISBN (print): 9781479944323
Today, sensors and/or anomaly detection algorithms (ADAs) are used to collect data in a wide variety of applications (e.g., cyber security systems, sensor networks, etc.). Every sensor or ADA in such a system participates in the collection of data throughout the entire system. The data collected from all of the sensors or ADAs are then integrated into one significant conclusion or decision, a process known as data fusion. However, the reliability, or reputation, of a single sensor or ADA may change over time, or may not be known at all. Since this reputation is taken into account when determining the final conclusion after data classification, one must be able to predict it. We propose a new machine learning prediction technique (MLPT) to predict the reputation of each sensor or ADA. This technique is based on the existing 'Decision Tree Certainty Level' (DTCL) technique, which creates many random decision trees (forests) with high certainty levels [Dolev et al. (2009)]. In particular, it was shown that DTCL enhances the classification capabilities of CARTs (Classification and Regression Trees) [Breiman et al. (1984)]. After applying the DTCL technique to the reputation data, we apply a new evolutionary process to those decision trees that reduces the overall number of trees by merging only the most accurate trees, and then uses only these new trees to generate the reputation values. Thus, we combine DTCL and evolution techniques to determine sensor or ADA reputations using only the most accurate trees. Finally, we demonstrate how to improve the data fusion process by identifying the most reliable portions of the collected data to reach more accurate conclusions.
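The following is not the paper's DTCL/evolutionary procedure, only a loose scikit-learn sketch of the underlying idea: grow many randomized trees, keep only the most accurate ones, and use them to estimate reputation. All names and hyperparameters are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

def train_reputation_trees(X, y, n_trees=50, keep_frac=0.2, seed=0):
    """Grow many randomized trees on bootstrap samples, score each on a
    held-out split, and keep only the most accurate fraction."""
    rng = np.random.default_rng(seed)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=seed)
    scored = []
    for i in range(n_trees):
        idx = rng.integers(0, len(X_tr), len(X_tr))        # bootstrap sample
        tree = DecisionTreeRegressor(max_depth=5, random_state=i)
        tree.fit(X_tr[idx], y_tr[idx])
        scored.append((tree.score(X_val, y_val), tree))    # R^2 on held-out data
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [tree for _, tree in scored[: max(1, int(keep_frac * n_trees))]]

def predict_reputation(trees, X):
    """Average the surviving trees' outputs to estimate sensor/ADA reputation."""
    return np.mean([t.predict(X) for t in trees], axis=0)
```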
Human activity anomaly detection plays a crucial role in the next generation of surveillance and assisted living systems. Most anomaly detection algorithms are generative models and learn features from raw images. This work shows that popular state-of-the-art autoencoder-based anomaly detection systems are not capable of effectively detecting anomalies related to human posture and object positions. Therefore, a human-pose-driven and object-detector-based deep learning architecture is proposed, which simultaneously leverages human poses and raw RGB data to perform human activity anomaly detection. It is demonstrated that pose-driven learning overcomes the limitations of the raw-RGB-based counterpart in classifying different human activities. Extensive validation is provided using popular datasets. It is then demonstrated that, with the aid of object detection, the human activity classification can be effectively used for human activity anomaly detection. Moreover, novel challenging datasets, namely BMbD, M-BMbD and JBMOPbD, are proposed for single- and multi-target human posture anomaly detection and joint human posture and object position anomaly detection evaluations.
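A toy sketch of the general idea of scoring pose keypoints together with object positions through an autoencoder's reconstruction error; the dimensions, layer sizes and class name are illustrative assumptions and do not reproduce the architecture proposed in the paper.

```python
import torch
import torch.nn as nn

class PoseObjectAutoencoder(nn.Module):
    """Reconstructs a concatenated vector of pose keypoints and object boxes;
    high reconstruction error is treated as an anomaly (illustrative only)."""
    def __init__(self, n_keypoints=17, n_objects=5):
        super().__init__()
        in_dim = n_keypoints * 2 + n_objects * 4   # (x, y) per keypoint, one box per object
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, 8))
        self.decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

    def anomaly_score(self, x):
        with torch.no_grad():
            return ((self.forward(x) - x) ** 2).mean(dim=1)  # per-sample MSE

# Frames whose score exceeds a threshold fitted on normal data would be flagged.
model = PoseObjectAutoencoder()
scores = model.anomaly_score(torch.randn(4, 17 * 2 + 5 * 4))
```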
ISBN (print): 9783031547119; 9783031547126
Financial institutions are subject to stringent regulatory reporting requirements to manage operational risk in international financial markets. Producing accurate and timely reports raises challenges in current data processes around big data heterogeneity, system interoperability and enterprise-wide management. Data quality management is a key concern, with current approaches being time-consuming, expensive, and risky. This research proposes to design, develop, and evaluate a Financial Reporting Data Quality Framework that allows non-IT data consumers to contextualize data observations. The framework will use anomaly detection algorithms to detect and categorize observations as genuine business activities or data quality issues. To ensure sustainability and ongoing relevance, the framework will also embed an update mechanism.
Anomaly detection in Hyperspectral Imagery (HSI) has received considerable attention because of its potential application in several areas. Many anomaly detection algorithms for HSI have been proposed in the literature; however, due to the use of different datasets in previous studies, an extensive performance comparison of these algorithms is missing. In this paper, an overview of the current state of research in hyperspectral anomaly detection is presented by broadly dividing all the previously proposed algorithms into eight different categories. In addition, this paper presents the most comprehensive comparative analysis to date in hyperspectral anomaly detection by evaluating 22 algorithms on 17 different publicly available datasets. Results indicate that attribute and edge-preserving filtering-based detection (AED), local summation anomaly detection based on collaborative representation and inverse distance weight (LSAD-CR-IDW) and local summation unsupervised nearest regularized subspace with an outlier removal anomaly detector (LSUNRSORAD) perform better, as indicated by the mean and median values of the area under the receiver operating characteristic (ROC) curve. Furthermore, this paper studies the effect of various dimensionality reduction techniques on anomaly detection. Results indicate that reducing the number of components to around 20 improves the performance; however, any further decrease deteriorates the performance.
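For context, a minimal sketch of a classic hyperspectral anomaly detector and of the kind of dimensionality reduction studied above; the global RX detector shown here is a common baseline, not one of the top-ranked methods from the comparison, and the function names are assumptions.

```python
import numpy as np

def rx_detector(hsi_cube):
    """Global Reed-Xiaoli (RX) detector: Mahalanobis distance of each pixel
    spectrum from the scene mean over a (height, width, bands) cube."""
    h, w, b = hsi_cube.shape
    X = hsi_cube.reshape(-1, b).astype(np.float64)
    diff = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)   # regularize for invertibility
    scores = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
    return scores.reshape(h, w)

def reduce_to_components(hsi_cube, n_components=20):
    """PCA-style reduction to ~20 spectral components, the operating point
    the comparison reports as beneficial."""
    h, w, b = hsi_cube.shape
    X = hsi_cube.reshape(-1, b).astype(np.float64)
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return (Xc @ vt[:n_components].T).reshape(h, w, n_components)
```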
This study explores the potential of machine learning algorithms for earthquake prediction, utilizing fluid chemical anomaly data from hot springs. Six hot springs, located within an active fault zone along the southeastern coast of China, were carefully chosen as hydrochemical monitoring sites over an extended period of two and a half years. Using these data, a prediction model integrating six algorithms was developed to forecast M >= 5 earthquakes in Taiwan. The model's performance was validated against recorded earthquake events, and the factors influencing its predictive capability were analyzed. Our comprehensive analysis demonstrates the superiority of machine learning algorithms over traditional statistical methods for earthquake prediction. Additionally, including sampling time in the data sets significantly improves the model's predictive performance. However, it is important to note that the model's predictive performance varies across different hot springs and indicator types, highlighting the importance of identifying optimal indicators for specific scenarios. The model parameters, including the anomaly detection rate (P) and the earthquake response time threshold (M), significantly impact the model's predictive capabilities; therefore, adjustments are needed to optimize the model's performance for practical use. Despite limitations such as the inability to differentiate pre-earthquake anomalies from post-earthquake anomalies and to pinpoint the precise location of earthquakes, this study successfully showcases the potential of machine learning algorithms in earthquake prediction, paving the way for further research and improved prediction methods. This study explores the potential of utilizing machine learning algorithms for earthquake prediction based on hydrothermal fluid chemical anomaly data. An earthquake prediction model integrating six machine learning algorithms was developed using continuous hot spring hydrochemical monitoring data. The mode
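A toy illustration of how a detection-rate parameter (P) and a response-time window (M) might interact when fusing per-indicator anomaly flags into a warning; the thresholds, window length and fusion rule are illustrative assumptions, not the paper's model.

```python
import numpy as np

def ensemble_alarm(anomaly_flags, p_threshold=0.3, response_days=30):
    """anomaly_flags: boolean matrix (days x indicator/algorithm outputs).
    Raise a warning for the next `response_days` days whenever the daily
    anomaly detection rate exceeds `p_threshold`."""
    flags = np.asarray(anomaly_flags, dtype=float)
    daily_rate = flags.mean(axis=1)              # fraction of indicators flagging each day
    warning = np.zeros(len(daily_rate), dtype=bool)
    for day, rate in enumerate(daily_rate):
        if rate > p_threshold:
            warning[day:day + response_days] = True
    return warning
```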
Recent technology evolution allows network equipment to continuously stream a wealth of "telemetry" information, which pertains to multiple protocols and layers of the stack, at a very fine spatial grain and high frequency. This deluge of telemetry data clearly offers new opportunities for network control and troubleshooting, but also poses a serious challenge for its real-time processing. We tackle this challenge by applying streaming machine-learning techniques to the continuous flow of control- and data-plane telemetry data, with the purpose of real-time detection of anomalies. In particular, we implement an anomaly detection engine that leverages DenStream, an unsupervised clustering technique, and apply it to features collected from a large-scale testbed comprising tens of routers traversed by up to 3 Terabit/s worth of real application traffic. We contrast DenStream with offline algorithms such as DBSCAN and Local Outlier Factor (LOF), as well as online algorithms such as the windowed version of DBSCAN, ExactSTORM, Continuous Outlier Detection (COD) and Robust Random Cut Forest (RRCF). Our experimental campaign compares these seven algorithms from both accuracy and computational complexity viewpoints: results testify that DenStream (i) achieves detection results on par with RRCF, the best performing algorithm, and (ii) is significantly faster than other approaches, notably over two orders of magnitude faster than RRCF. In the spirit of the recent trend toward reproducibility of results, we make our code available as open source to the scientific community.
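A short sketch of the two offline baselines mentioned above applied to a window of telemetry feature vectors, using scikit-learn; DenStream itself runs incrementally and is not reproduced here, and the parameter values are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import LocalOutlierFactor

def offline_baselines(telemetry, eps=0.5, min_samples=5, n_neighbors=20):
    """Score a batch of telemetry feature vectors with DBSCAN and LOF."""
    X = np.asarray(telemetry, dtype=float)

    # DBSCAN: points labelled -1 belong to no cluster and are treated as anomalies.
    dbscan_anomalies = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X) == -1

    # LOF: -1 marks outliers relative to the local neighbourhood density.
    lof_anomalies = LocalOutlierFactor(n_neighbors=n_neighbors).fit_predict(X) == -1

    return dbscan_anomalies, lof_anomalies
```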
The proliferation of smart devices and computer networks has led to a huge rise in internet traffic and network attacks that necessitate efficient network traffic monitoring. There have been many attempts to address these issues; however, agile detection solutions are needed. This research work deals with the problem of malware infection detection, one of the most challenging tasks in modern computer security. In recent years, anomaly detection has been the first detection approach, followed by results from other classifiers. Anomaly detection methods are typically designed to model normal user behavior and then seek deviations from this model. However, anomaly detection techniques may suffer from a variety of problems, including missing validations for verification and a large number of false positives. This work proposes and describes a new profile-based method for identifying anomalous changes in network user behavior. Profiles describe user behavior from different perspectives using different flags. Each profile is composed of information about what the user has done over a period of time. The symptoms extracted in the profile cover a wide range of user actions and try to analyze different actions. Compared to other symptom anomaly detectors, the profiles offer a higher-level view of user activity. The assumption is that it is possible to look for anomalies using high-level symptoms, producing fewer false positives while effectively finding real attacks. The problem of obtaining truly labeled data for training anomaly detection algorithms has also been addressed in this work. Datasets were designed and created that contain real normal user actions while the user is infected with real malware. These datasets were used to train and evaluate anomaly detection algorithms, among them, for example, the local outlier factor (LOF) and the one-class support vector machine (SVM). The results show that the proposed anomaly-based and profile
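A minimal sketch of the profile idea with one of the algorithms mentioned above (a one-class SVM): aggregate per-window "symptom" counters into a fixed-length profile and fit the detector on known-clean profiles. The symptom names and parameters are illustrative assumptions, not the paper's flags.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Illustrative "symptoms": per-window counters summarizing a user's actions.
SYMPTOMS = ["processes_started", "outbound_connections", "files_written",
            "registry_changes", "failed_logins"]

def build_profile(counts):
    """Turn a dict of symptom counts for one time window into a fixed vector."""
    return np.array([counts.get(s, 0) for s in SYMPTOMS], dtype=float)

def train_profile_detector(normal_profiles, nu=0.05):
    """Fit a one-class SVM on profiles collected during known-clean periods."""
    return OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(normal_profiles)

# Profiles scored as -1 are flagged as anomalous user behaviour.
detector = train_profile_detector(np.random.poisson(5.0, size=(200, len(SYMPTOMS))))
suspect = build_profile({"processes_started": 40, "outbound_connections": 120})
flagged = detector.predict(suspect.reshape(1, -1)) == -1
```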
Precision Agriculture is a broad, systemic, and multidisciplinary subject, dealing with an integrated information and technology management system based on the concept that spatial and temporal variability influences crop yields. Precision farming aims at more comprehensive management of the agricultural production system as a whole. It uses a set of tools, instruments, and sensors to measure or detect parameters or targets of interest in the agroecosystem. Sensors are distributed in the environment and usually communicate through a Wireless Sensor Network (WSN). Due to this dispersion of the sensors, errors can occur in Byzantine form or can be caused by safety factors, which can lead to a misinterpretation by the data analysis and actuation system. Anomaly detection algorithms can detect such faulty sensors, allowing them to be replaced or their wrong data to be ignored. Therefore, this work presents a reference architecture and a heuristic algorithm that aid the decision of which anomaly detection algorithm to use based on the demands of agricultural environments. We performed a preliminary evaluation, analyzing different anomaly detection algorithms regarding execution time, accuracy, and scalability metrics. Results show that the decision-making supported by the proposed architecture reduces edge devices' power consumption by 18.59% while reducing the device's temperature by up to 15.94%, depending on the application workload and edge device characteristics. (C) 2020 Elsevier Ltd. All rights reserved.
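A toy heuristic in the spirit of the decision support described above: pick a lighter anomaly detector when the edge device or workload is constrained. The rules, thresholds and detector names are illustrative assumptions, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class EdgeDevice:
    cpu_mhz: int
    ram_mb: int
    battery_powered: bool

def choose_detector(device: EdgeDevice, samples_per_min: int) -> str:
    """Return the name of an anomaly detector matched to device constraints."""
    if device.battery_powered or device.ram_mb < 256:
        return "ewma_threshold"          # cheapest: running mean plus a threshold
    if samples_per_min > 1000 or device.cpu_mhz < 1000:
        return "isolation_forest"        # moderate cost, scales to high sample rates
    return "local_outlier_factor"        # heaviest, highest sensitivity

# Example: a battery-powered node with little RAM gets the lightweight detector.
print(choose_detector(EdgeDevice(cpu_mhz=800, ram_mb=128, battery_powered=True), 600))
```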