Software-Defined Networking (SDN) is a networking paradigm with the potential to revolutionize the way network infrastructures are developed and operated. SDN enables network engineers to monitor networks quickly, manage them centrally, and detect malicious traffic and link failures rapidly and accurately. Despite this flexibility, SDN is susceptible to attacks such as distributed Denial of Service (DDoS) that can bring down the entire network. In recent years, Deep Learning (DL) approaches have been applied to achieve reliable and highly accurate traffic anomaly detection. This paper therefore proposes a DL-based method to separate benign traffic from DDoS attack traffic. Its major contribution is a novel hybrid DL method for this classification. In addition, we compare different DL methods using attack severity, which is integrated into the standard metrics to differentiate the quality of their results. Attack severity is computed by appropriately weighting the misclassifications (false negatives and false positives) observed during the testing phase. Moreover, many DDoS attack detection studies rely on public datasets that are not specific to the SDN environment. We therefore propose a method to determine the adequacy of a selected dataset using two proposed metrics based on quality and quantity conformity evaluation.
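The abstract does not specify the exact severity weighting; the sketch below shows one way counts of false negatives and false positives could be folded into a severity-weighted score, with the weights w_fn and w_fp being illustrative assumptions rather than the paper's values.

```python
def severity_weighted_error(fn, fp, w_fn=2.0, w_fp=1.0):
    """Combine missed attacks (false negatives) and false alarms (false positives)
    into a single severity score. Missed attacks are weighted more heavily here
    because they let DDoS traffic through; both weights are assumed values."""
    return w_fn * fn + w_fp * fp


def severity_adjusted_f1(tp, fp, fn, w_fn=2.0, w_fp=1.0):
    """Fold the same weighting into an F1-style metric so that DL methods with
    equal raw error counts but different error severities rank differently."""
    precision = tp / (tp + w_fp * fp) if tp else 0.0
    recall = tp / (tp + w_fn * fn) if tp else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
```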
In a typical edge computing paradigm, multiple edge servers are located near the end users to provide augmented computation and bandwidth. Because the resources of edge servers are limited, precisely predicting the workload of different edge servers is important for utilizing them efficiently. However, due to the dynamics of end users, the workload on edge servers exhibits frequent spikes. In this paper, we propose a deep learning method to predict the resource usage of edge servers. The main idea is to use a graph neural network (GNN) to capture the interconnected topology of the edge servers, since servers that are close in proximity often have similar resource load patterns. The GNN is then concatenated with an LSTM layer to output the prediction. The algorithm is evaluated on a real edge computing dataset. The results show that it predicts resource usage with high accuracy and significantly outperforms other state-of-the-art algorithms.
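A minimal PyTorch sketch of the GNN-plus-LSTM idea is given below; the single graph-convolution step, the layer sizes, and the one-step-ahead prediction head are assumptions, since the abstract does not give the architecture details.

```python
import torch
import torch.nn as nn

class GCNLSTMPredictor(nn.Module):
    """Sketch of a spatio-temporal predictor: one graph-convolution step mixes the
    load features of neighbouring edge servers, and an LSTM models the temporal
    dynamics. Layer sizes and the single-step head are illustrative choices."""
    def __init__(self, num_features, gcn_dim=32, lstm_dim=64):
        super().__init__()
        self.gcn_weight = nn.Linear(num_features, gcn_dim, bias=False)
        self.lstm = nn.LSTM(gcn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, 1)   # predicted resource usage per server

    def forward(self, x, adj_norm):
        # x: (num_nodes, seq_len, num_features); adj_norm: normalized adjacency (num_nodes, num_nodes)
        h = torch.relu(torch.einsum("ij,jtf->itf", adj_norm, self.gcn_weight(x)))
        out, _ = self.lstm(h)                 # (num_nodes, seq_len, lstm_dim)
        return self.head(out[:, -1, :])       # one-step-ahead prediction
```

The normalized adjacency matrix encodes which edge servers are close in proximity, so each server's features are mixed with those of its neighbours before the LSTM models the temporal spikes.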
This paper deals with power quality profile analysis of distributed generation (DG) system using unified power quality conditioner (UPQC). Despite the several benefits of DG like excellent energy supply, reducing the ...
Python is rapidly becoming the lingua franca of machine learning and scientific computing. With the broad use of frameworks such as Numpy, SciPy, and TensorFlow, scientific computing and machine learning are seeing a ...
ISBN (print): 9781728190747
Accurate prediction is highly important for clinical decision making and early treatment. In this paper, we study the imbalanced data problem in prediction, a key challenge in the healthcare area. Imbalanced datasets bias classifiers towards the majority class, leading to unsatisfactory classification performance on the minority class, which is known as the imbalance problem. Existing imbalance learning methods may suffer from issues such as information loss, overfitting, and high training time cost. To tackle these issues, we propose a novel ensemble learning method called Multiple bAlance Subsets Stacking (MASS) that exploits a multiple-balanced-subsets construction strategy. Furthermore, we improve MASS by introducing parallelism (parallel MASS) to reduce the training time cost. We evaluate MASS on three real-world healthcare datasets, and experimental results demonstrate that its prediction performance outperforms state-of-the-art methods in terms of AUC, F1-score, and MCC. The speedup analysis shows that parallel MASS greatly reduces the training time cost on large datasets and that its speedup increases as the data size grows.
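The abstract only names the multiple-balanced-subsets construction; a rough scikit-learn sketch of how such an ensemble could be built is shown below, where the base learner, the meta-learner, and the use of in-sample meta-features are assumptions made for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

def train_mass(X, y, n_subsets=5):
    """Illustrative sketch of the multiple-balanced-subsets idea (the paper's exact
    construction and stacking details may differ). Assumes binary labels with 0 as
    the majority class: the majority class is partitioned into chunks, each chunk is
    paired with the full minority class to form a balanced subset, one base learner
    is trained per subset, and a meta-learner is stacked on their probabilities."""
    maj_idx = np.where(y == 0)[0]
    min_idx = np.where(y == 1)[0]
    np.random.shuffle(maj_idx)
    chunks = np.array_split(maj_idx, n_subsets)

    base_models, meta_features = [], []
    for chunk in chunks:
        idx = np.concatenate([chunk, min_idx])          # one balanced subset
        model = DecisionTreeClassifier(max_depth=5).fit(X[idx], y[idx])
        base_models.append(model)
        meta_features.append(model.predict_proba(X)[:, 1])

    meta_X = np.column_stack(meta_features)
    meta_model = LogisticRegression().fit(meta_X, y)    # stacking layer
    return base_models, meta_model
```

Because each base learner is trained on its own balanced subset independently of the others, this is also the step that parallel MASS can distribute across workers.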
Disconnection of photovoltaic (PV) farm from the grid, is the common practice on the occurrence of grid faults. Obeying the modern updated grid codes, developing a control algorithm for distributed converters for ridi...
ISBN (print): 9781728170022
This paper presents a novel control plane protocol designed to enable cooperative resource sharing in heterogeneous edge cloud scenarios. While edge clouds offer the advantage of potentially lower latency for time-critical applications, computing load generated by mobile users at the network edge can be very bursty as compared with aggregated traffic served by a data center. This motivates the design of a shared control plane that enables dynamic resource sharing between edge clouds in a region. The proposed control plane is designed to exchange key compute and network parameters (such as CPU GIPS, % utilization, and network bandwidth) needed for cooperation between heterogeneous edge clouds across network domains. The protocol thus enables sharing mechanisms such as dynamic resource assignment, compute offloading, load balancing, multi-node orchestration, and service migration. A specific distributed control plane (DISCO) based on overlay neighbor distribution with hop-count limit is described and evaluated in terms of control overhead and performance using an experimental prototype running on the ORBIT radio grid testbed. The prototype system implements a heterogeneous network with 18 autonomous systems, each with a compute cluster that participates in the control plane protocol and executes specified resource sharing algorithms. Experimental results are given comparing the performance of the baseline with no cooperation to that of cooperative algorithms for compute offloading, cluster computing, and service chaining. An application-level evaluation of latency vs. offered load is also carried out for an example time-critical application (image analysis for traffic lane detection). The results show significant performance gains (as much as 45% for the cluster computing example) vs. the no-cooperation baseline in each case at the cost of relatively modest complexity and overhead.
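The exact message format of the DISCO control plane is not given in the abstract; the fragment below sketches, under assumed field names and an assumed transport, how the compute and network parameters mentioned above could be advertised to overlay neighbours with a hop-count limit.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ResourceAdvert:
    """Hypothetical control-plane message carrying the parameters the abstract says
    are exchanged between edge clouds; all field names are assumptions."""
    cluster_id: str
    cpu_gips: float            # aggregate compute capacity
    cpu_utilization: float     # current % utilization
    net_bandwidth_mbps: float
    hop_limit: int = 2         # overlay neighbor distribution with hop-count limit

def forward_advert(advert, neighbors, send):
    """Relay the advert to overlay neighbors, decrementing the hop limit so the
    distribution stays within the configured neighborhood."""
    if advert.hop_limit <= 0:
        return
    relayed = ResourceAdvert(**{**asdict(advert), "hop_limit": advert.hop_limit - 1})
    for n in neighbors:
        send(n, json.dumps(asdict(relayed)))
```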
Virtual power plants (VPPs) can help active distribution networks fully utilize and efficiently manage large numbers of distributed energy resources by calculating the aggregate flexibility of VPPs at the point of common coupling, which is usually formulated as a two-stage robust optimization problem. The column-and-constraint generation algorithm has become a mature method for solving two-stage robust optimization problems; it decomposes the original problem into a master problem and a sub-problem. However, solving the sub-problem is usually time-consuming. To improve the solving efficiency of the sub-problem, two acceleration strategies are proposed in this paper. First, we linearize the bilinear term in the sub-problem objective function using a binary expansion. Then, an extreme-scenarios method is adopted to deal with the randomness of renewables, which allows the sub-problem to be solved in parallel and reduces unnecessary conservativeness. In addition, a max-min model is constructed to obtain an upper bound on the actual operating cost of the virtual power plants, whose cost functions can be expressed as a group of piecewise-linear functions based on the convex hull. Finally, the correctness and effectiveness of the proposed method are verified by numerical results on three different distribution systems.
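The binary expansion is only named in the abstract; one standard way to linearize a bilinear product of a sub-problem variable λ and a bounded variable u (the paper's exact formulation may differ, and the number of bits K, the bounds on u, and the bound on λ are assumed parameters) looks like this:

```latex
% Standard binary-expansion linearization of a bilinear term \lambda u in the
% sub-problem objective; K, \underline{u}, \bar{u}, \lambda^{\max} are assumptions.
\[
u \approx \underline{u} + \Delta \sum_{k=0}^{K-1} 2^{k} z_k,
\qquad z_k \in \{0,1\},
\qquad \Delta = \frac{\bar{u}-\underline{u}}{2^{K}-1},
\]
\[
\lambda u \approx \underline{u}\,\lambda + \Delta \sum_{k=0}^{K-1} 2^{k} w_k,
\qquad w_k = \lambda z_k \ \text{replaced by the linear constraints}
\]
\[
0 \le w_k \le \lambda^{\max} z_k,
\qquad
\lambda - \lambda^{\max}(1 - z_k) \le w_k \le \lambda .
\]
```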
ISBN (print): 9781728190747
To tackle the limited computation resources on end devices, task offloading has been developed to reduce task completion time and improve the Quality of Service (QoS). Edge clouds facilitate such offloading by provisioning resources in the proximity of the end devices. Modern applications are usually deployed as a chain of subtasks (e.g., microservices), for which a special offloading strategy, referred to as binary offloading, is applied. Binary offloading divides the chain into two parts, which are executed on the end device and the edge cloud, respectively. The offloading point in the chain is therefore critical to the QoS in terms of task completion time. Considering the system dynamics and algorithm sensitivity, we apply Q-learning to address this problem. To deal with the late-feedback problem, a reward rewind match strategy is proposed to customize Q-learning. Trace-driven simulation results show that our customized Q-learning-based approach achieves a significant reduction in total execution time, outperforming traditional offloading strategies and non-customized Q-learning.
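A plain tabular Q-learning loop over the choice of offloading point is sketched below; the environment interface is hypothetical, and the paper's reward rewind match customization (re-aligning the delayed completion-time feedback with the earlier offloading decision) is only noted in a comment rather than reproduced.

```python
import random
from collections import defaultdict

def train_offload_policy(env, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning over the offloading point in the subtask chain.
    `env` is a hypothetical simulator exposing reset()/step()/valid_actions();
    the reward rewind match strategy from the paper is not reproduced here."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.valid_actions(state)   # candidate split points in the chain
            if random.random() < eps:
                action = random.choice(actions)  # explore
            else:
                action = max(actions, key=lambda a: Q[(state, a)])  # exploit
            next_state, reward, done = env.step(action)
            best_next = max((Q[(next_state, a)] for a in env.valid_actions(next_state)),
                            default=0.0)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```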
ISBN (print): 9781728190747
The mobile edge computing (MEC) platform allows its subscribers to utilize computational resources in close proximity to reduce computation latency. In this paper, we consider two users, each with a set of computation tasks to execute. In particular, one user is a registered subscriber that can access the computation service of the MEC platform, while the other, unregistered user cannot directly access the MEC service. In this case, we allow the registered user to receive computation offloading from the unregistered user, compute the received task(s) locally or further offload them to the MEC platform, and charge a fee proportional to the computation workload. We study the problem from the registered user's perspective, aiming to maximize its total utility, which balances the monetary income against the costs of execution delay and energy consumption. We formulate a mixed-integer non-linear programming (MINLP) problem that jointly decides the execution scheduling of the computation tasks (i.e., the device on which each task is executed) and the computation/communication resource allocation. To tackle the problem, we first derive the closed-form solution of the optimal resource allocation given the integer task scheduling decisions. We then propose a reduced-complexity approximate algorithm to optimize the combinatorial computation scheduling decisions. Simulation results show that the proposed collaborative computation scheme effectively improves the utility of the helper user compared with other benchmark methods, and that the proposed solution approaches the optimal solution within a 0.1% average performance gap with significantly reduced complexity.
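The closed-form resource allocation cannot be reconstructed from the abstract alone; the sketch below only illustrates the two-level structure of the solution, with a brute-force enumeration standing in for the paper's reduced-complexity search and a caller-supplied routine standing in for the closed-form inner problem.

```python
from itertools import product

def best_schedule(tasks, devices, utility_given_allocation):
    """Brute-force counterpart of the two-level decomposition: enumerate where each
    task runs (e.g., unregistered user, helper, or MEC platform), solve the inner
    resource allocation for each fixed schedule via the caller-supplied routine, and
    keep the schedule with the highest utility. `utility_given_allocation` is a
    hypothetical stand-in for the paper's closed-form optimal allocation."""
    best_u, best_s = float("-inf"), None
    for schedule in product(devices, repeat=len(tasks)):
        u = utility_given_allocation(schedule)   # inner problem for this schedule
        if u > best_u:
            best_u, best_s = u, schedule
    return best_s, best_u
```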