As both new network attacks emerge and network traffic increases in volume, the need to perform network traffic inspection at high rates is ever increasing. The core of many security applications that inspect network traffic (such as Network Intrusion Detection) is pattern matching. At the same time, pattern matching is a major performance bottleneck for those applications: indeed, it has been shown to contribute more than 70% of the total running time of Intrusion Detection Systems. Although numerous efficient approaches to this problem have been proposed on custom hardware, it is challenging for pattern matching algorithms to benefit from advances in commodity hardware. This becomes even more relevant with the adoption of Network Function Virtualization, which moves network services, such as Network Intrusion Detection, to the cloud, where scaling on commodity hardware is key for performance. In this paper, we tackle the problem of pattern matching and show how to leverage the architectural features found in commodity platforms. We present efficient algorithmic designs that achieve good cache locality and make use of modern vectorization techniques to exploit data parallelism within each core. We first identify properties of pattern matching that make it fit for vectorization and show how to use them in the algorithmic design. Second, we build on an earlier, cache-aware algorithmic design and show how to apply cache locality combined with SIMD gather instructions to pattern matching. Third, we complement our algorithms with an analytical model that predicts their performance and that can be used to easily evaluate alternative designs. We evaluate our algorithmic design with open data sets of real-world network traffic: our results on two different platforms, Haswell and Xeon-Phi, show a speedup of 1.8x and 3.6x, respectively, over Direct Filter Classification (DFC), a recently proposed algorithm by Choi et al. for pattern matching exploiting cache locality…
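To make the vectorization idea concrete, here is a minimal sketch of a DFC-style first-stage filter in NumPy, where fancy indexing plays the role of a SIMD gather: every sliding 2-byte window of the payload becomes a table index, and all lookups happen in one bulk gather. The names and the filter layout are illustrative assumptions, not the paper's implementation.

```python
# Sketch of a DFC-style direct filter, vectorized with NumPy "gather"
# (fancy indexing) to mimic SIMD gather instructions. build_filter and
# scan are illustrative names, not taken from the paper.
import numpy as np

FILTER_BITS = 16           # index the filter by the first 2 bytes of a pattern
FILTER_SIZE = 1 << FILTER_BITS

def build_filter(patterns):
    """Set one entry per 2-byte prefix that starts some pattern."""
    f = np.zeros(FILTER_SIZE, dtype=np.uint8)
    for p in patterns:
        prefix = p[0] | (p[1] << 8)
        f[prefix] = 1
    return f

def scan(payload, f):
    """Return positions whose 2-byte window may start a match.

    The two middle lines are the vectorized core: every sliding window
    is turned into a filter index, then all indices are gathered from
    the filter table in bulk (what a SIMD gather does per lane).
    """
    data = np.frombuffer(payload, dtype=np.uint8).astype(np.uint32)
    idx = data[:-1] | (data[1:] << 8)    # all 2-byte windows at once
    hits = f[idx]                        # gather: bulk table lookups
    return np.nonzero(hits)[0]           # candidates for exact matching

patterns = [b"attack", b"exploit", b"/etc/passwd"]
f = build_filter(patterns)
print(scan(b"GET /etc/passwd HTTP/1.1", f))
```

In the full algorithms, positions that pass this filter are then verified with exact matching; the point of the sketch is only the bulk gather over a small, cache-resident table.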
Parallel computing is a type of computational architecture in which multiple processors simultaneously perform many small calculations carved out of a single large, complex problem. Dynamic simulation and real-world data...
The development of the internet of things (IoT) and smart devices has eased daily life by offering numerous applications that aim to provide real-time, low-latency services, but it has also brought challenges in handling the huge volumes of data generated and the powerful computation needed to get a job done. Decentralized edge computing can help meet the latency requirements of such applications by executing them closer to the user at the edge of the network, but most current studies actually deploy centralized approaches to cluster computing at the edge, which add the overhead of cluster formation and management. In this article, we propose to group heterogeneous edge nodes on task arrival with a more decentralized method and to execute tasks in parallel to meet their deadlines. At the same time, to guarantee successful execution of critical IoT applications running in an edge network, fault tolerance has to be seriously considered. For resource-limited edge devices, there is a great need for efficient fault-tolerance techniques that can provide reliability based on a device's local information, without requiring knowledge of the overall network topology. Our novel method increases the reliability of tasks executed in a distributed edge computing environment by estimating the reliability of each edge node locally, and it provides fault tolerance to increase overall application availability. The proposed fault-tolerance technique works in decentralized mode, executing new algorithms to handle the problems above. Our experimental results show that our approach is effective and meets the deadlines of latency-aware IoT applications, with a substantial decrease in overall network traffic while ensuring reliability and availability.
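As an illustration of the local-reliability idea, the following sketch estimates each node's reliability from its own execution history and replicates a task on just enough nodes to reach a target availability. The formulas and names are assumptions for illustration, not the paper's algorithms.

```python
# Illustrative sketch: local reliability estimation plus replication-based
# fault tolerance. All formulas are assumed, not taken from the article.

def node_reliability(successes, attempts, smoothing=1):
    """Laplace-smoothed success ratio as a purely local reliability score."""
    return (successes + smoothing) / (attempts + 2 * smoothing)

def replicas_needed(reliabilities, target=0.99):
    """Pick nodes (most reliable first) until the probability that at
    least one replica finishes reaches the availability target."""
    chosen, p_all_fail = [], 1.0
    for r in sorted(reliabilities, reverse=True):
        chosen.append(r)
        p_all_fail *= (1.0 - r)
        if 1.0 - p_all_fail >= target:
            break
    return chosen, 1.0 - p_all_fail

# Four heterogeneous edge nodes with different local histories:
history = [(95, 100), (40, 50), (10, 20), (70, 100)]
rel = [node_reliability(s, a) for s, a in history]
chosen, availability = replicas_needed(rel, target=0.999)
print(f"replicate on {len(chosen)} nodes -> availability {availability:.4f}")
```

The key property is that both functions use only information a node (or the dispatching peer) already holds locally, which matches the decentralized setting described above.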
The growing presence of Renewable Energy Sources (RESs) is heavily affecting the control and management of distribution grids. Battery Energy Storage Systems (BESSs) are increasingly adopted to mitigate the adverse impacts caused by the uncertainty of RES generators. Most of the existing works in the literature aim at using BESSs to maximize the self-consumption of energy provided by RES generators or focus on the mitigation of voltage and/or frequency drift effects. More recently, some research works have also studied the effects that BESSs have on the active power flows at the Point of Common Coupling (PCC) of prosumers. All these studies are based on the definition of proper energy Key Performance Indicators (KPIs) built on heterogeneous information collected from several measurement devices deployed in the field. This also means that the results of such studies could be significantly affected by the quality of the data, particularly concerning the time resolution of the measurements. To highlight this concern, this paper proposes an analysis of the impact of the time resolution of measurements on the evaluation of the absolute power ramp reduction at the PCC of prosumers in the presence of BESS installations. The analysis was performed on real data collected from the eLUX lab of the University of Brescia, Italy, varying the time resolution from 5 s to 15 min. The results demonstrate that the time resolution has a great impact on the evaluation of the considered KPI, leading to completely different performance assessments.
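A small synthetic experiment conveys the effect: the sketch below evaluates the same PCC power profile at resolutions from 5 s to 15 min and shows how a ramp KPI changes with the averaging window. The KPI definition used here (mean absolute ramp between consecutive samples) and the synthetic profile are assumptions for illustration, not the paper's data or exact metric.

```python
# Sketch: the same power profile yields different "absolute power ramp"
# values depending on the measurement time resolution.
import numpy as np

rng = np.random.default_rng(0)
SECONDS = 24 * 3600
p_5s = np.cumsum(rng.normal(0, 0.5, SECONDS // 5))   # synthetic 5 s PCC power

def mean_abs_ramp(power, step_s):
    """Mean absolute ramp, normalized per minute, for a uniform series."""
    return np.mean(np.abs(np.diff(power))) * 60.0 / step_s

def downsample(power, step_s, new_step_s):
    """Average consecutive samples to emulate a coarser meter resolution."""
    k = new_step_s // step_s
    n = (len(power) // k) * k
    return power[:n].reshape(-1, k).mean(axis=1)

for res in (5, 60, 300, 900):                         # 5 s up to 15 min
    series = p_5s if res == 5 else downsample(p_5s, 5, res)
    print(f"{res:>4} s resolution -> ramp KPI {mean_abs_ramp(series, res):.3f} per min")
```

Running this shows the KPI shrinking as averaging smooths out the fast fluctuations, which is exactly the resolution dependence the analysis above quantifies on real data.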
Edge Intelligence is a synergy that seems imperative to conclude the convergence of Edge Computing and the Internet of Things, supporting intelligent applications very close to end users. IoT has pervaded our daily life by making things, interconnected through the Internet, smarter, distributed and more autonomous. The development of intelligent applications in IoT has now started to gain significant attention. The Cloud provides many benefits to IoT devices, including high-performance computing, a storage infrastructure, and processing and analysis of large-scale data, giving IoT the opportunity to be robust, smart and self-configuring. The forthcoming emergence of Edge AI will extend the capabilities of the ‘legacy’ IoT, its potentials, the number of devices and the volumes of data. However, Cloud technologies face accessibility challenges when providing services to end users. For instance, mobile clients move among different places, yet require Cloud services with minimum cost and short response time. The unstable connection between the Cloud and mobile devices can prevent providers from achieving optimal performance. To cope with these limitations, we aspire for Edge AI converged with IoT environments to materialize the desired AI-led distributed and ubiquitous intelligence in real computing systems. Additional effort is then needed to establish and deliver the convergence of Edge AI and IoT in forming the future Intelligent IoT. The Intelligent IoT is envisioned to involve numerous autonomous and distributed computing and AI-driven entities capable of understanding their internal status (context) and the status of their environment and peers (collaborative context), and of taking timely, optimized actions to efficiently serve modern applications, like the tactile internet and augmented AI-led gaming.
ISBN (print): 9781728168234
Nowadays, different industries generate data on a daily basis in amounts growing from terabytes to petabytes. For handling this large amount of data, big data technology has been a great revolution and has impacted the trends of applied science. For the analysis of large volumes of data, Hadoop MapReduce is used across multiple nodes in parallel. The data are stored through HDFS, and Map and Reduce are the two main functions of MapReduce. MapReduce has many shortcomings, so for fast queries and real-time data streams, Spark was introduced. Spark is based on the DAG and RDD approaches. The main purpose of this research is to compare the basic features of Hadoop and Spark, on the basis of which their performance is evaluated.
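A minimal PySpark sketch illustrates the RDD/DAG model mentioned above: each transformation only extends a lineage DAG, and nothing executes until an action triggers the whole graph. It assumes a local Spark installation; "input.txt" is a placeholder path.

```python
# Minimal PySpark word count showing lazy RDD transformations: the DAG
# is only executed when the collect() action runs.
from pyspark import SparkContext

sc = SparkContext("local[*]", "wordcount-sketch")

counts = (sc.textFile("input.txt")                  # lazy: creates an RDD
            .flatMap(lambda line: line.split())     # lazy: extends the DAG
            .map(lambda w: (w, 1))                  # lazy
            .reduceByKey(lambda a, b: a + b))       # lazy, marks a shuffle stage

print(counts.collect())                             # action: DAG executes now
sc.stop()
```

The equivalent Hadoop MapReduce job writes intermediate map output to HDFS between the map and reduce phases, which is the main performance gap such comparisons target.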
The increasing penetration of converter-based generation in many power systems around the world has sparked a discussion about how to operate these power systems with the usual levels of efficiency, reliability and cost-effectiveness. Current grid-following converter-based generators have proven to run stably in parallel to one another, even when there are thousands of them connected in a power system, and even in very small isolated power systems with extremely low system inertia. Discussions around the necessity of additional converter performance, usually under the 'grid-forming' and 'Virtual Synchronous Machines' concepts, have recently been transferred from the academic sphere to national and international industry fora. Formal discussions have started in Great Britain, in Germany and at ENTSO-E level. However, there is still a lot of uncertainty about the real, as opposed to simulated, performance of grid-forming converters, whilst the needs case for requiring this radically different control method has not been adequately justified. With the present paper we raise key questions intended to support an objective discussion about power system needs, grid infeed technologies and their interaction.
The present paper describes a new parallel time-domain simulation algorithm using a high-performance computing environment, Julia, for the analysis of power system dynamics in large networks. The parallel algorithm adapts a parallel-in-space decomposition scheme to a previously sequential algorithm in order to develop a new parallelizable numerical solution of the power system equations. The parallel-in-space decomposition is based on the block bordered diagonal form, which reformulates the network admittance matrix into sub-blocks that can be solved in parallel. For the optimal spatial decomposition of the network, a new extended graph partitioning strategy is developed for load balancing and for minimizing the communication between subnetworks. The new parallel simulation algorithm is tested on standard test networks of varying complexity. The simulation results are compared to those obtained from a sequential implementation in order to validate the solution accuracy and to determine the performance improvement in terms of computational speedup. Test simulations are conducted on the ForHLR II supercomputing cluster and show great potential for computational speedup with increasing network complexity.
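The block bordered diagonal form can be sketched with a toy Schur-complement solve: each diagonal block (one subnetwork) is factored independently, which is the step the parallel-in-space scheme distributes, and a small border system then couples the subnetworks. The NumPy sketch below illustrates the structure only; it is not the paper's Julia implementation, and plain loops stand in for the parallel execution.

```python
# Toy solve of a BBDF system [diag(A_i), B_i; C_i, D] via Schur complement.
import numpy as np

rng = np.random.default_rng(1)

def bbdf_solve(blocks, right, bottom, corner, rhs_blocks, rhs_border):
    schur = corner.copy()
    rhs_b = rhs_border.copy()
    partials = []
    # Independent per-subnetwork solves: this loop is the parallel part.
    for A, B, C, b in zip(blocks, right, bottom, rhs_blocks):
        Y = np.linalg.solve(A, np.column_stack([b, B]))  # one factorization
        y, Z = Y[:, 0], Y[:, 1:]
        schur -= C @ Z                                   # border coupling
        rhs_b -= C @ y
        partials.append((y, Z))
    x_border = np.linalg.solve(schur, rhs_b)             # small sequential step
    return [y - Z @ x_border for y, Z in partials], x_border

n, m = 4, 2   # toy sizes: two 4-node subnetworks, 2 border (cut) nodes
A = [rng.random((n, n)) + n * np.eye(n) for _ in range(2)]
B = [rng.random((n, m)) for _ in range(2)]
C = [rng.random((m, n)) for _ in range(2)]
D = rng.random((m, m)) + m * np.eye(m)
b = [rng.random(n) for _ in range(2)]
bb = rng.random(m)

xs, xb = bbdf_solve(A, B, C, D, b, bb)
print("border solution:", xb)
```

Only the border system is sequential, so the speedup grows with the number and size of subnetworks, which matches the scaling behavior reported above; the graph partitioning strategy decides where the cut (border) nodes go.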
ISBN (print): 9781665414401
For an AC/DC flexibly interconnected distribution network, it is necessary to analyze the impact of the AC side on the DC side and of the DC side on the AC side, and to consider the effect of load fluctuations or photovoltaic access, so as to comprehensively analyze the power flow distribution of the AC and DC systems. In this paper, models of the equipment relevant to distribution-network power flow calculation are established on a graph database. Using the basic principles of a graph-parallel processing model and building on the Newton-Raphson method, an alternating iteration scheme is used to design the AC/DC graph power flow calculation algorithm. Based on a distributed graph computing cluster system, an online AC/DC hybrid power flow calculation program is developed, realizing online, real-time operation of the AC/DC hybrid power flow calculation. The software is reliable and simple, fully meets the needs of the project, and thus solves the practical problem of power flow analysis and calculation in real projects.
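To illustrate the alternating-iteration structure (not the paper's graph-database implementation), the toy sketch below runs a Newton-Raphson AC power flow for a two-bus system and a simple DC-side update in turn, exchanging the converter power at the coupling point until it stops changing. All network data and the converter loss model are made-up assumptions.

```python
# Toy alternating iteration between an AC Newton-Raphson solve and a
# DC-side converter power update. Data and loss model are illustrative.
import numpy as np

G, B = 1.0, -8.0           # AC line admittance Y = G + jB (p.u.)
P_LOAD, Q_LOAD = 0.6, 0.2  # AC load at bus 2 (p.u.)
P_DC = 0.3                 # power the DC grid draws through the converter
LOSS_C = 0.05              # converter loss coefficient (assumption)

def ac_newton(p_conv, tol=1e-8):
    """Solve the 2-bus AC flow for (theta2, V2), with bus-2 demand
    P_LOAD + p_conv, using Newton-Raphson and a numeric Jacobian."""
    def mismatch(x):
        th, v = x
        p = v * v * G - v * (G * np.cos(th) + B * np.sin(th))
        q = -v * v * B - v * (G * np.sin(th) - B * np.cos(th))
        return np.array([-(P_LOAD + p_conv) - p, -Q_LOAD - q])
    x = np.array([0.0, 1.0])                   # flat start
    for _ in range(20):
        f = mismatch(x)
        if np.max(np.abs(f)) < tol:
            break
        J = np.empty((2, 2))                   # finite-difference Jacobian
        for j in range(2):
            dx = np.zeros(2)
            dx[j] = 1e-6
            J[:, j] = (mismatch(x + dx) - f) / 1e-6
        x = x - np.linalg.solve(J, f)          # Newton update
    return x

# Alternating iteration: AC solve -> update converter demand -> repeat.
p_conv = P_DC
for it in range(50):
    th2, v2 = ac_newton(p_conv)
    p_new = P_DC + LOSS_C * (P_DC / v2) ** 2   # DC demand + V-dependent losses
    if abs(p_new - p_conv) < 1e-10:
        break
    p_conv = p_new
print(f"converged in {it + 1} outer iterations: V2={v2:.4f}, theta2={th2:.4f}")
```

The outer loop mirrors the alternating scheme described above: each side is solved with the other side's latest boundary power held fixed, and convergence is declared when the exchanged quantity stabilizes.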