At present, the traditional low-carbon scheduling method for the thermal storage capacity of thermal power units schedules that capacity by establishing a thermal-power coordination model. Due to th...
ISBN:
(Print) 9798331541378
In a distributed quantum computation, a large quantum circuit is sliced into sub-circuits that must be executed at the same time on a quantum computing cluster. The interactions between the sub-circuits are usually defined in terms of non-local gates that require shared entangled pairs and classical communication between different nodes. Assuming that multiple end users submit distributed quantum computing (DQC) jobs to the cluster, an execution management problem arises. This is in fact a parallel job scheduling problem, in which a set of jobs of varying processing times must be scheduled on multiple machines while minimizing the length of the schedule. In a previous work, we began investigating the problem by considering random circuits and approximating the length of each DQC job with the number of layers of the circuit. In this work, we advance the study by considering a more realistic model for estimating DQC job lengths and by performing evaluations with circuits of practical interest.
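The execution management problem described here is the classical makespan-minimization problem on identical parallel machines. As an illustration only (not the authors' scheduler), the following Python sketch applies the longest-processing-time-first (LPT) greedy heuristic, assuming each DQC job's length has already been estimated by some length model.

```python
import heapq

def lpt_schedule(job_lengths, num_machines):
    """Greedy LPT heuristic for makespan minimization on identical machines.

    job_lengths: estimated execution time of each DQC job (e.g. circuit depth
                 or a more refined length model, as discussed in the abstract).
    num_machines: number of quantum nodes in the cluster.
    Returns (makespan, assignment), where assignment[m] lists job indices on node m.
    """
    # Sort jobs by decreasing length and always place the next job on the
    # currently least-loaded node (min-heap keyed by accumulated load).
    jobs = sorted(enumerate(job_lengths), key=lambda j: -j[1])
    heap = [(0.0, m) for m in range(num_machines)]  # (load, node id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_machines)]
    for job_id, length in jobs:
        load, m = heapq.heappop(heap)
        assignment[m].append(job_id)
        heapq.heappush(heap, (load + length, m))
    makespan = max(load for load, _ in heap)
    return makespan, assignment

# Example: five DQC jobs with estimated lengths, scheduled on two quantum nodes.
print(lpt_schedule([7, 3, 5, 2, 6], 2))
```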
ISBN:
(Print) 9783031814037; 9783031814044
In the dispersion problem, a group of k ≤ n mobile robots, initially placed on the vertices of an anonymous graph G with n vertices, must redistribute themselves so that each vertex hosts no more than one robot. We address this challenge on an anonymous triangular grid graph, where each vertex can connect to up to six adjacent vertices. We propose a distributed deterministic algorithm that achieves dispersion on an unoriented triangular grid graph in O(√n) time, where n is the number of vertices. Each robot requires O(log n) bits of memory. The time complexity of our algorithm and the memory usage per robot are optimal. This work builds on previous studies by Kshemkalyani et al. [WALCOM 2020 [17]] and Banerjee et al. [ALGOWIN 2024 [3]]. Importantly, our algorithm terminates without requiring prior knowledge of n and resolves a question posed by Banerjee et al. [ALGOWIN 2024 [3]].
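For intuition on why these bounds are optimal, the following is a sketch of the standard lower-bound argument, stated under the assumption of a roughly square grid; it is not reproduced from the paper itself.

```latex
% A triangular grid graph on $n$ vertices that is roughly square has diameter
% $\Theta(\sqrt{n})$. If all $k$ robots start on a single vertex, only
% $O(d^2)$ vertices lie within distance $d$, so some robot must travel
% $\Omega(\sqrt{k})$ hops before every vertex holds at most one robot; for
% $k = \Theta(n)$ this gives an $\Omega(\sqrt{n})$ time lower bound. Likewise,
% $\Omega(\log n)$ bits per robot are needed just to hold a hop counter up to
% the diameter (or a distinct label), which is why $O(\sqrt{n})$ time and
% $O(\log n)$ memory are simultaneously optimal.
\[
  T_{\mathrm{disp}} = \Omega(\sqrt{n}), \qquad M_{\mathrm{robot}} = \Omega(\log n).
\]
```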
ISBN:
(Print) 9798350363074; 9798350363081
We herein briefly describe a multilevel approach to analyzing the performance of parallel algorithms. The main outcome of this approach is that the algorithm is described using a set of operators related to each other according to the problem decomposition. A set of block matrices (called decomposition and execution matrices) highlights fundamental characteristics of the algorithm, such as inherent parallelism and sources of overhead, together with all the involved factors and their relationships, in a general but modular, flexible and adaptive fashion. The work aims to show how we can rewrite the well-known Ware-Amdahl law: in a previous work we already gave an expression for Amdahl's law in our framework but, although equivalent in meaning, it was not immediately comparable with the original one. Here we focus on that law and show how it follows exactly from our parameters. Moreover, we extend the law with a more general expression, which we call the Generalized Amdahl's Law: the classical law emerges as a particular case of the generalized one.
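For reference, the classical law the abstract builds on (with the spelling corrected to "Amdahl") is shown below; the paper's operator-based and generalized forms are not reproduced here.

```latex
% Classical (Ware-)Amdahl speedup: a fraction f of the work is perfectly
% parallelizable over p processors, the remaining 1 - f is strictly serial.
\[
  S(p) \;=\; \frac{T(1)}{T(p)} \;=\; \frac{1}{(1-f) + \dfrac{f}{p}},
  \qquad
  \lim_{p \to \infty} S(p) \;=\; \frac{1}{1-f}.
\]
% Example: with f = 0.9 and p = 10 processors,
% S(10) = 1 / (0.1 + 0.09) = 1 / 0.19 \approx 5.26.
```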
ISBN:
(Print) 9798350318562; 9798350318555
The escalating cyber-attacks targeting power infrastructure underscore the critical importance of smart grid security. However, existing solutions often struggle to balance security against performance overhead, leading to suboptimal protection or increased operational latency. To address this, we propose an intrusion detection system (IDS) designed to operate within P4-based programmable network devices, enabling real-time identification of critical attacks such as distributed denial-of-service (DDoS) and false data injection (FDI). Central to our approach is a novel data structure optimized for time series data, capturing key information such as packet timing and data payload distribution. Leveraging decision trees, a robust machine learning technique, enables effective anomaly detection and prediction. Additionally, we integrate data compression techniques to reduce device memory usage while maintaining detection accuracy. Our evaluation results demonstrate minimal overhead in packet processing speed, with differences of 1 to 20 nanoseconds per packet, and enhanced data storage efficiency, with compression ratios reaching up to 60.9%. Despite these optimizations, there is only a slight decrease in detection accuracy, such as a 2.81% drop in detecting FDI attacks.
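As an illustration of the decision-tree component only, the Python sketch below trains a shallow tree offline on synthetic per-flow timing and payload-distribution features (the feature names and data are hypothetical); the paper's in-switch P4 deployment and compression steps are not shown.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic per-flow features: mean inter-arrival time (s), inter-arrival
# variance, and mean payload byte value -- stand-ins for the packet-timing and
# payload-distribution summaries described in the abstract.
n = 2000
benign = np.column_stack([rng.normal(0.05, 0.01, n),
                          rng.normal(1e-4, 2e-5, n),
                          rng.normal(90, 10, n)])
attack = np.column_stack([rng.normal(0.002, 0.001, n),   # DDoS-like burst timing
                          rng.normal(1e-6, 5e-7, n),
                          rng.normal(140, 25, n)])
X = np.vstack([benign, attack])
y = np.concatenate([np.zeros(n), np.ones(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Shallow tree: a small depth keeps the rule set within switch memory budgets.
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```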
ISBN:
(Print) 9798350369458; 9798350369441
Adopting Digital Twin (DT) technology in vehicular edge computing (VEC) enables efficient capture of real-time state information of applications, thereby addressing complex task scheduling problems. Existing literature considered only minimizing service latency for task offloading; however, there is room for exploring strategies to enhance user Quality of Experience (QoE) in the timeliness and reliability domains. In this paper, we develop an optimization framework using Mixed Integer Linear Programming (MILP), namely QuETOD, which minimizes service latency by allocating task execution responsibility to highly reliable and reputed vehicles in a DT-enabled VEC environment. The QuETOD framework clusters the vehicles based on the demand-supply theory of economics by considering computing resources, and utilizes multi-weighted subjective logic to obtain proper reputation updates for the vehicles. The experimental results of the developed QuETOD system show performance improvements in QoE and reliability of up to 15% and 25%, respectively, compared to state-of-the-art works.
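To make the MILP flavor of such a framework concrete, here is a minimal PuLP sketch in the same spirit: tasks are assigned to vehicles to minimize reputation-weighted service latency. The data, the reputation weighting, and the capacity constraint are all assumptions for illustration; this is not the QuETOD formulation from the paper.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

tasks = ["t1", "t2", "t3"]
vehicles = ["v1", "v2"]
latency = {("t1", "v1"): 4, ("t1", "v2"): 6,   # estimated service latency (ms)
           ("t2", "v1"): 5, ("t2", "v2"): 3,
           ("t3", "v1"): 7, ("t3", "v2"): 5}
reputation = {"v1": 0.9, "v2": 0.7}            # hypothetical reputation scores
capacity = {"v1": 2, "v2": 2}                  # max concurrent tasks per vehicle

prob = LpProblem("toy_task_offloading", LpMinimize)
x = {(t, v): LpVariable(f"x_{t}_{v}", cat=LpBinary) for t in tasks for v in vehicles}

# Objective: latency penalized by low reputation (higher reputation -> lower cost).
prob += lpSum(latency[t, v] / reputation[v] * x[t, v] for t in tasks for v in vehicles)

for t in tasks:                                 # each task is assigned exactly once
    prob += lpSum(x[t, v] for v in vehicles) == 1
for v in vehicles:                              # respect per-vehicle capacity
    prob += lpSum(x[t, v] for t in tasks) <= capacity[v]

prob.solve()
print([(t, v) for (t, v), var in x.items() if value(var) == 1])
```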
ISBN:
(Print) 9798350369458; 9798350369441
Fog computing, an evolution of cloud computing, has become increasingly popular for its ability to lessen the burden on the centralized cloud by distributing tasks generated by IoT devices across fog layers. Effectively managing real-time, delay-sensitive, and diverse IoT applications to enhance the Quality-of-Experience (QoE) presents significant challenges due to the dispersed nature and limited resources of fog nodes. Previous studies on task offloading in fog computing have typically focused on either energy consumption or service delay. This paper introduces an optimization framework for task offloading within fog computing environments that aims to balance improved user QoE with reduced energy consumption, employing Mixed-Integer Linear Programming (MILP). Given the NP-hard nature of this framework, we devise a Deep Q-Learning (DQL) based model for task offloading, termed ELTO-DQL, which aims for near-optimal solutions in polynomial time. Experimental results indicate that the ELTO-DQL model enhances energy efficiency and QoE by up to 19% and 15%, respectively, outperforming contemporary benchmarks.
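As a compact stand-in for the learning component, the sketch below runs tabular Q-learning on a toy local-vs-fog offloading decision. The paper uses Deep Q-Learning with a neural approximator; this simplified variant only illustrates the Q-update rule, and the states, actions, and reward are assumptions.

```python
import random

ACTIONS = ["local", "fog"]
STATES = ["light_load", "heavy_load"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    # Hypothetical reward blending energy and QoE: offloading pays off under
    # heavy load, local execution under light load.
    if state == "heavy_load":
        return 1.0 if action == "fog" else -0.5
    return 1.0 if action == "local" else -0.2

alpha, gamma, eps = 0.1, 0.9, 0.1
state = random.choice(STATES)
for _ in range(5000):
    # Epsilon-greedy action selection.
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    r = reward(state, action)
    next_state = random.choice(STATES)           # toy environment transition
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
    state = next_state

# Learned policy: offload under heavy load, run locally under light load.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES})
```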
ISBN:
(Print) 9798331530372; 9798331530365
Current parallel resource allocation models for power network query engine tasks are mostly based on informatization model processing, and their parallel resource allocation efficiency is relatively low, resulting in a decreasing allocation frequency. For this reason, we design and analyze a parallel resource allocation method for power network query engine tasks based on a cross-platform communication protocol. Real-time resources are first collected and preprocessed, resource feature points are matched, and a cross-platform communication protocol is adopted to improve the allocation efficiency of parallel resources; the protocol is then built into the parallel resource allocation model of power network query engine tasks, and adaptive adjustment is used to carry out resource allocation. Test results show that, compared with a cloud-edge dynamic measurement resource allocation method and an intelligent cooperative user scheduling and resource allocation method, the proposed method achieves a relatively high allocation frequency, indicating that its real-time processing is well targeted and that the allocation effect is significantly improved, giving it practical application value and innovative significance.
With the continuous development of the electricity market, the scale of power terminal access continues to expand, and distributed photovoltaic collection and control strategies have become an important research hotsp...
ISBN:
(Print) 9798350369458; 9798350369441
Occupancy refers to the presence of people in rooms and buildings. It is an essential input for IoT applications, including controlling lighting, heating, and access, and monitoring space limitation policies. Occupancy information can also be used to improve users' comfort and to reduce energy waste in buildings. This paper evaluates the performance and resource consumption of recent machine learning techniques for occupancy detection and measurement by exploiting data from distributed environmental sensors. This evaluation is founded on a dataset captured by our dedicated sensor network for indoor monitoring, comprising temperature, humidity, and carbon dioxide (CO2) sensors. Using different sensor modalities and spatio-temporal data selections, we compare eight classification algorithms based on the accuracy achieved and the required runtimes. Binary classification for occupancy detection (OD) achieves accuracies over 90% for individual modalities and close to 100% for modality combinations. Multi-class classification for occupancy measurement (OM) shows a clear ranking of the sensor modalities, and gradient boosting algorithms are superior when combining sensor modalities and fusing data from multiple sensors.
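The following Python sketch illustrates this kind of comparison for binary occupancy detection on temperature, humidity, and CO2 features; the data are synthetic and only three of many possible classifiers are shown, so it does not reproduce the paper's dedicated sensor-network dataset or its full set of eight algorithms.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 1500
# Occupied rooms: slightly warmer, more humid, and with noticeably higher CO2.
occupied = np.column_stack([rng.normal(23.5, 0.8, n),   # temperature (degrees C)
                            rng.normal(45, 5, n),       # relative humidity (%)
                            rng.normal(900, 150, n)])   # CO2 (ppm)
empty = np.column_stack([rng.normal(21.5, 0.8, n),
                         rng.normal(38, 5, n),
                         rng.normal(450, 80, n)])
X = np.vstack([occupied, empty])
y = np.concatenate([np.ones(n), np.zeros(n)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Compare a few classifiers on held-out accuracy.
for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("random forest", RandomForestClassifier(random_state=1)),
                  ("gradient boosting", GradientBoostingClassifier(random_state=1))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```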