ISBN:
(Print) 9781479909582
Wireless mesh networks (WMNs) have been developed to answer the needs of many wireless applications. A major limiting factor in the performance of WMNs is the interference between the several communications that occur simultaneously in the same network. To address this limitation, an adequate scheduling algorithm has to be implemented. Hence, this paper focuses on the scheduling problem under the physical interference model, which is known to be NP-hard. We develop and propose two efficient scheduling algorithms, evaluate their performance by simulation in terms of spatial reuse, and compare them with previously proposed algorithms. We show that our algorithms provide high performance with low complexity.
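As a rough illustration of the physical interference model this abstract refers to (a hypothetical sketch, not the paper's algorithms): a set of links may transmit in the same slot only if every receiver's signal-to-interference-plus-noise ratio (SINR) exceeds a threshold. The path-loss exponent, transmit power, noise level, and threshold below are illustrative assumptions.

```python
import math

def sinr_feasible(links, beta, noise, alpha=3.0, power=1.0):
    """Physical-interference-model sketch: links = [(tx, rx)] with 2-D
    coordinates; received power follows a d^-alpha path-loss law. The
    set is feasible iff each receiver's SINR is at least beta."""
    def gain(a, b):
        return power * math.dist(a, b) ** (-alpha)

    for tx, rx in links:
        signal = gain(tx, rx)
        # Interference: power received from every other active transmitter
        interference = sum(gain(t2, rx) for t2, _ in links if t2 != tx)
        if signal / (noise + interference) < beta:
            return False
    return True

# Two well-separated links coexist; two adjacent links do not
print(sinr_feasible([((0, 0), (1, 0)), ((100, 0), (101, 0))],
                    beta=2.0, noise=0.01))   # True
print(sinr_feasible([((0, 0), (1, 0)), ((2, 0), (3, 0))],
                    beta=2.0, noise=0.01))   # False
```

A scheduler under this model partitions the links into a small number of such feasible sets, which is where the spatial-reuse objective comes from.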
ISBN:
(Digital) 9798331504830
ISBN:
(Print) 9798331515621
Adopting Ahead-of-Time (AOT) compilation to improve a programming language's performance shows significant gains in execution time during consecutive program executions. However, such gains come at the cost of spending computing resources on compiling methods. This is especially noticeable when implementing "dynamic-AOT" compilation, where loadable code is generated only for some of the methods and the rest of the program can be interpreted or compiled Just-in-Time. We propose a language-independent algorithm for scheduling partial compilations as an alternative to generating machine code for all the methods in the program at once. By distributing the compilation effort among several program executions, the algorithm reduces the peak execution duration of the runs that perform compilation. Compilation decisions are made so as to allow loading previously compiled code, thereby reducing the execution time of the partially compiled program. This is achieved by learning the program's partial call graph and using this information to guide compilation scheduling and loading decisions. This research uses a WebAssembly runtime equipped with an AOT compiler to study the effects of the proposed algorithm. The AOT compiler uses Eclipse OMR, a language runtime construction toolkit, but the proposed algorithm is independent of the target language and can also be used by other runtime construction toolkits. This paper compares the execution times of a program compiled fully Ahead-of-Time against the proposed compilation scheduling algorithms. The proposed approach significantly reduces the maximum time taken by the compilation runs in most cases and reaches peak performance similar to the traditional AOT compilation approach.
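The core idea of spreading compilation work across runs can be sketched in a few lines (a hypothetical illustration, not the paper's algorithm): each run compiles only a budgeted number of the hottest methods seen in the partial call graph that are not compiled yet. The method names and call counts are invented.

```python
def schedule_partial_compilation(call_counts, compiled, budget):
    """Sketch of per-run compilation scheduling: from the partial call
    graph (reduced here to observed call counts), pick at most `budget`
    of the hottest methods that have no AOT code yet."""
    candidates = [m for m in call_counts if m not in compiled]
    candidates.sort(key=lambda m: -call_counts[m])   # hottest first
    return candidates[:budget]

# Invented profile: "eval" is already compiled from a previous run
counts = {"main": 1, "parse": 40, "eval": 90, "gc": 15}
print(schedule_partial_compilation(counts, compiled={"eval"}, budget=2))
```

Over successive runs the compiled set grows, so each run both loads previously compiled code and contributes a bounded amount of new compilation work.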
An increasing demand for high-performance systems has been observed in both general-purpose and real-time systems, pushing the industry towards a pervasive transition to multi-core platforms. Unfortunately, well-known and efficient scheduling results for single-core systems do not scale well to the multi-core domain. This justifies the adoption of more computationally intensive algorithms, but the complexity and computational overhead of these algorithms limit their applicability to real OSes. We propose an architecture that migrates the burden of multi-core scheduling to a dedicated hardware component. We show that it is possible to mitigate the overhead of complex algorithms while achieving power efficiency and optimizing processor utilization. We develop the idea of "active monitoring" to continuously track the evolution of scheduling parameters as tasks execute on processors. This allows reducing the gap between implementable scheduling techniques and the ideal fluid scheduling model, under the constraints of realistic hardware.
We consider a transmission scheduling problem in which multiple agents receive update information through a shared Time Division Multiple Access (TDMA) channel. To provide timely delivery of update information, the problem asks for a schedule that minimizes the overall Age of Information (AoI). We call this problem the Min-AoI problem. Several special cases of the problem are known to be solvable in polynomial time. Our contribution is threefold. First, we introduce a new job scheduling problem called the Min-WCS problem, and we prove that, for any constant r >= 1, every r-approximation algorithm for the Min-WCS problem can be transformed into an r-approximation algorithm for the Min-AoI problem. Second, we give a randomized 2.619-approximation algorithm, a randomized 3-approximation algorithm, which outperforms the previous one in certain scenarios, and a dynamic-programming-based exact algorithm for the Min-WCS problem. Finally, we prove that the Min-AoI problem is NP-hard.
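The Age of Information objective can be made concrete with a small sketch (hypothetical, not taken from the paper): in a TDMA schedule, each agent's age grows by one per slot and resets after the slot in which it receives its update; the Min-AoI objective sums these ages over all agents and slots.

```python
def total_aoi(schedule, n_agents):
    """Sum of per-slot Age of Information over all agents for a given
    TDMA schedule. schedule[t] is the agent updated in slot t; an
    agent's age resets to 1 right after its update is delivered."""
    age = [1] * n_agents          # assume every agent starts fresh
    total = 0
    for agent in schedule:
        total += sum(age)         # ages observed during this slot
        age = [a + 1 for a in age]
        age[agent] = 1            # update delivered in this slot
    return total

# Round-robin over 2 agents for 4 slots
print(total_aoi([0, 1, 0, 1], 2))   # 11
```

Exhaustively scoring candidate schedules this way is only viable for tiny instances, which is consistent with the NP-hardness result the abstract reports.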
ISBN:
(Print) 9781467307734; 9781467307758
In this paper, we study the problem of link scheduling for multi-hop wireless networks with per-flow delay constraints. Specifically, we are interested in algorithms that maximize the asymptotic decay rate of the probability with which the maximum end-to-end backlog among all flows exceeds a threshold, as the threshold becomes large. We provide both positive and negative results in this direction. By minimizing the drift of the maximum end-to-end backlog in converge-cast on a tree, we design an algorithm, Largest-Weight-First (LWF), that achieves the optimal asymptotic decay rate for the overflow probability of the maximum end-to-end backlog as the threshold becomes large. However, such a drift-minimization algorithm may not exist for general networks. We provide an example in which no algorithm can minimize the drift of the maximum end-to-end backlog. Finally, we simulate the LWF algorithm together with a well-known algorithm (the back-pressure algorithm) and an algorithm that is large-deviations optimal in terms of the sum-queue (the P-TREE algorithm) in converge-cast networks. Our simulation shows that our algorithm performs significantly better, not only in terms of asymptotic decay rate, but also in terms of the actual overflow probability.
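The "largest weight first" flavor of such link scheduling can be loosely sketched as a greedy pass over a conflict graph (a generic illustration under invented inputs, not the paper's LWF, which minimizes the drift of the maximum end-to-end backlog on a tree): repeatedly activate the backlogged link with the largest weight that does not conflict with links already chosen.

```python
def greedy_largest_weight(backlogs, conflicts):
    """Greedy largest-weight-first sketch. backlogs[i] is link i's
    queue length; conflicts[i] is the set of links that interfere
    with link i. Returns the set of links activated this slot."""
    active = []
    for link in sorted(range(len(backlogs)), key=lambda i: -backlogs[i]):
        if backlogs[link] > 0 and not any(link in conflicts[a] for a in active):
            active.append(link)
    return active

# Links 0 and 1 interfere with each other; link 2 is independent
print(greedy_largest_weight([5, 3, 2], {0: {1}, 1: {0}, 2: set()}))
```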
ISBN:
(Digital) 9781728137834
ISBN:
(Print) 9781728137841
With more and more data and services moved to different cloud computing systems, progress in the area of energy consumption and environmentally friendly practices must be made so these advancements do not plateau. The goal of this study is to provide insight into which scheduling algorithms yield environmentally sustainable, commonly referred to as "green", practices for energy efficiency. This is achieved by tracking the similarities and differences of multiple scheduling algorithms, considering various datacenter topologies and their task sizes (MIPS). Trace files produced by the simulator are used to create a visual representation of the observed data and to analyze the energy consumption of the servers and switches (core, aggregation, and access).
ISBN:
(Digital) 9781728132891
ISBN:
(Print) 9781728132907
The possibility of providing mobile devices with fast and affordable Cloud computing (CC) services is a major motivation for Cloudlet technology. However, for this to happen, a suitable scheduling algorithm that minimizes the consumer's (i.e., mobile device's) waiting time under various Cloudlet scenarios needs to be determined. Hence, determining the suitability of a scheduling algorithm with respect to the consumer's waiting time is the main objective of this study. To this end, we first present a brief review of existing scheduling algorithms as a road map towards evaluating the performance of each algorithm. Next, we present the performance evaluation of three popular scheduling algorithms under various Cloudlet scenarios. Specifically, the EdgeCloudSim simulator is used to evaluate the performance of MM1, Priority (PR), and Round Robin (RR) scheduling with respect to the consumer's waiting time. Results obtained from the simulation show a noticeable difference in the performance of the scheduling algorithms for different mobile applications as the number of mobile devices fluctuates.
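The waiting-time metric compared in this study can be illustrated for Round Robin with a small sketch (hypothetical burst times and quantum; not the EdgeCloudSim implementation): waiting time is the time a task spends ready but not running.

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Round Robin sketch: per-task waiting time for tasks that all
    arrive at time 0. Each task runs for at most `quantum` before
    being rotated to the back of the ready queue."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue = deque(range(len(bursts)))
    t = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)       # not done: rotate to the back
        else:
            finish[i] = t
    # waiting time = turnaround time - burst time
    return [finish[i] - bursts[i] for i in range(len(bursts))]

print(rr_waiting_times([5, 3], quantum=2))   # [3, 4]
```

Averaging such per-task values over many tasks gives the consumer waiting time that the study compares across MM1, PR, and RR.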
ISBN:
(Print) 9781509028269
In modern processors, energy savings are achieved using dynamic voltage and frequency scaling (DVFS). For task scheduling, where a task graph representing a program is allocated and ordered on multiple processors, DVFS has been employed to reduce the energy consumption of the generated schedules, hence running the processors at heterogeneous speeds. A prominent class of energy-efficient scheduling algorithms is slack reclamation algorithms, which try to use idle times (slack) to slow down processor speed and save energy. Several such algorithms have been proposed, and under the assumed system model they can achieve considerable energy savings. However, the question arises of how realistic and accurate these algorithms and models are when implemented and executed on real hardware. Can one achieve the promised energy savings? This paper proposes a methodology to investigate these questions and performs a first experimental evaluation of selected slack reclamation algorithms. Using schedules created by three scheduling algorithms for a set of task graphs, we generate code and execute it on a small parallel system. We measure the power consumption, compare the results between the algorithms, and relate them to the expected values.
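The intuition behind slack reclamation can be shown with a toy model (an illustrative sketch, not any of the evaluated algorithms; the cubic power law is the common textbook assumption for dynamic power under DVFS): stretching a task into its adjacent slack lets the frequency drop, and since dynamic power scales roughly with f^3 while time grows only linearly, energy falls.

```python
def reclaimed_energy(wcet, slack, f_max=1.0):
    """Slack-reclamation sketch: a task of length `wcet` (at f_max) is
    stretched to fill `wcet + slack` by lowering the frequency.
    Dynamic power is modeled as P ~ f^3, so energy E = f^3 * time.
    Returns (scaled frequency, energy before, energy after)."""
    f_new = f_max * wcet / (wcet + slack)
    e_before = f_max ** 3 * wcet
    e_after = f_new ** 3 * (wcet + slack)
    return f_new, e_before, e_after

# Equal slack and execution time: frequency halves, energy quarters
print(reclaimed_energy(wcet=4.0, slack=4.0))
```

The paper's point is precisely that such idealized models may diverge from measurements on real hardware, where static power, voltage floors, and switching overheads also matter.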
CPU scheduling plays a vital role in Operating Systems courses for undergraduate students. Understanding CPU scheduling concepts and algorithms positively affects students' further study of the course. However, teaching and learning CPU scheduling algorithms using conventional lectures and textbooks poses difficulties for many teachers and students. First, textbooks often illustrate CPU scheduling algorithms in an incomplete and unclear manner. Second, students solve problems manually and do not receive any immediate feedback on their solutions. Third, due to time restrictions, the teacher has to select only a few small problems. To overcome these problems, we developed a simple visual educational simulator, which can be used as an efficient tool for teaching and learning CPU scheduling algorithms for a single processor. Although this simulation tool is similar to others, it has its own unique features. In this paper, the educational impact, functional capabilities, and features of this simulator are discussed in detail.
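The kind of exercise such a simulator automates can be sketched for the simplest policy, First-Come-First-Served (a minimal illustration with invented arrival and burst times, not the simulator described in the paper):

```python
def fcfs_metrics(arrivals, bursts):
    """First-Come-First-Served sketch: compute per-process waiting and
    turnaround times, the quantities students typically work out by
    hand from a Gantt chart."""
    order = sorted(range(len(arrivals)), key=lambda i: arrivals[i])
    t, waiting, turnaround = 0, {}, {}
    for i in order:
        t = max(t, arrivals[i])        # CPU may sit idle until arrival
        waiting[i] = t - arrivals[i]
        t += bursts[i]
        turnaround[i] = t - arrivals[i]
    return waiting, turnaround

w, ta = fcfs_metrics([0, 1, 2], [4, 3, 1])
print(w)    # {0: 0, 1: 3, 2: 5}
print(ta)   # {0: 4, 1: 6, 2: 6}
```

Immediate, automatically checked results like these are exactly the feedback loop the abstract argues manual textbook exercises lack.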
ISBN:
(Digital) 9798350365306
ISBN:
(Print) 9798350365313
This work presents a comparative analysis of fog and edge computing scheduling algorithms, focusing on their impact on IoT workload optimization. The analysis considers task allocation, load balancing, energy efficiency, latency, and scalability. An advanced experimental setup with diverse workload scenarios simulates real-world IoT environments. Existing algorithms, alongside modifications and novel introductions, are evaluated. Real-world applications showcase practical considerations and the adaptability of these algorithms. Refined metrics capture both immediate and long-term effects on scalability. The results inform future fog and edge computing research, considering emerging technologies. The study identifies key factors for optimizing IoT workloads and highlights the importance of scheduling algorithms for maximizing performance.