Designing scheduling algorithms that work in synergy with TCP is a challenging problem in wireless networks. Extensive research on scheduling algorithms has focused on inelastic traffic, where there is no correlation between traffic dynamics and scheduling decisions. In this work, we study the performance of several scheduling algorithms in LTE networks, where the scheduling decisions are intertwined with wireless channel fluctuations to improve the system throughput. We use ns-3 simulations to study the performance of several scheduling algorithms with a specific focus on Max Weight (MW) schedulers with both UDP and TCP traffic, while considering the detailed behavior of OFDMA-based resource allocation in LTE networks. We show that, contrary to its performance with inelastic traffic, MW schedulers may not perform well in LTE networks in the presence of TCP traffic, as they are agnostic to the TCP congestion control mechanism. We then design a new scheduler called “Queue MW” (Q-MW) which is tailored specifically to TCP dynamics by giving higher priority to TCP flows whose queue at the base station is very small in order to encourage them to send more data at a faster rate. We have implemented Q-MW in ns-3 and studied its performance in a wide range of network scenarios in terms of queue size at the base station and round-trip delay. Our simulation results show that Q-MW achieves peak and average throughput gains of 37% and 10% compared to MW schedulers if tuned properly.
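As a rough illustration of the Q-MW idea described above, the sketch below contrasts a classic Max Weight metric with a Q-MW-style metric that strictly prioritizes flows whose base-station queue is very small; the threshold and the tie-breaking rule are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch: classic Max Weight (MW) vs. a Q-MW-style metric, assuming
# per-flow queue lengths at the eNodeB and per-resource-block achievable rates.
# The q_threshold value is an illustrative assumption, not taken from the paper.

def mw_metric(queue_bytes, rate_bps):
    """Classic Max Weight: weight = backlog x instantaneous achievable rate."""
    return (0, queue_bytes * rate_bps)

def q_mw_metric(queue_bytes, rate_bps, q_threshold=3000):
    """Q-MW-style metric: flows whose base-station queue is very small get a
    higher priority tier so TCP congestion control can ramp up faster; within
    a tier, ties are broken by rate or by the MW weight."""
    if queue_bytes < q_threshold:
        return (1, rate_bps)
    return (0, queue_bytes * rate_bps)

def schedule_rb(flows, metric):
    """Assign one resource block to the flow with the largest metric.
    `flows` is a list of (flow_id, queue_bytes, rate_bps) tuples."""
    return max(flows, key=lambda f: metric(f[1], f[2]))[0]

flows = [("tcp-1", 500, 2e6), ("tcp-2", 80_000, 1e6)]
print(schedule_rb(flows, mw_metric))    # -> "tcp-2": MW favours the big backlog
print(schedule_rb(flows, q_mw_metric))  # -> "tcp-1": Q-MW boosts the small queue
```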
ISBN (digital): 9798350368239
ISBN (print): 9798350368246
With the rapid development of network applications, network traffic keeps increasing, and traditional traffic scheduling methods can no longer meet the demand for efficient and flexible networks. This paper proposes a dynamic traffic scheduling algorithm based on Huawei network devices to optimize network resource utilization, reduce delay, and avoid network congestion. First, the traffic control characteristics of Huawei routers and switches are examined, and the five existing traffic scheduling methods are analyzed and evaluated. A dynamic scheduling mechanism based on priority queues is then designed that monitors network status in real time and automatically adjusts bandwidth allocation according to different traffic patterns. Second, the effectiveness of the algorithm in handling burst traffic and large-scale data transmission is verified through simulation experiments. The results show that the proposed dynamic traffic scheduling algorithm significantly improves data transmission efficiency and enhances overall network performance. Finally, the paper discusses potential applications of the algorithm in real network environments and directions for future improvement, providing a basis for further research.
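A minimal sketch of the kind of priority-queue mechanism described above, assuming a per-interval view of monitored backlogs; the class names, scheduling interval, and best-effort leftover rule are illustrative and not Huawei's actual implementation.

```python
# Illustrative priority-queue bandwidth allocation driven by monitored backlogs.
# All parameters (interval, class names, capacities) are assumptions.

from dataclasses import dataclass

@dataclass
class TrafficClass:
    name: str
    priority: int          # smaller value = higher priority
    queue_bytes: int       # monitored backlog for this class

def allocate_bandwidth(classes, link_capacity_bps, interval_s=0.01):
    """Serve classes in strict priority order up to the rate needed to drain
    their monitored backlog within one scheduling interval; leftover capacity
    goes to the lowest-priority class as best effort."""
    allocation = {}
    remaining = link_capacity_bps
    for c in sorted(classes, key=lambda c: c.priority):
        demand_bps = c.queue_bytes * 8 / interval_s
        allocation[c.name] = min(demand_bps, remaining)
        remaining -= allocation[c.name]
    if remaining > 0 and classes:
        lowest = max(classes, key=lambda c: c.priority)
        allocation[lowest.name] += remaining
    return allocation

classes = [TrafficClass("voice", 0, 2_000),
           TrafficClass("video", 1, 800_000),
           TrafficClass("bulk", 2, 200_000)]
print(allocate_bandwidth(classes, 1_000_000_000))
```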
The development of 3D integration technology significantly improves the bandwidth of network-on-chip (NoC) systems. However, the high integration density enabled by 3D technology also raises severe concerns about temperature increase, which may impair system reliability and degrade performance. Task scheduling has been regarded as an effective approach for eliminating thermal hotspots without introducing hardware overhead. However, centralized thermal-aware task scheduling algorithms for 3D-NoC have been limited by the high computational complexity they incur as the system scale increases. In this paper, we propose a distributed, agent-based thermal-aware task scheduling algorithm for 3D-NoC that offers high scheduling efficiency and high scalability. Experimental results show that, compared to centralized algorithms, our algorithm achieves up to a 13 °C reduction in the peak temperature of the system without sacrificing performance.
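To make the distributed, agent-based idea concrete, the sketch below shows one plausible local bidding rule in which each node's agent scores an incoming task from its own temperature and load, so no centralized scheduler is needed; the weighting constants are assumptions for illustration, not the algorithm from the paper.

```python
# Distributed, agent-style thermal-aware assignment sketch for a 3D-NoC:
# each node computes a local score, and the task goes to the lowest bidder.
# alpha and beta are illustrative weights.

def local_score(temperature_c, load, alpha=1.0, beta=20.0):
    """Lower score = better candidate: penalize hot and busy nodes."""
    return alpha * temperature_c + beta * load

def agent_bid(node_id, temperature_c, load):
    """Each agent returns its bid for the task."""
    return (local_score(temperature_c, load), node_id)

# Three nodes bid for one task; the coolest, least-loaded node wins.
bids = [agent_bid("n0", 72.0, 0.9),
        agent_bid("n1", 55.0, 0.4),
        agent_bid("n2", 60.0, 0.2)]
winner = min(bids)[1]
print(winner)   # -> "n1" with these example weights
```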
Active replication requires deterministic execution in each replica in order to keep them consistent. Debugging and testing need deterministic execution in order to avoid data races and "Heisenbugs". Besides input, multi-threading constitutes a major source of nondeterminism. Several deterministic scheduling algorithms exist that allow concurrent but deterministic executions. Yet, these algorithms seem to be very different; some of them were even developed without knowledge of the others. In this paper, we present the novel and flexible Unified Deterministic scheduling algorithm (UDS) for weakly and fully deterministic systems. Compared to existing algorithms, UDS has a broader parameter set, allowing for many configurations that can be used to adapt to a given workload. For the first time, UDS defines reconfiguration of a deterministic scheduler at run-time. Further, we informally show that existing algorithms can be imitated by a particular configuration of UDS, demonstrating its generality.
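The sketch below shows the core idea that deterministic schedulers of this kind build on: threads take turns in a fixed logical order, so every run interleaves identically. It is a minimal illustration, not UDS itself, and the class and method names are invented for the example.

```python
# Minimal deterministic turn-based scheduling sketch (not UDS): threads may run
# concurrently, but shared-state accesses happen in a fixed logical order.

import threading

class DeterministicTurns:
    """Grants 'turns' strictly in the order of logical thread IDs."""
    def __init__(self, num_threads):
        self.turn = 0
        self.num_threads = num_threads
        self.cond = threading.Condition()

    def wait_turn(self, my_id):
        with self.cond:
            while self.turn != my_id:
                self.cond.wait()

    def pass_turn(self):
        with self.cond:
            self.turn = (self.turn + 1) % self.num_threads
            self.cond.notify_all()

turns = DeterministicTurns(num_threads=2)
log = []

def worker(my_id, rounds=3):
    for _ in range(rounds):
        turns.wait_turn(my_id)
        log.append(my_id)      # deterministic: always 0, 1, 0, 1, 0, 1
        turns.pass_turn()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(log)
```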
With the advancement of Internet of Things technology, Wireless Sensor-Actuator Networks (WSANs) have been widely applied in industrial process monitoring and control. However, due to the susceptibility of wireless co...
ISBN (digital): 9781728199986
ISBN (print): 9781728199993
Efficient GPU scheduling is the key to minimizing the execution time of Deep Learning (DL) training workloads. DL training system schedulers typically allocate a fixed number of GPUs to each job, which inhibits high resource utilization and often extends the overall training time. The recent introduction of schedulers that can dynamically reallocate GPUs has achieved better cluster efficiency. This dynamic nature, however, introduces additional overhead by terminating and restarting jobs, or requires modification to the DL training framework. We propose and develop an efficient, non-intrusive GPU scheduling framework that employs a combination of an adaptive GPU scheduler and an elastic GPU allocation mechanism to reduce the completion time of DL training workloads and improve resource utilization. Specifically, the adaptive GPU scheduler includes a scheduling algorithm that uses training job progress information to determine the most efficient allocation and reallocation of GPUs for incoming and running jobs at any given time. The elastic GPU allocation mechanism works in concert with the scheduler. It offers a lightweight and non-intrusive method to reallocate GPUs based on a "SideCar" process that temporarily stops and restarts the job's DL training process with a different number of GPUs. We implemented the scheduling framework as plugins in Kubernetes and conducted evaluations on two 16-GPU clusters with multiple training jobs based on TensorFlow. Results show that our proposed scheduling framework reduces the overall execution time and the average job completion time by up to 45% and 63%, respectively, compared to the Kubernetes default scheduler. Compared to a termination-based scheduler, our framework reduces the overall execution time and the average job completion time by up to 20% and 37%, respectively.
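As a hedged sketch of progress-aware elastic allocation (not the paper's actual scheduling algorithm), the snippet below redistributes a fixed GPU pool so that jobs with more remaining work receive more GPUs, with every job keeping at least one; the per-job cap and the notion of "remaining work" are assumptions.

```python
# Toy progress-aware GPU reallocation: jobs with more remaining work get more
# GPUs from the spare pool, bounded by a per-job maximum. Illustrative only.

def reallocate(jobs, total_gpus, max_per_job=8):
    """`jobs` maps job_id -> estimated remaining work.
    Returns job_id -> GPU count; every running job keeps at least one GPU."""
    alloc = {j: 1 for j in jobs}                     # baseline: one GPU each
    spare = total_gpus - len(jobs)
    total_remaining = sum(jobs.values()) or 1
    for job_id, remaining in sorted(jobs.items(), key=lambda kv: -kv[1]):
        share = round(spare * remaining / total_remaining)
        extra = min(share, max_per_job - 1, total_gpus - sum(alloc.values()))
        alloc[job_id] += extra
    return alloc

print(reallocate({"job-a": 120, "job-b": 30, "job-c": 10}, total_gpus=16))
# -> {'job-a': 8, 'job-b': 3, 'job-c': 2}; any leftover GPUs stay free
```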
We present Carousel-EDF, a new hierarchical scheduling algorithm for a system of identical processors, and its overhead-aware schedulability analysis based on demand bound functions. Carousel-EDF is an offshoot of NPS-F and preserves its utilization bounds, which are the highest among algorithms that are not based on a single dispatching queue and that incur few preemptions. Furthermore, with respect to NPS-F, Carousel-EDF reduces by up to 50% the number of context switches and of preemptions caused by the high-level scheduler itself. The schedulability analysis we present in this paper is grounded in a prototype implementation of Carousel-EDF that uses a new implementation technique for the release of periodic tasks. This technique reduces the pessimism of the schedulability analysis and can be applied, with similar benefits, to other scheduling algorithms such as NPS-F.
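The analysis builds on demand bound functions; for reference, the sketch below computes the classic dbf of a sporadic task and a simple sufficient demand check. This is only the standard building block, not Carousel-EDF's overhead-aware test.

```python
# Classic demand bound function for a sporadic task with WCET C, period T and
# relative deadline D, plus a sufficient demand check over a finite horizon.

import math

def dbf(C, T, D, t):
    """Maximum execution demand of the task in any interval of length t."""
    if t < D:
        return 0
    return (math.floor((t - D) / T) + 1) * C

def edf_demand_ok(tasks, horizon, capacity=1.0):
    """Sufficient check: total demand never exceeds the available processing
    capacity for any interval length up to `horizon` (checked at whole units)."""
    return all(sum(dbf(C, T, D, t) for C, T, D in tasks) <= capacity * t
               for t in range(1, horizon + 1))

tasks = [(1, 4, 4), (2, 6, 6)]            # (C, T, D) per task
print(edf_demand_ok(tasks, horizon=24))   # True: utilization 1/4 + 2/6 < 1
```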
ISBN (print): 9781467382250
A key feature of the second-generation standard for satellite video broadcasting (DVB-S2) is the adoption of Adaptive Coding and Modulation (ACM) technology and the Generic Stream Encapsulation (GSE) standard. In order to push system performance toward the Shannon limit, suitable radio resource management (RRM) that exploits these key features is needed. Packet scheduling mechanisms in particular play a fundamental role in achieving better RRM, because they are responsible for choosing, at a fine time granularity, how to distribute the satellite resources among the different terrestrial stations, taking into account channel conditions and quality-of-service requirements. This goal should be accomplished by providing an optimal trade-off between spectral efficiency, given the limited resources of satellite systems, and the quality-of-service requirements. In this context, this paper provides an overview of the key issues that arise when using ACM technology and GSE encapsulation in the design of a scheduler that allocates satellite resources. Moreover, a survey of the most recent scheduling techniques is reported, including a comparison between the different approaches presented in the literature.
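As an illustration of the trade-off described above (and not a standardized DVB-S2 scheduler), the sketch below scores each terminal by blending the spectral efficiency of its current MODCOD with the urgency of its head-of-line packet; the weighting and the normalization constant are assumptions.

```python
# Toy ACM-aware scheduling score: blend spectral efficiency with QoS urgency.
# The 4.5 bit/s/Hz normalizer roughly corresponds to the highest DVB-S2 MODCODs.

def score(spectral_eff_bps_hz, head_of_line_delay_ms, delay_budget_ms, w=0.5):
    """Combine throughput efficiency and deadline urgency, both mapped to [0, 1]."""
    urgency = min(1.0, head_of_line_delay_ms / delay_budget_ms)
    efficiency = spectral_eff_bps_hz / 4.5
    return w * efficiency + (1 - w) * urgency

terminals = [
    # (name, spectral efficiency of current MODCOD, HoL delay ms, delay budget ms)
    ("st-1", 4.0, 10.0, 150.0),    # good channel, relaxed deadline
    ("st-2", 1.0, 140.0, 150.0),   # poor channel, about to miss its deadline
]
best = max(terminals, key=lambda t: score(t[1], t[2], t[3]))
print(best[0])   # with w=0.5 the urgent terminal ("st-2") wins despite its MODCOD
```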
Data-adaptable reconfigurable embedded systems enable a flexible runtime implementation in which a system can transition the execution of tasks between hardware and software while simultaneously continuing to process data during the transition. Efficient runtime scheduling of task transitions is needed to optimize system throughput and latency of the reconfiguration and transition periods. In this paper, we present and analyze several runtime transition scheduling algorithms and highlight the latency and throughput tradeoffs for an example system.
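One simple way to picture such a transition schedule (an illustrative heuristic, not one of the algorithms analyzed in the paper) is to order hardware transitions by throughput gain per unit of reconfiguration time, so the pipeline improves as early as possible while data keeps flowing.

```python
# Greedy transition ordering: migrate tasks to hardware in decreasing order of
# throughput gain per unit of reconfiguration time. Example numbers are invented.

def order_transitions(tasks):
    """`tasks`: list of (name, throughput_gain, reconfig_time).
    Returns the order in which to perform hardware transitions."""
    return sorted(tasks, key=lambda t: t[1] / t[2], reverse=True)

tasks = [("fir_filter", 40.0, 2.0), ("fft", 90.0, 6.0), ("crc", 5.0, 0.5)]
print([name for name, _, _ in order_transitions(tasks)])
# -> ['fir_filter', 'fft', 'crc'] with these example numbers
```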
ISBN (print): 9781479951529
In public Infrastructure-as-a-Service (IaaS), virtual machines, servers, storage, and networks are provided by cloud service providers. For a cloud service provider facing time-constrained tasks, how to schedule service resources to achieve the lowest cost is becoming more and more important. Most recent work on MapReduce task scheduling focuses on homogeneous MapReduce frameworks. In this paper, we present an ILP formulation for solving the time-constrained MapReduce task scheduling problem in a heterogeneous environment. The formulation considers processing speed, energy cost, and time constraints at the same time, so that the task finishes on time while achieving the lowest energy cost. We also solve the problem efficiently using a genetic algorithm (GA). According to our experimental results, the proposed ILP formulation always achieves the best solution and reduces energy consumption by 10.15% compared to the genetic algorithm.
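To show the shape of the decision problem (a toy stand-in, not the paper's ILP formulation), the sketch below brute-forces the assignment of map tasks to heterogeneous machines that minimizes energy while meeting the deadline; the machine speeds, power draws, and task sizes are invented.

```python
# Toy version of the energy-minimal, deadline-constrained assignment problem:
# brute force stands in for an ILP solver. All numbers are invented.

from itertools import product

machines = {                    # machine name: (processing speed, power draw in W)
    "fast": (2.0, 200.0),
    "slow": (1.0, 80.0),
}
task_sizes = [4.0, 6.0, 2.0]    # work units per map task
deadline = 8.0                  # time constraint on every machine's finish time

def evaluate(assignment):
    """Return (feasible, total energy) for one machine name per task."""
    finish = {m: 0.0 for m in machines}
    energy = 0.0
    for size, m in zip(task_sizes, assignment):
        speed, power = machines[m]
        runtime = size / speed
        finish[m] += runtime
        energy += runtime * power
    return all(t <= deadline for t in finish.values()), energy

candidates = []
for assignment in product(machines, repeat=len(task_sizes)):
    feasible, energy = evaluate(assignment)
    if feasible:
        candidates.append((energy, assignment))
print(min(candidates))   # -> (1040.0, ('fast', 'slow', 'slow'))
```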