Details
ISBN:
(Print) 9781424405268
In the packet-switched domain of the Universal Mobile Telecommunications System (UMTS), scheduling algorithms such as Proportional Fairness (PF) and Round Robin (RR) are used to decide on resource allocation (time and code space) for the users. According to the literature, the PF algorithm provides a significant throughput gain only if the channel variation is slow enough for the scheduler to track, yet fast enough that the scheduler does not have to wait too long to experience constructive fading. In this paper, the authors provide a comparative study of the PF and RR algorithms. Matlab simulations are conducted under a variety of settings and environments to gauge and understand the characteristics of the PF scheduler. Simulation results show that under average channel conditions with fading within 5 dB and a user diversity of 5, PF provides a cell throughput gain of 5% over RR and an individual UE bit-rate gain of 7% over RR. When simulated with a higher SNR margin of 10 dB and a user diversity of 10, PF showed better performance than RR, with a cell throughput gain of 18% and an individual bit-rate gain of 20% over those achieved by RR. This suggests that PF adapts better than RR to increased user diversity and Signal-to-Noise Ratio (SNR) margin. The findings in this paper underscore the critical characteristics of the Proportional Fairness scheduling algorithm under different channel conditions, and they set the stage for further research to optimise the trade-off achieved by PF.
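The core of the PF scheduler described above can be sketched in a few lines: at each time slot, serve the user with the largest ratio of instantaneous achievable rate to exponentially averaged throughput. This is a generic textbook sketch, not the paper's simulation code; the fading model (uniform rates) and the averaging window `tc` are illustrative assumptions.

```python
import random

def pf_schedule(inst_rates, avg_tput):
    """Pick the user maximizing instantaneous rate / average throughput."""
    return max(range(len(inst_rates)), key=lambda u: inst_rates[u] / avg_tput[u])

def simulate(num_users=5, slots=1000, tc=100, seed=0):
    """Toy PF simulation over i.i.d. fading; returns total bits served per user."""
    rng = random.Random(seed)
    avg = [1e-6] * num_users           # small epsilon avoids division by zero
    total = [0.0] * num_users
    for _ in range(slots):
        rates = [rng.uniform(0.5, 1.5) for _ in range(num_users)]  # fading draw
        u = pf_schedule(rates, avg)
        total[u] += rates[u]
        for v in range(num_users):     # EWMA throughput update, window tc
            served = rates[v] if v == u else 0.0
            avg[v] = (1 - 1 / tc) * avg[v] + (1 / tc) * served
    return total
```

With equal average throughputs PF degenerates to max-rate selection; a user with a low running average is boosted, which is the fairness mechanism the abstract's trade-off refers to.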
Details
ISBN:
(Print) 9789811055089; 9789811055072
Modern computer systems are organized around multi-core processors, which can make process scheduling a more complex task. In a multi-core system, two or more cores are embedded in a single chip; this architecture provides higher throughput than a single-processor architecture. Most previous work has focused on creating new scheduling algorithms for multi-core systems, but little consideration has been given to merging user priority with system priority. In this paper, the researchers propose a Smart Job First Dynamic Round Robin algorithm with a smart Time Quantum (SJFDRR) for multi-core systems, in which a smart priority factor (SPF) is calculated for each process. The process with the lowest SPF value is scheduled first, and the time quantum is calculated dynamically for each processor. With this algorithm, the average waiting time, average turnaround time, and number of context switches decrease significantly, which increases the performance of the system.
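The two ingredients named above — an SPF per process and a per-processor dynamic quantum — can be sketched as follows. The abstract does not give the SPF formula or the quantum rule, so the weighted blend and the mean-burst quantum below are illustrative assumptions, not the paper's definitions.

```python
def smart_priority_factor(user_priority, burst_time, w_user=0.5, w_burst=0.5):
    """Hypothetical SPF: weighted blend of user-assigned priority and burst
    time. Lower is better (scheduled first), matching the abstract."""
    return w_user * user_priority + w_burst * burst_time

def dynamic_quantum(remaining_bursts):
    """One common choice for a dynamic time quantum: the mean remaining burst."""
    return max(1, round(sum(remaining_bursts) / len(remaining_bursts)))

def sjfdrr_order(processes):
    """Order processes by ascending SPF (lowest SPF is dispatched first)."""
    return sorted(processes,
                  key=lambda p: smart_priority_factor(p["priority"], p["burst"]))
```

A round-robin loop would then dispatch in `sjfdrr_order` and preempt each process after `dynamic_quantum` of the currently remaining bursts, recomputing both each round.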
Details
ISBN:
(Print) 9781728128658
The scheduling algorithm is an important component of cloud computing, as it determines the effectiveness of the system. Focusing on the business requirements of the mimic common operating environment (MCOE), and in particular the incomplete treatment of load balancing and heterogeneity in traditional scheduling algorithms, this paper presents an entropy weight clustering scheduling (EWCS) algorithm, which combines the dynamic heterogeneous redundancy (DHR) architecture from mimic defense theory with K-Means clustering from machine learning to perform node selection on the cloud platform. The algorithm consists of four steps: risk-value screening, load balancing, entropy weight calculation, and clustering optimization. Simulation results show that the algorithm is reasonable and can serve the MCOE well. It is also an effective attempt to apply machine learning methods to the scheduling problem.
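Of the four steps, the entropy weight calculation is a standard, self-contained technique: node metrics (columns) with more variation across nodes (rows) receive higher weight. A minimal sketch of that one step, assuming a non-negative node-by-metric matrix:

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: given an n x m non-negative matrix of n nodes
    scored on m metrics, return one weight per metric. A metric that is
    identical across all nodes carries no information and gets weight 0."""
    n, m = len(matrix), len(matrix[0])
    k = 1.0 / math.log(n)
    entropies = []
    for j in range(m):
        col = [row[j] for row in matrix]
        s = sum(col)
        # Shannon entropy of the column, normalized to [0, 1] by k
        entropies.append(-k * sum((v / s) * math.log(v / s) for v in col if v > 0))
    total = sum(1 - e for e in entropies)
    return [(1 - e) / total for e in entropies]
```

The resulting weights would then score nodes for the K-Means step; how EWCS combines them with the DHR risk screening is specific to the paper and not reproduced here.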
Details
ISBN:
(Print) 9798350369205; 9798350369199
Hyperledger Fabric (Fabric for short) is a consortium blockchain platform that adopts the smart contract paradigm and provides complete operational functions. Although it has become the highest-throughput open-source blockchain system, its performance cannot meet the needs of industrial-grade application scenarios. To further expand the application scenarios of blockchain, this paper proposes a Transaction Batch Processing Scheduling (TBPS) algorithm for multi-channel Fabric networks based on Lyapunov optimization theory. The algorithm maximizes the consensus efficiency of the system while keeping transaction accumulation to a minimum, and provides stability conditions and optimal performance for the system under transaction batch processing. Finally, we built a blockchain network running Fabric's latest stable version, v2.0, on a cloud platform, establishing the order of magnitude of the algorithmic parameters by testing transaction processing rates. We then simulated the distribution of performance indicators such as transaction delay, system transaction accumulation, and average transaction processing rate under different impact factors, verifying the effectiveness of the proposed TBPS algorithm.
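Lyapunov-based schedulers of this kind typically reduce to a per-step "drift-plus-penalty" decision: choose the batch size that balances draining the transaction queue against the cost of batching. The objective below is the generic textbook form, not the TBPS objective, and `cost` is a hypothetical batching-cost function.

```python
def choose_batch(queue_len, b_max, v, cost):
    """Drift-plus-penalty sketch: pick the batch size b in [1, b_max] that
    maximizes Q*b - V*cost(b), i.e. backlog reduction (drift) minus a
    V-weighted penalty. Larger V favors low cost; larger Q favors draining."""
    return max(range(1, b_max + 1), key=lambda b: queue_len * b - v * cost(b))
```

The stability conditions the abstract mentions correspond, in this generic framework, to the drift term keeping the queue `Q` bounded for any arrival rate inside the capacity region.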
Details
ISBN:
(Print) 9781728136608
With the rise and development of cloud computing, more and more individuals and companies are hosting applications on cloud servers. Thanks to the strong support of virtualization technology, cloud computing has developed quickly. Virtualization isolates the user from the underlying physical hardware and generates virtual machines of various configurations for the user. As the virtual machine is the basic unit offered to cloud users, virtual machine scheduling algorithms have become a research hotspot in academia and industry. Most existing virtual machine scheduling algorithms aim at simplifying or satisfying user requirements; however, with the further expansion of cloud cluster sizes and the increase in multi-tenant parallel tasks, the existing algorithms cannot meet current requirements. In this paper, based on the characteristics of virtual machines and physical machines, a new virtual machine scheduling algorithm is proposed: the Maximum Filling (MF) algorithm, which improves the scheduling efficiency of virtual machines and reduces the number of physical machines used in cloud clusters. Finally, the algorithm is evaluated against baselines and clearly outperforms the comparison algorithms.
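The goal stated above — pack VMs so as to minimize the number of physical machines used — is a bin-packing problem. The abstract does not specify the MF algorithm's exact placement rule, so the sketch below uses a best-fit-decreasing heuristic in the same "fill each machine as much as possible" spirit, on a single-dimensional capacity for simplicity.

```python
def maximum_filling(vm_sizes, pm_capacity):
    """Illustrative packing sketch (best-fit decreasing, not the paper's MF):
    place each VM on the active physical machine it fills most tightly;
    open a new machine only when none fits. Returns the PM count."""
    free = []  # remaining capacity per active physical machine
    for vm in sorted(vm_sizes, reverse=True):
        candidates = [i for i, f in enumerate(free) if f >= vm]
        if candidates:
            i = min(candidates, key=lambda i: free[i] - vm)  # tightest fit
            free[i] -= vm
        else:
            free.append(pm_capacity - vm)
    return len(free)
```

Real VM placement is multi-dimensional (CPU, memory, network), which is where heuristics like this differ most in practice.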
Details
ISBN:
(Print) 3540482741
Computational Grids provide computing power by sharing resources across administrative domains. This sharing, coupled with the need to execute distrusted tasks from arbitrary users, introduces security hazards. This study examines the integration of the notion of "trust" into resource management based on the Grid economic model in order to enhance Grid security. Our contributions are two-fold. First, we propose a trust function based on dynamic trust change and construct a behavior-based Grid trust model. Second, we present a trust-aware time-optimization scheduling algorithm under budget constraints and a trust-aware cost-optimization scheduling algorithm under deadline constraints. Theoretical analysis and simulation experiments show that these algorithms outperform their counterparts that do not consider trust.
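A common shape for a behavior-based dynamic trust function is asymmetric: trust is earned slowly through successful task executions and lost quickly on failures or misbehavior. The update rule below is a hypothetical illustration of that shape; the paper's actual trust function is not reproduced here, and `alpha`/`beta` are assumed parameters.

```python
def update_trust(trust, success, alpha=0.1, beta=0.4):
    """Hypothetical asymmetric trust update in [0, 1]: move a fraction alpha
    toward 1.0 on success; multiply down sharply by (1 - beta) on failure."""
    if success:
        return trust + alpha * (1.0 - trust)
    return trust * (1.0 - beta)
```

A trust-aware scheduler would then restrict candidate resources to those whose trust exceeds a task's requirement before running its time- or cost-optimization step.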
Details
ISBN:
(Print) 9783037850732
An optimal scheduling algorithm based on non-periodic information is modeled and analyzed. Compared with typical scheduling algorithms, it improves the theoretical delay of non-periodic tasks. Finally, simulation shows that the scheduling algorithm is effective in reducing the delay of non-periodic communication tasks.
Details
ISBN:
(Print) 9780878492268
This paper studies the real-time performance of the three-tier client/server architecture used in a remote monitoring system. A scheduling algorithm is adopted in the middleware of this architecture based on the following rule: the priority of a task is set according to the percentage of useful data already present in the data buffer, i.e. most locally available data first (LAD), rather than according to the earliest deadline. A comparative simulation of the improved algorithm and EDF (earliest deadline first) was carried out in a program developed in VC++ at different average task lengths and update workloads, under tentative parameters such as the size of data in the various data buffers, the time needed to fetch an object from these buffers, and the average task inter-arrival time. The results show that LAD's percentage of tasks completed before their deadlines is higher than EDF's, indicating that LAD is better suited to improving the real-time performance of the three-tier client/server architecture.
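The LAD rule above reduces to a one-line selection: among ready tasks, dispatch the one whose required data is already most available in the middleware buffer. A minimal sketch, assuming each task records how many of its data objects it needs and how many are buffered (field names are illustrative):

```python
def lad_pick(tasks):
    """LAD dispatch: schedule the task with the highest fraction of its
    required data already available locally, instead of the earliest deadline."""
    return max(tasks, key=lambda t: t["buffered"] / t["needed"])
```

The intuition matches the reported result: a task whose data is mostly buffered finishes with few remote fetches, so prioritizing it completes more tasks before their deadlines than EDF under fetch-dominated workloads.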
Details
ISBN:
(Print) 9781665458221
Edge-cloud jobs are rapidly prevailing in many application domains, posing the challenge of using both resource-constrained edge devices and elastic cloud resources. Efficient resource allocation for such jobs via scheduling algorithms is essential to guarantee their performance, e.g. latency. Deep reinforcement learning (DRL) is increasingly adopted to make scheduling decisions, but it faces the conundrum of achieving high rewards at a low training overhead. It is unknown whether such DRL can be applied to tune, in a timely manner, the scheduling algorithms adopted in response to fast-changing workloads and resources. In this paper, we propose EdgeTuner, which effectively leverages DRL to select scheduling algorithms online for edge-cloud jobs. The enabling features of EdgeTuner are a sophisticated DRL model that captures the complex dynamics of edge-cloud jobs and tasks, and an effective simulator that emulates the response times of short-running jobs under dynamically changing scheduling algorithms. EdgeTuner trains DRL agents offline by directly interacting with the simulator. We implement EdgeTuner in the Kubernetes scheduler and extensively evaluate it on a Kubernetes cluster testbed driven by production traces. Our results show that EdgeTuner outperforms prevailing scheduling algorithms, achieving significantly lower job response times while accelerating DRL training speed by more than 180x.
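The select-observe-update loop at the heart of "choosing a scheduling algorithm online" can be illustrated with a much simpler learner than EdgeTuner's DRL agent. The epsilon-greedy bandit below is only a stand-in for that loop; the algorithm names and the reward (negative job response time) are illustrative assumptions.

```python
import random

class SchedulerSelector:
    """Toy epsilon-greedy selector over candidate scheduling algorithms.
    EdgeTuner uses a full DRL agent trained against a simulator; this bandit
    only illustrates the select -> observe reward -> update cycle."""

    def __init__(self, algorithms, epsilon=0.1, seed=0):
        self.algorithms = list(algorithms)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {a: 0 for a in self.algorithms}
        self.values = {a: 0.0 for a in self.algorithms}   # running mean reward

    def select(self):
        if self.rng.random() < self.epsilon:              # explore
            return self.rng.choice(self.algorithms)
        return max(self.algorithms, key=lambda a: self.values[a])  # exploit

    def update(self, algo, reward):
        """Reward could be, e.g., the negative observed job response time."""
        self.counts[algo] += 1
        self.values[algo] += (reward - self.values[algo]) / self.counts[algo]
```

A stateless bandit ignores the workload dynamics that make the DRL model (and the offline simulator for cheap training) necessary in the paper's setting.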
Details
ISBN:
(Print) 0780382927
High-speed ATM switches mostly implement the memory architecture of advanced input queuing for unicast traffic. When handling multicast traffic, this is infeasible due to the high cost it incurs. A set of multicast scheduling algorithms implementing input queuing has been proposed in the past. In this paper we present a new multicast scheduling algorithm, FINE, that maintains multiple queues for incoming packets in an M x N switch: a total of 2^N - 1 queues, one per non-empty subset of the N output ports. FINE provides improved characteristics over the alternatives in the throughput it achieves and the fairness it ensures. In addition, it adapts flexibly to the system's properties and its applications' features.
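The 2^N - 1 queue count follows from dedicating one queue to each possible multicast fanout, i.e. each non-empty subset of the N output ports. A natural encoding, sketched below as an assumption about the bookkeeping rather than FINE's actual implementation, is a bitmask over output ports:

```python
def total_queues(n_outputs):
    """One queue per non-empty subset of the N output ports: 2^N - 1."""
    return 2 ** n_outputs - 1

def queue_index(dest_ports, n_outputs):
    """Map a multicast cell's destination port set to its fanout queue.
    Bitmask values 1 .. 2^N - 1 map to queue indices 0 .. 2^N - 2."""
    assert all(0 <= p < n_outputs for p in dest_ports) and dest_ports
    mask = 0
    for p in dest_ports:
        mask |= 1 << p
    return mask - 1
```

This exponential growth in N is exactly why such per-fanout queuing is practical only for small switch sizes, which frames the cost trade-off the abstract raises.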
No comments yet.