The 3rd Generation Partnership Project (3GPP) introduced Long Term Evolution (LTE) in Release 8 and significantly enhanced it in later releases (referred to as LTE-Advanced). LTE and LTE-Advanced (LTE-A) aim to achieve higher spectral efficiency, higher data rates, robustness and flexibility. Intelligent channel-aware radio resource scheduling is one of the key features of LTE-A. A number of schedulers proposed in the literature rely on the feedback sent from the User Equipment (UE) without considering the presence of feedback delay. In this paper, we analyse the effect of uplink delay on the cell performance of existing schedulers, in terms of throughput and user fairness. We then propose an adaptive hybrid scheduler to overcome the effect of uplink delay on scheduler performance. Simulation results show that the proposed scheduling algorithm outperforms existing schedulers in the presence of uplink feedback delay.
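To make the role of feedback delay concrete, here is a minimal sketch (not the paper's scheduler) of a proportional-fair decision driven by stale CQI: the eNB ranks users on rates reported several TTIs earlier, while the channel has already changed. All numbers, names and the toy fading model are illustrative assumptions.

```python
# Minimal sketch: PF scheduling with delayed feedback (illustrative only).
import random

random.seed(0)
NUM_USERS, NUM_TTIS, FEEDBACK_DELAY = 4, 1000, 6    # delay in TTIs (assumed)

# Toy per-TTI achievable rates standing in for a fading channel.
rates = [[random.uniform(0.1, 1.0) for _ in range(NUM_USERS)]
         for _ in range(NUM_TTIS)]
avg_rate = [1e-3] * NUM_USERS        # exponentially averaged served rate
served = [0.0] * NUM_USERS

for t in range(NUM_TTIS):
    t_rep = max(0, t - FEEDBACK_DELAY)                  # CQI the eNB actually has
    metric = [rates[t_rep][u] / avg_rate[u] for u in range(NUM_USERS)]
    u_star = metric.index(max(metric))                  # PF decision on stale CQI
    served[u_star] += rates[t][u_star]                  # but the channel has moved on
    for u in range(NUM_USERS):
        inst = rates[t][u] if u == u_star else 0.0
        avg_rate[u] = 0.99 * avg_rate[u] + 0.01 * inst

print("cell throughput per TTI:", sum(served) / NUM_TTIS)
```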
Advances in cloud computing have attracted scientists to deploy their HPC applications to the cloud to benefit from the platform's flexibility, such as scalability and on-demand services. Nevertheless, HPC programs can face serious challenges in the cloud that could undermine the gained benefits. This paper first compares the performance of several HPC benchmarks on a commodity cluster and the Amazon public cloud to illustrate these challenges. To mitigate the problem, we have introduced a novel approach called ASETS, "A SDN Empowered Task Scheduling System", to schedule data-intensive High Performance Computing (HPC) tasks in a cloud environment. In this paper, we focus on the implementation and performance analysis of ASETS and its first algorithm, SETSA (SDN Empowered Task Scheduling Algorithm). ASETS uses the bandwidth-awareness capability of SDN to better utilize available bandwidth when assigning tasks to virtual machines. This approach aims to improve the performance of HPC programs and provide an efficient HPC-as-a-Service (HPCaaS) platform. The paper briefly describes the architecture and the algorithm, and then focuses on the details of the implementation and performance analysis of ASETS and SETSA. Preliminary results indicate that ASETS delivers substantial performance improvement for HPCaaS as the degree of multi-tenancy increases. This result is significant since it indicates that both users and cloud service providers can benefit from ASETS.
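As a rough illustration of the bandwidth-awareness idea (not the actual SETSA implementation), the sketch below assigns each queued data-intensive task to the idle virtual machine whose path currently reports the most available bandwidth; all names, values and the SDN measurement interface are assumptions.

```python
# Illustrative bandwidth-aware task placement; hypothetical names throughout.

def pick_vm(idle_vms, available_bw):
    """Return the idle VM with the highest available bandwidth (bits/s)."""
    return max(idle_vms, key=lambda vm: available_bw[vm])

def schedule(tasks, idle_vms, available_bw):
    """Greedily map queued tasks to idle VMs in bandwidth order."""
    placement = {}
    for task in tasks:
        if not idle_vms:
            break                      # remaining tasks wait for a VM to free up
        vm = pick_vm(idle_vms, available_bw)
        placement[task] = vm
        idle_vms.remove(vm)
    return placement

if __name__ == "__main__":
    bw = {"vm1": 2e9, "vm2": 8e9, "vm3": 5e9}   # hypothetical SDN measurements
    print(schedule(["t1", "t2"], ["vm1", "vm2", "vm3"], bw))
    # {'t1': 'vm2', 't2': 'vm3'}
```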
Long Term Evolution (LTE) is a promising wireless technology that provides mobile users with high data rates and low latency. LTE has been developed by the 3rd Generation Partnership Project (3GPP) to reduce end-to-end system delay and support higher Quality of Service (QoS). However, 3GPP has not defined a firm provision for the scheduling process, so scheduling remains an open issue for researchers; instead, it defined nine service classes with their corresponding QoS requirements. In this paper, a novel class-based scheduling algorithm is proposed using a cooperative game theory technique. At the first level, the available resources are distributed proportionally among classes, which results in a higher fairness level among classes. At the second level, users with the tightest delay requirements are prioritized. The proposed algorithm is compared with the Proportional Fairness (PF) and Exponential (EXP) rule algorithms in terms of throughput, delay and fairness index, and it outperforms both schemes on all compared metrics.
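The two-level idea can be sketched as follows (a simplification, not the paper's cooperative-game formulation): resource blocks are first split among classes in proportion, then users within a class are ordered by how close they are to their delay budget. Class names and numbers are assumptions.

```python
# Illustrative two-level class-based allocation sketch.

def split_prbs(total_prbs, class_demand):
    """Proportional split of PRBs among classes (rounded down)."""
    total = sum(class_demand.values())
    return {c: int(total_prbs * d / total) for c, d in class_demand.items()}

def order_users(users):
    """Within a class, serve users with the least remaining delay budget first.
    Each user is (name, head_of_line_delay_ms, delay_budget_ms)."""
    return sorted(users, key=lambda u: u[2] - u[1])

if __name__ == "__main__":
    shares = split_prbs(50, {"conversational": 30, "streaming": 50, "best_effort": 20})
    print(shares)  # {'conversational': 15, 'streaming': 25, 'best_effort': 10}
    print(order_users([("u1", 40, 100), ("u2", 90, 100), ("u3", 10, 300)]))
    # [('u2', 90, 100), ('u1', 40, 100), ('u3', 10, 300)]
```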
ISBN (print): 9781467364300
This paper studies the sum-capacity of the Multiple Input Single Output (MISO) Gaussian broadcast channel where K single-antenna users are served by a base station with N antennas, with N < K. The generalized Degrees-of-Freedom (gDoF) for this system is derived as the solution of a Maximum Weighted Bipartite Matching (MWBM) problem, where, roughly speaking, each of the N transmit antennas is assigned to a different user. The MWBM problem inspires a user selection algorithm in which a subset of N out of the K users is served. The proposed algorithm runs in polynomial time (rather than involving an exhaustive search over all possible subsets of N out of K users) and extends the classical DoF analysis to more realistic wireless channel configurations where users can experience very different channel gains from the base station. Extensive numerical simulations, run in practically relevant Rayleigh fading environments for different numbers of users and antennas, show that the throughput achieved by serving the set of N users selected by the MWBM-based algorithm is at most N log(K) bits away from an outer bound to the sum-capacity, where in principle all K users are served. Comparisons with another widely used user scheduling algorithm are also provided.
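A sketch of the MWBM-based selection step, assuming the bipartite weights are per-antenna achievable rates (the paper's exact weights may differ): the assignment is solved with a standard Hungarian-method routine from SciPy.

```python
# Sketch: assign each of the N transmit antennas to a distinct user so that
# the total weight is maximized; the weight choice is an assumption.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
N, K = 4, 10                                   # antennas, users (N < K)
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)

weights = np.log2(1.0 + np.abs(H) ** 2)        # N x K bipartite weight matrix
rows, cols = linear_sum_assignment(weights, maximize=True)

print("antenna -> user:", dict(zip(rows.tolist(), cols.tolist())))
print("selected users:", sorted(cols.tolist()))   # the N users served this slot
```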
LTE-based satellite mobile communication systems combine LTE standards with satellite communication technologies and have particular advantages in providing mobile communication services. However, the long transmission delay in satellite communication poses a severe challenge to the radio resource scheduling of the system. Focusing on the long transmission delay of the satellite-ground link, we propose BSP, a Buffer Status Prediction based scheduling approach. BSP consists of a Buffer Status Prediction and Reporting mechanism (BSPR) and a Buffer Status based Proportional Fairness scheduling algorithm (BSPF). Through the BSPR mechanism, the eNB obtains more accurate buffer status information from UEs for making scheduling decisions; the BSPF algorithm builds on the PF scheduling algorithm and takes the QoS class and data flow of services into account by adding the UEs' buffer statuses as parameters of the scheduling decision. Simulation results show that the BSP scheduling approach effectively improves the resource utilization of the system and better satisfies the throughput requirements of different services.
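A minimal sketch of a buffer-status-aware PF metric (not the exact BSPF algorithm): the classic PF ratio is scaled by how full each UE's reported or predicted buffer is, so UEs with empty buffers are never scheduled. Weights and numbers are assumptions.

```python
# Illustrative buffer-status-weighted PF metric; names and values assumed.

def bspf_metric(inst_rate, avg_rate, buffer_bytes, max_buffer_bytes):
    pf = inst_rate / max(avg_rate, 1e-9)          # standard PF term
    buffer_factor = buffer_bytes / max_buffer_bytes
    return pf * buffer_factor                     # empty buffer -> never scheduled

ues = {
    "ue1": dict(inst_rate=5.0, avg_rate=2.0, buffer_bytes=1500, max_buffer_bytes=6000),
    "ue2": dict(inst_rate=3.0, avg_rate=1.0, buffer_bytes=6000, max_buffer_bytes=6000),
    "ue3": dict(inst_rate=8.0, avg_rate=4.0, buffer_bytes=0,    max_buffer_bytes=6000),
}
winner = max(ues, key=lambda u: bspf_metric(**ues[u]))
print(winner)  # ue2: good PF ratio and a full buffer
```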
Network functions are widely deployed in modern networks, providing various network services ranging from intrusion detection to HTTP caching. Multiple virtual network function instances can be consolidated into one physical middlebox. Depending on the type of service, packet processing for different flows consumes different hardware resources in the middlebox. Previous solutions for multi-resource packet scheduling suffer from high computational complexity and memory cost for packet buffering and scheduling, especially when the number of flows is large. In this paper, we design a novel low-complexity and space-efficient packet scheduling algorithm called Myopia, which supports multi-resource environments such as network function virtualization. Myopia is built upon the fact that most Internet traffic is contributed by a small fraction of elephant flows. Myopia schedules elephant flows with precise control and treats mice flows using FIFO, to achieve simplicity of packet buffering and scheduling. We demonstrate, via theoretical analysis, prototype implementation, and simulations, that Myopia achieves multi-resource fairness at low cost with short packet delay.
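The elephant/mice split can be sketched as follows (not Myopia's actual data structures): mice packets share one FIFO, while elephant flows are picked by a dominant-resource-fairness-style rule. Thresholds, resources and per-packet costs are assumptions.

```python
# Illustrative elephant/mice scheduling sketch for a multi-resource middlebox.
from collections import deque

ELEPHANT_BYTES = 1_000_000            # flows above this are treated as elephants
CAPACITY = {"cpu": 1.0, "bandwidth": 1.0}

fifo = deque()                        # packets of mice flows
elephant_share = {}                   # flow -> {resource: consumed fraction}

def enqueue(flow, pkt, flow_bytes):
    if flow_bytes < ELEPHANT_BYTES:
        fifo.append(pkt)
    else:
        elephant_share.setdefault(flow, {r: 0.0 for r in CAPACITY})

def dominant_share(flow):
    return max(elephant_share[flow][r] / CAPACITY[r] for r in CAPACITY)

def next_elephant():
    """Serve the elephant flow with the smallest dominant share (DRF-style)."""
    return min(elephant_share, key=dominant_share) if elephant_share else None

def charge(flow, cost):
    for r, c in cost.items():
        elephant_share[flow][r] += c

if __name__ == "__main__":
    enqueue("f1", "p1", 500)                     # mouse -> FIFO
    enqueue("f2", "p2", 5_000_000)               # elephant
    enqueue("f3", "p3", 8_000_000)               # elephant
    charge("f2", {"cpu": 0.2, "bandwidth": 0.05})
    print(fifo.popleft(), next_elephant())       # p1 f3
```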
In this paper, we propose a new disk scheduling algorithm for improving the throughput of modern storage devices. Since the invention of disks with movable heads, many researchers have tried to improve I/O performance through intelligent disk scheduling algorithms. Memory capacity and processor speed are increasing several times faster than disk speed, and this disparity means the disk cannot supply data to the processor as fast as required. As soon as the processor receives data from the disk, it processes it immediately and then waits for the next data from the disk; hence disk I/O performance becomes a bottleneck. Advanced methods and scheduling algorithms are required to increase disk I/O performance and use the disk efficiently. Disk performance can be improved by reducing the total number of head movements needed to serve requests. In this paper, we propose and implement an Optimized Two-Head Disk Scheduling Algorithm (OTHDSA) that reduces the total number of head movements. Experimental analysis shows that the proposed algorithm requires fewer head movements than many existing disk scheduling algorithms and hence maximizes throughput.
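For illustration, a greedy two-head policy (not OTHDSA itself, whose optimization may differ) serves each request with whichever head is nearer and accumulates the total head movement; cylinder numbers and start positions are assumptions.

```python
# Illustrative greedy two-head disk scheduling sketch.

def serve(requests, head_a=0, head_b=199):
    """Greedy nearest-head service; returns total head movement."""
    total = 0
    for cyl in requests:
        if abs(cyl - head_a) <= abs(cyl - head_b):
            total += abs(cyl - head_a)
            head_a = cyl
        else:
            total += abs(cyl - head_b)
            head_b = cyl
    return total

if __name__ == "__main__":
    reqs = [98, 183, 37, 122, 14, 124, 65, 67]
    print("single head (FCFS from 53):",
          sum(abs(b - a) for a, b in zip([53] + reqs, reqs)))
    print("two heads (greedy):", serve(reqs, head_a=53, head_b=199))
```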
Traditional packet scheduling is mainly designed to increase spectral efficiency (SE) rather than energy efficiency (EE). A self-organizing network (SON) provides self-configuring, self-optimizing and self-healing capabilities and can minimize the energy consumption in the network. We consider the self-optimizing and self-healing properties of SON and investigate a novel energy-efficient scheduling algorithm for LTE-A. We first compare state-of-the-art schedulers in terms of energy efficiency and then explain the tradeoff between EE and SE. System-level simulation (SLS) analysis shows that the investigated SON approach achieves a notable energy gain over traditional scheduling algorithms.
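The EE-SE tradeoff can be illustrated with a standard textbook model (not the paper's SON algorithm): under a Shannon-rate assumption with static circuit power, pushing SE higher needs exponentially more transmit power, so EE first rises and then falls. All numbers are assumptions.

```python
# Toy EE-vs-SE calculation; all parameters are illustrative assumptions.
W = 10e6               # bandwidth, Hz
P_C = 10.0             # static circuit power, W
LINK_QUALITY = 1e7     # channel gain / noise PSD, Hz per W (illustrative)

for se in [1, 2, 4, 6, 8]:                       # spectral efficiency, bit/s/Hz
    p_tx = (2 ** se - 1) * W / LINK_QUALITY      # Shannon power to reach this SE
    rate = se * W                                # bit/s
    ee = rate / (P_C + p_tx)                     # bit/Joule
    print(f"SE={se} b/s/Hz  P_tx={p_tx:7.1f} W  EE={ee / 1e6:.2f} Mbit/J")
```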
Asymmetric multicore processors (AMPs) have been proposed as an energy-efficient alternative to symmetric multicore processors (SMPs). However, AMPs derive their performance from core specialization, which requires co-running applications to be scheduled to their most appropriate core types. Despite extensive research on AMP scheduling, developing an effective scheduling algorithm remains challenging. Contention for shared resources is a key performance-limiting factor, which often renders existing contention-free scheduling algorithms ineffective. We introduce a contention-aware scheduling algorithm for ARM's big.LITTLE, a commercial AMP platform. Our algorithm comprises an offline stage and an online stage. The offline stage builds a performance interference model for an application by training it with a set of co-running applications. Guided by this model, the online stage schedules a workload by assigning its applications to their most appropriate core types in order to minimize the performance degradation caused by contention for shared resources. Our model can accurately predict the performance degradation of an application when co-running with other applications, with an average prediction error of 9.60%. Compared with the default scheduler provided for ARM's big.LITTLE and a speedup-factor-driven scheduler, our contention-aware scheduler improves overall system performance by up to 28.32% and 28.51%, respectively.
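A sketch of the online assignment step under an assumed interference model (not the paper's model or scheduler): given each application's predicted slowdown on each core cluster with the current co-runners, applications are mapped to clusters so that total predicted degradation is minimized. Application names and slowdowns are hypothetical.

```python
# Illustrative contention-aware core-type assignment sketch.
from itertools import combinations

APPS = ["mcf", "bzip2", "gcc", "lbm"]
BIG_SLOTS = 2                                   # e.g. 2 big + 2 LITTLE cores

# predicted_slowdown[app] = (on big cluster, on LITTLE cluster), hypothetical
predicted_slowdown = {
    "mcf":   (1.10, 1.90),
    "bzip2": (1.05, 1.30),
    "gcc":   (1.08, 1.55),
    "lbm":   (1.20, 2.10),
}

def total_degradation(big_set):
    return sum(predicted_slowdown[a][0] if a in big_set else predicted_slowdown[a][1]
               for a in APPS)

best = min(combinations(APPS, BIG_SLOTS), key=total_degradation)
print("run on big cores:", best)
print("run on LITTLE cores:", [a for a in APPS if a not in best])
```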
Job scheduling (JS) is one of the most important issues in a cloud system. The objective of job schedulers in cloud computing is to meet users' requirements and optimize the utilization of cloud resources. To achieve better QoS with high resource utilization in a cloud environment, an improved backfill algorithm (IBA) using the balanced spiral (BS) method can be used. Results show that IBA reduces resource idleness to a great extent. However, IBA does not support job priority and QoS handling; an algorithm is therefore needed that maximizes resource utilization while also taking priority into account.
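For reference, a minimal EASY-style backfill sketch (a generic backfill, not the paper's IBA with the balanced-spiral method): a waiting job may start early if it fits in the currently idle nodes and finishes before the head-of-queue job's expected start. Job sizes and times are assumptions.

```python
# Illustrative EASY-style backfilling sketch; all values assumed.
free_nodes = 4
now = 0

# Waiting queue: (name, nodes_needed, estimated_runtime); job A is the head.
queue = [("A", 8, 100), ("B", 2, 30), ("C", 4, 20)]

# Time at which enough nodes free up for the head job (in a real system this
# comes from the running jobs' runtime estimates; fixed here for illustration).
head_start_time = 50

backfilled = []
for name, nodes, runtime in queue[1:]:
    fits_now = nodes <= free_nodes
    done_before_head = now + runtime <= head_start_time
    if fits_now and done_before_head:      # does not delay the head job "A"
        backfilled.append(name)
        free_nodes -= nodes

print("backfilled now:", backfilled)       # ['B'] (C no longer fits once B runs)
```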