ISBN (Print): 9781665495752
This paper introduces a time-triggered mechanism for avionic real-time messages in Fibre Channel systems, which enables real-time messages in the network to be sent, forwarded, and received at predetermined times, avoiding conflicts between messages. The proposed algorithm divides the messages to be scheduled into several groups by period, with a priority assigned to each group; the group with the higher priority is always scheduled first. Compared with other methods, the proposed grouped sifting algorithm for schedule tables achieved higher resource utilization, with a maximum link timeslot occupancy rate of up to 98.9% and an average link timeslot occupancy rate of up to 93.9% in some cases. Lower time delays were also verified by simulation, especially in networks with numerous messages.
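The period-grouped, priority-ordered assignment described above can be sketched as follows. This is our own illustrative simplification, not the paper's code: the function names, the greedy earliest-free-offset rule, and the convention that a larger priority value is scheduled first are all assumptions.

```python
from collections import defaultdict

def schedule(messages, num_slots):
    """messages: list of (msg_id, period, priority).
    Groups messages by (priority, period), schedules higher-priority
    groups first, and greedily assigns the earliest offset whose
    periodic slots are all free (hypothetical sketch)."""
    groups = defaultdict(list)
    for msg_id, period, priority in messages:
        groups[(priority, period)].append(msg_id)
    occupied = set()
    table = {}
    # Assumption: larger priority value means scheduled earlier.
    for (priority, period), grp in sorted(groups.items(), reverse=True):
        for msg_id in grp:
            # A message with period p repeats every p slots in the table.
            for offset in range(period):
                slots = set(range(offset, num_slots, period))
                if not (slots & occupied):
                    occupied |= slots
                    table[msg_id] = sorted(slots)
                    break
    return table
```

For example, with a table of 8 slots, a period-2 message of priority 1 claims slots 0, 2, 4, 6 first, and a period-4 message of priority 0 is then shifted to offset 1.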
ISBN (Print): 9781665400602
Microservice architecture competes with the traditional monolithic design by offering agility, flexibility, reusability, resilience, and ease of use. Nevertheless, due to the increase in internal communication complexity, care must be taken to scale resource usage in harmony with placement scheduling and request balancing to prevent cascading performance degradation across microservices. We prototype RunWild, a resource management system that controls all mechanisms in the microservice-deployment process, covering scaling, scheduling, and balancing, to optimize for desirable performance on the dynamic cloud, driven by an automatic, unified, and consistent deployment plan. In this paper, we also highlight the significance of co-location-aware metrics in predicting resource usage and maintaining the deployment plan. We conducted experiments on an actual cluster on the IBM Cloud platform. RunWild reduced the 90th-percentile response time by 11% and increased average throughput by 10% with more than 30% lower resource usage for widely used autoscaling benchmarks on Kubernetes clusters.
Cloud computing is used as a backbone infrastructure to meet exponentially increasing computational and storage demands. This increase in service demands in smart cities will escalate energy consumption in cloud datacenters, resulting in higher operational costs and greenhouse-gas emissions. In this work, a green cloud computing algorithm named "Cost-based Energy Efficient Scheduling Technique for Dynamic Voltage Frequency Scaling (DVFS) Systems (CEEST)" is proposed. The proposed algorithm reduces energy consumption without compromising quality of service (QoS). Its goal is the optimization and management of servers in the datacenters by fully utilizing the resources of active servers and powering off underutilized ones. CEEST uses the scaling of virtual machines to finish jobs within their deadlines and reduce violations of the service level agreement (SLA). Simulation results show that the proposed algorithm outperforms existing algorithms in terms of execution time, energy consumption, resource utilization, and SLA violations: it saves up to 30% energy compared with existing algorithms, increases resource utilization by up to 30%, and reduces SLA violations by up to 50%.
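The core DVFS idea behind a scheduler like CEEST, running each job at the lowest CPU frequency that still meets its deadline, can be sketched as follows. The function name, the discrete frequency levels, and the fallback to full speed are our own assumptions for illustration, not the paper's implementation.

```python
def min_frequency(job_cycles, deadline, freq_levels):
    """Pick the lowest DVFS frequency (cycles/sec) that completes
    `job_cycles` within `deadline` seconds. Since dynamic power grows
    superlinearly with frequency, the lowest feasible level is the
    most energy-efficient choice (illustrative sketch)."""
    for f in sorted(freq_levels):
        if job_cycles / f <= deadline:
            return f
    # Deadline infeasible at any level: run at full speed to
    # minimize the SLA violation (assumed policy).
    return max(freq_levels)
```

For instance, a job of 10^9 cycles with a 2-second deadline can run at 0.8 GHz rather than 1.6 GHz, roughly halving dynamic power while still meeting the deadline.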
Industry 4.0 is changing the way goods are produced, pursuing increased flexibility of production systems and ever-greater decision-making autonomy of the machines. The aim is to achieve a high level of performance even in market scenarios requiring a high degree of customization, as the Mass Customisation (MC) paradigm imposes. Current hierarchical Manufacturing Planning and Control (MPC) systems have shown limits in reaching this goal, primarily due to their structural lack of flexibility. For this reason, interest in hybrid MPC architectures such as the semi-heterarchical one is increasing. The objective of this work is to contribute to the design of such an architecture by proposing a new scheduling mechanism for the lowest decisional level. This mechanism, unlike those already proposed in the literature, schedules the next jobs to be admitted into the system by choosing them in pairs. The proposed rule has been tested in a simulation environment under three different demand generation rates. The results show an improvement in demand absorption and productivity compared with the rules used so far. Copyright (C) 2021 The Authors.
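The pair-wise admission idea, selecting the next two jobs jointly rather than one at a time, can be sketched as below. The scoring function and its semantics are hypothetical placeholders; the paper's actual selection criterion is not specified in the abstract.

```python
from itertools import combinations

def next_pair(queue, score):
    """Pick the pair of waiting jobs with the best combined score
    (lower is better) to admit together. Evaluating pairs jointly can
    capture interactions (e.g. shared setups) that greedy one-at-a-time
    admission misses (illustrative sketch, not the paper's rule)."""
    return min(combinations(queue, 2), key=lambda pair: score(*pair))
```

For example, with processing times as jobs and combined processing time as the (assumed) score, the two shortest jobs are admitted together.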
One of cloud computing's fundamental problems is load balancing, which is essential for evenly distributing the workload across all nodes. This study proposes a new load balancing algorithm that combines the maximum-minimum and round-robin (MMRR) algorithms, so that tasks with long execution times are allocated using maximum-minimum, while tasks with the shortest execution times are assigned using round-robin. The Cloud Analyst tool was used to evaluate the new load balancing technique, and a comparative analysis with existing algorithms was conducted to optimize cloud services to clients. The findings indicate that MMRR brings significant improvements to cloud services: it performed best among the algorithms tested in terms of overall response time and cost-effectiveness (89%). The study suggests that MMRR should be implemented to enhance user satisfaction in cloud services.
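A hybrid of this kind can be sketched as follows. The length threshold, the tie-breaking, and the data shapes are our own assumptions; only the split (max-min for long tasks, round-robin for short ones) comes from the abstract.

```python
from itertools import cycle

def mmrr(tasks, vms, threshold):
    """tasks: dict task_id -> execution time; vms: list of VM names.
    Long tasks (> threshold) are placed max-min style: the longest
    remaining task goes to the currently least-loaded VM. Remaining
    short tasks are dealt round-robin (illustrative sketch)."""
    load = {vm: 0.0 for vm in vms}
    assign = {}
    long_tasks = sorted((t for t in tasks if tasks[t] > threshold),
                        key=tasks.get, reverse=True)
    for t in long_tasks:
        vm = min(load, key=load.get)   # least-loaded VM so far
        assign[t] = vm
        load[vm] += tasks[t]
    rr = cycle(vms)
    for t in tasks:                    # short tasks, in arrival order
        if t not in assign:
            assign[t] = next(rr)
    return assign
```

The design intent is that expensive placement effort is spent only where it pays off (long tasks), while cheap round-robin keeps short tasks from queuing behind the balancing logic.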
In the last decade, cloud computing has become the most in-demand platform for resolving issues and managing requests across the Internet. It brings terrific opportunities to run cost-effective scientific workflows without customers needing to own any infrastructure, making available virtually unlimited resources that can be acquired, organized, and used as required. Resource scheduling plays a fundamental role in the well-organized allocation of resources to every task in the cloud environment. However, along with these gains, many challenges must be considered when proposing an efficient scheduling algorithm. An efficient scheduling algorithm must improve on goals such as scheduling cost, load balancing, makespan, security awareness, energy consumption, reliability, and service level agreement maintenance. To achieve these goals, many state-of-the-art scheduling techniques have been proposed based on hybrid, heuristic, and meta-heuristic approaches. This work reviews existing algorithms from the perspective of scheduling objectives and strategies. We conduct a comparative analysis of existing strategies along with the outcomes they provide, and we highlight their drawbacks to give insight into further research and open challenges. The findings aid researchers by providing a roadmap for proposing efficient scheduling algorithms.
In this paper, we study a new class of single-iteration scheduling algorithms for input-queued switches based on a new arbitration idea called highest rank first (HRF). We first demonstrate the effectiveness of HRF with a simple algorithm named Basic-HRF. In Basic-HRF, the virtual output queues (VOQs) at an input port are ranked according to their queue sizes. The rank of a VOQ, coded in log(N + 1) bits, where N is the switch size, is sent to the corresponding output as a request. Unlike all existing iterative algorithms, the winner is selected based on the ranks of the requests/grants. We show that rank-based arbitration outperforms the widely adopted queue-based arbitration. To improve performance under heavy load and maximize the match size, Basic-HRF is integrated with an embedded round-robin scheduler. The resulting HRF algorithm is shown to beat almost all existing single-iteration algorithms. However, the complexity of HRF is high due to the use of multi-bit requests. A novel request encoding/decoding mechanism is then designed to reduce the request size to a single bit while keeping the original performance of HRF. A unique feature of the resulting coded HRF (CHRF) algorithm is that the single-bit request indicates an increase or decrease of a VOQ rank, rather than whether a VOQ is empty. We show that CHRF is the most efficient single-bit single-iteration algorithm.
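The two building blocks of Basic-HRF, ranking VOQs by size at an input and granting the highest-ranked request at an output, can be sketched as below. The rank convention (empty VOQ gets rank 0, largest VOQ gets the highest rank) and the port-index tie-break are our own assumptions for illustration.

```python
def voq_ranks(voq_sizes):
    """Rank the VOQs at one input by queue size: the largest VOQ gets
    the highest rank, empty VOQs get rank 0 (assumed convention)."""
    order = sorted(range(len(voq_sizes)), key=lambda j: voq_sizes[j])
    ranks = [0] * len(voq_sizes)
    r = 0
    for j in order:
        if voq_sizes[j] > 0:
            r += 1
            ranks[j] = r
    return ranks

def hrf_grant(requests):
    """requests: list of (input_port, rank) competing for one output.
    Grant the request with the highest rank, i.e. the VOQ that is
    largest relative to its siblings at its input; ties are broken by
    lower port index for determinism (assumption)."""
    return max(requests, key=lambda r: (r[1], -r[0]))[0]
```

Note that the rank reflects a VOQ's standing among its siblings, not its absolute length, which is what lets the later CHRF variant track it with single-bit increase/decrease updates.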
Cloud computing has recently evolved toward the dynamic provision of computing resources to users on a pay-as-you-go basis. This makes it feasible to gain access to large-scale, high-speed resources for executing high-performance computing (HPC) applications without establishing one's own computing infrastructure. However, for the past several years, the efficient utilization of resources on a compute cloud has been a prime interest of the scientific community. One of the major causes of inefficient resource utilization is the imbalanced distribution of workload in distributed computing. This paper examines the scheduling objectives of contemporary state-of-the-art heuristics to investigate how they map HPC jobs to resources. Furthermore, the status of workload distribution in cloud computing is critically assessed. A set of nine scheduling heuristics is validated in the CloudSim simulation environment. The potential of all the heuristics in terms of resource utilization is assessed by combining workload balancing and machine-level load imbalance using different instances of benchmark scientific datasets (i.e., Heterogeneous Computing Scheduling Problems instances and the Google Cloud Jobs dataset). The empirical assessment shows that scheduling independent jobs on machines solely based on execution time, throughput, and average resource utilization ratio is not optimal; machine-level load balancing must also be considered to use the full computing capacity of a cloud system. Among all the heuristics, the Resource-Aware Load Balancing Algorithm (RALBA) outperformed the others and appears to be the optimal choice in terms of the tradeoff between complexity and performance in resource utilization and machine-level load balancing.
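The machine-level load imbalance referred to above can be quantified with a simple relative-spread metric, for example as sketched below. This is one common formulation, not necessarily the exact formula the paper uses.

```python
def load_imbalance(machine_loads):
    """Machine-level load imbalance: (max - min) / mean of per-machine
    completion times. Zero means perfectly balanced; larger values
    mean some machines sit idle while others are overloaded
    (illustrative metric, an assumption on our part)."""
    mean = sum(machine_loads) / len(machine_loads)
    return (max(machine_loads) - min(machine_loads)) / mean
```

A schedule with a short makespan can still score badly here, which is exactly the paper's point: throughput-oriented metrics alone hide machine-level imbalance.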
In emerging business markets, data is becoming the new gold for industry. Several areas, such as marketing, manufacturing, and the analysis of future trends, are becoming data-intensive to achieve growth. Tiny sensors placed in fields, cities, industry, buildings, and the sea help collect data and process it for information retrieval or decision making. Periodic scheduling of radio transceivers helps achieve efficient energy utilization in sensors and often uses time-division multiple access (TDMA) protocols for node scheduling. Our objective is to reduce the amount of time the receiver node spends in the wake-up state. Besides, a slot that is not used for an extended period can be utilized by other nodes; efficient utilization of slots helps achieve low-power duty cycles with low latency. To accomplish this, we propose emulating a classroom learning environment with Q-learning grading for node scheduling, and we present an analytical mapping of WSNs to classroom learning. The initial performance benchmark is compared against IEEE 802.11 (TDMA scheduling). Further, TDMA-driven protocols such as Z-MAC, learning-driven protocols (E-MAC and aloha-Q), and i-Queue are compared to evaluate parameters such as energy consumption, throughput, and latency.
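The Q-learning flavor of slot scheduling (in the style of aloha-Q, which the paper compares against) can be sketched as a toy simulation: each node keeps a Q-value per TDMA slot, transmits in its current best slot, and is rewarded for collision-free frames. Every name, parameter, and the reward scheme below are our own illustrative assumptions, not the paper's protocol.

```python
import random

def train_slots(num_nodes, num_slots, episodes=2000, alpha=0.1, seed=0):
    """Toy aloha-Q-style slot learning. Each node greedily transmits
    in its highest-Q slot; reward is +1 if no other node chose the
    same slot in that frame, -1 on collision. Q-values are updated
    with a simple exponential moving average (illustrative sketch)."""
    rng = random.Random(seed)
    # Small random initial Q-values break ties between nodes.
    q = [[rng.random() * 0.01 for _ in range(num_slots)]
         for _ in range(num_nodes)]
    for _ in range(episodes):
        choice = [max(range(num_slots), key=lambda s: q[n][s])
                  for n in range(num_nodes)]
        for n, s in enumerate(choice):
            reward = 1.0 if choice.count(s) == 1 else -1.0
            q[n][s] += alpha * (reward - q[n][s])
    return [max(range(num_slots), key=lambda s: q[n][s])
            for n in range(num_nodes)]
```

After training, nodes tend to settle on distinct slots when enough slots are available, which is the mechanism behind the low-latency, low-power duty cycles discussed above.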
With the continuous development and maturity of cloud computing technology, the scale and number of cloud data centers (CDCs) are expanding, drawing increasing attention to the problem of high energy consumption in CDCs. Dynamic virtual machine (VM) consolidation is a promising approach for reducing energy consumption. VM migration, as a VM consolidation technology, can effectively improve the utilization of physical machines (PMs) and optimize the scheduling process of CDCs. However, most VM consolidation algorithms in existing research aim at improving PM utilization, and excessive utilization of PMs may increase competition for shared resources among the VMs running on them. As a result, the performance of these VMs deteriorates, and the execution time of cloud tasks increases or tasks are even interrupted. This study systematically analyzes the overall architecture of CDCs. Migration rules are then established for one-dimensional and multidimensional trusted VMs, and a highly applicable heterogeneous CDC resource management algorithm based on trusted VM migration (HTVM2) is proposed. The proposed algorithm solves not only the one-dimensional VM migration problem of homogeneous and heterogeneous CDCs but also that of multidimensional VMs. This improves the success rate of VM migration, reduces the energy consumption of the CDC, and improves load balancing while ensuring VM performance. Finally, the algorithm was compared with three other algorithms and outperformed them all, as demonstrated by the experimental results.
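A multidimensional migration admission check of the kind the abstract describes can be sketched as follows. The utilization threshold and the per-dimension rule are our simplification; the paper's actual trusted-migration rules are more elaborate.

```python
def can_migrate(vm_demand, pm_used, pm_capacity, threshold=0.9):
    """Multidimensional migration rule sketch: a VM may move to a PM
    only if, for every resource dimension (CPU, memory, ...), the PM's
    post-migration utilization stays at or below `threshold`. Capping
    utilization below 100% leaves headroom so co-located VMs do not
    contend for shared resources (illustrative, not HTVM2 itself)."""
    return all((used + demand) / cap <= threshold
               for demand, used, cap in zip(vm_demand, pm_used, pm_capacity))
```

For example, a VM needing 2 CPU units and 1 GB may join a PM at 5/10 CPU and 3/8 GB, but not one whose CPU would reach full capacity.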