Autoscaling mechanisms are widely used to scale computing instances under varying load conditions. Containerized cloud applications managed by orchestrators, such as Kubernetes with the Horizontal Pod Autoscaler (HPA)...
ISBN (digital): 9798331504830
ISBN (print): 9798331515621
New hardware platforms, such as hybrid-architecture CPUs that combine different kinds of cores, are essential building blocks for making computing devices more energy efficient. In this paper, we explore the energy consumption patterns of *** applications on modern hardware and suggest how to improve them based on our findings. *** is a JavaScript runtime that powers *** applications in the cloud. In this research we combine hybrid-architecture CPUs with *** to analyze the energy consumption patterns of the applications. The results show that by wisely using different configurations of CPU cores, we can save energy while meeting service quality requirements. We verify this hypothesis with the approach we propose, and we provide recommendations for software developers and cloud service providers on how to make better use of energy on such systems.
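The abstract's core idea, steering work onto different kinds of cores, can be sketched with CPU affinity. A minimal Python sketch, assuming a Linux system; the core index layout below is a purely hypothetical assumption, not taken from the paper:

```python
import os

def pin_to_cores(cores):
    """Pin the current process to the given logical CPU cores and
    return the resulting affinity set.

    On a hybrid-architecture CPU, confining a latency-tolerant
    workload to efficiency cores can trade peak speed for lower
    energy use, while latency-critical work keeps the performance
    cores.
    """
    os.sched_setaffinity(0, set(cores))  # 0 = the calling process
    return os.sched_getaffinity(0)

# Hypothetical layout: cores 0-7 are performance cores and
# cores 8-15 are efficiency cores (machine-specific in practice):
# pin_to_cores(range(8, 16))
```

In practice the mapping from logical core index to core type is machine-specific and would have to be discovered, e.g. from the operating system's CPU topology information.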
Containerization-based deployment is widely practiced by organizations because of its lightweight and portable nature. The containers are managed by orchestrators, such as Kubernetes, to ensure reliability and scalability, among other properties. Autoscaling is an essential component for cloud applications, and thus Kubernetes's Horizontal Pod Autoscaler (HPA) makes scaling decisions using default metrics, such as CPU and memory utilization, or other application-oriented metrics like event loop lag. We consider the above-mentioned metrics as the primary metrics for the autoscaling decision, but explore other metrics, such as HTTP request and garbage collection related metrics. We consider these additional metrics as secondary metrics that may help better explain the values seen for the primary metrics. We investigated and analyzed the correlation of the secondary metrics to the primary metrics and observed patterns indicating significant coupling. Based on these observations, we propose a condition-based autoscaling process that uses secondary metrics, such as total garbage collection pause time, to improve scaling based on the primary metrics. As one example of how secondary metrics can improve autoscaling, we demonstrate a technique of killing pods that exceed thresholds on the secondary metrics instead of scaling in or out. We evaluate our methodology against the default autoscaling mechanism for various workloads and share results illustrating that it achieves better performance.
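The condition-based process the abstract describes can be sketched as a decision function: pods whose secondary metric (total GC pause time) exceeds a threshold are killed so the orchestrator recreates them, while the healthy pods' primary metric drives the ordinary scale in/out decision. A minimal sketch with hypothetical field names and thresholds, not the paper's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class PodMetrics:
    name: str
    cpu_utilization: float    # primary metric, 0.0-1.0
    gc_pause_ms_total: float  # secondary metric

def autoscale_decision(pods, cpu_target=0.7, gc_pause_limit_ms=500.0):
    """Return (pods_to_kill, scale_delta).

    Pods over the GC-pause threshold are marked for restart
    instead of triggering a scale-out; the remaining pods' mean
    CPU utilization drives the usual scaling decision.
    """
    to_kill = [p.name for p in pods if p.gc_pause_ms_total > gc_pause_limit_ms]
    healthy = [p for p in pods if p.gc_pause_ms_total <= gc_pause_limit_ms]
    if not healthy:
        return to_kill, 0
    mean_cpu = sum(p.cpu_utilization for p in healthy) / len(healthy)
    if mean_cpu > cpu_target:
        return to_kill, 1    # scale out
    if mean_cpu < cpu_target / 2:
        return to_kill, -1   # scale in
    return to_kill, 0
```

The point of separating the two steps is that a pod stalled by garbage collection inflates CPU utilization; restarting it can resolve the pressure without adding replicas.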
The backbone of *** is a single-threaded event loop, so computationally intensive tasks are bound to the performance of a single core. Modules with different architectures have been built to provide parallelism and scaling. However, their properties differ, making them appropriate for different cases. To assist software engineers in choosing the most appropriate module in the most efficient way, we perform an empirical study investigating the modules' characteristics and functionality while taking system variances into account. Crucially, we present and apply an evaluation methodology focusing on four aspects: compute-intensive task execution, data sharing, communication, and overhead. The results suggest that instance type (*** thread vs. *** process) is not enough on its own to determine the most appropriate module. We find that the modules with the highest performance in most cases may sacrifice other aspects, such as support, functionality, or performance in less common cases, and that platform variances play a significant part.
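The thread-versus-process trade-off the abstract evaluates is not specific to one runtime (the runtime studied in the paper is masked in this listing, so this is not its API). A generic Python analogue: both worker kinds complete a compute-intensive task, but they differ in memory sharing, communication cost, and startup overhead:

```python
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def fib(n):
    """A small compute-intensive task (naive Fibonacci)."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def run_in(executor_cls, n):
    """Execute the task in a single worker of the given kind.

    A thread worker shares memory with the caller but, in CPython,
    CPU-bound threads contend for the GIL; a process worker gains
    true parallelism at the cost of process startup and serialized
    communication. The worker kind alone does not decide which is
    more appropriate -- mirroring the abstract's finding.
    """
    with executor_cls(max_workers=1) as ex:
        return ex.submit(fib, n).result()

# e.g. compare run_in(ThreadPoolExecutor, 30)
#      against run_in(ProcessPoolExecutor, 30)
```

Measuring wall-clock time of the two calls for varying task sizes exposes exactly the overhead-versus-parallelism trade-off the evaluation methodology covers.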