Workload prediction is critical in enabling proactive resource management of cloud systems. Workload prediction is valuable for cloud users and providers, as it can effectively guide many practices, such as performance assurance, cost reduction, and energy consumption savings. However, cloud workload prediction is highly challenging due to the complexity and dynamics of workloads, and various solutions have been proposed to enhance the prediction accuracy. This paper aims to provide an in-depth understanding and categorization of existing solutions through an extensive literature review. Unlike existing surveys, for the first time, we comprehensively sort out and analyze the development landscape of workload prediction from a new perspective, i.e., application-oriented rather than prediction methodologies per se. Specifically, we first introduce the basic features of workload prediction, and then analyze and categorize existing efforts based on two significant characteristics of cloud applications: variability and […]. Furthermore, we also investigate how workload prediction is applied to resource management. Finally, open research opportunities in workload prediction are highlighted to foster further advancements.
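As an illustrative sketch of the simplest class of predictors such a survey would cover (a hypothetical setup, not a method from the paper), an autoregressive model can be fit to a historical utilization trace by least squares and used to forecast the next sample:

```python
import numpy as np

def fit_ar(history, order=3):
    # Build the lagged design matrix: row i holds `order` consecutive past
    # samples, used to predict the sample that immediately follows them.
    X = np.array([history[i:i + order] for i in range(len(history) - order)])
    y = np.array(history[order:])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(history, coeffs):
    # One-step-ahead forecast from the most recent `order` samples.
    order = len(coeffs)
    return float(np.dot(coeffs, history[-order:]))

# Toy trace: a steadily growing workload is forecast almost exactly.
trace = [10, 12, 14, 16, 18, 20, 22, 24]
w = fit_ar(trace, order=3)
print(round(predict_next(trace, w)))  # → 26
```

Real cloud traces are bursty and non-stationary, which is exactly why the surveyed literature moves well beyond linear models of this kind.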
Forecasting human mobility is of great significance in the simulation and control of infectious diseases like COVID-19. To get a clear picture of potential future outbreaks, it is necessary to forecast multi-step Ori...
Intelligent reflecting surface (IRS) technology, combined with a wireless powered communication network (WPCN), can enhance the desired signal energy and address the power-sustaining problem in ocean monitoring systems. In this paper, we investigate a reliable communication structure where multiple buoys transmit data to a base station (BS) with the help of an unmanned aerial vehicle (UAV)-mounted IRS and harvest energy from the base station. To organically combine the WPCN with the maritime data collection scenario, a scheduling protocol that employs time division multiple access (TDMA) is proposed to serve multiple buoys for uplink data transmission. Furthermore, we compare the full-duplex (FD) and half-duplex (HD) mechanisms in the maritime data collection system to illustrate their different performances under these two modes. To maximize the fair energy efficiency under the energy harvesting constraints, a joint optimization problem on user association, BS transmit power, the UAV's trajectory, and the IRS's phase shifts is formulated. To solve this non-convex problem, the original problem is decoupled into several subproblems, and successive convex optimization and block coordinate descent (BCD) methods are employed to obtain near-optimal solutions. Simulation results demonstrate that the UAV-mounted IRS can significantly improve energy efficiency in the considered system.
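The decouple-and-alternate strategy in the abstract above can be sketched on a toy two-block convex objective (a made-up function, not the paper's formulation): block coordinate descent fixes one variable, solves the resulting subproblem in closed form, and alternates until convergence.

```python
# Toy block coordinate descent (BCD): minimize
#   f(x, y) = (x - 2)**2 + (y + 1)**2 + x*y
# by alternately minimizing over each block while the other is held fixed.
def bcd(iters=50):
    x, y = 0.0, 0.0
    for _ in range(iters):
        x = (4 - y) / 2    # argmin over x: df/dx = 2(x - 2) + y = 0
        y = (-2 - x) / 2   # argmin over y: df/dy = 2(y + 1) + x = 0
    return x, y

x, y = bcd()
print(round(x, 3), round(y, 3))  # → 3.333 -2.667, the joint minimizer
```

Because this toy objective is jointly convex, BCD reaches the global optimum; in the non-convex problem above, the same alternation only guarantees a near-optimal (stationary) solution, which is why the abstract hedges accordingly.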
In this article, the Kalman filter design problem is investigated for linear discrete-time systems under binary encoding schemes. Under such a scheme, the local information is quantized into a bit string by the remote...
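For context, the standard discrete-time Kalman recursion that such encoding-aware filter designs extend can be sketched in the scalar case (the system parameters here are illustrative assumptions, not the article's setup):

```python
# Standard scalar Kalman filter step: predict with the model, then correct
# with the measurement z. A, Q, H, R are illustrative, not from the article.
def kalman_step(x, P, z, A=1.0, Q=0.01, H=1.0, R=0.1):
    # Predict: propagate the state estimate and its error covariance.
    x_pred = A * x
    P_pred = A * P * A + Q
    # Update: blend the prediction with the measurement via the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0  # initial estimate and covariance
for z in [1.0, 1.1, 0.9, 1.05]:
    x, P = kalman_step(x, P, z)
print(round(x, 2))  # estimate settles near the measured level ~1.0
```

Binary encoding schemes quantize the measurement `z` into a bit string before transmission, so the remote filter must account for the induced quantization and bit errors on top of this baseline recursion.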
Efficient resource allocation in computing networks is essential for managing fluctuating demands and optimizing system performance. Traditional auction and pricing models often fail to adapt to diverse demands and su...
Federated Learning (FL), hailed as a potent approach in merging medical expertise, promises to elevate collaborative efforts among healthcare institutions while safeguarding the privacy and security of sensitive medic...
Federated Learning (FL), as one of the effective methods to solve the problem of medical data silos, can promote mutual cooperation among medical institutions under the premise of safeguarding the privacy and security...
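As background, the canonical aggregation step underlying most FL systems, federated averaging (FedAvg), can be sketched as follows (an illustrative baseline, not necessarily the paper's method): each institution shares only locally trained model weights, never raw patient data, and the server averages them weighted by local dataset size.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    # Server-side FedAvg: weighted average of client model parameters,
    # with weights proportional to each client's local dataset size.
    total = sum(client_sizes)
    stacked = np.stack(client_weights)
    coeffs = np.array(client_sizes, dtype=float) / total
    return np.tensordot(coeffs, stacked, axes=1)

w_a = np.array([1.0, 3.0])  # hospital A's locally trained weights
w_b = np.array([3.0, 1.0])  # hospital B's locally trained weights
print(fedavg([w_a, w_b], [100, 300]))  # → [2.5 1.5]
```

The privacy benefit comes from what is *not* transmitted: only parameter vectors leave each institution, which is the property the medical-data-silo setting relies on.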
Using Quadrics as the object representation has the benefits of both generality and closed-form projection derivation between image and world spaces. Although numerous constraints have been proposed for dual quadric r...
Social behavior metadata has altered business models and human lifestyles. However, the inclusion of personal information in social behavior metadata poses risks of identity exposure for users. Under the requirements ...
ISBN (digital): 9783982674100
ISBN (print): 9798331534646
Deep neural networks (DNNs) have demonstrated exceptional performance, leading to diverse applications across various mobile devices (MDs). Considering factors like portability and environmental sustainability, an increasing number of MDs are adopting energy harvesting (EH) techniques for power supply. However, the computational intensity of DNNs presents significant challenges for their deployment on these resource-constrained devices. Existing approaches often employ DNN partitioning or offloading to mitigate the time and energy consumption of running DNNs on MDs. Nonetheless, existing methods frequently fall short in accurately modeling DNN execution time, and do not consider thread allocation as a means of further optimizing latency and energy consumption. To solve these problems, we propose a dynamic DNN partition and thread allocation method that optimizes the latency and energy consumption of running DNNs on EH-enabled MDs. Specifically, we first investigate the relationship between DNN inference latency and the number of allocated threads, and establish an accurate DNN latency prediction model. Based on this prediction model, a deep reinforcement learning (DRL)-based DNN partition (DDP) algorithm is designed to find the optimal partitions for DNNs, and a thread allocation (TA) algorithm is proposed to further reduce inference latency. Experimental results from our test-bed platform demonstrate that, compared with four benchmark methods, our scheme reduces DNN inference latency and energy consumption by up to 37.3% and 38.5%, respectively.
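The latency-threads relationship mentioned above can be illustrated with a simple Amdahl-style model (a hypothetical stand-in, not the paper's actual prediction model): latency decomposes into a serial part plus a parallelizable part that shrinks with thread count, and both terms can be fit from a few measurements by least squares.

```python
import numpy as np

# Synthetic latency measurements (ms) at different thread counts; the assumed
# model is T(n) = a + b / n, with `a` the serial portion and `b` the
# parallelizable work. Fit a, b by ordinary least squares.
threads = np.array([1, 2, 4, 8])
latency = np.array([100.0, 60.0, 40.0, 30.0])

X = np.column_stack([np.ones(len(threads)), 1.0 / threads])
(a, b), *_ = np.linalg.lstsq(X, latency, rcond=None)

def predict_latency(n):
    # Predicted inference latency (ms) when `n` threads are allocated.
    return a + b / n

print(round(predict_latency(16), 1))  # → 25.0 ms extrapolated at 16 threads
```

A model of this shape makes the diminishing returns of extra threads explicit, which is the effect a thread allocation algorithm has to trade off against energy cost on an EH-powered device.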