Featured Application: This work makes it possible to extract heart rate variability components online in order to monitor the underlying human body systems, in particular to determine the activity of the sympathetic and parasympathetic branches of the autonomic nervous system (ANS) and the balance between them, and, indirectly, to detect and monitor many common diseases related to the cardiovascular system and the ANS. Such a solution can, for example, be embedded directly into Holter devices. A more precise determination of the components' properties may make it possible to link them to specific physiological processes, especially those of very low and ultra-low frequencies, which has not yet been fully achieved, increasing the practical importance of this work.
Heart rate variability (HRV), containing four components of high (HF), low (LF), very low (VLF), and ultra-low (ULF) frequencies, provides insight into cardiovascular and autonomic nervous system functions. Classical spectral analysis is most often used in research on HRV and its components. The aim of this work was to develop and validate an online HRV decomposition algorithm for monitoring the associated physiological processes. The online algorithm was developed based on variational mode decomposition (VMD), validated on synthetic HRV with known properties, and compared with its offline adaptive version (AVMD), standard VMD, the continuous wavelet transform (CWT), and wavelet packet decomposition (WPD). Finally, it was used to decompose 36 real all-night HRV recordings from two datasets, and the properties of the four extracted components were analyzed using the Hilbert transform. Statistical tests confirmed that the online VMD (VMDon) algorithm returned results of comparable quality to AVMD and CWT and outperformed standard VMD and WPD. VMDon, AVMD, and CWT extracted four components from the real HRV with frequency content slightly exceeding the previously recognized ranges, suggesting the possibility…
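As a hedged illustration of the Hilbert-transform step mentioned above, the Python sketch below estimates the instantaneous amplitude and frequency of a single, already-extracted HRV component; the function name, the synthetic LF-like signal, and the 4 Hz resampling rate are assumptions for the example, not details from the paper.

```python
# Illustrative sketch (not the paper's implementation): given one HRV component
# already extracted by a decomposition method such as VMD, estimate its
# instantaneous amplitude and frequency with the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

def component_instantaneous_features(component, fs):
    """component: 1-D array, one decomposed HRV mode resampled at fs Hz."""
    analytic = hilbert(component)                    # analytic signal
    amplitude = np.abs(analytic)                     # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))            # unwrapped phase
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)  # Hz, length N-1
    return amplitude, inst_freq

# Example with a synthetic LF-like oscillation (~0.1 Hz) sampled at 4 Hz,
# a common resampling rate for RR-interval series (assumed here).
fs = 4.0
t = np.arange(0, 300, 1.0 / fs)
lf_like = 0.05 * np.sin(2 * np.pi * 0.1 * t)
amp, freq = component_instantaneous_features(lf_like, fs)
print(f"median instantaneous frequency: {np.median(freq):.3f} Hz")
```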
Infrastructure-as-a-Service (IaaS) cloud platforms offer resources with diverse buying options. Users can run an instance on the on-demand market, which is stable but expensive, or on the spot market at a significant discount. However, users have to carefully weigh the low cost of spot instances against their poor reliability: spot instances will be revoked when a revocation event occurs. Therefore, an important problem that an IaaS user now faces is how to use spot instances in a cost-effective and low-risk manner. Based on the replication-based fault tolerance mechanism, we propose an online termination algorithm that optimizes the cost of using spot instances while ensuring operational reliability. We prove that in most cases the cost of our proposed online algorithm will not exceed twice the minimum cost of the optimal offline algorithm that knows the exact future a priori. Through a large number of experiments, we verify that our algorithm in most cases has a competitive ratio of no more than 2, and in the other cases it still reaches the guaranteed competitive ratio.
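The abstract does not spell out the termination rule; twice-the-optimum guarantees of this kind typically come from a break-even strategy in the spirit of ski rental. The sketch below illustrates that generic idea under assumed prices and is not the paper's algorithm.

```python
# Generic break-even sketch (an assumption, not the paper's algorithm): keep the
# job on cheap spot capacity until the money already spent there equals the cost
# of simply finishing on a stable on-demand instance, or until the spot instance
# is revoked, then fall back to on-demand.
def run_with_spot(remaining_hours, spot_price, on_demand_price, revoked_at=None):
    """revoked_at: hour index at which the spot instance is revoked (None = never)."""
    spot_spend = 0.0
    hour = 0
    while remaining_hours > 0:
        finish_on_demand = remaining_hours * on_demand_price
        if spot_spend >= finish_on_demand or (revoked_at is not None and hour >= revoked_at):
            # Break-even reached (or spot revoked): finish on on-demand.
            return spot_spend + finish_on_demand
        spot_spend += spot_price
        remaining_hours -= 1
        hour += 1
    return spot_spend  # job finished entirely on spot capacity

# Toy usage with assumed prices: 10-hour job, spot at $0.3/h, on-demand at $1/h.
print(run_with_spot(10, 0.3, 1.0, revoked_at=6))
```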
ISBN (print): 9798400716348
Traditional clustering often results in imbalanced clusters, limiting its suitability for real-world problems. In response, capacitated clustering methods have emerged, aiming to achieve balanced clusters by limiting the number of points in each cluster. In this paper, we introduce online algorithms with provable bounds on the number of opened centers and on the cost approximation. We validate our methods experimentally.
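As a purely illustrative sketch of what online capacitated clustering involves (the paper's algorithms and bounds are not described in the abstract), the code below assigns each arriving point to the nearest open center with spare capacity and opens a new center otherwise; the capacity and radius parameters are assumptions.

```python
# Illustrative online capacitated assignment, not the paper's method.
import math

def online_capacitated_assign(points, capacity, radius):
    centers = []          # list of (center_point, current_load)
    assignment = []
    for p in points:
        best, best_d = None, float("inf")
        for i, (c, load) in enumerate(centers):
            d = math.dist(p, c)
            if load < capacity and d <= radius and d < best_d:
                best, best_d = i, d
        if best is None:
            centers.append((p, 1))             # open a new center at p
            assignment.append(len(centers) - 1)
        else:
            c, load = centers[best]
            centers[best] = (c, load + 1)
            assignment.append(best)
    return centers, assignment

centers, assignment = online_capacitated_assign(
    [(0, 0), (0.2, 0.1), (5, 5), (0.1, 0.3), (5.2, 4.9)], capacity=2, radius=1.0)
print(len(centers), assignment)
```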
With the continuous development of machine learning, there is an increasing demand for cloud-based machine learning inference services, which are latency-sensitive tasks, such as service requests from Internet of Things (IoT) devices. These inference services are generally accompanied by fluctuations and uncertainties, so they often require vastly different numbers of servers at different times. As a result, how to dynamically and rationally schedule cloud servers for inference services has become an important issue. Alibaba Cloud currently provides serverless instances called elastic container instances (ECI); thanks to pay-as-you-go billing and second-level elasticity, they are well suited to handling bursty or fluctuating workloads. At the same time, Alibaba Cloud's subscription-based elastic compute service (ECS) instances can be used for steady-state workloads. Our objective is to dynamically combine these two types of instances to serve inference requests. In this article, we propose a deterministic online algorithm that can rationally schedule these two types of instances to optimize cost without requiring knowledge of future workloads. We prove that the proposed online algorithm achieves a competitive ratio of no more than 2 compared to the optimal offline algorithm. Through simulation experiments, we demonstrate that our algorithm outperforms three benchmarks: an all-reserved algorithm, an all-on-demand algorithm, and a traditional online algorithm that only uses ECS instances. Our algorithm exhibits superiority across various workloads and can significantly reduce costs in most cases.
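The article's algorithm is not detailed in the abstract; a 2-competitive bound for mixing pay-as-you-go and subscription instances is commonly obtained with a rent-or-buy break-even rule. The sketch below shows that generic rule with assumed ECI/ECS prices and is not the proposed algorithm itself.

```python
# Generic rent-or-buy sketch (an assumption, not the article's algorithm).
def schedule_instances(demand, eci_price, ecs_price):
    """demand: servers needed per time slot.
    eci_price: per-server, per-slot pay-as-you-go price (ECI).
    ecs_price: one-off subscription price for one ECS server over the horizon."""
    reserved = 0          # ECS servers already subscribed
    marginal_spend = 0.0  # ECI spend attributed to the first unreserved server
    total = 0.0
    for d in demand:
        overflow = max(0, d - reserved)   # demand not covered by reserved ECS
        total += overflow * eci_price     # serve the rest with pay-as-you-go ECI
        if overflow > 0:
            # Charge one slot of ECI price to the first unreserved server and
            # subscribe another ECS server once that spend reaches its price.
            marginal_spend += eci_price
            if marginal_spend >= ecs_price:
                reserved += 1
                total += ecs_price
                marginal_spend = 0.0
    return total

# Toy demand trace with assumed prices.
print(schedule_instances([3, 4, 5, 5, 5, 2, 1], eci_price=1.0, ecs_price=3.0))
```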
Data privacy concerns in machine learning and related regulations such as the General Data Protection Regulation have fostered a boom in federated learning (FL). However, costly infrastructure and time-consuming deployments pose significant barriers to the widespread adoption of FL in real-world scenarios. To increase the user-friendliness of federated learning while reducing deployment costs and improving its scalability, service providers are beginning to offer federated learning as a service (FLaaS) in the cloud. Due to the distributed nature of FL, communication overhead imposes significant network costs on FLaaS providers. Mainstream cloud platforms offer two main billing methods for networking products: on-demand and reserved. How to combine these two billing models to minimize communication cost in the face of the time-varying demands of federated learning in the cloud poses a challenge to FLaaS providers. To address this problem, we propose onlineNS, an online algorithm that makes networking product selection decisions without prior knowledge of future demand sequences. Theoretical analysis and simulations based on real-world traces as well as synthetic datasets validate the effectiveness of our online algorithm and demonstrate that it achieves better cost performance than benchmarks widely used in practice, at the same communication performance.
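For context, competitive analysis of this kind compares against an offline optimum that knows the whole demand sequence. The sketch below computes such an offline benchmark under an assumed billing model (a flat per-unit reservation charge for the horizon plus per-unit, per-slot on-demand charges); it is not onlineNS itself.

```python
# Hedged offline benchmark under an assumed reserved/on-demand billing model.
def offline_optimal_cost(demand, reserved_unit_price, on_demand_unit_price):
    best = float("inf")
    for level in range(0, max(demand) + 1):   # candidate reserved bandwidth levels
        overflow = sum(max(0, d - level) for d in demand)
        cost = level * reserved_unit_price + overflow * on_demand_unit_price
        best = min(best, cost)
    return best

# Toy trace: time-varying FL communication demand in bandwidth units.
print(offline_optimal_cost([2, 6, 3, 8, 1, 7],
                           reserved_unit_price=4.0, on_demand_unit_price=1.0))
```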
The mobile edge Industrial Internet of Things (MEIIoT) combines the Industrial Internet of Things (IIoT) with mobile edge computing and is currently a new type of IIoT. MEIIoT is characterized by large scale and strong dynamics (e.g., the network topology or the number of IIoT devices can change over time). The placement of virtualized network functions (VNFs) in MEIIoT refers to placing multiple network functions (e.g., a motion analyzer or a video processor) on edge nodes in the form of software instances, so that IIoT devices can flexibly obtain the services of these VNFs. However, an edge node can host only a small number of VNFs because of its limited storage and computing resources. Therefore, if an IIoT device requires multiple VNFs, it needs to transmit its data to several edge nodes, which causes high delay. How to place all the VNFs on edge nodes in MEIIoT so that the total access delay for all IIoT devices requiring VNFs is minimized is a challenging problem. In this article, we design an online placement algorithm. First, we decompose a long-term VNF optimization problem into a series of one-shot optimization problems. Second, we formulate these one-shot problems as integer nonlinear programs and prove that they are NP-hard. To overcome this hardness, we then propose a heuristic algorithm. Finally, we carry out extensive experiments with real-world datasets to validate the efficacy of the proposed solution.
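The heuristic itself is not described in the abstract; the sketch below shows one plausible greedy variant that places each VNF on the feasible edge node adding the least total access delay, with made-up delays and per-node capacities.

```python
# Illustrative greedy VNF placement, not the paper's heuristic.
def greedy_vnf_placement(vnfs, nodes, delay, requests, capacity):
    """delay[device][node]: access delay; requests[vnf]: devices needing that VNF."""
    placement = {}
    load = {n: 0 for n in nodes}
    for v in vnfs:
        best_node, best_cost = None, float("inf")
        for n in nodes:
            if load[n] >= capacity[n]:
                continue                     # node already hosts its maximum VNFs
            cost = sum(delay[dev][n] for dev in requests[v])
            if cost < best_cost:
                best_node, best_cost = n, cost
        placement[v] = best_node
        if best_node is not None:
            load[best_node] += 1
    return placement

# Assumed toy topology: two devices, two edge nodes, one VNF slot per node.
delay = {"dev1": {"e1": 2, "e2": 5}, "dev2": {"e1": 4, "e2": 1}}
requests = {"motion_analyzer": ["dev1"], "video_processor": ["dev1", "dev2"]}
print(greedy_vnf_placement(["motion_analyzer", "video_processor"],
                           ["e1", "e2"], delay, requests, {"e1": 1, "e2": 1}))
```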
The huge amount of energy consumed by data centers around the world every year motivates cloud service providers to operate data centers more energy-efficiently. A promising solution is to turn off idle servers, which, however, may need to be turned on again later, incurring a significant startup cost. The problem then becomes dynamically provisioning for the workload while cutting down the energy cost, which includes both the power to keep the data center running and the startup cost. Unlike previous studies, which usually consider worst-case performance guarantees when designing online algorithms, this paper considers the average case, which is more practical. We propose a simple online algorithm based on the expected job interval of the workload, which is proven to be optimal for exponential and uniform distributions, achieving tight competitive ratios of e/(e-1) and 4/3, respectively. Simulations using synthetic data verify our theoretical analysis. Numerical results employing Google's data center workload trace demonstrate that the proposed algorithm outperforms the worst case-based algorithm in terms of operating cost reduction.
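The exact decision rule is not given in the abstract; a hedged sketch of the expectation-based idea is shown below, comparing the energy expected to be wasted by an idle server with the startup cost of power-cycling it, using assumed numbers.

```python
# Hedged sketch of an expectation-based on/off decision (assumed rule, not
# necessarily the paper's): switch an idle server off only if keeping it on
# through the expected idle interval is costlier than paying the startup cost.
def turn_off_when_idle(expected_idle_time, idle_power, startup_cost):
    """Return True if switching the idle server off is cheaper in expectation."""
    expected_keep_on_cost = expected_idle_time * idle_power
    return expected_keep_on_cost > startup_cost

# Toy numbers: exponential idle periods with mean 120 s, 200 W idle draw,
# startup cost equivalent to 30 kJ.
print(turn_off_when_idle(expected_idle_time=120.0, idle_power=200.0,
                         startup_cost=30_000.0))
```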
This paper deals with the canonical single-processor online scheduling problem with a position-based learning effect. Specifically, a set of jobs arriving online over time is to be processed on a single processor. In this model, for each job J_k the actual processing time p_kl is defined as a power function of its position l, i.e., p_kl = p_k · l^β, where p_k denotes its normal processing time and β ≤ 0 is the learning index. The goal is to minimize the sum of completion times. For this problem, we show that no online algorithm can have a competitive ratio of less than 2. Most notably, we design an online algorithm called Delayed Shortest Normal Processing Time (DSNPT) that matches this lower bound, and hence DSNPT is optimal.
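The sketch below implements only the stated processing-time model p_kl = p_k · l^β (not the DSNPT algorithm, whose details are not in the abstract) and evaluates the total completion time of a given processing order, assuming for simplicity that all jobs are available at time 0.

```python
# Sketch of the learning-effect scheduling model: the job in position l takes
# p_k * l**beta time, so with beta <= 0 later positions get shorter.
def total_completion_time(normal_times, beta):
    """normal_times: jobs in the order they are processed (all released at time 0)."""
    finish, total = 0.0, 0.0
    for position, p in enumerate(normal_times, start=1):
        finish += p * position ** beta   # actual processing time at this position
        total += finish                  # accumulate completion times
    return total

jobs = [5.0, 2.0, 8.0, 3.0]
beta = -0.3
# Shortest-normal-processing-time order as a simple baseline sequence.
print(total_completion_time(sorted(jobs), beta))
```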
ISBN (print): 9798350310900
In Vehicle-to-Everything scenarios, the efficient, real-time offloading of multiple tasks from vehicles to roadside units with higher computing power is a challenging endeavor. This challenge is amplified by the dynamic nature of the computing power of roadside units, which fluctuates in real time due to resource sharing among multiple vehicles. Consequently, accurately determining the computing power of roadside units prior to offloading becomes a significant hurdle for vehicles. To overcome this challenge and enhance the performance of online algorithms used for task offloading, we propose a novel approach that leverages machine learning techniques. This approach uses historical data to predict the real-time computing power of roadside units. By incorporating machine learning predictions, the proposed approach mitigates the uncertainty in the online algorithm's decision-making process, which in turn enables vehicles to make more informed offloading decisions. Moreover, to enhance the robustness of the algorithm against potential prediction errors, our approach adopts a partial-trust mechanism towards the predicted outcomes. By considering this partial trust, we maintain the algorithm's reliability in real-world scenarios. Furthermore, we conduct theoretical analysis and comprehensive experiments to demonstrate the superiority of our proposed algorithm in terms of task offloading performance and robustness.
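The paper's partial-trust mechanism is not specified in the abstract; the sketch below illustrates one common form of it, blending the predicted capacity with a conservative robust estimate through a trust parameter, with all numbers assumed.

```python
# Hedged sketch of a partial-trust blend (an assumption, not the paper's rule).
def blended_capacity_estimate(predicted, robust_lower_bound, trust):
    """trust = 0 ignores the prediction entirely; trust = 1 follows it fully."""
    return trust * predicted + (1.0 - trust) * robust_lower_bound

def offload_decision(task_cycles, deadline, predicted_capacity,
                     robust_capacity, trust):
    capacity = blended_capacity_estimate(predicted_capacity, robust_capacity, trust)
    return task_cycles / capacity <= deadline   # offload only if it looks feasible

# Toy usage with assumed units (giga-cycles, GHz, seconds).
print(offload_decision(task_cycles=4.0, deadline=0.5,
                       predicted_capacity=12.0, robust_capacity=6.0, trust=0.7))
```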
With the popularity of cloud computing increasing rapidly in recent years, the use of the cloud, represented by IaaS and SaaS, is being embraced by more and more users. One aspect of the cloud's flexibility lies in its pay-as-you-go usage model, which allows users to purchase and release cloud instances on demand, reducing the financial loss caused by wasted resources. For a SaaS provider that purchases cloud instances under the pay-as-you-go model, it is important to decide when to release as many idle on-demand cloud instances as possible to achieve cost savings while the number of incoming user demands is declining, taking into account the cost of the start-up time needed to acquire new cloud instances and the penalty cost that may be incurred while SaaS users wait. To make good decisions when there is not enough knowledge to predict the future trend of incoming demands, we propose an online instance releasing algorithm that can effectively help SaaS providers reduce the cost of using on-demand instances. Through theoretical analysis we show that our online algorithm achieves a competitive ratio of less than 2 for a variety of penalty functions. Extensive simulation experiments based on both real Google workload data and simulated demand sequences demonstrate that the proposed online algorithm is stable and efficient.
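The releasing rule is not given in the abstract; competitive ratios below 2 are typically obtained with a break-even rule, and the sketch below shows that generic idea (keep an idle instance until its accumulated idle cost reaches the restart penalty); prices and the penalty value are assumed.

```python
# Generic break-even sketch (an assumption, not the proposed algorithm): keep an
# idle on-demand instance alive until the money spent keeping it idle reaches
# the restart/startup penalty, then release it.
def release_schedule(idle_slots, idle_price_per_slot, restart_penalty):
    """idle_slots: how long the instance actually stays idle (unknown online)."""
    spent_idle = 0.0
    for slot in range(idle_slots):
        if spent_idle >= restart_penalty:
            # Break-even reached: release now; the penalty is paid only if the
            # instance is ever needed again.
            return spent_idle, slot
        spent_idle += idle_price_per_slot
    return spent_idle, None   # demand returned before the break-even point

# Toy usage: $0.5 per idle hour, restart penalty equivalent to $3.
print(release_schedule(idle_slots=10, idle_price_per_slot=0.5, restart_penalty=3.0))
```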