This paper studies the online maximum edge-weighted b-matching problem. The input of the problem is a weighted bipartite graph G = (L, R, E, w). Vertices in R arrive online, and each vertex in L can be matched to at most b vertices in R. The objective is to maximize the total weight of the matching edges. We give a randomized algorithm GREEDY-RT for this problem, and show that its competitive ratio is Ω(1 / ∏_{j=1}^{log* w_max − 1} log^(j) w_max), where w_max is an upper bound on the edge weights, which may not be known ahead of time. We can improve the competitive ratio to Ω(1 / log w_max) if w_max is known to the algorithm when it starts. We also derive an upper bound O(1 / log w_max) on the competitive ratio, suggesting that GREEDY-RT is near optimal. We also consider deterministic algorithms; we present a near-optimal algorithm GREEDY-D, which has competitive ratio 1 / (1 + 2ξ(w_max + 1)^{1/ξ}), where ξ = min{b, ⌊ln(1 + w_max)⌋}. We also study a variant of the problem called the online maximum two-sided vertex-weighted b-matching problem, and give a modification of the randomized algorithm GREEDY-RT called GREEDY-vRT for this variant. We show that the competitive ratio of GREEDY-vRT is also near optimal. (C) 2015 Elsevier B.V. All rights reserved.
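The setting can be pictured with a small greedy sketch. The snippet below is only an illustration of threshold-style greedy assignment under the capacity b, with a hypothetical random threshold drawn on a log scale; it is not the paper's GREEDY-RT, and the names offline_capacity, online_arrivals and w_max are assumptions made for the example.

```python
import random

def greedy_b_matching(offline_capacity, online_arrivals, w_max):
    """Toy greedy assignment for the online edge-weighted b-matching setting.

    offline_capacity: dict mapping each vertex in L to its capacity b.
    online_arrivals: iterable of (r, edges) pairs, with edges = {l: weight}.
    w_max: assumed upper bound on the edge weights.
    Illustrative only: one random threshold is drawn up front and each
    arriving vertex is matched greedily among edges above it.
    """
    threshold = w_max ** random.random()    # hypothetical log-scale threshold
    load = {l: 0 for l in offline_capacity}
    total = 0.0
    for r, edges in online_arrivals:
        candidates = [(w, l) for l, w in edges.items()
                      if w >= threshold and load[l] < offline_capacity[l]]
        if candidates:                       # take the heaviest feasible edge
            w, l = max(candidates, key=lambda wl: wl[0])
            load[l] += 1
            total += w
    return total
```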
We consider the semi-online multiprocessor scheduling problem with m identical, parallel machines to minimize the makespan, where the jobs arrive in decreasing order of processing times. The famous Longest Processing Time (LPT) algorithm by Graham (1966) [4] for the classical offline multiprocessor scheduling problem schedules the jobs in decreasing order of processing times and has a worst-case bound of 4/3 − 1/(3m). So far, no algorithm with a better competitive ratio than the LPT algorithm has been given for the semi-online scheduling problem with decreasing processing times. In this note, we present a 5/4-competitive algorithm for m ≥ 3 and an algorithm that is the best possible for m = 3, i.e. an algorithm with competitive ratio (1 + √37)/6. (C) 2012 Elsevier B.V. All rights reserved.
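For reference, the LPT rule mentioned above simply takes the jobs in decreasing order of processing time and puts each one on the currently least-loaded machine; in the semi-online model the jobs already arrive in that order, so it reduces to greedy assignment of each arriving job. A minimal sketch (function name and example values are illustrative):

```python
import heapq

def lpt_makespan(processing_times, m):
    """Longest Processing Time rule: jobs in decreasing order of processing
    time, each placed on the currently least-loaded of the m machines."""
    loads = [0.0] * m
    heapq.heapify(loads)
    for p in sorted(processing_times, reverse=True):
        least = heapq.heappop(loads)        # machine with the smallest load
        heapq.heappush(loads, least + p)
    return max(loads)

# Example: seven jobs on three machines (illustrative values)
print(lpt_makespan([5, 4, 3, 3, 2, 2, 1], 3))   # -> 7
```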
We study the problem of exploring an unknown undirected connected graph. Beginning in some start vertex, a searcher must visit each node of the graph by traversing edges. Upon visiting a vertex for the first time, the searcher learns all incident edges and their respective traversal costs. The goal is to find a tour of minimum total cost. Kalyanasundaram and Pruhs (Constructing competitive tours from local information, Theoretical Computer Science 130, pp. 125-138, 1994) proposed a sophisticated generalization of a Depth First Search that is 16-competitive on planar graphs. While the algorithm is feasible on arbitrary graphs, the question whether it has constant competitive ratio in general has remained open. Our main result is an involved lower bound construction that answers this question negatively. On the positive side, we prove that the algorithm has constant competitive ratio on any class of graphs with bounded genus. Furthermore, we provide a constant competitive algorithm for general graphs with a bounded number of distinct weights. (C) 2012 Elsevier B.V. All rights reserved.
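As a point of comparison for this exploration model, the sketch below runs a plain depth-first tour on a weighted adjacency map: incident edges become known on the first visit, and backtracking moves are paid for. It is only a baseline for the cost model, not the 16-competitive algorithm of Kalyanasundaram and Pruhs, and the adjacency format is an assumption.

```python
def dfs_exploration_cost(adj, start):
    """Plain depth-first exploration of an undirected weighted graph given as
    adj[v] = {u: cost}. Returns the total traversal cost of the tour,
    counting the backtracking moves. Baseline only."""
    visited = {start}
    total = 0.0

    def visit(v):
        nonlocal total
        # incident edges (and their costs) become known on the first visit
        for u, c in sorted(adj[v].items(), key=lambda e: e[1]):
            if u not in visited:
                visited.add(u)
                total += c                   # walk to u
                visit(u)
                total += c                   # walk back to v

    visit(start)
    return total
```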
This paper considers an online scheduling problem arising from Quality-of-Service (QoS) applications. We are required to schedule a set of jobs, each with release time, deadline, processing time and weight. The objective is to maximize the total value obtained for scheduling the jobs. Unlike the traditional model of this scheduling problem, in our model unfinished jobs also get partial values proportional to their amounts processed. No non-timesharing algorithm for this problem with competitive ratio better than 2 is known. We give a new non-timesharing algorithm GAP that improves this ratio for bounded values of m, where m can be the number of concurrent jobs or the number of weight classes. The competitive ratio is improved from 2 to 1.618 (golden ratio), which is optimal for m = 2, and when applied to cases with m > 2 it still gives a competitive ratio better than 2, e.g. 1.755 when m = 3. We also give a new study of the problem in the multiprocessor setting, giving an upper bound of 2 and a lower bound of 1.25 for the competitiveness. Finally, we consider resource augmentation and show that O(log α) speedup or extra processors is sufficient to achieve optimality, where α is the importance ratio. We also give a tradeoff result, showing that in fact a small amount of extra resources is sufficient for achieving close-to-optimal competitiveness. (C) 2004 Elsevier B.V. All rights reserved.
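To make the partial-value model concrete, here is a toy single-processor scheduler that, at each unit time step, runs the released and unexpired job of largest weight and credits value in proportion to the work done. It is an illustrative baseline under assumed job fields r, d, p, w, not the paper's algorithm GAP.

```python
def greedy_partial_value(jobs, horizon):
    """Toy single-processor schedule: at every unit time step, one unit of
    work goes to the released, unexpired job with the largest weight, and
    value accrues in proportion to the work done (partial values).
    jobs: list of dicts with keys r (release), d (deadline),
    p (processing time), w (weight per processed unit). Illustrative only."""
    done = {i: 0 for i in range(len(jobs))}
    value = 0.0
    for t in range(horizon):
        avail = [i for i, j in enumerate(jobs)
                 if j["r"] <= t < j["d"] and done[i] < j["p"]]
        if avail:
            i = max(avail, key=lambda i: jobs[i]["w"])
            done[i] += 1
            value += jobs[i]["w"]            # partial value for this unit
    return value
```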
We introduce and study the online pause and resume problem. In this problem, a player attempts to find the k lowest (alternatively, highest) prices in a sequence of fixed length T, which is revealed sequentially. At each time step, the player is presented with a price and decides whether to accept or reject it. The player incurs a switching cost whenever their decision changes in consecutive time steps, i.e., whenever they pause or resume purchasing. This online problem is motivated by the goal of carbon-aware load shifting, where a workload may be paused during periods of high carbon intensity and resumed during periods of low carbon intensity and incurs a cost when saving or restoring its state. It has strong connections to existing problems studied in the literature on online optimization, though it introduces unique technical challenges that prevent the direct application of existing algorithms. Extending prior work on threshold-based algorithms, we introduce double-threshold algorithms for both the minimization and maximization variants of this problem. We further show that the competitive ratios achieved by these algorithms are the best achievable by any deterministic online algorithm. Finally, we empirically validate our proposed algorithm through case studies on the application of carbon-aware load shifting using real carbon trace data and existing baseline algorithms.
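A stripped-down way to see the interplay of thresholds and switching costs is the single-threshold rule below for the minimization variant: buy whenever the price is at most a fixed threshold and pay a switching cost whenever the decision flips. This is only a sketch of the idea with assumed parameter names; the paper's double-threshold algorithms adapt the acceptance levels and are the ones shown to be optimal among deterministic algorithms.

```python
def single_threshold_purchase(prices, k, threshold, switch_cost):
    """Toy single-threshold rule for buying k units at minimum cost:
    purchase whenever the current price is at most `threshold` and units
    remain, paying `switch_cost` each time the buy/idle decision flips.
    Illustrative only; the paper's algorithms use two adaptive thresholds."""
    bought, cost, buying = 0, 0.0, False
    for price in prices:
        decide_buy = bought < k and price <= threshold
        if decide_buy != buying:             # pause or resume penalty
            cost += switch_cost
            buying = decide_buy
        if decide_buy:
            cost += price
            bought += 1
    return cost, bought
```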
We introduce and study a general version of the fractional online knapsack problem with multiple knapsacks, heterogeneous constraints on which items can be assigned to which knapsack, and rate-limiting constraints on the assignment of items to knapsacks. This problem generalizes variations of the knapsack problem and of the one-way trading problem that have previously been treated separately, and additionally finds application to the real-time control of electric vehicle (EV) charging. We introduce a new algorithm that achieves a competitive ratio within an additive factor of one of the best achievable competitive ratios for the general problem and matches or improves upon the best-known competitive ratio for special cases in the knapsack and one-way trading literatures. Moreover, our analysis provides a novel approach to online algorithm design based on an instance-dependent primal-dual analysis that connects the identification of worst-case instances to the design of algorithms. Finally, we illustrate the proposed algorithm via trace-based experiments of EV charging.
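For intuition about how threshold functions drive fractional admission in this family of problems, the sketch below implements the single-knapsack rule commonly used in the online knapsack and one-way trading literature, assuming value densities lie in a known range [L, U]. The paper's setting with multiple knapsacks, assignment constraints and rate limits is more general, so treat this purely as background.

```python
import math

def threshold_knapsack(items, L, U, capacity=1.0):
    """Threshold-based fractional online knapsack for a single knapsack.
    items: sequence of (value_density, size); densities assumed in [L, U].
    The admission threshold phi(z) = (U*e/L)**z * (L/e) is the standard
    one-way-trading-style function; this is background, not the paper's
    multi-knapsack algorithm."""
    def phi(z):
        return (U * math.e / L) ** z * (L / math.e)

    z, value = 0.0, 0.0
    for density, size in items:
        # accept the largest fraction whose marginal threshold stays <= density
        lo, hi = z, min(capacity, z + size)
        if phi(hi) <= density:
            lo = hi                          # the whole fraction clears the threshold
        else:
            while hi - lo > 1e-9:
                mid = (lo + hi) / 2
                if phi(mid) <= density:
                    lo = mid
                else:
                    hi = mid
        value += density * (lo - z)
        z = lo
    return value, z
```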
In an online decision problem, one makes a sequence of decisions without knowledge of the future. Each period, one pays a cost based on the decision and observed state. We give a simple approach for doing nearly as well as the best single decision, where the best is chosen with the benefit of hindsight. A natural idea is to follow the leader, i.e. each period choose the decision which has done best so far. We show that by slightly perturbing the totals and then choosing the best decision, the expected performance is nearly as good as the best decision in hindsight. Our approach, which is very much like Hannan's original game-theoretic approach from the 1950s, yields guarantees competitive with the more modern exponential weighting algorithms like Weighted Majority. More importantly, these follow-the-leader style algorithms extend naturally to a large class of structured online problems for which the exponential algorithms are inefficient. (c) 2004 Elsevier Inc. All rights reserved.
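The rule described above ("slightly perturb the totals and then choose the best decision") can be sketched directly. The snippet below is a minimal follow-the-perturbed-leader loop with exponential perturbations; the stream format and the parameter epsilon are chosen for illustration.

```python
import random

def follow_the_perturbed_leader(cost_stream, n_actions, epsilon=0.1):
    """Follow the perturbed leader: each period, subtract an independent
    exponential perturbation from every action's cumulative cost and play
    the action that then looks best. cost_stream yields one cost vector
    (length n_actions) per period."""
    totals = [0.0] * n_actions
    incurred = 0.0
    for costs in cost_stream:
        perturbed = [totals[i] - random.expovariate(epsilon)
                     for i in range(n_actions)]
        choice = min(range(n_actions), key=lambda i: perturbed[i])
        incurred += costs[choice]            # pay the chosen action's cost
        totals = [t + c for t, c in zip(totals, costs)]
    return incurred
```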
In the online version of the classic k-means clustering problem, the points of a dataset u_1, u_2, ... arrive one after another in an arbitrary order. When the algorithm sees a point, it should either add it to the set of centers, or let go of the point. Once added, a center cannot be removed. The goal is to end up with a set of roughly k centers, while competing in k-means objective value with the best set of k centers in hindsight. Online versions of k-means and other clustering problems have received significant attention in the literature. The key idea in many algorithms is that of adaptive sampling: when a new point arrives, it is added to the set of centers with a probability that depends on the distance to the centers chosen so far. Our contributions are as follows: 1. We give a modified adaptive sampling procedure that obtains a better approximation ratio (improving it from logarithmic to constant). 2. Our main result is to show how to perform adaptive sampling when data has outliers (≫ k points that are potentially arbitrarily far from the actual data, thus rendering distance-based sampling prone to picking the outliers). 3. We also discuss lower bounds for k-means clustering in an online setting.
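Taken at face value, the adaptive-sampling rule sketched in the abstract looks like the following: an arriving point becomes a center with probability growing with its squared distance to the nearest current center. The normalizer cost_guess stands in for whatever scale the algorithm uses and is an assumption; the paper's modified procedure and its outlier-robust variant differ in exactly these details.

```python
import random

def adaptive_sampling_centers(points, cost_guess):
    """Distance-based ("adaptive") sampling sketch for online k-means: an
    arriving point becomes a center with probability proportional to its
    squared distance to the nearest existing center, normalized by a guess
    of the optimal cost. Once added, a center is never removed."""
    centers = []
    for p in points:
        if not centers:
            centers.append(p)
            continue
        d2 = min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
        if random.random() < min(1.0, d2 / cost_guess):
            centers.append(p)
    return centers
```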
ISBN (print): 9781728112466
Virtual machine (VM) migration is a widely used technique in cloud computing systems to increase reliability. There are also many other reasons that a VM is migrated during its lifetime, such as reducing energy consumption, improving performance, maintenance, etc. During a live VM migration, the underlying VM continues being up until all or part of its data has been transmitted from source to destination. The remaining data are transmitted in an off-line manner by suspending the corresponding VM. The longer the off-line transmission time, the worse the performance of the respective VM, because during the off-line data transmission the VM service is down. Because a running VM's memory is subject to changes, already transmitted data pages may get dirtied and thus need re-transmission. Deciding when to suspend the VM is not a trivial task: suspending the VM early may result in transmitting a significant amount of data off-line, thus degrading the VM's performance, while waiting too long to suspend the VM may result in re-transmitting a huge amount of dirty data, leading to a waste of resources. In this paper, we tackle the joint problem of minimizing both the total VM migration time (reflecting the resources spent during a migration) and the VM downtime (reflecting the performance degradation). The aforementioned objective functions are weighted according to the needs of the underlying cloud provider/user. To tackle the problem, we propose an online deterministic algorithm with a strong competitive ratio, as well as a randomized online algorithm that achieves significantly better results than the deterministic algorithm.
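The trade-off described above can be illustrated with a toy pre-copy loop: dirty pages are re-sent while the VM keeps running, and the VM is suspended once the dirty set is small enough, after which the remainder is sent offline and counts as downtime. The stopping rule and all parameters below are illustrative assumptions, not the deterministic or randomized algorithms proposed in the paper.

```python
def precopy_migration(initial_pages, dirty_rate, bandwidth,
                      stop_threshold, max_rounds):
    """Toy pre-copy live-migration loop: while the VM runs, the currently
    dirty pages are re-sent each round; the VM is suspended once the dirty
    set drops below stop_threshold (or the round cap is hit), and what is
    left is sent offline and counted as downtime."""
    dirty = float(initial_pages)
    migration_time = 0.0
    for _ in range(max_rounds):
        if dirty <= stop_threshold:
            break
        round_time = dirty / bandwidth       # send the current dirty set
        migration_time += round_time
        dirty = dirty_rate * round_time      # pages dirtied meanwhile
    downtime = dirty / bandwidth             # offline transfer after suspend
    return migration_time + downtime, downtime
```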
ISBN (print): 3540388753
Our main result is an optimal online algorithm for preemptive scheduling on uniformly related machines with the objective to minimize makespan. The algorithm is deterministic, yet it is optimal even among all randomized algorithms. In addition, it is optimal for any fixed combination of speeds of the machines, and thus our results subsume all the previous work on various special cases. Together with a new lower bound it follows that the overall competitive ratio of this optimal algorithm is between 2.054 and e ≈ 2.718. We also give a complete analysis of the competitive ratio for three machines.
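As background for the benchmark in this competitive analysis, the offline preemptive optimum on uniformly related machines is classically the larger of the total-work bound and the ratios of the k largest jobs to the k fastest machines; the sketch below computes that value. This is standard prior work used as the yardstick, not the online algorithm of the paper.

```python
def optimal_preemptive_makespan(processing_times, speeds):
    """Classical offline optimum for preemptive scheduling on uniformly
    related machines: the maximum of the total-work bound and, for each
    k < m, the ratio of the k largest jobs to the k fastest machines."""
    p = sorted(processing_times, reverse=True)
    s = sorted(speeds, reverse=True)
    bounds = [sum(p) / sum(s)]
    for k in range(1, len(s)):
        bounds.append(sum(p[:k]) / sum(s[:k]))
    return max(bounds)

# Example: speeds 3, 2, 1 and jobs 9, 6, 3, 3 give an optimum of 3.5
print(optimal_preemptive_makespan([9, 6, 3, 3], [3, 2, 1]))
```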