In the sequential setting, a decades-old fundamental result in online algorithms states that if there is a c-competitive randomized online algorithm against an adaptive, offline adversary, then there is a c-competitive deterministic algorithm. The adaptive, offline adversary is the strongest of the adversaries usually considered, so the result says that if one must be competitive against such a strong adversary, randomization does not help. As a consequence, randomization against an adaptive, offline adversary has received little attention. We prove that in a distributed setting this result does not necessarily hold, so randomization against an adaptive, offline adversary becomes interesting again. © 2020 Elsevier B.V. All rights reserved.
ISBN:
(Print) 9781450380539
Double Compare-And-Swap (DCAS) is a tremendously useful synchronization primitive, which is also notoriously difficult to implement efficiently from the objects provided by hardware. We present a randomized implementation of DCAS with O(log n) expected amortized step complexity against the oblivious adversary, where n is the number of processes in the system. This is the only algorithm to date that achieves sub-linear step complexity. We achieve this by first implementing two novel building blocks: a mechanism that allows processes to repeatedly agree on a random value among multiple proposed ones, and a restricted bipartite version of DCAS.
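As a reference for the semantics being implemented, DCAS over a shared memory can be specified sequentially as follows (a toy Python model; the paper's contribution is realizing this atomically and efficiently from hardware primitives, which this sketch does not attempt):

```python
# Sequential reference semantics of DCAS: shared memory is modeled as a plain
# Python list, so the "atomicity" here is trivial. The operation succeeds and
# writes both locations only if BOTH expected values match.

def dcas(mem, i, exp_i, new_i, j, exp_j, new_j):
    """If mem[i] == exp_i and mem[j] == exp_j, write new_i and new_j
    (as one indivisible step in this model) and return True; else return
    False and write nothing."""
    if mem[i] == exp_i and mem[j] == exp_j:
        mem[i] = new_i
        mem[j] = new_j
        return True
    return False

mem = [0, 0, 0]
assert dcas(mem, 0, 0, 1, 2, 0, 7)      # both expectations hold -> succeeds
assert mem == [1, 0, 7]
assert not dcas(mem, 0, 0, 5, 2, 7, 9)  # mem[0] != 0 -> fails, nothing written
assert mem == [1, 0, 7]
```

The difficulty the abstract refers to is that no mainstream hardware offers this two-location conditional write as a single instruction, so it must be simulated from single-word primitives.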
ISBN:
(Print) 9781665435772
Randomized algorithms often outperform their deterministic counterparts in terms of simplicity and efficiency. In this paper, we consider Randomized Incremental Constructions (RICs), which are very popular, in particular in combinatorial optimization and computational geometry. Our contribution is Collaborative Parallel RIC (CPRIC), a novel approach to parallelizing RIC for modern parallel architectures such as vector processors and GPUs. We show that our approach, based on a work-stealing mechanism, avoids the control-flow divergence of parallel threads, thus improving the performance of the parallel implementation. Our extensive experiments on CPU and GPU demonstrate the advantages of our CPRIC approach, which achieves an average speedup between 4x and 5x over the naively parallelized RIC.
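The efficiency intuition behind sequential RIC can be illustrated with a minimal, hypothetical experiment: insert items in random order and count how often an incremental "structure" (here just a running minimum) must change. Backward analysis predicts an expected H_n = 1 + 1/2 + ... + 1/n = O(log n) changes, versus n changes for an adversarial (decreasing) insertion order:

```python
# Minimal RIC illustration: the expected number of structural updates under a
# random insertion order is the harmonic number H_n, because the i-th inserted
# item changes the running minimum iff it is the smallest of the first i items,
# which happens with probability 1/i.
import random

def structural_changes(items):
    changes, best = 0, float("inf")
    for x in items:
        if x < best:          # the incremental structure must be rebuilt/updated
            best = x
            changes += 1
    return changes

random.seed(0)
n, trials = 500, 1000
avg = sum(structural_changes(random.sample(range(n), n))
          for _ in range(trials)) / trials
h_n = sum(1 / k for k in range(1, n + 1))   # H_500 is about 6.79
print(round(avg, 2))    # empirical mean, close to H_n
```

CPRIC's contribution, per the abstract, is scheduling such insertions across parallel threads without control-flow divergence; this sketch only shows why random order keeps the per-insertion work low.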
ISBN:
(Print) 9781450392648
It has been known since the early 1980s that Byzantine Agreement in the full-information, asynchronous model is impossible to solve deterministically against even one crash fault [FLP 1985], but that it can be solved with probability 1 [Ben-Or 1983], even against an adversary that controls the scheduling of all messages and corrupts up to f < n/3 players [Bracha 1987]. The main downside of [Ben-Or 1983, Bracha 1987] is that they terminate with 2^{Theta(n)} latency in expectation whenever f = Theta(n). King and Saia [KS 2016, KS 2018] developed a polynomial protocol (polynomial latency, polynomial local computation) that is resilient to f < (1.14 x 10^{-9})n Byzantine faults. The new idea in their protocol is to detect, and blacklist, coalitions of likely-bad players by analyzing the deviations of random variables generated by those players over many rounds. In this work we design a simple collective coin-flipping protocol such that if any coalition of faulty players repeatedly does not follow the protocol, then it will eventually be detected by one of two simple statistical tests. Using this coin-flipping protocol, we solve Byzantine Agreement in polynomial latency, even in the presence of up to f < n/4 Byzantine faults. This comes close to the f < n/3 upper bound on the maximum number of tolerable faults [LSP 1982, BT 1985, FLM 1986].
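The detection idea can be illustrated with a toy simulation (not the paper's actual tests or parameters): an honest player's sum of fair ±1 coins stays within O(√T) of zero with high probability, while a player who biases its coins drifts linearly in T, so a simple threshold test eventually flags it:

```python
# Toy statistical test for biased coin contributions. "honest" flips fair
# coins; "faulty" flips coins with bias 0.6. After T rounds, an honest sum is
# ~N(0, sqrt(T)) while the faulty sum is about 0.2*T, far past the threshold.
import math, random

random.seed(1)
T = 20000                      # rounds observed
threshold = 4 * math.sqrt(T)   # honest sums stay below this w.h.p.

def coin(bias):                # returns +1 with the given probability, else -1
    return 1 if random.random() < bias else -1

sums = {"honest": 0, "faulty": 0}
for _ in range(T):
    sums["honest"] += coin(0.5)
    sums["faulty"] += coin(0.6)   # expected drift: 0.2 * T = 4000

flagged = {p for p, s in sums.items() if abs(s) > threshold}
print(flagged)   # only the biased player exceeds the deviation threshold
```

The real protocol must of course work against an adaptive adversary that sees the tests and randomizes its deviations, which is what the two statistical tests in the paper are designed to handle.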
ISBN:
(Print) 9789897585692
In this paper, we study the Online Non-metric Facility Location with Service-Quality Costs problem (Non-metric OFL-SQC), a generalization of the well-known Online Non-metric Facility Location problem (Non-metric OFL) in which facilities have, in addition to opening costs, service-quality costs. Service-quality costs are determined by the quality of the service provided by each facility, such that the higher the quality, the lower the service-quality cost. They are motivated by companies wishing to incorporate the quality of third-party services into their optimization decisions. Clients are scattered around facilities and arrive in groups over time; each arriving group is composed of a number of clients at different locations. Non-metric OFL-SQC asks to serve each client in the group by connecting it to an open facility. Opening a facility incurs an opening cost, and connecting a client to a facility incurs a connection cost, which is the distance between the client and the facility. Moreover, for each group, the algorithm pays the sum of the service-quality costs associated with the facilities serving the clients of the group. The aim is to serve each arriving group while minimizing the total of facility opening costs, connection costs, and service-quality costs. We develop the first online algorithm for Non-metric OFL-SQC and analyze it using the standard notion of competitive analysis, in which the online algorithm's worst-case performance is measured against the optimal offline solution that can be constructed given the entire input sequence in advance.
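On hypothetical toy data, the cost paid for one arriving group decomposes exactly as described: opening costs for newly opened facilities, per-client connection costs, and one service-quality cost per facility used by the group. All facility names and numbers below are made up for illustration:

```python
# Worked cost-model example for one arriving group in Non-metric OFL-SQC.
opening = {"f1": 5.0, "f2": 3.0}   # one-time opening cost per facility
quality = {"f1": 0.5, "f2": 2.0}   # per-group service-quality cost
                                   # (higher quality -> lower cost)
dist = {("c1", "f1"): 1.0, ("c1", "f2"): 4.0,
        ("c2", "f1"): 2.5, ("c2", "f2"): 1.0}

def group_cost(assignment, newly_opened):
    """assignment: client -> facility, for the clients of one group."""
    open_cost = sum(opening[f] for f in newly_opened)
    connect_cost = sum(dist[c, f] for c, f in assignment.items())
    # the quality cost is paid once per facility serving someone in the group
    quality_cost = sum(quality[f] for f in set(assignment.values()))
    return open_cost + connect_cost + quality_cost

cost = group_cost({"c1": "f1", "c2": "f2"}, newly_opened={"f1", "f2"})
print(cost)   # (5 + 3) open + (1 + 1) connect + (0.5 + 2) quality = 12.5
```

Note the tension the algorithm must manage: f1 is cheap to use per group (low quality cost) but expensive to open, so the right choice depends on how many future groups it will serve.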
ISBN:
(Digital) 9781728186719
ISBN:
(Print) 9781728186719
A teacher in a school plays a significant role in the classroom while teaching students. Similarly, learning using privileged information (LUPI) gives extra information, generated by a teacher, to 'teach' the learning algorithm during training. This paper proposes the minimum variance embedded random vector functional link network with privileged information (MVRVFL+). The proposed MVRVFL+ minimizes the intra-class variance of the training data and uses the privileged-information paradigm, which provides additional knowledge during the training of the model. The proposed MVRVFL+ classification model is evaluated on 43 benchmark UCI datasets. In the experimental analysis, the proposed MVRVFL+ achieved the best average accuracy and emerged as the classifier with the lowest average rank among the baseline models.
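For orientation, a minimal plain RVFL network, the base model that MVRVFL+ extends, can be sketched as follows; the variance-minimization term and the privileged-information term of MVRVFL+ are omitted, and all sizes and data are illustrative:

```python
# Minimal random vector functional link (RVFL) network: the hidden weights are
# random and never trained; only the output layer is solved, in closed form,
# by ridge regression over [raw inputs (direct links) | random hidden features].
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(X, y, n_hidden=20, reg=1e-4):
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)               # random, untrained hidden features
    D = np.hstack([X, H])                # direct links + hidden features
    # closed-form ridge solution for the only trained weights (output layer)
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ y)
    return W, b, beta

def rvfl_predict(model, X):
    W, b, beta = model
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., -1., 1., 1.])         # label = sign of the first feature
model = rvfl_fit(X, y)
print(np.sign(rvfl_predict(model, X)))   # recovers the training labels
```

MVRVFL+ modifies the objective solved for `beta`: it adds an intra-class variance penalty and, during training only, a correction term computed from the privileged features.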
ISBN:
(Print) 9781450392648
We study the power of multiple choices in online stochastic matching. Despite a long line of research, existing algorithms still only consider two choices of offline neighbors for each online vertex, because of the technical challenge in analyzing multiple choices. This paper introduces two approaches for designing and analyzing algorithms that use multiple choices. For unweighted and vertex-weighted matching, we adapt the online correlated selection (OCS) technique to the stochastic setting, and improve the competitive ratios to 0.716, from 0.711 and 0.7 respectively. For edge-weighted matching with free disposal, we propose the Top Half Sampling algorithm. We directly characterize the progress of the whole matching, instead of individual vertices, through a differential inequality. This improves the competitive ratio to 0.706, breaking the 1 - 1/e barrier in this setting for the first time in the literature. Finally, for the harder edge-weighted problem without free disposal, we prove that no algorithm can be 0.703-competitive, separating this setting from the aforementioned three.
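The general "more choices help" phenomenon behind this line of work can be seen in a toy greedy simulation (not the paper's algorithms, model, or ratios; instances and parameters below are made up):

```python
# Toy comparison on random online bipartite instances: an online vertex that
# commits to one pre-fixed neighbor vs. one allowed to take any of its d
# neighbors that is still unmatched. The latter builds larger matchings.
import random

random.seed(2)

def simulate(n, d, multiple_choice):
    matched, size = set(), 0
    for _ in range(n):                     # online vertices arrive one by one
        nbrs = random.sample(range(n), d)  # d random offline neighbors
        if multiple_choice:
            free = [u for u in nbrs if u not in matched]
            choice = free[0] if free else None
        else:                              # forced to its first listed neighbor
            choice = nbrs[0] if nbrs[0] not in matched else None
        if choice is not None:
            matched.add(choice)
            size += 1
    return size

trials = 200
one = sum(simulate(100, 3, False) for _ in range(trials)) / trials
multi = sum(simulate(100, 3, True) for _ in range(trials)) / trials
print(one < multi)   # multiple choices consistently yield a larger matching
```

The paper's technical difficulty is not this phenomenon itself but analyzing it: correlations between multiple choices are what OCS and the differential-inequality argument are built to control.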
Authors:
Zuo, Qian; Wei, Yimin; Xiang, Hua
Wuhan Univ, Sch Math & Stat, Wuhan 430072, Peoples R China
Wuhan Univ, Hubei Key Lab Computat Sci, Wuhan 430072, Peoples R China
Peking Univ, Sch Comp Sci, Beijing 100871, Peoples R China
Fudan Univ, Sch Math Sci, Shanghai 200433, Peoples R China
Fudan Univ, Shanghai Key Lab Contemporary Appl Math, Shanghai 200433, Peoples R China
Compared with the ordinary least squares method, the total least squares (TLS) problem takes into account not only the observation errors but also the errors in the measurement matrix, which is more realistic in practical applications. We are motivated by recent advances in quantum-inspired computing, which have shown promise for solving a variety of optimization problems. For the large-scale discrete ill-posed problem Ax ≈ b, our proposed method leverages quantum-inspired techniques to perform a truncated singular value decomposition (SVD) of the measurement matrix. This allows us to efficiently approximate the truncated TLS (TTLS) solution. We analyze the accuracy of the quantum-inspired truncated total least squares algorithm both theoretically and numerically. In our theoretical analysis, we compare the approximation accuracy of the proposed quantum-inspired method with that of the TTLS and RTTLS methods. Our numerical experiments demonstrate the efficiency of the proposed method in terms of both approximation accuracy and computational cost, and show that it can provide accurate solutions for large-scale ill-posed problems.
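For context, the classical SVD route to a TLS solution, which truncated TLS and its quantum-inspired approximation build on, can be sketched as follows (a textbook construction on a tiny consistent system, not the paper's algorithm):

```python
# Classical TLS via the SVD of the augmented matrix [A | b]: the solution is
# read off the right singular vector belonging to the smallest singular value.
# Truncated TLS replaces the full SVD with a rank-k truncation for ill-posed
# problems; the quantum-inspired method approximates that truncated SVD.
import numpy as np

def tls(A, b):
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.hstack([A, b.reshape(-1, 1)]))
    v = Vt[-1]              # right singular vector of the smallest sigma
    return -v[:n] / v[n]    # requires v[n] != 0 (the generic case)

A = np.array([[1., 0.], [0., 1.], [1., 1.]])
x_true = np.array([2., -1.])
b = A @ x_true              # consistent system: TLS recovers x_true exactly
print(np.allclose(tls(A, b), x_true))   # True
```

On a consistent system the smallest singular value of [A | b] is zero and TLS coincides with the exact solution; the interesting (and ill-posed) cases are perturbed systems, where truncation level k governs the regularization.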
Count sketch [1] is one of the popular sketching algorithms, widely used for frequency estimation in data streams and for pairwise inner products of real-valued vectors [2]. Recently, Shi et al. [3] extended the count sketch (CS) and suggested a higher-order count sketch (HCS) algorithm that compresses input tensors (or vectors) into succinct tensors which closely approximate the values of queried features of the input. The major advantage of HCS is that it is more space-efficient than count sketch. However, their paper did not address estimating the pairwise inner product from the sketch. This note demonstrates that HCS also closely approximates the pairwise inner product. We show that their sketch gives an unbiased estimate of the pairwise inner product, and we give a concentration analysis of the estimate. © 2023 Elsevier B.V. All rights reserved.
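The estimator in question can be sketched in the first-order (vector) case; the sketch width, vectors, and hash model below are illustrative only:

```python
# First-order count sketch and the inner-product estimator: with a shared
# bucket hash h and sign hash s, sketch(x)[j] = sum over {i : h(i)=j} of
# s(i)*x(i), and <sketch(x), sketch(y)> is an unbiased estimate of <x, y>.
# We check unbiasedness empirically by averaging over independent hashes.
import random

def count_sketch(x, h, s, width):
    sk = [0.0] * width
    for i, xi in enumerate(x):
        sk[h[i]] += s[i] * xi
    return sk

random.seed(3)
x = [1.0, 2.0, 0.0, -1.0, 3.0]
y = [2.0, 0.5, 1.0, 4.0, -1.0]
true_ip = sum(a * b for a, b in zip(x, y))   # 2 + 1 + 0 - 4 - 3 = -4
width, trials = 4, 20000

acc = 0.0
for _ in range(trials):
    h = [random.randrange(width) for _ in x]   # shared bucket hash
    s = [random.choice((-1, 1)) for _ in x]    # shared sign hash
    acc += sum(a * b for a, b in zip(count_sketch(x, h, s, width),
                                     count_sketch(y, h, s, width)))
print(round(acc / trials, 1))   # averages near -4.0, the true inner product
```

Collision terms carry independent random signs, so they cancel in expectation; the note's contribution is the analogous unbiasedness claim and a concentration bound for the higher-order (tensorized) sketch.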
In many combinatorial optimization problems we want a particular set of k out of n items with certain properties (or constraints), and these properties may involve relations among the k items. In the worst case, a deterministic algorithm must scan the remaining n-k items to verify the k items. If we pick a set of k items randomly and verify the properties, it will take about (n/k)^k verifications, which can be a very large number for some values of k and n. In this article we introduce a significantly faster randomized strategy that, with very high probability, picks such a set of k items by amplifying the probability of obtaining a target set, and we show how this probability-boosting technique can be applied to solve three different combinatorial optimization problems efficiently. In all three applications, algorithms that use the probability-boosting technique outperform their deterministic counterparts.
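The amplification arithmetic behind such a strategy is standard: t independent attempts, each succeeding with probability p, all fail with probability (1 - p)^t, so t = ceil(ln(1/delta) / p) attempts drive the failure probability below delta. A sketch with hypothetical parameter values:

```python
# Probability amplification by independent repetition: since
# (1 - p)^t <= exp(-p*t), choosing t = ceil(ln(1/delta) / p) guarantees
# overall failure probability at most delta.
import math

def attempts_needed(p, delta):
    """Repetitions needed so that all-attempts-fail probability <= delta."""
    return math.ceil(math.log(1.0 / delta) / p)

p = 1e-3        # chance that one random pick satisfies the constraints
delta = 1e-6    # allowed overall failure probability
t = attempts_needed(p, delta)
print(t, (1 - p) ** t <= delta)   # the bound indeed holds for this t
```

The article's point is that clever sampling can make the per-attempt success probability p far larger than the naive (k/n)^k, so the number of repetitions t stays small.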