ISBN (print): 9781424414970
This paper deals with probabilistic methods and randomized algorithms for robust control design. The main contribution is to introduce a new technique, termed the "pack-based strategy". When combined with recent results available in the literature, this technique leads to significant improvements in terms of sample size reduction. One of the main results is to show that, for fixed confidence delta, the required sample size grows as 1/epsilon, where epsilon denotes the guaranteed accuracy. Using this technique for non-convex optimization problems involving Boolean expressions consisting of polynomials, we prove that the number of required samples grows with the accuracy parameter epsilon as (1/epsilon) ln(1/epsilon).
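To make the scaling claim concrete, the following sketch contrasts the classical additive Chernoff sample-size bound, which grows as 1/epsilon^2, with a hypothetical bound of the 1/epsilon type; the constant in `one_over_eps_scaling` is a placeholder for illustration only, not a bound from the paper.

```python
import math

def chernoff_sample_size(eps: float, delta: float) -> int:
    """Classical additive Chernoff/Hoeffding bound: with
    N >= ln(2/delta) / (2*eps**2) i.i.d. samples, the empirical violation
    probability is within eps of the true one with confidence 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def one_over_eps_scaling(eps: float, delta: float, c: float = 2.0) -> int:
    """Hypothetical bound of the form (c/eps) * ln(1/delta), shown only to
    contrast 1/eps growth with the 1/eps^2 growth above (c is made up)."""
    return math.ceil((c / eps) * math.log(1.0 / delta))
```

For epsilon = 0.01 and delta = 0.05, the Chernoff bound already requires 18445 samples, while a 1/epsilon-type bound stays in the hundreds.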
ISBN (print): 9783540727910
We study the classical multi-item capacitated lot-sizing problem with hard capacities. There are N items, each of which has a specified sequence of demands over a finite planning horizon of T discrete periods; the demands are known in advance but can vary from period to period. All demands must be satisfied on time. Each order incurs a time-dependent fixed ordering cost regardless of the combination of items or the number of units ordered, but the total number of units ordered cannot exceed a given capacity C. On the other hand, carrying inventory from period to period incurs holding costs. The goal is to find a feasible solution with minimum overall ordering and holding costs. We show that the problem is strongly NP-hard, and then propose a novel facility-location-type LP relaxation that is based on an exponentially large subset of the well-known flow-cover inequalities; the proposed LP can be solved to optimality in polynomial time via an efficient separation procedure for this subset of inequalities. Moreover, the optimal solution of the LP can be rounded to a feasible integer solution whose cost is at most twice the optimal cost; this yields a 2-approximation algorithm, the first constant-factor approximation algorithm for the problem. We also describe an interesting on-the-fly variant of the algorithm that does not require solving the LP a priori with all the flow-cover inequalities. As a by-product we obtain the first theoretical proof regarding the strength of flow-cover inequalities in capacitated inventory models. We believe that some of the novel algorithmic ideas proposed in this paper have promising potential for constructing strong LP relaxations and LP-based approximation algorithms for other inventory models, and for the capacitated facility location problem.
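A minimal sketch of the cost model just described (variable names are assumed; the paper's actual contribution, the flow-cover LP and its rounding, is not reproduced here): given per-period orders for each item, check the hard capacity and on-time-demand constraints and total up fixed ordering and holding costs.

```python
def lot_sizing_cost(demand, orders, K, h, C):
    """demand[i][t], orders[i][t]: units of item i demanded/ordered in period t.
    K[t]: fixed ordering cost, charged if anything at all is ordered in t.
    h[i][t]: per-unit cost of holding item i at the end of period t.
    C: hard capacity on total units ordered per period.
    Returns the total cost, or raises ValueError if the plan is infeasible."""
    N, T = len(demand), len(demand[0])
    cost = 0.0
    inv = [0.0] * N  # inventory carried into the current period
    for t in range(T):
        total = sum(orders[i][t] for i in range(N))
        if total > C:
            raise ValueError(f"capacity exceeded in period {t}")
        if total > 0:
            cost += K[t]  # joint fixed cost, independent of item mix
        for i in range(N):
            inv[i] += orders[i][t] - demand[i][t]
            if inv[i] < 0:
                raise ValueError(f"demand for item {i} unmet in period {t}")
            cost += h[i][t] * inv[i]
    return cost
```

For example, two items over three periods with demands [[1,1,0],[0,2,0]], a single joint order of 4 units in period 0 (capacity C = 4), fixed cost 5, and unit holding costs gives total cost 8.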
ISBN (print): 9783540728689
The GENERALIZED MAXIMUM DISPERSION problem asks for a partition of a given graph into p vertex-disjoint sets, each of them having at most k vertices. The goal is to maximize the total edge-weight of the induced subgraphs. We present the first LP-based approximation algorithm.
ISBN (print): 9781424414970
In this paper, we study two general semi-infinite programming problems by means of statistical learning theory. The sample size results obtained with this approach are generally considered to be very conservative by the control community. The main contribution of this paper is to demonstrate that this is not necessarily the case. Using as a starting point one-sided results from statistical learning theory, we obtain bounds on the number of required samples that are manageable for "reasonable" values of confidence delta and accuracy epsilon. In particular, we provide sample size bounds growing as (1/epsilon) ln(1/epsilon) instead of the usual (1/epsilon^2) ln(1/epsilon^2) dependence.
ISBN (print): 9781595934802
In this paper, we investigate a special distribution, called the eigen-distribution, on assignments for the game tree T(2)(k) with random properties. There are two cases: the assignments to leaves are either independently distributed (ID) or correlated distributed (CD). In the ID setting, we prove that the distributional probability rho belongs to [(sqrt(7)-1)/3, (sqrt(5)-1)/2], and that rho is a strictly increasing function of the number of rounds k in [1, infinity). In the CD setting, we propose a reverse assigning technique (RAT) to form the 1-set and 0-set, and then show that the E(1)-distribution (namely, the particular distribution on assignments of the 1-set under which the complexity of every deterministic algorithm is the same) is the unique eigen-distribution.
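The ID setting can be illustrated with a small Monte Carlo experiment (an illustrative simplification, not the paper's analysis: the tree shape, root operator, and left-to-right depth-first evaluation order are assumptions): evaluate a complete binary AND-OR tree with i.i.d. Bernoulli(p) leaves and count how many leaves a short-circuiting depth-first algorithm probes.

```python
import random

def eval_and_or(depth, op, p, rng):
    """Evaluate a complete binary AND-OR tree of the given depth with
    left-to-right short-circuiting; leaves are i.i.d. Bernoulli(p).
    Returns (value, number_of_leaves_probed)."""
    if depth == 0:
        return (1 if rng.random() < p else 0), 1
    child_op = "or" if op == "and" else "and"
    v1, c1 = eval_and_or(depth - 1, child_op, p, rng)
    if (op == "and" and v1 == 0) or (op == "or" and v1 == 1):
        return v1, c1  # short-circuit: the second subtree is never probed
    v2, c2 = eval_and_or(depth - 1, child_op, p, rng)
    return (v1 and v2, c1 + c2) if op == "and" else (v1 or v2, c1 + c2)

def expected_cost(depth, p, trials=20000, seed=0):
    """Monte Carlo estimate of the expected number of leaves probed."""
    rng = random.Random(seed)
    return sum(eval_and_or(depth, "and", p, rng)[1] for _ in range(trials)) / trials
```

For depth 1 (an AND of two leaves) with p = 0.5, the second leaf is probed only when the first is 1, so the expected cost is 1.5.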
Restart strategies are commonly used for minimizing the computational cost of randomized algorithms, but require prior knowledge of the run-time distribution in order to be effective. We propose a portfolio of two strategies: one fixed, with a provable bound on performance, the other based on a model of the run-time distribution, updated as the two strategies are run on a sequence of problems. Computational resources are allocated probabilistically to the two strategies, based on their performance, using a well-known K-armed bandit problem solver. We present bounds on the performance of the resulting technique, and experiments with a satisfiability problem solver, showing rapid convergence to a near-optimal execution time.
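The fixed strategy in such a portfolio is typically the universal restart sequence of Luby et al.; the sketch below implements that sequence and a restart loop around a Las Vegas solver. The model-based strategy and the bandit allocator are omitted, and `solve(budget)` is an assumed interface that returns a solution or None when the budget is exhausted.

```python
def luby(i: int) -> int:
    """i-th term (1-indexed) of the Luby et al. universal restart sequence
    1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, ..."""
    k = 1
    while (1 << k) - 1 < i:
        k += 1
    if (1 << k) - 1 == i:
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)

def run_with_restarts(solve, base=1):
    """Run a Las Vegas solver under cutoffs base*luby(1), base*luby(2), ...
    restarting (with fresh randomness) whenever the cutoff is exceeded."""
    i = 1
    while True:
        result = solve(base * luby(i))
        if result is not None:
            return result
        i += 1
```

The Luby sequence is optimal up to a logarithmic factor when the run-time distribution is unknown, which is exactly why it serves as the safe half of the portfolio.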
ISBN (print): 3540275800
We study a particular aspect of the balanced allocation paradigm (also known as the "two-choices paradigm"): constant-sized bins, packed as tightly as possible. Let d >= 1 be fixed, and assume there are m bins of capacity d each. To each of n <= dm balls two possible bins are assigned at random. How close can dm/n = 1 + epsilon be to 1 so that with high probability each ball can be put into one of the two bins assigned to it without any bin overflowing? We show that epsilon > (2/e)^(d-1) is sufficient. If a new ball arrives with two new randomly assigned bins, we wish to rearrange some of the balls already present in order to accommodate the new ball. We show that on average it takes constant time to rearrange the balls to achieve this, for epsilon > beta^d, for some constant beta < 1. An alternative way to describe the problem is in data-structure language. Generalizing cuckoo hashing [R. Pagh, F.F. Rodler, Cuckoo hashing, J. Algorithms 51 (2004) 122-144], we consider a hash table with m positions, each representing a bucket of capacity d >= 1. Keys are assigned to buckets by two fully random hash functions. How many keys can be placed in these bins, if key x may go to bin h_1(x) or to bin h_2(x)? We obtain an implementation of a dictionary that accommodates n keys in m = (1 + epsilon)n/d buckets of size d = O(log(1/epsilon)), so that key x resides in bucket h_1(x) or h_2(x). For a lookup operation, only two hash functions have to be evaluated and two segments of d contiguous memory cells have to be inspected. If d >= 1 + 3.26 · ln(1/epsilon), a static arrangement exists with high probability. If d >= 16 · ln(1/epsilon), a dynamic version of the dictionary exists so that the expected time for inserting a new key is (log(1/epsilon))^(O(log log(1/epsilon))). (C) 2007 Elsevier B. V. All rights reserved.
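The dictionary just described can be prototyped in a few lines (a toy sketch only: the bounded random-walk insertion below is a simplification of the paper's rearrangement scheme, and the tuple-hash construction merely stands in for fully random hash functions).

```python
import random

class TwoChoiceTable:
    """m buckets of capacity d; key x may reside only in bucket h1(x) or h2(x)."""
    def __init__(self, m, d, seed=0):
        self.m, self.d = m, d
        self.buckets = [[] for _ in range(m)]
        self.rng = random.Random(seed)
        self.s1 = self.rng.randrange(2 ** 31)  # salts standing in for
        self.s2 = self.rng.randrange(2 ** 31)  # two random hash functions

    def _h(self, x, s):
        return hash((x, s)) % self.m

    def lookup(self, x):
        """Probe exactly the two buckets the key may occupy."""
        return (x in self.buckets[self._h(x, self.s1)]
                or x in self.buckets[self._h(x, self.s2)])

    def insert(self, x, max_steps=500):
        """Place x, evicting and relocating residents via a bounded random
        walk when both of its buckets are full. Duplicates are not handled."""
        for _ in range(max_steps):
            b1, b2 = self._h(x, self.s1), self._h(x, self.s2)
            for b in (b1, b2):
                if len(self.buckets[b]) < self.d:
                    self.buckets[b].append(x)
                    return True
            b = self.rng.choice((b1, b2))      # evict a random resident
            j = self.rng.randrange(self.d)     # and try to re-place it
            x, self.buckets[b][j] = self.buckets[b][j], x
        return False  # give up; a real implementation would rehash
```

With m = 64 buckets of capacity d = 4 and a load well below dm, insertions and lookups behave as expected; a production version would add rehashing on failure and duplicate-key handling.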
Quantum algorithms and complexity have recently been studied not only for discrete, but also for some numerical problems. Most attention has been paid so far to the integration and approximation problems, for which a speed-up is shown in many important cases by quantum computers with respect to deterministic and randomized algorithms on a classical computer. In this paper, we deal with the randomized and quantum complexity of initial-value problems. For this nonlinear problem, we show that both randomized and quantum algorithms yield a speed-up over deterministic algorithms. Upper bounds on the complexity in the randomized and quantum settings are shown by constructing algorithms with a suitable cost, where the construction is based on integral information. Lower bounds result from the respective bounds for the integration problem. (C) 2004 Elsevier Inc. All rights reserved.
Consider the problem of locating servers in a network for the purpose of storing data, performing an application, etc., so that at least one server will be available to clients even if up to k component failures occur throughout the network. Letting G = (V, E) be the graph with vertex set V and edge set E representing the topology of the network, and letting L subset of V be a set of potential locations for the servers, a fundamental problem is to determine a minimum-size set S subset of L such that the network remains connected to S even if up to k component failures occur throughout the network. We say that such a set S is k-fault-tolerant. In this paper we present an algebraic characterization of k-fault-tolerant sets in terms of affine embeddings of G in k-dimensional Euclidean space. Employing this characterization, we present a polynomial-time Monte Carlo algorithm for computing a minimum-size k-fault-tolerant subset S of L. In fact, we solve the following more general problem for directed networks: given a digraph G = (V, E) (an undirected graph is equivalent to a symmetric digraph) and a subset L subset of V, we find a k-fault-tolerant subset S of L having minimum cost, where a unary integer cost c(v) is associated with locating a server at vertex v in V.
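For intuition, k-fault-tolerance can be checked directly by brute force (vertex failures only, exponential in k; this is exactly the naive baseline that the paper's polynomial-time Monte Carlo algorithm avoids).

```python
from itertools import combinations

def connected_to(graph, alive, S):
    """BFS outward from surviving servers; True iff every surviving
    vertex can reach some server in S through surviving vertices."""
    frontier = [v for v in S if v in alive]
    seen = set(frontier)
    while frontier:
        v = frontier.pop()
        for w in graph.get(v, ()):
            if w in alive and w not in seen:
                seen.add(w)
                frontier.append(w)
    return seen == alive

def is_k_fault_tolerant(graph, S, k):
    """graph: adjacency dict of an undirected graph. Check all failure
    sets of up to k vertices (servers may fail too)."""
    V = set(graph)
    for f in range(k + 1):
        for failed in combinations(V, f):
            alive = V - set(failed)
            if alive and not connected_to(graph, alive, S):
                return False
    return True
```

On a 4-cycle with servers S = {0, 2}, the set is 1-fault-tolerant but not 2-fault-tolerant: failing both servers leaves the two remaining vertices with no server to connect to.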
Programs written in C and C++ are susceptible to memory errors, including buffer overflows and dangling pointers. These errors, which can lead to crashes, erroneous execution, and security vulnerabilities, are notoriously costly to repair. Tracking down their location in the source code is difficult, even when the full memory state of the program is available. Once the errors are finally found, fixing them remains challenging: even for critical security-sensitive bugs, the average time between initial reports and the issuance of a patch is nearly one month. We present Exterminator, a system that automatically corrects heap-based memory errors without programmer intervention. Exterminator exploits randomization to pinpoint errors with high precision. From this information, Exterminator derives runtime patches that fix these errors both in current and subsequent executions. In addition, Exterminator enables collaborative bug correction by merging patches generated by multiple users. We present analytical and empirical results that demonstrate Exterminator's effectiveness at detecting and correcting both injected and real faults.
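One ingredient of heap-error detection can be illustrated compactly: canary bytes in the padding after each allocation reveal both the presence and the extent of an overflow. This is a toy model of the general idea only; Exterminator's actual randomized heaps, probabilistic error isolation, and patch generation are far more involved.

```python
CANARY = 0xAA  # pad bytes are pre-filled with this value

class CanaryHeap:
    """Tiny bump allocator over a byte array: each object is followed by
    `pad` canary bytes, so writing past the object's end clobbers canaries
    and check() reports how far the overflow ran. (A data byte that happens
    to equal CANARY would be missed; real systems randomize to avoid this.)"""
    def __init__(self, size=1024, pad=8):
        self.mem = bytearray([CANARY] * size)
        self.pad = pad
        self.next = 0
        self.objects = {}  # object id -> (offset, length)

    def malloc(self, oid, length):
        off = self.next
        self.next += length + self.pad  # pad bytes stay as canaries
        self.objects[oid] = (off, length)
        return off

    def write(self, off, data):
        # a buggy program may write past its object's end here
        self.mem[off:off + len(data)] = data

    def check(self, oid):
        """Number of canary bytes after the object that were clobbered."""
        off, length = self.objects[oid]
        pad = self.mem[off + length: off + length + self.pad]
        return sum(1 for b in pad if b != CANARY)
```

Writing 5 bytes into a 4-byte object clobbers exactly one canary, localizing the overflow to one byte past the object's end.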