A classical branch of graph algorithms is graph transversals, where one seeks a minimum-weight subset of nodes in a node-weighted graph G which intersects all copies of subgraphs F from a fixed family Ƒ. Many such gra...
We give fault-tolerant algorithms for establishing synchrony in distributed systems in which each of the n nodes has its own clock. Our algorithms operate in a very strong fault model: we require self-stabilisation, i...
ISBN:
(Print) 9783939897835
We refine the bound on the packing number, originally shown by Haussler, for shallow geometric set systems. Specifically, let V be a finite set system defined over an n-point set X; we view V as a set of indicator vectors over the n-dimensional unit cube. A δ-separated set of V is a subcollection W such that the Hamming distance between each pair u, v ∈ W is greater than δ, where δ > 0 is an integer parameter. The δ-packing number is then defined as the cardinality of the largest δ-separated subcollection of V. Haussler showed an asymptotically tight bound of Θ((n/δ)^d) on the δ-packing number if V has VC-dimension (or primal shatter dimension) d. We refine this bound for the scenario where, for any subset X′ ⊆ X of size m ≤ n and for any parameter 1 ≤ κ ≤ m, the number of vectors of length at most κ in the restriction of V to X′ is only O(m^{d_1} κ^{d−d_1}), for a fixed integer d > 0 and a real parameter 1 ≤ d_1 ≤ d (this generalizes the standard notion of bounded primal shatter dimension when d_1 = d). In this case, when V is "κ-shallow" (all vector lengths are at most κ), we show that its δ-packing number is O(n^{d_1} κ^{d−d_1}/δ^d), matching Haussler's bound for the special cases d_1 = d or κ = n. We present two proofs: the first is an extension of Haussler's approach, and the second extends the proof of Chazelle, originally presented as a simplification of Haussler's proof.
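As a toy illustration of the δ-separation notion (an illustration only, not the paper's proof technique), the following sketch greedily extracts a δ-separated subcollection of indicator vectors; the greedy set is a δ-separated subcollection, so its size is only a lower bound on the δ-packing number:

```python
from itertools import combinations

def hamming(u, v):
    """Hamming distance between two equal-length 0/1 tuples."""
    return sum(a != b for a, b in zip(u, v))

def greedy_delta_separated(vectors, delta):
    """Greedily build a delta-separated subcollection W of V:
    every pair in W has Hamming distance greater than delta."""
    W = []
    for v in vectors:
        if all(hamming(v, w) > delta for w in W):
            W.append(v)
    return W

# A small set system over a 4-point ground set, written as
# indicator vectors over {0,1}^4.
V = [(0, 0, 0, 0), (1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 0), (1, 1, 1, 1)]
W = greedy_delta_separated(V, delta=1)
# Every pair in W differs in more than 1 coordinate.
assert all(hamming(u, v) > 1 for u, v in combinations(W, 2))
```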
In empirical cognitive science, for human learning, a semantic or behavioral U-shape occurs when a learner first learns, then unlearns, and, finally, relearns, some target concept. Within the formal framework of Inductive Inference, for learning from positive data, previous results have shown, for example, that such U-shapes are unnecessary for explanatory learning, but are necessary for behaviorally correct and non-trivial vacillatory learning. Herein we also distinguish between semantic and syntactic U-shapes. We answer a number of open questions in the prior literature as well as provide new results regarding syntactic U-shapes. Importantly for cognitive science, we see more of a previously noticed pattern that, for parameterized learning criteria, beyond very few initial parameter values, U-shapes are necessary for full learning power. We analyze the necessity of U-shapes in two memory-limited settings. The first setting is Bounded Memory State (BMS) learning, where a learner has an explicitly-bounded state memory, and otherwise only knows its current datum. We show that there are classes learnable with three (or more) memory states that are not learnable non-U-shapedly with any finite number of memory states. This result is surprising, since, for learning with one or two memory states, U-shapes are known to be unnecessary. This solves an open question from the literature. The second setting is that of Memoryless Feedback (MLF) learning, where a learner may ask a bounded number of questions about what data has been seen so far, and otherwise only knows its current datum. We show that there is a class learnable memorylessly with a single feedback query such that this class is not learnable non-U-shapedly memorylessly with any finite number of feedback queries. We employ self-learning classes together with the Operator Recursion Theorem for many of our results, but we also introduce two new techniques for obtaining results. The first is for transferring inclusion
Determining the precise integrality gap for the subtour LP relaxation of the traveling salesman problem is a significant open question, with little progress made in thirty years in the general case of symmetric costs ...
We prove that the number of vertices of given degree in random planar maps satisfies a central limit theorem with mean and variance that are asymptotically linear in the number of edges. The proof relies on an analyti...
With the widespread use of social networks, the risk of information sharing has become inevitable. Sharing a user's particular information in social networks is an all-or-none decision. Users receiving friendship invitations from others may decide to accept this request and share their information, or reject it, in which case none of their information will be shared. Access control in social networks is a challenging topic. Social network users would want to determine the optimum level of detail at which they share their personal information with other users, based on the risk associated with the process. In this paper, we formulate the problem of data sharing in social networks using two different models: (i) a model based on diffusion kernels, and (ii) a model based on access control. We show that it is hard to apply the former in practice and explore the latter. We prove that determining the optimal levels of information sharing is an NP-hard problem and propose an approximation algorithm that determines to what extent social network users share their own information. We propose a trust-based model to assess the risk of sharing sensitive information and use it in the proposed algorithm. Moreover, we prove that the algorithm runs in polynomial time. Our results rely heavily on the supermodularity of the risk function, which allows us to employ techniques from convex optimization. To evaluate our model, we conduct a user study to collect demographic information from several social network users and elicit their perceptions of risk and trust. In addition, through experimental studies on synthetic data, we compare our proposed algorithm with the optimal algorithm, both in terms of risk and time. We show that the proposed algorithm is scalable and that the sacrifice in risk is outweighed by the gain in efficiency.
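To make the access-control formulation concrete, here is a minimal greedy sketch. All names (`items`, `risk_budget`, the attribute triples) are hypothetical, and this simple benefit-to-risk ratio rule is not the paper's approximation algorithm, which instead exploits the supermodularity of the risk function:

```python
def choose_shared_attributes(items, risk_budget):
    """Greedy toy: share the attributes with the best benefit-to-risk
    ratio while the total risk stays within the budget.  items is a
    list of (attribute, risk, benefit) triples -- all illustrative."""
    chosen, total_risk = [], 0.0
    for name, risk, benefit in sorted(items, key=lambda t: t[2] / t[1],
                                      reverse=True):
        if total_risk + risk <= risk_budget:
            chosen.append(name)
            total_risk += risk
    return chosen, total_risk

profile = [("email", 1.0, 4.0), ("photos", 2.0, 5.0), ("location", 3.0, 3.0)]
shared, used = choose_shared_attributes(profile, risk_budget=4.0)
# With this budget the greedy shares "email" and "photos" (total risk 3.0)
# and withholds "location".
```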
ISBN:
(Print) 9781450300728
Ant Colony Optimization (ACO) is a powerful metaheuristic for solving combinatorial optimization problems. With this paper we contribute to the theoretical understanding of this kind of algorithm by investigating the classical minimum cut problem. An ACO algorithm similar to the one that was proved successful for the minimum spanning tree problem is studied. Using rigorous runtime analyses we show how the ACO algorithm behaves similarly to Karger and Stein's algorithm for the minimum cut problem as long as the use of pheromone values is limited. Hence optimal solutions are obtained in expected polynomial time. On the other hand, we show that high use of pheromones has a negative effect, and the ACO algorithm may get trapped in local optima resulting in an exponential runtime to obtain an optimal solution. This result indicates that ACO algorithms may be inappropriate for finding minimum cuts.
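For reference, a compact sketch of Karger's basic contraction algorithm, the baseline the abstract's ACO analysis is compared against (Karger and Stein's variant adds recursive repetition; this simplified version just repeats independent trials):

```python
import random

def karger_min_cut(edges, n, trials=200, seed=0):
    """Karger's randomized contraction algorithm: repeatedly contract
    a uniformly random edge until two supernodes remain; the edges
    crossing between them form a cut.  Returns the smallest cut size
    found over `trials` independent runs."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(trials):
        parent = list(range(n))  # union-find over supernodes

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        remaining = n
        while remaining > 2:
            u, v = edges[rng.randrange(len(edges))]
            ru, rv = find(u), find(v)
            if ru != rv:          # skip edges already contracted
                parent[ru] = rv
                remaining -= 1
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best

# Two triangles joined by a single bridge edge: the minimum cut is 1.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
cut = karger_min_cut(edges, 6)
```

A single trial finds a fixed minimum cut with probability at least 2/(n(n-1)), so the number of repetitions governs the failure probability.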
ISBN:
(Print) 9780898717013
A Random Geometric Graph (RGG) in two dimensions is constructed by distributing n nodes independently and uniformly at random in [0, √n]² and creating edges between every pair of nodes having Euclidean distance at most r, for some prescribed r. We analyze the following randomized broadcast algorithm on RGGs. At the beginning, only one node from the largest connected component of the RGG is informed. Then, in each round, each informed node chooses a neighbor independently and uniformly at random and informs it. We prove that with probability 1 − O(n^(−1)) this algorithm informs every node in the largest connected component of an RGG within O(√n/r + log n) rounds. This holds for any value of r larger than the critical value for the emergence of a connected component with Ω(n) nodes. In order to prove this result, we show that for any two nodes sufficiently distant from each other in [0, √n]², the length of the shortest path between them in the RGG, when such a path exists, is only a constant factor larger than the optimum. This result has independent interest and, in particular, gives that the diameter of the largest connected component of an RGG is Θ(√n/r), which surprisingly has been an open problem so far.
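The push protocol described in the abstract is easy to simulate. The following sketch runs it on a small cycle rather than an RGG (the RGG geometry is omitted), assuming the input graph is connected:

```python
import random

def push_broadcast_rounds(adj, start, seed=0):
    """Simulate the push protocol: in each round every informed node
    picks one neighbour uniformly at random and informs it.  Returns
    the number of rounds until all nodes of the connected graph
    described by adj (node -> neighbour list) are informed."""
    rng = random.Random(seed)
    informed = {start}
    rounds = 0
    while len(informed) < len(adj):
        informed |= {rng.choice(adj[u]) for u in informed}
        rounds += 1
    return rounds

# 10-node cycle: each node's neighbours are its two ring neighbours.
cycle = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
rounds = push_broadcast_rounds(cycle, start=0)
# On a cycle the informed set grows by at most 2 per round,
# so at least ceil(9/2) = 5 rounds are needed here.
```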
In recent years a lot of progress has been made in understanding the behavior of evolutionary computation methods for single- and multi-objective problems. Our aim is to analyze the diversity mechanisms that are implicitly used in evolutionary algorithms for multi-objective problems by rigorous runtime analyses. We show that, even if the population size is small, the runtime can be exponential where corresponding single-objective problems are optimized within polynomial time. To illustrate this behavior we analyze a simple plateau function in a first step and extend our result to a class of instances of the well-known SETCOVER problem.
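A minimal sketch of SEMO (Simple Evolutionary Multi-objective Optimizer), the archetypal algorithm in runtime analyses of this kind; the bi-objective LOTZ (LeadingOnes/TrailingZeros) function below is a standard benchmark, not the plateau or SETCOVER construction from the abstract:

```python
import random

def semo(f, n, steps, seed=0):
    """SEMO sketch: maintain a population of mutually non-dominated bit
    strings; each step, pick a random member, flip one random bit, and
    insert the child unless some member strictly dominates it, evicting
    members the child weakly dominates (maximization)."""
    rng = random.Random(seed)

    def dominates(a, b):  # strict Pareto dominance
        fa, fb = f(a), f(b)
        return all(x >= y for x, y in zip(fa, fb)) and fa != fb

    def weakly_dominates(a, b):
        return all(x >= y for x, y in zip(f(a), f(b)))

    pop = [tuple(rng.randint(0, 1) for _ in range(n))]
    for _ in range(steps):
        parent = rng.choice(pop)
        i = rng.randrange(n)
        child = parent[:i] + (1 - parent[i],) + parent[i + 1:]
        if not any(dominates(p, child) for p in pop):
            pop = [p for p in pop if not weakly_dominates(child, p)]
            pop.append(child)
    return pop

def lotz(s):
    """LeadingOnes / TrailingZeros: two conflicting objectives."""
    lead = next((i for i, b in enumerate(s) if b == 0), len(s))
    trail = next((i for i, b in enumerate(reversed(s)) if b == 1), len(s))
    return (lead, trail)

pop = semo(lotz, n=6, steps=3000)
# By construction the final population is mutually non-dominated.
```

Because the objectives conflict, the population implicitly maintains diversity along the Pareto front, which is exactly the mechanism whose limits the abstract's analysis probes.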