Authors: Cocco, S; Monasson, R
Affiliations: ENS; CNRS, Phys Theor Lab, F-75005 Paris, France; CNRS, Lab Dynam Fluides Complexes, F-67000 Strasbourg, France; CNRS, Phys Theor Lab, F-67000 Strasbourg, France
An analysis of the average-case complexity of solving random 3-Satisfiability (SAT) instances with backtrack algorithms is presented. We first interpret previous rigorous works in a unifying framework based on the statistical physics notions of dynamical trajectories, phase diagram and growth process. It is argued that, under the action of the Davis-Putnam-Logemann-Loveland (DPLL) algorithm, 3-SAT instances are turned into 2+p-SAT instances whose characteristic parameters (ratio alpha of clauses per variable, fraction p of 3-clauses) can be followed during the operation, and define resolution trajectories. Depending on the location of trajectories in the phase diagram of the 2+p-SAT model, easy (polynomial) or hard (exponential) resolutions are generated. Three regimes are identified, depending on the ratio alpha of the 3-SAT instance to be solved. Lower satisfiable (sat) phase: for small ratios, DPLL almost surely finds a solution in a time growing linearly with the number N of variables. Upper sat phase: for intermediate ratios, instances are almost surely satisfiable but finding a solution requires exponential time (~ 2^(N omega) with omega > 0) with high probability. Unsat phase: for large ratios, there is almost always no solution and proofs of refutation are exponential. An analysis of the growth of the search tree in both upper sat and unsat regimes is presented, and allows us to estimate omega as a function of alpha. This analysis is based on an exact relationship between the average size of the search tree and the powers of the evolution operator encoding the elementary steps of the search heuristic. (C) 2003 Elsevier B.V. All rights reserved.
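The DPLL procedure the abstract analyses alternates unit propagation with branching and backtracking. A minimal sketch in Python, assuming the common convention that clauses are lists of signed integers, and using a naive lowest-index branching rule rather than the heuristics studied in the paper:

```python
def unit_propagate(clauses, assignment):
    """Repeatedly satisfy unit clauses; return the assignment, or None on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                      # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None                   # conflict: clause falsified
            if len(unassigned) == 1:
                l = unassigned[0]
                assignment[abs(l)] = (l > 0)  # forced (unit) assignment
                changed = True
    return assignment

def dpll(clauses, assignment=None):
    """Return a satisfying assignment {var: bool}, or None if unsatisfiable."""
    assignment = dict(assignment or {})
    if unit_propagate(clauses, assignment) is None:
        return None
    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment                     # all clauses satisfied
    var = min(free)                           # naive branching rule
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None                               # backtrack: both branches failed
```

Branching on an unset variable of a 3-SAT formula shortens some 3-clauses into 2-clauses, which is exactly the mechanism that drives the 2+p-SAT trajectories described above.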
We introduce a novel amortised resource analysis couched in a type-and-effect system. Our analysis is formulated in terms of the physicist's method of amortised analysis and is potential-based. The type system makes use of logarithmic potential functions and is the first such system to exhibit logarithmic amortised complexity. With our approach, we target the automated analysis of self-adjusting data structures, like splay trees, which so far have only been analysed manually in the literature. In particular, we have implemented a semi-automated prototype, which successfully analyses the zig-zig case of splaying, once the type annotations are fixed.
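The splay-tree analysis itself is intricate, but the physicist's (potential) method the abstract builds on is easy to see on the textbook binary-counter example, where the potential Phi = number of 1-bits makes every increment cost amortised O(1). This is an illustration of the method only, not of the paper's logarithmic potential functions:

```python
def increment(bits):
    """Increment a little-endian binary counter in place; return the number of bit flips."""
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0          # carry: flip trailing 1s to 0
        i += 1
    if i == len(bits):
        bits.append(1)
    else:
        bits[i] = 1
    return i + 1             # actual cost: i carries plus one 0 -> 1 flip

def potential(bits):
    return sum(bits)         # Phi = number of 1-bits

bits, total_actual, amortized_costs = [], 0, []
phi_prev = potential(bits)
for _ in range(1000):
    cost = increment(bits)
    phi = potential(bits)
    amortized_costs.append(cost + phi - phi_prev)  # a_i = c_i + Phi_i - Phi_{i-1}
    total_actual += cost
    phi_prev = phi
```

Under this potential every increment has amortised cost exactly 2, even though individual increments can flip many bits; the total actual cost telescopes to 2n minus the final potential.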
The performance of hash tables is analyzed in a parallel context. Assuming that a hash table of fixed size is allocated in the shared memory of a PRAM with n processors, a Ph-step is defined as a PRAM computation in which each processor searches or inserts a key in the table. It is shown that, in the case of uniform hashing, the average duration of a Ph-step consisting entirely of insertions (worst case) is Omega(log_{1/alpha'} n), where alpha and alpha' are the load factors of the table before and after the execution of the Ph-step.
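Under uniform hashing, an insertion at load factor alpha probes fresh random slots until it finds an empty one, so a Ph-step's duration is governed by the slowest of the n concurrent insertions. A small Monte Carlo sketch, simulating one Ph-step sequentially; the table size and load factor are illustrative choices, not the paper's parameters:

```python
import random

def insert_uniform(table, rng):
    """Insert one key under idealised uniform hashing; return the number of probes."""
    probes = 0
    while True:
        probes += 1
        slot = rng.randrange(len(table))   # fresh uniform probe on each attempt
        if table[slot] is None:
            table[slot] = True
            return probes

rng = random.Random(0)
m, n = 4096, 512
table = [None] * m
for _ in range(m // 2):                    # pre-fill to load factor alpha = 0.5
    insert_uniform(table, rng)
probe_counts = [insert_uniform(table, rng) for _ in range(n)]  # one Ph-step: n insertions
mean_probes = sum(probe_counts) / n        # per-processor average, roughly 1/(1 - alpha)
step_duration = max(probe_counts)          # the Ph-step lasts as long as its slowest insertion
```

The gap between `mean_probes` and `step_duration` is the point of the analysis: the maximum over n processors, not the per-processor average, determines the duration of the parallel step.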
In many cases, a key step in neuronal information processing is phase synchronization of neurons (as oscillators). Substantial evidence suggests that a universal mechanism is behind the synchronization. Recently, stochastic equations were proposed to model the synchronization. However, it is unclear what force ultimately drives this universal mechanism. From an algorithmic perspective, we analyze solutions of these stochastic equations. The result enhances the current analysis of phase synchronization; importantly, it shows that noise is the ultimate driving force. (c) 2007 Elsevier B.V. All rights reserved.
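A hedged illustration of noise as the driving force: two identical, uncoupled phase oscillators driven by a common noise term synchronise, a standard phenomenon in stochastic oscillator models. The sensitivity function sin(theta) and all parameters below are illustrative assumptions, not the equations of the paper:

```python
import math
import random

def simulate_pair(theta1, theta2, omega=1.0, sigma=0.8,
                  dt=1e-3, steps=300_000, seed=1):
    """Euler-Maruyama for d(theta) = omega dt + sigma * sin(theta) dW,
    with the SAME Wiener increment dW fed to both oscillators."""
    rng = random.Random(seed)
    sqdt = math.sqrt(dt)
    for _ in range(steps):
        dW = rng.gauss(0.0, sqdt)          # common noise shared by both oscillators
        theta1 += omega * dt + sigma * math.sin(theta1) * dW
        theta2 += omega * dt + sigma * math.sin(theta2) * dW
    return theta1, theta2

t1, t2 = simulate_pair(0.0, 0.5)
# circular distance between the two phases after the run
phase_gap = abs(math.atan2(math.sin(t1 - t2), math.cos(t1 - t2)))
```

There is no coupling term between the two oscillators; the shared noise alone contracts the phase difference, which is the qualitative point the abstract makes.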
To protect RFID systems against the relay attack, distance bounding protocols have been proposed based upon round-trip time measurements of the exchanged messages. With such protocols, in addition to tag authentication, the reader estimates an upper bound on the physical distance between the tag and itself. Recently, Kim and Avoine proposed two distance bounding protocols, called KM and KA2, both of which utilize mixed random and predefined challenges. It is well known that KA2 provides good performance in terms of mafia fraud resistance and system memory requirements for the tags. Since RFID systems and distance bounding protocols are particularly susceptible to noise, in this paper KM and KA2 are analyzed to compute the probability that a valid tag is rejected due to channel errors. The analysis, as well as simulation results, shows that increasing the number of rounds (iterations) with predefined challenges causes the rejection probability of a valid tag to increase. (c) 2015 Elsevier B.V. All rights reserved.
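The effect the paper quantifies can be seen in a generic binomial model (a simplification for illustration, not the KM/KA2 analysis itself): if a valid tag is rejected whenever more than a fixed tolerance t of the n timed rounds are corrupted, each independently with probability p, then adding rounds with the tolerance held fixed raises the false-rejection probability:

```python
from math import comb

def rejection_probability(n_rounds, p_err, tolerance):
    """P(valid tag rejected) = P(more than `tolerance` of n_rounds rounds fail),
    with independent per-round error probability p_err."""
    p_accept = sum(comb(n_rounds, k) * p_err**k * (1 - p_err)**(n_rounds - k)
                   for k in range(tolerance + 1))
    return 1.0 - p_accept

# with per-round error 5% and tolerance 2, more rounds -> more false rejections
curve = [rejection_probability(n, 0.05, 2) for n in (16, 32, 64, 128)]
```

This monotonicity is the trade-off the abstract points at: rounds with predefined challenges improve security margins but, under channel noise, each extra round is one more chance to falsely reject an honest tag.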
Recently, an agile software development technique called extreme programming has caught the attention of practitioners and researchers in the software industry. A core practice of extreme programming is pair programming, where two developers work on the same piece of code. We introduce the problem of assigning pairs of developers to modules so as to maximize the commonality (a measure of the extent to which common developers work on related modules) subject to a load-balancing constraint that is motivated by the need to control the completion time of the project. We consider two variants of this problem. In MCAP(n), a developer is teamed up with exactly one other developer to form a pair that works together for the entire duration of the project. In MCAP(s), we allow a developer to pair with more than one other developer during the project. This "pair-splitting" version of the problem facilitates knowledge dissemination among developers, but can increase the effort needed for a developer to adjust to the work habits of several partners. The difference between the commonality achieved with and without pair splitting crucially depends on the underlying structure of the problem. For trees, we show that the value of the maximum commonality is the same for both MCAP(n) and MCAP(s). Additionally, we obtain polynomial-time algorithms for both of these variants. For general graphs, both MCAP(n) and MCAP(s) are shown to be strongly NP-complete. We prove that the maximum commonality for MCAP(s) is at most 3/2 times the maximum commonality of MCAP(n). We also provide polynomial-time algorithms and approximation results for a number of special cases of these problems.
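For intuition on the MCAP(n) variant, here is an exhaustive maximum-commonality pairing on a toy instance. Brute force is exponential and is used only for illustration; the weight function below is a made-up commonality measure, and the paper's polynomial-time algorithms for trees are not reproduced here:

```python
def best_pairing(devs, weight):
    """Exhaustive maximum-weight perfect matching over an even-sized developer list.
    Returns (total commonality, list of pairs)."""
    if not devs:
        return 0, []
    first, rest = devs[0], devs[1:]
    best_total, best_pairs = float("-inf"), None
    for partner in rest:                       # try every partner for the first developer
        remaining = [d for d in rest if d != partner]
        sub_total, sub_pairs = best_pairing(remaining, weight)
        total = weight(first, partner) + sub_total
        if total > best_total:
            best_total, best_pairs = total, [(first, partner)] + sub_pairs
    return best_total, best_pairs

# hypothetical pairwise commonality scores for four developers
scores = {frozenset(p): w for p, w in [
    (("A", "B"), 3), (("C", "D"), 3), (("A", "C"), 2),
    (("B", "D"), 2), (("A", "D"), 1), (("B", "C"), 1)]}
total, pairs = best_pairing(["A", "B", "C", "D"],
                            lambda a, b: scores[frozenset((a, b))])
```

On this instance the optimum pairs A with B and C with D for a total commonality of 6; the NP-completeness result for general graphs says no polynomial algorithm matches this exhaustive search on all inputs unless P = NP.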
The Karmarkar algorithm for linear programming (1984) has generated excitement and controversy. This controversy has been the theme of many popular panel discussions, reporting on experimental results both supporting and refuting claims of substantial speed-ups. How does one resolve these conflicting reports? The thrust of the work here has been on implementation and experimentation. We argue that further theoretical analysis of the algorithm can shed light on this controversy. In particular we ask: What is the ‘average’ behavior of the algorithm, and what are its dynamical properties? In this paper we do not address the issue of work per iteration, perhaps the most controversial aspect of the algorithm in practice. We do look, however, at the iterative process, and here we give a new proof of the number of iterative steps required by the algorithm in the worst case. Our proof is considerably simpler than Karmarkar's, and provides new information and insight. Specifically, our proof identifies an explicitly geometric quantity that measures the algorithm's progress at each step. Analyzing the changing size of this quantity gives information about the dynamics of the algorithm.
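Karmarkar's projective transformation is involved, but the iterative behaviour the abstract studies (progress per step measured by a geometric quantity) can be sketched with the closely related affine-scaling step, a simplified relative of Karmarkar's method chosen here for brevity, on the toy LP min x1 subject to x1 + x2 = 1, x > 0:

```python
def affine_scaling_step(x, c, gamma=0.9):
    """One affine-scaling step for min c.x s.t. x1 + x2 = 1, x > 0 (A = [1, 1])."""
    x1, x2 = x
    d2 = (x1 * x1, x2 * x2)                     # D^2, D = diag(x)
    # dual estimate w = (A D^2 A^T)^{-1} A D^2 c; a scalar for one constraint
    w = (d2[0] * c[0] + d2[1] * c[1]) / (d2[0] + d2[1])
    r = (c[0] - w, c[1] - w)                    # reduced costs
    dx = (-d2[0] * r[0], -d2[1] * r[1])         # steepest descent in the scaled space
    # advance a fraction gamma of the distance to the positivity boundary
    ratios = [xi / -di for xi, di in zip(x, dx) if di < 0]
    alpha = gamma * min(ratios) if ratios else 1.0
    return (x1 + alpha * dx[0], x2 + alpha * dx[1])

x = (0.5, 0.5)                                  # interior starting point
for _ in range(20):
    x = affine_scaling_step(x, (1.0, 0.0))      # objective is x1; optimum is (0, 1)
```

The direction dx lies in the null space of A, so feasibility x1 + x2 = 1 is preserved at every step while the iterate moves toward the optimal vertex without ever touching the boundary.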
Authors: Trelea, IC
Affiliation: INA PG, UMR Genie & Microbiol Proc Alimentaires, F-78850 Thiverval Grignon, France
The particle swarm optimization algorithm is analyzed using standard results from dynamical systems theory. Graphical parameter selection guidelines are derived. The exploration-exploitation tradeoff is discussed and illustrated. Examples of performance on benchmark functions superior to previously published results are given. (C) 2002 Elsevier Science B.V. All rights reserved.
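A minimal gbest particle swarm in Python, using the widely cited constriction-type parameters w = 0.729, c1 = c2 = 1.494, which lie inside the stability region that such graphical guidelines identify. The benchmark function and all settings below are illustrative choices, not the paper's:

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.729, c1=1.494, c2=1.494, seed=0):
    """Minimise f over R^dim with a basic global-best particle swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]                       # inertia
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# sphere benchmark: global minimum 0 at the origin
best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

The inertia weight w controls the exploration-exploitation tradeoff directly: values near the stability boundary keep particles oscillating (exploring), while smaller values damp the dynamics toward the swarm's best point (exploiting).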
In the running-time analysis of the algorithm Find and versions of it, limiting distributions appear as solutions of stochastic fixed-point equations of the form X =_D Sigma_i A_i X_i o B_i + C on the space D of cadlag functions. The distribution of the D-valued process X is invariant under some random linear affine transformation of space and random time change. We show the existence of solutions in some generality via the Weighted Branching Process. Finite exponential moments are connected to stochastic fixed points of supremum type, X =_D sup_i (A_i X_i + C_i), on the positive reals. Specifically, we present a running-time analysis of m-median and adapted versions of Find. The finite-dimensional distributions converge in L^1 and are continuous in the cylinder coordinates. We present the optimal adapted version in the sense of low asymptotic average number of comparisons. The limit distribution of the optimal adapted version of Find is a point measure on the function [0, 1] ∋ t -> 1 + min{t, 1 - t}.
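For reference, the algorithm Find analysed here is Hoare's quickselect. A short version with a comparison counter, counting one comparison per element-pivot classification, which is the quantity whose asymptotic average the adapted versions optimise:

```python
import random

def find(data, k, rng, stats):
    """Hoare's Find (quickselect): return the k-th smallest (0-based) element of data."""
    pivot = data[rng.randrange(len(data))]
    stats["comparisons"] += len(data) - 1      # classify every other element against the pivot
    less = [x for x in data if x < pivot]
    greater = [x for x in data if x > pivot]
    if k < len(less):
        return find(less, k, rng, stats)       # target lies left of the pivot
    if k >= len(data) - len(greater):
        return find(greater, k - (len(data) - len(greater)), rng, stats)
    return pivot                               # k falls among elements equal to the pivot

rng = random.Random(1)
data = list(range(101))
rng.shuffle(data)
stats = {"comparisons": 0}
median = find(data, 50, rng, stats)
```

Here the pivot is uniform over the current subarray; the adapted versions studied in the paper instead choose pivots (e.g. medians of small samples) as a function of the rank still being sought, which lowers the asymptotic average comparison count.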
We study the Kolmogorov complexity of a Binary Insertion Tree, and present a succinct encoding scheme for Binary Insertion Trees produced from incompressible permutations. Based on the encoding scheme, we obtain a simple incompressibility argument that yields an asymptotic analysis of the average height of a Binary Insertion Tree. This argument further implies that the QuickSort algorithm sorts a permutation of n elements in Theta(n lg n) comparisons on average. (C) 2007 Elsevier B.V. All rights reserved.
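The object studied, a Binary Insertion Tree, is simply the binary search tree obtained by inserting a permutation left to right. A sketch that builds one and measures its height; the fact that the average height grows logarithmically (while a sorted permutation gives a degenerate path of height n) comes from the literature, not from this code:

```python
import random

def bst_height(perm):
    """Height (number of levels) of the binary search tree built by inserting
    the distinct elements of perm left to right; empty input has height 0."""
    children = {}                    # value -> [left child value, right child value]
    root, max_depth = None, 0
    for x in perm:
        if root is None:
            root, children[x], max_depth = x, [None, None], 1
            continue
        node, depth = root, 1
        while True:                  # standard unbalanced BST insertion
            side = 0 if x < node else 1
            if children[node][side] is None:
                children[node][side] = x
                children[x] = [None, None]
                max_depth = max(max_depth, depth + 1)
                break
            node, depth = children[node][side], depth + 1
    return max_depth

rng = random.Random(42)
perm = list(range(1024))
rng.shuffle(perm)
h_random = bst_height(perm)                 # typically a small multiple of lg n
h_sorted = bst_height(list(range(1024)))    # worst case: a path of n levels
```

The contrast between `h_random` and `h_sorted` is the phenomenon the incompressibility argument captures: incompressible (typical) permutations cannot produce the compressible, path-like trees that cause the worst case.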