The hardness of fitness functions is an important research topic in the field of evolutionary computation. In theory, this paper can help with understanding the ability of evolutionary algorithms (EAs). In practice, this paper may provide a guideline for the design of benchmarks. The aim of this paper is to answer the following research questions. Given a fitness function class, which functions are the easiest with respect to an EA? Which are the hardest? How are these functions constructed? This paper provides theoretical answers to these questions. The easiest and hardest fitness functions are constructed for an elitist (1 + 1) EA maximizing a class of fitness functions with the same optima. It is demonstrated that unimodal functions are the easiest and deceptive functions are the hardest in terms of the time-based fitness landscape. This paper also reveals that, within a fitness function class, the easiest function for one algorithm may become the hardest for another algorithm, and vice versa.
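The elitist (1 + 1) EA discussed above can be sketched in a few lines. The snippet below runs it on OneMax (counting 1-bits), a classic unimodal function and therefore one of the "easiest" cases in the sense of the abstract; the function names and parameter values are illustrative, not taken from the paper.

```python
import random

def one_max(x):
    # Unimodal fitness: number of 1-bits; the optimum is the all-ones string.
    return sum(x)

def one_plus_one_ea(fitness, n=20, max_evals=10000, seed=1):
    """Elitist (1 + 1) EA: keep the parent unless the mutant is at least as fit."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    best = fitness(parent)
    for _ in range(max_evals):
        # Standard bit mutation: flip each bit independently with probability 1/n.
        child = [bit ^ (rng.random() < 1.0 / n) for bit in parent]
        f = fitness(child)
        if f >= best:  # elitism: never accept a strictly worse individual
            parent, best = child, f
        if best == n:
            break
    return parent, best

solution, value = one_plus_one_ea(one_max)
```

On OneMax the elitist EA climbs monotonically and reaches the optimum quickly; on a deceptive function the same selection rule would be led away from the optimum, which is the intuition behind the easiest/hardest contrast above.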
Farber and Keil presented an O(n³) algorithm for finding a minimum weight dominating set on permutation graphs. In this paper, we take a new approach to solving the same problem. The algorithm takes O(n(m + n)) steps, where m is the number of edges in a permutation graph G with n nodes. Therefore, our algorithm is particularly well suited to sparse permutation graphs.
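For readers unfamiliar with the setting, the sketch below builds a permutation graph (vertices i < j are adjacent iff the permutation inverts them) and solves minimum weight dominating set by exponential brute force. This reference implementation only defines the problem; it is not the O(n(m + n)) algorithm of the paper.

```python
from itertools import combinations

def permutation_graph_edges(pi):
    """Vertices 0..n-1; i < j are adjacent iff pi[i] > pi[j] (an inversion)."""
    n = len(pi)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if pi[i] > pi[j]]

def min_weight_dominating_set(n, edges, w):
    """Exponential brute force over all vertex subsets (reference only)."""
    closed = [{v} for v in range(n)]  # closed neighborhoods N[v]
    for a, b in edges:
        closed[a].add(b)
        closed[b].add(a)
    best, best_set = float("inf"), None
    for r in range(1, n + 1):
        for sub in combinations(range(n), r):
            if set().union(*(closed[v] for v in sub)) == set(range(n)):
                weight = sum(w[v] for v in sub)
                if weight < best:
                    best, best_set = weight, set(sub)
    return best, best_set

edges = permutation_graph_edges([2, 0, 3, 1])
weight, dom = min_weight_dominating_set(4, edges, [1, 1, 1, 1])
```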
This paper studies a generalization of the internal path length of a random digital search tree to bucket trees, which are capable of storing up to b records per node. We show that under the assumption of the symmetric Bernoulli probabilistic model the expected path length of a tree built from N records is asymptotically N log₂N + (A + δ₁(log₂N))N and the variance is (C + ε₁(log₂N))N, where A and C depend on b only. The continuous functions δ₁ and ε₁ are periodic with mean 0 and period 1. The proofs are analytical and make use of generating functions, harmonic sums and the Mellin integral transform. An important and very general tool for the analysis is Mellin's convolution integral. These results and techniques are motivated by a paper by Flajolet and Richmond and by Kirschenhofer, Prodinger, and Szpankowski. (C) 2000 Elsevier Science B.V. All rights reserved.
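The bucket digital search tree itself is easy to simulate. Under the symmetric Bernoulli model each key bit is an independent fair coin flip, so a fresh random bit can be drawn at each branching step. The sketch below (illustrative, not from the paper) measures the total path length, whose leading term should be close to N log₂N.

```python
import random

class Node:
    def __init__(self):
        self.records = 0          # number of records in this bucket (<= b)
        self.child = [None, None]

def insert(root, rng, b):
    """Insert one record; each branching bit is a fair coin flip. Returns its depth."""
    node, depth = root, 0
    while node.records >= b:      # bucket full: descend along the next key bit
        bit = rng.randint(0, 1)
        if node.child[bit] is None:
            node.child[bit] = Node()
        node, depth = node.child[bit], depth + 1
    node.records += 1
    return depth

def bucket_dst_path_length(n, b, seed=0):
    """Total depth of all n records in a random bucket digital search tree."""
    rng = random.Random(seed)
    root = Node()
    return sum(insert(root, rng, b) for _ in range(n))

length = bucket_dst_path_length(4096, 2)  # leading term: N * log2(N) = 4096 * 12
```

The measured value falls somewhat below N log₂N, consistent with the correction term (A + δ₁(log₂N))N having a negative constant A.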
The failure rate of the Apriori algorithm is studied analytically for the case of random shoppers. The time needed by the Apriori algorithm is determined by the number of item sets that are output (successes: item sets that occur in at least k baskets) and the number of item sets that are counted but not output (failures: item sets all of whose subsets occur in at least k baskets but which themselves occur in fewer than k baskets). The number of successes is a property of the data; no algorithm that is required to output each success can avoid doing work associated with the successes. The number of failures is a property of both the algorithm and the data. We find that under a wide range of conditions the performance of the Apriori algorithm is almost as bad as is permitted under sophisticated worst-case analyses. In particular, there is usually a bad level with two properties: (1) it is the level where nearly all of the work is done, and (2) nearly all item sets counted are failures. Let l be the level with the most successes, and let the number of successes on level l be approximately the binomial coefficient C(m, l) for some m. Then, typically, the Apriori algorithm has total output proportional to approximately C(m, l) and total work proportional to approximately C(m, l+1). In addition, m is usually much larger than l, so the ratio of work to output is approximately proportional to m/(l+1). The analytical results for random shoppers are compared against measurements for three data sets. These data sets are more like the usual applications of the algorithm. In particular, the buying patterns of the various shoppers are highly correlated. For most thresholds, these data sets also have a bad level. Thus, under most conditions nearly all of the work done by the Apriori algorithm consists of counting item sets that fail.
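A minimal level-wise Apriori that tallies successes and failures per the definitions above can be sketched as follows (illustrative code, not the authors' implementation):

```python
from itertools import combinations

def apriori_counts(baskets, k):
    """Level-wise Apriori; returns (successes, failures): a candidate is
    counted only if all its proper subsets were frequent, and it is a
    failure if it is counted but occurs in fewer than k baskets."""
    items = sorted({i for basket in baskets for i in basket})
    counted = {frozenset([i]) for i in items}  # level-1 candidates
    frequent = {c for c in counted if sum(c <= bk for bk in baskets) >= k}
    successes = len(frequent)
    failures = len(counted) - len(frequent)
    level = 1
    while frequent:
        level += 1
        # Candidates: unions of frequent sets of the right size whose every
        # (level-1)-subset is itself frequent (the Apriori pruning rule).
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == level
                      and all(frozenset(s) in frequent
                              for s in combinations(a | b, level - 1))}
        next_frequent = {c for c in candidates
                         if sum(c <= bk for bk in baskets) >= k}
        successes += len(next_frequent)
        failures += len(candidates) - len(next_frequent)
        frequent = next_frequent
    return successes, failures

s, f = apriori_counts([{1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {3, 4, 5}], k=2)
```

Even on this tiny example the "bad level" phenomenon is visible: level 2 counts ten candidate pairs but only one of them is frequent, so most of the counting work is spent on failures.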
We consider the problem of efficiently performing a reduce-scatter operation in a message passing system. Reduce-scatter is the composition of an element-wise reduction on vectors of n elements initially held by n processors, with a scatter of the resulting vector among the processors. In this paper, we present two algorithms for the reduce-scatter operation, designed in LogGP. The first algorithm assumes an associative and commutative reduction operator and it is optimal in LogGP within a small constant factor. The second algorithm allows the reduction operator to be noncommutative, and it is asymptotically optimal when values to be combined are large arrays. To achieve these results, we developed a complete analysis of both algorithms in LogGP, including the derivation of lower bounds for the reduce-scatter operation, and the study of the m-item version of the problem, i.e., the case when the initial elements are vectors themselves. Reduce-scatter has been included as a collective operation in the MPI standard message passing library, and can be used, for instance, in parallel matrix-vector multiply when the matrix is decomposed by columns. To model a message passing system, we adopted the LogGP model, an extension of LogP that allows the modeling of messages of different length. While this choice makes the analysis somewhat more complex, it leads to more realistic results in the case of gather/scatter algorithms.
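For a power-of-two number of processes and a commutative, associative operator (the setting of the first algorithm above), the standard recursive-halving pattern behind such reduce-scatter algorithms can be simulated directly. This sketch models only the data movement, not the LogGP cost terms:

```python
def reduce_scatter(vectors, op=lambda a, b: a + b):
    """Recursive-halving reduce-scatter for p = 2^k simulated processes.
    vectors[i] is process i's input (length p); afterwards process i holds
    the reduction over all processes of element i. Assumes op is associative
    and commutative."""
    p = len(vectors)
    assert p > 0 and p & (p - 1) == 0, "sketch assumes a power-of-two process count"
    # data[i]: index -> partially reduced value currently held by process i
    data = [dict(enumerate(v)) for v in vectors]
    step = p // 2
    while step >= 1:
        new = []
        for i in range(p):
            partner = i ^ step
            # Process i keeps the half of its indices lying on its own side of
            # the current bit, and combines them with the matching half that
            # its partner sends over (simulated here by direct access).
            keep = {j: v for j, v in data[i].items() if (j & step) == (i & step)}
            recv = {j: v for j, v in data[partner].items() if (j & step) == (i & step)}
            new.append({j: op(keep[j], recv[j]) for j in keep})
        data = new
        step //= 2
    return [data[i][i] for i in range(p)]

result = reduce_scatter([[1, 2, 3, 4], [5, 6, 7, 8],
                         [9, 10, 11, 12], [13, 14, 15, 16]])
```

Each process exchanges a geometrically shrinking half of its data per round, which is why pairwise-exchange schedules of this kind are attractive when message length (the G parameter in LogGP) dominates.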
Lattices are widely used in cryptography since they have the potential to resist quantum attacks. One of the significant problems in such cryptography is the shortest vector problem (SVP): finding the shortest non-zero vector in a lattice. Ajtai proved that the SVP is NP-hard under randomized reductions, and many cryptosystems, such as NTRU, are secure under the assumption that the SVP is hard. On the other hand, some primitives of lattice-based cryptography require relatively short vectors. In this paper, we propose a new SVP algorithm that runs in time O(n³). We also prove that the Hermite factor of the proposed algorithm is polynomially bounded.
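As a point of reference for what the SVP asks, the sketch below enumerates small integer combinations of a basis and returns the shortest non-zero lattice vector found. This bounded brute-force search is exponential and is not the proposed O(n³) algorithm.

```python
from itertools import product

def short_vector_bruteforce(basis, bound=3):
    """Enumerate coefficient vectors in [-bound, bound]^n and return the
    shortest non-zero lattice vector found, with its squared length.
    Exponential reference search, not a practical SVP algorithm."""
    n = len(basis)
    dim = len(basis[0])
    best_norm2, best_vec = None, None
    for coeffs in product(range(-bound, bound + 1), repeat=n):
        if not any(coeffs):
            continue  # skip the zero vector, which is excluded by definition
        vec = [sum(c * basis[i][j] for i, c in enumerate(coeffs))
               for j in range(dim)]
        norm2 = sum(x * x for x in vec)
        if best_norm2 is None or norm2 < best_norm2:
            best_norm2, best_vec = norm2, vec
    return best_vec, best_norm2

vec, norm2 = short_vector_bruteforce([[1, 2], [3, 4]])
```

For the basis [[1, 2], [3, 4]] the combination 2·(1, 2) − (3, 4) = (−1, 0) already has squared length 1, illustrating how a "bad" basis can hide very short lattice vectors.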
We present a 2D triangle mesh simplification model which is able to produce high-quality approximations of any original planar mesh, regardless of the shape of the original mesh. The method consists of two phases: a self-organizing algorithm and a triangulation algorithm. The self-organizing algorithm is an unsupervised incremental clustering algorithm which provides a set of nodes representing the best approximation of the original mesh. The triangulation algorithm reconstructs the simplified mesh from the planar points obtained by the self-organizing training process. Several examples are detailed to demonstrate the ability of the model to simplify an original mesh with an irregular shape. (C) 2010 Elsevier Inc. All rights reserved.
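The flavor of the self-organizing phase can be illustrated with a toy unsupervised incremental clustering pass: a sample within a radius of an existing node nudges that node toward it, otherwise it seeds a new node. This is only a sketch of the general idea, not the authors' algorithm; the radius and learning-rate parameters are invented for the example.

```python
def incremental_clustering(points, radius, rate=0.1):
    """Toy self-organizing pass: each sample either nudges its nearest node
    (if within `radius`) or seeds a new node. Illustrative only."""
    nodes = []
    for p in points:
        if nodes:
            d2, j = min((sum((a - b) ** 2 for a, b in zip(q, p)), i)
                        for i, q in enumerate(nodes))
        if not nodes or d2 > radius * radius:
            nodes.append(list(p))  # sample becomes a new node
        else:
            # winner moves a small step toward the sample
            nodes[j] = [a + rate * (b - a) for a, b in zip(nodes[j], p)]
    return nodes

nodes = incremental_clustering(
    [(0, 0), (0.5, 0), (10, 10), (10, 10.5), (0.2, 0.1)], radius=2)
```

Two well-separated groups of samples produce two representative nodes, which a triangulation phase would then connect into a simplified mesh.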
We consider the money distribution problem for a micro-payment scheme using a distributed server system: in particular, for an automatic charging scheme named PayPerClick that allows Internet users to view Web pages for which access charges are levied without tedious payment procedures. A major bottleneck in the scheme is the network traffic caused by the distribution of electronic money to many different servers. We propose a simple online algorithm for distributing electronic money to servers, so that the network traffic is minimized. The algorithm achieves the optimal online competitive ratio. We also consider a weighted version, for which we give an asymptotically optimal online algorithm within a constant factor.
A key ingredient in branch and bound (B&B) solvers for mixed-integer programming (MIP) is the selection of branching variables since poor or arbitrary selection can affect the size of the resulting search trees by orders of magnitude. A recent article by Le Bodic and Nemhauser (Math Program 166(1-2):369-405, 2017) investigated variable selection rules by developing a theoretical model of B&B trees from which they developed some new, effective scoring functions for MIP solvers. In their work, Le Bodic and Nemhauser left several open theoretical problems, solutions to which could guide the future design of variable selection rules. In this article, we first solve many of these open theoretical problems. We then implement an improved version of the model-based branching rules in SCIP 6.0, a state-of-the-art academic MIP solver, in which we observe an 11% geometric average time and node reduction on instances of the MIPLIB 2017 Benchmark Set that require large B&B trees.
Recurrence formulations for various problems, such as finding an optimal order of matrix multiplication, finding an optimal binary search tree, and optimal triangulation of polygons, assume a similar form. In [2], [5], a CREW PRAM algorithm was given to solve such dynamic programming problems. The algorithm uses O(n⁶/log n) processors and runs in O(log² n) time. In this short note, a modified algorithm is presented that reduces the processor requirement to O(n⁶/log⁵ n) while maintaining the same time complexity of O(log² n).
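The sequential recurrence that these PRAM algorithms parallelize is the classic O(n³) dynamic program, shown here for matrix-chain multiplication (a standard textbook formulation, not code from the note):

```python
def matrix_chain_cost(dims):
    """Minimum scalar multiplications to multiply matrices A1..An, where
    matrix Ai has dimensions dims[i-1] x dims[i]; classic O(n^3) DP over
    the recurrence cost[i][j] = min_k cost[i][k] + cost[k+1][j]
                                      + dims[i-1]*dims[k]*dims[j]."""
    n = len(dims) - 1
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):        # subchain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = min(cost[i][k] + cost[k + 1][j]
                             + dims[i - 1] * dims[k] * dims[j]
                             for k in range(i, j))
    return cost[1][n]

best = matrix_chain_cost([10, 30, 5, 60])
```

Optimal binary search trees and polygon triangulation fit the same interval recurrence with different weight terms, which is why a single PRAM scheme covers all three problems.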