Solving binary-real problems with bio-inspired algorithms is an active research topic. However, the efficiency of the employed algorithm varies drastically when the governing equations are tailored or simply when a "more adequate" parameter setting is adopted. Within this framework, we aim to improve the parameter setting of binary particle swarm optimization (BPSO). We derive a Markov chain model of BPSO. The transition probabilities reveal that the acceleration coefficients control the transition speed between the exploitation and exploration phases; they also indicate a poor exploration ratio in high-dimensional search spaces. Increasing the acceleration coefficients may enhance the exploration ratio, but overly high values have their own shortcomings. Numerical experiments on three different problem sets (e.g., the multidimensional knapsack problem) further confirm the need to increase the acceleration coefficients as the search-space dimension grows. We recommend a set of equations governing the best setting of the acceleration coefficients. Finally, a comparison with other BPSO variants shows the merits of the suggested setting over conventional ones.
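The recommended dimension-dependent setting is not reproduced in the abstract; as a point of reference, the sketch below shows where the acceleration coefficients enter the classical binary PSO update (Kennedy and Eberhart's sigmoid-transfer formulation). The inertia weight, velocity clamp, and default values here are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def bpso_step(x, v, pbest, gbest, c1, c2, w=0.9, vmax=6.0, rng=None):
    """One velocity/position update of classical binary PSO.

    x, v, pbest : arrays of shape (n_particles, n_dims); x and pbest are 0/1.
    gbest       : array of shape (n_dims,), best position found so far.
    c1, c2      : acceleration coefficients (the parameters studied in the paper).
    w, vmax     : inertia weight and velocity clamp -- illustrative defaults only.
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, -vmax, vmax)
    prob_one = 1.0 / (1.0 + np.exp(-v))                # sigmoid transfer function
    x = (rng.random(x.shape) < prob_one).astype(int)   # resample every bit
    return x, v
```

The coefficients c1 and c2 act only through the velocity update, so a dimension-dependent rule of the kind recommended in the paper would be applied when choosing these two arguments.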
The Euclidean algorithm for computing the greatest common divisor of two integers is, as D. E. Knuth has remarked, "the oldest nontrivial algorithm that has survived to the present day." Credit for the first analysis of the running time of the algorithm is traditionally assigned to Gabriel Lamé, for his 1844 paper. This article explores the historical origins of the analysis of the Euclidean algorithm. A weak bound on the running time of this algorithm was given as early as 1811 by Antoine-André-Louis Reynaud. Furthermore, Lamé's basic result was known to Émile Léger in 1837, and a complete, valid proof along different lines was given by Pierre-Joseph-Étienne Finck in 1841. (C) 1994 Academic Press, Inc.
We investigate the average similarity of random strings as captured by the average number of 'cousins' in the underlying tree structures. Analytical techniques including poissonization and the Mellin transform are used for accurate calculation of the mean. The string alphabets we consider are m-ary, and the corresponding trees are m-ary trees. Certain analytic issues arise in the m-ary case that do not have an analog in the binary case.
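The abstract does not define 'cousins' precisely; assuming the common tree-theoretic reading (two nodes are cousins when they sit at the same depth but have different parents), the mean studied analytically in the paper can at least be estimated empirically with a short simulation over random m-ary tries. Everything below, including that definition and the trie model (split a group of random strings by their next symbol until the groups are singletons), is an assumption made only for illustration.

```python
import random
from collections import defaultdict

def build_trie(strings, depth=0):
    """Recursive trie: a group of >= 2 strings becomes an internal node that
    splits its strings by the symbol at position `depth`; a single string is
    a leaf.  Returns {symbol: subtree} for internal nodes, 'leaf' otherwise."""
    if len(strings) <= 1:
        return 'leaf'
    buckets = defaultdict(list)
    for s in strings:
        buckets[s[depth]].append(s)
    return {sym: build_trie(grp, depth + 1) for sym, grp in buckets.items()}

def count_cousin_pairs(trie):
    """Unordered pairs of nodes at equal depth with different parents
    (assumed definition of 'cousins'; the abstract does not spell it out)."""
    per_depth = defaultdict(list)                  # depth -> parent ids of nodes
    def walk(node, depth, parent_id):
        per_depth[depth].append(parent_id)
        if node != 'leaf':
            for child in node.values():
                walk(child, depth + 1, id(node))
    walk(trie, 0, None)
    total = 0
    for parents in per_depth.values():
        k = len(parents)
        sibs = defaultdict(int)
        for p in parents:
            sibs[p] += 1
        # all pairs at this depth, minus pairs sharing a parent (siblings)
        total += k * (k - 1) // 2 - sum(c * (c - 1) // 2 for c in sibs.values())
    return total

# Monte-Carlo estimate of the mean for n random strings over an m-ary alphabet.
m, n, trials = 3, 200, 50
rng = random.Random(1)
est = sum(
    count_cousin_pairs(build_trie(
        [[rng.randrange(m) for _ in range(64)] for _ in range(n)]))  # length 64 is effectively infinite here
    for _ in range(trials)) / trials
print(f"average number of cousin pairs (m={m}, n={n}): {est:.1f}")
```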
It is known that every positive integer n can be represented as a finite sum of the form $\sum_i a_i 2^i$, where $a_i \in \{0, 1, -1\}$ and no two consecutive $a_i$'s are non-zero (the "nonadjacent form", NAF). Recently, Muir and Stinson [14, 15] investigated other digit sets of the form $\{0, 1, x\}$ such that each integer has a nonadjacent representation (such a number x is called admissible). The present paper continues this line of research. The topics covered include transducers that translate the standard binary representation into such a NAF; a careful topological study of the (exceptional) set, of fractal nature, of those numbers for which no finite look-ahead suffices to construct the NAF from left to right; counting the number of digits 1 (resp. x) in a (random) representation; and the non-optimality of the representations if x is different from 3 or -1.
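For the classical digit set {0, 1, -1}, the NAF can be computed by a well-known right-to-left rule: an odd n contributes the digit 2 - (n mod 4), which forces the next digit to be 0. The sketch below shows that rule only; it is not the left-to-right transducer, nor the generalized digit sets {0, 1, x}, studied in the paper.

```python
def naf(n):
    """Right-to-left computation of the {0, 1, -1} nonadjacent form.
    Returns digits least-significant first; no two consecutive digits
    are non-zero, and sum(d * 2**i) == n."""
    digits = []
    while n > 0:
        if n % 2 == 1:
            d = 2 - (n % 4)      # 1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d               # subtracting d makes n divisible by 4
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

assert naf(7) == [-1, 0, 0, 1]   # 7 = -1 + 8
assert all(sum(d << i for i, d in enumerate(naf(n))) == n for n in range(1, 200))
```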
System developers frequently encounter programs that involve multiple stacks, each with a dynamically varying size. Keeping several such stacks in a common area under sequential allocation is troublesome: imposing a maximum size on each stack is undesirable because the sizes are usually unpredictable, so overflow must be handled. An overflow signals that a stack's region is full even though more items remain to be pushed. One remedy is to reallocate memory, making room for the overflowed stack by taking space from stacks that are not yet full. Knuth (1973) proposed a simple reallocation scheme based on move operations; he analyzed the average number of movements when overflow occurs and derived a formula involving the number of stacks and the number of pushed items. This paper examines the worst-case sequence of pushed data and establishes some interesting properties of it.
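To make the reallocation idea concrete, here is a minimal sketch (not Knuth's algorithm from The Art of Computer Programming, whose policy also distributes free space in proportion to recent stack growth) of k stacks sharing one array: each stack grows inside its own segment, and on overflow the free space is redistributed by shifting segments, each shifted element counting as one movement. For simplicity the repacking copies into a fresh array; an in-place version would shift the segments directly.

```python
class SharedStacks:
    """k stacks packed into one array of n slots.  Stack i lives in
    mem[base[i] : base[i+1]] and grows upward; top[i] is its next free slot.
    When a stack overflows its segment, _repack() redistributes the free
    space; every element of a shifted segment counts as one movement."""

    def __init__(self, k, n):
        self.mem = [None] * n
        self.base = [i * n // k for i in range(k)] + [n]   # segment boundaries
        self.top = self.base[:k]                           # next free slot per stack
        self.moves = 0                                     # movements performed so far

    def push(self, i, item):
        if self.top[i] == self.base[i + 1]:                # segment i is full
            self._repack(i)
        self.mem[self.top[i]] = item
        self.top[i] += 1

    def pop(self, i):
        if self.top[i] == self.base[i]:
            raise IndexError("pop from empty stack")
        self.top[i] -= 1
        return self.mem[self.top[i]]

    def _repack(self, j):
        """Give every stack its contents plus an equal share of the free space;
        remainder slots go to stack j (the overflowed one) first, so it is
        guaranteed room.  A deliberately simple policy, for illustration only."""
        k, n = len(self.top), len(self.mem)
        sizes = [t - b for b, t in zip(self.base, self.top)]
        free = n - sum(sizes)
        if free == 0:
            raise MemoryError("all stacks together fill the entire array")
        extra = [free // k] * k
        for r in range(free % k):
            extra[(j + r) % k] += 1
        new_base, acc = [], 0
        for i in range(k):
            new_base.append(acc)
            acc += sizes[i] + extra[i]
        new_base.append(n)
        new_mem = [None] * n
        for i in range(k):
            new_mem[new_base[i]:new_base[i] + sizes[i]] = self.mem[self.base[i]:self.top[i]]
            if new_base[i] != self.base[i]:
                self.moves += sizes[i]                     # stack i's elements were moved
        self.mem, self.base = new_mem, new_base
        self.top = [new_base[i] + sizes[i] for i in range(k)]

s = SharedStacks(2, 8)          # two stacks in 8 slots, segments of 4 each
for x in range(4):
    s.push(1, x)                # fill stack 1's segment
s.push(0, 99)                   # stack 0 holds one element
s.push(1, 4)                    # stack 1 overflows -> repack moves its 4 elements
print("movements:", s.moves)    # 4 in this run
```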
For an ordered file of records with uniformly distributed key values, we examine an existing batched searching algorithm based on recursive use of interpolation searches. The algorithm, called Recursive Batched Interpolation Search (RBIS) in this paper, uses a divide-and-conquer technique for batched searching. The expected-case complexity of the algorithm is shown to be O(m log log(2n/m) + m), where n is the size of the file and m is the size of the query batch. Simulations demonstrate the savings of batched searching using RBIS and compare alternative batched searching algorithms based on either interpolation search or binary search. When the file's key values are uniformly distributed, the simulation results confirm that interpolation-search-based algorithms are superior to binary-search-based algorithms. However, when the file's key values are not uniformly distributed, a straightforward batched interpolation search deteriorates quickly as the batch size increases, but algorithm RBIS still outperforms binary-search-based algorithms once the batch size passes a threshold value.
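The following sketch illustrates the divide-and-conquer idea described above, not the paper's RBIS procedure verbatim: resolve the middle key of the (sorted) batch by interpolation search, then recurse on the left half of the batch against the left part of the file and on the right half against the right part. Sorted, distinct integer query keys are assumed for simplicity.

```python
def interpolation_locate(a, key, lo, hi):
    """Interpolation search in the sorted slice a[lo..hi] (integer keys).
    Returns (found, i): found is True when a[i] == key; otherwise i is the
    first position in lo..hi+1 whose element is >= key."""
    while lo <= hi:
        if key < a[lo]:
            return False, lo
        if key > a[hi]:
            return False, hi + 1
        if a[lo] == a[hi]:
            return True, lo                     # here key == a[lo] == a[hi]
        pos = lo + (hi - lo) * (key - a[lo]) // (a[hi] - a[lo])
        if a[pos] == key:
            return True, pos
        if a[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return False, lo

def batched_search(a, queries, lo=0, hi=None):
    """Divide-and-conquer batched search: look up the middle key of the
    sorted batch, then recurse on each half of the batch against the
    corresponding part of the file.  Returns {key: index or -1}."""
    if hi is None:
        hi = len(a) - 1
    if not queries:
        return {}
    if lo > hi:
        return {q: -1 for q in queries}
    mid = len(queries) // 2
    key = queries[mid]
    found, pos = interpolation_locate(a, key, lo, hi)
    out = {key: pos if found else -1}
    out.update(batched_search(a, queries[:mid], lo, pos - 1))
    out.update(batched_search(a, queries[mid + 1:], pos + (1 if found else 0), hi))
    return out

a = list(range(0, 999, 3))                    # sorted, uniformly spread keys
print(batched_search(a, [3, 10, 300, 996]))   # 3 -> 1, 10 -> -1 (absent), 300 -> 100, 996 -> 332
```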
This paper analyzes the early-insertion standard coalesced hashing method (EISCH), which is a variant of the standard coalesced hashing algorithm (SCH) described in [Knu73], [Vit80] and [Vit82b]. The analysis answers the open problem posed in [Vit80]. The number of probes per successful search in full tables is 5% better with EISCH than with SCH.
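The abstract does not restate the two insertion rules; the sketch below follows the usual textbook description of coalesced hashing with no cellar, where the only difference between the variants is where a colliding record is linked into its chain: right after its home slot (early insertion, EISCH) or at the end of the chain (standard, SCH). The table size, hash function, and probe-counting experiment are illustrative assumptions, not taken from [Vit80].

```python
import random

EMPTY = object()

class CoalescedHash:
    """Coalesced hashing in a table of m slots with no cellar.  Each slot
    holds a key and a link to the next slot of its chain.  early=True gives
    EISCH (a colliding key is linked right after its home slot); early=False
    gives SCH (it is appended at the end of the chain)."""

    def __init__(self, m, early=True):
        self.key = [EMPTY] * m
        self.link = [None] * m
        self.early = early
        self.free = m - 1                 # cursor scanning for empty slots, high end first
        self.m = m

    def _home(self, k):
        return hash(k) % self.m

    def insert(self, k):
        i = self._home(k)
        if self.key[i] is EMPTY:
            self.key[i] = k
            return
        while self.key[self.free] is not EMPTY:   # locate an empty slot
            self.free -= 1
            if self.free < 0:
                raise MemoryError("table full")
        j = self.free
        self.key[j] = k
        if self.early:                    # EISCH: splice in right after the home slot
            self.link[j] = self.link[i]
            self.link[i] = j
        else:                             # SCH: append at the end of the chain
            while self.link[i] is not None:
                i = self.link[i]
            self.link[i] = j

    def probes(self, k):
        """Number of probes of a successful search for k (None if absent)."""
        i, count = self._home(k), 1
        while True:
            if self.key[i] == k:
                return count
            if self.link[i] is None:
                return None
            i, count = self.link[i], count + 1

keys = random.sample(range(10**6), 1000)
for early in (True, False):
    t = CoalescedHash(1000, early)        # a full table, as in the abstract
    for k in keys:
        t.insert(k)
    print("EISCH" if early else "SCH  ", sum(t.probes(k) for k in keys) / len(keys))
```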
In this paper we explore the use of 2-3 trees to represent sorted lists. We analyze the worst-case cost of sequences of insertions and deletions in 2-3 trees under each of the following three assumptions: (i) only insertions are performed; (ii) only deletions are performed; (iii) deletions occur only at the small end of the list and insertions occur only away from the small end. Our analysis leads to a data structure for representing sorted lists when the access pattern exhibits a (perhaps time-varying) locality of reference. This structure has many of the properties of the representation proposed by Guibas, McCreight, Plass and Roberts [A new representation for linear lists, Proc. Ninth Annual Symposium on Theory of Computing, Boulder, CO, 1977, pp. 49–60], but it is substantially simpler and may be practical for lists of moderate size.
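To make the cost measure concrete, here is a minimal sketch of bottom-up insertion into a 2-3 tree (treated as a B-tree whose nodes hold one or two keys) that counts node splits, the dominant rebalancing cost under assumption (i), insertions only. This is the generic textbook insertion procedure, not the locality-of-reference list representation developed in the paper.

```python
class Node:
    def __init__(self, keys=None, children=None):
        self.keys = keys or []          # 1 or 2 keys between rebalancings
        self.children = children or []  # [] for a leaf, else len(keys)+1 subtrees

splits = 0                              # node splits performed so far

def insert(root, key):
    """Insert key and return the (possibly new) root of the 2-3 tree."""
    promoted = _insert(root, key)
    if promoted is None:
        return root
    mid, right = promoted               # the old root split: grow a new root
    return Node([mid], [root, right])

def _insert(node, key):
    """Recursive insertion; returns (middle_key, new_right_sibling) if this
    node overflowed and split, else None."""
    global splits
    if not node.children:               # leaf: just add the key
        node.keys.append(key)
        node.keys.sort()
    else:                               # internal node: descend
        i = sum(k < key for k in node.keys)
        promoted = _insert(node.children[i], key)
        if promoted is None:
            return None
        mid, right = promoted
        node.keys.insert(i, mid)
        node.children.insert(i + 1, right)
    if len(node.keys) <= 2:
        return None
    splits += 1                         # 3 keys: split and push the middle key up
    mid = node.keys[1]
    right = Node(node.keys[2:], node.children[2:])
    node.keys = node.keys[:1]
    node.children = node.children[:2]
    return mid, right

root = Node()
for k in range(1, 1001):                # insertions only, as in assumption (i)
    root = insert(root, k)
print("splits per insertion:", splits / 1000)
```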
The search rearrangement backtracking algorithm of Bitner and Reingold [Comm. ACM, 18 (1975), pp. 651–655] introduces at each level of the backtrack tree a variable with a minimal number of remaining values; the search order may differ on different branches. For conjunctive normal form formulas with v variables, s literals per term $(s \geqq 3)$, and $v^\alpha$ terms $((s/2) < \alpha < s)$, the average number of nodes in a search rearrangement backtrack tree is $\exp[\Theta(v^{(s - \alpha - 1)/(s - 2)})]$ (i.e., for some positive constants $a_1, a_2$, and $v_0$, when $v \geqq v_0$ the number of nodes is between $\exp(a_1 v^{(s - \alpha - 1)/(s - 2)})$ and $\exp(a_2 v^{(s - \alpha - 1)/(s - 2)})$). For $1 < \alpha \leqq s/2$ the average number of nodes is between $\exp[\Theta(v^{(s - \alpha - 1)/(s - 2)})]$ and $\exp[\Theta((\ln v)^{(s - 1)/(s - 2)} v^{(s - \alpha - 1)/(s - 2)})]$. This compares with $\exp[\Theta(v^{(s - \alpha)/(s - 1)})]$ for ordinary backtracking. For $1 < \alpha < s$, simple search rearrangement has approximately the same effect on speeding up backtracking as does reducing the problem complexity by decreasing the number of literals per term by one. Thus simple search rearrangement backtracking leads to a dramatic improvement in the expected running time.
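As an illustration of the rearrangement idea (one plausible rendering of it, not the exact procedure analyzed in the paper), the sketch below counts backtrack-tree nodes for a CNF formula when every node branches on an unassigned variable with the fewest values still consistent with the clauses decided so far.

```python
def count_tree_nodes(clauses, n_vars):
    """Count (solutions, backtrack-tree nodes) for a CNF formula whose clauses
    are tuples of signed literals, e.g. (1, -3) means x1 OR NOT x3.  Each node
    branches on an unassigned variable having the fewest values still
    consistent with the current partial assignment (search rearrangement)."""
    nodes = 0

    def lit_value(lit, assign):
        v = assign.get(abs(lit))
        return None if v is None else (v if lit > 0 else not v)

    def violates(assign):
        # a clause is violated once all of its literals are assigned false
        return any(all(lit_value(l, assign) is False for l in c) for c in clauses)

    def remaining(x, assign):
        vals = []
        for v in (False, True):
            assign[x] = v
            if not violates(assign):
                vals.append(v)
            del assign[x]
        return vals

    def search(assign):
        nonlocal nodes
        nodes += 1
        free = [x for x in range(1, n_vars + 1) if x not in assign]
        if not free:
            return 1                          # total assignment, no clause violated
        x = min(free, key=lambda y: len(remaining(y, assign)))  # rearrangement step
        total = 0
        for v in remaining(x, assign):
            assign[x] = v
            total += search(assign)
            del assign[x]
        return total

    return search({}), nodes

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
print(count_tree_nodes([(1, 2), (-1, 3), (-2, -3)], 3))   # prints (solutions, nodes)
```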