An optimization search algorithm for multivariate systems is proposed. The proposed optimization search algorithm comprises initialization, displacement-shrink, and search-area stages. The initialization stage gen...
We consider the ANTS problem (Feinerman et al.) in which a group of agents collaboratively search for a target in a two-dimensional plane. Because this problem is inspired by the behavior of biological species, we argue that in addition to studying the time complexity of solutions it is also important to study the selection complexity, a measure of how likely a given algorithmic strategy is to arise in nature due to selective pressures. In more detail, we propose a new selection complexity metric chi, defined for algorithm A such that chi(A) = b + log l, where b is the number of memory bits used by each agent and l bounds the fineness of available probabilities (agents use probabilities of at least 1/2^l). Intuitively, the larger the chi value, the more complicated the algorithm, and therefore the less likely it is to arise in nature. In this paper, we study the trade-off between the standard performance metric of speed-up, which measures how the expected time to find the target improves with n, and our new selection metric. Our goal is to determine the thresholds of algorithmic complexity needed to enable efficient search. In particular, consider n agents searching for a treasure located within some distance D from the origin (where n is subexponential in D). For this problem, we identify the threshold log log D as crucial for our selection complexity metric. We first prove a new upper bound that achieves a near-optimal speed-up for chi(A) approximately log log D + O(1). In particular, for l in O(1), the speed-up is asymptotically optimal. By comparison, the existing results for this problem (Feinerman et al.) that achieve similar speed-up require chi(A) = Omega(log D). We then show that this threshold is tight by describing a lower bound showing that if chi(A) < log log D - omega(1), then with high probability the target is not found in D^(2-o(1)) moves per agent. Hence, there is a sizable gap with respect to the straightforward Omega(D^2/n + D) lower bound.
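The metric chi(A) = b + log l can be made concrete with a small sketch. This is purely illustrative: the parameter choices below (b tracking log log D versus log D bits, coarse probabilities with l = 2) are hypothetical stand-ins for the two regimes the abstract contrasts, not the paper's actual algorithms.

```python
import math

def selection_complexity(b: int, l: int) -> float:
    """chi(A) = b + log2(l): b memory bits per agent, and agents may only
    use probabilities of at least 1/2**l. Illustrative helper only."""
    return b + math.log2(l)

D = 2 ** 16  # search radius (hypothetical)

# An algorithm in the new chi ~ log log D regime (hypothetical parameters):
b_new = math.ceil(math.log2(math.log2(D)))   # 4 bits of memory
# Versus the chi = Omega(log D) regime of the earlier results:
b_old = math.ceil(math.log2(D))              # 16 bits of memory

print(selection_complexity(b_new, 2))  # 5.0
print(selection_complexity(b_old, 2))  # 17.0
```

The gap between the two chi values grows quickly with D, which is why the log log D threshold matters for how plausibly such a strategy could arise under selective pressure.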
We make a number of contributions to the study of the Quantified Constraint Satisfaction Problem (QCSP). The QCSP is an extension of the constraint satisfaction problem that can be used to model combinatorial problems containing contingency or uncertainty. It allows for universally quantified variables that can model uncertain actions and events, such as the unknown weather for a future party, or an opponent's next move in a game. In this paper we report significant contributions to two very different methods for solving QCSPs. The first approach is to implement special purpose algorithms for QCSPs; and the second is to encode QCSPs as Quantified Boolean Formulas and then use specialized QBF solvers. The discovery of particularly effective encodings influenced the design of more effective algorithms: by analyzing the properties of these encodings, we identify the features in QBF solvers responsible for their efficiency. This enables us to devise analogues of these features in QCSPs, and implement them in special purpose algorithms, yielding an effective special purpose solver, QCSP-Solve. Experiments show that this solver and a highly optimized QBF encoding are several orders of magnitude more efficient than the initially developed algorithms. A final, but significant, contribution is the identification of flaws in simple methods of generating random QCSP instances, and a means of generating instances which are not known to be flawed. (c) 2007 Elsevier B.V. All rights reserved.
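The semantics of a QCSP can be sketched with a brute-force recursive evaluator over finite domains. This is a minimal illustration of what "solving a QCSP" means, nothing like QCSP-Solve or a QBF encoding; the representation (a quantifier prefix list plus constraint predicates) is an assumption of this sketch.

```python
def solve_qcsp(prefix, domains, constraints, assignment=None):
    """Evaluate a QCSP. prefix: list of ('forall'|'exists', var) pairs in
    quantification order; domains: {var: finite domain}; constraints:
    predicates over a complete assignment dict. Brute force, for exposition."""
    assignment = assignment or {}
    if not prefix:
        return all(c(assignment) for c in constraints)
    quant, var = prefix[0]
    results = (solve_qcsp(prefix[1:], domains, constraints,
                          {**assignment, var: v})
               for v in domains[var])
    return all(results) if quant == 'forall' else any(results)

# Quantifier order matters: "for every y there exists x != y" holds,
# but "there exists x differing from every y" does not.
dom = {'x': [0, 1], 'y': [0, 1]}
ne = [lambda a: a['x'] != a['y']]
print(solve_qcsp([('forall', 'y'), ('exists', 'x')], dom, ne))  # True
print(solve_qcsp([('exists', 'x'), ('forall', 'y')], dom, ne))  # False
```

The exponential blow-up of this naive recursion is exactly why the special purpose algorithms and QBF encodings studied in the paper are needed.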
In this paper, we describe our progress during the last four years (1995-1999) in automatic transcription of broadcast news from radio and television using the BBN Byblos speech recognition system. Overall, we achieved steady progress as reflected through the results of the last four DARPA Hub-4 evaluations, with word error rates of 42.7%, 31.8%, 20.4% and 14.7% in 1995, 1996, 1997 and 1998, respectively. This progress can be attributed to improvements in acoustic modeling, channel and speaker adaptation, and search algorithms, as well as dealing with specific characteristics of the real-life variable speech found in broadcast news. Besides improving recognition accuracy, we also succeeded in developing several algorithms to achieve close-to-real-time recognition speed without a significant sacrifice in recognition accuracy. (C) 2002 Elsevier Science B.V. All rights reserved.
We study local optima of combinatorial optimization problems. We show that a local search algorithm can be represented as a digraph and apply recent results for spanning forests of a digraph. We establish a correspondence between the number of local optima and the algebraic multiplicities of eigenvalues of digraph Laplacians. We apply our findings to the three-dimensional assignment problem.
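The digraph representation can be sketched directly: each solution gets an arc to its best improving neighbor, and local optima are precisely the sinks. This is only the construction step, under the assumption of a best-improvement move rule; the paper's spectral results about spanning forests and Laplacian eigenvalues are not reproduced here.

```python
def local_search_digraph(solutions, neighbors, cost):
    """Build the digraph of a best-improvement local search: each solution
    points to its cheapest improving neighbor, if one exists. Solutions with
    no outgoing arc (sinks) are the local optima."""
    arcs = {}
    for s in solutions:
        better = [t for t in neighbors(s) if cost(t) < cost(s)]
        if better:
            arcs[s] = min(better, key=cost)
    sinks = [s for s in solutions if s not in arcs]
    return arcs, sinks

# Toy 1-D landscape: positions 0..5 with the costs below; neighbors are i +/- 1.
vals = [3, 1, 2, 0, 2, 1]
nbrs = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(vals)]
arcs, optima = local_search_digraph(range(len(vals)), nbrs, lambda i: vals[i])
print(optima)  # [1, 3, 5] -- the three local minima
```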
We consider a generalization of the classical group testing problem. Let us be given a sample contaminated with a chemical substance. We want to estimate the unknown concentration c of this substance in the sample. There is a threshold indicator which can detect whether the concentration is at least a known threshold. We consider both the case when the threshold indicator does not affect the tested units and the more difficult case when the threshold indicator destroys the tested units. For both cases, we present a family of efficient algorithms each of which achieves a good approximation of c using a small number of tests and of auxiliary resources. Each member of the family provides a different tradeoff between the number of tests and the use of the other resources employed by the algorithm. Previously known algorithms for this problem use more tests than most of our algorithms do. (C) 2001 Elsevier Science B.V. All rights reserved.
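For the non-destructive case, the basic idea of estimating c from threshold answers alone can be sketched as a bisection over candidate thresholds. This is a generic sketch, not one of the paper's algorithms: the paper's family trades tests against auxiliary resources (and handles destructive tests), whereas this sketch spends one test per halving step.

```python
def estimate_concentration(at_least, lo=0.0, hi=1.0, tests=10):
    """Estimate an unknown concentration c in [lo, hi] using only a
    threshold indicator at_least(t), which reports whether c >= t.
    Bisection sketch for the non-destructive case: after `tests` queries
    the interval has width (hi - lo) / 2**tests."""
    for _ in range(tests):
        mid = (lo + hi) / 2
        if at_least(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Simulated sample with true concentration 0.37 (hypothetical value):
est = estimate_concentration(lambda t: 0.37 >= t)
print(abs(est - 0.37) < 1e-3)  # True: 10 tests pin c to within 2**-11
```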
Geometrical complexity considered in the present study is characterized by a large degree of freedom in placement of constituents in a composite system. The constituent sizes are not negligibly small compared to the system's expanse, so approximations of their configurations and placement patterns by some simple equivalents are not feasible. Numerical analysis is a means to estimate heat transfer performance of the system, but a strategy is required to navigate through a vast number of possible constituent patterns to find ones that produce higher heat transfer performance. In the proposed methodology the constituent pattern is captured as a two-dimensional mosaic image of solid cells embedded in a substrate. On an image assumed as a starter, the singular-value decomposition (SVD) analysis is performed to find its (SVD) building block elements. By shuffling the building block elements, variants of the starter image are created. Heat transfer analysis is performed on sample systems that are drawn from the ensemble of variants. Using the Taguchi method and through a genetic algorithm-type reasoning, those element arrangements that do not significantly affect the heat transfer performance are weeded out. The methodology is demonstrated on the cases of heat conduction through composite slabs. (C) 2003 Elsevier Science Ltd. All rights reserved.
We develop a queueing model for analyzing resource replication strategies in wireless sensor networks. The model can be used either to minimize the total transmission rate of the network (an energy-centric approach) or to ensure that the proportion of query failures does not exceed a predetermined threshold (a failure-centric approach). The model explicitly considers the limited availability of network resources, as well as the frequency of resource requests and query deadlines, to determine the optimal replication strategy for a network resource. While insufficient resource replication increases query failures and transmission rates, replication levels beyond the optimum result in only marginal decreases in the proportion of query failures at a cost of higher total energy expenditure and network traffic. Published by Elsevier B.V.
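The diminishing-returns effect described above can be illustrated with a toy independence model: if each replica independently fails a query with some probability, adding replicas shrinks the failure proportion geometrically. This is not the paper's queueing model (which accounts for request rates and deadlines); the per-replica failure probability here is a hypothetical stand-in.

```python
def failure_proportion(p_single, replicas):
    """Proportion of queries that miss every replica, assuming each of the
    `replicas` independent copies fails a query with probability p_single.
    Toy model for illustrating diminishing returns only."""
    return p_single ** replicas

def min_replicas(p_single, threshold):
    """Smallest replication level keeping the failure proportion at or
    below `threshold` (the failure-centric objective, in toy form)."""
    k = 1
    while failure_proportion(p_single, k) > threshold:
        k += 1
    return k

print(failure_proportion(0.5, 3))   # 0.125
print(min_replicas(0.2, 0.01))      # 3: 0.2**2 = 0.04 > 0.01, 0.2**3 <= 0.01
```

Past the minimal level, each extra replica buys a smaller absolute reduction in failures while every replica adds energy and traffic cost, which is the trade-off the model captures.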
This paper focuses on two issues, first pursuing the idea of algorithmic design through genetic programming (GP), and, second, introducing a novel approach for analyzing and understanding the evolved solution trees. Considering the problem of list search, we evolve iterative algorithms for searching for a given key in an array of integers, showing that both correct linear-time and far more efficient logarithmic-time algorithms can be repeatedly designed by Darwinian means. Next, we turn to the (evolved) dish of spaghetti (code) served by GP. Faced with the all-too-familiar conundrum of understanding convoluted, and usually bloated, GP-evolved trees, we present a novel analysis approach, based on ideas borrowed from the field of bioinformatics. Our system, dubbed G-PEA (GP Post-Evolutionary Analysis), consists of two parts: (1) Defining a functionality-based similarity score between expressions, G-PEA uses this score to find subtrees that carry out similar semantic tasks; (2) Clustering similar sub-expressions from a number of independently evolved fit solutions, thus identifying important semantic building blocks ensconced within the hard-to-read GP trees. These blocks help identify the important parts of the evolved solutions and are a crucial step in understanding how they work. Other related GP aspects, such as code simplification, bloat control, and building-block preserving crossover, may be extended by applying the concepts we present.
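For reference, the two target behaviors the paper evolves can be written by hand as the familiar linear-time and logarithmic-time list-search routines. These hand-written versions are only a baseline for what the GP system discovers; the evolved trees are, as the abstract notes, far less readable than this.

```python
def linear_search(arr, key):
    """O(n) scan: the simpler of the two search behaviors the paper
    evolves (hand-written here for reference, not GP output)."""
    for i, v in enumerate(arr):
        if v == key:
            return i
    return -1

def binary_search(arr, key):
    """O(log n) search on a sorted array: the more efficient behavior
    the paper shows GP can also evolve."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == key:
            return mid
        if arr[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

arr = [1, 3, 5, 7, 9]
print(linear_search(arr, 7), binary_search(arr, 7))  # 3 3
print(linear_search(arr, 4), binary_search(arr, 4))  # -1 -1
```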
Deep learning algorithms are used in various advanced applications, including computer vision, large language models and many others, due to their increasing success over traditional approaches. Enabling these algorithms on various embedded systems is essential to extending the success of deep learning methods to cutting-edge real-world systems and applications. The configurability of embedded systems and software applications makes them adaptable to different performance requirements such as latency, power, memory and accuracy. However, the vast number of hardware-software configuration combinations makes multi-objective optimization with respect to multiple performance metrics highly complex. Additionally, the lack of an analytical form of the problem makes it harder to identify Pareto-optimal configurations. To address these challenges, here we propose a fast, accurate and scalable search algorithm to efficiently explore these search spaces. We evaluate our algorithm, together with state-of-the-art methods, with different deep learning applications running on different hardware configurations, creating different search spaces, and show that our algorithm outperforms the other existing approaches by finding the Pareto frontier more accurately and up to 15 times faster.
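The objective the search targets, the Pareto frontier over configurations, can be stated in a few lines. This brute-force sketch only defines what "non-dominated" means for metrics to be minimized (e.g., latency and power, hypothetical values below); the paper's contribution is precisely avoiding such exhaustive evaluation of every hardware-software configuration.

```python
def pareto_frontier(configs):
    """Return the configurations not dominated by any other, where each
    config is a tuple of metrics to minimize. A dominates B if A is no
    worse in every metric and differs from B. Brute force, for exposition."""
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    return [c for c in configs
            if not any(dominates(o, c) for o in configs)]

# Hypothetical (latency_ms, power_w) measurements for five configurations:
configs = [(10, 5), (8, 7), (12, 4), (9, 9), (8, 6)]
print(pareto_frontier(configs))  # [(10, 5), (12, 4), (8, 6)]
```

Here (8, 7) and (9, 9) fall off the frontier because (8, 6) matches or beats them in both metrics; the remaining three each trade latency against power.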