Creating catchy slogans is a demanding and clearly creative job for ad agencies. The process of slogan creation by humans involves finding key concepts of the company and its products, and developing a memorable short...
It is common knowledge that there is no single best strategy for graph clustering, which justifies a plethora of existing approaches. In this paper, we present a general memetic algorithm, VieClus, to tackle the graph...
Fuzzing has become the de facto standard technique for finding software vulnerabilities. However, even state-of-the-art fuzzers are not very efficient at finding hard-to-trigger software bugs. Most popular fuzzers use evolutionary guidance to generate inputs that can trigger different bugs. Such evolutionary algorithms, while fast and simple to implement, often get stuck in fruitless sequences of random mutations. Gradient-guided optimization presents a promising alternative to evolutionary guidance. Gradient-guided techniques have been shown to significantly outperform evolutionary algorithms at solving high-dimensional structured optimization problems in domains like machine learning by efficiently utilizing gradients or higher-order derivatives of the underlying function. However, gradient-guided approaches are not directly applicable to fuzzing, as real-world program behaviors contain many discontinuities, plateaus, and ridges where gradient-based methods often get stuck. We observe that this problem can be addressed by creating a smooth surrogate function approximating the target program's discrete branching behavior. In this paper, we propose a novel program smoothing technique using surrogate neural network models that can incrementally learn smooth approximations of a complex, real-world program's branching behaviors. We further demonstrate that such neural network models can be used together with gradient-guided input generation schemes to significantly increase the efficiency of the fuzzing process. Our extensive evaluations demonstrate that NEUZZ significantly outperforms 10 state-of-the-art graybox fuzzers on 10 popular real-world programs, both at finding new bugs and at achieving higher edge coverage. NEUZZ found 31 previously unknown bugs (including two CVEs) that other fuzzers failed to find in 10 real-world programs and achieved 3X more edge coverage than all of the tested graybox fuzzers over 24-hour runs. Furthermore, NEUZZ also outperformed existing...
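The smoothing-plus-gradient loop this abstract describes lends itself to a short illustration. Below is a minimal sketch, not NEUZZ's actual code: it assumes fixed-length byte inputs, an AFL-style edge-coverage bitmap per input, and illustrative names and sizes (`surrogate`, `gradient_mutate`, `INPUT_LEN`, `N_EDGES`). A feed-forward network stands in for the learned smooth approximation of branching behavior, and its input gradient picks the bytes to mutate.

```python
# Hedged sketch of NEUZZ-style gradient-guided mutation (not the authors' code).
# Assumes a seed corpus of fixed-length byte inputs with one coverage bitmap
# per input, e.g. collected with AFL-style instrumentation.
import torch
import torch.nn as nn

INPUT_LEN, N_EDGES = 512, 2048  # assumed sizes

# Smooth surrogate: maps input bytes to predicted edge-coverage probabilities.
surrogate = nn.Sequential(
    nn.Linear(INPUT_LEN, 1024), nn.ReLU(),
    nn.Linear(1024, N_EDGES), nn.Sigmoid(),
)

def train_surrogate(inputs, bitmaps, epochs=10):
    """Incrementally fit the NN to observed (input, coverage) pairs."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(surrogate(inputs), bitmaps)
        loss.backward()
        opt.step()

def gradient_mutate(seed_bytes, target_edge, k=16):
    """Mutate the k bytes with the largest gradient toward an unseen edge."""
    x = seed_bytes.clone().detach().requires_grad_(True)
    surrogate(x)[target_edge].backward()
    top = x.grad.abs().topk(k).indices
    mutant = seed_bytes.clone()
    # Push each selected byte in the direction that raises the edge score.
    mutant[top] = (mutant[top] + 255 * x.grad[top].sign()).clamp(0, 255)
    return mutant

seed = torch.randint(0, 256, (INPUT_LEN,)).float()
mutant = gradient_mutate(seed, target_edge=7)  # aim at an uncovered edge
```

In a full fuzzer this would run as a loop: execute the mutants, collect the new coverage, and retrain the surrogate on the enlarged corpus, which is the incremental learning the abstract refers to.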
Modern deep learning systems rely on (a) a hand-tuned neural network topology, (b) massive amounts of labelled training data, and (c) extensive training over large-scale compute resources to build a system that can pe...
Deep Reinforcement Learning (DRL) algorithms have been successfully applied to a range of challenging control tasks. However, these methods typically suffer from three core difficulties: temporal credit assignment wit...
Convolutional neural networks are among the most successful image classifiers, but adapting their network architecture to a particular problem is computationally expensive. We show that an evolutionary algorit...
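Although the abstract is truncated here, the general recipe it points to, an evolutionary search over network topologies, can be sketched. Everything below is an illustrative assumption: the genome is a list of convolution-layer channel counts, and `evaluate` is a placeholder for training the candidate network and measuring validation accuracy.

```python
# Hedged sketch of evolving a CNN topology: a genome is a list of layer
# widths, mutated and selected on (here, faked) validation accuracy.
import random

def mutate(genome):
    """Add, drop, or resize a layer (each genome entry is a channel count)."""
    g = list(genome)
    op = random.choice(["add", "drop", "resize"])
    if op == "add":
        g.insert(random.randrange(len(g) + 1), random.choice([16, 32, 64]))
    elif op == "drop" and len(g) > 1:
        g.pop(random.randrange(len(g)))
    else:
        i = random.randrange(len(g))
        g[i] = max(8, g[i] + random.choice([-16, 16]))
    return g

def evaluate(genome):
    """Placeholder fitness: stands in for accuracy after training."""
    return -abs(len(genome) - 4) - abs(sum(genome) - 160) / 100

random.seed(0)
pop = [[32, 32] for _ in range(10)]
for _ in range(50):
    pop = sorted(pop, key=evaluate, reverse=True)[:5]      # truncation selection
    pop += [mutate(random.choice(pop)) for _ in range(5)]  # offspring
print(sorted(pop, key=evaluate, reverse=True)[0])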
A fast and accurate fit program is presented for deconvolution of one-dimensional solid-state quadrupolar NMR spectra of powdered materials. Computational costs of the synthesis of theoretical spectra are reduced by the use of libraries containing simulated time/frequency-domain data. These libraries are calculated once, with the use of second-party simulation software readily available in the NMR community, to ensure maximum flexibility and accuracy with respect to experimental conditions. EASY-GOING deconvolution (EGdeconv) is equipped with evolutionary algorithms that provide robust many-parameter fitting, and it offers efficient parallelised computing. The program supports quantification of relative chemical site abundances and (dis)order in the solid state by incorporating (extended) Czjzek and order-parameter models. To illustrate EGdeconv's current capabilities, we provide three case studies. Given the program's simple concept, it allows straightforward extension to include other NMR interactions. The program is available as is for 64-bit Linux operating systems.
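To make the library-based fitting concrete, here is a hedged sketch in the same spirit, not EGdeconv's implementation: it assumes a precomputed library of simulated spectra (`lib`) on the same frequency grid as the experimental spectrum, and it uses SciPy's differential evolution as a stand-in for EGdeconv's evolutionary many-parameter fit of relative site abundances.

```python
# Minimal sketch of library-based spectrum deconvolution with an
# evolutionary fit. The library rows are placeholders; in practice they
# would be simulated once with external NMR software.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
n_points, n_sites = 1024, 3
lib = rng.random((n_sites, n_points))         # placeholder simulated spectra
exp_spec = lib.T @ np.array([0.5, 0.3, 0.2])  # synthetic "experiment"

def residual(weights):
    """Sum of squared differences between model and experiment."""
    model = lib.T @ weights
    return float(np.sum((exp_spec - model) ** 2))

# Site abundances are bounded in [0, 1]; differential evolution stands in
# for EGdeconv's robust many-parameter evolutionary fitting.
result = differential_evolution(residual, bounds=[(0.0, 1.0)] * n_sites,
                                seed=1, tol=1e-10)
abundances = result.x / result.x.sum()  # normalise to relative abundances
print(abundances)
```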
Many large combinatorial optimization problems tackled with evolutionary algorithms require very high computational times, usually due to the fitness evaluation. This forces programmers to use clusters of computers, a solution that is very useful for running computationally intensive applications but that has a high acquisition price and operating cost, mainly due to Central Processing Unit (CPU) power consumption and cooling. A low-cost, high-performance alternative comes from reconfigurable computing, a hardware technology based on Field Programmable Gate Array (FPGA) devices. The main objective of the work presented in this paper is to compare FPGA and CPU implementations of different fitness functions in evolutionary algorithms, in order to study the performance of the floating-point arithmetic, in FPGAs and CPUs, that is often present in the optimization problems tackled by these algorithms. We take advantage of the chip-level parallelism of FPGAs to accelerate the fitness functions (and, consequently, the evolutionary algorithms), showing the parallel scalability needed to reach low-cost, low-power, high-performance computational solutions based on FPGAs. Finally, the recent popularity of GPUs as computational units prompted us to include these devices in our performance comparisons. We analyze performance in terms of computation time and economic cost.
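The CPU baseline in such a comparison is easy to illustrate. The sketch below, with an assumed population size and the Rastrigin function as an illustrative floating-point fitness, measures the evaluation time one would set against an FPGA or GPU implementation.

```python
# Hedged sketch of the CPU side of the comparison: timing a vectorised
# floating-point fitness evaluation over a whole population.
import time
import numpy as np

def rastrigin(pop):
    """f(x) = 10*n + sum(x_i^2 - 10*cos(2*pi*x_i)), one value per row."""
    return 10 * pop.shape[1] + np.sum(pop**2 - 10 * np.cos(2 * np.pi * pop),
                                      axis=1)

pop = np.random.default_rng(0).uniform(-5.12, 5.12, size=(100_000, 64))

t0 = time.perf_counter()
fitness = rastrigin(pop)
elapsed = time.perf_counter() - t0
print(f"evaluated {pop.shape[0]} individuals in {elapsed * 1e3:.1f} ms")
```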
Evolutionary algorithms are among the most successful approaches for solving problems where systematic searches in huge domains must be performed. One problem of practical interest that falls into this category is the Root Identification Problem in Geometric Constraint Solving, where one solution to the geometric problem must be selected from a set of possible solutions whose size grows exponentially. In previous works we have shown that applying genetic algorithms, a category of evolutionary algorithms, to solve the Root Identification Problem is both feasible and effective. In this work, we report on an empirical statistical study conducted to establish the influence of the driving parameters of the PBIL and CHC evolutionary algorithms when they are used to solve the Root Identification Problem, and we identify a set of values that optimizes the algorithms' performance. The driving parameters considered for the PBIL algorithm are population size, mutation probability, mutation shift, and learning rate. For the CHC algorithm we studied population size, divergence rate, differential threshold, and the set of best individuals. In both cases we applied unifactorial and multifactorial analysis, post hoc tests, and best-parameter-level selection. Experimental results show that CHC outperforms PBIL when applied to the Root Identification Problem.
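For readers unfamiliar with PBIL, the following minimal sketch (illustrative, with a one-max fitness standing in for the Root Identification objective) shows where the four studied driving parameters enter the algorithm.

```python
# Illustrative PBIL loop exposing the four driving parameters studied
# above: population size, mutation probability, mutation shift, and
# learning rate. Not the authors' implementation.
import numpy as np

def pbil(fitness, n_bits, pop_size=50, learn_rate=0.1,
         mut_prob=0.02, mut_shift=0.05, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                   # per-gene probability vector
    best, best_fit = None, -np.inf
    for _ in range(generations):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        fits = np.array([fitness(ind) for ind in pop])
        elite = pop[fits.argmax()]
        if fits.max() > best_fit:
            best, best_fit = elite.copy(), fits.max()
        p += learn_rate * (elite - p)          # pull p toward the elite
        mutate = rng.random(n_bits) < mut_prob # per-gene mutation
        p[mutate] += mut_shift * (rng.random(mutate.sum()) - 0.5)
        p = p.clip(0.01, 0.99)
    return best, best_fit

best, fit = pbil(lambda x: x.sum(), n_bits=40)
print(fit)
```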
The ability to solve inventive problems is at the core of the innovation process; however, the standard procedure for dealing with them is random trial and error, despite the existence of several theories and methods. TRIZ and evolutionary algorithms (EAs) have shown results that support the idea that inventiveness can be understood and developed systematically. This article presents a strategy based on dialectical negation in which both approaches converge, creating a new conceptual framework for enhancing computer-aided problem solving. The two basic ideas presented are the inversion of the traditional EA selection ("survival of the fittest") and the incorporation into evolutionary algorithms of new dialectical negation operators based on TRIZ principles. Two case studies are the starting point for discussing what kind of results can be expected using this "Dialectical Negation Algorithm" (DNA).
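A conceptual sketch of those two ideas follows, heavily hedged: the segment-flip negation operator and the one-max objective are illustrative assumptions, not the authors' TRIZ-derived operators. The point is only to show selection applied to the least fit individuals, which are negated rather than discarded.

```python
# Toy "Dialectical Negation Algorithm" step: instead of keeping only the
# fittest, the worst half of the population is transformed by a negation
# operator and retained.
import random

def negate(individual):
    """Negation stand-in: invert a random contiguous bit segment."""
    i, j = sorted(random.sample(range(len(individual)), 2))
    return individual[:i] + [1 - b for b in individual[i:j]] + individual[j:]

def dna_step(population, fitness):
    """One generation: the least fit individuals are negated, not discarded."""
    ranked = sorted(population, key=fitness)
    worst, rest = ranked[: len(ranked) // 2], ranked[len(ranked) // 2 :]
    return rest + [negate(ind) for ind in worst]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(20)]
for _ in range(100):
    pop = dna_step(pop, fitness=sum)
print(max(sum(ind) for ind in pop))
```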