Authors: Abreu, Nuno; Matos, Anibal
INESC TEC, Campus FEUP, Rua Dr. Roberto Frias 378, P-4200465 Oporto, Portugal
FEUP-DEEC, P-4200465 Oporto, Portugal
Autonomous underwater vehicles (AUVs) are increasingly being used to perform mine countermeasures (MCM) operations, but their capabilities are limited by the efficiency of the planning process. Here we study the problem of multiobjective MCM mission planning with AUVs. The vehicle should cover the operating area while maximizing the probability of detecting the targets and minimizing the energy and time required to complete the mission. A multi-stage algorithm is proposed and evaluated. Our algorithm combines an evolutionary algorithm (EA) with a local search procedure, aiming at a more flexible and effective exploration and exploitation of the search space. An artificial neural network (ANN) model was also integrated into the evolutionary procedure to guide the search. The combination of different techniques creates another problem, related to the large number of parameters that need to be tuned. Thus, the effect of these parameters on the quality of the obtained Pareto front was assessed. This allowed us to define an adaptive tuning procedure to control the parameters while the algorithm is executed. Our algorithm is compared against an implementation of a known EA as well as another mission planner, and the results from the experiments show that the proposed strategy can efficiently identify a higher-quality solution set.
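The abstract above describes a memetic structure: an EA step followed by a local search, with survivors kept by Pareto dominance. The sketch below is a minimal illustration of that generic loop, not the authors' planner; the plan encoding, the three placeholder objectives and the scalarized hill-climber are assumptions made for the example.

```python
# Minimal sketch of a memetic multi-objective loop: mutate, refine by local
# search, keep the non-dominated candidates. All objectives are placeholders.
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evaluate(plan):
    # Placeholder objectives: (energy, time, 1 - detection probability).
    return (sum(plan), len(plan), 1.0 / (1.0 + sum(plan)))

def mutate(plan):
    child = plan[:]
    i = random.randrange(len(child))
    child[i] = max(1, child[i] + random.choice((-1, 1)))
    return child

def local_search(plan, steps=10):
    """Hill-climb on a scalarized objective to refine an offspring."""
    best, best_f = plan, sum(evaluate(plan))
    for _ in range(steps):
        cand = mutate(best)
        cand_f = sum(evaluate(cand))
        if cand_f < best_f:
            best, best_f = cand, cand_f
    return best

def step(population):
    offspring = [local_search(mutate(random.choice(population))) for _ in population]
    pool = population + offspring
    # Keep the non-dominated plans (the approximated Pareto front).
    return [p for p in pool
            if not any(dominates(evaluate(q), evaluate(p)) for q in pool)]

population = [[random.randint(1, 10) for _ in range(5)] for _ in range(8)]
for _ in range(20):
    population = step(population)
```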
Computational time complexity analyses of evolutionary algorithms (EAs) have been performed since the mid-nineties. The first results concerned very simple algorithms, such as the (1+1)-EA, on toy problems. These efforts produced a deeper understanding of how EAs perform on different kinds of fitness landscapes, and general mathematical tools that may be extended to the analysis of more complicated EAs on more realistic problems. In fact, in recent years, it has been possible to analyze the (1+1)-EA on combinatorial optimization problems with practical applications, and more realistic population-based EAs on structured toy problems. This paper presents a survey of the results obtained in the last decade along these two research lines. The most common mathematical techniques are introduced, the basic ideas behind them are discussed, and their respective applications are highlighted. Solved problems that were still open are enumerated, as are those still awaiting a solution. New questions and problems that have arisen in the meantime are also considered.
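For readers unfamiliar with the (1+1)-EA mentioned above, here is the canonical algorithm sketched on the OneMax toy problem; its expected optimization time on OneMax, Theta(n log n), is one of the classic results of this line of analysis.

```python
# The (1+1)-EA on OneMax: flip each bit independently with probability 1/n and
# accept the offspring if it is at least as good as the parent.
import random

def one_max(x):
    return sum(x)

def one_plus_one_ea(n, max_iters=100_000):
    x = [random.randint(0, 1) for _ in range(n)]
    for t in range(max_iters):
        if one_max(x) == n:
            return t                      # first hitting time of the optimum
        y = [bit ^ 1 if random.random() < 1.0 / n else bit for bit in x]
        if one_max(y) >= one_max(x):      # elitist acceptance
            x = y
    return max_iters

print(one_plus_one_ea(50))
```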
This paper presents a novel method for tracking and characterizing adherent cells in monolayer culture. A system of cell tracking employing computer vision techniques was applied to time-lapse videos of replicate normal human uro-epithelial cell cultures exposed to different concentrations of adenosine triphosphate (ATP) and a selective purinergic P2X antagonist (PPADS), acquired over a 24 h period. Subsequent analysis following feature extraction demonstrated the ability of the technique to successfully separate the modulated classes of cell using evolutionary algorithms. Specifically, a Cartesian Genetic Programming (CGP) network was evolved that identified average migration speed, in-contact angular velocity, cohesivity and average cell clump size as the principal features contributing to the separation. Our approach not only provides non-biased and parsimonious insight into modulated class behaviours, but can be extracted as mathematical formulae for the parameterization of computational models. (C) 2016 The Authors. Published by Elsevier Ireland Ltd.
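As a reminder of the representation behind the evolved network, the following is a minimal, generic CGP decoder and evaluator; the function set, genotype and feature values are illustrative placeholders, not those of the paper.

```python
# Minimal CGP evaluator: each node is (function index, input a, input b), where
# inputs refer to program inputs or earlier nodes; the output gene picks a node.
import operator

FUNCS = [operator.add, operator.sub, operator.mul,
         lambda a, b: a / b if b else 1.0]

def cgp_eval(genotype, output_gene, inputs):
    values = list(inputs)                       # indices 0..k-1 are the raw inputs
    for func_idx, a, b in genotype:
        values.append(FUNCS[func_idx](values[a], values[b]))
    return values[output_gene]

# Illustrative only: score = (speed * cohesivity) - clump_size
features = [1.8, 0.4, 3.0]                      # speed, cohesivity, clump size
genotype = [(2, 0, 1),                          # node 3: speed * cohesivity
            (1, 3, 2)]                          # node 4: node3 - clump_size
print(cgp_eval(genotype, output_gene=4, inputs=features))
```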
Although evolutionary algorithms (EAs) have been successfully applied to optimization in discrete search spaces, theoretical developments remain weak, in particular for population-based EAs. This paper focuses on the expected first hitting time of (N,N) evolutionary algorithms based on maintaining elitism (ME-EAs). A new approach to estimating the upper and lower bounds of the expected first hitting time of ME-EAs in finite search spaces is proposed. The upper bounds are determined by the parameters of the ME-EA; this is the first attempt to relate upper bounds on the mean first hitting time of ME-EAs directly to the parameters of the algorithm. Moreover, the approach can be used to analyze not only EAs with binary encodings but also EAs with non-binary encodings. The results obtained and the analytic methods adopted in this paper are widely valid for EAs in finite search spaces. Finally, the theoretical results are verified by analyzing the subset sum problem, a representative binary-encoded problem.
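For context, the expected first hitting time studied above is usually defined through a Markov-chain model of the EA; a standard formulation (the paper's exact notation may differ) is:

```latex
% Let \{X_t\}_{t\ge 0} be the population sequence of an elitist (N,N) EA and
% S^* the set of populations containing an optimal solution.
\[
  T \;=\; \min\{\, t \ge 0 \;:\; X_t \in S^* \,\},
  \qquad
  \mathbb{E}[T \mid X_0 = x] \;=\; \sum_{t \ge 1} t \,\Pr(T = t \mid X_0 = x).
\]
% A typical upper bound follows from a lower bound p_{\min} on the per-step
% success probability: if \Pr(X_{t+1} \in S^* \mid X_t \notin S^*) \ge p_{\min}
% for all t, then \mathbb{E}[T] \le 1 / p_{\min}.
```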
This paper applies an evolutionary algorithm to the problem of knowledge discovery on blue-green algae dynamics in a hypertrophic lake. Patterns in chemical and physical parameters of the lake and corresponding presence or absence of highly abundant blue-green algae species such as Microcystis spp, Oscillatoria spp and Phormidium spp are discovered by the machine learning algorithm. Learnt patterns are represented explicitly as classification rules, which allow their underlying hypothesis to be examined. Models are developed for the filamentous blue-green algae Oscillatoria spp and Phormidium spp, and the colonial blue-green algae Microcystis spp. Hypothesized environmental conditions which favour blooms of the three species are contrasted and examined. The models are evaluated on independent test data to demonstrate that models can be evolved which differentiate algae species on the basis of the environmental attributes provided. (C) 2001 Elsevier Science B.V. All rights reserved.
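As an illustration of what "patterns represented explicitly as classification rules" can look like, the snippet below shows a hypothetical rule of this form; the attribute names and thresholds are invented for the example and are not results from the paper.

```python
# Hypothetical explicit classification rule of the kind an evolutionary
# rule-discovery system might output (illustrative thresholds only).
def microcystis_bloom_rule(sample):
    """Predict presence (True) / absence (False) of Microcystis spp."""
    return (sample["water_temp_C"] > 20.0
            and sample["total_phosphorus_mg_L"] > 0.05
            and sample["secchi_depth_m"] < 1.5)

print(microcystis_bloom_rule(
    {"water_temp_C": 24.0, "total_phosphorus_mg_L": 0.09, "secchi_depth_m": 0.8}))
```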
The design of trustworthy algorithms is one of the major challenges facing Artificial Intelligence, and it will remain a key challenge for years to come. Cultural algorithms (CAs) are viewed as one framework that can be employed to produce a trustworthy evolutionary algorithm. They contain features to support both sustainable and explainable computation that satisfy the requirements for trustworthy algorithms proposed by Cox [Nine experts on the single biggest obstacle facing AI and algorithms in the next five years, Emerging Tech Brew, January 22, 2021]. Here, two different configurations of CAs are described and compared in terms of their ability to support sustainable solutions over the complete range of dynamic environments, from static to linear to nonlinear and finally chaotic. The Wisdom of the Crowds (WM) method was selected for one configuration since it has been observed to work in both simple and complex environments and requires little long-term memory. The Common Value Auction (CVA) configuration was selected to represent mechanisms that are more data centric and require more long-term memory content. Both approaches were found to provide sustainable performance across all the dynamic environments tested, from static to chaotic. Based upon the information collected in the Belief Space, they produced this behavior in different ways. First, the topologies that they employed differed in terms of the "in degree" for different complexities. The CVA approach tended to favor a reduced indegree/outdegree, while the WM exhibited a higher indegree/outdegree in the best topology for a given environment. These differences reflected the fact that the CVA had more information available to the agents about the network in the Belief Space, whereas the agents in the WM had access to less knowledge and therefore needed to spread the knowledge they currently had more widely throughout the population.
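The standard cultural-algorithm loop referred to above couples a population space with a Belief Space through accept, update and influence steps. The sketch below shows only that generic skeleton; the Wisdom of the Crowds and Common Value Auction knowledge-distribution mechanisms compared in the paper are not reproduced.

```python
# Skeleton of a cultural algorithm: accept the best individuals into the Belief
# Space, update situational knowledge, then let that knowledge influence variation.
import random

def fitness(x):                      # placeholder (possibly dynamic) objective
    return -sum(v * v for v in x)

def accept(population, k=3):         # top-k individuals inform the Belief Space
    return sorted(population, key=fitness, reverse=True)[:k]

def update_beliefs(beliefs, accepted):
    # Situational knowledge: remember the best exemplar seen so far.
    best = max(accepted, key=fitness)
    if beliefs.get("exemplar") is None or fitness(best) > fitness(beliefs["exemplar"]):
        beliefs["exemplar"] = best
    return beliefs

def influence(individual, beliefs, step=0.1):
    # Move each variable slightly toward the exemplar, plus noise.
    ex = beliefs["exemplar"]
    return [v + step * (e - v) + random.gauss(0, 0.05) for v, e in zip(individual, ex)]

beliefs = {"exemplar": None}
population = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(20)]
for _ in range(50):
    beliefs = update_beliefs(beliefs, accept(population))
    population = [influence(ind, beliefs) for ind in population]
```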
Cognitive radio is a promising technology for efficient spectrum utilization. It exploits dynamic spectrum access while satisfying interference constraints. In this work, a joint power and spectrum allocation algorithm is proposed to maximize the cognitive network throughput while satisfying the interference constraints of both primary and secondary users in the network. Evolutionary algorithms are used to solve the joint power and spectrum allocation problem, and their performance is compared in terms of solution quality. We also optimized the total network utilization and the capacity of each user simultaneously using Multi-Objective Differential Evolution (MODE) and the Nondominated Sorting Genetic Algorithm II (NSGA-II). Simulation results show that the Pareto-optimal fronts provide trade-off solutions between total network utilization and the individual sum capacity of each user.
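A common way to state the joint power and spectrum allocation problem described above is given below; the symbols and constraints are a generic cognitive-radio formulation and may differ from the paper's exact model.

```latex
% a_{n,k} \in \{0,1\} assigns channel k to secondary user n, p_{n,k} is its
% transmit power, g_{n,k} the channel gain, and h_{n,k}^{(m)} the gain toward
% primary receiver m.
\begin{align*}
  \max_{\{a_{n,k},\, p_{n,k}\}} \quad
    & \sum_{n}\sum_{k} a_{n,k}\,
      \log_2\!\Bigl(1 + \frac{g_{n,k}\, p_{n,k}}{\sigma^2 + I_{n,k}}\Bigr) \\
  \text{s.t.} \quad
    & \sum_{n} a_{n,k}\, h_{n,k}^{(m)}\, p_{n,k} \;\le\; I_{\mathrm{th}}^{(m)}
      && \text{(interference limit at each primary receiver } m\text{)} \\
    & \sum_{k} a_{n,k}\, p_{n,k} \;\le\; P_{\max}
      && \text{(per-user power budget)}
\end{align*}
% The two objectives traded off by MODE / NSGA-II would then be the total
% network throughput above and each user's individual sum capacity.
```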
In the study of neurosciences, and of complex biological systems in general, there is frequently a need to fit mathematical models with large numbers of parameters to highly complex datasets. Here we consider algorithms of two different classes, gradient following (GF) methods and evolutionary algorithms (EAs), and examine their performance in fitting a 9-parameter model of a filter-based visual neuron to real data recorded from a sample of 107 neurons in macaque primary visual cortex (V1). Although the GF method converged very rapidly on a solution, it was highly susceptible to the effects of local minima in the error surface and produced relatively poor fits unless the initial estimates of the parameters were already very good. Conversely, although the EA required many more iterations of evaluating the model neuron's response to a series of stimuli, it ultimately found better solutions in nearly all cases and its performance was independent of the starting parameters of the model. Thus, although the fitting process was lengthy in terms of processing time, the relative lack of human intervention in the evolutionary algorithm, and its ability ultimately to generate model fits that could be trusted as being close to optimal, made it far superior to the gradient following methods in this particular application. This is likely to be the case for many other complex systems, such as those often found in neuroscience.
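The comparison above can be illustrated on a toy problem: the sketch below fits a placeholder two-parameter model (not the 9-parameter V1 model) with finite-difference gradient descent and with a simple (mu + lambda) evolution strategy.

```python
# Toy comparison of gradient following versus an evolution strategy, both
# minimizing the sum of squared errors of a model against synthetic data.
import random

def model(params, x):                        # placeholder 2-parameter model
    a, b = params
    return a * x + b

def sse(params, data):
    return sum((model(params, x) - y) ** 2 for x, y in data)

def gradient_descent(data, params, lr=1e-2, iters=2000, eps=1e-6):
    """Finite-difference gradient following on the error surface."""
    for _ in range(iters):
        base = sse(params, data)
        grad = [(sse(params[:i] + [params[i] + eps] + params[i + 1:], data) - base) / eps
                for i in range(len(params))]
        params = [p - lr * g for p, g in zip(params, grad)]
    return params

def evolution_strategy(data, dim=2, mu=5, lam=20, gens=300, sigma=0.5):
    """A simple (mu + lambda) ES with a slowly shrinking mutation step."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    for _ in range(gens):
        offspring = [[p + random.gauss(0, sigma) for p in random.choice(pop)]
                     for _ in range(lam)]
        pop = sorted(pop + offspring, key=lambda q: sse(q, data))[:mu]
        sigma *= 0.98
    return pop[0]

# Noisy synthetic data from y = 3x - 1 with x scaled into [0, 1).
data = [(x / 20.0, 3.0 * (x / 20.0) - 1.0 + random.gauss(0, 0.05)) for x in range(20)]
print(gradient_descent(data, [0.0, 0.0]))
print(evolution_strategy(data))
```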
With the increasing power of computers, the amount of data that can be processed in short periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results in classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification. Most state-of-the-art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement, based on evolutionary algorithms and the Kernel-Adatron algorithm, for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures, so knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
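The Kernel-Adatron procedure named above is a simple perceptron-like trainer for kernel classifiers; a minimal version is sketched below, with the evolutionary layer the paper adds omitted, the bias term dropped, and the toy data invented for the example.

```python
# Minimal Kernel-Adatron trainer for a binary kernel classifier: update each
# multiplier toward unit margin and clip it to [0, C] (soft margin, no bias).
import math

def rbf(a, b, gamma=0.5):
    return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def kernel_adatron(X, y, eta=0.1, epochs=100, C=10.0, kernel=rbf):
    n = len(X)
    K = [[kernel(X[i], X[j]) for j in range(n)] for i in range(n)]
    alpha = [0.0] * n
    for _ in range(epochs):
        for i in range(n):
            z = sum(alpha[j] * y[j] * K[i][j] for j in range(n))
            alpha[i] = min(C, max(0.0, alpha[i] + eta * (1.0 - y[i] * z)))
    return alpha

def predict(x, X, y, alpha, kernel=rbf):
    return 1 if sum(a * yi * kernel(xi, x) for a, yi, xi in zip(alpha, y, X)) > 0 else -1

X = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
y = [-1, -1, 1, 1]
alpha = kernel_adatron(X, y)
print([predict(x, X, y, alpha) for x in X])
```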
Over the last years, the effects of neutrality have attracted the attention of many researchers in the evolutionary algorithms (EAs) community. A mutation from one gene to another is considered neutral if the modification does not affect the phenotype. This article provides a general overview of the work carried out on neutrality in EAs. Using as a framework the origin of neutrality and its study in different paradigms of EAs (e.g., genetic algorithms, genetic programming), we discuss the most significant works and findings on this topic. This work also points towards open issues that we believe the community needs to address.
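The definition of neutrality given above can be made concrete with a redundant genotype-to-phenotype mapping; in the toy example below (majority vote over blocks of bits), some single-bit mutations leave the phenotype unchanged and are therefore neutral.

```python
# Redundant genotype-to-phenotype mapping: each block of three bits maps to a
# single phenotype bit by majority vote, so some bit flips are neutral.
def phenotype(genotype, block=3):
    return [1 if sum(genotype[i:i + block]) * 2 > block else 0
            for i in range(0, len(genotype), block)]

g = [1, 1, 0, 0, 0, 1]                     # phenotype [1, 0]
mutant = g[:]
mutant[2] = 1                              # flip third bit: block stays majority-1
print(phenotype(g) == phenotype(mutant))   # True: the mutation is neutral
```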