Novelty search is a tool in evolutionary and swarm robotics for maintaining the population diversity needed for continuous robotic operation. It enables nature-inspired algorithms to evaluate solutions on the basis of the distance to their k-nearest neighbors in the search space. In addition, the fitness function serves as a further measure for evaluating solutions, with the purpose of preserving these so-called novel solutions in the next generation. In this study, differential evolution, a well-known algorithm for global optimization, was hybridized with novelty search and applied to improve on the results obtained by other solvers on the CEC-14 benchmark function suite. Furthermore, functions of different dimensions were taken into consideration, and the influence of the various novelty search parameters was analyzed. The experimental results show great potential for using novelty search in global optimization. (C) 2018 Elsevier Inc. All rights reserved.
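As an illustration of the novelty metric described above, the sketch below scores each candidate by the mean Euclidean distance to its k nearest neighbors; the function name, the use of NumPy, and the default k are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def novelty_scores(population, k=5):
    """Novelty of each candidate: the mean Euclidean distance to its
    k nearest neighbours in the search (or behaviour) space."""
    pop = np.asarray(population, dtype=float)
    diffs = pop[:, None, :] - pop[None, :, :]      # pairwise differences
    dists = np.linalg.norm(diffs, axis=-1)         # pairwise distance matrix
    scores = []
    for i, row in enumerate(dists):
        nearest = np.sort(np.delete(row, i))[:k]   # k nearest, excluding self
        scores.append(nearest.mean())
    return np.array(scores)
```

In a hybrid such as the one described, a score of this kind would typically be combined with the ordinary fitness value when deciding which individuals survive into the next generation.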
This article deals with the reactive control of an autonomous robot that moves safely in a crowded, unknown real-world environment and reaches a specified target while avoiding static as well as dynamic obstacles. The inputs to the proposed neural controller are the distances to obstacles on the robot's left, right, and front, and the angle between the robot and a specified target, all acquired by an array of sensors. A four-layer neural network has been used to design and develop the neural controller to solve the path and time optimization problem of mobile robots, dealing with cognitive tasks such as learning, adaptation, generalization, and optimization. The back-propagation method is used to train the network. The article also analyses the kinematic modelling of mobile robots and the design of control systems for the autonomous motion of the robot. Training of the neural network and analysis of the control performance were carried out in a real experimental set-up. The simulation results are compared with the experimental results and show very good agreement.
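For readers unfamiliar with such controllers, here is a minimal sketch of a feed-forward pass mapping the four sensor inputs named above to motor commands; the layer shapes, the tanh activation, and the output interpretation are illustrative assumptions rather than the architecture used in the article.

```python
import numpy as np

def controller_output(obstacle_left, obstacle_front, obstacle_right, target_angle,
                      weights, biases):
    """Forward pass of a small feed-forward controller mapping the four
    sensor inputs to motor commands (e.g. left/right wheel speeds)."""
    a = np.array([obstacle_left, obstacle_front, obstacle_right, target_angle], float)
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(W @ a + b)                     # hidden layers
    return np.tanh(weights[-1] @ a + biases[-1])   # output layer
```

In a trained setup, the weight matrices and bias vectors would be those obtained from back-propagation on recorded sensor-target pairs.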
We introduce AutoMoDe: a novel approach to the automatic design of control software for robot swarms. The core idea in AutoMoDe recalls the approach commonly adopted in machine learning for dealing with the bias-variance tradeoff: to obtain suitably general solutions with low variance, an appropriate design bias is injected. AutoMoDe produces robot control software by selecting, instantiating, and combining preexisting parametric modules, which constitute the injected bias. The resulting control software is a probabilistic finite state machine in which the topology, the transition rules, and the values of the parameters are obtained automatically via an optimization process that maximizes a task-specific objective function. As a proof of concept, we define AutoMoDe-Vanilla, a specialization of AutoMoDe for the e-puck robot. We use AutoMoDe-Vanilla to design the robot control software for two different tasks: aggregation and foraging. The results show that the control software produced by AutoMoDe-Vanilla (i) yields good results, (ii) appears to be robust to the so-called reality gap, and (iii) is naturally human-readable.
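To make the notion of a probabilistic finite state machine concrete, the following is a minimal sketch of such a controller; the class layout and the (condition, probability, next state) rule format are illustrative assumptions and not the AutoMoDe implementation.

```python
import random

class PFSM:
    """Minimal probabilistic finite state machine: each state is a behaviour,
    and each state has transition rules of the form (condition, probability,
    next_state) that are checked at every control step."""

    def __init__(self, behaviours, transitions, start):
        self.behaviours = behaviours    # {state: callable(sensors) -> action}
        self.transitions = transitions  # {state: [(condition, prob, next_state), ...]}
        self.state = start

    def step(self, sensors):
        for condition, prob, nxt in self.transitions.get(self.state, []):
            if condition(sensors) and random.random() < prob:
                self.state = nxt
                break
        return self.behaviours[self.state](sensors)
```

In an automatic design process like the one described, the topology, the transition conditions, and the probability values would all be set by the optimizer rather than by hand.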
We propose Hyb-CCEA, a cooperative coevolutionary algorithm for the evolution of genetically heterogeneous multiagent teams. The proposed approach extends the cooperative coevolution architecture with operators that put the number of coevolving populations under evolutionary control. Populations are dynamically merged based on behavioral similarity, thus decreasing team heterogeneity, and stochastic population splits are used to explore increased team heterogeneity. Hyb-CCEA is capable of converging to suitable team compositions for the given task, be it a completely homogeneous team where all agents share the same control logic, a heterogeneous team where each agent has distinct control logic, or a partially heterogeneous team. By placing both team composition and agent controllers under evolutionary control, Hyb-CCEA can be applied to domains for which the experimenter has limited or no knowledge about possible solutions. We study Hyb-CCEA extensively in an abstract domain, and conduct a series of validation experiments with four simulated multirobot tasks: two multirover foraging tasks and two robotic soccer tasks. The results show that Hyb-CCEA takes advantage of partial heterogeneity and frequently outperforms the standard cooperative coevolution approach, both in terms of fitness scores achieved and number of evaluations needed to evolve solutions.
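A rough sketch of the merge/split idea described above follows; the threshold, the split probability, and the behavioural-distance function are hypothetical parameters introduced only to illustrate the mechanism, not the actual Hyb-CCEA operators.

```python
import random

def merge_or_split(populations, behavioural_distance,
                   merge_threshold=0.1, split_probability=0.05):
    """One heterogeneity-control step: merge the two most behaviourally
    similar populations if they are close enough; otherwise occasionally
    split a random population to explore more heterogeneous teams."""
    if len(populations) > 1:
        pairs = [(behavioural_distance(populations[i], populations[j]), i, j)
                 for i in range(len(populations))
                 for j in range(i + 1, len(populations))]
        dist, i, j = min(pairs)
        if dist < merge_threshold:
            merged = populations[i] + populations[j]
            rest = [p for k, p in enumerate(populations) if k not in (i, j)]
            return rest + [merged]
    if random.random() < split_probability:
        k = random.randrange(len(populations))
        pop = populations[k]
        half = len(pop) // 2
        return populations[:k] + populations[k + 1:] + [pop[:half], pop[half:]]
    return populations
```

The number of populations thus drifts under evolutionary control, from fully homogeneous (one population) to fully heterogeneous (one population per agent).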
While quadruped robots usually have good stability and load capacity, bipedal robots offer a higher level of flexibility and adaptability to different tasks and environments. A multi-modal legged robot can take the best of both worlds. In this paper, we propose a multi-modal locomotion framework composed of a hand-crafted transition motion and a learning-based bipedal controller, learnt by a novel algorithm called Automated Residual Reinforcement Learning. This framework aims to endow arbitrary quadruped robots with the ability to walk bipedally. In particular, we 1) design an additional supporting structure for a quadruped robot and a sequential multi-modal transition strategy; and 2) propose a novel class of Reinforcement Learning algorithms for bipedal control and evaluate their performance in both simulation and the real world. Experimental results show that our proposed algorithms have the best performance in simulation and maintain good performance on a real-world robot. Overall, our multi-modal robot can successfully switch between biped and quadruped modes and walk in both.
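The residual-learning idea can be summarized in a short sketch: the learnt policy only adds a correction on top of a fixed base controller. The function names and the scaling factor below are assumptions for illustration; the paper's Automated Residual Reinforcement Learning algorithm is more involved.

```python
import numpy as np

def residual_action(state, base_controller, residual_policy, residual_scale=0.1):
    """Residual control sketch: the learnt policy outputs a bounded
    correction that is added to a hand-crafted base controller's command."""
    base = np.asarray(base_controller(state), float)         # nominal command
    correction = np.asarray(residual_policy(state), float)   # learnt residual
    return base + residual_scale * correction
```

Keeping the learnt part small and additive is what typically makes such controllers easier to train and safer to transfer from simulation to a real robot.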
This article describes the simulation of distributed autonomous robots for search and rescue operations. The simulation system is used to perform experiments with various control strategies and team organizations for the robot team, evaluating their comparative performance. The objective of the robot team, once deployed in an environment (floor plan) with multiple rooms, is to cover as many rooms as possible. The simulated robots are capable of navigating through the environment and can communicate using simple messages. The simulator maintains the world, provides each robot with sensory information, and carries out the actions of the robots. It keeps track of the rooms visited by the robots and the elapsed time in order to evaluate the performance of the robot teams. The robot teams are composed of homogeneous robots, i.e., identical control strategies are used to generate the behavior of each robot in the team. The ability to deploy autonomous robots, as opposed to humans, in hazardous search and rescue missions could provide immeasurable benefits. (C) 2003 Elsevier Science Ltd. All rights reserved.
According to Hebbian theory, synaptic plasticity is the ability of neurons to strengthen or weaken the synapses among them in response to stimuli. It plays a fundamental role in the processes of learning and memory in biological neural networks. With plasticity, biological agents can adapt on multiple timescales and outclass artificial agents, the majority of which still rely on static artificial neural network (ANN) controllers. In this work, we focus on voxel-based soft robots (VSRs), a class of simulated artificial agents composed of aggregations of elastic cubic blocks. We propose a Hebbian ANN controller in which every synapse is associated with a Hebbian rule that controls the way the weight is adapted during the VSR lifetime. For a given task and morphology, we optimize the controller for locomotion by evolving the parameters of the Hebbian rules rather than the weights themselves. Our results show that the Hebbian controller is comparable to, and often better than, a non-Hebbian baseline, and that it is more adaptable to damage. We also provide novel insights into the inner workings of plasticity and demonstrate that "true" learning does take place, as the evolved controllers improve over the lifetime and generalize well.
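A minimal sketch of a parametric Hebbian weight update follows; the ABCD form of the rule and the learning-rate value are common choices assumed here for illustration, and may differ from the exact rule used in the work above.

```python
def hebbian_update(weight, pre, post, rule, learning_rate=0.01):
    """One lifetime update of a single synaptic weight under a parametric
    Hebbian rule. rule = (A, B, C, D) are per-synapse coefficients; in the
    approach described above it is these coefficients, not the weights,
    that are evolved."""
    A, B, C, D = rule
    return weight + learning_rate * (A * pre * post + B * pre + C * post + D)
```

Applied at every control step with the pre- and post-synaptic activations, such a rule lets the weights keep changing during the robot's lifetime, which is what allows adaptation to damage.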
We have used an automatic programming method called genetic programming (GP) for control of a miniature robot. Our earlier work on real-time learning suffered from the drawback that the learning time was limited by the response dynamics of the robot's environment. In order to overcome this problem we have devised a new technique which allows learning from past experiences that are stored in memory. With the new method, perfect behavior emerges quickly and reliably in experiments. It is tested on two control tasks, obstacle avoidance and wall following, both in simulation and on the real robot platform Khepera. (C) 1998 Elsevier Science B.V. All rights reserved.
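The memory-based evaluation can be sketched as scoring each candidate program against stored experiences rather than on the live robot. The assumption below that each experience pairs a sensor reading with a desired response is purely illustrative; the actual memory contents and fitness function in the paper may differ.

```python
def fitness_from_memory(program, memory):
    """Score a candidate control program against stored past experiences,
    so that evaluation is not limited by the response dynamics of the
    real environment. memory holds (sensor_reading, desired_response) pairs."""
    total_error = 0.0
    for sensors, desired in memory:
        action = program(sensors)              # run the evolved program
        total_error += abs(action - desired)   # deviation from the stored target
    return -total_error                        # higher fitness = smaller error
```

Because evaluation runs entirely on stored data, many candidate programs can be scored between two real sensor readings, which is what removes the real-time bottleneck.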
This paper focuses on generating collective step-climbing behavior in a multi-legged robotic swarm. Most studies on swarm robotics develop collective behaviors in a flat environment using wheeled mobile robots. However, these types of robots can only exhibit relatively simple behaviors, which limits the tasks that can be addressed by a robotic swarm. This paper deals with a step-climbing task in which a robotic swarm climbs a step that is too high for a single robot. The robots have to use other robots as a foothold to achieve the task. To generate such three-dimensional behavior, the robotic swarm is built from multi-legged robots inspired by ants. The robot controller is obtained by combining a neuroevolution approach with manually designed methods. The results of computer simulations show that the designed controller successfully achieves the step-climbing task.
Redundancy in the number of robots is a fundamental feature of robotic swarms that confers robustness, flexibility, and scalability. However, robots tend to interfere with each other when multiple robots gather in a spatially limited environment. The aim of this paper is to understand how a robotic swarm develops an effective strategy to manage congestion. The controllers of the robots are obtained by an evolutionary robotics approach. The strategy for managing congestion is observed in the process of generating a collective path in which the robots visit two landmarks alternately. The robotic swarm exhibits autonomous specialization in which the robots traveling on the inside of the path activate their LEDs, while the robots on the outside deactivate them. We found that congestion is regulated in an emergent way through autonomous specialization resulting from artificial evolution.