In this work, we developed an algorithm for a coordinated multi-robot system performing simultaneous localization and mapping (SLAM). Many novel applications of multi-robot systems are being envisioned, especially in industrial indoor environments and in complex or dangerous environments. The proposed algorithm focuses on optimizing navigation and task execution within an environment. We used the TurtleBot3 Waffle model and the Robot Operating System (ROS) platform, with simulations in Gazebo, for localization, navigation, and comprehensive coverage. The algorithm achieves effective mapping, localization, navigation, and coverage. It also ensures seamless coordination among multiple robots within a unified transform tree (tf-tree), facilitating synchronized movement and a holistic understanding of the environment. Furthermore, innovative coverage path planning and submap division are achieved using this technique. It is expected that such development will lead to practical applications in real environments.
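The abstract does not give implementation details for the unified tf-tree, so the following is a minimal sketch under assumptions: a ROS 1 / Python setup in which each TurtleBot3 runs in its own namespace (the namespace names tb3_0, tb3_1, tb3_2 and the spawn offsets are hypothetical). Each robot's namespaced map frame is attached under a single global map frame via static transforms, so poses from all robots can be resolved in one tree.

```python
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

# Hypothetical robot namespaces and spawn offsets; the paper does not specify these.
ROBOT_NAMESPACES = ["tb3_0", "tb3_1", "tb3_2"]

def make_static_transform(parent, child, x, y):
    """Build a static transform with identity rotation from parent to child frame."""
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = parent
    t.child_frame_id = child
    t.transform.translation.x = x
    t.transform.translation.y = y
    t.transform.rotation.w = 1.0  # identity orientation
    return t

if __name__ == "__main__":
    rospy.init_node("unified_tf_tree")
    broadcaster = tf2_ros.StaticTransformBroadcaster()
    # Attach each robot's namespaced map frame under one global "map" frame,
    # so all robots share a single tf-tree.
    transforms = [
        make_static_transform("map", "%s/map" % ns, float(i), 0.0)
        for i, ns in enumerate(ROBOT_NAMESPACES)
    ]
    broadcaster.sendTransform(transforms)
    rospy.spin()
```

With this arrangement, a lookup such as tf2 resolving tb3_1/base_link in the global map frame succeeds because every robot's subtree hangs off the same root.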
Reinforcement learning (RL) is a machine learning technique that enables an agent to learn optimal behaviors within an environment through a process of trial and error, with the agent receiving rewards or punishments based on its actions. In this study, we compare the performance of two RL algorithms, SARSA and Q-Learning, in a simulated environment using the Gazebo Simulator. The goal of the simulation is to navigate a ground robot toward pre-defined goals. By manipulating various training parameters, we investigate their impact on learning speed and robot behavior. To ensure meaningful comparisons, we vary the navigation goal and the complexity of the simulation environment. Through extensive simulations, our results highlight the effectiveness of RL-based navigation for ground robots and offer insights into the parameters that most strongly affect navigation performance, underscoring the importance of algorithm selection and parameter optimization.
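The abstract does not specify the state/action encoding or hyperparameters used for the Gazebo robot, so the sketch below only shows the core distinction between the two compared algorithms in a tabular setting with placeholder discretization and training parameters: Q-Learning bootstraps off-policy on the greedy next action, while SARSA bootstraps on-policy on the action actually taken.

```python
import numpy as np

# Placeholder discretization and hyperparameters (assumptions, not the paper's values).
N_STATES, N_ACTIONS = 100, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))

def epsilon_greedy(state):
    """Pick a random action with probability EPSILON, otherwise the greedy action."""
    if np.random.rand() < EPSILON:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(Q[state]))

def q_learning_update(s, a, r, s_next):
    # Off-policy: target uses the maximum Q-value in the next state.
    td_target = r + GAMMA * np.max(Q[s_next])
    Q[s, a] += ALPHA * (td_target - Q[s, a])

def sarsa_update(s, a, r, s_next, a_next):
    # On-policy: target uses the Q-value of the action actually taken next.
    td_target = r + GAMMA * Q[s_next, a_next]
    Q[s, a] += ALPHA * (td_target - Q[s, a])
```

The only difference between the two update rules is the bootstrap term, which is what makes Q-Learning typically more aggressive and SARSA more conservative under an exploratory policy.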
ISBN (digital): 9798350385922
ISBN (print): 9798350385939
This research paper delves into the intricacies of efficient environment exploration and coverage path planning using multi-robot systems. With a focus on unknown environments, the study addresses mapping the environment in 2D, distributing coverage tasks among robots, and coordinating their movements for comprehensive coverage. Various strategies are explored, including simultaneous localization and mapping (SLAM) using GMapping, dynamic task distribution, waypoint generation, and coordination mechanisms. Leveraging the collective intelligence of multiple robots, this work aims to optimize coverage while minimizing redundancy and resource consumption. Through simulations and analysis, the effectiveness of the proposed methodology is demonstrated with a coverage efficiency of 98.9%, highlighting the potential of multi-robot systems in revolutionizing exploration and coverage path planning in diverse real-world applications.
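The paper's task-distribution and waypoint-generation strategies are not detailed in the abstract; below is a hedged sketch of one simple way such a pipeline could look, assuming a 2D occupancy grid. The free space is split into vertical strips (a stand-in for submap division) and back-and-forth (boustrophedon) sweep waypoints are generated per strip. All function names, the strip-based split, and the grid resolution are illustrative, not the paper's method.

```python
import numpy as np

def split_free_cells_among_robots(occupancy_grid, n_robots):
    """Divide the columns containing free cells into contiguous strips, one per robot."""
    free_cols = np.where((occupancy_grid == 0).any(axis=0))[0]
    return np.array_split(free_cols, n_robots)

def boustrophedon_waypoints(occupancy_grid, cols, resolution=0.05):
    """Generate lawnmower-style sweep waypoints over the given columns."""
    waypoints = []
    for i, c in enumerate(cols):
        rows = np.where(occupancy_grid[:, c] == 0)[0]
        if rows.size == 0:
            continue
        # Alternate sweep direction column by column (top-down, then bottom-up).
        endpoints = (rows[0], rows[-1]) if i % 2 == 0 else (rows[-1], rows[0])
        for r in endpoints:
            waypoints.append((c * resolution, r * resolution))
    return waypoints

# Usage example: a toy 20x20 map with a small obstacle block, shared by 2 robots.
grid = np.zeros((20, 20), dtype=int)
grid[8:12, 8:12] = 100  # occupied cells
for robot_id, strip in enumerate(split_free_cells_among_robots(grid, 2)):
    wps = boustrophedon_waypoints(grid, strip)
    print("robot", robot_id, "waypoints:", len(wps))
```

Each robot would then follow its own waypoint list, with a coordination layer (not shown) resolving overlaps and tracking overall coverage.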