In collaborative environments, real-time motion planning is crucial for industrial robots to navigate safely and efficiently. Traditional planning algorithms, such as Rapidly-exploring Random Trees (RRT) or Probabilistic Roadmaps (PRM), often struggle in dynamic environments due to their inherent computational complexity. To address this issue, we propose an approach based on Deep Reinforcement Learning (DRL) for real-time motion planning of industrial robots. Our method leverages the power of machine learning and neural networks to enable robots to make intelligent decisions in real time, ensuring prompt and adaptive navigation. However, applying DRL to industrial robots poses unique challenges, as vision-based training is difficult and the distance sensors commonly used on mobile robots are unavailable. To overcome these challenges, we employ depth cameras to generate distance information and convert the obtained point cloud into voxels using the Open3D library. The obstacles are then loaded into the simulation environment in real time, allowing the agent to perceive and react to the dynamic environment. To achieve a small simulation-to-reality gap, we propose a hardware-in-the-loop (HIL) approach in which the real robot mimics the movements of the simulated robot. We demonstrate the effectiveness of our system through real-world experiments. Our code is available on GitHub [1].
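The point-cloud-to-voxel step described above can be illustrated with a minimal NumPy sketch. The paper uses the Open3D library (for example, its `VoxelGrid` utilities such as `VoxelGrid.create_from_point_cloud`); the version below only shows the underlying idea of snapping points to a discrete grid, and the point coordinates and voxel size are illustrative assumptions, not values from the paper.

```python
import numpy as np

def voxelize(points, voxel_size=0.05):
    """Map an (N, 3) point cloud to the set of occupied voxel indices."""
    indices = np.floor(points / voxel_size).astype(int)
    # Duplicate points in the same cell collapse into one occupied voxel.
    return np.unique(indices, axis=0)

# Three sample points; the first two fall into the same voxel.
cloud = np.array([[0.1, 0.2, 0.3],
                  [0.2, 0.1, 0.4],
                  [1.6, 1.7, 1.8]])
occupied = voxelize(cloud, voxel_size=0.5)
print(occupied)  # two occupied voxels: (0, 0, 0) and (3, 3, 3)
```

Each occupied voxel would then be inserted into the simulation as an obstacle cell the agent can perceive.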
In recent years, Deep Reinforcement Learning has emerged as a promising approach for autonomous navigation of robots and has been utilized in various areas of navigation such as obstacle avoidance, motion planning, or decision making in crowded environments. However, most research works either provide an end-to-end solution that trains the whole system with Deep Reinforcement Learning or focus on one specific aspect such as local motion planning. This, however, comes with a number of problems such as catastrophic forgetting, inefficient navigation behavior, and non-optimal synchronization between the different entities of the navigation stack. In this paper, we propose a holistic Deep Reinforcement Learning training approach in which the training procedure involves all entities of the navigation stack. This should enhance the synchronization between, and mutual understanding of, all entities of the navigation stack and, as a result, improve navigational performance in crowded environments. We trained several agents with a number of different observation spaces to study the impact of different inputs on the navigation behavior of the agent. In extensive evaluations against multiple learning-based and classic model-based navigation approaches, our proposed agent outperforms the baselines in terms of efficiency and safety, attaining shorter path lengths, fewer roundabout paths, and fewer collisions, especially in situations with a high number of pedestrians.
Traditionally, collision-free path planning for industrial robots is realized by sampling-based algorithms such as RRT (Rapidly-exploring Random Tree), PRM (Probabilistic Roadmap), etc. Sampling-based algorithms require long computation times, especially in complex environments. Furthermore, the environment in which they are employed needs to be known beforehand. Utilizing these approaches in new environments requires a tedious, time- and cost-intensive engineering effort to tune their hyperparameters. On the other hand, DRL (Deep Reinforcement Learning) has shown remarkable results in dealing with complex environments, generalizing to new problem instances, and solving motion planning problems efficiently. On that account, this paper proposes a Deep-Reinforcement-Learning-based motion planner for robotic manipulators. We propose an easily reproducible method to train an agent in randomized scenarios, achieving generalization to unknown environments. We evaluated our model against state-of-the-art sampling- and DRL-based planners in several experiments containing static and dynamic obstacles. Results show the adaptability of our agent in new environments and its superiority in terms of path length and execution time compared to conventional methods. Our code is available on GitHub [1].
Over the past decades, countless autonomous navigation and dynamic obstacle avoidance approaches have been proposed by various research works. However, to bridge the gap between research and industry, these approaches need to be extensively evaluated and benchmarked within a variety of settings, scenarios, and maps. Conducting these test runs is tedious and time-consuming. Furthermore, simulation runs and tests on real robots cannot always cover all potentially occurring scenarios, or are inaccurate in certain settings and circumstances, especially when a high number of pedestrians or other dynamic entities are involved. In this paper, we propose an approach to predict the navigational performance of navigation approaches on new and unknown maps, scenarios, and robots without the necessity of conducting the actual test runs. To this end, we acquired a large dataset consisting of thousands of evaluation runs within crowded environments, from both simulation and real-world runs conducted using the arena-bench platform of our previous works [1], and trained several neural network architectures to predict relevant navigational performance metrics such as collision rates or path efficiency. We demonstrate the feasibility of our neural networks by predicting the most relevant metrics with up to 95 percent accuracy compared to the ground-truth data acquired by an actual simulation run. This approach could prove beneficial for a number of applications and save valuable time and costs, because the performance of new navigation algorithms for crowded environments can be estimated on new maps, scenarios, and robots. We made the code publicly available at https://***/ignc-research/navprediction.
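The metric-prediction idea above can be sketched in a few lines: learn a mapping from scenario descriptors to a navigation metric. The feature names, the synthetic data, and the linear least-squares model below are illustrative stand-ins for the paper's neural networks trained on real evaluation runs.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "evaluation runs": features = [obstacle density, pedestrian count],
# both scaled to [0, 1] for this toy example.
X = rng.uniform(0, 1, size=(200, 2))
# Assumed ground truth: collision rate grows with both features, plus noise.
y = 0.3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.01, 200)

# A least-squares fit with a bias term stands in for the trained network.
A = np.hstack([X, np.ones((200, 1))])
weights, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_collision_rate(obstacle_density, pedestrians):
    """Predict the metric for a new, never-simulated scenario."""
    return float(np.dot([obstacle_density, pedestrians, 1.0], weights))

print(predict_collision_rate(0.5, 0.5))  # close to 0.4 for this synthetic data
```

The point of the sketch is the workflow, not the model: once such a predictor is fitted on logged runs, new map/scenario/robot combinations can be scored without launching a simulation.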
The advancement of computer vision and machine learning has made datasets crucial for further research and applications. However, the creation and development of indoor mobile robots with advanced recognition capabilities are hindered by the lack of appropriate datasets. Existing image or video processing datasets are unable to depict observations from a moving robot accurately, and they do not contain the kinematics information necessary for robotic tasks. Synthetic data, on the other hand, are cost-effective to create and offer greater flexibility for adapting to various applications. Hence, they are widely utilized in both research and industry. In this paper, we propose the dataset HabitatDyn, which contains synthetic RGB videos, semantic labels, and depth information, as well as kinetics information. HabitatDyn was created from the perspective of a mobile robot with a moving camera and contains 30 scenes featuring six different types of moving objects with varying velocities. To demonstrate the usability of our dataset, two existing segmentation algorithms are used for evaluation, and an approach to estimate the distance between object and camera is implemented on top of these segmentation methods and evaluated on the dataset. With the availability of this dataset, we aspire to foster further advancements in the field of mobile robotics, leading to more capable and intelligent robots that can navigate and interact with their environments more effectively. The code is publicly available at https://***/ignc-research/HabitatDyn.
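A hedged sketch of the distance-estimation step described above: combine a segmentation mask with per-pixel depth and take a robust statistic (here, the median) over the masked pixels. The exact estimator used in the paper may differ, and the toy depth map is purely illustrative.

```python
import numpy as np

def object_distance(depth_map, mask):
    """Median depth (in meters) over the pixels of a segmentation mask.

    Using the median rather than the mean keeps the estimate robust to
    stray background pixels that leak into the mask.
    """
    vals = depth_map[mask]
    return float(np.median(vals)) if vals.size else float("nan")

depth = np.full((4, 4), 5.0)   # background 5 m away
depth[1:3, 1:3] = 2.0          # a 2x2 object at 2 m
mask = depth < 3.0             # stand-in for a real segmentation mask
print(object_distance(depth, mask))  # 2.0
```

In practice the mask would come from one of the evaluated segmentation models and the depth map from the dataset's rendered depth channel.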
ISBN (digital): 9798350348811
ISBN (print): 9798350348828
Navigating unsignalized intersections in urban environments poses a complex challenge for self-driving vehicles, where issues such as view obstructions, unpredictable pedestrian crossings, and diverse traffic participants demand a great focus on crash prevention. In this paper, we propose a novel state representation for Reinforcement Learning (RL) agents centered around the information perceivable by an autonomous agent, enabling the safe navigation of previously uncharted roads. Our approach surpasses several baseline models by a significant margin in terms of safety and energy consumption metrics. These improvements are achieved while maintaining a competitive average travel speed. Our findings pave the way for more robust and reliable autonomous navigation strategies, promising safer and more efficient urban traffic environments.
In recent years, Deep Reinforcement Learning (DRL) has made remarkable progress in various application areas such as control of robots and vehicles, simulation, and natural language processing. Various research works have applied DRL to different kinds of tasks for autonomous navigation of vehicles and robots, such as lane changing, cruise control, or obstacle avoidance. However, DRL training is still a tedious and difficult process due to long training times, catastrophic forgetting, and the myopic nature of DRL agents. In this paper, we integrate and explore the effect of a variety of state-of-the-art optimization approaches for DRL agents, such as imitation learning, behavior cloning, frame stacking, and hindsight experience replay, on autonomous obstacle avoidance. We evaluate the effect of each of these changes on the DRL agent in terms of training as well as navigational performance. The resulting agents are compared against baseline approaches without those optimizations; we found an increase in navigational performance for some of these methods, while other optimization approaches surprisingly resulted in decreased performance. The findings of this paper should aid the development of DRL approaches for autonomous navigation.
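Of the optimizations listed above, frame stacking is the easiest to make concrete: the agent receives the last k observations instead of only the current one, giving the otherwise myopic policy a short temporal context. The sketch below is a generic illustration of the technique, not the paper's implementation.

```python
from collections import deque
import numpy as np

class FrameStack:
    """Keep the last k observations so the agent sees short-term history."""

    def __init__(self, k):
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, obs):
        # Fill the buffer with copies of the first observation.
        for _ in range(self.k):
            self.frames.append(obs)
        return np.concatenate(self.frames)

    def step(self, obs):
        # The deque drops the oldest frame automatically once full.
        self.frames.append(obs)
        return np.concatenate(self.frames)

stack = FrameStack(k=3)
first = stack.reset(np.array([1.0]))
nxt = stack.step(np.array([2.0]))
print(nxt)  # [1. 1. 2.]
```

RL frameworks typically ship this as an environment wrapper; the stacked vector (or stacked image channels) simply replaces the raw observation fed to the policy network.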
The lack of trust in algorithms is usually an issue when using Reinforcement Learning (RL) agents for control in real-world domains such as production plants, autonomous vehicles, or traffic-related infrastructure, partly due to the lack of verifiability of the model itself. In such scenarios, Petri nets (PNs) are often available for flowcharts or process steps, as they are versatile and standardized. In order to facilitate the integration of RL models and as a step towards increasing AI trustworthiness, we propose an approach that uses PNs, with three main advantages over typical RL approaches: firstly, the agent can easily be modeled with a combined state including both external environmental observations and agent-specific state information from a given PN. Secondly, we can enforce constraints for state-dependent actions through the inherent PN model. And lastly, we can increase trustworthiness by verifying PN properties through techniques such as model checking. We test our approach in a typical four-way intersection traffic-light control setting and present our results, which beat cycle-based baselines.
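The second advantage, enforcing state-dependent action constraints, can be sketched with a toy net: an RL action is legal only if the corresponding PN transition is enabled under the current marking. The two-phase traffic-light net and its place and transition names below are illustrative assumptions, not the net used in the paper.

```python
class PetriNet:
    """Minimal place/transition net with unit arc weights."""

    def __init__(self, marking, transitions):
        self.marking = dict(marking)        # place -> token count
        self.transitions = transitions      # name -> (input places, output places)

    def enabled(self, t):
        inputs, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, t):
        assert self.enabled(t), f"transition {t} not enabled"
        inputs, outputs = self.transitions[t]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Two-phase light: only the transition out of the current phase is enabled.
net = PetriNet(
    marking={"ns_green": 1, "ew_green": 0},
    transitions={
        "switch_to_ew": (["ns_green"], ["ew_green"]),
        "switch_to_ns": (["ew_green"], ["ns_green"]),
    },
)
# The agent's action space is masked down to the enabled transitions.
legal_actions = [t for t in net.transitions if net.enabled(t)]
print(legal_actions)  # ['switch_to_ew']
net.fire("switch_to_ew")
```

Because the same net is an ordinary PN, its properties (e.g. that the two green phases are never marked simultaneously) remain amenable to standard model-checking tools.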