As the field of robotics evolves, robots become increasingly multi-functional and complex. Currently, there is a need for solutions that enhance flexibility and computational power without compromising real-time performance. The emergence of fog computing and cloud-native approaches addresses these challenges. In this paper, we integrate a microservice-based architecture with cloud-native fog robotics to investigate its performance in managing complex robotic systems and handling real-time tasks. Additionally, we apply model-based systems engineering (MBSE) to achieve automatic configuration of the architecture and to manage resource allocation efficiently. To demonstrate the feasibility and evaluate the performance of this architecture, we conduct comprehensive evaluations using both bare-metal and cloud setups, focusing particularly on real-time and machine-learning-based tasks. The experimental results indicate that a microservice-based cloud-native fog architecture offers a more stable computational environment compared to a bare-metal one, achieving over 20% reduction in the standard deviation for complex algorithms across both CPU and GPU. It delivers improved startup times, along with a 17% (wireless) and 23% (wired) faster average message transport time. Nonetheless, it exhibits a 37% slower execution time for simple CPU tasks and 3% for simple GPU tasks, though this impact is negligible in cloud-native environments where such tasks are typically deployed on bare-metal systems.
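As a rough illustration of the message-transport comparison described above, the sketch below measures round-trip latency to a TCP echo endpoint, the kind of probe that could be pointed at a containerized microservice and at a bare-metal service in turn. The host, port, payload size, and the echo service itself are placeholder assumptions, not the paper's deployment.

    # Minimal round-trip latency probe, e.g. for comparing a containerized
    # endpoint against a bare-metal one. Host/port/payload are placeholders.
    import socket, statistics, threading, time

    HOST, PORT, N_SAMPLES = "127.0.0.1", 55000, 200

    def echo_server():
        # Stand-in for the service under test: echoes every payload back.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((HOST, PORT)); srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                while (data := conn.recv(4096)):
                    conn.sendall(data)

    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.2)  # give the server a moment to start

    samples = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        payload = b"x" * 256  # small robot-status-sized message
        for _ in range(N_SAMPLES):
            t0 = time.perf_counter()
            cli.sendall(payload)
            got = 0
            while got < len(payload):
                got += len(cli.recv(4096))
            samples.append((time.perf_counter() - t0) * 1e3)  # milliseconds

    print(f"mean {statistics.mean(samples):.3f} ms  "
          f"stdev {statistics.stdev(samples):.3f} ms")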
The introduction of robotics and machine learning to architectural construction is leading to more efficient construction practices. So far, robotic construction has largely been implemented on standardized materials, conducting simple, predictable, and repetitive tasks. We present a novel mobile robotic system and corresponding learning approach that takes a step towards the assembly of natural materials with anisotropic mechanical properties for more sustainable architectural construction. Through experiments both in simulation and in the real world, we demonstrate a dynamically adjusted curriculum and randomization approach for the problem of learning manipulation tasks involving materials with biological variability, namely bamboo. Using our approach, robots are able to transport bamboo bundles and reach goal positions during the assembly of bamboo structures.
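The following is a minimal sketch of one way a dynamically adjusted curriculum with randomization could be structured: the randomization range of a material property widens only once the recent success rate clears a threshold. The class, thresholds, step sizes, and the chosen property are illustrative assumptions, not the paper's implementation.

    # Generic sketch of a dynamically adjusted curriculum: the randomization
    # range for a material property (e.g. bamboo segment mass) widens whenever
    # the recent success rate clears a threshold. Numbers are illustrative.
    import random
    from collections import deque

    class AdaptiveCurriculum:
        def __init__(self, base=1.0, max_spread=0.5, step=0.05,
                     success_threshold=0.8, window=50):
            self.base = base              # nominal property value
            self.spread = 0.0             # current half-width of the range
            self.max_spread = max_spread
            self.step = step
            self.success_threshold = success_threshold
            self.results = deque(maxlen=window)

        def sample(self):
            # Draw a randomized property value for the next episode.
            return self.base * (1.0 + random.uniform(-self.spread, self.spread))

        def report(self, success: bool):
            # Widen the range only once the policy handles the current one.
            self.results.append(success)
            if len(self.results) == self.results.maxlen:
                rate = sum(self.results) / len(self.results)
                if rate >= self.success_threshold:
                    self.spread = min(self.spread + self.step, self.max_spread)
                    self.results.clear()

    curriculum = AdaptiveCurriculum()
    for episode in range(200):
        mass = curriculum.sample()            # feed into the simulator
        success = random.random() < 0.9       # placeholder for a rollout result
        curriculum.report(success)
    print(f"final randomization spread: +/-{curriculum.spread:.2f}")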
A 3D occupancy map that is accurately modeled after real-world environments is essential for reliably performing robotic tasks. Probabilistic volumetric mapping (PVM) is a well-known environment mapping method using volumetric voxel grids that represent the probability of occupancy. The main bottleneck of current CPU-based PVM, such as OctoMap, is determining voxel grids with occupied and free states using ray-shooting. In this letter, we propose an octree-based PVM, called OctoMap-RT, using a hybrid of off-the-shelf ray-tracing GPUs and CPUs to substantially improve CPU-based PVM. OctoMap-RT employs massively parallel ray-shooting using GPUs to generate occupied and free voxel grids and to update their occupancy states in parallel, and it exploits CPUs to restructure the PVM using the updated voxels. Our experiments using various large-scale real-world benchmarking environments with dense and high-resolution sensor measurements demonstrate that OctoMap-RT builds maps up to 41.2 times faster than OctoMap and 9.3 times faster than the recent SuperRay CPU implementation. Moreover, OctoMap-RT constructs a map with 0.52% higher accuracy, in terms of the number of occupancy grids, than both OctoMap and SuperRay.
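For context, the per-ray occupancy update at the heart of PVM can be sketched as follows: voxels traversed by a ray receive a free-space log-odds update and the endpoint voxel an occupied update. This toy uses a flat NumPy grid and naive stepping rather than an octree or the GPU ray-tracing pipeline, and the increments are OctoMap-like defaults assumed for illustration.

    # Log-odds occupancy update for the voxels touched by one sensor ray,
    # the per-ray work that OctoMap-style PVM performs (here on a flat 3D
    # grid with a naive stepping traversal, not the GPU ray-tracing path).
    import math
    import numpy as np

    L_FREE, L_OCC = -0.4, 0.85          # log-odds increments (assumed defaults)
    L_MIN, L_MAX = -2.0, 3.5            # clamping bounds
    RES = 0.1                           # voxel edge length in metres

    log_odds = np.zeros((100, 100, 100), dtype=np.float32)

    def to_voxel(p):
        return tuple(int(c / RES) for c in p)

    def update_ray(origin, hit):
        # March from origin to hit in small steps, marking traversed voxels free.
        o, h = np.asarray(origin, float), np.asarray(hit, float)
        length = float(np.linalg.norm(h - o))
        n_steps = max(1, int(math.ceil(length / (RES * 0.5))))
        for i in range(n_steps):
            v = to_voxel(o + (h - o) * (i / n_steps))
            log_odds[v] = max(L_MIN, log_odds[v] + L_FREE)
        v_hit = to_voxel(h)             # endpoint voxel is evidence of occupancy
        log_odds[v_hit] = min(L_MAX, log_odds[v_hit] + L_OCC)

    update_ray(origin=(5.0, 5.0, 5.0), hit=(7.3, 5.4, 5.1))
    occupied = np.argwhere(log_odds > 0.0)
    print(f"{len(occupied)} voxels currently believed occupied")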
The goal of the Mars Sample Return campaign is to collect soil samples from the surface of Mars and return them to Earth for further study. The samples will be acquired and stored in metal tubes by the Perseverance rover and deposited on the Martian surface. As part of this campaign, it is expected that the Sample Fetch Rover will be in charge of localizing and gathering up to 35 sample tubes over 150 Martian sols. Autonomous capabilities are critical for the success of the overall campaign and for the Sample Fetch Rover in particular. This work proposes a novel system architecture for the autonomous detection and pose estimation of the sample tubes. For the detection stage, a Deep Neural Network and transfer learning from a synthetic dataset are proposed. The dataset is created from photorealistic 3D simulations of Martian scenarios. Additionally, the sample tubes' poses are estimated using Computer Vision techniques such as contour detection and line fitting on the detected area. Finally, laboratory tests of the Sample Localization procedure are performed using the ExoMars Testing Rover on a Mars-like testbed. These tests validate the proposed approach on different hardware architectures, providing promising results for sample detection and pose estimation.
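A minimal sketch of the contour-detection-and-line-fitting step, assuming a binary detection mask is available from the detection stage; the synthetic mask and OpenCV parameters below are illustrative assumptions, not the mission pipeline.

    # Rough pose-from-mask sketch: find the largest contour in a binary
    # detection mask and fit a line to it to get the tube's in-plane axis.
    # The mask below is synthetic; thresholds and sizes are illustrative.
    import cv2
    import numpy as np

    # Synthetic "detection mask": a thin rotated rectangle standing in for a tube.
    mask = np.zeros((480, 640), dtype=np.uint8)
    box = cv2.boxPoints(((320, 240), (200, 18), 25.0)).astype(np.int32)
    cv2.fillPoly(mask, [box], 255)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    tube = max(contours, key=cv2.contourArea)          # keep the largest blob

    # Fit a line through the contour points: (vx, vy) is the unit direction,
    # (x0, y0) a point on the line.
    vx, vy, x0, y0 = cv2.fitLine(tube, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    angle_deg = float(np.degrees(np.arctan2(vy, vx)))

    M = cv2.moments(tube)                              # centroid as 2D position
    cx, cy = M["m10"] / M["m00"], M["m01"] / M["m00"]
    print(f"tube centre ~({cx:.1f}, {cy:.1f}) px, axis angle ~{angle_deg:.1f} deg")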
We revisit pairwise Markov Random Field (MRF) formulations for RGB-D scene flow and leverage novel advances in processor design for real-time implementations. We consider scene flow approaches which consist of data terms enforcing intensity consistency between consecutive images, together with regularisation terms which impose smoothness over the flow field. To achieve real-time operation, previous systems leveraged GPUs and implemented regularisation only between variables corresponding to neighbouring pixels. Such systems could estimate continuously deforming flow fields but the lack of global regularisation over the whole field made them ineffective for visual odometry. We leverage the GraphCore Intelligence Processing Unit (IPU) graph processor chip, which consists of 1216 independent cores called tiles, each with 256 kB local memory. The tiles are connected to an ultrafast all-to-all communication fabric which enables efficient data transmission between the tiles in an arbitrary communication pattern. We propose a distributed formulation for dense RGB-D scene flow based on Gaussian Belief Propagation which leverages the architecture of this processor to implement both local and global regularisation. Local regularisation is enforced for pairs of flow estimates whose corresponding pixels are neighbours, while global regularisation is defined for flow estimate pairs whose corresponding pixels are far from each other on the image plane. Using both types of regularisation allows our algorithm to handle a variety of in-scene motion and makes it suitable for estimating deforming scene flow, piece-wise rigid scene flow and visual odometry within the same system.
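To make the message passing concrete, here is a toy scalar Gaussian Belief Propagation example on a chain, with a data term per variable and smoothness factors between neighbours; it is a generic GBP sketch, not the paper's scene-flow factor graph or its mapping onto IPU tiles, and all parameters are assumed for illustration.

    # Minimal scalar Gaussian Belief Propagation on a chain of variables:
    # each variable has a noisy observation (data term) and smoothness factors
    # tie neighbouring variables together. Generic toy, not the paper's graph.
    import numpy as np

    N_VARS, N_ITERS = 8, 30
    LAMBDA_OBS, LAMBDA_SMOOTH = 1.0, 4.0

    rng = np.random.default_rng(0)
    truth = np.linspace(0.0, 1.0, N_VARS)
    obs = truth + 0.2 * rng.standard_normal(N_VARS)     # noisy measurements

    # Unary (data-term) information form: eta = Lambda * z, Lambda = precision.
    eta_obs, lam_obs = LAMBDA_OBS * obs, np.full(N_VARS, LAMBDA_OBS)

    # Messages along the chain, stored as (eta, Lambda) per direction.
    msg_right = np.zeros((N_VARS, 2))   # message into i from the factor on its left
    msg_left = np.zeros((N_VARS, 2))    # message into i from the factor on its right

    for _ in range(N_ITERS):
        for i in range(N_VARS - 1):     # factor between i and i+1, message to i+1
            cav_eta = eta_obs[i] + msg_right[i, 0]       # cavity excludes msg_left[i]
            cav_lam = lam_obs[i] + msg_right[i, 1]
            lam = LAMBDA_SMOOTH * cav_lam / (LAMBDA_SMOOTH + cav_lam)
            msg_right[i + 1] = (lam * cav_eta / cav_lam, lam)
        for i in range(N_VARS - 1, 0, -1):  # factor between i-1 and i, message to i-1
            cav_eta = eta_obs[i] + msg_left[i, 0]
            cav_lam = lam_obs[i] + msg_left[i, 1]
            lam = LAMBDA_SMOOTH * cav_lam / (LAMBDA_SMOOTH + cav_lam)
            msg_left[i - 1] = (lam * cav_eta / cav_lam, lam)

    belief_lam = lam_obs + msg_right[:, 1] + msg_left[:, 1]
    belief_mean = (eta_obs + msg_right[:, 0] + msg_left[:, 0]) / belief_lam
    print(np.round(belief_mean, 3))     # smoothed estimates of the chain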
This letter explores a deep reinforcement learning (DRL) approach for designing image-based control for edge robots to be implemented on Field Programmable Gate Arrays (FPGAs). Although FPGAs are more power-efficient than CPUs and GPUs, a typical DRL method cannot be applied directly, since FPGAs are composed of many Logic Blocks (LBs) that are fast at logical operations but slow at real-number operations. To cope with this problem, we propose a novel DRL algorithm called Binarized P-Network (BPN), which learns image-input control policies using Binarized Convolutional Neural Networks (BCNNs). To alleviate the instability of reinforcement learning caused by a BCNN with low function approximation accuracy, our BPN adopts a robust value update scheme called Conservative Value Iteration, which is tolerant of function approximation errors. We confirmed the BPN's effectiveness through applications to a visual tracking task in simulation and real-robot experiments on an FPGA.
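The sketch below illustrates the kind of weight binarization BCNN-style layers use, with a straight-through estimator for propagating gradients to real-valued latent weights; it is a generic illustration and does not reproduce the BPN training loop or Conservative Value Iteration.

    # Generic weight-binarization sketch for a BCNN-style layer: real-valued
    # "latent" weights are kept for learning, but the forward pass uses their
    # signs (+1/-1), which maps to cheap logic on an FPGA. The straight-through
    # estimator is a common trick, not necessarily the paper's exact scheme.
    import numpy as np

    rng = np.random.default_rng(0)
    latent_w = 0.1 * rng.standard_normal((16, 8))   # real-valued weights (trained)
    x = rng.standard_normal((4, 8))                 # a small batch of features

    def binarize(w):
        b = np.sign(w)
        b[b == 0] = 1.0                             # avoid zero weights
        return b

    def forward(x, latent_w):
        return x @ binarize(latent_w).T             # only +1/-1 multiplications

    def backward_straight_through(grad_out, x, latent_w, clip=1.0):
        # Straight-through estimator: pass the gradient to the latent weights
        # as if binarization were the identity, but only where |w| <= clip.
        grad_w = grad_out.T @ x
        return grad_w * (np.abs(latent_w) <= clip)

    y = forward(x, latent_w)
    grad_w = backward_straight_through(np.ones_like(y), x, latent_w)
    latent_w -= 0.01 * grad_w                       # one illustrative SGD step
    print(y.shape, grad_w.shape)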
Polymer-based soft robots are difficult to characterize due to their non-linear nature. This difficulty is compounded by multiple additional degrees of freedom, which add complexity to any proposed control strategy. The following work proposes and demonstrates a modular framework to test, debug, and characterize soft robots using the Robot Operating System (ROS), enabling modeless deep reinforcement learning control strategies through hardware-in-the-loop system training. The framework is demonstrated using an actor-critic algorithm to learn a locomotion policy for a two-actuator pneu-net soft robot with integrated resistive flex sensors. In convergent locomotion studies, the learned policy was 89.5% more likely to reach the end-of-frame design goal than random oracle actuation vectors.
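A minimal rospy skeleton for the hardware-in-the-loop pattern described above: subscribe to flex-sensor readings, let a policy choose an actuation vector, and publish it at a fixed rate. The topic names, message types, and placeholder policy are assumptions, not the framework's actual interface.

    # Minimal ROS (rospy) hardware-in-the-loop skeleton: read flex-sensor values,
    # let a policy pick an actuation vector, publish it. Topic names, message
    # types, and the random "policy" are placeholders.
    import random

    import rospy
    from std_msgs.msg import Float32MultiArray

    latest_sensors = [0.0, 0.0]

    def sensor_callback(msg):
        # Cache the most recent resistive flex-sensor readings.
        global latest_sensors
        latest_sensors = list(msg.data)

    def main():
        rospy.init_node("soft_robot_hil_agent")
        rospy.Subscriber("/flex_sensors", Float32MultiArray, sensor_callback)
        cmd_pub = rospy.Publisher("/pneunet_pressures", Float32MultiArray,
                                  queue_size=1)
        rate = rospy.Rate(10)           # 10 Hz control loop
        while not rospy.is_shutdown():
            # Placeholder policy: replace with the actor network's output.
            action = [random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)]
            cmd_pub.publish(Float32MultiArray(data=action))
            rate.sleep()

    if __name__ == "__main__":
        main()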
Computing the gradient of rigid body dynamics is a central operation in many state-of-the-art planning and control algorithms in robotics. Parallel computing platforms such as GPUs and FPGAs can offer performance gains for algorithms with hardware-compatible computational structures. In this letter, we detail the designs of three faster-than-state-of-the-art implementations of the gradient of rigid body dynamics on a CPU, GPU, and FPGA. Our optimized FPGA and GPU implementations provide as much as a 3.0x end-to-end speedup over our optimized CPU implementation by refactoring the algorithm to exploit its computational features, e.g., parallelism at different granularities. We also find that the relative performance across hardware platforms depends on the number of parallel gradient evaluations required.
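As an illustration of what "parallel gradient evaluations" means in practice, the sketch below computes a finite-difference Jacobian of a toy forward-dynamics function by batching all perturbed evaluations so that each one is independent, the granularity a GPU or FPGA can exploit; the damped pendulum and the finite-difference approach are stand-ins assumed for illustration, not the paper's analytic rigid-body gradient.

    # Central finite-difference Jacobian of a toy forward-dynamics function,
    # with all perturbed evaluations batched so each column is independent.
    # The damped pendulum stands in for full rigid-body dynamics.
    import numpy as np

    def forward_dynamics(state, torque=0.0, g=9.81, l=1.0, b=0.1):
        # state rows: [theta, theta_dot]; returns [theta_dot, theta_ddot].
        theta, theta_dot = state[0], state[1]
        theta_ddot = (-g / l) * np.sin(theta) - b * theta_dot + torque
        return np.stack([theta_dot, theta_ddot])

    def fd_jacobian(f, x, eps=1e-6):
        n = x.shape[0]
        # Build 2n perturbed copies of the state: x + eps*e_i and x - eps*e_i.
        plus = x[:, None] + eps * np.eye(n)
        minus = x[:, None] - eps * np.eye(n)
        batch = np.concatenate([plus, minus], axis=1)    # shape (n, 2n)
        out = f(batch)                                   # one batched evaluation
        return (out[:, :n] - out[:, n:]) / (2.0 * eps)   # Jacobian, shape (n, n)

    x = np.array([0.3, -0.5])
    J = fd_jacobian(forward_dynamics, x)
    print(np.round(J, 4))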