QUB-PHEO introduces a vision-based, dyadic dataset with the potential to advance human-robot interaction (HRI) research in assembly operations and intention inference. The dataset captures rich multimodal interactions between two participants, one acting as a 'robot surrogate,' across a variety of assembly tasks that are further broken down into 36 distinct subtasks. With rich visual annotations (facial landmarks, gaze, hand movements, object localization, and more) for 70 participants, QUB-PHEO is offered in two versions: full video data for 50 participants and visual cues for all 70. Designed to improve machine learning models for HRI, QUB-PHEO enables deeper analysis of subtle interaction cues and intentions, promising contributions to the field.
ISBN:
(Print) 9798350377712; 9798350377705
Vision is an important component of robotic perception systems due to the rich information provided by high-resolution image sensors, but computer vision algorithms can be computationally expensive and ill-suited to resource-constrained robotic systems. Here, we present a mm-scale vision system capable of performing absolute pose estimation at 16.5 FPS. This novel vision system uses a commercial off-the-shelf sensor and microcontroller unit, as well as planar light-based landmarks in the environment, to simplify feature detection. We exploit the structure of the planar pose problem to reduce algorithmic complexity and improve latency and energy consumption through software-, processor-, and hardware-in-the-loop testing. The end-to-end system draws 49 mA of current and computes absolute pose estimates to within 15 mm over a number of reference trajectories.
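The planar-structure shortcut the abstract mentions can be sketched as follows. Assuming all landmarks lie in one plane (Z = 0), the camera pose follows from a homography estimated by direct linear transform (DLT) and decomposed with known intrinsics. The intrinsics and point data below are illustrative, not from the paper.

```python
import numpy as np

def planar_pose(K, obj_xy, img_uv):
    """Absolute pose of a camera viewing a planar landmark (all Z = 0).

    Estimate the homography H by DLT from >= 4 correspondences, then
    decompose H = K [r1 r2 t] up to scale. A sketch of the planar-pose
    shortcut; a real system would add noise handling and refinement.
    """
    A = []
    for (X, Y), (u, v) in zip(obj_xy, img_uv):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)            # null vector of the DLT system

    B = np.linalg.inv(K) @ H            # B ~ [r1 r2 t] up to scale
    s = 1.0 / np.linalg.norm(B[:, 0])   # scale from the unit rotation column
    if B[2, 2] < 0:                     # keep the landmark in front of the camera
        s = -s
    r1, r2, t = s * B[:, 0], s * B[:, 1], s * B[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt2 = np.linalg.svd(R)        # project to the nearest rotation matrix
    return U @ Vt2, t
```

Exploiting planarity keeps the linear system small (8 x 9 for four points), which is what makes a 16.5 FPS rate plausible on a microcontroller-class device.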
The focus of this paper is on investigating the use of visual communication between two humanoid robots in order to enhance the coordination of tasks between them. The problem continues to be an interesting and fruitful area of research, from the early days of using multiple robot manipulator arms in manufacturing and space robotics to current research in medical robotics. The approach here is to employ several off-the-shelf algorithms, software, and hardware: the NAO robot and its support software, including Choregraphe; OpenCV to capture and process images; an SVM to classify objects in images; and the Python programming environment. Five robotic actions and three modes were studied. The experiments used one robot as the "viewer" and the second robot as the "subject" being analyzed. Results show that the visual communication system correctly identifies the five movements with 90% accuracy. This research presents an original solution: a model that enables robots to carry out complex service tasks consisting of multiple connected actions in a dynamic environment, and to operate intelligently in different scenes according to their actual requirements. The work enhances the prototype robot's vision function and contributes to a manageable platform that extends the intelligent control capability of service robots in the home environment.
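The classification stage described above can be illustrated with a toy stand-in. The sketch below trains a minimal linear SVM by hinge-loss subgradient descent on hypothetical action feature vectors; the actual system uses an off-the-shelf SVM on image features extracted with OpenCV, so this is only a sketch of the underlying classifier, not the paper's pipeline.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Tiny linear SVM trained by hinge-loss subgradient descent.

    Labels y are +/-1. Illustrative stand-in for the off-the-shelf SVM;
    a real system would train on image features (e.g. HOG) instead of
    these toy vectors.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                  # samples violating the margin
        grad_w = lam * w                    # regularization term
        grad_b = 0.0
        if viol.any():
            grad_w = grad_w - (y[viol, None] * X[viol]).mean(axis=0)
            grad_b = -y[viol].mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, X):
    """Classify feature vectors by the sign of the decision function."""
    return np.sign(X @ w + b)
```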
Search and rescue robots have gained significant attention in the past, as they assist firefighters during rescue missions. The ability to move autonomously or under remote control, with intelligent sensor technology to detect victims in unknown fire-smoke environments, represents a growing technology in fire engineering. Since sensor systems are a component of mobile robots, there is a demand for intelligent robot vision, especially for human detection in fire-smoke environments. In this article, an overview of sensor technologies and their algorithms for human detection in regular and smoky environments is presented. These sensor technologies are categorized into single-sensor and multi-sensor systems. Novel sensor approaches are led by artificial intelligence, 3D mapping, and multi-sensor fusion. The article outlines future research directions in algorithms and applications and helps decision-makers in fire engineering keep abreast of trends, novel applications, and challenges in this field of research.
Existing studies on indoor position recognition employ diverse evaluation methods, which complicates direct accuracy comparisons across techniques. To address this issue, this study proposes a novel framework for evaluating the accuracy of indoor position recognition methods. The proposed framework evaluates accuracy by using the position recognition results of a grid-pattern-tracking autonomous mobile robot (GPT-AMR) as a benchmark. To validate the proposed evaluation method, a comparative analysis was conducted on four position recognition algorithms: (1) a computer vision-based algorithm, (2) a Bluetooth Low Energy (BLE)-based trilateration algorithm, (3) a BLE-based adaptive trilateration algorithm, and (4) a least squares method (LSM)-based algorithm. Experimental results demonstrated that the proposed evaluation method, which employs GPT-AMR, offers improved speed, accuracy, and practical applicability compared to conventional approaches. Furthermore, this method enables objective comparisons and evaluations of a wide range of indoor position recognition technologies, including both computer vision- and BLE-based algorithms, using a standardized criterion. Future research will focus on systematically validating the generalizability of the proposed method across different indoor environments and operational conditions. This study aims to advance indoor position recognition technology for autonomous mobile robots (AMRs) and improve their applicability in various service robotics domains.
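As a sketch of the least-squares approach behind algorithms like (4), BLE trilateration can be linearised by subtracting one anchor's range equation from the others and solving by ordinary least squares. The anchor layout and ranges below are illustrative, not from the study.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares 2-D trilateration from >= 3 anchor distances.

    From ||p - x_i||^2 = d_i^2, subtracting the first anchor's equation
    removes the quadratic term in p, leaving the linear system
    2 (x_i - x_0) . p = d_0^2 - d_i^2 + ||x_i||^2 - ||x_0||^2,
    which ordinary least squares solves directly.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    x0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0 ** 2 - d[1:] ** 2
         + (anchors[1:] ** 2).sum(axis=1) - (x0 ** 2).sum())
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With more than three anchors the same call averages out ranging noise, which is the practical reason for preferring the least-squares form over solving anchor triples exactly.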
In recent years, there has been a significant amount of research on algorithms and control methods for distributed collaborative robots. However, the emergence of collective behavior in a swarm is still difficult to predict and control. Nevertheless, human interaction with the swarm helps render the swarm more predictable and controllable, as human operators can utilize intuition or knowledge that is not always available to the swarm. Therefore, this letter designs the Dynamic Visualization Research Platform for Multimodal Human-Swarm Interaction (DVRP-MHSI), which is an innovative open system that can perform real-time dynamic visualization and is specifically designed to accommodate a multitude of interaction modalities (such as brain-computer, eye-tracking, electromyographic, and touch-based interfaces), thereby expediting progress in human-swarm interaction research. Specifically, the platform consists of custom-made low-cost omnidirectional wheeled mobile robots, multitouch screens, and two workstations. In particular, the multitouch screens can recognize human gestures and the shapes of objects placed on them, and they can also dynamically render diverse scenes. One of the workstations processes communication information within the robots, and the other implements human-robot interaction methods. The development of DVRP-MHSI frees researchers from hardware and software details and allows them to focus on versatile swarm algorithms and human-swarm interaction methods without being limited to predefined and static scenarios, tasks, and interfaces. The effectiveness and potential of the platform for human-swarm interaction studies are validated by several demonstrative experiments.
Background: SLAM plays an important role in the navigation of robots, unmanned aerial vehicles, and unmanned vehicles. The positioning accuracy will affect the accuracy of obstacle avoidance. The quality of map const...
Although deep learning has achieved satisfactory performance in computer vision, a large volume of images is required. However, collecting images is often expensive and challenging. Many image augmentation algorithms have been proposed to alleviate this issue. Understanding existing algorithms is, therefore, essential for finding suitable methods and developing novel ones for a given task. In this study, we perform a comprehensive survey of image augmentation for deep learning using a novel informative taxonomy. To examine the basic objective of image augmentation, we introduce challenges in computer vision tasks and vicinity distribution. The algorithms are then classified into three categories: model-free, model-based, and optimizing policy-based. The model-free category employs methods from image processing, whereas the model-based approach leverages image generation models to synthesize images. In contrast, the optimizing policy-based approach aims to find an optimal combination of operations. Based on this analysis, we believe that our survey enhances the understanding necessary for choosing suitable methods and designing novel algorithms. (c) 2023 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://***/licenses/by-nc-nd/4.0/).
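A minimal sketch of the model-free category, assuming grayscale images with values in [0, 1]: random flips, small translations, and brightness jitter are exactly the kind of image-processing operations the taxonomy groups here. The specific parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Model-free augmentation for a 2-D grayscale image in [0, 1]:
    random horizontal flip, random translation via pad-and-crop, and
    multiplicative brightness jitter. Parameter choices are illustrative."""
    out = img.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                          # horizontal flip
    ph, pw = 2, 2                                   # max shift in pixels
    padded = np.pad(out, ((ph, ph), (pw, pw)), mode="constant")
    dy = rng.integers(0, 2 * ph + 1)
    dx = rng.integers(0, 2 * pw + 1)
    out = padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    # brightness jitter, clipped back to the valid intensity range
    return np.clip(out * rng.uniform(0.8, 1.2), 0.0, 1.0)
```

Each call yields a different plausible variant of the same image, which is how augmentation widens the vicinity distribution around each training sample.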
Computer vision focuses on optimizing computers to understand and interpret visual data from photos or videos, while image recognition specializes in detecting and categorizing objects or patterns in photographs. Tech...
ISBN:
(Print) 9798331522759; 9798331522742
In robotics, route planning is essential to ensure the safe and efficient movement of robots within the workplace. This process involves determining a trajectory, usually a series of points in the workspace, to achieve a specific goal. Key criteria include reducing the length of the route, minimizing the number of manoeuvres, and avoiding obstacles. Route planning techniques generally require modelling the environment, representing both its structure and its obstacles (fixed or mobile), and implementing algorithms that generate a trajectory through the free areas of the environment. This approach often includes constructing a graph of possible trajectories and using minimum-path search algorithms, such as A*. This article presents a route planning algorithm that combines Voronoi diagrams with artificial vision algorithms. In addition, a case study is described in which the proposed technique is applied to guide an automated system through a maze drawn on a whiteboard by a user.
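The minimum-path search stage can be sketched with A* on a small occupancy grid. In the paper the search graph would instead be built from the Voronoi diagram of the maze, but the search algorithm itself is the same; the grid and maze below are illustrative.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (0 = free, 1 = obstacle)
    with a Manhattan-distance heuristic. Cells are (row, col) tuples."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()              # tiebreaker keeps heap entries comparable
    frontier = [(h(start), next(tie), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue                     # already expanded at lower cost
        came_from[node] = parent
        if node == goal:                 # walk parents back to reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier,
                                   (ng + h((nr, nc)), next(tie), ng, (nr, nc), node))
    return None                          # goal unreachable
```

Swapping the grid for Voronoi-edge nodes only changes the neighbour generation and edge costs; the open set, heuristic, and path reconstruction carry over unchanged.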