Assistance systems for manual assembly offer various advantages and help manage the complexity of modern assembly processes. The setup of such assembly assistance systems and the creation of assembly instructions during the setup process are currently mostly manual, resulting in a time-consuming and inefficient procedure. Traditional input modalities, such as mouse and keyboard, are the only options available thus far. This leads to inefficient and error-prone setup procedures involving textual inputs, image capturing of the assembled parts, and programming of systems. This article proposes a novel system aimed at simplifying and speeding up this setup process. Based on an analysis of process and technical requirements, this article presents a concept at the software and hardware level that conceptualizes how assembly assistance systems could be instructed through demonstration. We further present a novel approach to human activity recognition through the integration of logic trees to improve the creation of assembly instructions. As a result, this article presents an initial approach to integration and identifies further research needs.
When an end-user instructs a taskable robot on a new task, it is important for the robot to learn the user's intention for the task. Knowing the user's intention, represented as desired goal conditions, allows the robot to generalize across variations of the learned task seen at execution time. However, it has proven challenging to learn goal conditions due to the large, noisy, and complex space of goal conditions expressed by human users. This paper introduces Semantic Robot Programming with Multiple Demonstrations (SRP-MD) to learn a generative model of latent end-user task goal conditions from multiple end-user demonstrations in a shared workspace. By learning a generative model of the goal conditions, SRP-MD generalizes to task instances even when the quantity of objects to be arranged is not in the training set or novel object instances are included. At test time, a new goal is drawn from the learned generative model given the objects present in the initial scene. The efficacy of SRP-MD as a step toward taskable robots is shown on a Fetch robot learning and executing bin-packing tasks in a simulated environment with grocery items. © 2023 Elsevier B.V. All rights reserved.
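The core idea of deriving goal conditions from several demonstrations can be illustrated with a much simpler frequency-based stand-in. The sketch below is not the paper's generative model; the relation tuples, function names, and the support threshold are all hypothetical, purely to show how relations observed across demonstrations can be filtered against the objects present in a new scene:

```python
from collections import Counter

def learn_goal_model(demonstrations):
    """Estimate how often each goal relation appears across end-user
    demonstrations. A relation is a (predicate, obj_a, obj_b) tuple,
    e.g. ("in_bin", "soup_can", "bin_1"). Illustrative stand-in for
    SRP-MD's generative model, not the authors' implementation."""
    counts = Counter(rel for demo in demonstrations for rel in demo)
    n = len(demonstrations)
    return {rel: c / n for rel, c in counts.items()}

def propose_goal(model, scene_objects, min_support=0.5):
    """Derive goal conditions for a new scene: keep relations whose
    arguments are present in the scene and that occurred in at least
    min_support of the demonstrations."""
    return {rel for rel, p in model.items()
            if rel[1] in scene_objects and rel[2] in scene_objects
            and p >= min_support}
```

Even this toy version shows the generalization behavior the abstract describes: relations over absent objects are dropped, so the proposed goal adapts to whatever objects appear in the initial scene.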
Programming by demonstration (PbD) is used to transfer a task from a human teacher to a robot, where it is of high interest to understand the underlying structure of what has been demonstrated. Such a demonstrated task can be represented as a sequence of so-called actions or skills. This work focuses on the recognition part of the task transfer. We propose a framework that recognizes skills online during a kinesthetic demonstration by means of position and force-torque (wrench) sensing; it therefore works independently of visual perception. The recognized skill sequence constitutes a task representation that lets the user intuitively understand what the robot has learned. The skill recognition algorithm combines symbolic skill segmentation, which makes use of pre- and post-conditions, with data-driven prediction, which uses support vector machines for skill classification. This combines the advantages of both techniques: inexpensive evaluation of symbols and data-driven classification of complex observations. The framework is thus able to detect a larger variety of skills, such as manipulation and force-based skills that can be used in assembly tasks. The applicability of our framework is proven in a user study that achieves 96% accuracy in online skill recognition and highlights the benefits of the generated task representation in comparison to a baseline representation. The results show that the task load could be reduced, trust and explainability could be increased, and the users were able to debug the robot program using the generated task representation. © 2023 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://***/licenses/by/4.0/).
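The two-stage structure described in this abstract, symbolic segmentation pruning the skill set before a data-driven classifier decides, can be sketched as follows. This is a minimal illustration only: the skill names, preconditions, and wrench features are assumptions, and a nearest-centroid rule stands in for the paper's support vector machine:

```python
import numpy as np

# Hypothetical symbolic preconditions per skill; in the paper these are
# pre-/post-conditions evaluated cheaply on symbolic observations.
SKILL_PRECONDITIONS = {
    "transport": lambda obs: True,
    "grasp":     lambda obs: not obs["gripper_closed"],
    "insert":    lambda obs: obs["gripper_closed"],
}

def recognize_skill(obs, centroids):
    """Two-stage recognition: symbolic segmentation first prunes the
    candidate skills, then a data-driven classifier decides among the
    remaining ones based on wrench features."""
    candidates = [s for s, pre in SKILL_PRECONDITIONS.items() if pre(obs)]
    x = np.asarray(obs["wrench"], dtype=float)  # e.g. mean force-torque window
    # Nearest-centroid stand-in for the SVM classification step
    return min(candidates, key=lambda s: np.linalg.norm(x - centroids[s]))
```

The symbolic gate is what makes the data-driven step cheap and robust: a skill whose precondition fails (e.g. grasping with an already closed gripper) is never even scored on the sensor data.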
To facilitate the use of robots in small and medium-sized enterprises (SMEs), they have to be easily and quickly deployed by non-expert users. Programming by demonstration (PbD) is considered a fast and intuitive approach to handling this requirement. However, one of the major drawbacks of pure PbD is that it may suffer from poor generalisation capabilities, as it mainly operates on motion-level representations. This work proposes a method to semantically represent a demonstrated skill, so as to identify the elements of the workspace that are relevant for the characterisation of the skill itself, as well as its preconditions and effects. This way, the robot can automatically abstract from the demonstration and memorise the skill in a more general way. An experimental case study consisting of a manipulation task is reported to validate the approach. © 2023 Elsevier B.V. All rights reserved.
ISBN (Print): 9798400704246
Algot is a newly developed visual programming language that seeks to bridge the syntax-semantics gap in programming via a novel implementation of programming by demonstration. Preliminary research, which will be presented separately at SIGCSE this year, suggests that Algot may be useful for teaching foundational computer science concepts at both secondary and tertiary levels. In this proposed SIGCSE demo session, attendees will have a chance to interact with Algot and learn about its potential benefits in their own classrooms.
Flexible robotics will be a major enabling technology for applying robot-based automation beyond the traditionally suitable high-volume automotive or electronics production. The increased demand for flexibility caused by the individualized production typical of most SMEs requires a higher level of flexibility from robots as well: they should be able to learn and to provide greater autonomy through improved skills and extended reasoning capabilities. This publication investigates whether a novel ANN methodology able to process 3D surface data can generalize process knowledge in a one-shot learning-by-demonstration setting, so that tasks can be executed on similar but geometrically unequal objects in future settings. The methodology generalizes not at the symbolic or trajectory level but at the surface-geometry level, and was applied to a simple geometric object at lab scale. The algorithms introduced are applicable to more complex objects of practical relevance. © 2021 The Authors. Published by Elsevier Ltd.
ISBN (Print): 9781450383912
While Alexa can perform over 100,000 skills, its capability covers only a fraction of what is possible on the web. Individuals need and want to automate a long tail of web-based tasks which often involve visiting different websites and require programming concepts such as function composition, conditional, and iterative evaluation. This paper presents DIYA (Do-It-Yourself Assistant), a new system that empowers users to create personalized web-based virtual assistant skills that require the full generality of composable control constructs, without having to learn a formal programming language. With DIYA, the user demonstrates their task of interest in the browser and issues a few simple voice commands, such as naming the skill and adding conditions on the actions. DIYA turns these multi-modal specifications into voice-invocable skills written in the ThingTalk 2.0 programming language we designed for this purpose. DIYA is a prototype that works in the Chrome browser. Our user studies show that 81% of the proposed routines can be expressed using DIYA. DIYA is easy to learn, and 80% of users surveyed find DIYA useful.
The progressive automation framework allows the seamless transition of a robot from kinesthetic guidance to autonomous operation mode during programming by demonstration of discrete motion tasks. This is achieved by the synergetic action of dynamic movement primitives (DMPs), virtual fixtures, and variable impedance control. The proposed DMPs encode the demonstrated trajectory and synchronize with the current demonstration from the user, so that the generated reference motion follows the human's demonstration. The proposed virtual fixtures assist the user in repeating the learned kinematic behavior but allow penetration, so that the user can modify the learned trajectory if needed. The tracking error, in combination with the interaction forces and torques, is used by a variable stiffness strategy to adjust the progressive automation level and transition the leading role between the human and the robot. An energy tank approach is utilized to apply the designed controller and to prove the passivity of the overall control method. An experimental evaluation of the proposed framework is presented for a pick-and-place task, and results show that the transition to autonomous mode is achieved within a few demonstrations.
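The DMP encoding at the heart of this framework follows a well-known pattern: a spring-damper system pulling toward the goal, plus a phase-gated forcing term fitted to one demonstration. The minimal 1-D sketch below illustrates that pattern only; the gains, basis-function layout, and class interface are generic textbook choices, not the controller or synchronization mechanism of this paper:

```python
import numpy as np

class DiscreteDMP:
    """Minimal 1-D discrete dynamic movement primitive: a critically
    damped spring toward the goal g plus a forcing term f(s) learned
    from a single demonstrated trajectory."""

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_s=4.0):
        self.alpha, self.beta, self.alpha_s = alpha, beta, alpha_s
        # Basis centers spaced along the phase s = exp(-alpha_s * t / tau)
        self.c = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))
        d = np.diff(self.c, append=self.c[-1] * 0.5)
        self.h = 1.0 / d**2
        self.w = np.zeros(n_basis)

    def fit(self, y, dt):
        """Learn forcing-term weights from a demonstrated trajectory y(t)."""
        self.y0, self.g = y[0], y[-1]
        self.tau = (len(y) - 1) * dt
        yd = np.gradient(y, dt)
        ydd = np.gradient(yd, dt)
        s = np.exp(-self.alpha_s * np.arange(len(y)) * dt / self.tau)
        # Invert the transformation system to get the target forcing term
        f = self.tau**2 * ydd - self.alpha * (self.beta * (self.g - y) - self.tau * yd)
        for i in range(len(self.w)):  # locally weighted regression per basis
            psi = np.exp(-self.h[i] * (s - self.c[i])**2)
            self.w[i] = np.sum(s * psi * f) / (np.sum(s**2 * psi) + 1e-10)

    def rollout(self, dt):
        """Replay the learned reference motion toward the goal."""
        y, yd, s = self.y0, 0.0, 1.0
        traj = [y]
        for _ in range(int(round(self.tau / dt))):
            psi = np.exp(-self.h * (s - self.c)**2)
            f = s * (psi @ self.w) / (psi.sum() + 1e-10)
            ydd = (self.alpha * (self.beta * (self.g - y) - self.tau * yd) + f) / self.tau**2
            yd += ydd * dt
            y += yd * dt
            s += -self.alpha_s * s / self.tau * dt
            traj.append(y)
        return np.array(traj)
```

Because the forcing term is gated by the decaying phase variable, it vanishes toward the end of the motion and the spring term guarantees convergence to the goal, which is what makes DMPs a convenient reference generator for the variable-stiffness blending the abstract describes.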
The manufacturing industry is seeing an increase in demand for more custom-made, low-volume production. This type of production is rarely automated and is to a large extent still performed manually. To keep up with the competition and market demands, manufacturers will have to undertake the effort to automate such manufacturing processes. However, automating low-volume production is no small feat, as the solution should be adaptable and future-proof against unexpected changes in customers' demands. In this paper, we propose a reconfigurable robot workcell aimed at automating low-volume production. The developed workcell can adapt to changes in manufacturing processes by employing a number of passive, reconfigurable hardware elements, supported by ROS-based, modular control software. To further facilitate and expedite the setup process, we integrated intuitive, user-friendly robot programming methods with the available hardware. The system was evaluated by implementing five production processes from different manufacturing industries.