Explainable Artificial Intelligence (XAI) attempts to help humans better understand machine learning decisions and has been identified as a critical component for increasing the trustworthiness of complex black-box systems, such as deep neural networks. In this article, we propose a generic and comprehensive framework named SNIPPET and create a user interface for the subjective evaluation of visual explanations, focusing on finding human-friendly explanations. SNIPPET considers human-centered evaluation tasks and incorporates the collection of human annotations. These annotations can serve as valuable feedback to validate the qualitative results obtained from the subjective assessment tasks. Moreover, we consider different user background categories during the evaluation process to ensure diverse perspectives and comprehensive evaluation. We demonstrate SNIPPET on a DeepFake face dataset. Distinguishing real from fake faces is a non-trivial task even for humans, as it depends on rather subtle features, making it a challenging use case. Using SNIPPET, we evaluate four popular XAI methods that provide visual explanations, including Gradient-weighted Class Activation Mapping (Grad-CAM) and attention rollout. Based on our experimental results, we observe preference variations among different user categories. We find that most participants favor the explanations produced by attention rollout. Moreover, when it comes to XAI-assisted understanding, those who lack relevant background knowledge often consider the visual explanations insufficient to help them understand. We open-source our framework for continued data collection and annotation at https://***/XAI-SubjEvaluation/SNIPPET.
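As a minimal sketch of the rollout method the abstract refers to: attention rollout aggregates a transformer's per-layer attention maps into a single token-level relevance map by mixing in the residual connection and multiplying across layers. The function name and the 0.5 residual weight below follow the common formulation of the technique, not code from SNIPPET; the attention matrices are assumed to be head-averaged and row-stochastic.

```python
import numpy as np

def attention_rollout(attentions):
    """Aggregate per-layer attention maps into a rollout relevance map.

    attentions: list of (tokens, tokens) head-averaged attention matrices,
    one per transformer layer, with rows summing to 1.
    Returns a (tokens, tokens) rollout matrix.
    """
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for A in attentions:
        # Mix in the identity to account for residual connections,
        # then renormalize rows so each stays a valid distribution.
        A_res = 0.5 * A + 0.5 * np.eye(n)
        A_res = A_res / A_res.sum(axis=-1, keepdims=True)
        # Propagate attention through the layers by matrix product.
        rollout = A_res @ rollout
    return rollout
```

For a vision transformer, the row of the rollout matrix corresponding to the classification token is typically reshaped into a patch grid and overlaid on the image as the visual explanation.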
Remote robot manipulation with human control enables applications in which safety and environmental constraints are adverse to humans (e.g., underwater, space robotics, and disaster response) or the complexity of the task demands human-level cognition and dexterity (e.g., robotic surgery and manufacturing). These systems typically use direct teleoperation at the motion level and are usually limited to low-DOF arms and two-dimensional (2D) perception. Improving dexterity and situational awareness demands new interaction and planning workflows. We explore the use of human-robot teaming through teleautonomy with assisted planning for remote control of a dual-arm dexterous robot for multi-step manipulation, and conduct a within-subjects experimental assessment (n = 12 expert users) to compare it against direct teleoperation using an imitation controller with 2D and three-dimensional (3D) perception, as well as against teleoperation through a teleautonomy interface. The proposed assisted planning approach achieves task times comparable with direct teleoperation while improving other objective and subjective metrics, including re-grasps, collisions, and TLX workload. Assisted planning in the teleautonomy interface achieves faster task execution and removes a significant interaction with the operator's expertise level, resulting in a performance equalizer across users. Our study protocol, metrics, and models for statistical analysis might also serve as a general benchmarking framework in teleoperation domains. Accompanying video and reference R code: https://***/cdarpino/THRIteleop/
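To illustrate the kind of within-subjects comparison the abstract describes, a paired test on per-participant task times might look as follows. The numbers are hypothetical and purely illustrative, not data from the study, and the authors' actual statistical models (released as R code) may differ; the point is only that each participant experiences every condition, so the analysis operates on within-participant differences.

```python
import math
from statistics import mean, stdev

# Hypothetical task completion times (seconds) for 12 participants under
# two conditions of a within-subjects design: direct teleoperation vs.
# the teleautonomy interface with assisted planning. Illustrative only.
direct   = [412, 388, 455, 430, 398, 442, 470, 405, 420, 390, 460, 415]
assisted = [400, 380, 440, 425, 390, 430, 455, 400, 410, 385, 445, 405]

# Paired t-test: each participant sees both conditions, so we test
# whether the within-participant differences center on zero.
diffs = [d - a for d, a in zip(direct, assisted)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))
print(f"t({n - 1}) = {t_stat:.2f}")
```

A within-subjects design like this controls for between-operator variability (e.g., expertise), which is why the study can detect whether a condition removes the interaction with expertise level.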