Author affiliations: Concordia University, Department of Psychology, Montreal, QC H3G 1M8, Canada; United States Military Academy, Department of Electrical Engineering & Computer Science, West Point, NY 10996, United States
Publication: IEEE Transactions on Technology and Society (IEEE Trans. Technol. Soc.)
Year/Volume/Issue: 2024, Vol. 5, No. 1
Pages: 61–70
Funding: Office of Naval Research, FY21 Multi-University Research Initiative, C5ISR
Abstract: While previous studies of trust in artificial intelligence have focused on perceived user trust, this paper examines how an external agent (e.g., an auditor) assigns responsibility, perceives trustworthiness, and explains the successes and failures of AI. In two experiments, participants (university students) reviewed scenarios about automation failures, rated the responsibility and trustworthiness of the agents involved, and selected a preferred explanation type. Participants' cumulative responsibility ratings for the three agents (operators, developers, and AI) exceeded 100%, implying that participants were not attributing trust in a wholly rational manner and that trust in the AI might serve as a proxy for trust in the human software developer. A dissociation between responsibility and trustworthiness suggested that participants relied on different cues for each judgment, with the kind of technology and its perceived autonomy affecting their ratings. Finally, the kind of explanation participants used to understand a situation differed depending on whether the AI succeeded or failed. © 2020 IEEE.