Author Affiliations: Indraprastha Institute of Information Technology Delhi, Department of Computer Science and Engineering, New Delhi 110020, India; Indian Institute of Technology Jodhpur, Department of Computer Science and Engineering, Jodhpur 342011, India; University at Buffalo, Department of Computer Science and Engineering, Buffalo, NY 14260, United States
Publication: IEEE Transactions on Technology and Society (IEEE Trans. Technol. Soc.)
Year/Volume/Issue: 2023, Vol. 4, Issue 1
Pages: 54-67
Core Indexing:
Funding: the Tata Consultancy Services (TCS) Ph.D. Fellowship; the Swarnajayanti Fellowship by the Government of India; Facebook's Ethics in AI Award
Subject: Face recognition
Abstract: Humans are known to favor individuals who belong to the same groups as they do, a biased behavior termed in-group bias. The groups can be formed on the basis of ethnicity, age, or even a favorite sports team. Taking cues from this observation, we inspect whether deep learning networks mimic this human behavior and are affected by in-group and out-group biases. In this first-of-its-kind research, the behavior of face recognition models is evaluated to understand whether, like humans, models encode group-specific features for face recognition, and where bias is encoded in these models. The analysis is performed for two use-cases of bias in face recognition models: age and ethnicity. Thorough experimental evaluation leads to several insights: (i) deep learning models focus on different facial regions for different ethnic and age groups, and (ii) large variation in face verification performance is observed across sub-groups, for both well-known and our own trained deep networks. Based on these observations, a novel bias index is presented for evaluating a trained model's level of bias. We believe that a better understanding of how deep learning models work and encode bias, along with the proposed bias index, will enable researchers to address the challenge of bias in AI and develop more robust and fairer algorithms for mitigating bias. © 2020 IEEE.
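The abstract does not give the formula for the proposed bias index, but the general idea it describes, scoring a model by how much its face-verification accuracy varies across demographic sub-groups, can be sketched as follows. This is a minimal illustration under assumed definitions: the functions verification_accuracy and subgroup_bias_index, and the max-minus-min gap used as the index, are hypothetical stand-ins, not the index actually proposed in the paper.

```python
import numpy as np

def verification_accuracy(scores, labels, threshold):
    """Fraction of face pairs correctly verified at a fixed similarity threshold."""
    predictions = scores >= threshold
    return float(np.mean(predictions == labels))

def subgroup_bias_index(scores, labels, groups, threshold=0.5):
    """Hypothetical bias index: gap between the best- and worst-performing
    sub-group's verification accuracy. 0 means equal accuracy in every group;
    larger values mean larger performance disparities between sub-groups.
    NOTE: an illustrative stand-in, not the paper's actual index."""
    groups = np.asarray(groups)
    per_group = {
        g: verification_accuracy(scores[groups == g], labels[groups == g], threshold)
        for g in np.unique(groups)
    }
    values = list(per_group.values())
    return max(values) - min(values), per_group

# Toy usage with synthetic similarity scores for two ethnicity sub-groups.
rng = np.random.default_rng(0)
scores = rng.uniform(0.0, 1.0, size=200)            # pairwise similarity scores
labels = rng.integers(0, 2, size=200).astype(bool)  # True = genuine (same-identity) pair
groups = np.array(["group_a"] * 100 + ["group_b"] * 100)
index, per_group = subgroup_bias_index(scores, labels, groups)
print(f"bias index: {index:.3f}  per-group accuracy: {per_group}")
```

On synthetic random scores as above, the index is near zero by construction; on a real model it would expose the kind of sub-group performance gaps the paper reports for age and ethnicity.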