Author affiliations: China Telecom Research Institute, Beijing 102209, China; Shanghai 200122, China; Shenzhen 518055, China; Shenzhen 518055, China; Shenzhen 518107, China; Beijing 100190, China; School of Computer Science and Engineering, Macau University of Science and Technology, China; Tianjin University of Science & Technology, Tianjin 300457, China; Center for Research on Intelligent Perception and Computing, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
Publication: arXiv
Year/Volume/Issue: 2024
Subject: Economic and social effects
Abstract: Iris recognition is widely used in high-security scenarios due to its stability and distinctiveness. However, iris images captured by different devices exhibit consistent, device-related differences, which strongly affect classification algorithms for anti-spoofing. Irises of different races also affect classification, raising the risk of identity theft. It is therefore necessary to improve the cross-domain capability of iris anti-spoofing (IAS) methods so that they are more robust across different races and devices. However, no comprehensive protocol for this currently exists. To address this gap, we propose an Iris Anti-Spoofing Cross-Domain-Testing (IAS-CDT) Protocol, which involves 10 datasets belonging to 7 databases, published by 4 institutions and collected with 6 different devices. It contains three hierarchical sub-protocols aimed at evaluating the average performance, cross-racial generalization, and cross-device generalization of IAS models. Moreover, to address the cross-device generalization challenge posed by the IAS-CDT Protocol, we employ multiple model parameter sets to learn from the multiple sub-datasets. Specifically, we utilize a Mixture of Experts (MoE) to fit complex data distributions with multiple sub-neural networks. To further enhance generalization, we propose a novel method, Masked-MoE (MMoE), which randomly masks a portion of tokens for some experts and requires their outputs to be similar to those of the unmasked experts, effectively mitigating the overfitting issue of MoE. For the evaluation, we selected ResNet50, ViT-B/16, CLIP, and FLIP as representative
Copyright © 2024, The Authors. All rights reserved.
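The masking-and-consistency idea described in the abstract can be sketched numerically. The sketch below is a minimal toy illustration, not the paper's implementation: the random linear "experts", the 25% mask ratio, and the mean-squared-error consistency term are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 8 tokens of dimension 4, and two "experts" modeled
# as random linear maps (stand-ins for sub-neural networks).
tokens = rng.normal(size=(8, 4))
W_masked = rng.normal(size=(4, 4))    # expert whose input tokens are masked
W_unmasked = rng.normal(size=(4, 4))  # reference expert that sees all tokens

# Randomly mask (zero out) a portion of the tokens for one expert.
mask_ratio = 0.25
mask = rng.random(tokens.shape[0]) < mask_ratio
masked_tokens = tokens.copy()
masked_tokens[mask] = 0.0

out_masked = masked_tokens @ W_masked
out_unmasked = tokens @ W_unmasked

# Consistency objective: the masked expert's outputs are pushed
# toward the unmasked expert's outputs (MSE over all tokens).
consistency_loss = float(np.mean((out_masked - out_unmasked) ** 2))
```

In training, this consistency term would be minimized alongside the main anti-spoofing loss, forcing experts that see incomplete inputs to agree with those that see full inputs, which is how the abstract frames MMoE's regularizing effect against overfitting.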