Affiliations: Zhejiang Univ, State Key Lab Blockchain & Data Secur, Hangzhou 310027, Peoples R China; Hangzhou High Tech Zone Binjiang Inst Blockchain &, Hangzhou 310027, Peoples R China; Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China; Fudan Univ, Dept Informat Management & Business Intelligence, Shanghai 200437, Peoples R China
Publication: IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING (IEEE Trans. Dependable Secure Comput.)
Year/Volume/Issue: 2025, Vol. 22, No. 3
Pages: 2687-2704
Core Indexing:
Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology (degree awardable in Engineering or Science)]
Funding: National Natural Science Foundation of China [62102353, 62072398]
Keywords: Data models; Predictive models; Shape; Training; Statistical distributions; Vectors; Multiprotocol label switching; Machine learning; Computational modeling; Accuracy; Data privacy; machine learning; security; membership inference
Abstract: Neural networks are vulnerable to data inference attacks, including the membership inference attack, the model inversion attack, and the attribute inference attack. In this paper, we propose Purifier to defend against membership inference attacks by quantifying the differences between dataset members and non-members in three dimensions: individual shape, statistical distribution, and prediction label. Purifier transforms the confidence scores produced by the target classifier into purified confidence scores that are indistinguishable across the dimensions above. We conduct experiments on widely used datasets and models. The results show that Purifier offers robust defense against membership inference attacks with superior efficacy compared to prior defense techniques, while maintaining minimal utility degradation (e.g., less than a 0.7% classification accuracy drop on most datasets). Additionally, our extended experiments explore the effectiveness of Purifier in defending against the model inversion attack and the attribute inference attack.
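The abstract describes Purifier as transforming the target classifier's confidence scores into "purified" scores that no longer distinguish members from non-members, while keeping the model useful. The paper's actual transformation is learned from data; as a purely illustrative stand-in, the sketch below coarsens confidence vectors by rounding (the `purify` helper and its `precision` knob are hypothetical, not the paper's method) while preserving the prediction label, one of the three dimensions the abstract names:

```python
import numpy as np

def purify(confidences, precision=1):
    """Illustrative confidence purification (NOT the paper's learned method):
    coarsen each confidence vector so it carries less member-identifying
    detail, while keeping the predicted label unchanged."""
    c = np.atleast_2d(np.asarray(confidences, dtype=float))
    labels = np.argmax(c, axis=1)          # remember each original prediction
    coarse = np.round(c, precision)        # discard fine-grained "shape" detail
    coarse = np.clip(coarse, 1e-6, None)   # keep all scores strictly positive
    # Nudge the original top class so coarsening never flips the label.
    coarse[np.arange(len(coarse)), labels] += 1e-3
    # Renormalize so each row is again a probability distribution.
    return coarse / coarse.sum(axis=1, keepdims=True)

scores = np.array([[0.721, 0.153, 0.126],
                   [0.050, 0.048, 0.902]])
purified = purify(scores)
```

After purification, many distinct raw score vectors map to the same coarse output, which is the intuition behind making members and non-members indistinguishable; the real defense achieves this with a trained transformation rather than simple rounding.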