
Quantifying Privacy Leakage in Split Inference via Fisher-Approximated Shannon Information Analysis

Authors: Deng, Ruijun; Lu, Zhihui; Duan, Qiang

Affiliations: School of Computer Science, Fudan University, Shanghai, China; Engineering Research Center of Cyber Security Auditing and Monitoring, Ministry of Education, Shanghai, China; Shanghai Blockchain Engineering Research Center, Shanghai, China; Information Sciences & Technology Department, Pennsylvania State University, Abington, PA, United States

Published in: arXiv

Year: 2025


Subject: Deep neural networks

Abstract: Split inference (SI) partitions deep neural networks into distributed sub-models, enabling privacy-preserving collaborative learning. Nevertheless, it remains vulnerable to Data Reconstruction Attacks (DRAs), in which adversaries exploit the exposed smashed data to reconstruct raw inputs. Despite extensive research on adversarial attack-defense games, fundamental analysis of the underlying privacy risks remains lacking. This paper establishes an information-theoretic framework for quantifying privacy leakage, defining it as the adversary's certainty and deriving both average-case and worst-case error bounds. We introduce Fisher-approximated Shannon information (FSInfo), a novel privacy metric that uses Fisher Information (FI) to make privacy leakage operationally computable. We empirically show that our privacy metric correlates well with empirical attacks, and we investigate several factors that affect privacy leakage, namely data distribution, model size, and overfitting. © 2025, CC BY.
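The core intuition behind a Fisher Information privacy metric can be illustrated with a minimal sketch. Everything below is a hypothetical toy example, not the paper's FSInfo metric: the client-side sub-model `smashed` is a single tanh unit, and the smashed data is assumed to reach the adversary through an additive Gaussian channel z = f(x) + N(0, σ²). For that channel the Fisher information of the input is J(x) = f'(x)² / σ², and the Cramér–Rao bound 1/J(x) lower-bounds the mean-squared error of any unbiased reconstruction attack, so larger J(x) indicates greater leakage.

```python
import numpy as np

def smashed(x, w=1.5):
    # Hypothetical client-side sub-model: a single tanh unit with weight w.
    return np.tanh(w * x)

def fisher_information(x, sigma=0.1, w=1.5, eps=1e-5):
    # For z = f(x) + N(0, sigma^2), the Fisher information of x given one
    # observation of z is J(x) = f'(x)^2 / sigma^2.  The derivative is
    # estimated with a central finite difference.
    df = (smashed(x + eps, w) - smashed(x - eps, w)) / (2 * eps)
    return df ** 2 / sigma ** 2

def cramer_rao_bound(x, sigma=0.1, w=1.5):
    # Cramér–Rao lower bound on the MSE of any unbiased reconstruction
    # attack: higher Fisher information => smaller bound => more leakage.
    return 1.0 / fisher_information(x, sigma, w)

# Inputs in the saturated region of tanh carry less Fisher information,
# i.e. they are harder to reconstruct from the smashed data.
j_center = fisher_information(0.0)   # steep region: high leakage
j_edge = fisher_information(3.0)     # saturated region: low leakage
```

A real instantiation would replace the scalar toy channel with the network's actual smashed-data distribution, where the Fisher information becomes a matrix computed over the sub-model's Jacobian.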
