Feature Reconstruction Attacks and Countermeasures of DNN Training in Vertical Federated Learning

Authors: Ye, Peng; Jiang, Zhifeng; Wang, Wei; Li, Bo; Li, Baochun

Affiliations: Hong Kong University of Science and Technology, Department of Computer Science and Engineering, Hong Kong; University of Toronto, Department of Electrical and Computer Engineering, Toronto M5S 1A1, Canada

Publication: IEEE Transactions on Dependable and Secure Computing (IEEE Trans. Dependable Secure Comput.)

Year/Volume/Issue: 2025, Vol. 22, No. 3

Pages: 2659-2669

Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]

Funding: The work was supported in part by an RGC RIF under Grant R6021-20, in part by RGC TRS under Grant T43-513/23N-2, in part by RGC CRF under Grants C7004-22G, C1029-22G, and C6015-23G, and in part by RGC GRF under Grants 16200221, 16207922, and 16207423.

Keywords: Federated learning

Abstract: Federated learning (FL) has increasingly been deployed, in its vertical form, among organizations to facilitate secure collaborative training. In vertical FL (VFL), participants hold disjoint features of the same set of sample instances. The participant with labels, the active party, initiates training and interacts with the other participants, the passive parties. It remains largely unknown whether and how an active party can extract private feature data owned by passive parties, especially when training deep neural network (DNN) models. This work examines the feature security problem of DNN training in VFL. We consider a DNN model partitioned between the active and passive parties, where the passive party holds a subset of the input layer with some features of binary values. Though the problem is proven to be NP-hard, we demonstrate that, unless the feature dimension is exceedingly large, it remains feasible, both theoretically and practically, to launch a reconstruction attack with an efficient search-based algorithm that prevails over current feature protection. We propose a novel feature protection scheme that perturbs intermediate results with fabricated input features, effectively misleading reconstruction attacks towards pre-specified random values. The evaluation shows that it withstands feature reconstruction attacks in various VFL applications with negligible impact on model performance.
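
To make the partitioned setup and the search-based attack described in the abstract concrete, the following toy sketch in Python is a minimal illustration under stated assumptions, not the paper's actual algorithm: the variable names, the layer dimensions, and the brute-force search are all illustrative. It simulates a first layer split between the two parties, where the passive party sends its partial pre-activation over binary features and an active party that knows the passive party's weight slice exhaustively searches the 2^d binary candidates, which is tractable only when the passive feature dimension d is small.

# Toy sketch (assumed setup, not the paper's algorithm): the passive party holds
# d binary features and a slice W_p of the first-layer weights; it sends its
# partial pre-activation z_p = W_p @ x_p to the active party. If the active
# party knows W_p, it can brute-force all 2^d binary vectors and pick the one
# whose partial pre-activation matches z_p -- feasible only for small d.
import itertools
import numpy as np

rng = np.random.default_rng(0)

d_passive = 8            # passive party's (binary) feature dimension
hidden = 16              # width of the shared first hidden layer

W_p = rng.normal(size=(hidden, d_passive))   # passive party's weight slice
x_p = rng.integers(0, 2, size=d_passive)     # private binary features

z_p = W_p @ x_p          # intermediate result sent to the active party

# Search-based reconstruction by the active party (exhaustive over 2^d).
best, best_err = None, np.inf
for candidate in itertools.product([0, 1], repeat=d_passive):
    err = np.linalg.norm(W_p @ np.array(candidate) - z_p)
    if err < best_err:
        best, best_err = np.array(candidate), err

print("true features: ", x_p)
print("reconstructed: ", best)
print("exact recovery:", bool(np.array_equal(best, x_p)))

In this simplified picture, the countermeasure described in the abstract (perturbing intermediate results with fabricated input features) would alter z_p so that the same search lands on a pre-specified random vector rather than on x_p; the sketch above only illustrates the unprotected attack surface.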
