
Feature Reconstruction Attacks and Countermeasures of DNN Training in Vertical Federated Learning

Authors: Ye, Peng; Jiang, Zhifeng; Wang, Wei; Li, Bo; Li, Baochun

Affiliations: Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China; Univ Toronto, Dept Elect & Comp Engn, Toronto, ON M5S 1A1, Canada

Published in: IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING (IEEE Trans. Dependable Secure Comput.)

Year/Volume/Issue: 2025, Vol. 22, No. 3

Pages: 2659-2669

Core Indexing:

Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awarded in Engineering or Science)]

Funding: RGC RIF [R6021-20]; RGC TRS [T43-513/23N-2]; RGC CRF [C7004-22G, C1029-22G, C6015-23G, 16200221, 16207922, 16207423]

Keywords: Training; Vectors; Data models; Artificial neural networks; Adaptation models; Feature extraction; Computational modeling; Security; Protection; Federated learning; DNN; vertical federated learning; feature recovery attack; feature protection scheme

Abstract: Federated learning (FL) has increasingly been deployed, in its vertical form, among organizations to facilitate secure collaborative training. In vertical FL (VFL), participants hold disjoint features of the same set of sample instances. The party with labels, the active party, initiates training and interacts with the other participants, the passive parties. It remains largely unknown whether and how an active party can extract private feature data owned by passive parties, especially when training deep neural network (DNN) models. This work examines the feature security problem of DNN training in VFL. We consider a DNN model partitioned between the active and passive parties, where the passive party holds a subset of the input layer with some features of binary values. Though the problem is proved to be NP-hard, we demonstrate that, unless the feature dimension is exceedingly large, it remains feasible, both theoretically and practically, to launch a reconstruction attack with an efficient search-based algorithm that prevails over current feature protection techniques. We propose a novel feature protection scheme that perturbs intermediate results and fabricates input features, effectively misleading reconstruction attacks towards pre-specified random values. Our evaluation shows that it withstands feature reconstruction attacks in various VFL applications with negligible impact on model performance.
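The following is a minimal, hypothetical sketch of the partitioned-model setting the abstract describes, assuming PyTorch. The class names, dimensions, and the summation used to combine the two parties' partial first-layer outputs are illustrative assumptions, not the authors' implementation; it only shows how the passive party's binary features stay local while intermediate results cross the party boundary.

# A minimal sketch of the partitioned-input-layer VFL setting, assuming PyTorch.
# All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class PassiveParty(nn.Module):
    """Holds a subset of the input features (binary-valued here) and the
    corresponding slice of the first layer; exposes only intermediate results."""
    def __init__(self, num_passive_features: int, hidden_dim: int):
        super().__init__()
        self.partial_input_layer = nn.Linear(num_passive_features, hidden_dim, bias=False)

    def forward(self, passive_features: torch.Tensor) -> torch.Tensor:
        # Only this intermediate result is sent to the active party,
        # never the raw features themselves.
        return self.partial_input_layer(passive_features)

class ActiveParty(nn.Module):
    """Holds the labels, its own feature slice, and the rest of the DNN."""
    def __init__(self, num_active_features: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.partial_input_layer = nn.Linear(num_active_features, hidden_dim, bias=False)
        self.top_model = nn.Sequential(
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, active_features: torch.Tensor,
                passive_intermediate: torch.Tensor) -> torch.Tensor:
        # The first hidden layer combines both parties' partial outputs
        # (summation is an illustrative choice).
        hidden = self.partial_input_layer(active_features) + passive_intermediate
        return self.top_model(hidden)

# Illustrative training step: the active party sees passive_out but not passive_x.
passive = PassiveParty(num_passive_features=8, hidden_dim=16)
active = ActiveParty(num_active_features=12, hidden_dim=16, num_classes=2)
passive_x = torch.randint(0, 2, (4, 8)).float()  # binary features held by the passive party
active_x = torch.randn(4, 12)                    # features held by the active party
logits = active(active_x, passive(passive_x))
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (4,)))
loss.backward()  # gradients flow back to the passive party's layer during VFL training

In this setting, what the active party observes (the passive party's intermediate results and the gradients exchanged during training) is exactly the information the paper's reconstruction attack exploits, and what its protection scheme perturbs.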
