Towards Adversarial Evaluations for Inexact Machine Unlearning

Authors: Goel, Shashwat; Prabhu, Ameya; Sanyal, Amartya; Lim, Ser-Nam; Torr, Philip; Kumaraguru, Ponnurangam

Author affiliations: IIIT Hyderabad, India; University of Oxford, United Kingdom; ETH Zurich, Switzerland; MPI-IS, Germany; Meta AI, United States

Publication: arXiv

Year: 2022

Subject: Learning algorithms

Abstract: Machine learning models face growing concerns regarding the storage of personal user data and the adverse impact of corrupted training data, such as backdoors or systematic bias. Machine unlearning can address both by allowing post-hoc deletion of the affected training data from a learned model. Achieving this exactly is computationally expensive; consequently, recent works have proposed inexact unlearning algorithms that solve the problem approximately, as well as evaluation methods to test the effectiveness of these algorithms. In this work, we first outline some necessary criteria for evaluation methods and show that no existing evaluation satisfies them all. We then design a stronger black-box evaluation method, the Interclass Confusion (IC) test, which adversarially manipulates data during training to detect insufficient unlearning. We also propose two analytically motivated baseline methods (EU-k and CF-k) that outperform several popular inexact unlearning methods. Overall, we demonstrate how adversarial evaluation strategies can help analyze various unlearning phenomena and thereby guide the development of stronger unlearning algorithms. © 2022, CC BY.
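
As a reading aid, the abstract's central idea, the Interclass Confusion (IC) test, can be sketched in a few lines. The sketch below is a hypothetical reconstruction from the abstract alone, assuming a PyTorch-style classifier; the helper names (make_confused_set, interclass_confusion), the two-class label swap, and the exact confusion metric are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the Interclass Confusion (IC) test described in the
# abstract: swap the labels of two classes to form a "confused" forget set
# before training, then, after unlearning that set, check whether the model
# still confuses the two classes. Names and metric details are assumptions.
import torch

def make_confused_set(labels: torch.Tensor, class_a: int, class_b: int):
    """Swap labels between class_a and class_b to build the forget set."""
    swapped = labels.clone()
    mask_a, mask_b = labels == class_a, labels == class_b
    swapped[mask_a] = class_b
    swapped[mask_b] = class_a
    forget_idx = torch.nonzero(mask_a | mask_b, as_tuple=False).squeeze(1)
    return swapped, forget_idx  # train on `swapped`, later unlearn `forget_idx`

@torch.no_grad()
def interclass_confusion(model, inputs, labels, class_a: int, class_b: int):
    """Fraction of held-out class_a/class_b samples predicted as the *other*
    class; a high value after unlearning means the swap was not forgotten."""
    preds = model(inputs).argmax(dim=1)
    in_pair = (labels == class_a) | (labels == class_b)
    crossed = ((labels == class_a) & (preds == class_b)) | \
              ((labels == class_b) & (preds == class_a))
    return crossed[in_pair].float().mean().item()
```

Under this reading, an unlearning method passes when interclass_confusion on held-out data drops back toward the level of a model retrained from scratch without the confused set; the EU-k and CF-k baselines mentioned in the abstract would, as their names suggest, retrain from scratch (EU-k) or fine-tune (CF-k) only the final k layers on the retained data.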
