Author Affiliations: Nanjing Univ Sci & Technol, Sch Cyber Sci & Engn, Nanjing 210094, Peoples R China; Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China; Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310058, Peoples R China; Univ Western Australia, Dept Comp Sci & Software Engn, Perth, WA 6009, Australia; Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
Publication: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (IEEE Trans. Neural Netw. Learn. Syst.)
Year/Volume/Issue: 2025, Vol. PP
Pages: PP
Core Indexing:
Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]
Funding: National Natural Science Foundation of China [62072239, 62372236]; Open Foundation of the State Key Laboratory of Integrated Services Networks [ISN24-15]; Qing Lan Project of Jiangsu Province
Keywords: Data models; Surveys; Measurement; Training; Electronic mail; Data privacy; Taxonomy; General Data Protection Regulation; Computational modeling; Approximation algorithms; federated learning (FL); large language model (LLM); machine learning (ML); machine unlearning (MU)
Abstract: Personal digital data is a critical asset, and governments worldwide have enacted laws and regulations to protect data privacy, endowing data owners with the right to be forgotten (RTBF). In the course of machine learning (ML), this right requires a model provider to delete user data, and its subsequent influence on ML models, upon user request. Machine unlearning (MU) has emerged to address this requirement and has garnered ever-increasing attention from both industry and academia. Specifically, MU allows model providers to eliminate the influence of unlearned data without retraining the model from scratch, ensuring the model behaves as if it had never encountered that data. While the area has developed rapidly, comprehensive surveys capturing the latest advances are lacking. Recognizing this gap, we conduct an extensive exploration to map the landscape of MU, including a fine-grained taxonomy of unlearning algorithms under centralized and distributed settings, the debate on approximate unlearning, verification and evaluation metrics, and challenges and solutions across various applications. We also focus on the motivations, challenges, and specific methods for deploying unlearning in large language models (LLMs), as well as potential attacks targeting unlearning processes. The survey concludes by outlining directions for future research, hoping to serve as a beacon for interested scholars.
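To make the abstract's distinction concrete, below is a minimal Python sketch (not from the surveyed paper) of the exact-unlearning baseline that approximate MU methods are measured against: retraining from scratch on only the retained data. The model choice (scikit-learn LogisticRegression), the synthetic dataset, and the forget-set split are all illustrative assumptions.

```python
# Minimal sketch: exact unlearning via retraining on retained data.
# Everything here (model, data, split) is a hypothetical illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Indices whose owners invoked the right to be forgotten (RTBF).
forget_idx = np.arange(50)
retain_mask = np.ones(len(X), dtype=bool)
retain_mask[forget_idx] = False

# Original model: trained on all data, including the to-be-forgotten points.
original = LogisticRegression(max_iter=1000).fit(X, y)

# Exact-unlearning baseline: retrain on the retained data only. The resulting
# model provably never saw the forgotten data, but the cost of retraining
# from scratch is what approximate MU methods try to avoid.
retrained = LogisticRegression(max_iter=1000).fit(X[retain_mask], y[retain_mask])

# Evaluation metrics surveyed in the paper typically compare an approximately
# unlearned model against this retrained reference, e.g., by measuring how
# often their predictions agree on held-out data.
agreement = np.mean(original.predict(X) == retrained.predict(X))
print(f"prediction agreement with retrained reference: {agreement:.3f}")
```

An approximate unlearning method would instead update `original` in place (for example, via influence-based parameter corrections) and be judged by how closely its behavior matches `retrained` at a fraction of the retraining cost.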