Author affiliations: School of Information Engineering, Beijing 100190, China; City University of Hong Kong Shenzhen Research Institute, Hong Kong, Hong Kong; City University of Hong Kong, Department of Computer Science, Hong Kong, Hong Kong; Shenzhen University, College of Computer Science and Software Engineering, Shenzhen 518060, China
Publication: IEEE Internet of Things Journal (IEEE Internet Things J.)
Year/Volume/Issue: 2025, Vol. 12, No. 9
Pages: 12008-12020
Core indexing:
Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awarded in Engineering or Science)]
Funding: This work was supported in part by the National Key Research and Development Program of China under Grant 2023YFE0208800; in part by the Research Grants Council of the Hong Kong Special Administrative Region, China, under Project CityU 11202124 and Project CityU 11201422; in part by the NSF of Guangdong Province under Project 2024A1515010192; and in part by the Innovation and Technology Commission of Hong Kong under Project MHP/072/23.
Keywords: Contrastive Learning
Abstract: Split learning (SL) is widely regarded as a promising distributed machine learning framework with superior privacy-preserving properties and lower communication and computation costs. However, in real Internet of Things (IoT) scenarios, existing SL may not perform well because the local data of IoT devices often do not follow the same distribution. This causes the model to continually adapt to the current data distribution in each training epoch, resulting in catastrophic forgetting. Existing methods typically add raw or generated data from previous devices to the current training epoch to review knowledge, but direct access to the local data of other devices carries serious privacy risks, and data augmentation techniques based on generative networks often have poor robustness and increase the computation cost on the device side. To address these challenges, we propose a new SL framework called SL without Forgetting (SLwF). To mitigate catastrophic forgetting without accessing any previous data, we propose a contrastive learning-based training method that leverages current training data to review previous knowledge and better learn new knowledge. Furthermore, we adopt an exponential moving average (EMA)-based model update strategy to preserve lost knowledge, further alleviating the forgetting problem. We implement the SLwF framework in real IoT scenarios and extensively evaluate its performance on four publicly available datasets. Compared with related work (e.g., IoTSL), SLwF achieves better final accuracy and robustness while avoiding excessive device energy consumption. © 2014 IEEE.
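The abstract names two building blocks: a contrastive (InfoNCE-style) loss and an exponential moving average (EMA) model update. The sketch below is illustrative only, not the authors' SLwF code: it shows the generic form of each technique (function names, the 0.99 decay, and the toy vectors are assumptions for demonstration).

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.5):
    """Generic InfoNCE loss for one anchor:
    -log( exp(sim(a, pos)/t) / sum over {pos} + negatives of exp(sim/t) )."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    # Similarity to the positive first, then to each negative.
    sims = [cos(anchor, positive)] + [cos(anchor, n) for n in negatives]
    exps = [math.exp(s / temperature) for s in sims]
    return -math.log(exps[0] / sum(exps))

def ema_update(ema_weights, current_weights, decay=0.99):
    """Generic EMA update: w_ema <- decay * w_ema + (1 - decay) * w.
    The slowly moving EMA copy retains older knowledge between epochs."""
    return [decay * e + (1 - decay) * c
            for e, c in zip(ema_weights, current_weights)]

# Toy usage: similar pairs give a small contrastive loss...
loss = info_nce(anchor=[1.0, 0.0], positive=[1.0, 0.0],
                negatives=[[0.0, 1.0]])

# ...and repeated EMA updates drift slowly toward the current weights.
ema = [0.0, 0.0]
for step_weights in ([1.0, 2.0], [1.0, 2.0], [1.0, 2.0]):
    ema = ema_update(ema, step_weights)
```

With decay 0.99, three updates move the EMA copy only about 3% of the way toward the current weights, which is the mechanism by which an EMA model changes slowly and so resists forgetting.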