Author Affiliations: Department of Electrical and Electronic Engineering, University of Hong Kong, Pok Fu Lam, Hong Kong; Institute of Space Internet, Fudan University, Shanghai, China; School of Computer Science, Fudan University, Shanghai, China; Faculty of Applied Sciences, Macao Polytechnic University, China; School of Computer Engineering, Nanyang Technological University, Singapore
Publication: arXiv
Year/Volume/Issue: 2024
Core Indexing:
Abstract: As AI models expand in size, it has become increasingly challenging to deploy federated learning (FL) on resource-constrained edge devices. To tackle this issue, split federated learning (SFL) has emerged as an FL framework that reduces the workload on edge devices via model splitting; it has received extensive attention from the research community in recent years. Nevertheless, most prior works on SFL focus only on a two-tier architecture without harnessing multi-tier cloud-edge computing resources. In this paper, we aim to analyze and optimize the learning performance of SFL under multi-tier systems. Specifically, we propose the hierarchical SFL (HSFL) framework and derive its convergence bound. Based on the theoretical results, we formulate a joint optimization problem for model splitting (MS) and model aggregation (MA). To solve this challenging problem, we decompose it into MS and MA sub-problems that can be solved via an iterative descending algorithm. Simulation results demonstrate that the tailored algorithm can effectively optimize MS and MA for SFL within virtually any multi-tier system. Copyright © 2024, The Authors. All rights reserved.
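As a rough illustration of the model-splitting (MS) idea described in the abstract, the sketch below divides a small network at an assumed cut layer so that only the early layers run on the edge device and the remaining layers run on a server, with only the cut-layer activations crossing the link. This is a minimal sketch, not the authors' HSFL implementation; the cut point, layer sizes, and variable names are illustrative assumptions.

```python
# Minimal sketch of model splitting in split federated learning (SFL).
# Assumptions: a toy fully connected model, a hypothetical cut after layer 2,
# and random data in place of a real client dataset.
import torch
import torch.nn as nn

layers = [nn.Linear(32, 64), nn.ReLU(),
          nn.Linear(64, 64), nn.ReLU(),
          nn.Linear(64, 10)]
cut = 2  # hypothetical model-splitting (MS) decision: first two layers stay on-device

client_model = nn.Sequential(*layers[:cut])   # client-side sub-model on the edge device
server_model = nn.Sequential(*layers[cut:])   # server-side sub-model on the edge/cloud tier

x = torch.randn(8, 32)                        # one local mini-batch (illustrative)
smashed = client_model(x)                     # cut-layer activations sent uplink
logits = server_model(smashed)                # server completes the forward pass
loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (8,)))
loss.backward()                               # gradients flow back through both sub-models
```

In a hierarchical setting, the client-side sub-models would additionally be averaged at edge servers and then at the cloud, which is the model-aggregation (MA) decision the paper optimizes jointly with the cut point.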