Authors:
Liu, Fangbo; Yang, Feng
Guangxi Univ, Sch Comp & Elect & Informat, Nanning 530004, Guangxi, Peoples R China
Guangxi Univ, Guangxi Key Lab Multimedia Commun Network Technol, Nanning 530004, Guangxi, Peoples R China
Guangxi Univ, Key Lab Parallel Distributed & Intelligent Comp, Educ Dept Guangxi Zhuang Autonomous Reg, Nanning, Guangxi, Peoples R China
Details:
ISBN (electronic): 9789819947492
ISBN (print): 9789819947485; 9789819947492
Federated learning promises to alleviate the problem of scarce labelled data in medical image segmentation while protecting data privacy and security. However, medical image segmentation under federated learning still faces several problems: how to achieve high-precision segmentation with a federated model in the presence of data imbalance, whether the communication efficiency of the federated process can be effectively improved, and how to handle gradient explosion in federated distillation. To address these difficulties, this paper proposes a new optimization algorithm for federated distillation. First, we design a small-scale network model for communication between the clients and the central server to reduce communication overhead; then, we design a distillation method that keeps the local model stable. Finally, we add a coordinator to the central server before aggregation and introduce a model filtering mechanism that effectively filters and evaluates client model parameters and weights, so that global model optimization is maintained while the gradient explosion caused by malicious or extreme models is prevented and the accuracy of target-domain segmentation is improved. We conducted experiments on two medical image segmentation tasks and show that our approach achieves effective results on non-IID data: the average DICE coefficient reaches 82.79% while the communication overhead is reduced by a factor of 16.
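The server-side step the abstract describes can be pictured with a short sketch. This is a minimal illustration, not the authors' implementation: the ClientUpdate structure, the median-based outlier rule in filter_updates, and the FedAvg-style weighted average in aggregate are all assumptions standing in for the paper's coordinator and model filtering mechanism.

# Hypothetical sketch of a coordinator that filters client updates before
# aggregation (guarding against exploding or malicious gradients), then
# averages the surviving small proxy models weighted by local sample counts.
import numpy as np
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ClientUpdate:
    weights: Dict[str, np.ndarray]  # parameters of the small model sent by a client
    num_samples: int                # size of the client's local dataset

def update_norm(update: ClientUpdate) -> float:
    """L2 norm over all parameter tensors of one client update."""
    return float(np.sqrt(sum(np.sum(w ** 2) for w in update.weights.values())))

def filter_updates(updates: List[ClientUpdate], ratio: float = 3.0) -> List[ClientUpdate]:
    """Drop updates whose norm exceeds `ratio` times the median norm (assumed rule)."""
    norms = [update_norm(u) for u in updates]
    median = float(np.median(norms))
    return [u for u, n in zip(updates, norms) if n <= ratio * median]

def aggregate(updates: List[ClientUpdate]) -> Dict[str, np.ndarray]:
    """FedAvg-style aggregation weighted by local sample counts."""
    total = sum(u.num_samples for u in updates)
    return {
        name: sum(u.weights[name] * (u.num_samples / total) for u in updates)
        for name in updates[0].weights
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clients = [ClientUpdate({"conv": rng.normal(size=(3, 3))}, num_samples=100 + 10 * i)
               for i in range(4)]
    # one "exploded" client that the coordinator should filter out
    clients.append(ClientUpdate({"conv": rng.normal(size=(3, 3)) * 1e6}, num_samples=50))
    kept = filter_updates(clients)
    new_global = aggregate(kept)
    print(f"kept {len(kept)}/{len(clients)} clients, aggregated 'conv' mean={new_global['conv'].mean():+.4f}")

In this toy run the fifth client's update has a norm roughly six orders of magnitude above the median, so it is dropped before averaging; any robust filtering rule with the same effect would fit the role the abstract assigns to the coordinator.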
Details:
Kernelization is a strong and widely-applied technique in parameterized complexity. A kernelization algorithm, or simply a kernel, is a polynomial-time transformation that transforms any given parameterized instance to an equivalent instance of the same problem, with size and parameter bounded by a function of the parameter in the input. A kernel is polynomial if the size and parameter of the output are polynomially bounded by the parameter of the input. In this paper we develop a framework which allows showing that a wide range of FPT problems do not have polynomial kernels. Our evidence relies on hypotheses made in the classical world (i.e. non-parametric complexity), and revolves around a new type of algorithm for classical decision problems, called a distillation algorithm, which is of independent interest. Using the notion of distillation algorithms, we develop a generic lower-bound engine that allows us to show that a variety of FPT problems, fulfilling certain criteria, cannot have polynomial kernels unless the polynomial hierarchy collapses. These problems include k-PATH, k-CYCLE, k-EXACT CYCLE, k-SHORT CHEAP TOUR, k-GRAPH MINOR ORDER TEST, k-CUTWIDTH, k-SEARCH NUMBER, k-PATHWIDTH, k-TREEWIDTH, k-BRANCHWIDTH, and several optimization problems parameterized by treewidth and other structural parameters. (C) 2009 Elsevier Inc. All rights reserved.
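The notion of a distillation algorithm at the heart of this abstract can be stated compactly. The LaTeX sketch below records the standard OR-form of the definition and the informal shape of the resulting lower-bound statement; the paper's exact formal conditions and the choice of target language may differ in detail.

% Sketch of the distillation notion referred to in the abstract;
% definitions and theorem statement are paraphrased, not quoted.
\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\begin{document}

\begin{definition}[Distillation algorithm, OR-form]
A \emph{distillation algorithm} for a language $L \subseteq \Sigma^{*}$ (into a
language $L' \subseteq \Sigma^{*}$) takes as input a sequence
$x_{1},\dots,x_{t} \in \Sigma^{*}$, runs in time polynomial in
$\sum_{i=1}^{t}\lvert x_{i}\rvert$, and outputs a string $y$ such that
\[
  y \in L' \iff \exists\, i \in \{1,\dots,t\} : x_{i} \in L,
  \qquad
  \lvert y \rvert \le \operatorname{poly}\Bigl(\max_{1 \le i \le t}\lvert x_{i}\rvert\Bigr).
\]
\end{definition}

\begin{theorem}[Lower-bound engine, informal]
If some NP-complete language admits a distillation algorithm, then
$\mathrm{coNP} \subseteq \mathrm{NP/poly}$ and the polynomial hierarchy
collapses. Hence an FPT problem whose instances compose in the appropriate
sense (e.g.\ $k$-\textsc{Path}) admits no polynomial kernel unless the
polynomial hierarchy collapses.
\end{theorem}

\end{document}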