Coding schemes for several problems in network information theory are constructed starting from point-to-point channel codes that are designed for symmetric channels. Given that the point-to-point codes satisfy certain properties pertaining to the rate, the error probability, and the distribution of decoded sequences, bounds on the performance of the coding schemes are derived and shown to hold irrespective of other properties of the codes. In particular, we consider the problems of lossless and lossy source coding, Slepian-Wolf coding, Wyner-Ziv coding, Berger-Tung coding, multiple description coding, asymmetric channel coding, Gelfand-Pinsker coding, coding for multiple access channels, Marton coding for broadcast channels, and coding for cloud radio access networks (C-RANs). We show that the coding schemes can achieve the best known inner bounds for these problems, provided that the constituent point-to-point channel codes are rate-optimal. This would allow one to leverage commercial off-the-shelf codes for point-to-point symmetric channels in the practical implementation of codes over networks. Simulation results demonstrate the gain of the proposed coding schemes compared to existing practical solutions to these problems.
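To illustrate the kind of construction the abstract describes, the sketch below shows the classical syndrome-based approach to Slepian-Wolf coding built from an off-the-shelf point-to-point linear code, here the (7,4) Hamming code. This is a textbook toy instance under the assumption of at most one bit of disagreement with the side information, not the paper's actual scheme; the function names `sw_encode`/`sw_decode` are illustrative.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j (1-indexed)
# is the binary expansion of j, so a syndrome directly names an error position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

def sw_encode(x):
    """Compress a 7-bit source block to its 3-bit syndrome (rate 3/7)."""
    return H @ x % 2

def sw_decode(s, y):
    """Recover x from syndrome s and side information y, assuming x and y
    differ in at most one position (the code's correctable error weight)."""
    se = (s + H @ y) % 2          # syndrome of the error pattern e = x XOR y
    e = np.zeros(7, dtype=np.uint8)
    if se.any():
        pos = int(''.join(map(str, se)), 2) - 1  # syndrome = binary position
        e[pos] = 1
    return (y + e) % 2

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 7).astype(np.uint8)
y = x.copy()
y[2] ^= 1                         # correlated side information: one flipped bit
x_hat = sw_decode(sw_encode(x), y)
```

The encoder never sees the side information, yet the decoder recovers `x` exactly from 3 transmitted bits instead of 7, which is the sense in which a symmetric-channel code is repurposed for a distributed source coding problem.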
Recently, federated learning (FL), which replaces data sharing with model sharing, has emerged as an efficient and privacy-friendly machine learning (ML) paradigm. One of the main challenges in FL is the huge communication cost of model aggregation. Many compression/quantization schemes have been proposed to reduce this cost. However, the following question remains unanswered: What is the fundamental trade-off between the communication cost and the FL convergence performance? In this paper, we answer this question. Specifically, we first put forth a general framework for model aggregation performance analysis based on rate-distortion theory. Under the proposed analysis framework, we derive an inner bound of the rate-distortion region of model aggregation. We then conduct an FL convergence analysis to connect the aggregation distortion and the FL convergence performance, and formulate an aggregation distortion minimization problem to improve the FL convergence performance. Two algorithms are developed to solve this problem. Numerical results on aggregation distortion, convergence performance, and communication cost demonstrate that the baseline model aggregation schemes still have great potential for further improvement.
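The rate-distortion trade-off the abstract refers to can be made concrete with a minimal sketch: each client quantizes its model update with a b-bit uniform quantizer before aggregation, and the server measures the mean-squared distortion of the aggregate against the lossless average. This is an illustrative baseline under assumed Gaussian updates clipped to [-1, 1], not the paper's analysis framework; `uniform_quantize` and all parameters are hypothetical.

```python
import numpy as np

def uniform_quantize(v, bits, v_max=1.0):
    """Midrise uniform quantizer on [-v_max, v_max] with 2**bits levels."""
    levels = 2 ** bits
    step = 2 * v_max / levels
    idx = np.clip(np.floor((v + v_max) / step), 0, levels - 1)
    return -v_max + (idx + 0.5) * step

rng = np.random.default_rng(1)
K, d = 10, 10_000                                  # clients, model dimension
updates = rng.normal(0, 0.3, (K, d)).clip(-1, 1)   # assumed client updates
target = updates.mean(axis=0)                      # lossless aggregate

distortion = {}
for bits in (1, 2, 4, 8):
    agg = np.mean([uniform_quantize(u, bits) for u in updates], axis=0)
    distortion[bits] = float(np.mean((agg - target) ** 2))
    # rate = bits per entry per client; distortion shrinks as rate grows
```

Sweeping `bits` traces one operating curve of communication cost versus aggregation distortion; the paper's point is to characterize how far such baseline curves sit from the fundamental rate-distortion limit.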
ISBN (print): 9781538674628
Recently, federated learning (FL), which replaces data sharing with model sharing, has emerged as an efficient and privacy-friendly machine learning paradigm. One of the main challenges in FL is the huge communication cost of model aggregation. Many compression/quantization schemes have been proposed to reduce this cost, with remarkable results. However, the following two questions remain unanswered: What are the performance limits of model aggregation? How much can FL convergence performance be improved by modifying existing model aggregation schemes? In this paper, we answer these two questions. For the first, we put forth a general model aggregation performance analysis framework based on rate-distortion theory and derive an inner bound of the rate-distortion region of model aggregation. For the second, we develop an algorithm to search for the minimum achievable aggregation distortion, which, combined with the achievability scheme of the derived inner bound, yields a method to numerically approximate the limits of FL convergence performance. Numerical results demonstrate that the baseline model aggregation schemes still have great potential for further improvement.