
Refine Search Results

Document Type

  • 9 conference papers
  • 7 journal articles

Collection Scope

  • 16 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 16 Engineering
    • 13 Computer Science and Technology...
    • 8 Electrical Engineering
    • 6 Information and Communication Engineering
    • 1 Software Engineering
  • 2 Science
    • 2 Physics
  • 1 Medicine
    • 1 Clinical Medicine

Topics

  • 16 distributed stoc...
  • 4 deep learning
  • 2 gradient communi...
  • 2 merged-gradient
  • 2 neural networks
  • 2 speech recogniti...
  • 2 internet-of-thin...
  • 2 federated learni...
  • 2 random access
  • 2 gpu
  • 1 gradient reconst...
  • 1 data privacy in ...
  • 1 compressive sens...
  • 1 compressed sensi...
  • 1 deep neural netw...
  • 1 deep learning tr...
  • 1 decentralize spa...
  • 1 sparse matrix ve...
  • 1 compressed and s...
  • 1 communication-ef...

Institutions

  • 2 natl inst inform...
  • 2 postech dept ele...
  • 2 amazon com seatt...
  • 1 deakin univ sch ...
  • 1 princeton univ d...
  • 1 univ cambridge c...
  • 1 univ hildesheim ...
  • 1 deakin univ sch ...
  • 1 swiss fed inst t...
  • 1 meiji univ kanag...
  • 1 kddi res inc sai...
  • 1 univ chinese aca...
  • 1 univ toronto ece...
  • 1 hong kong univ s...
  • 1 korea univ sch e...
  • 1 chinese acad sci...
  • 1 princeton univ d...
  • 1 univ warwick cov...
  • 1 natl res tomsk p...
  • 1 hong kong univ s...

Authors

  • 2 chu xiaowen
  • 2 shi shaohuai
  • 2 poor h. vincent
  • 2 phong le trieu
  • 2 phuong tran thi
  • 2 li bo
  • 2 jeon yo-seb
  • 2 strom nikko
  • 2 choi jinho
  • 1 ladkat pranav
  • 1 oh yongjeong
  • 1 boudreau gary
  • 1 stainer julien
  • 1 rybakov oleg
  • 1 an zhulin
  • 1 demirci gunduz v...
  • 1 blanchard peva
  • 1 amiri mohammad m...
  • 1 lee namyoon
  • 1 guerraoui rachid

Language

  • 16 English
Search query: Subject = "Distributed Stochastic Gradient Descent"
16 records, showing 1-10
Distributed Stochastic Gradient Descent With Compressed and Skipped Communication
IEEE ACCESS, 2023, vol. 11, pp. 99836-99846
Authors: Phuong, Tran Thi; Phong, Le Trieu; Fukushima, Kazuhide. Affiliations: KDDI Res Inc, Saitama 3568502, Japan; Natl Inst Informat & Commun Technol (NICT), Tokyo 1848795, Japan
This paper introduces CompSkipDSGD, a new algorithm for distributed stochastic gradient descent that aims to improve communication efficiency by compressing and selectively skipping communication. In addition to compr...
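The abstract only names the two ingredients, gradient compression plus conditionally skipped communication; the sketch below is a generic illustration of how those two ideas can combine in a worker's send path, not CompSkipDSGD itself. The top-k compressor, the skip threshold tau, and the residual bookkeeping are assumptions made for illustration.

```python
import numpy as np

def topk_compress(g, k):
    """Keep the k largest-magnitude entries of g, zero the rest."""
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out = np.zeros_like(g)
    out[idx] = g[idx]
    return out

def worker_step(g, residual, k, tau):
    """Compress the error-corrected gradient and decide whether to send it.

    Returns (message or None, new residual): if the compressed update has norm
    below tau, communication is skipped and the whole update is carried over
    in the residual for a later round.
    """
    corrected = g + residual
    msg = topk_compress(corrected, k)
    if np.linalg.norm(msg) < tau:      # update too small: skip this round
        return None, corrected
    return msg, corrected - msg        # keep whatever was not transmitted

# toy usage: one worker over a few rounds with shrinking gradients
rng = np.random.default_rng(0)
residual = np.zeros(1000)
for t in range(5):
    g = rng.normal(scale=1.0 / (t + 1), size=1000)   # stand-in stochastic gradient
    msg, residual = worker_step(g, residual, k=50, tau=5.0)
    print(f"round {t}:", "sent" if msg is not None else "skipped")
```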
Communication-Efficient Federated Learning via Quantized Compressed Sensing
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2023, vol. 22, no. 2, pp. 1087-1100
Authors: Oh, Yongjeong; Lee, Namyoon; Jeon, Yo-Seb; Poor, H. Vincent. Affiliations: POSTECH, Dept Elect Engn, Pohang 37673, South Korea; Korea Univ, Sch Elect Engn, Seoul 02841, South Korea; Princeton Univ, Dept Elect & Comp Engn, Princeton, NJ 08544, USA
In this paper, we present a communication-efficient federated learning framework inspired by quantized compressed sensing. The presented framework consists of gradient compression for wireless devices and gradient rec...
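As a hedged illustration of the compress-then-reconstruct pattern the abstract describes, the sketch below projects a toy, already-sparse gradient with a random Gaussian matrix, applies a crude 8-bit uniform quantizer, and reconstructs it at the server with orthogonal matching pursuit. The measurement matrix A, the quantizer, and the OMP recovery are generic stand-ins, not the paper's actual scheme.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y ~= A @ x."""
    m, n = A.shape
    residual, support = y.astype(float).copy(), []
    x_s = np.zeros(0)
    for _ in range(k):
        corr = np.abs(A.T @ residual)
        corr[support] = -np.inf                       # do not reselect indices
        support.append(int(np.argmax(corr)))
        A_s = A[:, support]
        x_s, *_ = np.linalg.lstsq(A_s, y, rcond=None)
        residual = y - A_s @ x_s
    x_hat = np.zeros(n)
    x_hat[support] = x_s
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 512, 128, 10                                 # gradient dim, measurements, sparsity
g = np.zeros(n)
g[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # pretend the gradient is sparse

A = rng.normal(size=(m, n)) / np.sqrt(m)               # random measurement matrix
y = A @ g                                              # compression on the device
lo, hi = y.min(), y.max()
step = (hi - lo) / 255 if hi > lo else 1.0
y_q = lo + np.round((y - lo) / step) * step            # crude 8-bit uniform quantizer

g_hat = omp(A, y_q, k)                                 # server-side sparse reconstruction
print("relative error:", np.linalg.norm(g - g_hat) / np.linalg.norm(g))
```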
NOVEL GRADIENT SPARSIFICATION ALGORITHM VIA BAYESIAN INFERENCE
34th International Workshop on Machine Learning for Signal Processing
Authors: Bereyhi, Ali; Liang, Ben; Boudreau, Gary; Afana, Ali. Affiliations: Univ Toronto, ECE Dept, Toronto, ON, Canada; Ericsson Canada, Ottawa, ON, Canada
Error accumulation is an essential component of the TOP-k sparsification method in distributed gradient descent. It implicitly scales the learning rate and prevents the slow-down of lateral movement, but it can also d...
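The error accumulation the abstract refers to is the standard error-feedback loop used with TOP-k sparsification; a minimal single-worker sketch of that baseline (not the paper's Bayesian variant) looks roughly like this:

```python
import numpy as np

def topk(v, k):
    """Return a copy of v with all but the k largest-magnitude entries zeroed."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def sgd_with_error_feedback(grad_fn, w0, lr, k, steps, rng):
    """TOP-k sparsified SGD with error accumulation (single worker for brevity)."""
    w = w0.copy()
    error = np.zeros_like(w)               # accumulated compression error
    for _ in range(steps):
        g = grad_fn(w, rng)
        corrected = lr * g + error         # fold the previously dropped error back in
        update = topk(corrected, k)        # only this sparse part would be communicated
        error = corrected - update         # remember what was dropped this round
        w -= update
    return w

# toy quadratic objective f(w) = 0.5 * ||w||^2 with noisy gradients
grad_fn = lambda w, rng: w + 0.01 * rng.normal(size=w.shape)
rng = np.random.default_rng(0)
w = sgd_with_error_feedback(grad_fn, w0=rng.normal(size=100), lr=0.1, k=5, steps=500, rng=rng)
print("final norm:", np.linalg.norm(w))
```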
Distributed differentially-private learning with communication efficiency
JOURNAL OF SYSTEMS ARCHITECTURE, 2022, vol. 128
Authors: Phuong, Tran Thi; Phong, Le Trieu. Affiliations: Meiji Univ, Tokyo, Kanagawa 2148571, Japan; Natl Inst Informat & Commun Technol (NICT), Tokyo 1848795, Japan
In this paper, we propose a new algorithm for learning over distributed data such as in the IoT environment, in a privacy-preserving way. Our algorithm is a differentially private variant of distributed synchronous st...
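A common way to make distributed synchronous SGD differentially private is to clip each worker's gradient and add Gaussian noise before aggregation. The sketch below shows only that generic recipe, with an illustrative clipping norm and noise multiplier; it is not the specific mechanism or privacy accounting of this paper.

```python
import numpy as np

def dp_sanitize(g, clip_norm, sigma, rng):
    """Clip a per-worker gradient to L2 norm clip_norm and add Gaussian noise."""
    g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
    return g + rng.normal(scale=sigma * clip_norm, size=g.shape)

def dp_synchronous_step(w, worker_grads, lr, clip_norm, sigma, rng):
    """One synchronous round: each worker sanitizes locally, the server averages."""
    noisy = [dp_sanitize(g, clip_norm, sigma, rng) for g in worker_grads]
    return w - lr * np.mean(noisy, axis=0)

rng = np.random.default_rng(0)
w = rng.normal(size=50)
for _ in range(200):
    worker_grads = [w + 0.1 * rng.normal(size=50) for _ in range(8)]   # 8 toy IoT workers
    w = dp_synchronous_step(w, worker_grads, lr=0.05, clip_norm=1.0, sigma=0.5, rng=rng)
print("final norm:", np.linalg.norm(w))
```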
Communication-Efficient Distributed SGD Using Random Access for Over-the-Air Computation
IEEE JOURNAL ON SELECTED AREAS IN INFORMATION THEORY, 2022, vol. 3, no. 2, pp. 206-216
Authors: Choi, Jinho. Affiliation: Deakin Univ, Sch Informat Technol, Geelong, VIC 3220, Australia
In this paper, we study communication-efficient distributed stochastic gradient descent (SGD) with data sets of users distributed over a certain area and communicating through wireless channels. Since the time for one...
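Over-the-air computation exploits the fact that simultaneously transmitted signals superpose on the wireless channel, so the server directly receives a noisy version of the sum of the workers' gradients. The toy simulation below illustrates only that superposition-plus-random-activation idea; the activation probability p, the noise level, and the assumption that the server knows how many workers transmitted are all simplifications, not the paper's random-access protocol.

```python
import numpy as np

def over_the_air_round(worker_grads, p, noise_std, rng):
    """Analog aggregation: active workers transmit at the same time and the
    channel adds their signals; the server sees the sum plus receiver noise.

    p is the probability that a worker transmits this round (the random-access
    element); inactive workers stay silent.
    """
    active = [g for g in worker_grads if rng.random() < p]
    if not active:
        return None
    received = np.sum(active, axis=0) + rng.normal(scale=noise_std, size=worker_grads[0].shape)
    return received / len(active)          # server estimate of the average gradient

rng = np.random.default_rng(0)
grads = [rng.normal(loc=1.0, size=32) for _ in range(20)]
est = over_the_air_round(grads, p=0.5, noise_std=0.1, rng=rng)
print("aggregation error:", np.linalg.norm(est - np.mean(grads, axis=0)))
```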
Sketch-fusion: A gradient compression method with multi-layer fusion for communication-efficient distributed training
JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2024, vol. 185
Authors: Dai, Lingfei; Gong, Luqi; An, Zhulin; Xu, Yongjun; Diao, Boyu. Affiliations: Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China; Univ Chinese Acad Sci, Coll Comp Sci, Beijing, Peoples R China
Gradient compression is an effective technique for improving the efficiency of distributed training. However, introducing gradient compression can reduce model accuracy and training efficiency. Furthermore, we also fi...
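The fusion idea is to concatenate several layers' gradients into one buffer and compress that buffer once, so small layers share a single global compression budget instead of being compressed and communicated separately. The sketch below uses a plain top-k compressor as a stand-in for the paper's sketch-based method; the compression ratio and the split-back step are illustrative assumptions.

```python
import numpy as np

def topk(v, k):
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def fuse_compress(layer_grads, ratio):
    """Fuse per-layer gradients into one buffer, compress once, then split back.

    Compressing the fused buffer lets large and small layers share one global
    top-k budget instead of compressing each tiny tensor separately.
    """
    shapes = [g.shape for g in layer_grads]
    flat = np.concatenate([g.ravel() for g in layer_grads])
    k = max(1, int(ratio * flat.size))
    compressed = topk(flat, k)
    out, offset = [], 0
    for shape in shapes:
        n = int(np.prod(shape))
        out.append(compressed[offset:offset + n].reshape(shape))
        offset += n
    return out

rng = np.random.default_rng(0)
grads = [rng.normal(size=(64, 64)), rng.normal(size=(64,)), rng.normal(size=(10, 64))]
for g, c in zip(grads, fuse_compress(grads, ratio=0.01)):
    print(g.shape, "nonzeros kept:", int(np.count_nonzero(c)))
```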
A Random Access based Approach to Communication-Efficient Distributed SGD
IEEE International Conference on Communications (ICC)
Authors: Choi, Jinho. Affiliation: Deakin Univ, Sch Informat Technol, Geelong, VIC, Australia
In this paper, we study communication-efficient distributed stochastic gradient descent (SGD) with data sets of users distributed over a certain area and communicating through wireless channels. Since the time for one...
A Compressive Sensing Approach for Federated Learning Over Massive MIMO Communication Systems
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2021, vol. 20, no. 3, pp. 1990-2004
Authors: Jeon, Yo-Seb; Amiri, Mohammad Mohammadi; Li, Jun; Poor, H. Vincent. Affiliations: POSTECH, Dept Elect Engn, Pohang 37673, South Korea; Princeton Univ, Dept Elect Engn, Princeton, NJ 08544, USA; Nanjing Univ Sci & Technol, Sch Elect & Opt Engn, Nanjing 210094, Peoples R China; Natl Res Tomsk Polytech Univ, Sch Comp Sci & Robot, Tomsk 634050, Russia
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices, each with its own local training data set. In this paper, we present a compressiv...
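For readers unfamiliar with the federated learning loop the abstract assumes, a minimal federated-averaging round (local SGD on each device, then a server-side model average) is sketched below on a toy least-squares problem; it omits the paper's compressive-sensing and massive-MIMO components entirely.

```python
import numpy as np

def local_update(w, data, lr, local_steps, rng):
    """A device runs a few SGD steps on its own data (toy least-squares loss)."""
    X, y = data
    for _ in range(local_steps):
        idx = rng.integers(0, len(y))
        grad = (X[idx] @ w - y[idx]) * X[idx]
        w = w - lr * grad
    return w

def federated_round(w_global, device_data, lr, local_steps, rng):
    """One round: devices train locally, the server averages the returned models."""
    local_models = [local_update(w_global.copy(), d, lr, local_steps, rng) for d in device_data]
    return np.mean(local_models, axis=0)

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
device_data = []
for _ in range(10):                                    # 10 devices, each with private data
    X = rng.normal(size=(20, 5))
    device_data.append((X, X @ w_true + 0.01 * rng.normal(size=20)))

w = np.zeros(5)
for _ in range(50):
    w = federated_round(w, device_data, lr=0.05, local_steps=5, rng=rng)
print("distance to w_true:", np.linalg.norm(w - w_true))
```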
MG-WFBP: Merging Gradients Wisely for Efficient Communication in Distributed Deep Learning
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2021, vol. 32, no. 8, pp. 1903-1917
Authors: Shi, Shaohuai; Chu, Xiaowen; Li, Bo. Affiliations: Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Kowloon, Hong Kong, Peoples R China; Hong Kong Baptist Univ, Dept Comp Sci, Kowloon, Hong Kong, Peoples R China
Distributed synchronous stochastic gradient descent has been widely used to train deep neural networks (DNNs) on computer clusters. With the increase of computational power, network communications generally limit the ...
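MG-WFBP's contribution is deciding which gradients to merge; the sketch below shows only the underlying merged-gradient idea, bucketing consecutive layer gradients into larger buffers so fewer all-reduce calls are issued. The bucket size and the layer shapes are arbitrary illustrative choices, not the paper's merging policy.

```python
import numpy as np

def merge_into_buckets(grads, bucket_bytes):
    """Group consecutive layer gradients into buckets of roughly bucket_bytes,
    so one all-reduce call covers many small tensors instead of one each."""
    buckets, current, size = [], [], 0
    for g in grads:
        current.append(g)
        size += g.nbytes
        if size >= bucket_bytes:
            buckets.append(current)
            current, size = [], 0
    if current:
        buckets.append(current)
    return buckets

rng = np.random.default_rng(0)
layer_shapes = [(512, 512), (512,), (512, 128), (128,), (128, 10), (10,)]
grads = [rng.normal(size=s).astype(np.float32) for s in layer_shapes]

buckets = merge_into_buckets(grads, bucket_bytes=256 * 1024)
print("collective calls: per-layer =", len(grads), "| merged =", len(buckets))
# each bucket would be flattened into one buffer and all-reduced as a single message
fused = [np.concatenate([g.ravel() for g in b]) for b in buckets]
print("fused message sizes (elements):", [f.size for f in fused])
```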
Partitioning Sparse Deep Neural Networks for Scalable Training and Inference
35th ACM International Conference on Supercomputing (ICS)
Authors: Demirci, Gunduz Vehbi; Ferhatosmanoglu, Hakan. Affiliation: Univ Warwick, Coventry, W Midlands, England
The state-of-the-art deep neural networks (DNNs) have significant computational and data management requirements. The sizes of both training data and models continue to increase. Sparsification and pruning methods are ...
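As a rough illustration of the sparsification-plus-partitioning theme in the title, the sketch below magnitude-prunes a weight matrix and then greedily assigns rows to workers so that nonzero counts are balanced; the greedy heuristic is a generic stand-in, not the partitioning algorithm proposed in the paper.

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    thresh = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= thresh, W, 0.0)

def partition_rows(W_sparse, n_workers):
    """Greedy row partitioning: assign each row to the currently lightest worker,
    balancing nonzero counts (a rough proxy for per-worker compute)."""
    loads = [0] * n_workers
    parts = [[] for _ in range(n_workers)]
    rows = sorted(range(W_sparse.shape[0]),
                  key=lambda r: -np.count_nonzero(W_sparse[r]))
    for r in rows:
        w = int(np.argmin(loads))
        parts[w].append(r)
        loads[w] += int(np.count_nonzero(W_sparse[r]))
    return parts, loads

rng = np.random.default_rng(0)
W = magnitude_prune(rng.normal(size=(256, 256)), sparsity=0.95)
parts, loads = partition_rows(W, n_workers=4)
print("nonzeros per worker:", loads)
```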