
Interlocking Backpropagation: Improving depthwise model-parallelism

Authors: Gomez, Aidan N.; Key, Oscar; Perlin, Kuba; Gou, Stephen; Frosst, Nick; Dean, Jeff; Gal, Yarin

Affiliations: Univ Oxford, Oxford, England; Cohere, Toronto, ON, Canada; Google, Mountain View, CA 94043, USA

Publication: JOURNAL OF MACHINE LEARNING RESEARCH

Year/Volume/Issue: 2022, Vol. 23, Issue 1

Pages: 1-28


Subject Classification: 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology (degrees may be awarded in Engineering or Science)]

Funding: Engineering and Physical Sciences Research Council (EPSRC) [EP/S021566/1]

Keywords: Model Parallelism; Distributed Optimisation; Large-scale Modelling; Parallel Distributed Processing; Efficient Training

Abstract: The number of parameters in state-of-the-art neural networks has drastically increased in recent years. This surge of interest in large-scale neural networks has motivated the development of new distributed training strategies enabling such models. One such strategy is model-parallel distributed training. Unfortunately, model-parallelism can suffer from poor resource utilisation, which leads to wasted resources. In this work, we improve upon recent developments in an idealised model-parallel optimisation setting: local learning. Motivated by poor resource utilisation in the global setting and poor task performance in the local setting, we introduce a class of intermediary strategies between local and global learning referred to as interlocking backpropagation. These strategies preserve many of the compute-efficiency advantages of local optimisation, while recovering much of the task performance achieved by global optimisation. We assess our strategies on both image classification ResNets and Transformer language models, finding that our strategy consistently outperforms local learning in terms of task performance, and outperforms global learning in training efficiency.
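The abstract contrasts global backpropagation (one end-of-network loss, gradients flow through every block) with local learning (each block trained only from its own auxiliary loss). The sketch below is a minimal PyTorch illustration of that spectrum, assuming a chain of blocks with per-block auxiliary classifier heads and a gradient-reach parameter k; these names and the non-overlapping block grouping are illustrative assumptions, not the paper's exact interlocking algorithm, whose gradient windows overlap across neighbouring blocks.

```python
# Conceptual sketch (not the authors' implementation): local vs. global vs. an
# interlocking-style intermediate, controlled by where gradients are cut.
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.head = nn.Linear(dim, 10)  # auxiliary classifier (hypothetical)

    def forward(self, x):
        h = self.body(x)
        return h, self.head(h)

def train_step(blocks, x, y, mode="interlocking", k=2):
    """mode: 'global'       - single loss at the end, gradients reach all blocks
             'local'        - each block trained only from its own auxiliary loss
             'interlocking' - auxiliary gradients reach back through groups of k blocks
                              (simplified, non-overlapping variant for illustration)."""
    criterion = nn.CrossEntropyLoss()
    losses, h = [], x
    for i, block in enumerate(blocks):
        # Detach to cut the gradient path at a block boundary:
        # local learning cuts at every boundary; this sketch cuts every k blocks.
        if mode == "local" or (mode == "interlocking" and i > 0 and i % k == 0):
            h = h.detach()
        h, logits = block(h)
        if mode != "global" or i == len(blocks) - 1:
            losses.append(criterion(logits, y))
    total = sum(losses)
    total.backward()
    return total.item()

blocks = nn.ModuleList(Block(32) for _ in range(4))
opt = torch.optim.SGD(blocks.parameters(), lr=0.1)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
opt.zero_grad()
print(train_step(blocks, x, y, mode="interlocking", k=2))
opt.step()
```

The compute-efficiency argument in the abstract corresponds to the detach points: each cut bounds how far a block must wait for downstream gradients, at the cost of a less informative training signal than full global backpropagation.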
