
Accelerating Large Scale Knowledge Distillation via Dynamic Importance Sampling

Authors: Li, Minghan; Zuo, Tanli; Li, Ruicheng; White, Martha; Zheng, Weishi

Affiliations: School of Data and Computer Science, Sun Yat-Sen University; Department of Computing Science, University of Alberta

Publication: arXiv

Year: 2018


Subject: Students

Abstract: Knowledge distillation is an effective technique that transfers knowledge from a large teacher model to a shallow student. However, just as in massive-class classification, large-scale knowledge distillation imposes heavy computational costs when training deep neural networks, because the softmax activations at the last layer involve computing probabilities over numerous classes. In this work, we apply the idea of importance sampling, often used in neural machine translation, to large-scale knowledge distillation. We present a method called dynamic importance sampling, where ranked classes are sampled from a dynamic distribution derived from the interaction between the teacher and student in full distillation. We highlight the utility of our proposal prior, which helps the student capture the main information in the loss function. Our approach reduces the computational cost at training time while maintaining competitive performance on the CIFAR-100 and Market-1501 person re-identification datasets. Copyright © 2018, The Authors. All rights reserved.
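
The abstract's central idea is to compute the distillation loss over a sampled subset of classes rather than the full softmax. The sketch below is a minimal illustration of that sampled-softmax distillation loss in PyTorch; the proposal distribution (here, an assumed teacher-derived average), the temperature, and all names are illustrative assumptions and not the authors' exact dynamic importance sampling scheme.

# Minimal sketch: distillation loss over a sampled subset of classes.
# Assumptions: PyTorch; `proposal` is a probability vector over classes
# (here derived from the teacher's mean prediction, as an illustration).
import torch
import torch.nn.functional as F

def sampled_kd_loss(student_logits, teacher_logits, proposal,
                    num_samples=64, T=4.0):
    # Draw a shared set of class indices from the proposal distribution,
    # so the softmax is computed over `num_samples` classes only.
    idx = torch.multinomial(proposal, num_samples, replacement=False)

    # Restrict both models' logits to the sampled classes and apply the
    # usual distillation temperature.
    s = student_logits[:, idx] / T
    t = teacher_logits[:, idx] / T

    # KL divergence between teacher and student over the sampled classes.
    return F.kl_div(F.log_softmax(s, dim=1), F.softmax(t, dim=1),
                    reduction="batchmean") * (T * T)

if __name__ == "__main__":
    # Random tensors stand in for model outputs on a many-class task.
    batch, num_classes = 8, 10000
    student_logits = torch.randn(batch, num_classes)
    teacher_logits = torch.randn(batch, num_classes)
    # Assumed proposal: the teacher's mean softmax over the batch.
    proposal = F.softmax(teacher_logits, dim=1).mean(dim=0)
    print(sampled_kd_loss(student_logits, teacher_logits, proposal).item())

Restricting the loss to a sampled subset avoids the full softmax over all classes, which is the computational saving the abstract describes; the paper's contribution lies in how the sampling distribution is updated dynamically from the teacher-student interaction, which this sketch does not reproduce.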
