Deep Model Compression for Mobile Platforms: A Survey

Authors: Kaiming Nan, Sicong Liu, Junzhao Du, Hui Liu

Affiliations: School of Computer Science and Technology, Xidian University, Xi'an 710071, China; School of Software and Institute of Software Engineering, Xidian University, Xi'an 710071, China

Published in: Tsinghua Science and Technology (Journal of Tsinghua University, Science and Technology, English Edition)

Year/Volume/Issue: 2019, Vol. 24, No. 6

Pages: 677-693

Subject classification: 07 [Science], 08 [Engineering]

Funding: Supported by the National Key Research and Development Program of China (No. 2018YFB1003605), the Foundations of CARCH (No. CARCH201704), the National Natural Science Foundation of China (No. 61472312), the Foundations of Shaanxi Province and Xi'an Science Technology Plan (Nos. B018230008 and BD34017020001), and the Foundations of Xidian University (No. JBZ171002)

Keywords: deep learning; model compression; run-time resource management; cost optimization

Abstract: Despite the rapid development of mobile and embedded hardware, directly executing computation-expensive and storage-intensive deep learning algorithms on these devices' local side remains constrained for sensory data analysis. In this paper, we first summarize the layer compression techniques for state-of-the-art deep learning models in three categories: weight factorization and pruning, convolution decomposition, and special layer architecture design. For each category of layer compression techniques, we quantify the storage and computation that these techniques can tune and discuss their practical challenges and possible improvements. Then, we implement Android projects using TensorFlow Mobile to test these 10 compression methods and compare their practical performance in terms of accuracy, parameter size, intermediate feature size, computation, processing latency, and energy consumption. To further discuss their advantages and bottlenecks, we test their performance over four standard recognition tasks on six resource-constrained Android platforms. Finally, we survey two types of run-time Neural Network (NN) compression techniques that are orthogonal to the layer compression techniques: run-time resource management and cost optimization with special NN architectures.
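To illustrate the first category of layer compression named in the abstract (weight factorization and pruning), the sketch below shows a minimal, hypothetical example using plain NumPy: a truncated-SVD factorization of one fully connected layer's weight matrix plus magnitude-based pruning. It is not taken from the paper or its TensorFlow Mobile experiments; the weight shape, target rank, and sparsity level are illustrative assumptions.

```python
# Minimal sketch (not from the paper): weight factorization + magnitude pruning
# for one fully connected layer W (out_dim x in_dim), using plain NumPy.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 1024))    # assumed layer weights for illustration

# --- Weight factorization: truncated SVD, W ~= U_r @ V_r ---
r = 64                                  # assumed target rank
U, S, Vt = np.linalg.svd(W, full_matrices=False)
U_r = U[:, :r] * S[:r]                  # (512, r)
V_r = Vt[:r, :]                         # (r, 1024)
# Storage drops from out_dim*in_dim parameters to r*(out_dim + in_dim).
print("factorized params:", U_r.size + V_r.size, "vs original:", W.size)
print("relative SVD error:", np.linalg.norm(W - U_r @ V_r) / np.linalg.norm(W))

# --- Weight pruning: zero out small-magnitude entries ---
sparsity = 0.8                          # assumed fraction of weights to remove
threshold = np.quantile(np.abs(W), sparsity)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)
print("nonzero fraction after pruning:", np.count_nonzero(W_pruned) / W.size)
```

In practice, both steps are typically followed by fine-tuning to recover accuracy, and the achievable rank or sparsity depends on the layer and task; the survey compares such trade-offs in terms of accuracy, parameter size, latency, and energy.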
