
AdaPT: Fast Emulation of Approximate DNN Accelerators in PyTorch

Authors: Danopoulos, Dimitrios; Zervakis, Georgios; Siozios, Kostas; Soudris, Dimitrios; Henkel, Joerg

Affiliations: Natl Tech Univ Athens, Sch Elect & Comp Engn, Athens 15780, Greece; Karlsruhe Inst Technol, Chair Embedded Syst, D-76131 Karlsruhe, Germany; Aristotle Univ Thessaloniki, Dept Phys, Thessaloniki 54124, Greece

Published in: IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS (IEEE Trans Comput Aided Des Integr Circuits Syst)

Year/Volume/Issue: 2023, Vol. 42, Issue 6

Pages: 2074-2078


Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (Engineering or Science degree conferrable)]

Funding: E.C.; German Research Foundation (DFG) [HE 2343/16-1]

Keywords: Accelerator; approximate computing; deep neural network (DNN); PyTorch; quantization

Abstract: The current state of the art employs approximate multipliers to address the greatly increased power demands of deep neural network (DNN) accelerators. However, evaluating the accuracy of approximate DNNs is cumbersome due to the lack of adequate support for approximate arithmetic in DNN frameworks. We address this inefficiency by presenting AdaPT, a fast emulation framework that extends PyTorch to support approximate inference as well as approximation-aware retraining. AdaPT can be seamlessly deployed and is compatible with most DNNs. We evaluate the framework on several DNN models and application fields, including CNNs, LSTMs, and GANs, for a number of approximate multipliers with distinct bitwidths. The results show substantial error recovery from approximate retraining and inference-time reductions of up to 53.9x with respect to the baseline approximate implementation.
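The abstract does not detail AdaPT's internals, but frameworks of this kind typically emulate an approximate hardware multiplier in software by precomputing its output for every operand pair into a lookup table (LUT), then substituting table lookups for exact multiplies in the quantized layers. The sketch below is a hypothetical, minimal illustration of that general idea in plain Python (not AdaPT's actual API); `approx_mul`, `LUT`, and `approx_dot` are invented names, and the truncation-based multiplier merely stands in for a real approximate circuit model:

```python
def approx_mul(a, b):
    # Toy approximate multiplier: exact product with the two least-significant
    # bits truncated. A real flow would instead encode the behavioral model of
    # a specific approximate-multiplier circuit here.
    return (a * b) & ~0x3

# Precompute the full 256x256 product table for unsigned 8-bit operands,
# so inference pays only a table lookup per multiply instead of re-emulating
# the approximate circuit each time.
LUT = [[approx_mul(a, b) for b in range(256)] for a in range(256)]

def approx_dot(xs, ws):
    # Emulated approximate dot product over quantized uint8 activations and
    # weights -- the core operation inside convolution and LSTM layers.
    return sum(LUT[x][w] for x, w in zip(xs, ws))

# Compare exact vs. emulated-approximate results for one dot product.
acts, wts = [3, 5, 7], [2, 4, 6]
exact = sum(x * w for x, w in zip(acts, wts))
approx = approx_dot(acts, wts)
print(exact, approx)
```

Approximation-aware retraining, as mentioned in the abstract, would then run training with this emulated arithmetic in the forward pass so the network's weights adapt to the multiplier's error profile.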
