
TT@CIM: A Tensor-Train In-Memory-Computing Processor Using Bit-Level-Sparsity Optimization and Variable Precision Quantization

Authors: Guo, Ruiqi; Yue, Zhiheng; Si, Xin; Li, Hao; Hu, Te; Tang, Limei; Wang, Yabing; Sun, Hao; Liu, Leibo; Chang, Meng-Fan; Li, Qiang; Wei, Shaojun; Yin, Shouyi

Affiliations: Tsinghua University, Beijing National Research Center for Information Science and Technology, School of Integrated Circuits, and the Beijing Advanced Innovation Center, Beijing 100084, China; Southeast University, National ASIC System Engineering Research Center, School of Electronic Science and Engineering, Nanjing 210096, China; Department of Electrical Engineering, Hsinchu 300, Taiwan; Chengdu 610054, China

Publication: IEEE Journal of Solid-State Circuits (IEEE J Solid State Circuits)

Year/Volume/Issue: 2023, Vol. 58, No. 3

Pages: 852-866

Subject Classification: 0710 [Science - Biology]; 0808 [Engineering - Electrical Engineering]; 0809 [Engineering - Electronic Science and Technology (engineering or science degrees conferrable)]; 08 [Engineering]; 0807 [Engineering - Power Engineering and Engineering Thermophysics]; 0835 [Engineering - Software Engineering]; 0836 [Engineering - Bioengineering]; 0701 [Science - Mathematics]; 0812 [Engineering - Computer Science and Technology (engineering or science degrees conferrable)]

Funding: NSFC; National Key Research and Development Program; Beijing Science and Technology Project; Beijing Advanced Innovation Center

Keywords: Deep neural networks

Abstract: Computing-in-memory (CIM) is an attractive approach for energy-efficient deep neural network (DNN) processing, especially for low-power edge devices. However, today's typical DNNs usually exceed CIM static random access memory (CIM-SRAM) capacity, and the resulting off-chip communication negates the benefits of the CIM technique, meaning that CIM processors still face a memory bottleneck. To eliminate this bottleneck, we propose a CIM processor, called TT@CIM, which applies the tensor-train decomposition (TTD) method to compress the entire DNN to fit within CIM-SRAM. However, the cost of TTD's storage reduction is the introduction of multiple serial small-size matrix multiplications, resulting in massive inefficient multiply-and-accumulate (MAC) and quantization operations (QuantOps). To achieve high energy efficiency, three optimization techniques are proposed in TT@CIM. First, a TTD-CIM-matched dataflow is proposed to maximize CIM utilization and minimize additional MAC operations. Second, a bit-level-sparsity-optimized CIM macro with a high-bit-level-sparsity encoding scheme is designed to reduce the power consumption of each MAC operation. Third, a variable-precision quantization method and a lookup-table-based quantization unit are presented to improve the performance and energy efficiency of QuantOps. Fabricated in 28-nm CMOS and tested on 4/8-bit decomposed DNNs, TT@CIM achieves 5.99-to-691.13-TOPS/W peak energy efficiency depending on the operating voltage.
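The abstract rests on tensor-train decomposition: a large weight tensor is stored as a chain of small cores, which shrinks storage but makes every use of the weights a sequence of small serial matrix multiplications. The sketch below is a minimal, hypothetical NumPy illustration of that trade-off; the shapes, ranks, and the tt_reconstruct helper are assumptions of this note, not TT@CIM's actual dataflow or CIM mapping.

```python
# A minimal sketch of tensor-train decomposition (TTD) for weight
# compression, the idea TT@CIM builds on. All shapes, rank choices, and
# function names here are illustrative assumptions, not the paper's design.
import numpy as np

def tt_reconstruct(cores):
    """Contract a chain of TT-cores G_k of shape (r_{k-1}, n_k, r_k)
    back into a full tensor of shape (n_1, ..., n_d)."""
    result = cores[0]                      # shape (1, n_1, r_1)
    for core in cores[1:]:
        # Merge the trailing rank index of `result` with the leading
        # rank index of the next core via one small matrix product --
        # these serial products are the extra MACs the abstract mentions.
        r_left = result.shape[-1]
        left = result.reshape(-1, r_left)           # (*, r_{k-1})
        right = core.reshape(core.shape[0], -1)     # (r_{k-1}, n_k * r_k)
        result = (left @ right).reshape(*result.shape[:-1],
                                        core.shape[1], core.shape[2])
    return result.squeeze(axis=(0, -1))    # drop boundary ranks r_0 = r_d = 1

# Example: a 4-core TT representation of a 16x16x16x16 tensor
# (65,536 elements) with TT-rank 4 stores only the cores' parameters.
ranks = [1, 4, 4, 4, 1]
cores = [np.random.randn(ranks[k], 16, ranks[k + 1]) for k in range(4)]
full = tt_reconstruct(cores)
print(full.shape)                                   # (16, 16, 16, 16)
print(sum(c.size for c in cores), "vs", full.size)  # 640 vs 65536 parameters
```

In this toy setting the cores hold roughly 1% of the full tensor's parameters, which is why TTD can fit an entire DNN into CIM-SRAM at the cost of the serial small-matrix contractions shown in the loop.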
