Computation-in-memory (CIM) chips offer an energy-efficient approach to artificial-intelligence computing workloads. Resistive random-access memory (RRAM)-based CIM chips have proven to be a promising solution for overcoming the von Neumann bottleneck. In this paper, we review our recent studies on the architecture-circuit-technology co-optimization of scalable CIM chips and the related hardware demonstrations. To further minimize data movement between memory and computing units, we introduce architecture optimization methods. We then propose a device-architecture-algorithm co-design simulator that provides guidelines for designing CIM systems; a physics-based compact RRAM model and an array-level analog computing model are embedded in the simulator. In addition, we propose a CIM compiler to optimize the on-chip dataflow. Finally, we outline research perspectives for future development.
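The array-level analog computing that such a simulator models rests on a well-known principle: an RRAM crossbar performs a matrix-vector multiplication in one step, with input voltages driving the rows, programmed conductances weighting each cross-point via Ohm's law, and column currents summing the products via Kirchhoff's current law. The sketch below is a minimal illustration of that principle, not the authors' simulator; the function name `crossbar_mvm`, the log-normal variation model, and all parameter values are assumptions chosen for illustration.

```python
# Minimal sketch of analog matrix-vector multiplication on an RRAM crossbar.
# Input voltages drive the rows, stored conductances weight each cross-point
# (Ohm's law), and column currents sum the products (Kirchhoff's current law).
# Device-to-device variation is approximated with log-normal conductance noise;
# the model and its parameters are illustrative assumptions.
import numpy as np

def crossbar_mvm(voltages, conductances, sigma=0.05, rng=None):
    """Analog MVM on an RRAM crossbar.

    voltages     -- 1-D array of row input voltages (V)
    conductances -- 2-D array of programmed cell conductances (S), shape (rows, cols)
    sigma        -- relative log-normal device variation (assumed value)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Apply per-cell conductance variation (hypothetical non-ideality model).
    g_actual = conductances * rng.lognormal(mean=0.0, sigma=sigma,
                                            size=conductances.shape)
    # Column output currents: I_j = sum_i V_i * G_ij
    return voltages @ g_actual

# Example: a 4x3 crossbar storing a small weight matrix.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # conductances between 1 uS and 100 uS
V = np.array([0.1, 0.2, 0.0, 0.1])         # read voltages applied to the rows
print(crossbar_mvm(V, G, rng=rng))         # resulting column currents (A)
```

In a full co-design simulator, the conductance values and their non-idealities would come from a physics-based compact RRAM model rather than the simple noise term used here.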