Details
ISBN:
(Print) 9798350300116
In-memory computing is a promising architecture for meeting the exploding demand of data-intensive workloads, including deep neural networks. In particular, analog in-memory computing (AIMC) is a promising way to build matrix-multiplication accelerators that take full advantage of data parallelism and reusability. However, most AIMC designs use voltage-readout circuits that do not benefit from CMOS scaling, which is an obstacle to improving computational density. We propose a method that combines capacitive AIMC with near-memory time-subtraction readout, which is theoretically scalable with respect to miniaturization and row/column parallelism, and whose output resolution is adjustable. We evaluated the signed multi-bit dot-product operation in post-layout simulation using circuits designed in a 180-nm process. Even with a 16× increase in row parallelism (from 9 to 144 rows), the time resolution required for readout varied by only 0.39%.
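The following is a minimal behavioral sketch in Python, purely to illustrate the general idea the abstract describes: a signed multi-bit dot product split across positive- and negative-weight capacitive columns, each read out as a ramp-crossing time, with the sign recovered by subtracting the two times. The charge-sharing model, parameter values, and function names are assumptions for illustration, not the paper's circuit.

```python
import numpy as np

# Behavioral sketch (assumed model, not the authors' circuit):
# signed weights are split into a positive and a negative capacitive column,
# each column voltage is converted to a ramp-crossing time, and the signed
# result is recovered by subtracting the two readout times ("time subtraction").

def capacitive_column_voltage(x, w_col, c_unit=1e-15, v_ref=1.0):
    """Charge-sharing model: each cell contributes x_i * w_i unit charges onto a
    line loaded by one unit capacitor per cell (all values are assumptions)."""
    x = np.asarray(x, dtype=float)
    w_col = np.asarray(w_col, dtype=float)
    q = c_unit * v_ref * np.dot(x, w_col)      # total shared charge
    c_line = c_unit * max(len(w_col), 1)       # total line capacitance
    return q / c_line                          # resulting line voltage

def ramp_crossing_time(v, ramp_rate=1e6):
    """Time-domain readout: time for a linear ramp (in V/s) to reach voltage v."""
    return v / ramp_rate

def signed_dot_product_time(x, w):
    """Map positive and negative weights to separate columns and subtract the
    two readout times; the difference is proportional to dot(x, w)."""
    w = np.asarray(w, dtype=float)
    t_pos = ramp_crossing_time(capacitive_column_voltage(x, np.clip(w, 0, None)))
    t_neg = ramp_crossing_time(capacitive_column_voltage(x, np.clip(-w, 0, None)))
    return t_pos - t_neg

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.integers(0, 16, size=144)          # 144 rows, unsigned 4-bit activations
    w = rng.integers(-8, 8, size=144)          # signed 4-bit weights
    dt = signed_dot_product_time(x, w)
    # The time difference scales linearly with the ideal signed dot product.
    print(dt, np.dot(x, w))
```

In this toy model, both columns share the same line capacitance, so the time difference is a scaled copy of the signed dot product; how the real design keeps the required time resolution nearly constant as row parallelism grows is the subject of the paper itself.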