KAUST Department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Electrical Engineering Program
Physical Science and Engineering (PSE) Division
Permanent link to this record: http://hdl.handle.net/10754/666958
Abstract
Traditional convolutional neural network (CNN) architectures suffer from two bottlenecks: computational complexity and memory access cost. In this study, an efficient in-memory convolution accelerator (IMCA) is proposed, based on associative in-memory processing, to alleviate both problems directly. In the IMCA, convolution operations are performed inside the memory itself as in-place operations. The proposed memory computational structure allows a significant improvement in computational efficiency, measured in TOPS/W. Furthermore, due to its unconventional computation style, the IMCA can exploit opportunities such as constant multiplication, bit-level sparsity, and dynamic approximate computing, which traditional architectures support only at extra overhead that erodes the potential gains. The proposed accelerator architecture exhibits significant area and performance efficiency, achieving around 0.65 GOPS and 1.64 TOPS/W at 16-bit fixed-point precision in an area of less than 0.25 mm².
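The bit-level sparsity opportunity mentioned in the abstract can be illustrated with a minimal shift-and-add multiplier sketch. This is an assumption-laden software analogy, not the paper's actual circuit: in the IMCA these steps would occur as in-place associative operations inside the memory array, and the function name below is hypothetical.

```python
def bitserial_multiply(weight: int, activation: int) -> int:
    """Shift-and-add multiplication that processes only the set bits of
    the weight. Skipping zero bits models bit-level sparsity: a weight
    with fewer set bits costs fewer add cycles, with no extra control
    overhead in a bit-serial scheme."""
    result = 0
    shift = 0
    w = weight
    while w:
        if w & 1:                       # only set bits incur an add cycle
            result += activation << shift
        w >>= 1
        shift += 1
    return result

# A sparse weight (0b1000001 = 65, two set bits) needs only two adds,
# while a dense weight (0b1111111 = 127) needs seven for the same width.
print(bitserial_multiply(65, 13))       # 845
print(bitserial_multiply(127, 13))      # 1651
```

The same structure also explains the constant-multiplication opportunity: when the weight is fixed, the set-bit positions are known in advance, so the add schedule can be hardwired rather than decoded per operation.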
Citation: Yantir, H. E., Eltawil, A. M., & Salama, K. N. (2021). IMCA: An Efficient In-Memory Convolution Accelerator. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 1–14. doi:10.1109/tvlsi.2020.3047641