
dc.contributor.author: Garyfallou, Dimitrios
dc.contributor.author: Vagenas, Anastasis
dc.contributor.author: Antoniadis, Charalampos
dc.contributor.author: Massoud, Yehia Mahmoud
dc.contributor.author: Stamoulis, George
dc.date.accessioned: 2022-06-05T12:05:25Z
dc.date.available: 2022-06-05T12:05:25Z
dc.date.issued: 2022-06-02
dc.identifier.citation: Garyfallou, D., Vagenas, A., Antoniadis, C., Massoud, Y., & Stamoulis, G. (2022). Leveraging Machine Learning for Gate-level Timing Estimation Using Current Source Models and Effective Capacitance. Proceedings of the Great Lakes Symposium on VLSI 2022. https://doi.org/10.1145/3526241.3530343
dc.identifier.doi: 10.1145/3526241.3530343
dc.identifier.uri: http://hdl.handle.net/10754/678598
dc.description.abstract: With process technology scaling, accurate gate-level timing analysis becomes even more challenging. Highly resistive on-chip interconnects have an ever-increasing impact on timing, signals no longer resemble smooth saturated ramps, and gate-interconnect interdependencies are stronger. Moreover, efficiency is a serious concern, since repeatedly invoking a signoff tool during incremental optimization of modern VLSI circuits has become a major bottleneck. In this paper, we introduce a novel machine learning approach for timing estimation of gate-level stages using current source models and the concept of multiple slew and effective capacitance values. First, we exploit a fast iterative algorithm for initial stage timing estimation and feature extraction, and then we employ four artificial neural networks to correlate the initial delay and slew estimates for both the driver and interconnect with golden SPICE results. Contrary to prior works, our method uses fewer and more accurate features to represent the stage, leading to more efficient models. Experimental evaluation on driver-interconnect stages implemented in 7 nm FinFET technology indicates that our method leads to 0.99% (0.90 ps) and 2.54% (2.59 ps) mean error against SPICE for stage delay and slew, respectively. Furthermore, it has a small memory footprint (1.27 MB) and performs 35× faster than a commercial signoff tool. Thus, it may be integrated into timing-driven optimization steps to provide signoff accuracy and expedite timing closure.
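The correction step the abstract describes — small neural networks that map an initial analytical delay/slew estimate plus stage features to a SPICE-accurate value — can be sketched as below. This is a hypothetical illustration only: the feature set, the network size, and the toy parameters are assumptions for demonstration, not the paper's actual model or trained weights.

```python
import math

def ann_correct(features, w_hidden, b_hidden, w_out, b_out):
    """Tiny one-hidden-layer network mapping stage features to a
    corrected timing value. `features` might hold, e.g., the initial
    delay estimate, the input slew, and the effective capacitance
    (hypothetical choice; the paper's exact features differ)."""
    # Hidden layer with tanh activation.
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    # Linear output unit produces the corrected estimate.
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Toy, untrained parameters: 3 inputs, 2 hidden units (illustrative only).
w_hidden = [[0.5, 0.1, 0.0],
            [0.0, 0.2, 0.3]]
b_hidden = [0.0, 0.0]
w_out = [1.0, 1.0]
b_out = 10.0

# Hypothetical stage: initial delay 12 ps, input slew 20 ps, Ceff 5 fF.
corrected = ann_correct([12.0, 20.0, 5.0], w_hidden, b_hidden, w_out, b_out)
```

In practice the paper trains four such networks (driver delay, driver slew, interconnect delay, interconnect slew) against golden SPICE results; this sketch only shows the inference shape of one of them.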
dc.publisher: ACM
dc.relation.url: https://dl.acm.org/doi/10.1145/3526241.3530343
dc.rights: Archived with thanks to ACM
dc.title: Leveraging Machine Learning for Gate-level Timing Estimation Using Current Source Models and Effective Capacitance
dc.type: Conference Paper
dc.contributor.department: Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division
dc.contributor.department: Electrical and Computer Engineering Program
dc.contributor.department: Innovative Technologies Laboratories (ITL)
dc.conference.date: June 6-8, 2022
dc.conference.name: GLSVLSI '22: Great Lakes Symposium on VLSI 2022
dc.conference.location: Irvine, CA, USA
dc.eprint.version: Post-print
dc.contributor.institution: University of Thessaly, Volos, Greece
kaust.person: Antoniadis, Charalampos
kaust.person: Massoud, Yehia Mahmoud
refterms.dateFOA: 2022-06-06T06:41:35Z


Files in this item

Name: Leveraging Machine Learning for Gate-level Timing Estimation.pdf
Size: 1.368 MB
Format: PDF
Description: Accepted Manuscript
