

    Energy-Efficient Neuromorphic Computing Systems

    File
    Name: PhD dissertation.pdf
    Size: 4.224 MB
    Format: PDF
    Description: PhD Dissertation
    Embargo End Date: 2024-03-09
    Type: Dissertation
    Authors: Guo, Wenzhe
    Advisors: Salama, Khaled N.
    Committee members: Eltawil, Ahmed; Keyes, David E.; Fahmy, Suhaib A.; Neftci, Emre
    Program: Electrical and Computer Engineering
    KAUST Department: Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division
    Date: 2023-03-09
    Embargo End Date: 2024-03-09
    Permanent link to this record: http://hdl.handle.net/10754/690213
    
    Access Restrictions
    At the time of archiving, the student author of this dissertation opted to temporarily restrict access to it. The full text of this dissertation will become available to the public after the expiration of the embargo on 2024-03-09.
    Abstract
    Neuromorphic computing has emerged as a new and promising computing principle that emulates how human brains process information. The underlying spiking neural networks (SNNs) are well known for their higher energy efficiency than artificial neural networks (ANNs). Neuromorphic systems enable highly parallel computation and reduce memory bandwidth limitations, making hardware performance scalable with ever-increasing model complexity. Inefficiency in neuromorphic system design generally originates from redundant parameters, unoptimized models, limited computing parallelism, and sequential training algorithms. This dissertation aims to address these problems and propose effective solutions.

    Over-parameterization and redundant computation are common problems in neural networks. In the first stage of this dissertation, we introduce various strategies for pruning neurons and weights during training in an unsupervised SNN by exploiting neural dynamics and firing activity. Both methods are shown to compress the network effectively while preserving good classification performance.

    In the second stage, we optimize neuromorphic systems from both algorithmic and hardware perspectives. The network model is optimized at the software level through a biological hyperparameter optimization strategy, resulting in a hardware-friendly network configuration. Different computational methods are analyzed to guide the hardware implementation, which features distributed neural memory and a parallel memory organization. Compared with a previous study, the proposed system demonstrates a more than 300× improvement in training speed and a 180× reduction in energy.

    Moreover, an efficient on-chip training algorithm is essential for low-energy processing. In the third stage, we turn to the design of local-training-enabled neuromorphic systems, introducing a spatially local backpropagation algorithm. The proposed digital architecture exploits spike sparsity, computing parallelism, and parallel training. At the same accuracy level, the design achieves 3.2× lower energy and 1.8× lower latency than an ANN. The spatially local training mechanism is then extended into the temporal dimension using a Backpropagation Through Time–based training algorithm; local training mechanisms in both dimensions work synergistically to improve algorithmic performance. A significant reduction in computational cost is achieved: 89.94% in GPU memory, 10.79% in memory access, and 99.64% in MAC operations compared with the standard method.
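To make the activity-driven pruning idea from the first stage concrete, here is a minimal, hypothetical sketch — not the dissertation's actual algorithm, whose criteria are not detailed in this abstract. Leaky integrate-and-fire neurons accumulate input spikes during unsupervised exposure, and neurons whose firing activity stays low are pruned. All parameter names, thresholds, and the pruning rule itself are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out, n_steps = 64, 16, 200
weights = rng.uniform(0.0, 0.5, size=(n_in, n_out))
v = np.zeros(n_out)             # membrane potentials
v_thresh, v_decay = 8.0, 0.9    # firing threshold and leak factor (hypothetical)
spike_counts = np.zeros(n_out)  # per-neuron firing activity

for _ in range(n_steps):
    in_spikes = (rng.random(n_in) < 0.1).astype(float)  # Poisson-like input spikes
    v = v_decay * v + in_spikes @ weights               # leaky integration
    fired = v >= v_thresh
    spike_counts += fired
    v[fired] = 0.0                                      # reset after spiking

# Illustrative pruning rule: drop neurons whose activity is below
# 20% of the most active neuron's spike count.
keep = spike_counts >= 0.2 * spike_counts.max()
pruned_weights = weights[:, keep]
print(f"kept {keep.sum()} of {n_out} neurons")
```

The point of the sketch is only that firing activity observed during training can itself serve as a pruning signal, shrinking the weight matrix without a separate supervised criterion.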
    Citation: Guo, W. (2023). Energy-Efficient Neuromorphic Computing Systems [KAUST Research Repository]. https://doi.org/10.25781/KAUST-1145M
    DOI: 10.25781/KAUST-1145M
    Collections: PhD Dissertations; Electrical and Computer Engineering Program; Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division

     