    Differentially Quantized Gradient Methods

    Type
    Article
    Authors
    Lin, Chung-Yi
    Kostina, Victoria
    Hassibi, Babak
    Date
    2022-04-28
    Permanent link to this record
    http://hdl.handle.net/10754/681138
    
    Abstract
    Consider the following distributed optimization scenario. A worker has access to training data that it uses to compute the gradients, while a server decides when to stop iterative computation based on its target accuracy or delay constraints. The server receives all its information about the problem instance from the worker via a rate-limited noiseless communication channel. We introduce the principle we call differential quantization (DQ), which prescribes compensating the past quantization errors to direct the descent trajectory of a quantized algorithm towards that of its unquantized counterpart. Assuming that the objective function is smooth and strongly convex, we prove that differentially quantized gradient descent (DQ-GD) attains a linear contraction factor of $\max\{\sigma_{\mathrm{GD}}, \rho_n 2^{-R}\}$, where $\sigma_{\mathrm{GD}}$ is the contraction factor of unquantized gradient descent (GD), $\rho_n \geq 1$ is the covering efficiency of the quantizer, and $R$ is the bitrate per problem dimension $n$. Thus, at any rate $R \geq \log_2(\rho_n / \sigma_{\mathrm{GD}})$ bits, the contraction factor of DQ-GD is the same as that of unquantized GD, i.e., there is no loss due to quantization. We show a converse demonstrating that no algorithm within a certain class can converge faster than $\max\{\sigma_{\mathrm{GD}}, 2^{-R}\}$. Since quantizers exist with $\rho_n \to 1$ as $n \to \infty$ (Rogers, 1963), this means that DQ-GD is asymptotically optimal. In contrast, naively quantized GD, where the worker directly quantizes the gradient, attains only $\sigma_{\mathrm{GD}} + \rho_n 2^{-R}$. The principle of differential quantization continues to apply to gradient methods with momentum, such as Nesterov's accelerated gradient descent and Polyak's heavy ball method. For these algorithms as well, if the rate is above a certain threshold, there is no loss in contraction factor for the differentially quantized algorithm compared to its unquantized counterpart; furthermore, the differentially quantized heavy ball method attains the optimal contraction achievable among all (even unquantized) gradient methods. Experimental results on least-squares problems validate our theoretical analysis.
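    The error-compensation idea behind DQ can be illustrated with a short numerical sketch. Below is a minimal Python example on a least-squares instance that compares naive gradient quantization against an error-compensated update in the spirit of DQ-GD. The uniform scalar quantizer, rate, clipping range, and step size here are illustrative assumptions, not the paper's lattice construction with covering efficiency $\rho_n$.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Least-squares instance: f(x) = 0.5 * ||A x - b||^2 (smooth, strongly convex).
    n, d = 50, 10
    A = rng.standard_normal((n, d))
    b = rng.standard_normal(n)
    x_star, *_ = np.linalg.lstsq(A, b, rcond=None)

    def grad(x):
        return A.T @ (A @ x - b)

    # Hypothetical uniform scalar quantizer at R bits per coordinate on [-B, B];
    # stands in for the paper's lattice covering quantizer.
    def quantize(v, R=4, B=10.0):
        step = 2 * B / 2**R
        return np.clip(np.round(v / step) * step, -B, B)

    eta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L for GD
    x_naive = np.zeros(d)
    x_dq = np.zeros(d)
    err = np.zeros(d)                       # worker's accumulated quantization error

    for _ in range(200):
        # Naive: quantize the raw gradient and send it.
        x_naive = x_naive - eta * quantize(grad(x_naive))

        # Differential quantization: fold the past quantization error into the
        # message before quantizing, then record the new residual error.
        u = grad(x_dq) + err
        q = quantize(u)
        err = u - q
        x_dq = x_dq - eta * q

    print("naive quantized GD error:", np.linalg.norm(x_naive - x_star))
    print("error-compensated  error:", np.linalg.norm(x_dq - x_star))
    ```

    With these (assumed) settings, the error-compensated iterates should track unquantized GD much more closely than the naive ones, mirroring the abstract's gap between $\max\{\sigma_{\mathrm{GD}}, \rho_n 2^{-R}\}$ and $\sigma_{\mathrm{GD}} + \rho_n 2^{-R}$.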
    Citation
    Lin, C.-Y., Kostina, V., & Hassibi, B. (2022). Differentially Quantized Gradient Methods. IEEE Transactions on Information Theory, 68(9), 6078–6097. https://doi.org/10.1109/TIT.2022.3171173
    Sponsors
    This work was supported in part by the National Science Foundation (NSF) under Grant CCF-1751356, Grant CCF-1956386, Grant CNS-0932428, Grant CCF-1018927, Grant CCF-1423663, and Grant CCF-1409204; in part by a grant from Qualcomm Inc.; in part by the National Aeronautics and Space Administration’s (NASA) Jet Propulsion Laboratory through the President and Director’s Fund; and in part by the King Abdullah University of Science and Technology. The authors would like to thank Dr. Himanshu Tyagi for pointing out related works [19], [55]; Dr. Vincent Tan for bringing a known result on the worst-case contraction factor of unquantized GD [75] to their attention; Dr. Victor Kozyakin for a helpful discussion about joint spectral radius; and two anonymous reviewers for detailed comments.
    Publisher
    Institute of Electrical and Electronics Engineers (IEEE)
    Journal
    IEEE Transactions on Information Theory
    DOI
    10.1109/TIT.2022.3171173
    Additional Links
    https://ieeexplore.ieee.org/document/9764884/
    Collections
    Publications Acknowledging KAUST Support
