
dc.contributor.author: Wang, Di
dc.contributor.author: Xu, Jinhui
dc.date.accessioned: 2021-04-05T06:09:06Z
dc.date.available: 2021-04-05T06:09:06Z
dc.date.issued: 2021-02-25
dc.identifier.citation: Wang, D., & Xu, J. (2021). Escaping Saddle Points of Empirical Risk Privately and Scalably via DP-Trust Region Method. Lecture Notes in Computer Science, 90–106. doi:10.1007/978-3-030-67664-3_6
dc.identifier.isbn: 9783030676636
dc.identifier.issn: 1611-3349
dc.identifier.issn: 0302-9743
dc.identifier.doi: 10.1007/978-3-030-67664-3_6
dc.identifier.uri: http://hdl.handle.net/10754/668527
dc.description.abstract: Many non-convex objective/loss functions in machine learning have recently been shown to be strict saddle. This means that finding a second-order stationary point (i.e., an approximate local minimum), and thus escaping saddle points, is sufficient for such functions to obtain a classifier with good generalization performance. Existing algorithms for escaping saddle points, however, all fail to address a critical issue in their designs: the protection of sensitive information in the training set. Models learned by such algorithms can implicitly memorize details of sensitive records and thus give malicious parties opportunities to infer them from the learned models. In this paper, we investigate the problem of privately escaping saddle points and finding a second-order stationary point of the empirical risk of a non-convex loss function. The previous result on this problem is mainly of theoretical interest and has several issues (e.g., high sample complexity and lack of scalability) that hinder its applicability, especially to big data. To address these issues, we propose a new method called Differentially Private Trust Region and show that it outputs a second-order stationary point with high probability and with lower sample complexity than the existing method. Moreover, we provide a stochastic version of our method (along with theoretical guarantees) to make it faster and more scalable. Experiments on benchmark datasets suggest that our methods are indeed more efficient and practical than the previous one.
dc.publisher: Springer Nature
dc.relation.url: http://link.springer.com/10.1007/978-3-030-67664-3_6
dc.rights: Archived with thanks to Springer International Publishing
dc.title: Escaping Saddle Points of Empirical Risk Privately and Scalably via DP-Trust Region Method
dc.type: Conference Paper
dc.contributor.department: King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
dc.conference.date: 2020-09-14 to 2020-09-18
dc.conference.name: European Conference on Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2020
dc.conference.location: Virtual, Online
dc.eprint.version: Post-print
dc.contributor.institution: Department of Computer Science and Engineering, State University of New York at Buffalo, Buffalo, NY, 14260, USA
dc.identifier.volume: 12459 LNAI
dc.identifier.pages: 90-106
kaust.person: Wang, Di
dc.identifier.eid: 2-s2.0-85103254282
refterms.dateFOA: 2021-04-05T09:46:04Z
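
The abstract above outlines a differentially private trust-region approach for finding second-order stationary points of the empirical risk. The following is a minimal illustrative sketch, not the authors' algorithm: it assumes a generic Gaussian-mechanism perturbation of the empirical gradient and Hessian (with hypothetical noise scales sigma_g and sigma_h that would have to be calibrated to the loss's sensitivity and the target (epsilon, delta) privacy budget) and a simple eigendecomposition-based trust-region subproblem solver.

import numpy as np

def solve_tr_subproblem(g, H, radius):
    # Approximately solve  min_s  g^T s + 0.5 s^T H s  s.t. ||s|| <= radius,
    # using an eigendecomposition of H and bisection on the Lagrange
    # multiplier. The "hard case" of trust-region theory is skipped for brevity.
    w, V = np.linalg.eigh(H)
    gt = V.T @ g

    def step_norm(lam):
        return np.linalg.norm(gt / (w + lam))

    # If H is positive definite and the Newton step is feasible, return it.
    if w.min() > 0 and step_norm(0.0) <= radius:
        return -V @ (gt / w)

    # Otherwise find lam >= max(0, -lambda_min(H)) with ||s(lam)|| = radius.
    lam_lo = max(0.0, -w.min()) + 1e-12
    lam_hi = lam_lo + 1.0
    while step_norm(lam_hi) > radius:
        lam_hi *= 2.0
    for _ in range(100):
        lam = 0.5 * (lam_lo + lam_hi)
        if step_norm(lam) > radius:
            lam_lo = lam
        else:
            lam_hi = lam
    return -V @ (gt / (w + lam_hi))

def dp_trust_region_step(theta, grad_fn, hess_fn, radius, sigma_g, sigma_h, rng):
    # One generic DP trust-region step via the Gaussian mechanism:
    # perturb the gradient and (symmetrized) Hessian, then solve the
    # noisy trust-region subproblem. sigma_g and sigma_h are hypothetical
    # noise scales, not the calibrated values from the paper.
    d = theta.size
    g = grad_fn(theta) + rng.normal(0.0, sigma_g, size=d)
    E = rng.normal(0.0, sigma_h, size=(d, d))
    H = hess_fn(theta) + 0.5 * (E + E.T)  # keep the noisy Hessian symmetric
    return theta + solve_tr_subproblem(g, H, radius)

# Toy usage on the non-convex function f(x) = 0.25*||x||^4 - 0.5*||x||^2,
# which has a strict saddle at the origin and minimizers on the unit sphere.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grad = lambda x: (x @ x) * x - x
    hess = lambda x: (x @ x) * np.eye(x.size) + 2.0 * np.outer(x, x) - np.eye(x.size)
    theta = np.zeros(5)  # start exactly at the saddle point
    for _ in range(50):
        theta = dp_trust_region_step(theta, grad, hess, radius=0.5,
                                     sigma_g=0.01, sigma_h=0.01, rng=rng)
    print("final ||theta||:", np.linalg.norm(theta))  # should approach 1

The injected noise is what lets the iterate leave the exact saddle point at the origin in this toy run; in a real private algorithm the same noise also provides the (epsilon, delta) guarantee, with the noise scales and the number of iterations chosen jointly.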


Files in this item

Name: sub_604.pdf
Size: 2.139 MB
Format: PDF
Description: Accepted Manuscript
