KAUST Department: Computer Science Program, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Online Publication Date: 2018-06-17
Print Publication Date: 2018
Permanent link to this record: http://hdl.handle.net/10754/628272
Abstract: Robust machine learning algorithms have been widely studied for adversarial environments, where an adversary maliciously manipulates data samples to evade security systems. In this paper, we propose randomized SVMs against generalized adversarial attacks under uncertainty: instead of the single classifier learned by traditional robust SVMs, we learn a distribution over classifiers. The randomized SVMs offer stronger resistance against attacks while preserving high classification accuracy, especially in non-separable cases. Experimental results demonstrate the effectiveness of the proposed models in defending against various attacks, including aggressive attacks under uncertainty.
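The core idea in the abstract — predicting with a distribution over classifiers rather than one fixed SVM — can be illustrated with a minimal sketch. This is not the paper's optimization procedure; it simply trains a linear SVM by subgradient descent and then votes over Gaussian perturbations of the learned hyperplane, so the decision boundary an attacker must target is randomized. All function names, the perturbation scale `sigma`, and the toy data are illustrative assumptions.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Train a linear SVM (hinge loss + L2 penalty) by subgradient descent.

    Labels y must be in {-1, +1}. Returns weight vector w and bias b.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            if y[i] * (X[i] @ w + b) < 1:      # margin violated: hinge subgradient
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                               # only the L2 shrinkage applies
                w = (1 - lr * lam) * w
    return w, b

def randomized_predict(X, w, b, sigma=0.5, n_samples=50, seed=1):
    """Majority vote over classifiers sampled around (w, b).

    Each vote comes from a hyperplane perturbed by Gaussian noise, i.e. a
    crude stand-in for drawing classifiers from a learned distribution.
    """
    rng = np.random.default_rng(seed)
    votes = np.zeros(X.shape[0])
    for _ in range(n_samples):
        w_s = w + sigma * rng.standard_normal(w.shape)
        b_s = b + sigma * rng.standard_normal()
        votes += np.sign(X @ w_s + b_s)
    return np.sign(votes)

# Toy two-cluster data with labels in {-1, +1} (illustrative only).
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

w, b = train_linear_svm(X, y)
acc = np.mean(randomized_predict(X, w, b) == y)
```

On well-separated data like this, the vote remains accurate even though no single sampled hyperplane is the deterministic SVM boundary; the paper's contribution is learning that classifier distribution in a principled way rather than by ad hoc noise injection.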
Citation: Chen Y, Wang W, Zhang X (2018) Randomizing SVM Against Adversarial Attacks Under Uncertainty. Lecture Notes in Computer Science: 556–568. Available: http://dx.doi.org/10.1007/978-3-319-93040-4_44.
Sponsors: This work was supported by the King Abdullah University of Science and Technology; by the Natural Science Foundation of China under grants U1736114 and 61672092; and in part by the National Key R&D Program of China (2017YFB0802805).
Conference/Event name: 22nd Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining, PAKDD 2018