Adding Robustness to Support Vector Machines Against Adversarial Reverse Engineering

Handle URI:
http://hdl.handle.net/10754/565844
Title:
Adding Robustness to Support Vector Machines Against Adversarial Reverse Engineering
Authors:
Alabdulmohsin, Ibrahim (0000-0002-9387-5820); Gao, Xin (0000-0002-7108-3574); Zhang, Xiangliang (0000-0002-3574-5665)
Abstract:
Many classification algorithms have been successfully deployed in security-sensitive applications, including spam filters and intrusion detection systems. In such adversarial environments, adversaries can mount exploratory attacks against the defender, such as evasion and reverse engineering. In this paper, we discuss why reverse engineering attacks can be carried out quite efficiently against fixed classifiers, and investigate the use of randomization as a suitable strategy for mitigating their risk. In particular, we derive a semidefinite programming (SDP) formulation for learning a distribution of classifiers, subject to the constraint that any single classifier picked at random from this distribution provides reliable predictions with high probability. We analyze the tradeoff between the variance of the distribution and its predictive accuracy, and establish that one can almost always incorporate randomization with large variance without incurring a loss in accuracy. In other words, the conventional approach of using a fixed classifier in adversarial environments is generally Pareto suboptimal. Finally, we validate these conclusions on both synthetic and real-world classification problems. Copyright 2014 ACM.
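The randomization strategy described in the abstract is easy to prototype. Below is a minimal sketch, assuming a Gaussian distribution over linear-SVM hyperplanes; the sampler and the helper predict_randomized are illustrative stand-ins chosen for this sketch, not the paper's SDP-learned distribution or the authors' implementation.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Nominal fixed classifier: serves as the mean of the distribution.
svm = LinearSVC(dual=False).fit(X, y)
w_mean, b_mean = svm.coef_.ravel(), float(svm.intercept_[0])

def predict_randomized(X, sigma=0.1):
    # Draw one hyperplane (w, b) ~ N((w_mean, b_mean), sigma^2 I) per call
    # and classify with it; sigma controls the variance/accuracy tradeoff.
    w = w_mean + sigma * rng.standard_normal(w_mean.shape)
    b = b_mean + sigma * rng.standard_normal()
    return (X @ w + b >= 0).astype(int)

# Average accuracy over many independently sampled classifiers.
acc = np.mean([(predict_randomized(X) == y).mean() for _ in range(100)])
print(f"mean accuracy over 100 sampled classifiers: {acc:.3f}")

Because each query is answered by a freshly sampled hyperplane, repeated probing returns inconsistent boundary information, which is what frustrates reverse engineering; the paper's SDP formulation chooses the distribution so that this added variance costs little or no accuracy.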
KAUST Department:
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division; Computer Science Program; Computational Bioscience Research Center (CBRC); Structural and Functional Bioinformatics Group; Machine Intelligence & kNowledge Engineering Lab
Publisher:
Association for Computing Machinery (ACM)
Journal:
Proceedings of the 23rd ACM International Conference on Information and Knowledge Management - CIKM '14
Conference/Event name:
23rd ACM International Conference on Information and Knowledge Management, CIKM 2014
Issue Date:
2014
DOI:
10.1145/2661829.2662047
Type:
Conference Paper
Appears in Collections:
Conference Papers; Structural and Functional Bioinformatics Group; Computer Science Program; Computational Bioscience Research Center (CBRC); Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division

Full metadata record

DC Field | Value | Language
dc.contributor.author | Alabdulmohsin, Ibrahim | en
dc.contributor.author | Gao, Xin | en
dc.contributor.author | Zhang, Xiangliang | en
dc.date.accessioned | 2015-08-11T13:43:00Z | en
dc.date.available | 2015-08-11T13:43:00Z | en
dc.date.issued | 2014 | en
dc.identifier.doi | 10.1145/2661829.2662047 | en
dc.identifier.uri | http://hdl.handle.net/10754/565844 | en
dc.description.abstract | Many classification algorithms have been successfully deployed in security-sensitive applications, including spam filters and intrusion detection systems. In such adversarial environments, adversaries can mount exploratory attacks against the defender, such as evasion and reverse engineering. In this paper, we discuss why reverse engineering attacks can be carried out quite efficiently against fixed classifiers, and investigate the use of randomization as a suitable strategy for mitigating their risk. In particular, we derive a semidefinite programming (SDP) formulation for learning a distribution of classifiers, subject to the constraint that any single classifier picked at random from this distribution provides reliable predictions with high probability. We analyze the tradeoff between the variance of the distribution and its predictive accuracy, and establish that one can almost always incorporate randomization with large variance without incurring a loss in accuracy. In other words, the conventional approach of using a fixed classifier in adversarial environments is generally Pareto suboptimal. Finally, we validate these conclusions on both synthetic and real-world classification problems. Copyright 2014 ACM. | en
dc.publisher | Association for Computing Machinery (ACM) | en
dc.subject | Adversarial learning | en
dc.subject | Linear SVM | en
dc.subject | Reverse engineering | en
dc.title | Adding Robustness to Support Vector Machines Against Adversarial Reverse Engineering | en
dc.type | Conference Paper | en
dc.contributor.department | Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division | en
dc.contributor.department | Computer Science Program | en
dc.contributor.department | Computational Bioscience Research Center (CBRC) | en
dc.contributor.department | Structural and Functional Bioinformatics Group | en
dc.contributor.department | Machine Intelligence & kNowledge Engineering Lab | en
dc.identifier.journal | Proceedings of the 23rd ACM International Conference on Information and Knowledge Management - CIKM '14 | en
dc.conference.date | 3 November 2014 through 7 November 2014 | en
dc.conference.name | 23rd ACM International Conference on Information and Knowledge Management, CIKM 2014 | en
kaust.author | Gao, Xin | en
kaust.author | Zhang, Xiangliang | en
kaust.author | Alabdulmohsin, Ibrahim | en