Show simple item record

dc.contributor.advisor    Ghanem, Bernard
dc.contributor.author    Alfadly, Modar
dc.date.accessioned    2018-04-17T05:43:44Z
dc.date.available    2018-04-17T05:43:44Z
dc.date.issued    2018-04-12
dc.identifier.doi    10.25781/KAUST-Y7627
dc.identifier.uri    http://hdl.handle.net/10754/627554
dc.description.abstract    Despite the impressive performance of deep neural networks (DNNs) on numerous vision tasks, they still exhibit behaviours that are not yet well understood. One puzzling behaviour is their reaction to various noise attacks: it has been shown that small adversarial perturbations can severely degrade the performance of DNNs. To treat this rigorously, we derive exact analytic expressions for the first and second moments (mean and variance) of a small piecewise linear (PL) network with a single rectified linear unit (ReLU) layer subject to general Gaussian input. We show experimentally that these expressions are tight under simple linearizations of deeper PL-DNNs, especially popular architectures in the literature (e.g. LeNet and AlexNet). Extensive experiments on image classification show that these expressions can be used to study the behaviour of the output mean of the logits for each class, the inter-class confusion, and the pixel-level spatial noise sensitivity of the network. Moreover, we show how these expressions can be used to systematically construct targeted and non-targeted adversarial attacks. We then propose a special estimator DNN, named mixture of linearizations (MoL), and derive the analytic expressions for its output mean and variance as well. We employ these expressions to train the model to be particularly robust against Gaussian attacks without the need for data augmentation. When trained with a loss that incorporates the derived output probabilistic moments, the network is not only robust under very high-variance Gaussian attacks but is also as robust as networks trained with 20-fold data augmentation.
dc.language.iso    en
dc.subject    Neural Networks (Computer)
dc.subject    network moments
dc.subject    Gaussian input
dc.title    Analytic Treatment of Deep Neural Networks Under Additive Gaussian Noise
dc.type    Thesis
dc.contributor.department    Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
thesis.degree.grantor    King Abdullah University of Science and Technology
dc.contributor.committeemember    Heidrich, Wolfgang
dc.contributor.committeemember    Wonka, Peter
thesis.degree.discipline    Computer Science
thesis.degree.name    Master of Science
refterms.dateFOA    2018-06-14T04:17:43Z
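
The analytic moment expressions described in the abstract generalize the scalar moments of a rectified Gaussian to a full network. The following is a minimal Python sketch, not taken from the thesis, that states the standard closed forms for the mean and variance of ReLU(X) with X ~ N(mu, sigma^2) and checks them by Monte Carlo; the function name relu_gaussian_moments is illustrative.

import numpy as np
from scipy.stats import norm

def relu_gaussian_moments(mu, sigma):
    # Closed-form moments of ReLU(X) for X ~ N(mu, sigma^2):
    #   E[ReLU(X)]   = mu * Phi(mu/sigma) + sigma * phi(mu/sigma)
    #   E[ReLU(X)^2] = (mu^2 + sigma^2) * Phi(mu/sigma) + mu * sigma * phi(mu/sigma)
    a = mu / sigma
    mean = mu * norm.cdf(a) + sigma * norm.pdf(a)
    second_moment = (mu**2 + sigma**2) * norm.cdf(a) + mu * sigma * norm.pdf(a)
    return mean, second_moment - mean**2

# Monte Carlo sanity check of the closed form.
rng = np.random.default_rng(0)
mu, sigma = 0.5, 2.0
samples = np.maximum(rng.normal(mu, sigma, size=1_000_000), 0.0)
print(relu_gaussian_moments(mu, sigma))   # analytic (mean, variance)
print(samples.mean(), samples.var())      # empirical estimates

The thesis itself derives such moments for an affine-ReLU-affine network under general (not necessarily i.i.d.) Gaussian input; the scalar case above is only the building block.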


Files in this item

Name: MS_Thesis.pdf
Size: 11.36 MB
Format: PDF
Description: Thesis Full Text

Name: MS_Defense.pptx
Size: 20.43 MB
Format: Microsoft PowerPoint 2007
Description: Thesis Defense Slides

Name: MoL_Results.zip
Size: 779.3 KB
Format: Unknown
Description: MoL Results
