Discriminative Transfer Learning for General Image Restoration

Handle URI:
http://hdl.handle.net/10754/626486
Title:
Discriminative Transfer Learning for General Image Restoration
Authors:
Xiao, Lei; Heide, Felix; Heidrich, Wolfgang (0000-0002-4227-8508); Schölkopf, Bernhard; Hirsch, Michael
Abstract:
Recently, several discriminative learning approaches have been proposed for effective image restoration, achieving a convincing trade-off between image quality and computational efficiency. However, these methods require separate training for each restoration task (e.g., denoising, deblurring, demosaicing) and problem condition (e.g., noise level of the input images), which makes it time-consuming and difficult to encompass all tasks and conditions during training. In this paper, we propose a discriminative transfer learning method that incorporates formal proximal optimization and discriminative learning for general image restoration. The method requires a single training pass and can be reused across various problems and conditions while achieving efficiency comparable to previous discriminative approaches. Furthermore, once trained, our model can easily be transferred to new likelihood terms to solve untrained tasks, or combined with existing priors to further improve image restoration quality.
KAUST Department:
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division; Computer Science Program; Visual Computing Center (VCC)
Publisher:
arXiv
Issue Date:
27-Mar-2017
ARXIV:
arXiv:1703.09245
Type:
Preprint
Additional Links:
http://arxiv.org/abs/1703.09245v1; http://arxiv.org/pdf/1703.09245v1
Appears in Collections:
Other/General Submission; Computer Science Program; Visual Computing Center (VCC); Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division

Full metadata record

DC Field | Value | Language
dc.contributor.author | Xiao, Lei | en
dc.contributor.author | Heide, Felix | en
dc.contributor.author | Heidrich, Wolfgang | en
dc.contributor.author | Schölkopf, Bernhard | en
dc.contributor.author | Hirsch, Michael | en
dc.date.accessioned | 2017-12-28T07:32:12Z | -
dc.date.available | 2017-12-28T07:32:12Z | -
dc.date.issued | 2017-03-27 | en
dc.identifier.uri | http://hdl.handle.net/10754/626486 | -
dc.description.abstract | Recently, several discriminative learning approaches have been proposed for effective image restoration, achieving a convincing trade-off between image quality and computational efficiency. However, these methods require separate training for each restoration task (e.g., denoising, deblurring, demosaicing) and problem condition (e.g., noise level of the input images), which makes it time-consuming and difficult to encompass all tasks and conditions during training. In this paper, we propose a discriminative transfer learning method that incorporates formal proximal optimization and discriminative learning for general image restoration. The method requires a single training pass and can be reused across various problems and conditions while achieving efficiency comparable to previous discriminative approaches. Furthermore, once trained, our model can easily be transferred to new likelihood terms to solve untrained tasks, or combined with existing priors to further improve image restoration quality. | en
dc.publisher | arXiv | en
dc.relation.url | http://arxiv.org/abs/1703.09245v1 | en
dc.relation.url | http://arxiv.org/pdf/1703.09245v1 | en
dc.rights | Archived with thanks to arXiv | en
dc.title | Discriminative Transfer Learning for General Image Restoration | en
dc.type | Preprint | en
dc.contributor.department | Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division | en
dc.contributor.department | Computer Science Program | en
dc.contributor.department | Visual Computing Center (VCC) | en
dc.eprint.version | Pre-print | en
dc.contributor.institution | University of British Columbia | en
dc.contributor.institution | Stanford University | en
dc.contributor.institution | MPI for Intelligent Systems | en
dc.identifier.arxivid | arXiv:1703.09245 | en
kaust.author | Heidrich, Wolfgang | en
All Items in KAUST are protected by copyright, with all rights reserved, unless otherwise indicated.