Uncertainty-guided Continual Learning with Bayesian Neural Networks
Type: Conference Paper
Date: 2020-03-11
Preprint Posting Date: 2019-06-06
Submitted Date: 2019-09-25
Permanent link to this record: http://hdl.handle.net/10754/662644
Abstract
Continual learning aims to learn new tasks without forgetting previously learned ones. This is especially challenging when one cannot access data from previous tasks and when the model has a fixed capacity. Current regularization-based continual learning algorithms need an external representation and extra computation to measure the parameters' $\textit{importance}$. In contrast, we propose Uncertainty-guided Continual Bayesian Neural Networks (UCB), where the learning rate adapts according to the uncertainty defined by the probability distribution of the weights in the network. Uncertainty is a natural way to identify $\textit{what to remember}$ and $\textit{what to change}$ as we continually learn, and thus to mitigate catastrophic forgetting. We also show a variant of our model that uses uncertainty for weight pruning and retains task performance after pruning by saving a binary mask per task. We evaluate our UCB approach extensively on diverse object classification datasets with short and long sequences of tasks and report superior or on-par performance compared to existing approaches. Additionally, we show that our model does not necessarily need task information at test time, i.e., it does not presume knowledge of which task a sample belongs to.
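To make the core idea concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' released code; `BayesLinear` and `uncertainty_scaled_step` are illustrative names) of how a per-weight learning rate can follow the posterior uncertainty of a mean-field Bayesian layer: weights with small posterior standard deviation are treated as important and are updated only slightly.

```python
import torch
import torch.nn.functional as F

class BayesLinear(torch.nn.Module):
    """Mean-field Gaussian linear layer: each weight has a mean mu and a
    standard deviation sigma = softplus(rho)."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.mu = torch.nn.Parameter(0.1 * torch.randn(n_out, n_in))
        self.rho = torch.nn.Parameter(torch.full((n_out, n_in), -3.0))

    def forward(self, x):
        sigma = F.softplus(self.rho)                        # posterior std dev
        weight = self.mu + sigma * torch.randn_like(sigma)  # reparameterized sample
        return x @ weight.t()

def uncertainty_scaled_step(layer, base_lr):
    """Gradient step whose per-weight step size grows with sigma:
    uncertain weights move freely; confident weights barely change."""
    with torch.no_grad():
        sigma = F.softplus(layer.rho)
        lr = base_lr * sigma / sigma.max()     # scale lr by relative uncertainty
        layer.mu -= lr * layer.mu.grad
        layer.rho -= base_lr * layer.rho.grad  # rho updated at the base rate
        layer.mu.grad = None
        layer.rho.grad = None

# Toy usage: one uncertainty-scaled step on random data.
layer = BayesLinear(4, 2)
x, y = torch.randn(8, 4), torch.randn(8, 2)
F.mse_loss(layer(x), y).backward()
uncertainty_scaled_step(layer, base_lr=0.01)
```

The full method additionally involves the variational training objective and, in its pruning variant, per-task binary masks; this fragment only illustrates the uncertainty-to-learning-rate coupling described in the abstract.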
Sponsors: Work done while at Facebook AI Research.
Publisher: ICLR
Conference/Event name: International Conference on Learning Representations (ICLR) 2020
arXiv: 1906.02425
Relations:
Is Supplemented By:
- [Software] Title: SaynaEbrahimi/UCB: Original PyTorch implementation of Uncertainty-guided Continual Learning with Bayesian Neural Networks, ICLR 2020. Publication Date: 2019-02-04. GitHub: SaynaEbrahimi/UCB. Handle: 10754/667505