End-to-End Video Compressive Sensing Using Anderson-Accelerated Unrolled Networks
dc.contributor.author | Li, Yuqi | |
dc.contributor.author | Qi, Miao | |
dc.contributor.author | Gulve, Rahul | |
dc.contributor.author | Wei, Mian | |
dc.contributor.author | Genov, Roman | |
dc.contributor.author | Kutulakos, Kiriakos N. | |
dc.contributor.author | Heidrich, Wolfgang | |
dc.date.accessioned | 2020-06-24T08:44:26Z | |
dc.date.available | 2020-06-24T08:44:26Z | |
dc.date.issued | 2020-06-02 | |
dc.identifier.citation | Li, Y., Qi, M., Gulve, R., Wei, M., Genov, R., Kutulakos, K. N., & Heidrich, W. (2020). End-to-End Video Compressive Sensing Using Anderson-Accelerated Unrolled Networks. 2020 IEEE International Conference on Computational Photography (ICCP). doi:10.1109/iccp48838.2020.9105237 | |
dc.identifier.isbn | 978-1-7281-5231-8 | |
dc.identifier.issn | 2164-9774 | |
dc.identifier.doi | 10.1109/ICCP48838.2020.9105237 | |
dc.identifier.uri | http://hdl.handle.net/10754/663826 | |
dc.description.abstract | Compressive imaging systems with spatial-temporal encoding can be used to capture and reconstruct fast-moving objects. The imaging quality depends strongly on the choice of encoding masks and reconstruction methods. In this paper, we present a new network architecture to jointly design the encoding masks and the reconstruction method for compressive high-frame-rate imaging. Unlike previous works, the proposed method takes full advantage of a denoising prior to provide promising frame reconstructions. The network is also flexible enough to optimize full-resolution masks while remaining efficient at reconstructing frames. To this end, we develop a new dense network architecture that embeds Anderson acceleration, known from numerical optimization, directly into the neural network architecture. Our experiments show that the optimized masks and the dense accelerated network achieve PSNR improvements of 1.5 dB and 1 dB, respectively, without adding training parameters. The proposed method outperforms other state-of-the-art methods both in simulations and on real hardware. In addition, we set up a coded two-bucket camera for compressive high-frame-rate imaging, which is robust to imaging noise and provides promising results when recovering nearly 1,000 frames per second. | |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | |
dc.relation.url | https://ieeexplore.ieee.org/document/9105237/ | |
dc.relation.url | https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9105237 | |
dc.rights | Archived with thanks to IEEE | |
dc.subject | high-frame-rate imaging | |
dc.subject | deep neural network | |
dc.subject | computational camera | |
dc.title | End-to-End Video Compressive Sensing Using Anderson-Accelerated Unrolled Networks | |
dc.type | Conference Paper | |
dc.contributor.department | Computational Imaging Group | |
dc.contributor.department | Computer Science Program | |
dc.contributor.department | Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division | |
dc.contributor.department | Electrical Engineering | |
dc.contributor.department | Visual Computing Center (VCC) | |
dc.conference.date | 24-26 April 2020 | |
dc.conference.name | 2020 IEEE International Conference on Computational Photography (ICCP) | |
dc.conference.location | Saint Louis, MO, USA | |
dc.eprint.version | Post-print | |
dc.contributor.institution | University of Toronto, Canada | |
kaust.person | Li, Yuqi | |
kaust.person | Qi, Miao | |
kaust.person | Heidrich, Wolfgang | |
refterms.dateFOA | 2020-06-29T05:27:53Z | |
dc.date.published-online | 2020-06-02 | |
dc.date.published-print | 2020-04 | |
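Note on the method described in the abstract above: the paper embeds Anderson acceleration, a standard scheme for speeding up fixed-point iterations, into an unrolled reconstruction network. The sketch below is a minimal, generic NumPy illustration of Anderson acceleration on an arbitrary fixed-point map; the function and parameter names (anderson_accelerate, history size m, damping beta, regularizer lam) are illustrative assumptions, and it does not reproduce the paper's network, learned masks, or two-bucket camera model.

import numpy as np

def anderson_accelerate(g, x0, m=5, iters=30, lam=1e-8, beta=1.0):
    # Accelerate the fixed-point iteration x <- g(x) by mixing the last
    # `m` iterates with least-squares coefficients (Anderson acceleration).
    xs = [np.asarray(x0, dtype=float).ravel()]   # iterate history
    gs = [g(xs[0])]                              # g-evaluation history
    for _ in range(1, iters):
        n = min(len(xs), m)
        X = np.stack(xs[-n:])          # last n iterates, shape (n, d)
        G = np.stack(gs[-n:])          # their images under g
        F = G - X                      # residuals f_i = g(x_i) - x_i
        # Minimize ||alpha^T F|| subject to sum(alpha) = 1 (Tikhonov-regularized).
        H = F @ F.T + lam * np.eye(n)
        alpha = np.linalg.solve(H, np.ones(n))
        alpha /= alpha.sum()
        # Damped combination of the stored iterates and their g-evaluations.
        x_next = beta * (alpha @ G) + (1.0 - beta) * (alpha @ X)
        xs.append(x_next)
        gs.append(g(x_next))
    return xs[-1]

# Toy usage: solve x = A x + b (a contractive linear map), which converges
# faster under Anderson mixing than under plain fixed-point iteration.
rng = np.random.default_rng(0)
A = 0.9 * rng.standard_normal((20, 20)) / np.sqrt(20)
b = rng.standard_normal(20)
x_true = np.linalg.solve(np.eye(20) - A, b)
x_aa = anderson_accelerate(lambda x: A @ x + b, np.zeros(20))
print("error:", np.linalg.norm(x_aa - x_true))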
This item appears in the following Collection(s):
- Conference Papers
- Computer Science Program (for more information visit: https://cemse.kaust.edu.sa/cs)
- Visual Computing Center (VCC)
- Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division (for more information visit: https://cemse.kaust.edu.sa/)