Neural Proximal Gradient Descent with GANs for Compressive Imaging
Halls department, Hall 5
Wednesday, 26 December 2018
10:30 - 11:30
Recovering high-resolution images from limited sensory data typically leads to a severely ill-posed inverse problem, demanding inversion algorithms that effectively capture the prior information. Learning a good inverse mapping from training data faces severe challenges, including: (i) scarcity of training data; (ii) the need for plausible reconstructions that are physically feasible; (iii) the need for fast reconstruction, especially in real-time applications. We develop a system that addresses all of these challenges, using the recurrent application of the proximal gradient algorithm as its basic architecture. Our neural proximal gradient descent (NPGD) learns a proximal map that works well with real-world images, based on residual networks trained with pixel-wise and/or adversarial GAN losses. Local convergence of the NPGD iterates is studied theoretically and validated empirically. In particular, we derive incoherence conditions that guarantee the proximal map is contractive, so the iterates converge to a fixed point, which under certain reasonable conditions coincides with the true unknown. Extensive experiments are carried out in two settings: (a) reconstructing abdominal MRI of pediatric patients from highly undersampled Fourier-space data, and (b) super-resolving natural face images. Our key findings include: (1) a recurrent ResNet with a single residual block, unrolled from an iterative algorithm, yields an effective proximal map that accurately reveals MR image details; (2) our architecture significantly outperforms conventional non-recurrent deep ResNets by 2 dB SNR and trains much more rapidly; (3) it outperforms state-of-the-art wavelet-based compressed-sensing methods by 4 dB SNR, with a 100x speedup in reconstruction time; (4) expert radiologists confirm that the GAN-based proximal reconstructs images of high diagnostic value.
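The core iteration described in the abstract — alternating a gradient step on the data-fit term with a proximal map — can be illustrated with a minimal sketch. Here the learned ResNet proximal is replaced by soft-thresholding (a classical hand-crafted proximal operator), since the talk's trained network is not available; all function names and parameters below are illustrative assumptions, not the speaker's implementation.

```python
import numpy as np

def soft_threshold(x, lam):
    # Stand-in proximal operator (soft-thresholding for a sparsity prior).
    # NPGD instead learns this map with a small residual network.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def npgd_sketch(A, y, n_iters=500, lam=0.01):
    """Unrolled proximal gradient: x <- prox(x - step * A^T (A x - y))."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the 0.5*||Ax - y||^2 term
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)             # gradient of the data-fit term
        x = soft_threshold(x - step * grad, lam * step)
    return x

# Toy compressive-sensing recovery: 40 random measurements of a
# 5-sparse signal in dimension 100.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = npgd_sketch(A, y)
```

Swapping `soft_threshold` for a trained residual network, and truncating the loop to a fixed small number of unrolled iterations, recovers the recurrent architecture the talk describes.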
Morteza Mardani is currently a research scientist at Stanford University, in the Information Systems Laboratory of the Department of Electrical Engineering. He received his Ph.D. in Electrical Engineering with a minor in Mathematics from the University of Minnesota, Twin Cities, in 2015. He was a visiting scholar in Electrical Engineering & Computer Science at UC Berkeley from January to June 2015, and then a postdoctoral fellow at Stanford University until December 2017. His research interests lie in machine learning and statistical signal processing for data science and artificial intelligence; he is currently working on deep learning and generative adversarial networks for computational biomedical imaging. He has received a number of awards for his contributions to machine learning, including the 2017 Young Author Best Paper Award of the IEEE Signal Processing Society and the Best Student Paper Award at the 2012 IEEE Workshop on Signal Processing Advances in Wireless Communications.