Robustness to Adversarial Perturbations in Learning from Incomplete Data
Friday, 03 January 2020
11:00 - 11:45
What is the role of unlabeled data in an inference problem when the presumed underlying distribution is adversarially perturbed? In this talk, I explain how we answer this question by unifying two major learning frameworks: Semi-Supervised Learning (SSL) and Distributionally Robust Optimization (DRO). We develop a generalization theory for our framework based on a number of novel complexity measures, such as an adversarial extension of Rademacher complexity and its semi-supervised analogue. Moreover, our analysis quantifies the role of unlabeled data in the generalization process under a more general condition than existing works in SSL. Based on our framework, we also present a hybrid of DRO and Expectation-Maximization (EM) algorithms with a guaranteed convergence rate. When implemented with deep neural networks, our method performs comparably to state-of-the-art methods on a number of real-world benchmark datasets.
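For readers unfamiliar with DRO, the following is a minimal sketch of a standard distributionally robust objective; the talk's specific semi-supervised formulation may differ, and the choice of a Wasserstein ambiguity set here is an illustrative assumption, not taken from the abstract:

```latex
% Hypothetical sketch: worst-case risk minimization over an
% ambiguity set of distributions within Wasserstein radius
% \epsilon of the empirical distribution \hat{P}_n.
\min_{h \in \mathcal{H}} \;
\sup_{P \,:\, W_c(P, \hat{P}_n) \le \epsilon} \;
\mathbb{E}_{(x, y) \sim P} \big[ \ell(h(x), y) \big]
```

Here \(\hat{P}_n\) is the empirical distribution of the \(n\) training samples, \(W_c\) is a Wasserstein distance with ground cost \(c\), and \(\ell\) is the loss. Setting \(\epsilon = 0\) recovers ordinary empirical risk minimization; a positive \(\epsilon\) hedges against adversarial perturbations of the data distribution.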
Amir Najafi received his B.Sc. and M.Sc. degrees in Electrical Engineering from Sharif University of Technology, Tehran, Iran, in 2012 and 2015, respectively. He is currently a Ph.D. student in the Computer Engineering Department at Sharif University of Technology. He was a visiting research scholar at the Broad Institute of MIT and Harvard, Boston, MA, in 2016, and interned at Preferred Networks Inc., Tokyo, Japan, in 2018. His research interests include machine learning theory, information theory, and bioinformatics.